Sample records for auditory spatial receptive

  1. Egocentric and allocentric representations in auditory cortex

    PubMed Central

    Brimijoin, W. Owen; Bizley, Jennifer K.

    2017-01-01

    A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric encoding) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. We also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position. PMID:28617796
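
    Because the disambiguation above is essentially geometric, a minimal sketch (numpy only, hypothetical values) can make it concrete: an egocentric unit's preferred angle is fixed relative to the head, so the source angle it "sees" changes as the head turns, whereas a world-centered description of the same source does not.

      import numpy as np

      # Hypothetical example: one source fixed at +30 deg in world coordinates,
      # sampled across three head directions (deg).
      source_world = 30.0
      head_directions = np.array([-45.0, 0.0, 45.0])

      # Head-centered (egocentric) angle varies with head direction;
      # world-centered (allocentric) angle stays constant.
      egocentric = (source_world - head_directions + 180) % 360 - 180
      allocentric = np.full_like(head_directions, source_world)

      for hd, ego, allo in zip(head_directions, egocentric, allocentric):
          print(f"head {hd:+5.0f} deg -> egocentric {ego:+5.0f}, allocentric {allo:+5.0f}")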

  2. Psychophysics and Neuronal Bases of Sound Localization in Humans

    PubMed Central

    Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.

    2013-01-01

    Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question of how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory “where” pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. PMID:23886698

  3. Auditory and visual interactions between the superior and inferior colliculi in the ferret.

    PubMed

    Stitt, Iain; Galindo-Leon, Edgar; Pieper, Florian; Hollensteiner, Karl J; Engler, Gerhard; Engel, Andreas K

    2015-05-01

    The integration of visual and auditory spatial information is important for building an accurate perception of the external world, but the fundamental mechanisms governing such audiovisual interaction have only partially been resolved. The earliest interface between auditory and visual processing pathways is in the midbrain, where the superior (SC) and inferior colliculi (IC) are reciprocally connected in an audiovisual loop. Here, we investigate the mechanisms of audiovisual interaction in the midbrain by recording neural signals from the SC and IC simultaneously in anesthetized ferrets. Visual stimuli reliably produced band-limited phase locking of IC local field potentials (LFPs) in two distinct frequency bands: 6-10 and 15-30 Hz. These visual LFP responses co-localized with robust auditory responses that were characteristic of the IC. Imaginary coherence analysis confirmed that visual responses in the IC were not volume-conducted signals from the neighboring SC. Visual responses in the IC occurred later than retinally driven superficial SC layers and earlier than deep SC layers that receive indirect visual inputs, suggesting that retinal inputs do not drive visually evoked responses in the IC. In addition, SC and IC recording sites with overlapping visual spatial receptive fields displayed stronger functional connectivity than sites with separate receptive fields, indicating that visual spatial maps are aligned across both midbrain structures. Reciprocal coupling between the IC and SC therefore probably serves the dynamic integration of visual and auditory representations of space. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  4. Neurotransmitter involvement in development and maintenance of the auditory space map in the guinea pig superior colliculus.

    PubMed

    Ingham, N J; Thornton, S K; McCrossan, D; Withington, D J

    1998-12-01

    The mammalian superior colliculus (SC) is a complex area of the midbrain in terms of anatomy, physiology, and neurochemistry. The SC bears representations of the major sensory modalities integrated with a motor output system. It is implicated in saccade generation and in behavioral responses to novel sensory stimuli, and it receives innervation from diverse regions of the brain using many neurotransmitter classes. Ethylene-vinyl acetate copolymer (Elvax-40W polymer) was used here to chronically deliver neurotransmitter receptor antagonists to the SC of the guinea pig to investigate the potential role played by the major neurotransmitter systems in the collicular representation of auditory space. Slices of polymer containing different drugs were implanted onto the SC of guinea pigs before the development of the SC azimuthal auditory space map, at approximately 20 days after birth (DAB). A further group of animals was exposed to aminophosphonopentanoic acid (AP5) at approximately 250 DAB. Azimuthal spatial tuning properties of deep-layer multiunits of anesthetized guinea pigs were examined approximately 20 days after implantation of the Elvax polymer. Broadband noise bursts were presented to the animals under anechoic, free-field conditions. Neuronal responses were used to construct polar plots representative of the auditory spatial multiunit receptive fields (MURFs). Animals exposed to control polymer could develop a map of auditory space in the SC comparable with that seen in unimplanted normal animals. Exposure of the SC of young animals to AP5, 6-cyano-7-nitroquinoxaline-2,3-dione, or atropine resulted in a reduction in the proportion of spatially tuned responses, with an increase in the proportion of broadly tuned responses and a degradation in topographic order. Thus N-methyl-D-aspartate (NMDA) and non-NMDA glutamate receptors and muscarinic acetylcholine receptors appear to play vital roles in the development of the SC auditory space map. A group of animals exposed to AP5 beginning at approximately 250 DAB produced results very similar to those obtained in the young group exposed to AP5. Thus NMDA glutamate receptors also seem to be involved in the maintenance of the SC representation of auditory space in the adult guinea pig. Exposure of the SC of young guinea pigs to gamma-aminobutyric acid (GABA) receptor blocking agents produced some but not total disruption of the spatial tuning of auditory MURFs. Receptive fields were large compared with controls, but a significant degree of topographical organization was maintained. GABA receptors may play a role in the development of fine tuning and sharpening of auditory spatial responses in the SC but not necessarily in the generation of the topographical order of these responses.

  5. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons.

    PubMed

    Harper, Nicol S; Schoppe, Oliver; Willmore, Ben D B; Cui, Zhanfeng; Schnupp, Jan W H; King, Andrew J

    2016-11-01

    Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1-7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context.
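
    As a rough illustration of the model class named above (not the authors' fitted code; all shapes and weights are placeholders), a network receptive field passes a spectrogram patch through a few linear sub-receptive fields and sigmoid nonlinearities before an output nonlinearity, whereas the standard STRF prediction is a single inner product.

      import numpy as np

      rng = np.random.default_rng(0)
      n_freq, n_hist, n_sub = 32, 20, 4          # spectral channels, time lags, sub-RFs

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      # Illustrative random weights standing in for fitted parameters.
      sub_rfs = rng.normal(size=(n_sub, n_freq * n_hist)) / np.sqrt(n_freq * n_hist)
      w_out = rng.normal(size=n_sub)

      def nrf_response(stimulus_patch):
          """Network receptive field: nonlinear combination of sub-RF outputs."""
          hidden = sigmoid(sub_rfs @ stimulus_patch.ravel())
          return sigmoid(w_out @ hidden)

      def linear_strf_response(stimulus_patch, strf):
          """Standard linear STRF prediction, for comparison."""
          return strf.ravel() @ stimulus_patch.ravel()

      patch = rng.random(size=(n_freq, n_hist))   # a spectrogram snippet
      print(nrf_response(patch))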

  6. Network Receptive Field Modeling Reveals Extensive Integration and Multi-feature Selectivity in Auditory Cortical Neurons

    PubMed Central

    Willmore, Ben D. B.; Cui, Zhanfeng; Schnupp, Jan W. H.; King, Andrew J.

    2016-01-01

    Cortical sensory neurons are commonly characterized using the receptive field, the linear dependence of their response on the stimulus. In primary auditory cortex neurons can be characterized by their spectrotemporal receptive fields, the spectral and temporal features of a sound that linearly drive a neuron. However, receptive fields do not capture the fact that the response of a cortical neuron results from the complex nonlinear network in which it is embedded. By fitting a nonlinear feedforward network model (a network receptive field) to cortical responses to natural sounds, we reveal that primary auditory cortical neurons are sensitive over a substantially larger spectrotemporal domain than is seen in their standard spectrotemporal receptive fields. Furthermore, the network receptive field, a parsimonious network consisting of 1–7 sub-receptive fields that interact nonlinearly, consistently better predicts neural responses to auditory stimuli than the standard receptive fields. The network receptive field reveals separate excitatory and inhibitory sub-fields with different nonlinear properties, and interaction of the sub-fields gives rise to important operations such as gain control and conjunctive feature detection. The conjunctive effects, where neurons respond only if several specific features are present together, enable increased selectivity for particular complex spectrotemporal structures, and may constitute an important stage in sound recognition. In conclusion, we demonstrate that fitting auditory cortical neural responses with feedforward network models expands on simple linear receptive field models in a manner that yields substantially improved predictive power and reveals key nonlinear aspects of cortical processing, while remaining easy to interpret in a physiological context. PMID:27835647

  7. Idealized Computational Models for Auditory Receptive Fields

    PubMed Central

    Lindeberg, Tony; Friberg, Anders

    2015-01-01

    We present a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties to (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the log-spectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or be adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals. PMID:25822973
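
    For orientation, the classical gammatone impulse response that the framework rederives can be written as below (standard textbook form; the paper's generalized Gammatone family adds further degrees of freedom, and the symbols here follow common usage rather than necessarily the paper's notation):

      g(t) = a \, t^{\,n-1} e^{-2\pi b t} \cos(2\pi f t + \varphi), \qquad t \ge 0,

    where a is an amplitude factor, n the filter order, b the bandwidth parameter, f the carrier (center) frequency, and \varphi the phase.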

  8. Central auditory neurons have composite receptive fields.

    PubMed

    Kozlov, Andrei S; Gentner, Timothy Q

    2016-02-02

    High-level neurons processing complex, behaviorally relevant signals are sensitive to conjunctions of features. Characterizing the receptive fields of such neurons is difficult with standard statistical tools, however, and the principles governing their organization remain poorly understood. Here, we demonstrate multiple distinct receptive-field features in individual high-level auditory neurons in a songbird, European starling, in response to natural vocal signals (songs). We then show that receptive fields with similar characteristics can be reproduced by an unsupervised neural network trained to represent starling songs with a single learning rule that enforces sparseness and divisive normalization. We conclude that central auditory neurons have composite receptive fields that can arise through a combination of sparseness and normalization in neural circuits. Our results, along with descriptions of random, discontinuous receptive fields in the central olfactory neurons in mammals and insects, suggest general principles of neural computation across sensory systems and animal classes.
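
    A minimal sketch of one of the two ingredients named above, divisive normalization applied to rectified (hence sparser) filter outputs; the toy numbers are illustrative, and the paper's trained unsupervised network is far richer:

      import numpy as np

      def divisive_normalization(responses, sigma=0.1):
          """Divide each unit's rectified response by pooled population activity."""
          r = np.maximum(responses, 0.0)            # rectification -> sparser code
          return r / (sigma + r.sum())

      raw = np.array([0.05, 1.2, -0.3, 0.4, 0.02])  # toy linear filter outputs
      print(divisive_normalization(raw).round(3))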

  9. Story retelling and language ability in school-aged children with cerebral palsy and speech impairment.

    PubMed

    Nordberg, Ann; Dahlgren Sandberg, Annika; Miniscalco, Carmela

    2015-01-01

    Research on retelling ability and cognition is limited in children with cerebral palsy (CP) and speech impairment. To explore the impact of expressive and receptive language, narrative discourse dimensions (Narrative Assessment Profile measures), auditory and visual memory, theory of mind (ToM) and non-verbal cognition on the retelling ability of children with CP and speech impairment. Fifteen speaking children with speech impairment (seven girls, eight boys) (mean age = 11 years, SD = 1;4 years), and different types of CP and different levels of gross motor and cognitive function participated in the present study. Story retelling skills were tested and analysed with the Bus Story Test (BST) and the Narrative Assessment Profile (NAP). Receptive language ability was tested with the Test for Reception of Grammar-2 (TROG-2) and the Peabody Picture Vocabulary Test - IV (PPVT-IV). Non-verbal cognitive level was tested with the Raven's coloured progressive matrices (RCPM), memory functions assessed with the Corsi block-tapping task (CB) and the Digit Span from the Wechsler Intelligence Scale for Children-III. ToM was assessed with the false belief items of the two story tests "Kiki and the Cat" and "Birthday Puppy". The children had severe problems with retelling ability corresponding to an age-equivalent of 5;2-6;9 years. Receptive and expressive language, visuo-spatial and auditory memory, non-verbal cognitive level and ToM varied widely within and among the children. Both expressive and receptive language correlated significantly with narrative ability in terms of NAP total scores, so did auditory memory. The results suggest that retelling ability in the children with CP in the present study is dependent on language comprehension and production, and memory functions. Consequently, it is important to examine retelling ability together with language and cognitive abilities in these children in order to provide appropriate support. © 2015 Royal College of Speech and Language Therapists.

  10. Listening to Sentences in Noise: Revealing Binaural Hearing Challenges in Patients with Schizophrenia.

    PubMed

    Abdul Wahab, Noor Alaudin; Zakaria, Mohd Normani; Abdul Rahman, Abdul Hamid; Sidek, Dinsuhaimi; Wahab, Suzaily

    2017-11-01

    The present case-control study investigates binaural hearing performance of schizophrenia patients for sentences presented in quiet and in noise. Participants were twenty-one healthy controls and sixteen schizophrenia patients with normal peripheral auditory functions. Binaural hearing was examined in four listening conditions using the Malay version of the hearing in noise test. Syntactically and semantically correct sentences were presented via headphones to the randomly selected subjects. In each condition, the adaptively obtained reception thresholds for speech (RTS) were used to determine the RTS noise composite and spatial release from masking. Schizophrenia patients demonstrated a significantly higher mean RTS value relative to healthy controls (p=0.018). The large effect sizes found in three listening conditions, i.e., in quiet (d=1.07), noise right (d=0.88), and noise composite (d=0.90), indicate a statistically significant difference between the groups, whereas the noise front and noise left conditions show medium (d=0.61) and small (d=0.50) effect sizes, respectively. No statistical difference between groups was noted with regard to spatial release from masking in the right (p=0.305) and left (p=0.970) ear. The present findings suggest abnormal unilateral auditory processing in the central auditory pathway in schizophrenia patients. Future studies exploring the role of binaural and spatial auditory processing are recommended.
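
    The effect sizes quoted above are Cohen's d values; a minimal sketch of the usual pooled-standard-deviation computation (the RTS numbers below are made up, not the study's data):

      import numpy as np

      def cohens_d(group_a, group_b):
          """Cohen's d with a pooled standard deviation."""
          a, b = np.asarray(group_a, float), np.asarray(group_b, float)
          na, nb = len(a), len(b)
          pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
          return (a.mean() - b.mean()) / np.sqrt(pooled_var)

      # Illustrative RTS values (dB SNR); not the study's data.
      patients = [2.1, 1.4, 3.0, 2.5, 1.8]
      controls = [0.9, 1.1, 0.4, 1.3, 0.8]
      print(round(cohens_d(patients, controls), 2))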

  11. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    PubMed Central

    Młynarski, Wiktor

    2014-01-01

    To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644
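
    A rough sketch of the first step, assuming scikit-learn's FastICA and surrogate binaural features (random stand-ins for spectrogram snippets; in the paper, the learned components carry interaural cues that a simple hierarchical readout can use for localization):

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(0)

      # Surrogate data: each row is a binaural spectrogram snippet,
      # left- and right-ear log-power features concatenated.
      n_samples, n_freq = 2000, 64
      left = rng.normal(size=(n_samples, n_freq))
      right = np.roll(left, 3, axis=1) + 0.5 * rng.normal(size=(n_samples, n_freq))
      X = np.hstack([left, right])

      ica = FastICA(n_components=20, random_state=0, max_iter=500)
      sources = ica.fit_transform(X)            # independent components per snippet
      features = ica.mixing_                    # learned binaural spectrogram features
      print(features.shape)                     # (128, 20)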

  12. Temporal variability of spectro-temporal receptive fields in the anesthetized auditory cortex.

    PubMed

    Meyer, Arne F; Diepenbrock, Jan-Philipp; Ohl, Frank W; Anemüller, Jörn

    2014-01-01

    Temporal variability of neuronal response characteristics during sensory stimulation is a ubiquitous phenomenon that may reflect processes such as stimulus-driven adaptation, top-down modulation or spontaneous fluctuations. It poses a challenge to functional characterization methods such as the receptive field, since these often assume stationarity. We propose a novel method for estimation of sensory neurons' receptive fields that extends the classic static linear receptive field model to the time-varying case. Here, the long-term estimate of the static receptive field serves as the mean of a probabilistic prior distribution from which the short-term temporally localized receptive field may deviate stochastically with time-varying standard deviation. The derived corresponding generalized linear model permits robust characterization of temporal variability in receptive field structure also for highly non-Gaussian stimulus ensembles. We computed and analyzed short-term auditory spectro-temporal receptive field (STRF) estimates with a characteristic temporal resolution of 5-30 s, based on model simulations and responses from a total of 60 single-unit recordings in anesthetized Mongolian gerbil auditory midbrain and cortex. Stimulation was performed with short (100 ms) overlapping frequency-modulated tones. Results demonstrate identification of time-varying STRFs, with obtained predictive model likelihoods exceeding those from baseline static STRF estimation. Quantitative characterization of STRF variability reveals a higher degree thereof in auditory cortex compared to midbrain. Cluster analysis indicates that significant deviations from the long-term static STRF are brief, but reliably estimated. We hypothesize that the observed variability more likely reflects spontaneous or state-dependent internal fluctuations that interact with stimulus-induced processing, rather than experimental or stimulus design.
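
    One compact way to render the core idea, assuming a ridge-style Gaussian prior and toy Gaussian stimuli (the paper's generalized linear model also handles non-Gaussian ensembles and a time-varying prior width): the long-term STRF serves as the prior mean, so each short window's estimate is shrunk toward it rather than toward zero.

      import numpy as np

      def map_strf(X, y, w0, lam):
          """MAP estimate with a Gaussian prior centered on the long-term STRF w0.

          Solves (X'X + lam*I) w = X'y + lam*w0, i.e. ridge regression that
          shrinks toward w0 instead of toward zero.
          """
          d = X.shape[1]
          return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w0)

      rng = np.random.default_rng(1)
      d, n_long, n_short = 50, 5000, 200
      w_true = rng.normal(size=d)

      X_long = rng.normal(size=(n_long, d))
      y_long = X_long @ w_true + rng.normal(size=n_long)
      w0 = map_strf(X_long, y_long, np.zeros(d), lam=10.0)   # static long-term STRF

      X_win = rng.normal(size=(n_short, d))                  # one short time window
      y_win = X_win @ (w_true + 0.3 * rng.normal(size=d)) + rng.normal(size=n_short)
      w_t = map_strf(X_win, y_win, w0, lam=50.0)             # short-term STRF near w0
      print(np.linalg.norm(w_t - w0))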

  13. [Receptive and expressive speech development in children with cochlear implant].

    PubMed

    Streicher, B; Kral, K; Hahn, M; Lang-Roth, R

    2015-04-01

    This study's aim is the assessment of language development in children with cochlear implants (CIs). It focuses on receptive and expressive language development as well as auditory memory skills. Grimm's language development test (SETK 3-5) evaluates receptive and expressive language development and auditory memory. Data from 49 children who received their implant within the first 3 years of life were compared with the norms for hearing children aged 3.0-3.5 years. According to age at implantation, the cohort was subdivided into 3 groups: cochlear implantation within the first 12 months of life (group 1), between the 13th and 24th month of life (group 2), and at 25 months of life or later (group 3). Complete data for all SETK 3-5 subtests could be collected for 63% of the participants. A homogeneous profile across all subtests indicates a balanced receptive and expressive language development, which reduces the gap between hearing/language age and chronological age. Receptive and expressive language and auditory memory milestones are achieved earlier by children implanted within their first year of life than by later-implanted children. The Language Test for Children (SETK 3-5) is an appropriate procedure for language assessment of children who have received a CI. It can be used from age 3 on to collect data on receptive and expressive language development and auditory memory. © Georg Thieme Verlag KG Stuttgart · New York.

  14. Surgical factors in pediatric cochlear implantation and their early effects on electrode activation and functional outcomes.

    PubMed

    Francis, Howard W; Buchman, Craig A; Visaya, Jiovani M; Wang, Nae-Yuh; Zwolan, Teresa A; Fink, Nancy E; Niparko, John K

    2008-06-01

    To assess the impact of surgical factors on electrode status and early communication outcomes in young children in the first 2 years of cochlear implantation. Prospective multicenter cohort study. Six tertiary referral centers. Children 5 years or younger before implantation with normal nonverbal intelligence. Cochlear implant operations in 209 ears of 188 children. Percent active channels, auditory behavior as measured by the Infant Toddler Meaningful Auditory Integration Scale/Meaningful Auditory Integration Scale and Reynell receptive language scores. Stable insertion of the full electrode array was accomplished in 96.2% of ears. At least 75% of electrode channels were active in 88% of ears. Electrode deactivation had a significant negative effect on Infant Toddler Meaningful Auditory Integration Scale/Meaningful Auditory Integration Scale scores at 24 months but no effect on receptive language scores. Significantly fewer active electrodes were associated with a history of meningitis. Surgical complications requiring additional hospitalization and/or revision surgery occurred in 6.7% of patients but had no measurable effect on the development of auditory behavior within the first 2 years. Negative, although insignificant, associations were observed between the need for perioperative revision of the device and 1) the percent of active electrodes and 2) the receptive language level at 2-year follow-up. Activation of the entire electrode array is associated with better early auditory outcomes. Decrements in the number of active electrodes and lower gains of receptive language after manipulation of the newly implanted device were not statistically significant but may be clinically relevant, underscoring the importance of surgical technique and the effective placement of the electrode array.

  15. Outline for Remediation of Problem Areas for Children with Learning Disabilities. Revised. = Bosquejo para la Correccion de Areas Problematicas para Ninos con Impedimientos del Aprendizaje.

    ERIC Educational Resources Information Center

    Bornstein, Joan L.

    The booklet outlines ways to help children with learning disabilities in specific subject areas. Characteristic behavior and remedial exercises are listed for seven areas of auditory problems: auditory reception, auditory association, auditory discrimination, auditory figure ground, auditory closure and sound blending, auditory memory, and grammar…

  16. Compression of auditory space during forward self-motion.

    PubMed

    Teramoto, Wataru; Sakamoto, Shuichi; Furune, Fumimasa; Gyoba, Jiro; Suzuki, Yôiti

    2012-01-01

    Spatial inputs from the auditory periphery can be changed with movements of the head or whole body relative to the sound source. Nevertheless, humans can perceive a stable auditory environment and appropriately react to a sound source. This suggests that the inputs are reinterpreted in the brain, while being integrated with information on the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. Participants were passively transported forward/backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated whether the sound was presented forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing with a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. These results suggest a distortion of the auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial shifts in auditory receptive field locations driven by afferent signals from the vestibular system.

  17. Cognitive abilities relate to self-reported hearing disability.

    PubMed

    Zekveld, Adriana A; George, Erwin L J; Houtgast, Tammo; Kramer, Sophia E

    2013-10-01

    In this explorative study, the authors investigated the relationship between auditory and cognitive abilities and self-reported hearing disability. Thirty-two adults with mild to moderate hearing loss completed the Amsterdam Inventory for Auditory Disability and Handicap (AIADH; Kramer, Kapteyn, Festen, & Tobi, 1996) and performed the Text Reception Threshold (TRT; Zekveld, George, Kramer, Goverts, & Houtgast, 2007) test as well as tests of spatial working memory (SWM) and visual sustained attention. Regression analyses examined the predictive value of age, hearing thresholds (pure-tone averages [PTAs]), speech perception in noise (speech reception thresholds in noise [SRTNs]), and the cognitive tests for the 5 AIADH factors. Besides the variance explained by age, PTA, and SRTN, cognitive abilities were related to each hearing factor. The reported difficulties with sound detection and speech perception in quiet were less severe for participants with higher age, lower PTAs, and better TRTs. Fewer sound localization and speech perception in noise problems were reported by participants with better SRTNs and smaller SWM. Fewer sound discrimination difficulties were reported by subjects with better SRTNs and TRTs and smaller SWM. The results suggest a general role of the ability to read partly masked text in subjective hearing. Large working memory was associated with more reported hearing difficulties. This study shows that besides auditory variables and age, cognitive abilities are related to self-reported hearing disability.

  18. Temporal lobe stimulation reveals anatomic distinction between auditory naming processes.

    PubMed

    Hamberger, M J; Seidel, W T; Goodman, R R; Perrine, K; McKhann, G M

    2003-05-13

    Language errors induced by cortical stimulation can provide insight into function(s) supported by the area stimulated. The authors observed that some stimulation-induced errors during auditory description naming were characterized by tip-of-the-tongue responses or paraphasic errors, suggesting expressive difficulty, whereas others were qualitatively different, suggesting receptive difficulty. They hypothesized that these two response types reflected disruption at different stages of auditory verbal processing and that these "subprocesses" might be supported by anatomically distinct cortical areas. To explore the topographic distribution of error types in auditory verbal processing. Twenty-one patients requiring left temporal lobe surgery underwent preresection language mapping using direct cortical stimulation. Auditory naming was tested at temporal sites extending from 1 cm from the anterior tip to the parietal operculum. Errors were dichotomized as either "expressive" or "receptive." The topographic distribution of error types was explored. Sites associated with the two error types were topographically distinct from one another. Most receptive sites were located in the middle portion of the superior temporal gyrus (STG), whereas most expressive sites fell outside this region, scattered along lateral temporal and temporoparietal cortex. Results raise clinical questions regarding the inclusion of the STG in temporal lobe epilepsy surgery and suggest that more detailed cortical mapping might enable better prediction of postoperative language decline. From a theoretical perspective, results carry implications regarding the understanding of structure-function relations underlying temporal lobe mediation of auditory language processing.

  19. Frequency-specific adaptation and its underlying circuit model in the auditory midbrain.

    PubMed

    Shen, Li; Zhao, Lingyun; Hong, Bo

    2015-01-01

    Receptive fields of sensory neurons are considered to be dynamic and depend on the stimulus history. In the auditory system, evidence of dynamic frequency-receptive fields has been found following stimulus-specific adaptation (SSA). However, the underlying mechanism and circuitry of SSA have not been fully elucidated. Here, we studied how frequency-receptive fields of neurons in rat inferior colliculus (IC) changed when exposed to a biased tone sequence. Pure tone with one specific frequency (adaptor) was presented markedly more often than others. The adapted tuning was compared with the original tuning measured with an unbiased sequence. We found inhomogeneous changes in frequency tuning in IC, exhibiting a center-surround pattern with respect to the neuron's best frequency. Central adaptors elicited strong suppressive and repulsive changes while flank adaptors induced facilitative and attractive changes. Moreover, we proposed a two-layer model of the underlying network, which not only reproduced the adaptive changes in the receptive fields but also predicted novelty responses to oddball sequences. These results suggest that frequency-specific adaptation in auditory midbrain can be accounted for by an adapted frequency channel and its lateral spreading of adaptation, which shed light on the organization of the underlying circuitry.
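
    A toy rendering of the two-layer account (Gaussian tuning and a Gaussian lateral spread of adaptation; every parameter below is illustrative): an adaptor lowers the gain of its own frequency channel and, more weakly, neighboring channels, which shifts the pooled tuning of a downstream IC-like unit away from the adaptor.

      import numpy as np

      freqs = np.linspace(0, 1, 101)             # normalized log-frequency axis
      channels = np.linspace(0, 1, 21)           # first-layer frequency channels

      def gauss(x, mu, sigma):
          return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

      # First layer: channel responses to each tone frequency.
      layer1 = gauss(freqs[None, :], channels[:, None], 0.06)

      # Adaptation: gain loss centered on the adaptor channel, spreading laterally.
      adaptor, depth, spread = 0.5, 0.6, 0.08
      gain = 1.0 - depth * gauss(channels, adaptor, spread)

      # Second layer: an IC-like unit pooling channels near its best frequency.
      w = gauss(channels, 0.55, 0.10)
      tuning_before = w @ layer1
      tuning_after = (w * gain) @ layer1
      # Peak shifts away from the adaptor, i.e. a repulsive tuning change.
      print(freqs[tuning_before.argmax()], freqs[tuning_after.argmax()])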

  20. Frequency-specific adaptation and its underlying circuit model in the auditory midbrain

    PubMed Central

    Shen, Li; Zhao, Lingyun; Hong, Bo

    2015-01-01

    Receptive fields of sensory neurons are considered to be dynamic and depend on the stimulus history. In the auditory system, evidence of dynamic frequency-receptive fields has been found following stimulus-specific adaptation (SSA). However, the underlying mechanism and circuitry of SSA have not been fully elucidated. Here, we studied how frequency-receptive fields of neurons in rat inferior colliculus (IC) changed when exposed to a biased tone sequence. Pure tone with one specific frequency (adaptor) was presented markedly more often than others. The adapted tuning was compared with the original tuning measured with an unbiased sequence. We found inhomogeneous changes in frequency tuning in IC, exhibiting a center-surround pattern with respect to the neuron's best frequency. Central adaptors elicited strong suppressive and repulsive changes while flank adaptors induced facilitative and attractive changes. Moreover, we proposed a two-layer model of the underlying network, which not only reproduced the adaptive changes in the receptive fields but also predicted novelty responses to oddball sequences. These results suggest that frequency-specific adaptation in auditory midbrain can be accounted for by an adapted frequency channel and its lateral spreading of adaptation, which shed light on the organization of the underlying circuitry. PMID:26483641

  21. Spectrotemporal dynamics of auditory cortical synaptic receptive field plasticity.

    PubMed

    Froemke, Robert C; Martins, Ana Raquel O

    2011-09-01

    The nervous system must dynamically represent sensory information in order for animals to perceive and operate within a complex, changing environment. Receptive field plasticity in the auditory cortex allows cortical networks to organize around salient features of the sensory environment during postnatal development, and then subsequently refine these representations depending on behavioral context later in life. Here we review the major features of auditory cortical receptive field plasticity in young and adult animals, focusing on modifications to frequency tuning of synaptic inputs. Alteration in the patterns of acoustic input, including sensory deprivation and tonal exposure, leads to rapid adjustments of excitatory and inhibitory strengths that collectively determine the suprathreshold tuning curves of cortical neurons. Long-term cortical plasticity also requires co-activation of subcortical neuromodulatory control nuclei such as the cholinergic nucleus basalis, particularly in adults. Regardless of developmental stage, regulation of inhibition seems to be a general mechanism by which changes in sensory experience and neuromodulatory state can remodel cortical receptive fields. We discuss recent findings suggesting that the microdynamics of synaptic receptive field plasticity unfold as a multi-phase set of distinct phenomena, initiated by disrupting the balance between excitation and inhibition, and eventually leading to wide-scale changes to many synapses throughout the cortex. These changes are coordinated to enhance the representations of newly-significant stimuli, possibly for improved signal processing and language learning in humans. Copyright © 2011 Elsevier B.V. All rights reserved.

  22. Spectrotemporal Dynamics of Auditory Cortical Synaptic Receptive Field Plasticity

    PubMed Central

    Froemke, Robert C.; Martins, Ana Raquel O.

    2011-01-01

    The nervous system must dynamically represent sensory information in order for animals to perceive and operate within a complex, changing environment. Receptive field plasticity in the auditory cortex allows cortical networks to organize around salient features of the sensory environment during postnatal development, and then subsequently refine these representations depending on behavioral context later in life. Here we review the major features of auditory cortical receptive field plasticity in young and adult animals, focusing on modifications to frequency tuning of synaptic inputs. Alteration in the patterns of acoustic input, including sensory deprivation and tonal exposure, leads to rapid adjustments of excitatory and inhibitory strengths that collectively determine the suprathreshold tuning curves of cortical neurons. Long-term cortical plasticity also requires co-activation of subcortical neuromodulatory control nuclei such as the cholinergic nucleus basalis, particularly in adults. Regardless of developmental stage, regulation of inhibition seems to be a general mechanism by which changes in sensory experience and neuromodulatory state can remodel cortical receptive fields. We discuss recent findings suggesting that the microdynamics of synaptic receptive field plasticity unfold as a multi-phase set of distinct phenomena, initiated by disrupting the balance between excitation and inhibition, and eventually leading to wide-scale changes to many synapses throughout the cortex. These changes are coordinated to enhance the representations of newly-significant stimuli, possibly for improved signal processing and language learning in humans. PMID:21426927

  23. Human Engineer’s Guide to Auditory Displays. Volume 2. Elements of Signal Reception and Resolution Affecting Auditory Displays.

    DTIC Science & Technology

    1984-08-01

    This work reviews the areas of monaural and binaural signal detection, auditory discrimination and localization, and reaction times, pertaining to the major areas of auditory processing in humans.

  24. A physiologically-inspired model reproducing the speech intelligibility benefit in cochlear implant listeners with residual acoustic hearing.

    PubMed

    Zamaninezhad, Ladan; Hohmann, Volker; Büchner, Andreas; Schädler, Marc René; Jürgens, Tim

    2017-02-01

    This study introduces a speech intelligibility model for cochlear implant users with ipsilateral preserved acoustic hearing that aims at simulating the observed speech-in-noise intelligibility benefit when receiving simultaneous electric and acoustic stimulation (EA-benefit). The model simulates the auditory nerve spiking in response to electric and/or acoustic stimulation. The temporally and spatially integrated spiking patterns were used as the final internal representation of noisy speech. Speech reception thresholds (SRTs) in stationary noise were predicted for a sentence test using an automatic speech recognition framework. The model was employed to systematically investigate the effect of three physiologically relevant model factors on simulated SRTs: (1) the spatial spread of the electric field, which co-varies with the number of electrically stimulated auditory nerves, (2) the "internal" noise simulating the deprivation of the auditory system, and (3) the upper bound frequency limit of acoustic hearing. The model results show that the simulated SRTs increase monotonically with increasing spatial spread for fixed internal noise, and also increase with increasing internal noise strength for a fixed spatial spread. The predicted EA-benefit does not follow such a systematic trend and depends on the specific combination of the model parameters. Beyond 300 Hz, the upper bound limit for preserved acoustic hearing is less influential on speech intelligibility of EA-listeners in stationary noise. The proposed model-predicted EA-benefits are within the range of EA-benefits shown by 18 out of 21 actual cochlear implant listeners with preserved acoustic hearing. Copyright © 2016 Elsevier B.V. All rights reserved.
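
    The first model factor above is often simplified as an exponentially decaying electric field along the cochlea; a sketch under that assumption (the decay constant and positions are illustrative, and the published model's implementation may differ):

      import numpy as np

      def electric_spread(positions_mm, electrode_mm, lam_mm=3.0):
          """Exponentially decaying electric field along the cochlea."""
          return np.exp(-np.abs(np.asarray(positions_mm) - electrode_mm) / lam_mm)

      nerve_positions = np.linspace(0, 35, 200)      # mm along the cochlea
      field = electric_spread(nerve_positions, electrode_mm=18.0, lam_mm=3.0)
      n_excited = (field > 0.5).sum()                # nerves above a threshold
      print(n_excited)                               # grows with lam_mm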

  25. Basic Auditory Processing Skills and Phonological Awareness in Low-IQ Readers and Typically Developing Controls

    ERIC Educational Resources Information Center

    Kuppen, Sarah; Huss, Martina; Fosker, Tim; Fegan, Natasha; Goswami, Usha

    2011-01-01

    We explore the relationships between basic auditory processing, phonological awareness, vocabulary, and word reading in a sample of 95 children, 55 typically developing children, and 40 children with low IQ. All children received nonspeech auditory processing tasks, phonological processing and literacy measures, and a receptive vocabulary task.…

  26. Topographic mapping of a hierarchy of temporal receptive windows using a narrated story

    PubMed Central

    Lerner, Y.; Honey, C.J.; Silbert, L.J.; Hasson, U.

    2011-01-01

    Real-life activities, such as watching a movie or engaging in conversation, unfold over many minutes. In the course of such activities the brain has to integrate information over multiple time scales. We recently proposed that the brain uses similar strategies for integrating information across space and over time. Drawing a parallel with spatial receptive fields (SRF), we defined the temporal receptive window (TRW) of a cortical microcircuit as the length of time prior to a response during which sensory information may affect that response. Our previous findings in the visual system are consistent with the hypothesis that TRWs become larger when moving from low-level sensory to high-level perceptual and cognitive areas. In this study, we mapped TRWs in auditory and language areas by measuring fMRI activity in subjects listening to a real life story scrambled at the time scales of words, sentences and paragraphs. Our results revealed a hierarchical topography of TRWs. In early auditory cortices (A1+), brain responses were driven mainly by the momentary incoming input and were similarly reliable across all scrambling conditions. In areas with an intermediate TRW, coherent information at the sentence time scale or longer was necessary to evoke reliable responses. At the apex of the TRW hierarchy we found parietal and frontal areas which responded reliably only when intact paragraphs were heard in a meaningful sequence. These results suggest that the time scale of processing is a functional property that may provide a general organizing principle for the human cerebral cortex. PMID:21414912

  27. Effect of subliminal visual material on an auditory signal detection task.

    PubMed

    Moroney, E; Bross, M

    1984-02-01

    An experiment assessed the effect of subliminally embedded, visual material on an auditory detection task. 22 women and 19 men were presented tachistoscopically with words designated as "emotional" or "neutral" on the basis of prior GSRs and a Word Rating List under four conditions: (a) Unembedded Neutral, (b) Embedded Neutral, (c) Unembedded Emotional, and (d) Embedded Emotional. On each trial subjects made forced choices concerning the presence or absence of an auditory tone (1000 Hz) at threshold level; hits and false alarm rates were used to compute non-parametric indices for sensitivity (A') and response bias (B"). While over-all analyses of variance yielded no significant differences, further examination of the data suggests the presence of subliminally "receptive" and "non-receptive" subpopulations.
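
    The non-parametric indices mentioned above have standard closed forms (Grier's A' and B''); a small sketch, valid for the common case of hit rate H at or above false-alarm rate F:

      def a_prime(h, f):
          """Grier's non-parametric sensitivity index (assumes h >= f)."""
          return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))

      def b_double_prime(h, f):
          """Grier's non-parametric response-bias index."""
          return (h * (1 - h) - f * (1 - f)) / (h * (1 - h) + f * (1 - f))

      # Illustrative rates, not the study's data.
      print(a_prime(0.8, 0.3), b_double_prime(0.8, 0.3))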

  28. The Auditory Anatomy of the Minke Whale (Balaenoptera acutorostrata): A Potential Fatty Sound Reception Pathway in a Baleen Whale

    PubMed Central

    Yamato, Maya; Ketten, Darlene R; Arruda, Julie; Cramer, Scott; Moore, Kathleen

    2012-01-01

    Cetaceans possess highly derived auditory systems adapted for underwater hearing. Odontoceti (toothed whales) are thought to receive sound through specialized fat bodies that contact the tympanoperiotic complex, the bones housing the middle and inner ears. However, sound reception pathways remain unknown in Mysticeti (baleen whales), which have very different cranial anatomies compared to odontocetes. Here, we report a potential fatty sound reception pathway in the minke whale (Balaenoptera acutorostrata), a mysticete of the balaenopterid family. The cephalic anatomy of seven minke whales was investigated using computerized tomography and magnetic resonance imaging, verified through dissections. Findings include a large, well-formed fat body lateral, dorsal, and posterior to the mandibular ramus and lateral to the tympanoperiotic complex. This fat body inserts into the tympanoperiotic complex at the lateral aperture between the tympanic and periotic bones and is in contact with the ossicles. There is also a second, smaller body of fat found within the tympanic bone, which contacts the ossicles as well. This is the first analysis of these fatty tissues' association with the auditory structures in a mysticete, providing anatomical evidence that fatty sound reception pathways may not be a unique feature of odontocete cetaceans. Anat Rec, 2012. © 2012 Wiley Periodicals, Inc. PMID:22488847

  29. Patterns of Auditory Perception Skills in Children with Learning Disabilities: A Computer-Assisted Approach.

    ERIC Educational Resources Information Center

    Pressman, E.; And Others

    1986-01-01

    The auditory receptive language skills of 40 learning disabled (LD) and 40 non-disabled boys (all 7-11 years old) were assessed via computerized versions of subtests of the Goldman-Fristoe-Woodcock Auditory Skills Test Battery. The computerized assessment correctly identified 92.5% of the LD group and 65% of the normal control children. (DB)

  30. Spatial heterogeneity of cortical receptive fields and its impact on multisensory interactions.

    PubMed

    Carriere, Brian N; Royal, David W; Wallace, Mark T

    2008-05-01

    Investigations of multisensory processing at the level of the single neuron have illustrated the importance of the spatial and temporal relationship of the paired stimuli and their relative effectiveness in determining the product of the resultant interaction. Although these principles provide a good first-order description of the interactive process, they were derived by treating space, time, and effectiveness as independent factors. In the anterior ectosylvian sulcus (AES) of the cat, previous work hinted that the spatial receptive field (SRF) architecture of multisensory neurons might play an important role in multisensory processing due to differences in the vigor of responses to identical stimuli placed at different locations within the SRF. In this study the impact of SRF architecture on cortical multisensory processing was investigated using semichronic single-unit electrophysiological experiments targeting a multisensory domain of the cat AES. The visual and auditory SRFs of AES multisensory neurons exhibited striking response heterogeneity, with SRF architecture appearing to play a major role in the multisensory interactions. The deterministic role of SRF architecture was tightly coupled to the manner in which stimulus location modulated the responsiveness of the neuron. Thus multisensory stimulus combinations at weakly effective locations within the SRF resulted in large (often superadditive) response enhancements, whereas combinations at more effective spatial locations resulted in smaller (additive/subadditive) interactions. These results provide important insights into the spatial organization and processing capabilities of cortical multisensory neurons, features that may provide important clues as to the functional roles played by this area in spatially directed perceptual processes.
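
    The super- and subadditive labels above are conventionally quantified with an interactive index comparing the combined response with the best unisensory response; a minimal sketch with made-up spike counts:

      def multisensory_enhancement(combined, best_unisensory):
          """Percent change of the combined response over the best unisensory one."""
          return 100.0 * (combined - best_unisensory) / best_unisensory

      visual, auditory, combined = 4.0, 6.0, 15.0   # mean spikes/trial, illustrative
      print(multisensory_enhancement(combined, max(visual, auditory)))  # 150.0
      # combined > visual + auditory here, i.e. a superadditive interaction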

  31. Sequencing Stories in Spanish and English.

    ERIC Educational Resources Information Center

    Steckbeck, Pamela Meza

    The guide was designed for speech pathologists, bilingual teachers, and specialists in English as a second language who work with Spanish-speaking children. The guide contains twenty illustrated stories that facilitate the learning of auditory sequencing, auditory and visual memory, receptive and expressive vocabulary, and expressive language…

  32. Incorporating Midbrain Adaptation to Mean Sound Level Improves Models of Auditory Cortical Processing

    PubMed Central

    Schoppe, Oliver; King, Andrew J.; Schnupp, Jan W.H.; Harper, Nicol S.

    2016-01-01

    Adaptation to stimulus statistics, such as the mean level and contrast of recently heard sounds, has been demonstrated at various levels of the auditory pathway. It allows the nervous system to operate over the wide range of intensities and contrasts found in the natural world. Yet current standard models of the response properties of auditory neurons do not incorporate such adaptation. Here we present a model of neural responses in the ferret auditory cortex (the IC Adaptation model), which takes into account adaptation to mean sound level at a lower level of processing: the inferior colliculus (IC). The model performs high-pass filtering with frequency-dependent time constants on the sound spectrogram, followed by half-wave rectification, and passes the output to a standard linear–nonlinear (LN) model. We find that the IC Adaptation model consistently predicts cortical responses better than the standard LN model for a range of synthetic and natural stimuli. The IC Adaptation model introduces no extra free parameters, so it improves predictions without sacrificing parsimony. Furthermore, the time constants of adaptation in the IC appear to be matched to the statistics of natural sounds, suggesting that neurons in the auditory midbrain predict the mean level of future sounds and adapt their responses appropriately. SIGNIFICANCE STATEMENT An ability to accurately predict how sensory neurons respond to novel stimuli is critical if we are to fully characterize their response properties. Attempts to model these responses have had a distinguished history, but it has proven difficult to improve their predictive power significantly beyond that of simple, mostly linear receptive field models. Here we show that auditory cortex receptive field models benefit from a nonlinear preprocessing stage that replicates known adaptation properties of the auditory midbrain. This improves their predictive power across a wide range of stimuli but keeps model complexity low as it introduces no new free parameters. Incorporating the adaptive coding properties of neurons will likely improve receptive field models in other sensory modalities too. PMID:26758822
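
    A schematic rendering of the preprocessing stage described above, assuming a first-order exponential estimate of mean level per frequency channel (time constants and input are illustrative; the published model's filter details may differ). Its output would then feed a standard linear-nonlinear stage:

      import numpy as np

      def ic_adaptation(spectrogram, taus, dt=0.005):
          """High-pass each frequency channel with its own time constant,
          then half-wave rectify, approximating mean-level adaptation in IC."""
          n_freq, n_time = spectrogram.shape
          alphas = dt / (np.asarray(taus) + dt)        # per-channel smoothing factors
          mean_level = np.zeros(n_freq)
          out = np.empty_like(spectrogram)
          for t in range(n_time):
              x = spectrogram[:, t]
              mean_level += alphas * (x - mean_level)  # running mean per channel
              out[:, t] = np.maximum(x - mean_level, 0.0)  # high-pass + rectify
          return out

      rng = np.random.default_rng(0)
      spec = rng.random((32, 400)) + np.linspace(0, 2, 400)  # slowly rising level
      taus = np.linspace(0.1, 0.4, 32)                       # freq-dependent, seconds
      adapted = ic_adaptation(spec, taus)
      # The slow level rise is largely removed from the adapted output.
      print(adapted[:, :50].mean(), adapted[:, -50:].mean())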

  33. Auditory-Cortex Short-Term Plasticity Induced by Selective Attention

    PubMed Central

    Jääskeläinen, Iiro P.; Ahveninen, Jyrki

    2014-01-01

    The ability to concentrate on relevant sounds in the acoustic environment is crucial for everyday function and communication. Converging lines of evidence suggest that transient functional changes in auditory-cortex neurons, “short-term plasticity”, might explain this fundamental function. Under conditions of strongly focused attention, enhanced processing of attended sounds can take place at very early latencies (~50 ms from sound onset) in primary auditory cortex and possibly even at earlier latencies in subcortical structures. More robust selective-attention short-term plasticity is manifested as modulation of responses peaking at ~100 ms from sound onset in functionally specialized nonprimary auditory-cortical areas by way of stimulus-specific reshaping of neuronal receptive fields that supports filtering of selectively attended sound features from task-irrelevant ones. Such effects have been shown to take hold within seconds of shifting the attentional focus. There are findings suggesting that the reshaping of neuronal receptive fields is even stronger at longer auditory-cortex response latencies (~300 ms from sound onset). These longer-latency short-term plasticity effects seem to build up more gradually, within tens of seconds after shifting the focus of attention. Importantly, some of the auditory-cortical short-term plasticity effects observed during selective attention predict enhancements in behaviorally measured sound discrimination performance. PMID:24551458

  34. High-Risk Infants: Auditory Processing Deficits in Later Childhood.

    ERIC Educational Resources Information Center

    Gilbride, Kathleen E.; And Others

    To determine whether deficits warranting intervention are present in the later functioning of high-risk infants, 22 premature infants who experienced asphyxia or chronic lung disease (CLD) but who had no gross developmental abnormalities were evaluated. Assessments of auditory perception and receptive language ability were made during later…

  35. Order of Stimulus Presentation Influences Children's Acquisition in Receptive Identification Tasks

    ERIC Educational Resources Information Center

    Petursdottir, Anna Ingeborg; Aguilar, Gabriella

    2016-01-01

    Receptive identification is usually taught in matching-to-sample format, which entails the presentation of an auditory sample stimulus and several visual comparison stimuli in each trial. Conflicting recommendations exist regarding the order of stimulus presentation in matching-to-sample trials. The purpose of this study was to compare acquisition…

  36. Rapid tuning shifts in human auditory cortex enhance speech intelligibility

    PubMed Central

    Holdgraf, Christopher R.; de Heer, Wendy; Pasley, Brian; Rieger, Jochem; Crone, Nathan; Lin, Jack J.; Knight, Robert T.; Theunissen, Frédéric E.

    2016-01-01

    Experience shapes our perception of the world on a moment-to-moment basis. This robust perceptual effect of experience parallels a change in the neural representation of stimulus features, though the nature of this representation and its plasticity are not well-understood. Spectrotemporal receptive field (STRF) mapping describes the neural response to acoustic features, and has been used to study contextual effects on auditory receptive fields in animal models. We performed a STRF plasticity analysis on electrophysiological data from recordings obtained directly from the human auditory cortex. Here, we report rapid, automatic plasticity of the spectrotemporal response of recorded neural ensembles, driven by previous experience with acoustic and linguistic information, and with a neurophysiological effect in the sub-second range. This plasticity reflects increased sensitivity to spectrotemporal features, enhancing the extraction of more speech-like features from a degraded stimulus and providing the physiological basis for the observed ‘perceptual enhancement' in understanding speech. PMID:27996965

  17. Auditory Spatial Attention Representations in the Human Cerebral Cortex

    PubMed Central

    Kong, Lingqiang; Michalka, Samantha W.; Rosen, Maya L.; Sheremata, Summer L.; Swisher, Jascha D.; Shinn-Cunningham, Barbara G.; Somers, David C.

    2014-01-01

    Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes. PMID:23180753
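
    The multivoxel pattern analysis that revealed attention-direction information can be sketched as cross-validated classification of voxel patterns from a region of interest. The surrogate data, array shapes, and linear support-vector classifier below are illustrative assumptions, not the study's pipeline.

      import numpy as np
      from sklearn.svm import LinearSVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)

      # Surrogate ROI data: 80 trials x 200 voxels; labels are the attended hemifield.
      labels = np.repeat([0, 1], 40)            # 0 = attend left, 1 = attend right
      patterns = rng.standard_normal((80, 200))
      patterns[labels == 1, :20] += 0.5         # weak spatial information in 20 voxels

      # Cross-validated decoding of the direction of spatial attention; accuracy
      # above chance implies the region carries attention-direction information.
      clf = LinearSVC(C=1.0, dual=False)
      acc = cross_val_score(clf, patterns, labels, cv=8).mean()
      print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")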

  18. Selective Auditory Attention in Adults: Effects of Rhythmic Structure of the Competing Language

    ERIC Educational Resources Information Center

    Reel, Leigh Ann; Hicks, Candace Bourland

    2012-01-01

    Purpose: The authors assessed adult selective auditory attention to determine effects of (a) differences between the vocal/speaking characteristics of different mixed-gender pairs of masking talkers and (b) the rhythmic structure of the language of the competing speech. Method: Reception thresholds for English sentences were measured for 50…

  19. The Use of Picture Prompts and Prompt Delay to Teach Receptive Labeling

    ERIC Educational Resources Information Center

    Vedora, Joseph; Barry, Tiffany

    2016-01-01

    The current study extended research on picture prompts by using them with a progressive prompt delay to teach receptive labeling of pictures to 2 teenagers with autism. The procedure differed from prior research because the auditory stimulus was not presented or was presented only once during the picture-prompt condition. The results indicated…

  20. Adaptive training diminishes distractibility in aging across species.

    PubMed

    Mishra, Jyoti; de Villers-Sidani, Etienne; Merzenich, Michael; Gazzaley, Adam

    2014-12-03

    Aging is associated with deficits in the ability to ignore distractions, which has not yet been remediated by any neurotherapeutic approach. Here, in parallel auditory experiments with older rats and humans, we evaluated a targeted cognitive training approach that adaptively manipulated distractor challenge. Training resulted in enhanced discrimination abilities in the setting of irrelevant information in both species, driven by selectively diminished distraction-related errors. Neural responses to distractors in auditory cortex were selectively reduced in both species, mimicking the behavioral effects. Sensory receptive fields in trained rats exhibited improved spectral and spatial selectivity. Frontal theta measures of top-down engagement with distractors were selectively restrained in trained humans. Finally, training gains generalized to group- and individual-level benefits in aspects of working memory and sustained attention. Thus, we demonstrate converging cross-species evidence for training-induced selective plasticity of distractor processing at multiple neural scales, benefitting distractor suppression and cognitive control. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Active listening: task-dependent plasticity of spectrotemporal receptive fields in primary auditory cortex.

    PubMed

    Fritz, Jonathan; Elhilali, Mounya; Shamma, Shihab

    2005-08-01

    Listening is an active process in which attentive focus on salient acoustic features in auditory tasks can influence the receptive field properties of cortical neurons. Recent studies showing rapid task-related changes in neuronal spectrotemporal receptive fields (STRFs) in primary auditory cortex of the behaving ferret are reviewed in the context of current research on cortical plasticity. Ferrets were trained on spectral tasks, including tone detection and two-tone discrimination, and on temporal tasks, including gap detection and click-rate discrimination. STRF changes could be measured on-line during task performance and occurred within minutes of task onset. During spectral tasks, there were specific spectral changes (enhanced response to the tonal target frequency in tone detection and discrimination, suppressed response to the tonal reference frequency in tone discrimination). In the temporal tasks, by contrast, the STRF changed along the temporal dimension, with a sharpening of temporal dynamics. In ferrets trained on multiple tasks, distinctive and task-specific STRF changes could be observed in the same cortical neurons in successive behavioral sessions. These results suggest that rapid task-related plasticity is an ongoing process that occurs at the network and single-unit level as the animal switches between different tasks, dynamically adapting cortical STRFs in response to changing acoustic demands.
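
    Task-related plasticity of this kind is commonly quantified as a difference map between STRFs measured during behavior and during passive listening. A minimal sketch, assuming two already-estimated STRF arrays and a frequency axis (all names and values are illustrative):

      import numpy as np

      def strf_change(strf_task, strf_passive, freqs, target_freq):
          """Difference map between task and passive STRFs, plus a simple
          facilitation index at the trained target frequency."""
          diff = strf_task - strf_passive
          k = np.argmin(np.abs(freqs - target_freq))  # STRF row nearest the target
          # Positive values at the target row indicate enhanced response, as
          # reported for tone detection; negative values indicate suppression.
          return diff, diff[k].sum()

      freqs = np.logspace(np.log10(250), np.log10(16000), 32)
      rng = np.random.default_rng(2)
      strf_task, strf_passive = rng.standard_normal((2, 32, 40))
      diff, facilitation = strf_change(strf_task, strf_passive, freqs, target_freq=4000.0)
      print(diff.shape, round(float(facilitation), 3))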

  2. Aberrant Lateralization of Brainstem Auditory Evoked Responses by Individuals with Down Syndrome.

    ERIC Educational Resources Information Center

    Miezejeski, Charles M.; And Others

    1994-01-01

    Brainstem auditory evoked response latencies were studied in 80 males (13 with Down's syndrome). Latencies for waves P3 and P5 were shorter for Down's syndrome subjects, who also showed a different pattern of left versus right ear responses. Results suggest decreased lateralization and receptive and expressive language ability among people with…

  3. The Effects of Sound-Field Amplification on Children with Hearing Impairment and Other Diagnoses in Preschool and Primary Classes

    ERIC Educational Resources Information Center

    Furno, Lois Ehrler

    2012-01-01

    Effective learning occurs in auditory environments. Background noise is inherent to classrooms; recommended noise levels are 15 decibels below the level of instruction, a standard that is rarely achieved. Learning is diminished by interference with the auditory reception of information, especially for students who are hard of hearing or who have other diagnoses. Sound-field…

  4. A Further Evaluation of Picture Prompts during Auditory-Visual Conditional Discrimination Training

    ERIC Educational Resources Information Center

    Carp, Charlotte L.; Peterson, Sean P.; Arkel, Amber J.; Petursdottir, Anna I.; Ingvarsson, Einar T.

    2012-01-01

    This study was a systematic replication and extension of Fisher, Kodak, and Moore (2007), in which a picture prompt embedded into a least-to-most prompting sequence facilitated acquisition of auditory-visual conditional discriminations. Participants were 4 children who had been diagnosed with autism; 2 had limited prior receptive skills, and 2 had…

  5. Thresholding of auditory cortical representation by background noise

    PubMed Central

    Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju

    2014-01-01

    It is generally thought that background noise can mask auditory information. However, how noise transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected the receptive field properties of individual neurons. We found that background noise, when above a certain critical/effective level, resulted in an elevation of the intensity threshold for tone-evoked responses. This increase in threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as a whole toward higher intensities along the intensity domain. This preserved the preferred characteristic frequency (CF) and the overall shape of the TRF, but reduced the responsive frequency range and enhanced frequency selectivity at a given stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shift along the intensity domain, which is equivalent to reducing stimulus intensity. PMID:25426029
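
    The reported level dependence amounts to a simple piecewise-linear rule for the effective intensity threshold: flat below the critical noise level, then rising linearly (a slope of ~1 gives a pure upward translation of the TRF). A sketch under assumed parameter values:

      import numpy as np

      def tone_threshold(noise_db, quiet_threshold_db=20.0,
                         critical_noise_db=35.0, slope=1.0):
          """Intensity threshold for tone-evoked responses as a function of the
          continuous background-noise level: unchanged below the critical level,
          then rising linearly (slope ~1 gives a pure upward TRF translation)."""
          shift = np.maximum(0.0, noise_db - critical_noise_db) * slope
          return quiet_threshold_db + shift

      for noise in (20, 35, 50, 65):
          print(noise, "dB noise ->", tone_threshold(noise), "dB threshold")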

  6. Using the structure of natural scenes and sounds to predict neural response properties in the brain

    NASA Astrophysics Data System (ADS)

    Deweese, Michael

    2014-03-01

    The natural scenes and sounds we encounter in the world are highly structured. The fact that animals and humans are so efficient at processing these sensory signals compared with the latest algorithms running on the fastest modern computers suggests that our brains can exploit this structure. We have developed a sparse mathematical representation of speech that minimizes the number of active model neurons needed to represent typical speech sounds. The model learns several well-known acoustic features of speech such as harmonic stacks, formants, onsets and terminations, but we also find more exotic structures in the spectrogram representation of sound, such as localized checkerboard patterns and frequency-modulated excitatory subregions flanked by suppressive sidebands. Moreover, several of these novel features resemble neuronal receptive fields reported in the inferior colliculus (IC), as well as the auditory thalamus (MGBv) and primary auditory cortex (A1), and our model neurons exhibit the same tradeoff in spectrotemporal resolution as has been observed in IC. To our knowledge, this is the first demonstration that receptive fields of neurons in the ascending mammalian auditory pathway beyond the auditory nerve can be predicted based on coding principles and the statistical properties of recorded sounds. We have also developed a biologically inspired neural network model of primary visual cortex (V1) that can learn a sparse representation of natural scenes using spiking neurons and strictly local plasticity rules. The representation learned by our model is in good agreement with measured receptive fields in V1, demonstrating that sparse sensory coding can be achieved in a realistic biological setting.
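
    Sparse representations of this general kind can be learned with standard dictionary-learning tools. The toy sketch below learns L1-sparse codes for random "spectrogram patches"; the patch size, sparsity penalty, and data are placeholders, and the characteristic features described above would only emerge from real speech spectrograms.

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning

      rng = np.random.default_rng(3)

      # Surrogate "spectrogram patches": 1000 patches of 16 freq x 16 time bins.
      patches = rng.standard_normal((1000, 16 * 16))
      patches -= patches.mean(axis=1, keepdims=True)

      # L1-penalized sparse coding: minimize the number of active model neurons
      # per patch; the learned atoms play the role of receptive fields.
      model = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, batch_size=128,
                                          transform_algorithm="lasso_lars",
                                          transform_alpha=1.0, random_state=0)
      codes = model.fit_transform(patches)
      print("mean active neurons per patch:", float((codes != 0).sum(axis=1).mean()))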

  7. Auditory spatial processing in Alzheimer’s disease

    PubMed Central

    Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer’s disease syndromic spectrum. PMID:25468732

  8. Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study

    PubMed Central

    Ursino, Mauro; Crisafulli, Andrea; di Pellegrino, Giuseppe; Magosso, Elisa; Cuppini, Cristiano

    2017-01-01

    The brain integrates information from different sensory modalities to generate a coherent and accurate percept of external events. Several experimental studies suggest that this integration follows the principle of Bayesian estimation. However, the neural mechanisms responsible for this behavior, and its development in a multisensory environment, are still insufficiently understood. We recently presented a neural network model of audio-visual integration (Neural Computation, 2017) to investigate how a Bayesian estimator can spontaneously develop from the statistics of external stimuli. The model assumes two topologically organized unimodal areas (auditory and visual). Neurons in each area receive an input from the external environment, computed as the inner product of the sensory-specific stimulus and the receptive-field synapses, and a cross-modal input from neurons of the other modality. Based on sensory experience, synapses were trained via Hebbian potentiation and a decay term. The aim of this work is to improve the previous model by including a more realistic distribution of visual stimuli: visual stimuli have higher spatial accuracy at the central azimuthal coordinate and lower accuracy at the periphery. Moreover, their prior probability is higher at the center and decreases toward the periphery. Simulations show that, after training, the receptive fields of visual and auditory neurons shrink to reproduce the accuracy of the input (both at the center and at the periphery in the visual case), thus realizing the likelihood estimate of unimodal spatial position. Moreover, the preferred positions of visual neurons contract toward the center, thus encoding the prior probability of the visual input. Finally, a prior probability of the co-occurrence of audio-visual stimuli is encoded in the cross-modal synapses. The model is able to simulate the main properties of a Bayesian estimator and to reproduce behavioral data in all conditions examined. In particular, in unisensory conditions the visual estimates exhibit a bias toward the fovea, which increases with the level of noise. In cross-modal conditions, the SD of the estimates decreases when using congruent audio-visual stimuli, and a ventriloquism effect becomes evident in the case of spatially disparate stimuli. Moreover, the ventriloquism decreases with eccentricity. PMID:29046631
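
    The learning rule described (inputs computed as an inner product of stimulus and receptive-field synapses, trained by Hebbian potentiation with a decay term) can be sketched in a few lines. The map sizes, learning rates, activation rule, and Gaussian stimulus statistics below are illustrative assumptions, not the published model's parameters.

      import numpy as np

      rng = np.random.default_rng(4)
      n_space, n_neurons = 180, 180              # azimuthal positions x map neurons
      W = 0.1 * rng.random((n_neurons, n_space)) # receptive-field synapses
      eta, decay = 0.05, 0.02
      positions = np.arange(n_space)
      center = n_space / 2

      for _ in range(5000):
          # Visual-like stimulus: more probable and spatially sharper near the center.
          pos = int(np.clip(rng.normal(center, 25), 0, n_space - 1))
          width = 3.0 + 0.1 * abs(pos - center)  # accuracy degrades toward the periphery
          s = np.exp(-0.5 * ((positions - pos) / width) ** 2)
          r = W @ s                              # input as inner product of stimulus and synapses
          r = np.where(r > 0.8 * r.max(), r / r.max(), 0.0)  # crude competitive activation
          # Hebbian potentiation with an activity-gated decay term
          W += eta * np.outer(r, s) - decay * r[:, None] * W
          W = np.clip(W, 0.0, 1.0)

      # Each row of W now approximates a receptive field whose width tracks the
      # accuracy of the inputs it was exposed to (narrower near the center).
      print(W.shape, float(W.max()))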

  9. Comparing the effect of auditory-only and auditory-visual modes in two groups of Persian children using cochlear implants: a randomized clinical trial.

    PubMed

    Oryadi Zanjani, Mohammad Majid; Hasanzadeh, Saeid; Rahgozar, Mehdi; Shemshadi, Hashem; Purdy, Suzanne C; Mahmudi Bakhtiari, Behrooz; Vahab, Maryam

    2013-09-01

    Since the introduction of cochlear implantation, researchers have considered children's communication and educational success before and after implantation. Therefore, the present study aimed to compare auditory, speech, and language development scores following one-sided cochlear implantation between two groups of prelingual deaf children educated through either auditory-only (unisensory) or auditory-visual (bisensory) modes. A randomized controlled trial with a single-factor experimental design was used. The study was conducted in the Instruction and Rehabilitation Private Centre of Hearing Impaired Children and their Family, called Soroosh, in Shiraz, Iran. We assessed 30 Persian deaf children for eligibility, and 22 children qualified to enter the study. They were aged between 27 and 66 months and had been implanted between the ages of 15 and 63 months. The sample of 22 children was randomly assigned to two groups, auditory-only mode and auditory-visual mode; 11 participants in each group were analyzed. In both groups, the development of auditory perception, receptive language, expressive language, speech, and speech intelligibility was assessed pre- and post-intervention by means of instruments validated and standardized in the Persian population. No significant differences were found between the two groups. The children with cochlear implants who had been instructed using either the auditory-only or the auditory-visual mode acquired auditory, receptive language, expressive language, and speech skills at the same rate. Overall, spoken language developed significantly in both the unisensory and the bisensory group; thus, both modes were effective. Therefore, when teaching hearing, language, and speech to children with cochlear implants who are exposed to spoken language at home and at school, in communication with their parents and educators before and after implantation, it is not essential to limit access to the visual modality and rely solely on the auditory modality. The trial has been registered at IRCT.ir, number IRCT201109267637N1. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  10. Individual Differences in Auditory Sentence Comprehension in Children: An Exploratory Event-Related Functional Magnetic Resonance Imaging Investigation

    ERIC Educational Resources Information Center

    Yeatman, Jason D.; Ben-Shachar, Michal; Glover, Gary H.; Feldman, Heidi M.

    2010-01-01

    The purpose of this study was to explore changes in activation of the cortical network that serves auditory sentence comprehension in children in response to increasing demands of complex sentences. A further goal is to study how individual differences in children's receptive language abilities are associated with such changes in cortical…

  11. Auditory-Visual Speech Perception in Three- and Four-Year-Olds and Its Relationship to Perceptual Attunement and Receptive Vocabulary

    ERIC Educational Resources Information Center

    Erdener, Dogu; Burnham, Denis

    2018-01-01

    Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception--lip-reading and visual…

  12. Auditory-Verbal Therapy as an Intervention Approach for Children Who Are Deaf: A Review of the Evidence. EBP Briefs. Volume 11, Issue 6

    ERIC Educational Resources Information Center

    Bowers, Lisa M.

    2017-01-01

    Clinical Question: Would young deaf children who participate in Auditory-Verbal Therapy (AVT) provided by a Listening and Spoken Language Specialist (LSLS) certified in AVT demonstrate gains in receptive and expressive language skills similar to their typical hearing peers? Method: Systematic Review. Study Sources: EBSCOhost databases: Academic…

  13. Spectral and Temporal Processing in Rat Posterior Auditory Cortex

    PubMed Central

    Pandya, Pritesh K.; Rathbun, Daniel L.; Moucha, Raluca; Engineer, Navzer D.; Kilgard, Michael P.

    2009-01-01

    The rat auditory cortex is divided anatomically into several areas, but little is known about the functional differences in information processing between these areas. To determine the filter properties of rat posterior auditory field (PAF) neurons, we compared neurophysiological responses to simple tones, frequency modulated (FM) sweeps, and amplitude modulated noise and tones with responses of primary auditory cortex (A1) neurons. PAF neurons have excitatory receptive fields that are on average 65% broader than those of A1 neurons. The broader receptive fields of PAF neurons result in responses to narrowband and broadband inputs that are stronger than those of A1. In contrast to A1, we found little evidence for an orderly topographic gradient in PAF based on frequency. PAF neurons exhibit latencies that are twice as long as those of A1. In response to modulated tones and noise, PAF neurons adapt to repeated stimuli at significantly slower rates. Unlike A1, neurons in PAF rarely exhibit facilitation to rapidly repeated sounds. Neurons in PAF do not exhibit strong selectivity for the rate or direction of narrowband one-octave FM sweeps. These results indicate that PAF, like nonprimary visual fields, processes sensory information on larger spectral and longer temporal scales than primary cortex. PMID:17615251

  14. Hearing in Insects.

    PubMed

    Göpfert, Martin C; Hennig, R Matthias

    2016-01-01

    Insect hearing has independently evolved multiple times in the context of intraspecific communication and predator detection by transforming proprioceptive organs into ears. Research over the past decade, ranging from the biophysics of sound reception to molecular aspects of auditory transduction to the neuronal mechanisms of auditory signal processing, has greatly advanced our understanding of how insects hear. Apart from evolutionary innovations that seem unique to insect hearing, parallels between insect and vertebrate auditory systems have been uncovered, and the auditory sensory cells of insects and vertebrates turned out to be evolutionarily related. This review summarizes our current understanding of insect hearing. It also discusses recent advances in insect auditory research, which have put forward insect auditory systems for studying biological aspects that extend beyond hearing, such as cilium function, neuronal signal computation, and sensory system evolution.

  15. Spatial selectivity and binaural responses in the inferior colliculus of the great horned owl.

    PubMed

    Volman, S F; Konishi, M

    1989-09-01

    In this study we have investigated the processing of auditory cues for sound localization in the great horned owl (Bubo virginianus). Previous studies have shown that the barn owl, whose ears are asymmetrically oriented in the vertical plane, has a 2-dimensional, topographic representation of auditory space in the external division of the inferior colliculus (ICx). As in the barn owl, the great horned owl's ICx is anatomically distinct and projects to the optic tectum. Neurons in ICx respond over only a small range of azimuths (mean = 32 degrees), and azimuth is topographically mapped. In contrast to the barn owl, the great horned owl has bilaterally symmetrical ears and its receptive fields are not restricted in elevation. The binaural cues available for sound localization were measured both with cochlear microphonic recordings and with a microphone attached to a probe tube in the auditory canal. Interaural time disparity (ITD) varied monotonically with azimuth. Interaural intensity differences (IID) also changed with azimuth, but the largest IIDs were less than 15 dB, and the variation was not monotonic. Neither ITD nor IID varied systematically with changes in the vertical position of a sound source. We used dichotic stimulation to determine the sensitivity of ICx neurons to these binaural cues. Best ITD of ICx units was topographically mapped and strongly correlated with receptive-field azimuth. The width of ITD tuning curves, measured at 50% of the maximum response, averaged 72 microseconds. All ICx neurons responded only to binaural stimulation and had nonmonotonic IID tuning curves. Best IID was weakly, but significantly, correlated with best ITD (r = 0.39, p < 0.05). The IID tuning curves, however, were broad (mean 50% width = 24 dB), and 67% of the units had best IIDs within 5 dB of 0 dB IID. ITD tuning was sensitive to variations in IID in the direction opposite to that expected for time-intensity trading, but the magnitude of this effect was only 1.5 microseconds/dB IID. We conclude that, in the great horned owl, the spatial selectivity of ICx neurons arises primarily from their ITD tuning. Except for the absence of elevation selectivity and the narrow range of best IIDs, ICx in the great horned owl appears to be organized much the same as in the barn owl.
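
    The interaural time disparities underlying this azimuthal tuning can be estimated from binaural signals by cross-correlation, a standard model of the coincidence detection thought to produce ITD tuning. A sketch with a synthetic 100-microsecond ITD (the sample rate and lag are made up for illustration):

      import numpy as np

      fs = 100_000                          # sample rate, Hz
      itd_true = 100e-6                     # right ear leads by 100 microseconds
      rng = np.random.default_rng(5)

      s = rng.standard_normal(fs // 10)     # 100 ms broadband burst
      shift = int(round(itd_true * fs))     # lag in samples
      right = s[shift:]                     # right ear receives the sound first
      left = s[:len(right)]
      n = len(left)

      # Cross-correlate the two ears and take the lag of the peak.
      lags = np.arange(-200, 201)
      xc = [np.dot(left[200 + k:n - 200 + k], right[200:n - 200]) for k in lags]
      itd_est = lags[int(np.argmax(xc))] / fs
      print(f"estimated ITD: {itd_est * 1e6:.0f} microseconds")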

  16. Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability

    PubMed Central

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2014-01-01

    Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403

  17. Single-Sided Deafness: Impact of Cochlear Implantation on Speech Perception in Complex Noise and on Auditory Localization Accuracy.

    PubMed

    Döge, Julia; Baumann, Uwe; Weissgerber, Tobias; Rader, Tobias

    2017-12-01

    To assess auditory localization accuracy and speech reception threshold (SRT) in complex noise conditions in adult patients with acquired single-sided deafness, after intervention with a cochlear implant (CI) in the deaf ear. Nonrandomized, open, prospective patient series. Tertiary referral university hospital. Eleven patients with late-onset single-sided deafness (SSD) and normal hearing in the unaffected ear, who received a CI. All patients were experienced CI users. Unilateral cochlear implantation. Speech perception was tested in a complex multitalker equivalent noise field consisting of multiple sound sources. Speech reception thresholds in noise were determined in aided (with CI) and unaided conditions. Localization accuracy was assessed in complete darkness. Acoustic stimuli were radiated by multiple loudspeakers distributed in the frontal horizontal plane between -60 and +60 degrees. In the aided condition, results show slightly improved speech reception scores compared with the unaided condition in most of the patients. For 8 of the 11 subjects, SRT improved by 0.37 to 1.70 dB. Three of the 11 subjects showed deteriorations of 1.22 to 3.24 dB SRT. Median localization error decreased significantly, by 12.9 degrees, compared with the unaided condition. CI in single-sided deafness is an effective treatment to improve auditory localization accuracy. Speech reception in complex noise conditions is improved to a lesser extent, in 73% of the participating CI SSD patients. However, the absence of true binaural interaction effects (summation, squelch) impedes further improvements. The development of speech processing strategies that respect binaural interaction seems to be mandatory to advance speech perception in demanding listening situations in SSD patients.

  18. Assessment of short-term memory in Arabic speaking children with specific language impairment.

    PubMed

    Kaddah, F A; Shoeib, R M; Mahmoud, H E

    2010-12-15

    Children with Specific Language Impairment (SLI) may have some kind of memory disorder that could increase their linguistic impairment. This study assessed short-term memory skills in Arabic-speaking children with either Expressive Language Impairment (ELI) or Receptive/Expressive Language Impairment (R/ELI), in comparison to controls, in order to estimate the nature and extent of any specific deficits in these children that could explain the different prognostic results of language intervention. Eighteen children were included in each group. Receptive, expressive, and total language quotients were calculated using the Arabic language test. Auditory and visual short-term memory were assessed using the Arabic version of the Illinois Test of Psycholinguistic Abilities. Both SLI groups showed significantly poorer linguistic abilities and poorer auditory and visual short-term memory than normal children. The R/ELI group performed worse than the ELI group on all measured parameters. A strong association was found between most tasks of auditory and visual short-term memory and linguistic abilities. The results of this study highlight a specific degree of deficit in auditory and visual short-term memory in both SLI groups. These deficits were more prominent in the R/ELI group. Moreover, the strong association between the different auditory and visual short-term memory tasks and language abilities in children with SLI must be taken into account when planning an intervention program for these children.

  19. Attention operates uniformly throughout the classical receptive field and the surround.

    PubMed

    Verhoef, Bram-Ernst; Maunsell, John Hr

    2016-08-22

    Shifting attention among visual stimuli at different locations modulates neuronal responses in heterogeneous ways, depending on where those stimuli lie within the receptive fields of neurons. Yet how attention interacts with the receptive-field structure of cortical neurons remains unclear. We measured neuronal responses in area V4 while monkeys shifted their attention among stimuli placed in different locations within and around neuronal receptive fields. We found that attention interacts uniformly with the spatially-varying excitation and suppression associated with the receptive field. This interaction explained the large variability in attention modulation across neurons, and a non-additive relationship among stimulus selectivity, stimulus-induced suppression and attention modulation that has not been previously described. A spatially-tuned normalization model precisely accounted for all observed attention modulations and for the spatial summation properties of neurons. These results provide a unified account of spatial summation and attention-related modulation across both the classical receptive field and the surround.
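
    The spatially-tuned normalization model referenced here has a compact general form, in which an attention field multiplies the stimulus drive before divisive normalization by a broader suppressive pool (the Reynolds-Heeger formulation). The sketch below is that general form under assumed Gaussian pooling widths, not the paper's fitted model:

      import numpy as np

      def normalization_response(stim, attn, exc_width=5.0, supp_width=20.0, sigma=1.0):
          """Attention-scaled divisive normalization over space. stim and attn are
          arrays over spatial positions (stimulus drive and attention gain)."""
          x = np.arange(len(stim))
          def pool(signal, width):
              kernel = np.exp(-0.5 * ((x - x[:, None]) / width) ** 2)
              return kernel @ signal
          drive = pool(attn * stim, exc_width)         # narrow excitatory pooling
          suppression = pool(attn * stim, supp_width)  # broader suppressive pooling
          return drive / (suppression + sigma)

      space = np.zeros(100)
      space[30] = space[70] = 1.0                      # two stimuli
      attend_30 = np.ones(100); attend_30[25:36] = 4.0
      attend_70 = np.ones(100); attend_70[65:76] = 4.0

      # The modulation at position 30 depends on how that location contributes to
      # excitation versus suppression, reproducing heterogeneous attention effects.
      r_in = normalization_response(space, attend_30)[30]
      r_out = normalization_response(space, attend_70)[30]
      print(f"attend in-RF: {r_in:.3f}   attend away: {r_out:.3f}")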

  20. Auditory skills, language development, and adaptive behavior of children with cochlear implants and additional disabilities

    PubMed Central

    Beer, Jessica; Harris, Michael S.; Kronenberger, William G.; Holt, Rachael Frush; Pisoni, David B.

    2012-01-01

    Objective: The objective of this study was to evaluate the development of functional auditory skills, language, and adaptive behavior in deaf children with cochlear implants (CI) who also have additional disabilities (AD). Design: A two-group, pre-test versus post-test design was used. Study sample: Comparisons were made between 23 children with CIs and ADs, and an age-matched comparison group of 23 children with CIs without ADs (No-AD). Assessments were obtained pre-CI and within 12 months post-CI. Results: All but two deaf children with ADs improved in auditory skills on the IT-MAIS. Most deaf children in the AD group made progress in receptive but not expressive language on the Preschool Language Scale, but their language quotients were lower than those of the No-AD group. Five of eight children with ADs made progress in daily living skills and socialization skills; two made progress in motor skills. Children with ADs who did not make progress in language did show progress in adaptive behavior. Conclusions: Children with deafness and ADs made progress in functional auditory skills, receptive language, and adaptive behavior. Expanded assessment that includes adaptive functioning and multi-center collaboration is recommended to best determine benefits of implantation in areas of expected growth in this clinical population. PMID:22509948

  1. Investigation of the neurological correlates of information reception

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Animals trained to respond to a given pattern of electrical stimuli applied to pathways or centers of the auditory nervous system respond also to certain patterns of acoustic stimuli without additional training. Likewise, only certain electrical stimuli elicit responses after training to a given acoustic signal. In most instances, if a response has been learned to a given electrical stimulus applied to one center of the auditory nervous system, the same stimulus applied to another auditory center at either a higher or lower level will also elicit the response. This kind of transfer of response does not take place when a stimulus is applied through electrodes implanted in neural tissue outside of the auditory system.

  2. Auditory-vocal mirroring in songbirds.

    PubMed

    Mooney, Richard

    2014-01-01

    Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory-vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory-vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role in song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory-vocal mirroring in the songbird's brain.

  3. Receptive vocabulary development in deaf children with cochlear implants: achievement in an intensive auditory-oral educational setting.

    PubMed

    Hayes, Heather; Geers, Ann E; Treiman, Rebecca; Moog, Jean Sachar

    2009-02-01

    Deaf children with cochlear implants are at a disadvantage in learning vocabulary when compared with hearing peers. Past research has reported that children with implants have lower receptive vocabulary scores and less growth over time than hearing children. Research findings are mixed as to the effects of age at implantation on vocabulary skills and development. One goal of the current study is to determine how children with cochlear implants educated in an auditory-oral environment compared with their hearing peers on a receptive vocabulary measure in overall achievement and growth rates. This study will also investigate the effects of age at implant on vocabulary abilities and growth rates. We expect that the children with implants will have smaller vocabularies than their hearing peers but will achieve similar rates of growth as their implant experience increases. We also expect that children who receive their implants at young ages will have better overall vocabulary and higher growth rates than older-at-implant children. Repeated assessments using the Peabody Picture Vocabulary Test were given to 65 deaf children with cochlear implants who used oral communication, who were implanted under the age of 5 yr, and who attended an intensive auditory-oral education program. Multilevel modeling was used to describe overall abilities and rates of receptive vocabulary growth over time. On average, the deaf children with cochlear implants had lower vocabulary scores than their hearing peers. However, the deaf children demonstrated substantial vocabulary growth, making more than 1 yr's worth of progress in a year. This finding contrasts with those of previous studies of children with implants, which found lower growth rates. A negative quadratic trend indicated that growth decelerated with time. Age at implantation significantly affected linear and quadratic growth. Younger-at-implant children had steeper growth rates but more tapering off with time than children implanted later in life. Growth curves indicate that children who are implanted by the age of 2 yr can achieve receptive vocabulary skills within the average range for hearing children.
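
    Multilevel growth modeling of this kind can be sketched with standard mixed-effects tools: a random intercept and random linear slope per child, with a fixed quadratic term capturing the deceleration of growth. The simulated data and variable names below are illustrative, not the study's dataset.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(6)

      # Simulated longitudinal data: 65 children with repeated vocabulary scores.
      rows = []
      for child in range(65):
          intercept = rng.normal(80, 8)    # child-specific starting level
          slope = rng.normal(6, 1.5)       # child-specific growth rate
          for years in range(5):
              score = intercept + slope * years - 0.4 * years ** 2 + rng.normal(0, 3)
              rows.append({"child": child, "years": years, "score": score})
      df = pd.DataFrame(rows)
      df["years2"] = df["years"] ** 2

      # Random intercept and random linear slope per child; the fixed quadratic
      # term captures the deceleration of growth over time.
      model = smf.mixedlm("score ~ years + years2", df, groups=df["child"],
                          re_formula="~years")
      fit = model.fit()
      print(fit.params[["Intercept", "years", "years2"]])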

  5. Auditory and visual spatial impression: Recent studies of three auditoria

    NASA Astrophysics Data System (ADS)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

    Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.

  6. Human engineer's guide to auditory displays. Volume 2: Elements of signal reception and resolution affecting auditory displays

    NASA Astrophysics Data System (ADS)

    Mulligan, B. E.; Goodman, L. S.; McBride, D. K.; Mitchell, T. M.; Crosby, T. N.

    1984-08-01

    This work reviews the areas of monaural and binaural signal detection, auditory discrimination and localization, and reaction times to acoustic signals. The review was written from the perspective of human engineering and focuses primarily on auditory processing of information contained in acoustic signals. The impetus for this effort was to establish a database to be utilized in the design and evaluation of acoustic displays. Appendix 1 also contains citations of the scientific literature on which the answers to each question were based. There are nineteen questions and answers, and more than two hundred citations in the list of references given in Appendix 2. This is one of two related works, the other of which reviewed the literature in the areas of auditory attention, recognition memory, and auditory perception of patterns, pitch, and loudness.

  7. Auditory spatial processing in the human cortex.

    PubMed

    Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C

    2012-12-01

    The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.
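
    The hemifield code can be made concrete as two broadly tuned opponent channels whose relative rates carry location. In the toy sketch below, each population is a sigmoid of azimuth and the log rate ratio inverts the code without any topographic map; the tuning slope is an illustrative assumption.

      import numpy as np

      def hemifield_rates(azimuth_deg, slope=0.05):
          """Two opponent populations broadly tuned to the right and left
          hemifields; each is a sigmoid of azimuth (-90..+90 degrees)."""
          right = 1.0 / (1.0 + np.exp(-slope * azimuth_deg))
          left = 1.0 / (1.0 + np.exp(slope * azimuth_deg))
          return left, right

      def decode_azimuth(left, right, slope=0.05):
          # Invert the rate code: the log-odds of the right channel recovers
          # azimuth from the two population rates alone.
          return np.log(right / left) / slope

      for az in (-60, -20, 0, 45):
          l, r = hemifield_rates(az)
          print(az, "->", round(float(decode_azimuth(l, r)), 1))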

  8. The Essential Complexity of Auditory Receptive Fields

    PubMed Central

    Thorson, Ivar L.; Liénard, Jean; David, Stephen V.

    2015-01-01

    Encoding properties of sensory neurons are commonly modeled using linear finite impulse response (FIR) filters. For the auditory system, the FIR filter is instantiated in the spectro-temporal receptive field (STRF), often in the framework of the generalized linear model. Despite widespread use of the FIR STRF, numerous formulations for linear filters are possible that require many fewer parameters, potentially permitting more efficient and accurate model estimates. To explore these alternative STRF architectures, we recorded single-unit neural activity from auditory cortex of awake ferrets during presentation of natural sound stimuli. We compared performance of > 1000 linear STRF architectures, evaluating their ability to predict neural responses to a novel natural stimulus. Many were able to outperform the FIR filter. Two basic constraints on the architecture lead to the improved performance: (1) factorization of the STRF matrix into a small number of spectral and temporal filters and (2) low-dimensional parameterization of the factorized filters. The best parameterized model was able to outperform the full FIR filter in both primary and secondary auditory cortex, despite requiring fewer than 30 parameters, about 10% of the number required by the FIR filter. After accounting for noise from finite data sampling, these STRFs were able to explain an average of 40% of A1 response variance. The simpler models permitted more straightforward interpretation of sensory tuning properties. They also showed greater benefit from incorporating nonlinear terms, such as short term plasticity, that provide theoretical advances over the linear model. Architectures that minimize parameter count while maintaining maximum predictive power provide insight into the essential degrees of freedom governing auditory cortical function. They also maximize statistical power available for characterizing additional nonlinear properties that limit current auditory models. PMID:26683490
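
    The first constraint, factorizing the STRF into a small number of spectral and temporal filters, is equivalent to a low-rank approximation of the STRF matrix. A sketch using a truncated SVD on a surrogate STRF, with the resulting parameter-count comparison (the sizes and rank are illustrative):

      import numpy as np

      rng = np.random.default_rng(7)
      n_freqs, n_lags = 30, 25

      # A smooth surrogate STRF; the full FIR filter has n_freqs * n_lags parameters.
      f = np.exp(-0.5 * ((np.arange(n_freqs) - 12) / 3.0) ** 2)
      t = np.sin(np.arange(n_lags) / 4.0) * np.exp(-np.arange(n_lags) / 8.0)
      strf = np.outer(f, t) + 0.05 * rng.standard_normal((n_freqs, n_lags))

      # Rank-2 factorization: a small number of spectral x temporal filter pairs.
      U, s, Vt = np.linalg.svd(strf, full_matrices=False)
      rank = 2
      approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]

      full_params = n_freqs * n_lags
      fact_params = rank * (n_freqs + n_lags)
      err = np.linalg.norm(strf - approx) / np.linalg.norm(strf)
      print(f"params: {full_params} (FIR) vs {fact_params} (factorized), "
            f"relative error {err:.2f}")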

  9. [Fragile X syndrome with Dandy-Walker variant: a clinical study of oral and written communicative manifestations].

    PubMed

    Lamônica, Dionísia Aparecida Cusin; Ferraz, Plínio Marcos Duarte Pinto; Ferreira, Amanda Tragueta; Prado, Lívia Maria do; Abramides, Dagma Venturini Marquez; Gejão, Mariana Germano

    2011-01-01

    Fragile X syndrome is the most frequent cause of inherited intellectual disability. The Dandy-Walker variant is a specific constellation of neuroradiological findings. The present study reports oral and written communication findings in a 15-year-old boy with a clinical and molecular diagnosis of Fragile X syndrome and neuroimaging findings consistent with the Dandy-Walker variant. The speech-language pathology and audiology evaluation was carried out using the Communicative Behavior Observation, the phonology assessment of the ABFW Child Language Test, the Phonological Abilities Profile, the Test of School Performance, and the Illinois Test of Psycholinguistic Abilities. Stomatognathic system and hearing assessments were also performed. We observed phonological, semantic, pragmatic, and morphosyntactic deficits in oral language; deficits in psycholinguistic abilities (auditory reception, verbal expression, combination of sounds, auditory and visual sequential memory, auditory closure, auditory and visual association); and morphological and functional alterations of the stomatognathic system. In reading, difficulties in decoding graphical symbols were observed. In writing, the subject presented omissions, agglutinations, and multiple representations with a predominant use of vowels, along with difficulties in visuo-spatial organization. In mathematics, despite recognizing numerals, the participant could not perform arithmetic operations. No alterations were observed in the peripheral hearing evaluation. The constellation of behavioral, cognitive, linguistic, and perceptual symptoms described for Fragile X syndrome, together with the structural central nervous system alterations seen in the Dandy-Walker variant, markedly interfered with the development of communicative abilities, with learning to read and write, and with the individual's social integration.

  10. Nonvisual influences on visual-information processing in the superior colliculus.

    PubMed

    Stein, B E; Jiang, W; Wallace, M T; Stanford, T R

    2001-01-01

    Although visually responsive neurons predominate in the deep layers of the superior colliculus (SC), the majority of them also receive sensory inputs from nonvisual sources (i.e. auditory and/or somatosensory). Most of these 'multisensory' neurons are able to synthesize their cross-modal inputs and, as a consequence, their responses to visual stimuli can be profoundly enhanced or depressed in the presence of a nonvisual cue. Whether response enhancement or response depression is produced by this multisensory interaction is predictable based on several factors. These include: the organization of a neuron's visual and nonvisual receptive fields; the relative spatial relationships of the different stimuli (to their respective receptive fields and to one another); and whether or not the neuron is innervated by a select population of cortical neurons. The response enhancement or depression of SC neurons via multisensory integration has significant survival value via its profound impact on overt attentive/orientation behaviors. Nevertheless, these multisensory processes are not present at birth, and require an extensive period of postnatal maturation. It seems likely that the sensory experiences obtained during this period play an important role in crafting the processes underlying these multisensory interactions.

  11. Population receptive field (pRF) measurements of chromatic responses in human visual cortex using fMRI.

    PubMed

    Welbourne, Lauren E; Morland, Antony B; Wade, Alex R

    2018-02-15

    The spatial sensitivity of the human visual system depends on stimulus color: achromatic gratings can be resolved at relatively high spatial frequencies while sensitivity to isoluminant color contrast tends to be more low-pass. Models of early spatial vision often assume that the receptive field size of pattern-sensitive neurons is correlated with their spatial frequency sensitivity - larger receptive fields are typically associated with lower optimal spatial frequency. A strong prediction of this model is that neurons coding isoluminant chromatic patterns should have, on average, a larger receptive field size than neurons sensitive to achromatic patterns. Here, we test this assumption using functional magnetic resonance imaging (fMRI). We show that while spatial frequency sensitivity depends on chromaticity in the manner predicted by behavioral measurements, population receptive field (pRF) size measurements show no such dependency. At any given eccentricity, the mean pRF size for neuronal populations driven by luminance, opponent red/green and S-cone isolating contrast, are identical. Changes in pRF size (for example, an increase with eccentricity and visual area hierarchy) are also identical across the three chromatic conditions. These results suggest that fMRI measurements of receptive field size and spatial resolution can be decoupled under some circumstances - potentially reflecting a fundamental dissociation between these parameters at the level of neuronal populations. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
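
    A pRF estimate of the kind used here models each voxel as a 2-D Gaussian over the visual field, predicts the fMRI time course from the stimulus aperture, and picks the best-fitting center and size. The compressed sketch below omits HRF convolution and uses a coarse grid search; the stimulus, field size, and candidate grids are illustrative assumptions.

      import numpy as np

      def prf_prediction(stim, x0, y0, size, grid):
          """Predicted response: overlap of a 2-D Gaussian pRF with the binary
          stimulus aperture at each time point. stim: (n_t, n_y, n_x)."""
          ys, xs = grid
          rf = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * size ** 2))
          return stim.reshape(len(stim), -1) @ rf.ravel()

      def fit_prf(stim, bold, grid, candidates):
          """Coarse grid search over (x0, y0, size): keep the parameters whose
          prediction correlates best with the measured time course."""
          best, best_r = None, -np.inf
          for x0, y0, size in candidates:
              r = np.corrcoef(prf_prediction(stim, x0, y0, size, grid), bold)[0, 1]
              if r > best_r:
                  best, best_r = (x0, y0, size), r
          return best, best_r

      # Toy example: a vertical bar sweeping across a 20 x 20 degree field.
      xs, ys = np.meshgrid(np.linspace(-10, 10, 40), np.linspace(-10, 10, 40))
      stim = np.zeros((40, 40, 40))
      for t in range(40):
          stim[t, :, t] = 1.0
      bold = prf_prediction(stim, 3.0, 0.0, 2.0, (ys, xs))  # voxel with known pRF
      cands = [(x, 0.0, s) for x in np.linspace(-8, 8, 17) for s in (1.0, 2.0, 4.0)]
      print(fit_prf(stim, bold, (ys, xs), cands))  # recovers (3.0, 0.0, 2.0)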

  12. Dynamic plasticity in coupled avian midbrain maps

    NASA Astrophysics Data System (ADS)

    Atwal, Gurinder Singh

    2004-12-01

    Internal mapping of the external environment is carried out using the receptive fields of topographic neurons in the brain, and in a normal barn owl the aural and visual subcortical maps are aligned from early experiences. However, instantaneous misalignment of the aural and visual stimuli has been observed to result in adaptive behavior, manifested by functional and anatomical changes of the auditory processing system. Using methods of information theory and statistical mechanics a model of the adaptive dynamics of the aural receptive field is presented and analyzed. The dynamics is determined by maximizing the mutual information between the neural output and the weighted sensory neural inputs, admixed with noise, subject to biophysical constraints. The reduced costs of neural rewiring, as in the case of young barn owls, reveal two qualitatively different types of receptive field adaptation depending on the magnitude of the audiovisual misalignment. By letting the misalignment increase with time, it is shown that the ability to adapt can be increased even when neural rewiring costs are high, in agreement with recent experimental reports of the increased plasticity of the auditory space map in adult barn owls due to incremental learning. Finally, a critical speed of misalignment is identified, demarcating the crossover from adaptive to nonadaptive behavior.
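
    For a linear-Gaussian channel the mutual-information objective has a closed form, which makes the tradeoff against rewiring costs easy to sketch. The quadratic cost on the size of the weight shift below is an illustrative assumption (not the paper's cost function), chosen to show why incremental misalignment can succeed where a single large jump fails:

      import numpy as np

      def mutual_info(w, sig_s=1.0, sig_n=0.5):
          # I(y; s) for y = w*s + noise, all Gaussian: 0.5 * log(1 + w^2 sig_s^2 / sig_n^2)
          return 0.5 * np.log(1.0 + (w * sig_s) ** 2 / sig_n ** 2)

      def adapt(w0, w_target, lam=1.5):
          """Shift the receptive-field weight toward the audiovisually aligned
          value only if the information gain beats a rewiring cost that grows
          quadratically with the size of the shift."""
          gain = mutual_info(w_target) - mutual_info(w0)
          return w_target if gain > lam * (w_target - w0) ** 2 else w0

      # A single large misalignment step fails, but the same total shift applied
      # incrementally succeeds, echoing the increased plasticity reported for
      # incremental learning in adult owls.
      print("single large shift:", adapt(0.2, 1.0))   # stays at 0.2
      w = 0.2
      for target in (0.4, 0.6, 0.8, 1.0):
          w = adapt(w, target)
      print("incremental shifts:", w)                 # reaches 1.0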

  13. The use of picture prompts and prompt delay to teach receptive labeling.

    PubMed

    Vedora, Joseph; Barry, Tiffany

    2016-12-01

    The current study extended research on picture prompts by using them with a progressive prompt delay to teach receptive labeling of pictures to 2 teenagers with autism. The procedure differed from prior research because the auditory stimulus was not presented or was presented only once during the picture-prompt condition. The results indicated that the combination of picture prompts and prompt delay was effective, although 1 participant required a procedural modification. © 2016 Society for the Experimental Analysis of Behavior.

  14. Auditory motion-specific mechanisms in the primate brain

    PubMed Central

    Baumann, Simon; Dheerendra, Pradeep; Joly, Olivier; Hunter, David; Balezeau, Fabien; Sun, Li; Rees, Adrian; Petkov, Christopher I.; Thiele, Alexander; Griffiths, Timothy D.

    2017-01-01

    This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream. PMID:28472038

  15. Differential Receptive Field Properties of Parvalbumin and Somatostatin Inhibitory Neurons in Mouse Auditory Cortex.

    PubMed

    Li, Ling-Yun; Xiong, Xiaorui R; Ibrahim, Leena A; Yuan, Wei; Tao, Huizhong W; Zhang, Li I

    2015-07-01

    Cortical inhibitory circuits play important roles in shaping sensory processing. In auditory cortex, however, the functional properties of genetically identified inhibitory neurons are poorly characterized. Using two-photon imaging-guided recordings, we specifically targeted two major types of cortical inhibitory neuron, parvalbumin (PV)- and somatostatin (SOM)-expressing neurons, in superficial layers of mouse auditory cortex. We found that PV cells exhibited broader tonal receptive fields with lower intensity thresholds and stronger tone-evoked spike responses compared with SOM neurons. The latter exhibited similar frequency selectivity as excitatory neurons. The broader/weaker frequency tuning of PV neurons was attributed to a broader range of synaptic inputs and stronger subthreshold responses, which resulted in a higher efficiency in the conversion of input to output. In addition, the onsets of both the input and spike responses of SOM neurons were significantly delayed compared with PV and excitatory cells. Our results suggest that PV and SOM neurons engage in auditory cortical circuits in different manners: while PV neurons may provide broadly tuned feedforward inhibition for a rapid control of ascending inputs to excitatory neurons, the delayed and more selective inhibition from SOM neurons may provide a specific modulation of feedback inputs on their distal dendrites. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. The Influence of Linguistic Proficiency on Masked Text Recognition Performance in Adults With and Without Congenital Hearing Impairment.

    PubMed

    Huysmans, Elke; Bolk, Elske; Zekveld, Adriana A; Festen, Joost M; de Groot, Annette M B; Goverts, S Theo

    2016-01-01

    The authors first examined the influence of moderate to severe congenital hearing impairment (CHI) on the correctness of samples of elicited spoken language. Then, the authors used this measure as an indicator of linguistic proficiency and examined its effect on performance in language reception, independent of bottom-up auditory processing. In groups of adults with normal hearing (NH, n = 22), acquired hearing impairment (AHI, n = 22), and moderate to severe CHI (n = 21), the authors assessed linguistic proficiency by analyzing the morphosyntactic correctness of their spoken language production. Language reception skills were examined with a task for masked sentence recognition in the visual domain (text), at a readability level of 50%, using grammatically correct sentences and sentences with distorted morphosyntactic cues. The actual performance on the tasks was compared between groups. Adults with CHI made more morphosyntactic errors in spoken language production than adults with NH, while no differences were observed between the AHI and NH groups. This outcome pattern persisted when comparisons were restricted to subgroups of AHI and CHI adults matched for current auditory speech reception abilities. The data yielded no differences between groups in masked text recognition of grammatically correct sentences in a test condition in which subjects could fully take advantage of their linguistic knowledge. Also, no difference between groups was found in sensitivity to morphosyntactic distortions when processing short masked sentences presented visually. These data showed that problems with the correct use of specific morphosyntactic knowledge in spoken language production are a long-term effect of moderate to severe CHI, independent of current auditory processing abilities. However, moderate to severe CHI generally does not impede performance in masked language reception in the visual modality, as measured in this study with short, degraded sentences. Aspects of linguistic proficiency that are affected by CHI thus do not seem to play a role in masked sentence recognition in the visual modality.

  17. A single-band envelope cue as a supplement to speechreading of segmentals: a comparison of auditory versus tactual presentation.

    PubMed

    Bratakos, M S; Reed, C M; Delhorne, L A; Denesvich, G

    2001-06-01

    The objective of this study was to compare the effects of a single-band envelope cue as a supplement to speechreading of segmentals and sentences when presented through either the auditory or tactual modality. The supplementary signal, which consisted of a 200-Hz carrier amplitude-modulated by the envelope of an octave band of speech centered at 500 Hz, was presented through a high-performance single-channel vibrator for tactual stimulation or through headphones for auditory stimulation. Normal-hearing subjects were trained and tested on the identification of a set of 16 medial vowels in /b/-V-/d/ context and a set of 24 initial consonants in C-/a/-C context under five conditions: speechreading alone (S), auditory supplement alone (A), tactual supplement alone (T), speechreading combined with the auditory supplement (S+A), and speechreading combined with the tactual supplement (S+T). Performance on various speech features was examined to determine the contribution of different features toward improvements under the aided conditions for each modality. Performance on the combined conditions (S+A and S+T) was compared with predictions generated from a quantitative model of multi-modal performance. To explore the relationship between benefits for segmentals and for connected speech within the same subjects, sentence reception was also examined for the three conditions of S, S+A, and S+T. For segmentals, performance generally followed the pattern of T < A < S < S+T < S+A. Significant improvements to speechreading were observed with both the tactual and auditory supplements for consonants (10 and 23 percentage-point improvements, respectively), but only with the auditory supplement for vowels (a 10 percentage-point improvement). The results of the feature analyses indicated that improvements to speechreading arose primarily from improved performance on the features low and tense for vowels and on the features voicing, nasality, and plosion for consonants. These improvements were greater for auditory relative to tactual presentation. When predicted percent-correct scores for the multi-modal conditions were compared with observed scores, the predicted values always exceeded observed values and the predictions were somewhat more accurate for the S+A than for the S+T conditions. For sentences, significant improvements to speechreading were observed with both the auditory and tactual supplements for high-context materials but again only with the auditory supplement for low-context materials. The tactual supplement provided a relative gain to speechreading of roughly 25% for all materials except low-context sentences (where gain was only 10%), whereas the auditory supplement provided relative gains of roughly 50% (for vowels, consonants, and low-context sentences) to 75% (for high-context sentences). The envelope cue provides a significant benefit to the speechreading of consonant segments when presented through either the auditory or tactual modality and of vowel segments through audition only. These benefits were found to be related to the reception of the same types of features under both modalities (voicing, manner, and plosion for consonants and low and tense for vowels); however, benefits were larger for auditory compared with tactual presentation. The benefits observed for segmentals appear to carry over into benefits for sentence reception under both modalities.
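
    The supplementary signal is fully specified in this record: a 200-Hz carrier amplitude-modulated by the envelope of an octave band of speech centered at 500 Hz. A minimal sketch of such a signal in Python, assuming Butterworth filters, a Hilbert envelope, and 50-Hz envelope smoothing (none of which are stated in the record):

        import numpy as np
        from scipy.signal import butter, sosfilt, hilbert

        def envelope_cue(speech, fs, band_center=500.0, carrier_hz=200.0):
            # Octave band edges around band_center (500/sqrt(2) to 500*sqrt(2) Hz).
            lo, hi = band_center / np.sqrt(2), band_center * np.sqrt(2)
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfilt(sos, speech)
            env = np.abs(hilbert(band))  # amplitude envelope of the band
            # Low-pass the envelope so only slow amplitude fluctuations remain.
            sos_env = butter(2, 50.0, btype="lowpass", fs=fs, output="sos")
            env = sosfilt(sos_env, env)
            t = np.arange(len(speech)) / fs
            return env * np.sin(2 * np.pi * carrier_hz * t)  # modulated carrier

        # e.g., with a placeholder 1-s "speech" signal:
        fs = 16000
        cue = envelope_cue(np.random.randn(fs), fs)

    The same waveform could, in principle, drive either headphones or a vibrator, matching the study's auditory and tactual presentation conditions.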

  18. Systemic Nicotine Increases Gain and Narrows Receptive Fields in A1 via Integrated Cortical and Subcortical Actions

    PubMed Central

    Intskirveli, Irakli

    2017-01-01

    Nicotine enhances sensory and cognitive processing via actions at nicotinic acetylcholine receptors (nAChRs), yet the precise circuit- and systems-level mechanisms remain unclear. In sensory cortex, nicotinic modulation of receptive fields (RFs) provides a model to probe mechanisms by which nAChRs regulate cortical circuits. Here, we examine RF modulation in mouse primary auditory cortex (A1) using a novel electrophysiological approach: current-source density (CSD) analysis of responses to tone-in-notched-noise (TINN) acoustic stimuli. TINN stimuli consist of a tone at the characteristic frequency (CF) of the recording site embedded within a white noise stimulus filtered to create a spectral “notch” of variable width centered on CF. Systemic nicotine (2.1 mg/kg) enhanced responses to the CF tone and to narrow-notch stimuli, yet reduced the response to wider-notch stimuli, indicating increased response gain within a narrowed RF. Subsequent manipulations showed that modulation of cortical RFs by systemic nicotine reflected effects at several levels in the auditory pathway: nicotine suppressed responses in the auditory midbrain and thalamus, with suppression increasing with spectral distance from CF so that RFs became narrower, and facilitated responses in the thalamocortical pathway, while nicotinic actions within A1 further contributed to both suppression and facilitation. Thus, multiple effects of systemic nicotine integrate along the ascending auditory pathway. These actions at nAChRs in cortical and subcortical circuits, which mimic effects of auditory attention, likely contribute to nicotinic enhancement of sensory and cognitive processing. PMID:28660244

  19. Systemic Nicotine Increases Gain and Narrows Receptive Fields in A1 via Integrated Cortical and Subcortical Actions.

    PubMed

    Askew, Caitlin; Intskirveli, Irakli; Metherate, Raju

    2017-01-01

    Nicotine enhances sensory and cognitive processing via actions at nicotinic acetylcholine receptors (nAChRs), yet the precise circuit- and systems-level mechanisms remain unclear. In sensory cortex, nicotinic modulation of receptive fields (RFs) provides a model to probe mechanisms by which nAChRs regulate cortical circuits. Here, we examine RF modulation in mouse primary auditory cortex (A1) using a novel electrophysiological approach: current-source density (CSD) analysis of responses to tone-in-notched-noise (TINN) acoustic stimuli. TINN stimuli consist of a tone at the characteristic frequency (CF) of the recording site embedded within a white noise stimulus filtered to create a spectral "notch" of variable width centered on CF. Systemic nicotine (2.1 mg/kg) enhanced responses to the CF tone and to narrow-notch stimuli, yet reduced the response to wider-notch stimuli, indicating increased response gain within a narrowed RF. Subsequent manipulations showed that modulation of cortical RFs by systemic nicotine reflected effects at several levels in the auditory pathway: nicotine suppressed responses in the auditory midbrain and thalamus, with suppression increasing with spectral distance from CF so that RFs became narrower, and facilitated responses in the thalamocortical pathway, while nicotinic actions within A1 further contributed to both suppression and facilitation. Thus, multiple effects of systemic nicotine integrate along the ascending auditory pathway. These actions at nAChRs in cortical and subcortical circuits, which mimic effects of auditory attention, likely contribute to nicotinic enhancement of sensory and cognitive processing.
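
    The TINN stimulus described in the two records above is straightforward to construct: a characteristic-frequency tone summed with white noise from which a variable-width band centered on CF has been removed. A sketch in Python, assuming a Butterworth band-stop filter and illustrative levels and durations (the study's exact stimulus parameters are not given here):

        import numpy as np
        from scipy.signal import butter, sosfilt

        def tinn_stimulus(cf, notch_octaves, dur, fs, tone_level=1.0, noise_level=0.5):
            # Tone at CF embedded in white noise with a spectral notch of the
            # given width (in octaves) centered on CF.
            n = int(dur * fs)
            t = np.arange(n) / fs
            tone = tone_level * np.sin(2 * np.pi * cf * t)
            noise = noise_level * np.random.default_rng(0).standard_normal(n)
            if notch_octaves > 0:
                lo = cf * 2 ** (-notch_octaves / 2)  # lower notch edge
                hi = cf * 2 ** (notch_octaves / 2)   # upper notch edge
                sos = butter(6, [lo, hi], btype="bandstop", fs=fs, output="sos")
                noise = sosfilt(sos, noise)
            return tone + noise

        # e.g., a 1-octave notch around a hypothetical 8-kHz CF site:
        stim = tinn_stimulus(cf=8000, notch_octaves=1.0, dur=0.2, fs=44100)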

  20. Auditory evoked fields predict language ability and impairment in children.

    PubMed

    Oram Cardy, Janis E; Flagg, Elissa J; Roberts, Wendy; Roberts, Timothy P L

    2008-05-01

    Recent evidence suggests that a subgroup of children with autism show similarities to children with Specific Language Impairment (SLI) in the pattern of their linguistic impairments, but the source of this overlap is unclear. We examined the ability of auditory evoked magnetic fields to predict language and other developmental abilities in children and adolescents. Following standardized assessment of language ability, nonverbal IQ, and autism-associated behaviors, 110 trials of a tone were binaurally presented to 45 7- to 18-year-olds who had typical development, autism (with language impairment; LI), Asperger Syndrome (i.e., without LI), or SLI. Using a 151-channel MEG system, latency of left hemisphere (LH) and right hemisphere (RH) auditory M50 and M100 peaks was recorded. RH M50 latency (and to a lesser extent, RH M100 latency) predicted overall oral language ability, accounting for 36% of the variance. Nonverbal IQ and autism behavior ratings were not predicted by any of the evoked fields. Latency of the RH M50 was the best predictor of clinical LI (i.e., irrespective of autism diagnosis), and demonstrated 82% accuracy in predicting Receptive LI; a cutoff of 84.6 ms achieved 92% specificity and 70% sensitivity in classifying children with and without Receptive LI. Auditory evoked responses appear to reflect language functioning and impairment rather than non-specific brain (dys)function (e.g., IQ, behavior). RH M50 latency proved to be a relatively useful indicator of impaired language comprehension, suggesting that delayed auditory perceptual processing in the RH may be a key neural dysfunction underlying the overlap between subgroups of children with autism and SLI.
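
    The reported cutoff turns the RH M50 latency into a one-parameter classifier, with delayed latencies flagging Receptive LI. A sketch of how sensitivity and specificity fall out of such a cutoff, using entirely hypothetical latencies and diagnoses (not the study's data):

        import numpy as np

        def classify_by_latency(latencies_ms, cutoff=84.6):
            # Flag Receptive LI when RH M50 latency exceeds the cutoff.
            return np.asarray(latencies_ms) > cutoff

        # Hypothetical latencies (ms) and true diagnoses, for illustration only.
        lat = np.array([78.0, 92.5, 81.2, 101.3, 82.0, 79.5])
        has_li = np.array([False, True, False, True, True, False])

        pred = classify_by_latency(lat)
        sensitivity = (pred & has_li).sum() / has_li.sum()       # true-positive rate
        specificity = (~pred & ~has_li).sum() / (~has_li).sum()  # true-negative rate
        print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")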

  1. Auditory, speech and language development in young children with cochlear implants compared with children with normal hearing.

    PubMed

    Schramm, Bianka; Bohnert, Andrea; Keilmann, Annerose

    2010-07-01

    This study had two aims: (1) to document the auditory and lexical development of children who are deaf and received the first cochlear implant (CI) by the age of 16 months and the second CI by the age of 31 months and (2) to compare these children's results with those of children with normal hearing (NH). This longitudinal study included five children with NH and five with sensorineural deafness. All children of the second group were observed for 36 months after the first fitting of the device (cochlear implant). The auditory development of the CI group was documented every 3 months up to the age of two years, in terms of both hearing age and chronological age, and for the NH group in terms of chronological age. The language development of each NH child was assessed at 12, 18, 24 and 36 months of chronological age. Children with CIs were examined at the same intervals of chronological and hearing age. In both groups, children showed individual patterns of auditory and language development. The children with CIs differed from the NH control group in the growth of receptive and expressive vocabulary. Three children in the CI group needed almost 6 months to make gains in speech development consistent with what would be expected for their chronological age. Overall, the receptive and expressive development of all children in the implanted group increased with their hearing age. These results indicate that early identification and early implantation are advisable to give children with sensorineural hearing loss a realistic chance to develop satisfactory expressive and receptive vocabulary and to develop stable phonological, morphological and syntactical skills for school life. On the basis of these longitudinal data, we will be able to develop new diagnostic tools that enable clinicians to assess a child's progress in hearing and speech development. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  2. Covert Auditory Spatial Orienting: An Evaluation of the Spatial Relevance Hypothesis

    ERIC Educational Resources Information Center

    Roberts, Katherine L.; Summerfield, A. Quentin; Hall, Deborah A.

    2009-01-01

    The spatial relevance hypothesis (J. J. McDonald & L. M. Ward, 1999) proposes that covert auditory spatial orienting can only be beneficial to auditory processing when task stimuli are encoded spatially. We present a series of experiments that evaluate 2 key aspects of the hypothesis: (a) that "reflexive activation of location-sensitive neurons is…

  3. Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.

    PubMed

    Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-11-01

    Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. Here we present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward-predicting and nonreward-predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.

  4. Discriminative Learning of Receptive Fields from Responses to Non-Gaussian Stimulus Ensembles

    PubMed Central

    Meyer, Arne F.; Diepenbrock, Jan-Philipp; Happel, Max F. K.; Ohl, Frank W.; Anemüller, Jörn

    2014-01-01

    Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge with fewer experimental recordings and lower estimation variance than the generalized linear model and recent information theoretic methods. Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and in settings where rapid adaptation is induced by experimental design. PMID:24699631

  5. Discriminative learning of receptive fields from responses to non-Gaussian stimulus ensembles.

    PubMed

    Meyer, Arne F; Diepenbrock, Jan-Philipp; Happel, Max F K; Ohl, Frank W; Anemüller, Jörn

    2014-01-01

    Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge with fewer experimental recordings and lower estimation variance than the generalized linear model and recent information theoretic methods. Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and in settings where rapid adaptation is induced by experimental design.
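
    Since CbRF reduces receptive field estimation to binary classification, any linear large-margin classifier can play the central role. A sketch using scikit-learn's LinearSVC as that classifier (the authors' own implementation and regularization choices are not specified here); the toy data below are Gaussian for brevity, whereas the method's selling point is robustness to non-Gaussian ensembles:

        import numpy as np
        from sklearn.svm import LinearSVC

        def cbrf_estimate(stimuli, spikes, C=1.0):
            # Fit a linear large-margin classifier to discriminate spike-eliciting
            # from non-spike-eliciting stimulus segments, then read the learned
            # weight vector back as the receptive field filter.
            clf = LinearSVC(C=C, max_iter=10000)
            clf.fit(stimuli, spikes)
            rf = clf.coef_.ravel()
            return rf / np.linalg.norm(rf)  # only the filter's shape is meaningful

        # Toy validation: spikes generated from a known filter plus a threshold
        # nonlinearity; stimuli are flattened (e.g., spectrogram) segments.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((1000, 200))
        w_true = rng.standard_normal(200)
        y = (X @ w_true + 5.0 * rng.standard_normal(1000) > 0).astype(int)
        w_hat = cbrf_estimate(X, y)
        print("correlation with ground truth:", np.corrcoef(w_hat, w_true)[0, 1])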

  6. Attentional reorienting triggers spatial asymmetries in a search task with cross-modal spatial cueing

    PubMed Central

    Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias

    2018-01-01

    Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Specifically, facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., a unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries, with search times increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy. The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637

  7. Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation.

    PubMed

    Brito, Carlos S N; Gerstner, Wulfram

    2016-09-01

    The development of sensory receptive fields has been modeled in the past by a variety of models, including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing-dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely nonlinear Hebbian learning. When nonlinear Hebbian learning is applied to natural images, receptive field shapes were strongly constrained by the input statistics and preprocessing, but exhibited only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities such as auditory models or V2 development leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities.

  8. Nonlinear Hebbian Learning as a Unifying Principle in Receptive Field Formation

    PubMed Central

    Gerstner, Wulfram

    2016-01-01

    The development of sensory receptive fields has been modeled in the past by a variety of models, including normative models such as sparse coding or independent component analysis and bottom-up models such as spike-timing-dependent plasticity or the Bienenstock-Cooper-Munro model of synaptic plasticity. Here we show that the above variety of approaches can all be unified into a single common principle, namely nonlinear Hebbian learning. When nonlinear Hebbian learning is applied to natural images, receptive field shapes were strongly constrained by the input statistics and preprocessing, but exhibited only modest variation across different choices of nonlinearities in neuron models or synaptic plasticity rules. Neither overcompleteness nor sparse network activity is necessary for the development of localized receptive fields. The analysis of alternative sensory modalities such as auditory models or V2 development leads to the same conclusions. In all examples, receptive fields can be predicted a priori by reformulating an abstract model as nonlinear Hebbian learning. Thus nonlinear Hebbian learning and natural statistics can account for many aspects of receptive field formation across models and sensory modalities. PMID:27690349
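
    The unifying principle itself fits in a few lines: nudge a weight vector in the direction of the input scaled by a nonlinear function of the output, and keep the weights normalized. A minimal sketch, assuming whitened input patches and a cubic nonlinearity (one of the many choices that, per the abstract, yield similar receptive fields):

        import numpy as np

        def nonlinear_hebbian(patches, n_steps=20000, lr=0.005, rng=None):
            # dw ~ lr * f(w.x) * x with a supra-linear f, plus explicit weight
            # normalization; `patches` is an (n, d) array of whitened inputs.
            rng = rng if rng is not None else np.random.default_rng(0)
            n, d = patches.shape
            w = rng.standard_normal(d)
            w /= np.linalg.norm(w)
            f = lambda y: y ** 3            # one choice among many that work
            for _ in range(n_steps):
                x = patches[rng.integers(n)]
                w += lr * f(w @ x) * x      # nonlinear Hebbian update
                w /= np.linalg.norm(w)      # keep w on the unit sphere
            return w                        # learned receptive field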

  9. Mutism and auditory agnosia due to bilateral insular damage--role of the insula in human communication.

    PubMed

    Habib, M; Daquin, G; Milandre, L; Royere, M L; Rey, M; Lanteri, A; Salamon, G; Khalil, R

    1995-03-01

    We report a case of transient mutism and persistent auditory agnosia due to two successive ischemic infarcts mainly involving the insular cortex of both hemispheres. During the 'mutic' period, which lasted about 1 month, the patient did not respond to any auditory stimuli and made no effort to communicate. On follow-up examinations, language competences had re-appeared almost intact, but a massive auditory agnosia for non-verbal sounds was observed. From close inspection of the lesion site, as determined with brain magnetic resonance imaging, and from a study of auditory evoked potentials, it is concluded that bilateral insular damage was crucial to both the expressive and receptive components of the syndrome. The role of the insula in verbal and non-verbal communication is discussed in the light of anatomical descriptions of the pattern of connectivity of the insular cortex.

  10. Does attention play a role in dynamic receptive field adaptation to changing acoustic salience in A1?

    PubMed

    Fritz, Jonathan B; Elhilali, Mounya; David, Stephen V; Shamma, Shihab A

    2007-07-01

    Acoustic filter properties of A1 neurons can dynamically adapt to stimulus statistics, classical conditioning, instrumental learning and the changing auditory attentional focus. We have recently developed an experimental paradigm that allows us to view cortical receptive field plasticity on-line as the animal meets different behavioral challenges by attending to salient acoustic cues and changing its cortical filters to enhance performance. We propose that attention is the key trigger that initiates a cascade of events leading to the dynamic receptive field changes that we observe. In our paradigm, ferrets were initially trained, using conditioned avoidance training techniques, to discriminate between background noise stimuli (temporally orthogonal ripple combinations) and foreground tonal target stimuli. They learned to generalize the task for a wide variety of distinct background and foreground target stimuli. We recorded cortical activity in the awake behaving animal and computed on-line spectrotemporal receptive fields (STRFs) of single neurons in A1. We observed clear, predictable task-related changes in STRF shape while the animal performed spectral tasks (including single-tone and multi-tone detection, and two-tone discrimination) with different tonal targets. A different set of task-related changes occurred when the animal performed temporal tasks (including gap detection and click-rate discrimination). Distinctive cortical STRF changes may constitute a "task-specific signature". These spectral and temporal changes in cortical filters occur quite rapidly, within 2 min of task onset, and fade just as quickly after task completion or, in some cases, persist for hours. The same cell could multiplex by differentially changing its receptive field in different task conditions. On-line dynamic task-related changes, as well as persistent plastic changes, were observed at the single-unit, multi-unit and population levels. Auditory attention is likely to be pivotal in mediating these task-related changes, since the magnitude of STRF changes correlated with behavioral performance on tasks with novel targets. Overall, these results suggest the presence of an attention-triggered plasticity algorithm in A1 that can swiftly change STRF shape, transforming receptive fields to enhance figure/ground separation: a contrast-matched filter suppresses the background while simultaneously enhancing the salient acoustic target in the foreground. These results favor the view of a nimble, dynamic, attentive and adaptive brain that can quickly reshape its sensory filter properties and sensori-motor links on a moment-to-moment basis, depending upon the current challenges the animal faces. In this review, we summarize our results in the context of a broader survey of the field of auditory attention, and then consider neuronal networks that could give rise to this phenomenon of attention-driven receptive field plasticity in A1.
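
    The STRFs at the heart of this paradigm are commonly estimated by reverse correlation: averaging the stimulus spectrogram segments that precede each spike. A sketch of that classic spike-triggered-average estimate (the authors' on-line pipeline is more elaborate and is not reproduced here):

        import numpy as np

        def strf_sta(spec, spike_times, n_lags, bin_s):
            # spec: (n_freq, n_time) stimulus spectrogram in bins of bin_s seconds;
            # returns the (n_freq, n_lags) average spectrogram preceding spikes.
            strf = np.zeros((spec.shape[0], n_lags))
            count = 0
            for t in spike_times:
                b = int(t / bin_s)                   # spike's time bin
                if n_lags <= b <= spec.shape[1]:     # need a full window before it
                    strf += spec[:, b - n_lags:b]
                    count += 1
            return strf / max(count, 1)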

  11. Predicting bias in perceived position using attention field models.

    PubMed

    Klein, Barrie P; Paffen, Chris L E; Pas, Susan F Te; Dumoulin, Serge O

    2016-05-01

    Attention is the mechanism through which we select relevant information from our visual environment. We have recently demonstrated that attention attracts receptive fields across the visual hierarchy (Klein, Harvey, & Dumoulin, 2014). We captured this receptive field attraction using an attention field model. Here, we apply this model to human perception: We predict that receptive field attraction results in a bias in perceived position, which depends on the size of the underlying receptive fields. We instructed participants to compare the relative position of Gabor stimuli, while we manipulated the focus of attention using exogenous cueing. We varied the eccentric position and spatial frequency of the Gabor stimuli to vary underlying receptive field size. The positional biases as a function of eccentricity matched the predictions by an attention field model, whereas the bias as a function of spatial frequency did not. As spatial frequency and eccentricity are encoded differently across the visual hierarchy, we speculate that they might interact differently with the attention field that is spatially defined.
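
    An attention field model of the kind referenced here can be reduced to a product of Gaussians: multiplying a Gaussian receptive field by a Gaussian attention field yields another Gaussian whose center is pulled toward the attended location, and the pull grows with receptive field size, which is the source of the predicted position bias. A sketch with hypothetical parameter values:

        import numpy as np

        def attracted_rf_center(mu_rf, sigma_rf, mu_att, sigma_att):
            # Center of the product of two 1-D Gaussians: a precision-weighted
            # mean of the receptive field and attention field centers.
            w_rf, w_att = 1 / sigma_rf**2, 1 / sigma_att**2
            return (w_rf * mu_rf + w_att * mu_att) / (w_rf + w_att)

        # A narrow vs. a wide RF, attention 5 deg away (all values hypothetical):
        for sigma in (2.0, 8.0):
            shift = attracted_rf_center(0.0, sigma, 5.0, 5.0)
            print(f"sigma_rf={sigma}: center shifts {shift:.2f} deg toward attention")

    Because receptive fields grow with eccentricity, this simple form already predicts larger perceptual biases at greater eccentricities, the pattern the study reports for its eccentricity manipulation.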

  12. Modulation of auditory stimulus processing by visual spatial or temporal cue: an event-related potentials study.

    PubMed

    Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong

    2013-10-11

    Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulated auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following the VSC were faster and more accurate than those following the VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in the VSC condition was larger than that in the VTC condition, and the mean amplitude of the late positivity (300-420 ms) in the VTC condition was larger than that in the VSC condition. These findings suggest that the modulation of auditory stimulus processing by visually induced spatial versus temporal orienting of attention is different, but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
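
    The component measures reported here are mean amplitudes within fixed latency windows (90-110 ms for P1, 150-170 ms for N1, 300-420 ms for the late positivity). A sketch of that computation over a single averaged epoch, using a placeholder waveform and an assumed 1-kHz sampling grid:

        import numpy as np

        def mean_amplitude(erp, times_ms, window_ms):
            # Mean ERP amplitude within a latency window.
            lo, hi = window_ms
            mask = (times_ms >= lo) & (times_ms <= hi)
            return erp[mask].mean()

        times = np.arange(-100, 600)          # one sample per ms, -100 to 600 ms
        erp = np.random.randn(len(times))     # placeholder averaged epoch
        p1 = mean_amplitude(erp, times, (90, 110))
        n1 = mean_amplitude(erp, times, (150, 170))
        lp = mean_amplitude(erp, times, (300, 420))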

  13. Audio-visual speech intelligibility benefits with bilateral cochlear implants when talker location varies.

    PubMed

    van Hoesel, Richard J M

    2015-04-01

    One of the key benefits of using cochlear implants (CIs) in both ears rather than just one is improved localization. It is likely that in complex listening scenes, improved localization allows bilateral CI users to orient toward talkers to improve signal-to-noise ratios and gain access to visual cues, but to date, that conjecture has not been tested. To obtain an objective measure of that benefit, seven bilateral CI users were assessed for both auditory-only and audio-visual speech intelligibility in noise using a novel dynamic spatial audio-visual test paradigm. For each trial, conducted in spatially distributed noise, an auditory-only cueing phrase spoken by one of four talkers was first presented from one of four locations. Shortly afterward, a target sentence was presented that was either audio-visual or, in another test configuration, audio-only, and was spoken by the same talker from the same location as the cueing phrase. During the target presentation, visual distractors were added at other spatial locations. Results showed that in terms of speech reception thresholds (SRTs), the average improvement for bilateral listening over the better-performing ear alone was 9 dB for the audio-visual mode and 3 dB for audition alone. Comparison of bilateral performance for audio-visual and audition-alone conditions showed that inclusion of visual cues led to an average SRT improvement of 5 dB. For unilateral device use, no such benefit arose, presumably due to the greatly reduced ability to localize the target talker to acquire visual information. The bilateral CI speech intelligibility advantage over the better ear in the present study is much larger than that previously reported for static talker locations and indicates greater everyday speech benefits and a more favorable cost-benefit ratio than estimated to date.

  14. Relation between language, audio-vocal psycholinguistic abilities and P300 in children having specific language impairment.

    PubMed

    Shaheen, Elham Ahmed; Shohdy, Sahar Saad; Abd Al Raouf, Mahmoud; Mohamed El Abd, Shereen; Abd Elhamid, Asmss

    2011-09-01

    Specific language impairment is a relatively common developmental condition in which a child fails to develop language at the typical rate despite normal general intellectual abilities, adequate exposure to language, and the absence of hearing impairment or neurological or psychiatric disorders. There is much controversy about the extent to which auditory processing deficits are important in the genesis of specific language impairment. The objective of this paper is to assess higher cortical functions in children with specific language impairment by measuring neurophysiological changes, and to correlate the results with the clinical picture of the patients in order to choose the proper rehabilitation training program. This study was carried out on 40 children diagnosed with specific language impairment and 20 normal children as a control group. All children were subjected to the assessment protocol applied in Kasr El-Aini hospital. They were also subjected to a language test (receptive, expressive and total language items) and the audio-vocal items of the Illinois Test of Psycholinguistic Abilities (auditory reception, auditory association, verbal expression, grammatical closure, auditory sequential memory and sound blending), as well as audiological assessment that included peripheral audiological testing and P300 amplitude and latency measurement. The results revealed a highly significant difference in P300 amplitude and latency between the specific language impairment group and the control group. There were also strong correlations between P300 latency and grammatical closure, auditory sequential memory and sound blending, and significant correlations between P300 amplitude and auditory association and verbal expression. Children with specific language impairment, in spite of normal peripheral hearing, show evidence of cognitive and central auditory processing deficits on the P300 auditory event-related potential: prolonged latency, indicating a slow rate of processing, and small amplitude, indicating defective memory. These deficits affect cognitive and language development in children with specific language impairment and should be considered when planning the intervention program. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  15. The Role of Auditory Cues in the Spatial Knowledge of Blind Individuals

    ERIC Educational Resources Information Center

    Papadopoulos, Konstantinos; Papadimitriou, Kimon; Koutsoklenis, Athanasios

    2012-01-01

    The study presented here sought to explore the role of auditory cues in the spatial knowledge of blind individuals by examining the relation between the perceived auditory cues and the landscape of a given area and by investigating how blind individuals use auditory cues to create cognitive maps. The findings reveal that several auditory cues…

  16. A simulation framework for auditory discrimination experiments: Revealing the importance of across-frequency processing in speech perception.

    PubMed

    Schädler, Marc René; Warzybok, Anna; Ewert, Stephan D; Kollmeier, Birger

    2016-05-01

    A framework for simulating auditory discrimination experiments, based on an approach from Schädler, Warzybok, Hochmuth, and Kollmeier [(2015). Int. J. Audiol. 54, 100-107] that was originally designed to predict speech recognition thresholds, is extended to also predict psychoacoustic thresholds. The proposed framework is used to assess the suitability of different auditory-inspired feature sets for a range of auditory discrimination experiments that included psychoacoustic as well as speech recognition experiments in noise. The considered experiments were 2 kHz tone-in-broadband-noise simultaneous masking as a function of tone length, spectral masking with simultaneously presented tone signals and narrow-band noise maskers, and German Matrix sentence test speech reception threshold in stationary and modulated noise. The employed feature sets included spectro-temporal Gabor filter bank features, Mel-frequency cepstral coefficients, logarithmically scaled Mel-spectrograms, and the internal representation of the Perception Model from Dau, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102(5), 2892-2905]. The proposed framework was successfully employed to simulate all experiments with a common parameter set and to obtain objective thresholds with fewer assumptions compared to traditional modeling approaches. Depending on the feature set, the simulated reference-free thresholds were found to agree with, and hence to predict, empirical data from the literature. Across-frequency processing was found to be crucial to accurately model the lower speech reception threshold in modulated noise conditions than in stationary noise conditions.
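
    Of the feature sets compared, the logarithmically scaled Mel-spectrogram is the simplest to reproduce. A sketch using librosa, with common window and hop sizes that are assumptions of this sketch rather than the study's settings:

        import numpy as np
        import librosa

        def log_mel_features(y, sr, n_mels=40):
            # Logarithmically scaled Mel-spectrogram: (n_mels, n_frames) in dB.
            S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512,
                                               hop_length=160, n_mels=n_mels)
            return librosa.power_to_db(S, ref=np.max)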

  17. The role of auditory and cognitive factors in understanding speech in noise by normal-hearing older listeners

    PubMed Central

    Schoof, Tim; Rosen, Stuart

    2014-01-01

    Normal-hearing older adults often experience increased difficulties understanding speech in noise. In addition, they benefit less from amplitude fluctuations in the masker. These difficulties may be attributed to an age-related auditory temporal processing deficit. However, a decline in cognitive processing likely also plays an important role. This study examined the relative contribution of declines in both auditory and cognitive processing to the speech in noise performance in older adults. Participants included older (60–72 years) and younger (19–29 years) adults with normal hearing. Speech reception thresholds (SRTs) were measured for sentences in steady-state speech-shaped noise (SS), 10-Hz sinusoidally amplitude-modulated speech-shaped noise (AM), and two-talker babble. In addition, auditory temporal processing abilities were assessed by measuring thresholds for gap, amplitude-modulation, and frequency-modulation detection. Measures of processing speed, attention, working memory, Text Reception Threshold (a visual analog of the SRT), and reading ability were also obtained. Of primary interest was the extent to which the various measures correlate with listeners' abilities to perceive speech in noise. SRTs were significantly worse for older adults in the presence of two-talker babble but not SS and AM noise. In addition, older adults showed some cognitive processing declines (working memory and processing speed) although no declines in auditory temporal processing. However, working memory and processing speed did not correlate significantly with SRTs in babble. Despite declines in cognitive processing, normal-hearing older adults do not necessarily have problems understanding speech in noise as SRTs in SS and AM noise did not differ significantly between the two groups. Moreover, while older adults had higher SRTs in two-talker babble, this could not be explained by age-related cognitive declines in working memory or processing speed. PMID:25429266

  18. Reduced auditory efferent activity in childhood selective mutism.

    PubMed

    Bar-Haim, Yair; Henkin, Yael; Ari-Even-Roth, Daphne; Tetin-Schneider, Simona; Hildesheimer, Minka; Muchnik, Chava

    2004-06-01

    Selective mutism (SM) is a psychiatric disorder of childhood characterized by a consistent inability to speak in specific situations despite the ability to speak normally in others. The objective of this study was to test whether auditory efferent activity, which may have a direct bearing on speaking behavior, is compromised in selectively mute children. Participants were 16 children with selective mutism and 16 normally developing control children matched for age and gender. All children were tested for pure-tone audiometry, speech reception thresholds, speech discrimination, middle-ear acoustic reflex thresholds and decay function, transient evoked otoacoustic emissions, suppression of transient evoked otoacoustic emissions, and auditory brainstem responses. Compared with control children, selectively mute children displayed specific deficiencies in auditory efferent activity. These aberrations in efferent activity appeared alongside normal pure-tone and speech audiometry and normal brainstem transmission, as indicated by auditory brainstem response latencies. The diminished auditory efferent activity detected in some children with SM may result in desensitization of their auditory pathways by self-vocalization and in reduced control of masking and distortion of incoming speech sounds. These children may gradually learn to restrict vocalization to the minimal amount possible in contexts that require complex auditory processing.

  19. Reorganization in processing of spectral and temporal input in the rat posterior auditory field induced by environmental enrichment

    PubMed Central

    Jakkamsetti, Vikram; Chang, Kevin Q.

    2012-01-01

    Environmental enrichment induces powerful changes in the adult cerebral cortex. Studies in primary sensory cortex have observed that environmental enrichment modulates neuronal response strength, selectivity, speed of response, and synchronization to rapid sensory input. Other reports suggest that nonprimary sensory fields are more plastic than primary sensory cortex. The consequences of environmental enrichment on information processing in nonprimary sensory cortex have yet to be studied. Here we examine the physiological effects of enrichment in the posterior auditory field (PAF), a field distinguished from primary auditory cortex (A1) by wider receptive fields, slower response times, and a greater preference for slowly modulated sounds. Environmental enrichment induced a significant increase in spectral and temporal selectivity in PAF. PAF neurons exhibited narrower receptive fields and responded significantly faster and for a briefer period to sounds after enrichment. Enrichment increased time-locking to rapidly successive sensory input in PAF neurons. Compared with the effects reported by previous enrichment studies in A1, we observed a greater magnitude of reorganization in PAF after environmental enrichment. Along with other reports observing greater reorganization in nonprimary sensory cortex, our results in PAF suggest that nonprimary fields might have a greater capacity for reorganization than primary fields. PMID:22131375

  20. [The vocal behavior of telemarketing operators before and after a working day].

    PubMed

    Amorim, Geová Oliveira de; Bommarito, Silvana; Kanashiro, Célia Akemi; Chiari, Brasilia Maria

    2011-01-01

    To evaluate the vocal behavior of receptive telemarketing operators before and after a work shift, and to relate the results to the variable gender. Participants were 55 telemarketing operators (11 men and 44 women) working in a receptive mode in the city of Maceió (Alagoas, Brazil). A questionnaire was applied before the work shift to identify initial vocal complaints. Vocal samples comprising sustained emissions and connected speech were then recorded 10 minutes before and 10 minutes after the workday for later evaluation. Auditory-perceptual and acoustic analyses of voice were conducted. Vocal complaints and symptoms reported by the operators after the work shift were: dry throat (64%); neck and cervical pain (33%); hoarseness (31%); voice breaks (26%); and vocal fatigue (22%). Telemarketing operators presented reduced maximum phonation times before and after the day of work (p=0.645). Data from the auditory-perceptual assessment of voice were similar in pre- and post-shift moments (p=0.645). No difference between moments was found in the acoustic analysis either (p=0.738). Telemarketing operators have high rates of vocal symptoms after the work shift, but show no pre- to post-shift differences on auditory-perceptual and acoustic assessments of voice.

  1. Comparison of Congruence Judgment and Auditory Localization Tasks for Assessing the Spatial Limits of Visual Capture

    PubMed Central

    Bosen, Adam K.; Fleming, Justin T.; Brown, Sarah E.; Allen, Paul D.; O'Neill, William E.; Paige, Gary D.

    2016-01-01

    Vision typically has better spatial accuracy and precision than audition, and as a result often captures auditory spatial perception when visual and auditory cues are presented together. One determinant of visual capture is the amount of spatial disparity between auditory and visual cues: when disparity is small visual capture is likely to occur, and when disparity is large visual capture is unlikely. Previous experiments have used two methods to probe how visual capture varies with spatial disparity. First, congruence judgment assesses perceived unity between cues by having subjects report whether or not auditory and visual targets came from the same location. Second, auditory localization assesses the graded influence of vision on auditory spatial perception by having subjects point to the remembered location of an auditory target presented with a visual target. Previous research has shown that when both tasks are performed concurrently they produce similar measures of visual capture, but this may not hold when tasks are performed independently. Here, subjects alternated between tasks independently across three sessions. A Bayesian inference model of visual capture was used to estimate perceptual parameters for each session, which were compared across tasks. Results demonstrated that the range of audio-visual disparities over which visual capture was likely to occur was narrower in auditory localization than in congruence judgment, which the model indicates was caused by subjects adjusting their prior expectation that targets originated from the same location in a task-dependent manner. PMID:27815630
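
    Bayesian models of visual capture are typically causal-inference models: the observer weighs the likelihood that both cues arose from one source against the likelihood of two independent sources, combined with a prior expectation of a common cause; the task-dependent change reported here corresponds to adjusting that prior. A sketch of the common-cause posterior, assuming a zero-mean Gaussian spatial prior and illustrative parameter values (not the authors' fitted model):

        import numpy as np

        def p_common(x_a, x_v, sig_a, sig_v, sig_p, prior=0.5):
            # Posterior probability that auditory cue x_a and visual cue x_v
            # share a single source, given sensory noise s.d.s sig_a, sig_v
            # and a zero-mean Gaussian spatial prior with s.d. sig_p.
            va, vv, vp = sig_a**2, sig_v**2, sig_p**2
            # Likelihood of the cue pair under one common source.
            d1 = va * vv + va * vp + vv * vp
            l1 = (np.exp(-0.5 * ((x_a - x_v)**2 * vp + x_v**2 * va + x_a**2 * vv) / d1)
                  / (2 * np.pi * np.sqrt(d1)))
            # Likelihood under two independent sources.
            l2 = (np.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp)))
                  / (2 * np.pi * np.sqrt((va + vp) * (vv + vp))))
            return l1 * prior / (l1 * prior + l2 * (1 - prior))

        # Capture is likely at small disparities and unlikely at large ones:
        for disparity in (2.0, 10.0, 25.0):
            print(disparity, p_common(x_a=disparity, x_v=0.0,
                                      sig_a=8.0, sig_v=2.0, sig_p=20.0))

    Raising or lowering `prior` widens or narrows the range of disparities over which capture is probable, which is the parameter shift the study infers between its two tasks.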

  2. Testing the dual-pathway model for auditory processing in human cortex.

    PubMed

    Zündorf, Ida C; Lewald, Jörg; Karnath, Hans-Otto

    2016-01-01

    Analogous to the visual system, auditory information has been proposed to be processed in two largely segregated streams: an anteroventral ("what") pathway mainly subserving sound identification and a posterodorsal ("where") stream mainly subserving sound localization. Despite the popularity of this assumption, the degree of separation of spatial and non-spatial auditory information processing in cortex is still under discussion. In the present study, a statistical approach was implemented to investigate potential behavioral dissociations for spatial and non-spatial auditory processing in stroke patients, and voxel-wise lesion analyses were used to uncover their neural correlates. The results generally provided support for anatomically and functionally segregated auditory networks. However, some degree of anatomo-functional overlap between "what" and "where" aspects of processing was found in the superior pars opercularis of right inferior frontal gyrus (Brodmann area 44), suggesting the potential existence of a shared target area of both auditory streams in this region. Moreover, beyond the typically defined posterodorsal stream (i.e., posterior superior temporal gyrus, inferior parietal lobule, and superior frontal sulcus), occipital lesions were found to be associated with sound localization deficits. These results, indicating anatomically and functionally complex cortical networks for spatial and non-spatial auditory processing, are roughly consistent with the dual-pathway model of auditory processing in its original form, but argue for the need to refine and extend this widely accepted hypothesis. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Non-visual spatial tasks reveal increased interactions with stance postural control.

    PubMed

    Woollacott, Marjorie; Vander Velde, Timothy

    2008-05-07

    The current investigation aimed to contrast the level and quality of dual-task interactions resulting from the combined performance of a challenging primary postural task and three specific, yet categorically dissociated, secondary central executive tasks. Experiments determined the extent to which modality-specific (visual vs. auditory) and code-specific (non-spatial vs. spatial) cognitive resources contributed to postural interference in young adults (n=9) in a dual-task setting. We hypothesized that the different forms of executive n-back task processing employed (visual-object, auditory-object and auditory-spatial) would display contrasting levels of interaction with tandem Romberg stance postural control, and that interactions within the spatial domain would prove most vulnerable to dual-task interference. Across all cognitive tasks employed, including auditory-object (aOBJ), auditory-spatial (aSPA), and visual-object (vOBJ) tasks, increasing n-back task complexity produced correlated increases in verbal reaction time measures. Increasing cognitive task complexity also resulted in consistent decreases in judgment accuracy. Postural performance was significantly influenced by the type of cognitive loading delivered. At comparable levels of cognitive task difficulty (n-back demands and accuracy judgments), the performance of challenging auditory-spatial tasks produced significantly greater levels of postural sway than either the auditory-object or visual-object tasks. These results suggest that it is the employment of limited non-visual, spatially based coding resources that may underlie previously observed visual dual-task interference effects with stance postural control in healthy young adults.

  4. Comparing Auditory-Only and Audiovisual Word Learning for Children with Hearing Loss.

    PubMed

    McDaniel, Jena; Camarata, Stephen; Yoder, Paul

    2018-05-15

    Although reducing visual input to emphasize auditory cues is a common practice in pediatric auditory (re)habilitation, the extant literature offers minimal empirical evidence for whether unisensory auditory-only (AO) or multisensory audiovisual (AV) input is more beneficial to children with hearing loss for developing spoken language skills. Using an adapted alternating treatments single case research design, we evaluated the effectiveness and efficiency of a receptive word learning intervention with and without access to visual speechreading cues. Four preschool children with prelingual hearing loss participated. Based on probes without visual cues, three participants demonstrated strong evidence for learning in the AO and AV conditions relative to a control (no-teaching) condition. No participants demonstrated a differential rate of learning between AO and AV conditions. Neither an inhibitory effect predicted by a unisensory theory nor a beneficial effect predicted by a multisensory theory for providing visual cues was identified. Clinical implications are discussed.

  5. Response to own name in children: ERP study of auditory social information processing.

    PubMed

    Key, Alexandra P; Jones, Dorita; Peters, Sarika U

    2016-09-01

    Auditory processing is an important component of cognitive development, and names are among the most frequently occurring receptive language stimuli. Although own name processing has been examined in infants and adults, surprisingly little data exist on responses to own name in children. The present ERP study examined spoken name processing in 32 children (M=7.85 years) using a passive listening paradigm. Our results demonstrated that children differentiate own and close other's names from unknown names, as reflected by the enhanced parietal P300 response. The responses to own and close other names did not differ between each other. Repeated presentations of an unknown name did not result in the same familiarity as the known names. These results suggest that auditory ERPs to known/unknown names are a feasible means to evaluate complex auditory processing without the need for overt behavioral responses. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Response to Own Name in Children: ERP Study of Auditory Social Information Processing

    PubMed Central

    Key, Alexandra P.; Jones, Dorita; Peters, Sarika U.

    2016-01-01

    Auditory processing is an important component of cognitive development, and names are among the most frequently occurring receptive language stimuli. Although own name processing has been examined in infants and adults, surprisingly little data exist on responses to own name in children. The present ERP study examined spoken name processing in 32 children (M=7.85 years) using a passive listening paradigm. Our results demonstrated that children differentiate own and close other’s names from unknown names, as reflected by the enhanced parietal P300 response. The responses to own and close other names did not differ between each other. Repeated presentations of an unknown name did not result in the same familiarity as the known names. These results suggest that auditory ERPs to known/unknown names are a feasible means to evaluate complex auditory processing without the need for overt behavioral responses. PMID:27456543

  7. Co-localisation of abnormal brain structure and function in specific language impairment

    PubMed Central

    Badcock, Nicholas A.; Bishop, Dorothy V.M.; Hardiman, Mervyn J.; Barry, Johanna G.; Watkins, Kate E.

    2012-01-01

    We assessed the relationship between brain structure and function in 10 individuals with specific language impairment (SLI), compared to six unaffected siblings, and 16 unrelated control participants with typical language. Voxel-based morphometry indicated that grey matter in the SLI group, relative to controls, was increased in the left inferior frontal cortex and decreased in the right caudate nucleus and superior temporal cortex bilaterally. The unaffected siblings also showed reduced grey matter in the caudate nucleus relative to controls. In an auditory covert naming task, the SLI group showed reduced activation in the left inferior frontal cortex, right putamen, and in the superior temporal cortex bilaterally. Despite spatially coincident structural and functional abnormalities in frontal and temporal areas, the relationships between structure and function in these regions were different. These findings suggest multiple structural and functional abnormalities in SLI that are differently associated with receptive and expressive language processing. PMID:22137677

  8. Early Visual Deprivation Severely Compromises the Auditory Sense of Space in Congenitally Blind Children

    ERIC Educational Resources Information Center

    Vercillo, Tiziana; Burr, David; Gori, Monica

    2016-01-01

    A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind…

  9. The North American Listening in Spatialized Noise-Sentences test (NA LiSN-S): normative data and test-retest reliability studies for adolescents and young adults.

    PubMed

    Brown, David K; Cameron, Sharon; Martin, Jeffrey S; Watson, Charlene; Dillon, Harvey

    2010-01-01

    The Listening in Spatialized Noise-Sentences test (LiSN-S; Cameron and Dillon, 2009) was originally developed to assess auditory stream segregation skills in children aged 6 to 11 yr with suspected central auditory processing disorder. The LiSN-S creates a three-dimensional auditory environment under headphones. A simple repetition-response protocol is used to assess a listener's speech reception threshold (SRT) for target sentences presented in competing speech maskers. Performance is measured as the improvement in SRT in dB gained when either pitch, spatial, or both pitch and spatial cues are incorporated in the maskers. A North American-accented version of the LiSN-S (NA LiSN-S) is available for use in the United States and Canada. To develop normative data for adolescents and adults on the NA LiSN-S, to compare these data with those of children aged 6 to 11 yr as documented in Cameron et al (2009), and to consolidate the child, adolescent, and adult normative and retest data to allow the software to be used with a wider population. In a descriptive design, normative data and test-retest reliability data were collected. One hundred and twenty normally hearing participants took part in the normative data study (67 adolescents aged 12 yr, 1 mo, to 17 yr, 10 mo, and 53 adults aged 19 yr, 10 mo, to 30 yr, 30 mo). Forty-nine participants returned between 1 and 4 mo after the initial assessment for retesting. Participants were recruited from sites in Cincinnati, Dallas, and Calgary. When combined with data collected from children aged 6 to 11 yr, a trend of improved performance as a function of increasing age was found across performance measures. ANOVA (analysis of variance) revealed a significant effect of age on performance. Planned contrasts revealed that there were no significant differences between adults and children aged 13 yr and older on the low-cue SRT; 14 yr and older on talker and spatial advantage; 15 yr and older on total advantage; and 16 yr and older on the high-cue SRT. Mean test-retest differences on the various NA LiSN-S performance measures for the combined child, adult, and adolescent data ranged from 0.05 to 0.5 dB. Paired comparisons revealed test-retest differences were not significant on any measure of the NA LiSN-S except low-cue SRT. Test-retest differences across measures did not differ as a function of age. Test and retest scores were significantly correlated for all NA LiSN-S measures. The ability to use either spatial or talker cues in isolation becomes adultlike by about 14 yr of age, whereas the ability to combine spatial and talker cues does not fully mature until closer to adulthood. By consolidating child, adolescent, and adult normative and retest data, the NA LiSN-S can now be utilized to assess auditory processing skills in a wider population. American Academy of Audiology.
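
    Illustrative note (not part of the record): the advantage measures described above are difference scores between SRT conditions. The sketch below assumes the standard difference-score definitions (advantage = low-cue SRT minus cued-condition SRT); the condition names and dB values are hypothetical placeholders.

    ```python
    def lisn_s_advantages(srt):
        """Advantage scores (dB) from LiSN-S speech reception thresholds.
        srt: dict of SRTs (dB) for 'low_cue', 'talker', 'spatial', 'high_cue'
        conditions. Advantages are improvements relative to the low-cue SRT,
        so larger positive values mean greater benefit from the cue(s)."""
        return {
            "talker_advantage": srt["low_cue"] - srt["talker"],
            "spatial_advantage": srt["low_cue"] - srt["spatial"],
            "total_advantage": srt["low_cue"] - srt["high_cue"],
        }

    # Hypothetical SRTs (more negative = better masked speech reception).
    print(lisn_s_advantages({"low_cue": -1.0, "talker": -5.2,
                             "spatial": -10.3, "high_cue": -13.9}))
    ```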

  10. Proceedings of the Lake Wilderness Attention Conference Held at Seattle Washington, 22-24 September 1980.

    DTIC Science & Technology

    1981-07-10

    Pohlmann, L. D. Some models of observer behavior in two-channel auditory signal detection. Perception and Psychophysics, 1973, 14, 101-109. Spelke...spatial), and processing modalities (auditory versus visual input, vocal versus manual response). If validated, this configuration has both theoretical...conclusion that auditory and visual processes will compete, as will spatial and verbal (albeit to a lesser extent than auditory-auditory, visual-visual

  11. Blissymbols and Manual Signs: A Multimodal Approach to Intervention in a Case of Multiple Disability.

    ERIC Educational Resources Information Center

    Hooper, Janice; And Others

    1987-01-01

    A multimodal intervention program designed for a nine-year-old with severe communication problems (secondary to cerebral palsy, receptive dysphasia, and auditory agnosia) combined manual signs and graphic symbols to help her communicate. The intensive, highly structured program had significant positive results. (Author/CB)

  12. Effects of Amplification, Speechreading, and Classroom Environments on Reception of Speech

    ERIC Educational Resources Information Center

    Blair, James C.

    1977-01-01

    Two sources of amplification (binaural ear-level hearing aids and RF auditory training units with environmental microphones on) were compared in 18 hard-of-hearing students (7 to 14 years old) under "ideal" and "typical" classroom noise levels, with and without visual speechreading cues provided. (Author/IM)

  13. Nonlinear Y-Like Receptive Fields in the Early Visual Cortex: An Intermediate Stage for Building Cue-Invariant Receptive Fields from Subcortical Y Cells.

    PubMed

    Gharat, Amol; Baker, Curtis L

    2017-01-25

    Many of the neurons in early visual cortex are selective for the orientation of boundaries defined by first-order cues (luminance) as well as second-order cues (contrast, texture). The neural circuit mechanism underlying this selectivity is still unclear, but some studies have proposed that it emerges from spatial nonlinearities of subcortical Y cells. To understand how inputs from the Y-cell pathway might be pooled to generate cue-invariant receptive fields, we recorded visual responses from single neurons in cat Area 18 using linear multielectrode arrays. We measured responses to drifting and contrast-reversing luminance gratings as well as contrast modulation gratings. We found that a large fraction of these neurons have nonoriented responses to gratings, similar to those of subcortical Y cells: they respond at the second harmonic (F2) to high-spatial frequency contrast-reversing gratings and at the first harmonic (F1) to low-spatial frequency drifting gratings ("Y-cell signature"). For a given neuron, spatial frequency tuning for linear (F1) and nonlinear (F2) responses is quite distinct, similar to orientation-selective cue-invariant neurons. Also, these neurons respond to contrast modulation gratings with selectivity for the carrier (texture) spatial frequency and, in some cases, orientation. Their receptive field properties suggest that they could serve as building blocks for orientation-selective cue-invariant neurons. We propose a circuit model that combines ON- and OFF-center cortical Y-like cells in an unbalanced push-pull manner to generate orientation-selective, cue-invariant receptive fields. A significant fraction of neurons in early visual cortex have specialized receptive fields that allow them to selectively respond to the orientation of boundaries that are invariant to the cue (luminance, contrast, texture, motion) that defines them. However, the neural mechanism to construct such versatile receptive fields remains unclear. Using multielectrode recording, we found a large fraction of neurons in early visual cortex with receptive fields not selective for orientation that have spatial nonlinearities like those of subcortical Y cells. These are strong candidates for building cue-invariant orientation-selective neurons; we present a neural circuit model that pools such neurons in an imbalanced "push-pull" manner, to generate orientation-selective cue-invariant receptive fields. Copyright © 2017 the authors.
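
    Illustrative note (not part of the record): the "Y-cell signature" is defined by the relative amplitude of the first (F1) and second (F2) response harmonics at the stimulus temporal frequency. A minimal sketch of that harmonic analysis; the simulated full-wave-rectified response and the 2 Hz stimulus frequency are hypothetical.

    ```python
    import numpy as np

    def harmonic_amplitudes(rate, dt, f_stim):
        """Return response amplitudes at the first (F1) and second (F2)
        harmonics of the stimulus temporal frequency f_stim (Hz).
        rate: 1-D firing-rate array (spikes/s) sampled every dt seconds."""
        n = len(rate)
        freqs = np.fft.rfftfreq(n, d=dt)
        spectrum = 2.0 * np.abs(np.fft.rfft(rate)) / n  # single-sided amplitude
        f1 = spectrum[np.argmin(np.abs(freqs - f_stim))]
        f2 = spectrum[np.argmin(np.abs(freqs - 2 * f_stim))]
        return f1, f2

    # Hypothetical frequency-doubled ("Y-like") response to a 2 Hz
    # contrast-reversing grating: rectified ON plus rectified OFF signals.
    dt, f_stim = 0.001, 2.0
    t = np.arange(0.0, 2.0, dt)
    rate = (np.maximum(0, np.sin(2 * np.pi * f_stim * t))
            + np.maximum(0, -np.sin(2 * np.pi * f_stim * t)))
    f1, f2 = harmonic_amplitudes(rate, dt, f_stim)
    print(f"F1 = {f1:.3f}, F2 = {f2:.3f}")  # F2 >> F1 marks the Y signature
    ```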

  14. Natural image sequences constrain dynamic receptive fields and imply a sparse code.

    PubMed

    Häusler, Chris; Susemihl, Alex; Nawrot, Martin P

    2013-11-06

    In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multi-variate benchmark dataset this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
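
    Illustrative note (not part of the record): the record's approach combines feature learning with a sparsity constraint. The sketch below shows only the sparse-inference step, using plain L1-penalized coding solved by iterative shrinkage-thresholding (ISTA), not the authors' temporal restricted Boltzmann machine; the dictionary and data are random placeholders.

    ```python
    import numpy as np

    def ista_sparse_code(X, D, lam=0.1, n_iter=200):
        """Infer sparse codes A minimizing 0.5*||X - D A||^2 + lam*||A||_1
        via iterative shrinkage-thresholding (ISTA).
        X: (p, n) data columns; D: (p, k) dictionary with unit-norm atoms."""
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of gradient
        A = np.zeros((D.shape[1], X.shape[1]))
        for _ in range(n_iter):
            G = A - (D.T @ (D @ A - X)) / L    # gradient step
            A = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0.0)  # shrink
        return A

    rng = np.random.default_rng(0)
    D = rng.normal(size=(64, 128))
    D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
    X = rng.normal(size=(64, 10))              # placeholder "image patches"
    A = ista_sparse_code(X, D)
    print("fraction of nonzero coefficients:", np.mean(A != 0))
    ```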

  15. Word Processing in Children With Autism Spectrum Disorders: Evidence From Event-Related Potentials.

    PubMed

    Sandbank, Micheal; Yoder, Paul; Key, Alexandra P

    2017-12-20

    This investigation was conducted to determine whether young children with autism spectrum disorders exhibited a canonical neural response to word stimuli and whether putative event-related potential (ERP) measures of word processing were correlated with a concurrent measure of receptive language. Additional exploratory analyses were used to examine whether the magnitude of the association between ERP measures of word processing and receptive language varied as a function of the number of word stimuli the participants reportedly understood. Auditory ERPs were recorded in response to spoken words and nonwords presented with equal probability in 34 children aged 2-5 years with a diagnosis of autism spectrum disorder who were in the early stages of language acquisition. Average amplitudes and amplitude differences between word and nonword stimuli within 200-500 ms were examined at left temporal (T3) and parietal (P3) electrode clusters. Receptive vocabulary size and the number of experimental stimuli understood were concurrently measured using the MacArthur-Bates Communicative Development Inventories. Across the entire participant group, word-nonword amplitude differences were diminished. The average word-nonword amplitude difference at T3 was related to receptive vocabulary only if 5 or more word stimuli were understood. If ERPs are to ever have clinical utility, their construct validity must be established by investigations that confirm their associations with predictably related constructs. These results contribute to accruing evidence, suggesting that a valid measure of auditory word processing can be derived from the left temporal response to words and nonwords. In addition, this measure can be useful even for participants who do not reportedly understand all of the words presented as experimental stimuli, though it will be important for researchers to track familiarity with word stimuli in future investigations. https://doi.org/10.23641/asha.5614840.
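
    Illustrative note (not part of the record): the analysis described above reduces each child's ERP to a mean amplitude in the 200-500 ms window and correlates the word-nonword difference with vocabulary. A minimal sketch of that pipeline; all waveforms and vocabulary counts below are random placeholders, not the study's data.

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    def mean_amplitude(erp, times, t0=0.200, t1=0.500):
        """Average ERP amplitude within the t0-t1 window (seconds)."""
        win = (times >= t0) & (times <= t1)
        return erp[win].mean()

    rng = np.random.default_rng(1)
    times = np.linspace(-0.1, 0.8, 226)
    n_children = 34
    words = rng.normal(size=(n_children, times.size))     # placeholder ERPs
    nonwords = rng.normal(size=(n_children, times.size))
    vocab = rng.integers(10, 300, size=n_children)        # placeholder counts

    diff = np.array([mean_amplitude(w, times) - mean_amplitude(nw, times)
                     for w, nw in zip(words, nonwords)])
    r, p = pearsonr(diff, vocab)
    print(f"word-nonword difference vs. vocabulary: r = {r:.2f}, p = {p:.3f}")
    ```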

  16. Modality-specificity of Selective Attention Networks.

    PubMed

    Stewart, Hannah J; Amitay, Sygal

    2015-01-01

    To establish the modality specificity and generality of selective attention networks. Forty-eight young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. The analysis yielded a four-component solution. The first component comprised a range of measures from the TEA and was labeled "general attention." The third component was labeled "auditory attention," as it only contained measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled "spatial orienting" and "spatial conflict," respectively; they comprised orienting and conflict resolution measures from the vANT, aANT, and TAiL attend-location task, all tasks based upon spatial judgments (e.g., the direction of a target arrow or sound location). These results do not support our a-priori hypothesis that attention networks are either modality specific or supramodal. Auditory attention separated into selectively attending to spatial and non-spatial features, with the auditory spatial attention loading onto the same factor as visual spatial attention, suggesting spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research will be required to ascertain whether non-spatial attention is modality-specific.
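
    Illustrative note (not part of the record): a minimal sketch of an exploratory factor analysis like the one described, using scikit-learn's FactorAnalysis with varimax rotation. The 48 x 10 score matrix is a random placeholder, and the original study may have used a different extraction or rotation method.

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Hypothetical scores: 48 participants x 10 attention measures
    # (e.g., vANT/aANT alerting, orienting, conflict; TEA; TAiL subscores).
    rng = np.random.default_rng(2)
    scores = rng.normal(size=(48, 10))

    fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
    fa.fit(scores)
    loadings = fa.components_.T      # (measures x factors) loading matrix
    print(np.round(loadings, 2))     # inspect which measures load together
    ```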

  17. Speech processing in children with functional articulation disorders.

    PubMed

    Gósy, Mária; Horváth, Viktória

    2015-03-01

    This study explored auditory speech processing and comprehension abilities in 5-8-year-old monolingual Hungarian children with functional articulation disorders (FADs) and their typically developing peers. Our main hypothesis was that children with FAD would show co-existing auditory speech processing disorders, with different levels of these skills depending on the nature of the receptive processes. The tasks included (i) sentence and non-word repetitions, (ii) non-word discrimination and (iii) sentence and story comprehension. Results suggest that the auditory speech processing of children with FAD is underdeveloped compared with that of typically developing children, and largely varies across task types. In addition, there are differences between children with FAD and controls in all age groups from 5 to 8 years. Our results have several clinical implications.

  18. Audiologic Management of Older Adults With Hearing Loss and Compromised Cognitive/Psychoacoustic Auditory Processing Capabilities

    PubMed Central

    Kricos, Patricia B.

    2006-01-01

    The number and proportion of older adults in the United States population is increasing, and more clinical audiologists will be called upon to deliver hearing care to the approximately 35% to 50% of them who experience hearing difficulties. In recent years, the characteristics and sources of receptive communication difficulties in older individuals have been investigated by hearing scientists, cognitive psychologists, and audiologists. It is becoming increasingly apparent that cognitive compromises and psychoacoustic auditory processing disorders associated with aging may contribute to communication difficulties in this population. This paper presents an overview of best practices, based on our current knowledge base, for clinical management of older individuals with limitations in cognitive or psychoacoustic auditory processing capabilities, or both, that accompany aging. PMID:16528428

  19. The Relationship between Form and Function Level Receptive Prosodic Abilities in Autism

    ERIC Educational Resources Information Center

    Jarvinen-Pasley, Anna; Peppe, Susan; King-Smith, Gavin; Heaton, Pamela

    2008-01-01

    Prosody can be conceived as having form (auditory-perceptual characteristics) and function (pragmatic/linguistic meaning). No known studies have examined the relationship between form- and function-level prosodic skills in relation to the effects of stimulus length and/or complexity upon such abilities in autism. Research in this area is both…

  20. A Pre and Post-Practicum Comparison of Teacher Interns' Perceptions of Diagnostic Information.

    ERIC Educational Resources Information Center

    Berkell, Dianne E.; Schmelkin, Liora Pedhazur

    A study examined the perceptions of special education interns on a set of diagnostic constructs: (1) mental age; (2) developmental history; (3) IQ; (4) identifying information; (5) family history; (6) medical history; (7) receptive language; (8) fine motor coordination; (9) auditory discrimination; (10) memory; (11) written language; (12) self…

  1. The Auditory System of the Minke Whale (Balaenoptera Acutorostrata): A Potential Fatty Sound Reception Pathway in a Mysticete Cetacean

    DTIC Science & Technology

    2012-09-01

    thermoregulate their acoustic fats through increased blood flow to both the melon and peri-mandibular fats (Houser et al., 2004). Cetaceans inhabiting...Fleischer G. 1976. Hearing in Extinct Cetaceans as Determined by Cochlear Structure. Journal of Paleontology 50:133-152. Fraser FC, Purves PE

  2. WISC-R Scatter and Patterns in Three Types of Learning Disabled Children.

    ERIC Educational Resources Information Center

    Tabachnick, Barbara G.; Turbey, Carolyn B.

    Wechsler Intelligence Scale for Children-Revised (WISC-R) subtest scatter and Bannatyne recategorization scores were investigated with three types of learning disabilities in children 6 to 16 years old: visual-motor and visual-perceptual disability (N=66); auditory-perceptual and receptive language deficit (N=18); and memory deficit (N=12). Three…

  3. Forebrain pathway for auditory space processing in the barn owl.

    PubMed

    Cohen, Y E; Miller, G L; Knudsen, E I

    1998-02-01

    The forebrain plays an important role in many aspects of sound localization behavior. Yet, the forebrain pathway that processes auditory spatial information is not known for any species. Using standard anatomic labeling techniques, we took a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L completely eliminated auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway.

  4. The spatial structure of a nonlinear receptive field.

    PubMed

    Schwartz, Gregory W; Okawa, Haruhisa; Dunn, Felice A; Morgan, Josh L; Kerschensteiner, Daniel; Wong, Rachel O; Rieke, Fred

    2012-11-01

    Understanding a sensory system implies the ability to predict responses to a variety of inputs from a common model. In the retina, this includes predicting how the integration of signals across visual space shapes the outputs of retinal ganglion cells. Existing models of this process generalize poorly to predict responses to new stimuli. This failure arises in part from properties of the ganglion cell response that are not well captured by standard receptive-field mapping techniques: nonlinear spatial integration and fine-scale heterogeneities in spatial sampling. Here we characterize a ganglion cell's spatial receptive field using a mechanistic model based on measurements of the physiological properties and connectivity of only the primary excitatory circuitry of the retina. The resulting simplified circuit model successfully predicts ganglion-cell responses to a variety of spatial patterns and thus provides a direct correspondence between circuit connectivity and retinal output.
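
    Illustrative note (not part of the record): the key contrast in this record is between linear spatial integration and nonlinear (subunit-rectifying) integration. A minimal sketch showing why a fine grating that averages to zero over the receptive field drives a subunit model but not a linear one; the Gaussian profile and grating are hypothetical, not the paper's mechanistic circuit model.

    ```python
    import numpy as np

    def rf_response(stimulus, weights, subunits=False):
        """Response of a model cell to a 1-D spatial stimulus.
        Linear model: rectify the weighted sum over space.
        Subunit model: rectify each subunit output before summing."""
        if subunits:
            return np.sum(np.maximum(weights * stimulus, 0.0))
        return np.maximum(np.dot(weights, stimulus), 0.0)

    x = np.arange(40)
    weights = np.exp(-0.5 * ((x - 20) / 8.0) ** 2)   # Gaussian RF profile
    grating = np.sign(np.sin(2 * np.pi * x / 4.0))   # fine square-wave grating

    print("linear model: ", rf_response(grating, weights))          # near zero
    print("subunit model:", rf_response(grating, weights, True))    # robust
    ```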

  5. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events.

    PubMed

    Stekelenburg, Jeroen J; Vroomen, Jean

    2012-01-01

    In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  6. Sparse Spectro-Temporal Receptive Fields Based on Multi-Unit and High-Gamma Responses in Human Auditory Cortex

    PubMed Central

    Jenison, Rick L.; Reale, Richard A.; Armstrong, Amanda L.; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A.

    2015-01-01

    Spectro-Temporal Receptive Fields (STRFs) were estimated from both multi-unit sorted clusters and high-gamma power responses in human auditory cortex. Intracranial electrophysiological recordings were used to measure responses to a random chord sequence of Gammatone stimuli. Traditional methods for estimating STRFs from single-unit recordings, such as spike-triggered-averages, tend to be noisy and are less robust to other response signals such as local field potentials. We present an extension to recently advanced methods for estimating STRFs from generalized linear models (GLM). A new variant of regression using regularization that penalizes non-zero coefficients is described, which results in a sparse solution. The frequency-time structure of the STRF tends toward grouping in different areas of frequency-time and we demonstrate that group sparsity-inducing penalties applied to GLM estimates of STRFs reduces the background noise while preserving the complex internal structure. The contribution of local spiking activity to the high-gamma power signal was factored out of the STRF using the GLM method, and this contribution was significant in 85 percent of the cases. Although the GLM methods have been used to estimate STRFs in animals, this study examines the detailed structure directly from auditory cortex in the awake human brain. We used this approach to identify an abrupt change in the best frequency of estimated STRFs along posteromedial-to-anterolateral recording locations along the long axis of Heschl’s gyrus. This change correlates well with a proposed transition from core to non-core auditory fields previously identified using the temporal response properties of Heschl’s gyrus recordings elicited by click-train stimuli. PMID:26367010
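
    Illustrative note (not part of the record): a minimal sketch of STRF estimation from a time-lagged stimulus design matrix with a sparsity-inducing penalty. For simplicity it uses a plain Lasso (elementwise L1) on a linear-Gaussian model rather than the paper's group-sparse GLM; the random-chord spectrogram and response are synthetic placeholders.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def lagged_design(spec, n_lags):
        """Design matrix of stimulus history: row t holds spectrogram
        frames t, t-1, ..., t-n_lags+1, flattened (lag x frequency)."""
        n_t, n_f = spec.shape
        X = np.zeros((n_t, n_f * n_lags))
        for lag in range(n_lags):
            X[lag:, lag * n_f:(lag + 1) * n_f] = spec[:n_t - lag]
        return X

    rng = np.random.default_rng(3)
    n_t, n_f, n_lags = 5000, 16, 10
    spec = rng.normal(size=(n_t, n_f))                   # random-chord stimulus
    true_strf = rng.normal(size=n_f * n_lags) * (rng.random(n_f * n_lags) < 0.1)
    y = lagged_design(spec, n_lags) @ true_strf + rng.normal(size=n_t)

    model = Lasso(alpha=0.05).fit(lagged_design(spec, n_lags), y)
    strf = model.coef_.reshape(n_lags, n_f).T            # frequency x time-lag
    print("nonzero STRF weights:", np.count_nonzero(strf))
    ```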

  7. PRESCHOOL SPEECH ARTICULATION AND NONWORD REPETITION ABILITIES MAY HELP PREDICT EVENTUAL RECOVERY OR PERSISTENCE OF STUTTERING

    PubMed Central

    Spencer, Caroline; Weber-Fox, Christine

    2014-01-01

    Purpose: In preschool children, we investigated whether expressive and receptive language, phonological, articulatory, and/or verbal working memory proficiencies aid in predicting eventual recovery or persistence of stuttering. Methods: Participants included 65 children, including 25 children who do not stutter (CWNS) and 40 who stutter (CWS) recruited at age 3;9–5;8. At initial testing, participants were administered the Test of Auditory Comprehension of Language, 3rd edition (TACL-3), Structured Photographic Expressive Language Test, 3rd edition (SPELT-3), Bankson-Bernthal Test of Phonology-Consonant Inventory subtest (BBTOP-CI), Nonword Repetition Test (NRT; Dollaghan & Campbell, 1998), and Test of Auditory Perceptual Skills-Revised (TAPS-R) auditory number memory and auditory word memory subtests. Stuttering behaviors of CWS were assessed in subsequent years, forming groups whose stuttering eventually persisted (CWS-Per; n=19) or recovered (CWS-Rec; n=21). Proficiency scores in morphosyntactic skills, consonant production, verbal working memory for known words, and phonological working memory and speech production for novel nonwords obtained at the initial testing were analyzed for each group. Results: CWS-Per were less proficient than CWNS and CWS-Rec in measures of consonant production (BBTOP-CI) and repetition of novel phonological sequences (NRT). In contrast, receptive language, expressive language, and verbal working memory abilities did not distinguish CWS-Rec from CWS-Per. Binary logistic regression analysis indicated that preschool BBTOP-CI scores and overall NRT proficiency significantly predicted future recovery status. Conclusion: Results suggest that phonological and speech articulation abilities in the preschool years should be considered with other predictive factors as part of a comprehensive risk assessment for the development of chronic stuttering. PMID:25173455
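
    Illustrative note (not part of the record): a minimal sketch of the binary logistic regression step described above, predicting recovery status from two predictor scores. The scores and labels below are simulated placeholders, not the study's data.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical standardized preschool scores for children who stutter:
    # columns = BBTOP-CI (consonant inventory) and NRT (nonword repetition).
    rng = np.random.default_rng(4)
    X = rng.normal(size=(40, 2))
    recovered = (X @ np.array([0.8, 1.1]) + rng.normal(scale=0.5, size=40)) > 0

    clf = LogisticRegression().fit(X, recovered)
    print("coefficients (BBTOP-CI, NRT):", np.round(clf.coef_[0], 2))
    print("predicted recovery probabilities:",
          np.round(clf.predict_proba(X[:3])[:, 1], 2))
    ```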

  8. Atypical long-latency auditory event-related potentials in a subset of children with specific language impairment

    PubMed Central

    Bishop, Dorothy VM; Hardiman, Mervyn; Uwer, Ruth; von Suchodoletz, Waldemar

    2007-01-01

    It has been proposed that specific language impairment (SLI) is the consequence of low-level abnormalities in auditory perception. However, studies of long-latency auditory ERPs in children with SLI have generated inconsistent findings. A possible reason for this inconsistency is the heterogeneity of SLI. The intraclass correlation (ICC) has been proposed as a useful statistic for evaluating heterogeneity because it allows one to compare an individual's auditory ERP with the grand average waveform from a typically developing reference group. We used this method to reanalyse auditory ERPs from a sample previously described by Uwer, Albrecht and von Suchodoletz (2002). In a subset of children with receptive SLI, there was less correspondence (i.e. lower ICC) with the normative waveform (based on the control grand average) than for typically developing children. This poorer correspondence was seen in responses to both tone and speech stimuli for the period 100–228 ms post stimulus onset. The effect was lateralized and seen at right- but not left-sided electrodes. PMID:17683344
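
    Illustrative note (not part of the record): the analysis compares each child's waveform against the control grand average with an agreement index. The sketch below uses Lin's concordance correlation coefficient, a closely related stand-in for the intraclass correlation used in the paper (it likewise penalizes both shape and amplitude/offset differences); the waveforms are synthetic placeholders.

    ```python
    import numpy as np

    def waveform_agreement(x, y):
        """Concordance correlation coefficient between an individual ERP
        segment x and a reference grand-average segment y (same time base).
        Values near 1 indicate close agreement with the normative waveform."""
        sx, sy = x.var(), y.var()
        sxy = np.mean((x - x.mean()) * (y - y.mean()))
        return 2 * sxy / (sx + sy + (x.mean() - y.mean()) ** 2)

    # Hypothetical 100-228 ms segments (arbitrary units).
    t = np.linspace(0.100, 0.228, 65)
    grand_avg = -np.sin(2 * np.pi * (t - 0.1) / 0.25)
    individual = (0.8 * grand_avg
                  + 0.3 * np.random.default_rng(5).normal(size=t.size))
    print(f"agreement = {waveform_agreement(individual, grand_avg):.2f}")
    ```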

  9. Integrating Information from Different Senses in the Auditory Cortex

    PubMed Central

    King, Andrew J.; Walker, Kerry M.M.

    2015-01-01

    Multisensory integration was once thought to be the domain of brain areas high in the cortical hierarchy, with early sensory cortical fields devoted to unisensory processing of inputs from their given set of sensory receptors. More recently, a wealth of evidence documenting visual and somatosensory responses in auditory cortex, even as early as the primary fields, has changed this view of cortical processing. These multisensory inputs may serve to enhance responses to sounds that are accompanied by other sensory cues, effectively making them easier to hear, but may also act more selectively to shape the receptive field properties of auditory cortical neurons to the location or identity of these events. We discuss the new, converging evidence that multiplexing of neural signals may play a key role in informatively encoding and integrating signals in auditory cortex across multiple sensory modalities. We highlight some of the many open research questions that exist about the neural mechanisms that give rise to multisensory integration in auditory cortex, which should be addressed in future experimental and theoretical studies. PMID:22798035

  10. Auditory Associative Memory and Representational Plasticity in the Primary Auditory Cortex

    PubMed Central

    Weinberger, Norman M.

    2009-01-01

    Historically, the primary auditory cortex has been largely ignored as a substrate of auditory memory, perhaps because studies of associative learning could not reveal the plasticity of receptive fields (RFs). The use of a unified experimental design, in which RFs are obtained before and after standard training (e.g., classical and instrumental conditioning) revealed associative representational plasticity, characterized by facilitation of responses to tonal conditioned stimuli (CSs) at the expense of other frequencies, producing CS-specific tuning shifts. Associative representational plasticity (ARP) possesses the major attributes of associative memory: it is highly specific, discriminative, rapidly acquired, consolidates over hours and days and can be retained indefinitely. The nucleus basalis cholinergic system is sufficient both for the induction of ARP and for the induction of specific auditory memory, including control of the amount of remembered acoustic details. Extant controversies regarding the form, function and neural substrates of ARP appear largely to reflect different assumptions, which are explicitly discussed. The view that the forms of plasticity are task-dependent is supported by ongoing studies in which auditory learning involves CS-specific decreases in threshold or bandwidth without affecting frequency tuning. Future research needs to focus on the factors that determine ARP and their functions in hearing and in auditory memory. PMID:17344002

  11. Modality-specificity of Selective Attention Networks

    PubMed Central

    Stewart, Hannah J.; Amitay, Sygal

    2015-01-01

    Objective: To establish the modality specificity and generality of selective attention networks. Method: Forty-eight young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. Results: The analysis yielded a four-component solution. The first component comprised a range of measures from the TEA and was labeled “general attention.” The third component was labeled “auditory attention,” as it only contained measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled “spatial orienting” and “spatial conflict,” respectively—they comprised orienting and conflict resolution measures from the vANT, aANT, and TAiL attend-location task—all tasks based upon spatial judgments (e.g., the direction of a target arrow or sound location). Conclusions: These results do not support our a-priori hypothesis that attention networks are either modality specific or supramodal. Auditory attention separated into selectively attending to spatial and non-spatial features, with the auditory spatial attention loading onto the same factor as visual spatial attention, suggesting spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research will be required to ascertain whether non-spatial attention is modality-specific. PMID:26635709

  12. A neural network model of ventriloquism effect and aftereffect.

    PubMed

    Magosso, Elisa; Cuppini, Cristiano; Ursino, Mauro

    2012-01-01

    Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. The general explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. The main results are: i) the less localized stimulus is strongly biased toward the most localized stimulus and not vice versa; ii) the amount of the ventriloquism effect changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to parameter value changes. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses, to explain the ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two unimodal layers reciprocally interconnected may explain the ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding the neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.
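
    Illustrative note (not part of the record): a minimal feedforward caricature of the ventriloquism bias, not the paper's recurrent model with lateral synapses and Hebbian learning. A sharp visual population bump adds cross-modal input to a broad auditory layer, pulling the decoded (centroid) auditory location toward the flash; all tuning widths, locations, and the coupling weight are hypothetical.

    ```python
    import numpy as np

    def population(center, sigma, x):
        """Gaussian population activity over azimuth x (degrees)."""
        return np.exp(-0.5 * ((x - center) / sigma) ** 2)

    x = np.linspace(-40, 40, 801)
    aud = population(0.0, 12.0, x)      # broad auditory tuning, source at 0 deg
    vis = population(10.0, 3.0, x)      # sharp visual tuning, flash at 10 deg
    w_cross = 0.6                       # inter-layer connection strength

    # Cross-modal input from the visual layer biases auditory activity;
    # the decoded location (activity centroid) shifts toward the flash.
    aud_biased = aud + w_cross * vis
    decoded = np.sum(x * aud_biased) / np.sum(aud_biased)
    print(f"decoded auditory location: {decoded:.1f} deg (true 0, visual 10)")
    ```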

  13. A computational theory of visual receptive fields.

    PubMed

    Lindeberg, Tony

    2013-12-01

    A receptive field constitutes a region in the visual field where a visual cell or a visual operator responds to visual stimuli. This paper presents a theory for what types of receptive field profiles can be regarded as natural for an idealized vision system, given a set of structural requirements on the first stages of visual processing that reflect symmetry properties of the surrounding world. These symmetry properties include (i) covariance properties under scale changes, affine image deformations, and Galilean transformations of space-time as occur for real-world image data as well as specific requirements of (ii) temporal causality implying that the future cannot be accessed and (iii) a time-recursive updating mechanism of a limited temporal buffer of the past as is necessary for a genuine real-time system. Fundamental structural requirements are also imposed to ensure (iv) mutual consistency and a proper handling of internal representations at different spatial and temporal scales. It is shown how a set of families of idealized receptive field profiles can be derived by necessity regarding spatial, spatio-chromatic, and spatio-temporal receptive fields in terms of Gaussian kernels, Gaussian derivatives, or closely related operators. Such image filters have been successfully used as a basis for expressing a large number of visual operations in computer vision, regarding feature detection, feature classification, motion estimation, object recognition, spatio-temporal recognition, and shape estimation. Hence, the associated so-called scale-space theory constitutes a both theoretically well-founded and general framework for expressing visual operations. There are very close similarities between receptive field profiles predicted from this scale-space theory and receptive field profiles found by cell recordings in biological vision. Among the family of receptive field profiles derived by necessity from the assumptions, idealized models with very good qualitative agreement are obtained for (i) spatial on-center/off-surround and off-center/on-surround receptive fields in the fovea and the LGN, (ii) simple cells with spatial directional preference in V1, (iii) spatio-chromatic double-opponent neurons in V1, (iv) space-time separable spatio-temporal receptive fields in the LGN and V1, and (v) non-separable space-time tilted receptive fields in V1, all within the same unified theory. In addition, the paper presents a more general framework for relating and interpreting these receptive fields conceptually and possibly predicting new receptive field profiles as well as for pre-wiring covariance under scaling, affine, and Galilean transformations into the representations of visual stimuli. This paper describes the basic structure of the necessity results concerning receptive field profiles regarding the mathematical foundation of the theory and outlines how the proposed theory could be used in further studies and modelling of biological vision. It is also shown how receptive field responses can be interpreted physically, as the superposition of relative variations of surface structure and illumination variations, given a logarithmic brightness scale, and how receptive field measurements will be invariant under multiplicative illumination variations and exposure control mechanisms.
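
    Illustrative note (not part of the record): a minimal sketch generating the idealized 1-D Gaussian-derivative receptive field profiles the theory predicts (order 0 = smoothing kernel, order 1 = odd edge-like profile, order 2 = even profile resembling a center-surround organization up to sign). The full theory also covers spatio-chromatic, spatio-temporal, and affine-covariant cases not shown here.

    ```python
    import numpy as np

    def gaussian_derivative_rf(x, sigma, order):
        """Order-th derivative of a 1-D Gaussian with scale sigma,
        used as an idealized receptive field profile."""
        g = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
        if order == 0:
            return g
        if order == 1:
            return -x / sigma**2 * g
        if order == 2:
            return (x**2 - sigma**2) / sigma**4 * g
        raise ValueError("orders above 2 not implemented in this sketch")

    x = np.linspace(-10, 10, 201)
    for order in (0, 1, 2):
        rf = gaussian_derivative_rf(x, sigma=2.0, order=order)
        print(f"order {order}: peak magnitude {np.abs(rf).max():.3f}")
    ```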

  14. Dual Coding of Frequency Modulation in the Ventral Cochlear Nucleus.

    PubMed

    Paraouty, Nihaad; Stasiak, Arkadiusz; Lorenzi, Christian; Varnet, Léo; Winter, Ian M

    2018-04-25

    Frequency modulation (FM) is a common acoustic feature of natural sounds and is known to play a role in robust sound source recognition. Auditory neurons show precise stimulus-synchronized discharge patterns that may be used for the representation of low-rate FM. However, it remains unclear whether this representation is based on synchronization to slow temporal envelope (ENV) cues resulting from cochlear filtering or phase locking to faster temporal fine structure (TFS) cues. To investigate the plausibility of those encoding schemes, single units of the ventral cochlear nucleus of guinea pigs of either sex were recorded in response to sine FM tones centered at the unit's best frequency (BF). The results show that, in contrast to high-BF units, for modulation depths within the receptive field, low-BF units (<4 kHz) demonstrate good phase locking to TFS. For modulation depths extending beyond the receptive field, the discharge patterns follow the ENV and fluctuate at the modulation rate. The receptive field proved to be a good predictor of the ENV responses for most primary-like and chopper units. The current in vivo data also reveal a high level of diversity in responses across unit types. TFS cues are mainly conveyed by low-frequency and primary-like units and ENV cues by chopper and onset units. The diversity of responses exhibited by cochlear nucleus neurons provides a neural basis for a dual-coding scheme of FM in the brainstem based on both ENV and TFS cues. SIGNIFICANCE STATEMENT Natural sounds, including speech, convey informative temporal modulations in frequency. Understanding how the auditory system represents those frequency modulations (FM) has important implications as robust sound source recognition depends crucially on the reception of low-rate FM cues. Here, we recorded 115 single-unit responses from the ventral cochlear nucleus in response to FM and provide the first physiological evidence of a dual-coding mechanism of FM via synchronization to temporal envelope cues and phase locking to temporal fine structure cues. We also demonstrate a diversity of neural responses with different coding specializations. These results support the dual-coding scheme proposed by psychophysicists to account for FM sensitivity in humans and provide new insights on how this might be implemented in the early stages of the auditory pathway. Copyright © 2018 the authors.
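
    Illustrative note (not part of the record): phase locking to TFS versus synchronization to ENV is conventionally quantified with vector strength (1 = perfect locking, 0 = none). A minimal sketch of that metric; the simulated spike train, the 500 Hz fine-structure frequency, and the 5 Hz modulation rate are hypothetical.

    ```python
    import numpy as np

    def vector_strength(spike_times, freq):
        """Phase locking of spike times (s) to a periodic component at
        freq (Hz): magnitude of the mean resultant phase vector."""
        phases = 2 * np.pi * freq * np.asarray(spike_times)
        return np.abs(np.mean(np.exp(1j * phases)))

    # Hypothetical unit: spikes locked to a 500 Hz fine structure,
    # compared against locking to a 5 Hz modulation envelope.
    rng = np.random.default_rng(6)
    n_spikes = 400
    cycles = rng.integers(0, 500, size=n_spikes)      # which TFS cycle fires
    jitter = rng.normal(scale=0.0002, size=n_spikes)  # 0.2 ms spike jitter
    spikes = cycles / 500.0 + jitter
    print("VS at 500 Hz (TFS):", round(vector_strength(spikes, 500.0), 2))
    print("VS at 5 Hz (ENV):  ", round(vector_strength(spikes, 5.0), 2))
    ```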

  15. Order of stimulus presentation influences children's acquisition in receptive identification tasks.

    PubMed

    Petursdottir, Anna Ingeborg; Aguilar, Gabriella

    2016-03-01

    Receptive identification is usually taught in matching-to-sample format, which entails the presentation of an auditory sample stimulus and several visual comparison stimuli in each trial. Conflicting recommendations exist regarding the order of stimulus presentation in matching-to-sample trials. The purpose of this study was to compare acquisition in receptive identification tasks under 2 conditions: when the sample was presented before the comparisons (sample first) and when the comparisons were presented before the sample (comparison first). Participants included 4 typically developing kindergarten-age boys. Stimuli, which included birds and flags, were presented on a computer screen. Acquisition in the 2 conditions was compared in an adapted alternating-treatments design combined with a multiple baseline design across stimulus sets. All participants took fewer trials to meet the mastery criterion in the sample-first condition than in the comparison-first condition. © 2015 Society for the Experimental Analysis of Behavior.

  16. Diversity in spatial scope of contrast adaptation among mouse retinal ganglion cells.

    PubMed

    Khani, Mohammad Hossein; Gollisch, Tim

    2017-12-01

    Retinal ganglion cells adapt to changes in visual contrast by adjusting their response kinetics and sensitivity. While much work has focused on the time scales of these adaptation processes, less is known about the spatial scale of contrast adaptation. For example, do small, localized contrast changes affect a cell's signal processing across its entire receptive field? Previous investigations have provided conflicting evidence, suggesting that contrast adaptation occurs either locally within subregions of a ganglion cell's receptive field or globally over the receptive field in its entirety. Here, we investigated the spatial extent of contrast adaptation in ganglion cells of the isolated mouse retina through multielectrode-array recordings. We applied visual stimuli so that ganglion cell receptive fields contained regions where the average contrast level changed periodically as well as regions with constant average contrast level. This allowed us to analyze temporal stimulus integration and sensitivity separately for stimulus regions with and without contrast changes. We found that the spatial scope of contrast adaptation depends strongly on cell identity, with some ganglion cells displaying clear local adaptation, whereas others, in particular large transient ganglion cells, adapted globally to contrast changes. Thus, the spatial scope of contrast adaptation in mouse retinal ganglion cells appears to be cell-type specific. This could reflect differences in mechanisms of contrast adaptation and may contribute to the functional diversity of different ganglion cell types. NEW & NOTEWORTHY Understanding whether adaptation of a neuron in a sensory system can occur locally inside the receptive field or whether it always globally affects the entire receptive field is important for understanding how the neuron processes complex sensory stimuli. For mouse retinal ganglion cells, we here show that both local and global contrast adaptation exist and that this diversity in spatial scope can contribute to the functional diversity of retinal ganglion cell types. Copyright © 2017 the American Physiological Society.

  17. Diversity in spatial scope of contrast adaptation among mouse retinal ganglion cells

    PubMed Central

    Khani, Mohammad Hossein

    2017-01-01

    Retinal ganglion cells adapt to changes in visual contrast by adjusting their response kinetics and sensitivity. While much work has focused on the time scales of these adaptation processes, less is known about the spatial scale of contrast adaptation. For example, do small, localized contrast changes affect a cell’s signal processing across its entire receptive field? Previous investigations have provided conflicting evidence, suggesting that contrast adaptation occurs either locally within subregions of a ganglion cell’s receptive field or globally over the receptive field in its entirety. Here, we investigated the spatial extent of contrast adaptation in ganglion cells of the isolated mouse retina through multielectrode-array recordings. We applied visual stimuli so that ganglion cell receptive fields contained regions where the average contrast level changed periodically as well as regions with constant average contrast level. This allowed us to analyze temporal stimulus integration and sensitivity separately for stimulus regions with and without contrast changes. We found that the spatial scope of contrast adaptation depends strongly on cell identity, with some ganglion cells displaying clear local adaptation, whereas others, in particular large transient ganglion cells, adapted globally to contrast changes. Thus, the spatial scope of contrast adaptation in mouse retinal ganglion cells appears to be cell-type specific. This could reflect differences in mechanisms of contrast adaptation and may contribute to the functional diversity of different ganglion cell types. NEW & NOTEWORTHY Understanding whether adaptation of a neuron in a sensory system can occur locally inside the receptive field or whether it always globally affects the entire receptive field is important for understanding how the neuron processes complex sensory stimuli. For mouse retinal ganglion cells, we here show that both local and global contrast adaptation exist and that this diversity in spatial scope can contribute to the functional diversity of retinal ganglion cell types. PMID:28904106

  18. Action Enhances Acoustic Cues for 3-D Target Localization by Echolocating Bats

    PubMed Central

    Wohlgemuth, Melville J.

    2016-01-01

    Under natural conditions, animals encounter a barrage of sensory information from which they must select and interpret biologically relevant signals. Active sensing can facilitate this process by engaging motor systems in the sampling of sensory information. The echolocating bat serves as an excellent model to investigate the coupling between action and sensing because it adaptively controls both the acoustic signals used to probe the environment and movements to receive echoes at the auditory periphery. We report here that the echolocating bat controls the features of its sonar vocalizations in tandem with the positioning of the outer ears to maximize acoustic cues for target detection and localization. The bat’s adaptive control of sonar vocalizations and ear positioning occurs on a millisecond timescale to capture spatial information from arriving echoes, as well as on a longer timescale to track target movement. Our results demonstrate that purposeful control over sonar sound production and reception can serve to improve acoustic cues for localization tasks. This finding also highlights the general importance of movement to sensory processing across animal species. Finally, our discoveries point to important parallels between spatial perception by echolocation and vision. PMID:27608186

  19. A low-cost, multiplexed μECoG system for high-density recordings in freely moving rodents

    NASA Astrophysics Data System (ADS)

    Insanally, Michele; Trumpis, Michael; Wang, Charles; Chiang, Chia-Han; Woods, Virginia; Palopoli-Trojani, Kay; Bossi, Silvia; Froemke, Robert C.; Viventi, Jonathan

    2016-04-01

    Objective. Micro-electrocorticography (μECoG) offers a minimally invasive neural interface with high spatial resolution over large areas of cortex. However, electrode arrays with many contacts that are individually wired to external recording systems are cumbersome and make recordings in freely behaving rodents challenging. We report a novel high-density 60-electrode system for μECoG recording in freely moving rats. Approach. Multiplexed headstages overcome the problem of wiring complexity by combining signals from many electrodes to a smaller number of connections. We have developed a low-cost, multiplexed recording system with 60 contacts at 406 μm spacing. We characterized the quality of the electrode signals using multiple metrics that tracked spatial variation, evoked-response detectability, and decoding value. Performance of the system was validated both in anesthetized animals and freely moving awake animals. Main results. We recorded μECoG signals over the primary auditory cortex, measuring responses to acoustic stimuli across all channels. Single-trial responses had high signal-to-noise ratios (SNR) (up to 25 dB under anesthesia), and were used to rapidly measure network topography within ~10 s by constructing all single-channel receptive fields in parallel. We characterized evoked potential amplitudes and spatial correlations across the array in the anesthetized and awake animals. Recording quality in awake animals was stable for at least 30 days. Finally, we used these responses to accurately decode auditory stimuli on single trials. Significance. This study introduces (1) a μECoG recording system based on practical hardware design and (2) a rigorous analytical method for characterizing the signal characteristics of μECoG electrode arrays. This methodology can be applied to evaluate the fidelity and lifetime of any μECoG electrode array. Our μECoG-based recording system is accessible and will be useful for studies of perception and decision-making in rodents, particularly over the entire time course of behavioral training and learning.
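
    Illustrative note (not part of the record): a minimal sketch of one common way to express evoked-response SNR in dB (power of the trial-averaged evoked response over mean residual power); the paper's exact metric may differ, and the epochs below are simulated placeholders.

    ```python
    import numpy as np

    def evoked_snr_db(trials):
        """Evoked-response SNR in dB for one channel.
        trials: (n_trials, n_samples) array of stimulus-locked epochs.
        Signal = power of the trial-averaged evoked response;
        noise = mean power of single-trial residuals around that average."""
        evoked = trials.mean(axis=0)
        signal_power = np.mean(evoked ** 2)
        noise_power = np.mean((trials - evoked) ** 2)
        return 10 * np.log10(signal_power / noise_power)

    # Hypothetical epochs: a common evoked waveform plus trial noise.
    rng = np.random.default_rng(7)
    t = np.linspace(0, 0.3, 300)
    waveform = np.sin(2 * np.pi * 10 * t) * np.exp(-t / 0.1)
    trials = waveform + rng.normal(scale=0.3, size=(100, t.size))
    print(f"SNR = {evoked_snr_db(trials):.1f} dB")
    ```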

  20. Auditory spatial representations of the world are compressed in blind humans.

    PubMed

    Kolarik, Andrew J; Pardhan, Shahina; Cirstea, Silvia; Moore, Brian C J

    2017-02-01

    Compared to sighted listeners, blind listeners often display enhanced auditory spatial abilities such as localization in azimuth. However, less is known about whether blind humans can accurately judge distance in extrapersonal space using auditory cues alone. Using virtualization techniques, we show that auditory spatial representations of the world beyond the peripersonal space of blind listeners are compressed compared to those for normally sighted controls. Blind participants overestimated the distance to nearby sources and underestimated the distance to remote sound sources, in both reverberant and anechoic environments, and for speech, music, and noise signals. Functions relating judged and actual virtual distance were well fitted by compressive power functions, indicating that the absence of visual information regarding the distance of sound sources may prevent accurate calibration of the distance information provided by auditory signals.
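
    Illustrative note (not part of the record): the compressive power functions described above have the form judged = k * d^a, with exponent a < 1 producing overestimation of near distances and underestimation of far ones. A minimal fitting sketch using scipy.optimize.curve_fit; the distance judgments below are hypothetical placeholders.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(d, k, a):
        """Judged distance as a power function of true distance d."""
        return k * d ** a

    # Hypothetical distance judgments (m) showing compression.
    true_d = np.array([1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
    judged = np.array([1.4, 2.3, 2.9, 3.4, 4.3, 5.0])
    (k, a), _ = curve_fit(power_law, true_d, judged, p0=[1.0, 1.0])
    print(f"judged ~ {k:.2f} * d^{a:.2f}  (a < 1 indicates compression)")
    ```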

  1. Does Hearing Several Speakers Reduce Foreign Word Learning?

    ERIC Educational Resources Information Center

    Ludington, Jason Darryl

    2016-01-01

    Learning spoken word forms is a vital part of second language learning, and CALL lends itself well to this training. Not enough is known, however, about how auditory variation across speech tokens may affect receptive word learning. To find out, 144 Thai university students with no knowledge of the Patani Malay language learned 24 foreign words in…

  2. Story Retelling and Language Ability in School-Aged Children with Cerebral Palsy and Speech Impairment

    ERIC Educational Resources Information Center

    Nordberg, Ann; Dahlgren Sandberg, Annika; Miniscalco, Carmela

    2015-01-01

    Background: Research on retelling ability and cognition is limited in children with cerebral palsy (CP) and speech impairment. Aims: To explore the impact of expressive and receptive language, narrative discourse dimensions (Narrative Assessment Profile measures), auditory and visual memory, theory of mind (ToM) and non-verbal cognition on the…

  3. Evidence for cue-independent spatial representation in the human auditory cortex during active listening.

    PubMed

    Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher

    2017-09-05

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
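
    Illustrative note (not part of the record): a minimal sketch of cross-cue multivoxel pattern classification, training a linear classifier on patterns evoked by one cue (ITD) and testing on the other (ILD); above-chance transfer implies a location code shared across cues. The voxel patterns below are simulated placeholders, not the study's fMRI data or analysis pipeline.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC

    # Hypothetical multivoxel patterns: n trials x v voxels, labeled by
    # lateralization (0 = left, 1 = right), recorded separately per cue.
    rng = np.random.default_rng(8)
    n, v = 80, 50
    shared_axis = rng.normal(size=v)          # location code shared by cues
    labels = rng.integers(0, 2, size=n)
    itd = np.outer(2 * labels - 1, shared_axis) + rng.normal(size=(n, v))
    ild = np.outer(2 * labels - 1, shared_axis) + rng.normal(size=(n, v))

    # Train on ITD patterns, test on ILD patterns (cross-cue transfer).
    clf = LinearSVC().fit(itd, labels)
    print("cross-cue accuracy:", clf.score(ild, labels))
    ```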

  4. Evidence for cue-independent spatial representation in the human auditory cortex during active listening

    PubMed Central

    McLaughlin, Susan A.; Rinne, Teemu; Stecker, G. Christopher

    2017-01-01

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues—particularly interaural time and level differences (ITD and ILD)—that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and—critically—for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues. PMID:28827357

  5. Speech and language development in cognitively delayed children with cochlear implants.

    PubMed

    Holt, Rachael Frush; Kirk, Karen Iler

    2005-04-01

The primary goals of this investigation were to examine the speech and language development of deaf children with cochlear implants and mild cognitive delay and to compare their gains with those of children with cochlear implants who do not have this additional impairment. We retrospectively examined the speech and language development of 69 children with pre-lingual deafness. The experimental group consisted of 19 children with cognitive delays and no other disabilities (mean age at implantation = 38 months). The control group consisted of 50 children who did not have cognitive delays or any other identified disability. The control group was stratified by primary communication mode: half used total communication (mean age at implantation = 32 months) and the other half used oral communication (mean age at implantation = 26 months). Children were tested on a variety of standard speech and language measures and one test of auditory skill development at 6-month intervals. The results from each test were collapsed from blocks of two consecutive 6-month intervals to calculate group mean scores before implantation and at 1-year intervals after implantation. The children with cognitive delays and those without such delays demonstrated significant improvement in their speech and language skills over time on every test administered. Children with cognitive delays had significantly lower scores than typically developing children on two of the three measures of receptive and expressive language and had significantly slower rates of auditory-only sentence recognition development. Finally, there were no significant group differences in auditory skill development based on parental reports or in auditory-only or multimodal word recognition. The results suggest that deaf children with mild cognitive impairments benefit from cochlear implantation. Specifically, improvements are evident in their ability to perceive speech and in their reception and use of language. However, this benefit may be reduced relative to that of their typically developing peers with cochlear implants, particularly in domains that require higher-level skills, such as sentence recognition and receptive and expressive language. These findings suggest that children with mild cognitive deficits be considered for cochlear implantation with less trepidation than has been the case in the past. Although their speech and language gains may be tempered by their cognitive abilities, these limitations do not appear to preclude benefit from cochlear implant stimulation, as assessed by traditional measures of speech and language development.

  6. Auditory Sensitivity and Masking Profiles for the Sea Otter (Enhydra lutris).

    PubMed

    Ghoul, Asila; Reichmuth, Colleen

    2016-01-01

    Sea otters are threatened marine mammals that may be negatively impacted by human-generated coastal noise, yet information about sound reception in this species is surprisingly scarce. We investigated amphibious hearing in sea otters by obtaining the first measurements of absolute sensitivity and critical masking ratios. Auditory thresholds were measured in air and underwater from 0.125 to 40 kHz. Critical ratios derived from aerial masked thresholds from 0.25 to 22.6 kHz were also obtained. These data indicate that although sea otters can detect underwater sounds, their hearing appears to be primarily air adapted and not specialized for detecting signals in background noise.
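
    A critical (masking) ratio of the kind reported here is conventionally the masked threshold of a tone (dB SPL) minus the spectrum level of the masking noise (dB SPL per Hz); 10^(CR/10) then approximates the equivalent masking bandwidth in Hz. A one-line worked example in Python, with hypothetical numbers rather than the paper's data:

      def critical_ratio_db(masked_threshold_db, noise_spectrum_level_db):
          # Critical ratio: tone threshold in noise minus masker spectrum level.
          return masked_threshold_db - noise_spectrum_level_db

      cr = critical_ratio_db(54.0, 30.0)   # hypothetical: 24 dB critical ratio
      bandwidth_hz = 10 ** (cr / 10)       # ~251 Hz equivalent masking bandwidth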

  7. Effects of in-vehicle warning information displays with or without spatial compatibility on driving behaviors and response performance.

    PubMed

    Liu, Yung-Ching; Jhuang, Jing-Wun

    2012-07-01

A driving simulator study was conducted to evaluate the effects of five in-vehicle warning information displays upon drivers' emergent response and decision performance. The displays were a visual display, auditory displays with and without spatial compatibility, and hybrid visual-auditory displays with and without spatial compatibility. Thirty volunteer drivers were recruited to perform various tasks that involved driving, stimulus-response (S-R), divided attention, and stress rating. Results show that among the single-modality displays, drivers benefited more from the visual display of warning information than from the auditory display with or without spatial compatibility. However, the auditory display with spatial compatibility significantly improved drivers' performance in reacting to the divided attention task and making accurate S-R task decisions. Drivers' best performance was obtained with the hybrid display with spatial compatibility. Hybrid displays enabled drivers to respond the fastest and achieve the best accuracy in both the S-R and divided attention tasks. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  8. Call sign intelligibility improvement using a spatial auditory display

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1993-01-01

    A spatial auditory display was used to convolve speech stimuli, consisting of 130 different call signs used in the communications protocol of NASA's John F. Kennedy Space Center, to different virtual auditory positions. An adaptive staircase method was used to determine intelligibility levels of the signal against diotic speech babble, with spatial positions at 30 deg azimuth increments. Non-individualized, minimum-phase approximations of head-related transfer functions were used. The results showed a maximal intelligibility improvement of about 6 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.
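
    The core operation of such a display is convolution of a dry signal with a left/right pair of head-related impulse responses (HRIRs). The sketch below uses stand-in impulse responses (a pure delay plus attenuation approximating a lateral source); a real display would substitute measured, minimum-phase HRTF pairs of the kind described above.

      import numpy as np
      from scipy.signal import fftconvolve

      def spatialize(mono, hrir_left, hrir_right):
          # Render a mono signal at a virtual position by convolving it
          # with the head-related impulse response for each ear.
          return np.stack([fftconvolve(mono, hrir_left),
                           fftconvolve(mono, hrir_right)], axis=-1)

      fs = 44100
      hrir_l = np.zeros(64); hrir_l[0] = 1.0
      hrir_r = np.zeros(64); hrir_r[30] = 0.5   # ~0.7 ms later and 6 dB down
      speech = np.random.randn(fs // 2)         # placeholder for a call-sign recording
      binaural = spatialize(speech, hrir_l, hrir_r)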

  9. Spectrotemporal Processing in Spectral Tuning Modules of Cat Primary Auditory Cortex

    PubMed Central

    Atencio, Craig A.; Schreiner, Christoph E.

    2012-01-01

    Spectral integration properties show topographical order in cat primary auditory cortex (AI). Along the iso-frequency domain, regions with predominantly narrowly tuned (NT) neurons are segregated from regions with more broadly tuned (BT) neurons, forming distinct processing modules. Despite their prominent spatial segregation, spectrotemporal processing has not been compared for these regions. We identified these NT and BT regions with broad-band ripple stimuli and characterized processing differences between them using both spectrotemporal receptive fields (STRFs) and nonlinear stimulus/firing rate transformations. The durations of STRF excitatory and inhibitory subfields were shorter and the best temporal modulation frequencies were higher for BT neurons than for NT neurons. For NT neurons, the bandwidth of excitatory and inhibitory subfields was matched, whereas for BT neurons it was not. Phase locking and feature selectivity were higher for NT neurons. Properties of the nonlinearities showed only slight differences across the bandwidth modules. These results indicate fundamental differences in spectrotemporal preferences - and thus distinct physiological functions - for neurons in BT and NT spectral integration modules. However, some global processing aspects, such as spectrotemporal interactions and nonlinear input/output behavior, appear to be similar for both neuronal subgroups. The findings suggest that spectral integration modules in AI differ in what specific stimulus aspects are processed, but they are similar in the manner in which stimulus information is processed. PMID:22384036
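
    STRFs of the kind characterized here are commonly estimated by spike-triggered averaging of the stimulus spectrogram. The toy sketch below illustrates that estimator on synthetic data; it is not the authors' analysis pipeline, and the dimensions are arbitrary.

      import numpy as np

      def strf_sta(spec, spikes, n_lags):
          # Spike-triggered average of a (freq x time) spectrogram over
          # n_lags bins of stimulus history preceding each spike.
          n_freq, n_time = spec.shape
          sta, n_spk = np.zeros((n_freq, n_lags)), 0
          for t in range(n_lags, n_time):
              if spikes[t]:
                  sta += spikes[t] * spec[:, t - n_lags:t]
                  n_spk += spikes[t]
          return sta / max(n_spk, 1)

      rng = np.random.default_rng(0)
      spec = rng.standard_normal((32, 5000))   # ripple-like random spectrogram
      spikes = rng.poisson(0.1, 5000)          # synthetic spike counts per bin
      strf = strf_sta(spec, spikes, n_lags=20)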

  10. Two-Microphone Spatial Filtering Improves Speech Reception for Cochlear-Implant Users in Reverberant Conditions With Multiple Noise Sources

    PubMed Central

    2014-01-01

    This study evaluates a spatial-filtering algorithm as a method to improve speech reception for cochlear-implant (CI) users in reverberant environments with multiple noise sources. The algorithm was designed to filter sounds using phase differences between two microphones situated 1 cm apart in a behind-the-ear hearing-aid capsule. Speech reception thresholds (SRTs) were measured using a Coordinate Response Measure for six CI users in 27 listening conditions including each combination of reverberation level (T60 = 0, 270, and 540 ms), number of noise sources (1, 4, and 11), and signal-processing algorithm (omnidirectional response, dipole-directional response, and spatial-filtering algorithm). Noise sources were time-reversed speech segments randomly drawn from the Institute of Electrical and Electronics Engineers sentence recordings. Target speech and noise sources were processed using a room simulation method allowing precise control over reverberation times and sound-source locations. The spatial-filtering algorithm was found to provide improvements in SRTs on the order of 6.5 to 11.0 dB across listening conditions compared with the omnidirectional response. This result indicates that such phase-based spatial filtering can improve speech reception for CI users even in highly reverberant conditions with multiple noise sources. PMID:25330772
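
    The phase-difference principle can be sketched as a time-frequency mask: bins whose inter-microphone phase difference is near zero (sound arriving from straight ahead) are retained, and the rest are attenuated. The Python fragment below is a simplified sketch of that idea only; the published algorithm's actual decision rule, parameters, and frequency-dependent thresholds are not reproduced here.

      import numpy as np
      from scipy.signal import stft, istft

      def phase_mask_enhance(mic1, mic2, fs, phase_tol=0.1):
          # Keep time-frequency bins with near-zero inter-mic phase
          # difference (frontal sources); attenuate all others.
          f, t, X1 = stft(mic1, fs=fs, nperseg=256)
          _, _, X2 = stft(mic2, fs=fs, nperseg=256)
          phase_diff = np.angle(X1 * np.conj(X2))
          mask = 0.1 + 0.9 * (np.abs(phase_diff) < phase_tol)
          # A practical system would scale the tolerance with frequency,
          # since a fixed arrival-time delay maps to a phase difference
          # that grows in proportion to frequency.
          _, y = istft(X1 * mask, fs=fs, nperseg=256)
          return y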

  11. Activity in Human Auditory Cortex Represents Spatial Separation Between Concurrent Sounds.

    PubMed

    Shiell, Martha M; Hausfeld, Lars; Formisano, Elia

    2018-05-23

The primary and posterior auditory cortex (AC) are known for their sensitivity to spatial information, but how this information is processed is not yet understood. AC that is sensitive to spatial manipulations is also modulated by the number of auditory streams present in a scene (Smith et al., 2010), suggesting that spatial and nonspatial cues are integrated for stream segregation. We reasoned that, if this is the case, then it is the distance between sounds rather than their absolute positions that is essential. To test this hypothesis, we measured human brain activity in response to spatially separated concurrent sounds with fMRI at 7 tesla in five men and five women. Stimuli were spatialized amplitude-modulated broadband noises recorded for each participant via in-ear microphones before scanning. Using a linear support vector machine classifier, we investigated whether sound location and/or location plus spatial separation between sounds could be decoded from the activity in Heschl's gyrus and the planum temporale. The classifier was successful only when comparing patterns associated with the conditions that had the largest difference in perceptual spatial separation. Our pattern of results suggests that the representation of spatial separation is not merely the combination of single locations, but rather is an independent feature of the auditory scene. SIGNIFICANCE STATEMENT Often, when we think of auditory spatial information, we think of where sounds are coming from-that is, the process of localization. However, this information can also be used in scene analysis, the process of grouping and segregating features of a soundwave into objects. Essentially, when sounds are further apart, they are more likely to be segregated into separate streams. Here, we provide evidence that activity in the human auditory cortex represents the spatial separation between sounds rather than their absolute locations, indicating that scene analysis and localization processes may be independent. Copyright © 2018 the authors.
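
    Decoding analyses of this kind typically train a linear classifier on trial-wise multivoxel activation patterns and assess it by cross-validation. A generic scikit-learn sketch on synthetic patterns follows; it mirrors the approach in outline only, not the study's data, features, or preprocessing.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.standard_normal((40, 200))   # 40 trials x 200 voxels (synthetic)
      y = np.repeat([0, 1], 20)            # two spatial-separation conditions
      X[y == 1, :10] += 0.5                # weak condition-related signal

      clf = SVC(kernel="linear", C=1.0)    # linear support vector classifier
      scores = cross_val_score(clf, X, y, cv=5)
      print(scores.mean())                 # cross-validated decoding accuracy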

  12. Binaural speech processing in individuals with auditory neuropathy.

    PubMed

    Rance, G; Ryan, M M; Carew, P; Corben, L A; Yiu, E; Tan, J; Delatycki, M B

    2012-12-13

    Auditory neuropathy disrupts the neural representation of sound and may therefore impair processes contingent upon inter-aural integration. The aims of this study were to investigate binaural auditory processing in individuals with axonal (Friedreich ataxia) and demyelinating (Charcot-Marie-Tooth disease type 1A) auditory neuropathy and to evaluate the relationship between the degree of auditory deficit and overall clinical severity in patients with neuropathic disorders. Twenty-three subjects with genetically confirmed Friedreich ataxia and 12 subjects with Charcot-Marie-Tooth disease type 1A underwent psychophysical evaluation of basic auditory processing (intensity discrimination/temporal resolution) and binaural speech perception assessment using the Listening in Spatialized Noise test. Age, gender and hearing-level-matched controls were also tested. Speech perception in noise for individuals with auditory neuropathy was abnormal for each listening condition, but was particularly affected in circumstances where binaural processing might have improved perception through spatial segregation. Ability to use spatial cues was correlated with temporal resolution suggesting that the binaural-processing deficit was the result of disordered representation of timing cues in the left and right auditory nerves. Spatial processing was also related to overall disease severity (as measured by the Friedreich Ataxia Rating Scale and Charcot-Marie-Tooth Neuropathy Score) suggesting that the degree of neural dysfunction in the auditory system accurately reflects generalized neuropathic changes. Measures of binaural speech processing show promise for application in the neurology clinic. In individuals with auditory neuropathy due to both axonal and demyelinating mechanisms the assessment provides a measure of functional hearing ability, a biomarker capable of tracking the natural history of progressive disease and a potential means of evaluating the effectiveness of interventions. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.

  13. From innervation density to tactile acuity: 1. Spatial representation.

    PubMed

    Brown, Paul B; Koerber, H Richard; Millecchia, Ronald

    2004-06-11

    We tested the hypothesis that the population receptive field representation (a superposition of the excitatory receptive field areas of cells responding to a tactile stimulus) provides spatial information sufficient to mediate one measure of static tactile acuity. In psychophysical tests, two-point discrimination thresholds on the hindlimbs of adult cats varied as a function of stimulus location and orientation, as they do in humans. A statistical model of the excitatory low threshold mechanoreceptive fields of spinocervical, postsynaptic dorsal column and spinothalamic tract neurons was used to simulate the population receptive field representations in this neural population of the one- and two-point stimuli used in the psychophysical experiments. The simulated and observed thresholds were highly correlated. Simulated and observed thresholds' relations to physiological and anatomical variables such as stimulus location and orientation, receptive field size and shape, map scale, and innervation density were strikingly similar. Simulated and observed threshold variations with receptive field size and map scale obeyed simple relationships predicted by the signal detection model, and were statistically indistinguishable from each other. The population receptive field representation therefore contains information sufficient for this discrimination.
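
    The population receptive field idea can be illustrated with a toy simulation: Gaussian excitatory fields tiling the skin, with one- and two-point stimuli compared by how separable the population patterns they evoke are. The parameters below are arbitrary, and the sketch stands in for, rather than reproduces, the authors' statistical model.

      import numpy as np

      def population_response(stim_positions, rf_centers, rf_sigma):
          # Summed Gaussian responses of all neurons to point stimuli.
          resp = np.zeros(len(rf_centers))
          for s in stim_positions:
              resp += np.exp(-0.5 * ((rf_centers - s) / rf_sigma) ** 2)
          return resp

      centers = np.linspace(0, 100, 200)   # 200 neurons over 100 mm of skin
      one_pt = population_response([50.0], centers, rf_sigma=5.0)
      two_pt = population_response([47.0, 53.0], centers, rf_sigma=5.0)
      # Separability of the two population patterns (a rough proxy for d').
      d = np.linalg.norm(two_pt - one_pt) / np.sqrt(len(centers))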

  14. Plasticity of spatial hearing: behavioural effects of cortical inactivation

    PubMed Central

    Nodal, Fernando R; Bajo, Victoria M; King, Andrew J

    2012-01-01

    The contribution of auditory cortex to spatial information processing was explored behaviourally in adult ferrets by reversibly deactivating different cortical areas by subdural placement of a polymer that released the GABAA agonist muscimol over a period of weeks. The spatial extent and time course of cortical inactivation were determined electrophysiologically. Muscimol-Elvax was placed bilaterally over the anterior (AEG), middle (MEG) or posterior ectosylvian gyrus (PEG), so that different regions of the auditory cortex could be deactivated in different cases. Sound localization accuracy in the horizontal plane was assessed by measuring both the initial head orienting and approach-to-target responses made by the animals. Head orienting behaviour was unaffected by silencing any region of the auditory cortex, whereas the accuracy of approach-to-target responses to brief sounds (40 ms noise bursts) was reduced by muscimol-Elvax but not by drug-free implants. Modest but significant localization impairments were observed after deactivating the MEG, AEG or PEG, although the largest deficits were produced in animals in which the MEG, where the primary auditory fields are located, was silenced. We also examined experience-induced spatial plasticity by reversibly plugging one ear. In control animals, localization accuracy for both approach-to-target and head orienting responses was initially impaired by monaural occlusion, but recovered with training over the next few days. Deactivating any part of the auditory cortex resulted in less complete recovery than in controls, with the largest deficits observed after silencing the higher-level cortical areas in the AEG and PEG. Although suggesting that each region of auditory cortex contributes to spatial learning, differences in the localization deficits and degree of adaptation between groups imply a regional specialization in the processing of spatial information across the auditory cortex. PMID:22547635

  15. Auditory peripersonal space in humans.

    PubMed

    Farnè, Alessandro; Làdavas, Elisabetta

    2002-10-01

    In the present study we report neuropsychological evidence of the existence of an auditory peripersonal space representation around the head in humans and its characteristics. In a group of right brain-damaged patients with tactile extinction, we found that a sound delivered near the ipsilesional side of the head (20 cm) strongly extinguished a tactile stimulus delivered to the contralesional side of the head (cross-modal auditory-tactile extinction). By contrast, when an auditory stimulus was presented far from the head (70 cm), cross-modal extinction was dramatically reduced. This spatially specific cross-modal extinction was most consistently found (i.e., both in the front and back spaces) when a complex sound was presented, like a white noise burst. Pure tones produced spatially specific cross-modal extinction when presented in the back space, but not in the front space. In addition, the most severe cross-modal extinction emerged when sounds came from behind the head, thus showing that the back space is more sensitive than the front space to the sensory interaction of auditory-tactile inputs. Finally, when cross-modal effects were investigated by reversing the spatial arrangement of cross-modal stimuli (i.e., touch on the right and sound on the left), we found that an ipsilesional tactile stimulus, although inducing a small amount of cross-modal tactile-auditory extinction, did not produce any spatial-specific effect. Therefore, the selective aspects of cross-modal interaction found near the head cannot be explained by a competition between a damaged left spatial representation and an intact right spatial representation. Thus, consistent with neurophysiological evidence from monkeys, our findings strongly support the existence, in humans, of an integrated cross-modal system coding auditory and tactile stimuli near the body, that is, in the peripersonal space.

  16. Evaluation of Domain-Specific Collaboration Interfaces for Team Command and Control Tasks

    DTIC Science & Technology

    2012-05-01

Virtual Whiteboard: cognitive theories relating the utilization, storage, and retrieval of verbal and spatial information, such as... [table residue of ability subscale codes: spatial emergent (SE), auditory linguistic (AL), spatial positional (SP), facial figural (FF), spatial quantitative (SQ), facial motive (FM), tactile figural...] ...driven by the auditory linguistic (AL), short-term memory (STM), spatial attentive (SA), visual temporal (VT), and vocal process (V) subscales.

  17. An fMRI Study of the Neural Systems Involved in Visually Cued Auditory Top-Down Spatial and Temporal Attention

    PubMed Central

    Li, Chunlin; Chen, Kewei; Han, Hongbin; Chui, Dehua; Wu, Jinglong

    2012-01-01

Top-down attention to spatial and temporal cues has been thoroughly studied in the visual domain. However, because the neural systems that are important for auditory top-down temporal attention (i.e., attention based on time interval cues) remain undefined, the differences in brain activity between directed attention to auditory spatial location (compared with time intervals) are unclear. Using functional magnetic resonance imaging (fMRI), we measured the activations caused by cue-target paradigms by inducing the visual cueing of attention to an auditory target within a spatial or temporal domain. Imaging results showed that the dorsal frontoparietal network (dFPN), which consists of the bilateral intraparietal sulcus and the frontal eye field (FEF), responded to spatial orienting of attention, but activity was absent in the bilateral FEF during temporal orienting of attention. Furthermore, the fMRI results indicated that activity in the right ventrolateral prefrontal cortex (VLPFC) was significantly stronger during spatial orienting of attention than during temporal orienting of attention, while the dorsolateral prefrontal cortex (DLPFC) showed no significant differences between the two processes. We conclude that the bilateral dFPN and the right VLPFC contribute to auditory spatial orienting of attention. Furthermore, specific activations related to temporal cognition were confirmed within the superior occipital gyrus, tegmentum, motor area, thalamus and putamen.

  18. Co-localisation of abnormal brain structure and function in specific language impairment.

    PubMed

    Badcock, Nicholas A; Bishop, Dorothy V M; Hardiman, Mervyn J; Barry, Johanna G; Watkins, Kate E

    2012-03-01

    We assessed the relationship between brain structure and function in 10 individuals with specific language impairment (SLI), compared to six unaffected siblings, and 16 unrelated control participants with typical language. Voxel-based morphometry indicated that grey matter in the SLI group, relative to controls, was increased in the left inferior frontal cortex and decreased in the right caudate nucleus and superior temporal cortex bilaterally. The unaffected siblings also showed reduced grey matter in the caudate nucleus relative to controls. In an auditory covert naming task, the SLI group showed reduced activation in the left inferior frontal cortex, right putamen, and in the superior temporal cortex bilaterally. Despite spatially coincident structural and functional abnormalities in frontal and temporal areas, the relationships between structure and function in these regions were different. These findings suggest multiple structural and functional abnormalities in SLI that are differently associated with receptive and expressive language processing. Copyright © 2011 Elsevier Inc. All rights reserved.

  19. Prenatal and Infant Exposure to an Environmental Pollutant Damages Brain Architecture and Plasticity. Science Briefs

    ERIC Educational Resources Information Center

    National Scientific Council on the Developing Child, 2007

    2007-01-01

    "Science Briefs" summarize the findings and implications of a recent study in basic science or clinical research. This Brief reports on the study "Perinatal Exposure to a Noncoplanar Bichlorinated Biphenol Alters Tonotopy, Receptive Fields and Plasticity in the Auditory Cortex" (T. Kenet; R. C. Froemke; C. E. Schreiner; I. N. Pessah; and M. M.…

  20. Perceptual Literacy and the Construction of Significant Meanings within Art Education

    ERIC Educational Resources Information Center

    Cerkez, Beatriz Tomsic

    2014-01-01

    In order to verify how important the ability to process visual images and sounds in a holistic way can be, we developed an experiment based on the production and reception of an art work that was conceived as a multi-sensorial experience and implied a complex understanding of visual and auditory information. We departed from the idea that to…

  1. Cognitive And Neural Sciences Division 1992 Programs

    DTIC Science & Technology

    1992-08-01

Thalamic short-term plasticity in the auditory system: associative retuning of receptive fields in the ventral medial geniculate body. Behavioral... prediction and enhancement of human performance in training and operational environments. A second goal is to understand the neurobiological constraints and... such complex, structured bodies of knowledge and skill are acquired. Fourth, to provide a precise theory of instruction, founded on cognitive theory...

  2. The impact of variation in low-frequency interaural cross correlation on auditory spatial imagery in stereophonic loudspeaker reproduction

    NASA Astrophysics Data System (ADS)

    Martens, William

    2005-04-01

Several attributes of auditory spatial imagery associated with stereophonic sound reproduction are strongly modulated by variation in interaural cross correlation (IACC) within low frequency bands. Nonetheless, a standard practice in bass management for two-channel and multichannel loudspeaker reproduction is to mix low-frequency musical content to a single channel for reproduction via a single driver (e.g., a subwoofer). This paper reviews the results of psychoacoustic studies which support the conclusion that reproduction via multiple drivers of decorrelated low-frequency signals significantly affects such important spatial attributes as auditory source width (ASW), auditory source distance (ASD), and listener envelopment (LEV). A variety of methods have been employed in these tests, including forced choice discrimination and identification, and direct ratings of both global dissimilarity and distinct attributes. Contrary to assumptions that underlie industrial standards established in 1994 by ITU-R Recommendation BS.775-1, these findings imply that substantial stereophonic spatial information exists within audio signals at frequencies below the 80 to 120 Hz range of prescribed subwoofer cutoff frequencies, and that loudspeaker reproduction of decorrelated signals at frequencies as low as 50 Hz can have an impact upon auditory spatial imagery. [Work supported by VRQ.]
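
    IACC in a low band is conventionally computed as the maximum of the normalized interaural cross-correlation over lags of about +/-1 ms after band-pass filtering. A minimal Python sketch follows; the band edges and lag range are illustrative, not those of the reviewed studies.

      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      def iacc(left, right, fs, band=(40.0, 120.0), max_lag_ms=1.0):
          # Max normalized cross-correlation in a low band, over +/-1 ms lags.
          sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
          l, r = sosfiltfilt(sos, left), sosfiltfilt(sos, right)
          k_max = int(fs * max_lag_ms / 1000)
          denom = np.sqrt(np.sum(l ** 2) * np.sum(r ** 2))
          corrs = [np.sum(l[: len(l) - k] * r[k:]) / denom for k in range(k_max + 1)]
          corrs += [np.sum(r[: len(r) - k] * l[k:]) / denom for k in range(1, k_max + 1)]
          return max(corrs)

      fs = 48000
      rng = np.random.default_rng(2)
      common = rng.standard_normal(fs)
      left = common + 0.3 * rng.standard_normal(fs)    # partially decorrelated pair
      right = common + 0.3 * rng.standard_normal(fs)
      print(iacc(left, right, fs))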

  3. Gamma-band activation predicts both associative memory and cortical plasticity

    PubMed Central

    Headley, Drew B.; Weinberger, Norman M.

    2011-01-01

    Gamma-band oscillations are a ubiquitous phenomenon in the nervous system and have been implicated in multiple aspects of cognition. In particular, the strength of gamma oscillations at the time a stimulus is encoded predicts its subsequent retrieval, suggesting that gamma may reflect enhanced mnemonic processing. Likewise, activity in the gamma-band can modulate plasticity in vitro. However, it is unclear whether experience-dependent plasticity in vivo is also related to gamma-band activation. The aim of the present study is to determine whether gamma activation in primary auditory cortex modulates both the associative memory for an auditory stimulus during classical conditioning and its accompanying specific receptive field plasticity. Rats received multiple daily sessions of single tone/shock trace and two-tone discrimination conditioning, during which local field potentials and multiunit discharges were recorded from chronically implanted electrodes. We found that the strength of tone-induced gamma predicted the acquisition of associative memory 24 h later, and ceased to predict subsequent performance once asymptote was reached. Gamma activation also predicted receptive field plasticity that specifically enhanced representation of the signal tone. This concordance provides a long-sought link between gamma oscillations, cortical plasticity and the formation of new memories. PMID:21900554

  4. Y-cell receptive field and collicular projection of parasol ganglion cells in macaque monkey retina

    PubMed Central

    Crook, Joanna D.; Peterson, Beth B.; Packer, Orin S.; Robinson, Farrel R.; Troy, John B.; Dacey, Dennis M.

    2009-01-01

    The distinctive parasol ganglion cell of the primate retina transmits a transient, spectrally non-opponent signal to the magnocellular layers of the lateral geniculate nucleus (LGN). Parasol cells show well-recognized parallels with the alpha-Y cell of other mammals, yet two key alpha-Y cell properties, a collateral projection to the superior colliculus and nonlinear spatial summation, have not been clearly established for parasol cells. Here we show by retrograde photodynamic staining that parasol cells project to the superior colliculus. Photostained dendritic trees formed characteristic spatial mosaics and afforded unequivocal identification of the parasol cells among diverse collicular-projecting cell types. Loose-patch recordings were used to demonstrate for all parasol cells a distinct Y-cell receptive field ‘signature’ marked by a non-linear mechanism that responded to contrast-reversing gratings at twice the stimulus temporal frequency (second Fourier harmonic, F2) independent of stimulus spatial phase. The F2 component showed high contrast gain and temporal sensitivity and appeared to originate from a region coextensive with that of the linear receptive field center. The F2 spatial frequency response peaked well beyond the resolution limit of the linear receptive field center, showing a Gaussian center radius of ~15 μm. Blocking inner retinal inhibition elevated the F2 response, suggesting that amacrine circuitry does not generate this non-linearity. Our data are consistent with a pooled-subunit model of the parasol-Y cell receptive field in which summation from an array of transient, partially rectifying cone bipolar cells accounts for both linear and non-linear components of the receptive field. PMID:18971470
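
    The Y-cell signature is read off the Fourier transform of the response histogram: a linear cell responds at the grating's reversal frequency (F1) with spatial-phase-dependent amplitude, whereas a Y-like nonlinearity adds a phase-independent component at twice that frequency (F2). A toy illustration of the harmonic measurement on a synthetic frequency-doubled response:

      import numpy as np

      def fourier_harmonics(psth, bin_s, stim_hz):
          # Response amplitude at the stimulus frequency (F1) and at
          # twice the stimulus frequency (F2), from a PSTH.
          freqs = np.fft.rfftfreq(len(psth), d=bin_s)
          spec = np.abs(np.fft.rfft(psth)) * 2 / len(psth)
          f1 = spec[np.argmin(np.abs(freqs - stim_hz))]
          f2 = spec[np.argmin(np.abs(freqs - 2 * stim_hz))]
          return f1, f2

      bin_s = 0.005
      t = np.arange(0, 2.0, bin_s)
      psth = 20 + 10 * np.abs(np.sin(2 * np.pi * 2.0 * t))  # rectified: energy at 2F
      f1, f2 = fourier_harmonics(psth, bin_s, stim_hz=2.0)  # here f2 >> f1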

  5. Auditory and visual cortex of primates: a comparison of two sensory systems

    PubMed Central

    Rauschecker, Josef P.

    2014-01-01

    A comparative view of the brain, comparing related functions across species and sensory systems, offers a number of advantages. In particular, it allows separating the formal purpose of a model structure from its implementation in specific brains. Models of auditory cortical processing can be conceived by analogy to the visual cortex, incorporating neural mechanisms that are found in both the visual and auditory systems. Examples of such canonical features on the columnar level are direction selectivity, size/bandwidth selectivity, as well as receptive fields with segregated versus overlapping on- and off-sub-regions. On a larger scale, parallel processing pathways have been envisioned that represent the two main facets of sensory perception: 1) identification of objects and 2) processing of space. Expanding this model in terms of sensorimotor integration and control offers an overarching view of cortical function independent of sensory modality. PMID:25728177

  6. Tracking the voluntary control of auditory spatial attention with event-related brain potentials.

    PubMed

    Störmer, Viola S; Green, Jessica J; McDonald, John J

    2009-03-01

    A lateralized event-related potential (ERP) component elicited by attention-directing cues (ADAN) has been linked to frontal-lobe control but is often absent when spatial attention is deployed in the auditory modality. Here, we tested the hypothesis that ERP activity associated with frontal-lobe control of auditory spatial attention is distributed bilaterally by comparing ERPs elicited by attention-directing cues and neutral cues in a unimodal auditory task. This revealed an initial ERP positivity over the anterior scalp and a later ERP negativity over the parietal scalp. Distributed source analysis indicated that the anterior positivity was generated primarily in bilateral prefrontal cortices, whereas the more posterior negativity was generated in parietal and temporal cortices. The anterior ERP positivity likely reflects frontal-lobe attentional control, whereas the subsequent ERP negativity likely reflects anticipatory biasing of activity in auditory cortex.

  7. Visual influences on auditory spatial learning

    PubMed Central

    King, Andrew J.

    2008-01-01

    The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation. PMID:18986967

  8. Emergence of Spatial Stream Segregation in the Ascending Auditory Pathway.

    PubMed

    Yao, Justin D; Bremen, Peter; Middlebrooks, John C

    2015-12-09

Stream segregation enables a listener to disentangle multiple competing sequences of sounds. A recent study from our laboratory demonstrated that cortical neurons in anesthetized cats exhibit spatial stream segregation (SSS) by synchronizing preferentially to one of two sequences of noise bursts that alternate between two source locations. Here, we examine the emergence of SSS along the ascending auditory pathway. Extracellular recordings were made in anesthetized rats from the inferior colliculus (IC), the nucleus of the brachium of the IC (BIN), the medial geniculate body (MGB), and the primary auditory cortex (A1). Stimuli consisted of interleaved sequences of broadband noise bursts that alternated between two source locations. At stimulus presentation rates of 5 and 10 bursts per second, at which human listeners report robust SSS, neural SSS is weak in the central nucleus of the IC (ICC), appears in the BIN and in approximately two-thirds of neurons in the ventral MGB (MGBv), and is prominent throughout A1. The enhancement of SSS at the cortical level reflects both increased spatial sensitivity and increased forward suppression. We demonstrate that forward suppression in A1 does not result from synaptic inhibition at the cortical level. Instead, forward suppression might reflect synaptic depression in the thalamocortical projection. Together, our findings indicate that auditory streams are increasingly segregated along the ascending auditory pathway as distinct mutually synchronized neural populations. Listeners are capable of disentangling multiple competing sequences of sounds that originate from distinct sources. This stream segregation is aided by differences in spatial location between the sources. A possible substrate of spatial stream segregation (SSS) has been described in the auditory cortex, but the mechanisms leading to those cortical responses are unknown. Here, we investigated SSS in three levels of the ascending auditory pathway with extracellular unit recordings in anesthetized rats. We found that neural SSS emerges within the ascending auditory pathway as a consequence of sharpening of spatial sensitivity and increasing forward suppression. Our results highlight brainstem mechanisms that culminate in SSS at the level of the auditory cortex. Copyright © 2015 Yao et al.

  9. Features and functions of nonlinear spatial integration by retinal ganglion cells.

    PubMed

    Gollisch, Tim

    2013-11-01

    Ganglion cells in the vertebrate retina integrate visual information over their receptive fields. They do so by pooling presynaptic excitatory inputs from typically many bipolar cells, which themselves collect inputs from several photoreceptors. In addition, inhibitory interactions mediated by horizontal cells and amacrine cells modulate the structure of the receptive field. In many models, this spatial integration is assumed to occur in a linear fashion. Yet, it has long been known that spatial integration by retinal ganglion cells also incurs nonlinear phenomena. Moreover, several recent examples have shown that nonlinear spatial integration is tightly connected to specific visual functions performed by different types of retinal ganglion cells. This work discusses these advances in understanding the role of nonlinear spatial integration and reviews recent efforts to quantitatively study the nature and mechanisms underlying spatial nonlinearities. These new insights point towards a critical role of nonlinearities within ganglion cell receptive fields for capturing responses of the cells to natural and behaviorally relevant visual stimuli. In the long run, nonlinear phenomena of spatial integration may also prove important for implementing the actual neural code of retinal neurons when designing visual prostheses for the eye. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Local Diversity and Fine-Scale Organization of Receptive Fields in Mouse Visual Cortex

    PubMed Central

    Histed, Mark H.; Yurgenson, Sergey

    2011-01-01

    Many thousands of cortical neurons are activated by any single sensory stimulus, but the organization of these populations is poorly understood. For example, are neurons in mouse visual cortex—whose preferred orientations are arranged randomly—organized with respect to other response properties? Using high-speed in vivo two-photon calcium imaging, we characterized the receptive fields of up to 100 excitatory and inhibitory neurons in a 200 μm imaged plane. Inhibitory neurons had nonlinearly summating, complex-like receptive fields and were weakly tuned for orientation. Excitatory neurons had linear, simple receptive fields that can be studied with noise stimuli and system identification methods. We developed a wavelet stimulus that evoked rich population responses and yielded the detailed spatial receptive fields of most excitatory neurons in a plane. Receptive fields and visual responses were locally highly diverse, with nearby neurons having largely dissimilar receptive fields and response time courses. Receptive-field diversity was consistent with a nearly random sampling of orientation, spatial phase, and retinotopic position. Retinotopic positions varied locally on average by approximately half the receptive-field size. Nonetheless, the retinotopic progression across the cortex could be demonstrated at the scale of 100 μm, with a magnification of ∼10 μm/°. Receptive-field and response similarity were in register, decreasing by 50% over a distance of 200 μm. Together, the results indicate considerable randomness in local populations of mouse visual cortical neurons, with retinotopy as the principal source of organization at the scale of hundreds of micrometers. PMID:22171051

  11. Call sign intelligibility improvement using a spatial auditory display

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1994-01-01

A spatial auditory display was designed for separating the multiple communication channels usually heard over one ear into different virtual auditory positions. The single 19-inch rack-mount device utilizes digital filtering algorithms to separate up to four communication channels. The filters use four different binaural transfer functions, synthesized from actual outer ear measurements, to impose localization cues on the incoming sound. Hardware design features include 'fail-safe' operation in the case of power loss, and microphone/headset interfaces to the mobile launch communication system in use at KSC. An experiment designed to verify the intelligibility advantage of the display used 130 different call signs taken from the communications protocol used at NASA KSC. A 6 to 7 dB intelligibility advantage was found when multiple channels were spatially displayed, compared to monaural listening. The findings suggest that the use of a spatial auditory display could enhance both occupational and operational safety and efficiency of NASA operations.

  12. Spatial processing in the auditory cortex of the macaque monkey

    NASA Astrophysics Data System (ADS)

    Recanzone, Gregg H.

    2000-10-01

    The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel "what" and "where" processing by the primate visual cortex. If "where" information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.

  13. Spatial gradient for unique-feature detection in patients with unilateral neglect: evidence from auditory and visual search.

    PubMed

    Eramudugolla, Ranmalee; Mattingley, Jason B

    2008-01-01

Patients with unilateral spatial neglect following right hemisphere damage are impaired in detecting contralesional targets in both visual and haptic search tasks, and often show a graded improvement in detection performance for more ipsilesional spatial locations. In audition, multiple simultaneous sounds are most effectively perceived if they are distributed along the frequency dimension. Thus, attention to spectro-temporal features alone can allow detection of a target sound amongst multiple simultaneous distractor sounds, regardless of whether these sounds are spatially separated. Spatial bias in attention associated with neglect should not affect auditory search based on spectro-temporal features of a sound target. We report that a right brain damaged patient with neglect demonstrated a significant gradient favouring the ipsilesional side on a visual search task as well as an auditory search task in which the target was a frequency modulated tone amongst steady distractor tones. No such asymmetry was apparent in the auditory search performance of a control patient with a right hemisphere lesion but no neglect. The results suggest that the spatial bias in attention exhibited by neglect patients affects stimulus processing even when spatial information is irrelevant to the task.

  14. Reliability-Weighted Integration of Audiovisual Signals Can Be Modulated by Top-down Attention

    PubMed Central

    Noppeney, Uta

    2018-01-01

    Abstract Behaviorally, it is well established that human observers integrate signals near-optimally weighted in proportion to their reliabilities as predicted by maximum likelihood estimation. Yet, despite abundant behavioral evidence, it is unclear how the human brain accomplishes this feat. In a spatial ventriloquist paradigm, participants were presented with auditory, visual, and audiovisual signals and reported the location of the auditory or the visual signal. Combining psychophysics, multivariate functional MRI (fMRI) decoding, and models of maximum likelihood estimation (MLE), we characterized the computational operations underlying audiovisual integration at distinct cortical levels. We estimated observers’ behavioral weights by fitting psychometric functions to participants’ localization responses. Likewise, we estimated the neural weights by fitting neurometric functions to spatial locations decoded from regional fMRI activation patterns. Our results demonstrate that low-level auditory and visual areas encode predominantly the spatial location of the signal component of a region’s preferred auditory (or visual) modality. By contrast, intraparietal sulcus forms spatial representations by integrating auditory and visual signals weighted by their reliabilities. Critically, the neural and behavioral weights and the variance of the spatial representations depended not only on the sensory reliabilities as predicted by the MLE model but also on participants’ modality-specific attention and report (i.e., visual vs. auditory). These results suggest that audiovisual integration is not exclusively determined by bottom-up sensory reliabilities. Instead, modality-specific attention and report can flexibly modulate how intraparietal sulcus integrates sensory signals into spatial representations to guide behavioral responses (e.g., localization and orienting). PMID:29527567
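
    The MLE model invoked here has a standard closed form: each cue is weighted by its reliability (inverse variance), and the variance of the fused estimate is lower than that of either cue alone. A compact sketch with hypothetical numbers:

      def mle_fusion(x_a, var_a, x_v, var_v):
          # Reliability-weighted combination of two cues.
          w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
          x_hat = w_a * x_a + (1 - w_a) * x_v
          var_hat = 1 / (1 / var_a + 1 / var_v)   # always < min(var_a, var_v)
          return x_hat, var_hat

      # Hypothetical ventriloquist trial: auditory estimate 8 deg (variance 16),
      # visual estimate 2 deg (variance 4) -> fused estimate pulled toward vision.
      x_hat, var_hat = mle_fusion(8.0, 16.0, 2.0, 4.0)   # -> 3.2 deg, variance 3.2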

  15. Chromatic summation and receptive field properties of blue-on and blue-off cells in marmoset lateral geniculate nucleus.

    PubMed

    Eiber, C D; Pietersen, A N J; Zeater, N; Solomon, S G; Martin, P R

    2017-11-22

    The "blue-on" and "blue-off" receptive fields in retina and dorsal lateral geniculate nucleus (LGN) of diurnal primates combine signals from short-wavelength sensitive (S) cone photoreceptors with signals from medium/long wavelength sensitive (ML) photoreceptors. Three questions about this combination remain unresolved. Firstly, is the combination of S and ML signals in these cells linear or non-linear? Secondly, how does the timing of S and ML inputs to these cells influence their responses? Thirdly, is there spatial antagonism within S and ML subunits of the receptive field of these cells? We measured contrast sensitivity and spatial frequency tuning for four types of drifting sine gratings: S cone isolating, ML cone isolating, achromatic (S + ML), and counterphase chromatic (S - ML), in extracellular recordings from LGN of marmoset monkeys. We found that responses to stimuli which modulate both S and ML cones are well predicted by a linear sum of S and ML signals, followed by a saturating contrast-response relation. Differences in sensitivity and timing (i.e. vector combination) between S and ML inputs are needed to explain the amplitude and phase of responses to achromatic (S + ML) and counterphase chromatic (S - ML) stimuli. Best-fit spatial receptive fields for S and/or ML subunits in most cells (>80%) required antagonistic surrounds, usually in the S subunit. The surrounds were however generally weak and had little influence on spatial tuning. The sensitivity and size of S and ML subunits were correlated on a cell-by-cell basis, adding to evidence that blue-on and blue-off receptive fields are specialised to signal chromatic but not spatial contrast. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Auditory spatial attention to speech and complex non-speech sounds in children with autism spectrum disorder.

    PubMed

    Soskey, Laura N; Allen, Paul D; Bennetto, Loisa

    2017-08-01

One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  17. Perinatal exposure to a noncoplanar polychlorinated biphenyl alters tonotopy, receptive fields, and plasticity in rat primary auditory cortex

    PubMed Central

    Kenet, T.; Froemke, R. C.; Schreiner, C. E.; Pessah, I. N.; Merzenich, M. M.

    2007-01-01

    Noncoplanar polychlorinated biphenyls (PCBs) are widely dispersed in human environment and tissues. Here, an exemplar noncoplanar PCB was fed to rat dams during gestation and throughout three subsequent nursing weeks. Although the hearing sensitivity and brainstem auditory responses of pups were normal, exposure resulted in the abnormal development of the primary auditory cortex (A1). A1 was irregularly shaped and marked by internal nonresponsive zones, its topographic organization was grossly abnormal or reversed in about half of the exposed pups, the balance of neuronal inhibition to excitation for A1 neurons was disturbed, and the critical period plasticity that underlies normal postnatal auditory system development was significantly altered. These findings demonstrate that developmental exposure to this class of environmental contaminant alters cortical development. It is proposed that exposure to noncoplanar PCBs may contribute to common developmental disorders, especially in populations with heritable imbalances in neurotransmitter systems that regulate the ratio of inhibition and excitation in the brain. We conclude that the health implications associated with exposure to noncoplanar PCBs in human populations merit a more careful examination. PMID:17460041

  18. Impaired auditory temporal selectivity in the inferior colliculus of aged Mongolian gerbils.

    PubMed

    Khouri, Leila; Lesica, Nicholas A; Grothe, Benedikt

    2011-07-06

    Aged humans show severe difficulties in temporal auditory processing tasks (e.g., speech recognition in noise, low-frequency sound localization, gap detection). A degradation of auditory function with age is also evident in experimental animals. To investigate age-related changes in temporal processing, we compared extracellular responses to temporally variable pulse trains and human speech in the inferior colliculus of young adult (3 month) and aged (3 years) Mongolian gerbils. We observed a significant decrease of selectivity to the pulse trains in neuronal responses from aged animals. This decrease in selectivity led, on the population level, to an increase in signal correlations and therefore a decrease in heterogeneity of temporal receptive fields and a decreased efficiency in encoding of speech signals. A decrease in selectivity to temporal modulations is consistent with a downregulation of the inhibitory transmitter system in aged animals. These alterations in temporal processing could underlie declines in the aging auditory system, which are unrelated to peripheral hearing loss. These declines cannot be compensated by traditional hearing aids (that rely on amplification of sound) but may rather require pharmacological treatment.

  19. Short-term memory stores organized by information domain.

    PubMed

    Noyce, Abigail L; Cestero, Nishmar; Shinn-Cunningham, Barbara G; Somers, David C

    2016-04-01

Vision and audition have complementary affinities, with vision excelling in spatial resolution and audition excelling in temporal resolution. Here, we investigated the relationships among the visual and auditory modalities and spatial and temporal short-term memory (STM) using change detection tasks. We created short sequences of visual or auditory items, such that each item within a sequence arose at a unique spatial location at a unique time. On each trial, two successive sequences were presented; subjects attended to either space (the sequence of locations) or time (the sequence of inter-item intervals) and reported whether the patterns of locations or intervals were identical. Each subject completed blocks of unimodal trials (both sequences presented in the same modality) and crossmodal trials (Sequence 1 visual, Sequence 2 auditory, or vice versa) for both spatial and temporal tasks. We found a strong interaction between modality and task: Spatial performance was best on unimodal visual trials, whereas temporal performance was best on unimodal auditory trials. The order of modalities on crossmodal trials also mattered, suggesting that perceptual fidelity at encoding is critical to STM. Critically, no cost was attributable to crossmodal comparison: In both tasks, performance on crossmodal trials was as good as or better than on the weaker unimodal trials. STM representations of space and time can guide change detection in either the visual or the auditory modality, suggesting that the temporal or spatial organization of STM may supersede sensory-specific organization.

  20. A comparison of methods for teaching receptive labeling to children with autism spectrum disorders.

    PubMed

    Grow, Laura L; Carr, James E; Kodak, Tiffany M; Jostad, Candice M; Kisamore, April N

    2011-01-01

    Many early intervention curricular manuals recommend teaching auditory-visual conditional discriminations (i.e., receptive labeling) using the simple-conditional method in which component simple discriminations are taught in isolation and in the presence of a distracter stimulus before the learner is required to respond conditionally. Some have argued that this procedure might be susceptible to faulty stimulus control such as stimulus overselectivity (Green, 2001). Consequently, there has been a call for the use of alternative teaching procedures such as the conditional-only method, which involves conditional discrimination training from the onset of intervention. The purpose of the present study was to compare the simple-conditional and conditional-only methods for teaching receptive labeling to 3 young children diagnosed with autism spectrum disorders. The data indicated that the conditional-only method was a more reliable and efficient teaching procedure. In addition, several error patterns emerged during training using the simple-conditional method. The implications of the results with respect to current teaching practices in early intervention programs are discussed.

  1. Cortical feedback signals generalise across different spatial frequencies of feedforward inputs.

    PubMed

    Revina, Yulia; Petro, Lucy S; Muckli, Lars

    2017-09-22

    Visual processing in cortex relies on feedback projections contextualising feedforward information flow. Primary visual cortex (V1) has small receptive fields and processes feedforward information at a fine-grained spatial scale, whereas higher visual areas have larger, spatially invariant receptive fields. Therefore, feedback could provide coarse information about the global scene structure or alternatively recover fine-grained structure by targeting small receptive fields in V1. We tested if feedback signals generalise across different spatial frequencies of feedforward inputs, or if they are tuned to the spatial scale of the visual scene. Using a partial occlusion paradigm, functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) we investigated whether feedback to V1 contains coarse or fine-grained information by manipulating the spatial frequency of the scene surround outside an occluded image portion. We show that feedback transmits both coarse and fine-grained information as it carries information about both low (LSF) and high spatial frequencies (HSF). Further, feedback signals containing LSF information are similar to feedback signals containing HSF information, even without a large overlap in spatial frequency bands of the HSF and LSF scenes. Lastly, we found that feedback carries similar information about the spatial frequency band across different scenes. We conclude that cortical feedback signals contain information which generalises across different spatial frequencies of feedforward inputs. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  2. On the spatial specificity of audiovisual crossmodal exogenous cuing effects.

    PubMed

    Lee, Jae; Spence, Charles

    2017-06-01

    It is generally accepted that the presentation of an auditory cue will direct an observer's spatial attention to the region of space from where it originates and therefore facilitate responses to visual targets presented there rather than from a different position within the cued hemifield. However, to date, there has been surprisingly limited evidence published in support of such within-hemifield crossmodal exogenous spatial cuing effects. Here, we report two experiments designed to investigate within- and between-hemifield spatial cuing effects in the case of audiovisual exogenous covert orienting. Auditory cues were presented from one of four frontal loudspeakers (two on either side of central fixation). There were eight possible visual target locations (one above and another below each of the loudspeakers). The auditory cues were evenly separated laterally by 30° in Experiment 1, and by 10° in Experiment 2. The potential cue and target locations were separated vertically by approximately 19° in Experiment 1, and by 4° in Experiment 2. On each trial, the participants made a speeded elevation (i.e., up vs. down) discrimination response to the visual target following the presentation of a spatially nonpredictive auditory cue. Within-hemifield spatial cuing effects were observed only when the auditory cues were presented from the inner locations. Between-hemifield spatial cuing effects were observed in both experiments. Taken together, these results demonstrate that crossmodal exogenous shifts of spatial attention depend on the eccentricity of both the cue and target in a way that has not been made explicit by previous research. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Two-microphone spatial filtering provides speech reception benefits for cochlear implant users in difficult acoustic environments

    PubMed Central

    Goldsworthy, Raymond L.; Delhorne, Lorraine A.; Desloge, Joseph G.; Braida, Louis D.

    2014-01-01

    This article introduces and provides an assessment of a spatial-filtering algorithm based on two closely-spaced (∼1 cm) microphones in a behind-the-ear shell. The evaluated spatial-filtering algorithm used fast (∼10 ms) temporal-spectral analysis to determine the location of incoming sounds and to enhance sounds arriving from straight ahead of the listener. Speech reception thresholds (SRTs) were measured for eight cochlear implant (CI) users using consonant and vowel materials under three processing conditions: An omni-directional response, a dipole-directional response, and the spatial-filtering algorithm. The background noise condition used three simultaneous time-reversed speech signals as interferers located at 90°, 180°, and 270°. Results indicated that the spatial-filtering algorithm can provide speech reception benefits of 5.8 to 10.7 dB SRT compared to an omni-directional response in a reverberant room with multiple noise sources. Given the observed SRT benefits, coupled with an efficient design, the proposed algorithm is promising as a CI noise-reduction solution. PMID:25096120
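
    One common way to realise this kind of fast temporal-spectral spatial filtering is to compare the two microphone signals bin by bin in a short-time Fourier transform and attenuate time-frequency bins whose inter-microphone phase difference implies an off-axis source. The sketch below illustrates that general idea only; it is not the authors' algorithm, and the window length, threshold, and attenuation depth are assumptions.

        import numpy as np
        from scipy.signal import stft, istft

        def front_enhance(x_front, x_rear, fs, d=0.01, c=343.0, thresh=0.5):
            """Keep TF bins consistent with a frontal source; attenuate the rest.
            x_front, x_rear: signals from two mics spaced d metres apart."""
            f, t, Xf = stft(x_front, fs, nperseg=256)
            _, _, Xr = stft(x_rear, fs, nperseg=256)
            dphi = np.angle(Xf * np.conj(Xr))            # inter-mic phase per bin
            expected = 2 * np.pi * f[:, None] * (d / c)  # phase for a frontal source
            err = np.abs(np.angle(np.exp(1j * (dphi - expected))))
            gain = np.where(err < thresh, 1.0, 0.1)      # illustrative ~20 dB attenuation
            _, y = istft(Xf * gain, fs, nperseg=256)
            return y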

  4. The frequency modulated auditory evoked response (FMAER), a technical advance for study of childhood language disorders: cortical source localization and selected case studies

    PubMed Central

    2013-01-01

    Background Language comprehension requires decoding of complex, rapidly changing speech streams. Detecting changes of frequency modulation (FM) within speech is hypothesized as essential for accurate phoneme detection, and thus, for spoken word comprehension. Despite past demonstration of FM auditory evoked response (FMAER) utility in language disorder investigations, it is seldom utilized clinically. This report's purpose is to facilitate clinical use by explaining analytic pitfalls, demonstrating sites of cortical origin, and illustrating potential utility. Results FMAERs collected from children with language disorders, including Developmental Dysphasia, Landau-Kleffner syndrome (LKS), and autism spectrum disorder (ASD) and also normal controls - utilizing multi-channel reference-free recordings assisted by discrete source analysis - provided demonstrations of cortical origin and examples of clinical utility. Recordings from inpatient epileptics with indwelling cortical electrodes provided direct assessment of FMAER origin. The FMAER is shown to normally arise from bilateral posterior superior temporal gyri and immediate temporal lobe surround. Childhood language disorders associated with prominent receptive deficits demonstrate absent left or bilateral FMAER temporal lobe responses. When receptive language is spared, the FMAER may remain present bilaterally. Analyses based upon mastoid or ear reference electrodes are shown to result in erroneous conclusions. Serial FMAER studies may dynamically track status of underlying language processing in LKS. FMAERs in ASD with language impairment may be normal or abnormal. Cortical FMAERs can locate language cortex when conventional cortical stimulation does not. Conclusion The FMAER measures the processing by the superior temporal gyri and adjacent cortex of rapid frequency modulation within an auditory stream. Clinical disorders associated with receptive deficits are shown to demonstrate absent left or bilateral responses. Serial FMAERs may be useful for tracking language change in LKS. Cortical FMAERs may augment invasive cortical language testing in epilepsy surgical patients. The FMAER may be normal in ASD and other language disorders when pathology spares the superior temporal gyrus and surround but presumably involves other brain regions. Ear/mastoid reference electrodes should be avoided and multichannel, reference-free recordings utilized. Source analysis may assist in better understanding of complex FMAER findings. PMID:23351174

  5. Attentional enhancement of spatial resolution: linking behavioural and neurophysiological evidence

    PubMed Central

    Anton-Erxleben, Katharina; Carrasco, Marisa

    2014-01-01

    Attention allows us to select relevant sensory information for preferential processing. Behaviourally, it improves performance in various visual tasks. One prominent effect of attention is the modulation of performance in tasks that involve the visual system’s spatial resolution. Physiologically, attention modulates neuronal responses and alters the profile and position of receptive fields near the attended location. Here, we develop a hypothesis linking the behavioural and electrophysiological evidence. The proposed framework seeks to explain how these receptive field changes enhance the visual system’s effective spatial resolution and how the same mechanisms may also underlie attentional effects on the representation of spatial information. PMID:23422910

  6. Interface Design Implications for Recalling the Spatial Configuration of Virtual Auditory Environments

    NASA Astrophysics Data System (ADS)

    McMullen, Kyla A.

    Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, interfaces that let a listener virtually walk through an auditory environment have emerged only recently. Such an interface has numerous potential applications, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, to widely incorporate spatial audio into real-world systems, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group: users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To address practical scenarios, the present work also assessed the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources. The present study also found that the presence of visual reference frames significantly increased recall accuracy, and that the incorporation of drastic attenuation significantly improved environment recall accuracy. Through investigating these concerns, the present study took initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.
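
    Rendering a source at a chosen direction with a selected HRTF reduces, per ear, to convolving the sound with the corresponding head-related impulse response (HRIR). A minimal sketch, assuming hrir_left and hrir_right are the impulse-response pair for the desired direction taken from the listener's selected HRTF set:

        import numpy as np
        from scipy.signal import fftconvolve

        def spatialize(mono, hrir_left, hrir_right):
            """Render a mono signal at the direction encoded by an HRIR pair
            (hrir_left/hrir_right assumed given for the desired direction)."""
            left = fftconvolve(mono, hrir_left)
            right = fftconvolve(mono, hrir_right)
            out = np.stack([left, right], axis=1)   # (n_samples, 2) stereo
            return out / np.max(np.abs(out))        # normalise to avoid clipping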

  7. Effects of Secondary Task Modality and Processing Code on Automation Trust and Utilization During Simulated Airline Luggage Screening

    NASA Technical Reports Server (NTRS)

    Phillips, Rachel; Madhavan, Poornima

    2010-01-01

    The purpose of this research was to examine the impact of environmental distractions on human trust and utilization of automation during the process of visual search. Participants performed a computer-simulated airline luggage screening task with the assistance of a 70% reliable automated decision aid (called DETECTOR) both with and without environmental distractions. The distraction was implemented as a secondary task in either a competing modality (visual) or non-competing modality (auditory). The secondary task processing code either competed with the luggage screening task (spatial code) or with the automation's textual directives (verbal code). We measured participants' system trust, perceived reliability of the system (when a target weapon was present and absent), compliance, reliance, and confidence when agreeing and disagreeing with the system under both distracted and undistracted conditions. Results revealed that system trust was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Perceived reliability of the system (when the target was present) was significantly higher when the secondary task was visual rather than auditory. Compliance with the aid increased in all conditions except for the auditory-verbal condition, where it decreased. Similar to the pattern for trust, reliance on the automation was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Confidence when agreeing with the system decreased with the addition of any kind of distraction; however, confidence when disagreeing increased with the addition of an auditory secondary task but decreased with the addition of a visual task. A model was developed to represent the research findings and demonstrate the relationship between secondary task modality, processing code, and automation use. Results suggest that the nature of environmental distractions influences interaction with automation via significant effects on trust and system utilization. These findings have implications for both automation design and operator training.

  8. Theoretical Limitations on Functional Imaging Resolution in Auditory Cortex

    PubMed Central

    Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.

    2010-01-01

    Functional imaging can reveal detailed organizational structure in cerebral cortical areas, but neuronal response features and local neural interconnectivity can influence the resulting images, possibly limiting the inferences that can be drawn about neural function. Discerning the fundamental principles of organizational structure in the auditory cortex of multiple species has been somewhat challenging historically both with functional imaging and with electrophysiology. A possible limitation affecting any methodology using pooled neuronal measures may be the relative distribution of response selectivity throughout the population of auditory cortex neurons. One neuronal response type inherited from the cochlea, for example, exhibits a receptive field that increases in size (i.e., decreases in selectivity) at higher stimulus intensities. Even though these neurons appear to represent a minority of auditory cortex neurons, they are likely to contribute disproportionately to the activity detected in functional images, especially if intense sounds are used for stimulation. To evaluate the potential influence of neuronal subpopulations upon functional images of primary auditory cortex, a model array representing cortical neurons was probed with virtual imaging experiments under various assumptions about the local circuit organization. As expected, different neuronal subpopulations were activated preferentially under different stimulus conditions. In fact, stimulus protocols that can preferentially excite selective neurons, resulting in a relatively sparse activation map, have the potential to improve the effective resolution of functional auditory cortical images. These experimental results also make predictions about auditory cortex organization that can be tested with refined functional imaging experiments. PMID:20079343

  9. Impact of Spatial and Verbal Short-Term Memory Load on Auditory Spatial Attention Gradients.

    PubMed

    Golob, Edward J; Winston, Jenna; Mock, Jeffrey R

    2017-01-01

    Short-term memory load can impair attentional control, but prior work shows that the extent of the effect ranges from being very general to very specific. One factor for the mixed results may be reliance on point estimates of memory load effects on attention. Here we used auditory attention gradients as an analog measure to map out the impact of short-term memory load over space. Verbal or spatial information was maintained during an auditory spatial attention task and compared to a no-load condition. Stimuli were presented from five virtual locations in the frontal azimuth plane, and subjects focused on the midline. Reaction times progressively increased for lateral stimuli, indicating an attention gradient. Spatial load further slowed responses at lateral locations, particularly in the left hemispace, but had little effect at midline. Verbal memory load had no (Experiment 1), or a minimal (Experiment 2) influence on reaction times. Spatial and verbal load increased switch costs between memory encoding and attention tasks relative to the no-load condition. The findings show that short-term memory influences the distribution of auditory attention over space, and that the specific pattern depends on the type of information in short-term memory.

  11. Visually induced plasticity of auditory spatial perception in macaques.

    PubMed

    Woods, Timothy M; Recanzone, Gregg H

    2004-09-07

    When experiencing spatially disparate visual and auditory stimuli, a common percept is that the sound originates from the location of the visual stimulus, an illusion known as the ventriloquism effect. This illusion can persist for tens of minutes, a phenomenon termed the ventriloquism aftereffect. The underlying neuronal mechanisms of this rapidly induced plasticity remain unclear; indeed, it remains untested whether similar multimodal interactions occur in other species. We therefore tested whether macaque monkeys experience the ventriloquism aftereffect similar to the way humans do. The ability of two monkeys to determine which side of the midline a sound was presented from was tested before and after a period of 20-60 min in which the monkeys experienced either spatially identical or spatially disparate auditory and visual stimuli. In agreement with human studies, the monkeys did experience a shift in their auditory spatial perception in the direction of the spatially disparate visual stimulus, and the aftereffect did not transfer across sounds that differed in frequency by two octaves. These results show that macaque monkeys experience the ventriloquism aftereffect similar to the way humans do in all tested respects, indicating that these multimodal interactions are a basic phenomenon of the central nervous system.

  12. On the Etiology of Listening Difficulties in Noise Despite Clinically Normal Audiograms

    PubMed Central

    2017-01-01

    Many people with difficulties following conversations in noisy settings have “clinically normal” audiograms, that is, tone thresholds better than 20 dB HL from 0.1 to 8 kHz. This review summarizes the possible causes of such difficulties, and examines established as well as promising new psychoacoustic and electrophysiologic approaches to differentiate between them. Deficits at the level of the auditory periphery are possible even if thresholds remain around 0 dB HL, and become probable when they reach 10 to 20 dB HL. Extending the audiogram beyond 8 kHz can identify early signs of noise-induced trauma to the vulnerable basal turn of the cochlea, and might point to “hidden” losses at lower frequencies that could compromise speech reception in noise. Listening difficulties can also be a consequence of impaired central auditory processing, resulting from lesions affecting the auditory brainstem or cortex, or from abnormal patterns of sound input during developmental sensitive periods and even in adulthood. Such auditory processing disorders should be distinguished from (cognitive) linguistic deficits, and from problems with attention or working memory that may not be specific to the auditory modality. Improved diagnosis of the causes of listening difficulties in noise should lead to better treatment outcomes, by optimizing auditory training procedures to the specific deficits of individual patients, for example. PMID:28002080

  13. Entorhinal cortex receptive fields are modulated by spatial attention, even without movement

    PubMed Central

    König, Peter; König, Seth; Buffalo, Elizabeth A

    2018-01-01

    Grid cells in the entorhinal cortex allow for the precise decoding of position in space. Along with potentially playing an important role in navigation, grid cells have recently been hypothesized to make a general contribution to mental operations. A prerequisite for this hypothesis is that grid cell activity does not critically depend on physical movement. Here, we show that movement of covert attention, without any physical movement, also elicits spatial receptive fields with a triangular tiling of space. In monkeys trained to maintain central fixation while covertly attending to a stimulus moving in the periphery, we identified a significant population (20/141, 14% of neurons at an FDR < 5%) of entorhinal cells with spatially structured receptive fields. This contrasts with recordings obtained in the hippocampus, where grid-like representations were not observed. Our results provide evidence that spatially structured receptive fields in macaque entorhinal cortex do not rely on physical movement. PMID:29537964

  14. Brain correlates of the orientation of auditory spatial attention onto speaker location in a "cocktail-party" situation.

    PubMed

    Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan

    2016-10-01

    Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.
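
    As described above, the N2ac is obtained as a simple difference waveform between the two target sides at anterior electrodes. A minimal sketch; the ~500 ms peak window comes from the abstract, while the arrays and electrode selection are hypothetical placeholders:

        import numpy as np

        def n2ac(erp_left, erp_right, anterior_idx):
            """Difference waveform, left-target minus right-target, averaged
            over an anterior electrode set; ERPs: (n_channels, n_times).
            Arrays and electrode indices are placeholders, not study data."""
            diff = erp_left[anterior_idx] - erp_right[anterior_idx]
            return diff.mean(axis=0)

        # Peak amplitude near 500 ms, e.g. with times in seconds:
        # window = (times > 0.4) & (times < 0.6); peak = n2ac(...)[window].min()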

  15. Dual streams of auditory afferents target multiple domains in the primate prefrontal cortex

    PubMed Central

    Romanski, L. M.; Tian, B.; Fritz, J.; Mishkin, M.; Goldman-Rakic, P. S.; Rauschecker, J. P.

    2009-01-01

    ‘What’ and ‘where’ visual streams define ventrolateral object and dorsolateral spatial processing domains in the prefrontal cortex of nonhuman primates. We looked for similar streams for auditory–prefrontal connections in rhesus macaques by combining microelectrode recording with anatomical tract-tracing. Injection of multiple tracers into physiologically mapped regions AL, ML and CL of the auditory belt cortex revealed that anterior belt cortex was reciprocally connected with the frontal pole (area 10), rostral principal sulcus (area 46) and ventral prefrontal regions (areas 12 and 45), whereas the caudal belt was mainly connected with the caudal principal sulcus (area 46) and frontal eye fields (area 8a). Thus separate auditory streams originate in caudal and rostral auditory cortex and target spatial and non-spatial domains of the frontal lobe, respectively. PMID:10570492

  16. Interdependent encoding of pitch, timbre and spatial location in auditory cortex

    PubMed Central

    Bizley, Jennifer K.; Walker, Kerry M. M.; Silverman, Bernard W.; King, Andrew J.; Schnupp, Jan W. H.

    2009-01-01

    Because we can perceive the pitch, timbre and spatial location of a sound source independently, it seems natural to suppose that cortical processing of sounds might separate out spatial from non-spatial attributes. Indeed, recent studies support the existence of anatomically segregated ‘what’ and ‘where’ cortical processing streams. However, few attempts have been made to measure the responses of individual neurons in different cortical fields to sounds that vary simultaneously across spatial and non-spatial dimensions. We recorded responses to artificial vowels presented in virtual acoustic space to investigate the representations of pitch, timbre and sound source azimuth in both core and belt areas of ferret auditory cortex. A variance decomposition technique was used to quantify the way in which altering each parameter changed neural responses. Most units were sensitive to two or more of these stimulus attributes. Whilst indicating that neural encoding of pitch, location and timbre cues is distributed across auditory cortex, significant differences in average neuronal sensitivity were observed across cortical areas and depths, which could form the basis for the segregation of spatial and non-spatial cues at higher cortical levels. Some units exhibited significant non-linear interactions between particular combinations of pitch, timbre and azimuth. These interactions were most pronounced for pitch and timbre and were less commonly observed between spatial and non-spatial attributes. Such non-linearities were most prevalent in primary auditory cortex, although they tended to be small compared with stimulus main effects. PMID:19228960
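
    The variance decomposition used here partitions each unit's response variance across a full factorial pitch x timbre x azimuth stimulus grid into main effects, with the remainder reflecting interactions. For a balanced grid the main-effect shares can be computed directly from the marginal means, as in this minimal sketch for one unit (the array layout is an assumption):

        import numpy as np

        def variance_decomposition(resp):
            """resp: (n_pitch, n_timbre, n_azimuth) mean responses of one unit
            (layout assumed). Returns the fraction of total response variance
            carried by each stimulus dimension's main effect."""
            grand = resp.mean()
            total = ((resp - grand) ** 2).sum()
            frac = {}
            for name, axes in [("pitch", (1, 2)), ("timbre", (0, 2)), ("azimuth", (0, 1))]:
                marginal = resp.mean(axis=axes)          # main-effect profile
                n_cells = resp.size / marginal.size      # grid cells per marginal level
                frac[name] = n_cells * ((marginal - grand) ** 2).sum() / total
            return frac   # 1 - sum(frac.values()) reflects interaction terms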

  17. [Identification of auditory laterality by means of a new dichotic digit test in Spanish, and body laterality and spatial orientation in children with dyslexia and in controls].

    PubMed

    Olivares-García, M R; Peñaloza-López, Y R; García-Pedroza, F; Jesús-Pérez, S; Uribe-Escamilla, R; Jiménez-de la Sancha, S

    In this study, a new dichotic digit test in Spanish (NDDTS) was applied in order to identify auditory laterality. We also evaluated body laterality and spatial orientation using the Subirana test. Both the dichotic test and the Subirana test for body laterality and spatial orientation were applied in a group of 40 children with dyslexia and in a control group made up of 40 children who were paired according to age and gender. The results of the three evaluations were analysed using the SPSS 10 software application, with Pearson's chi-squared test. It was seen that 42.5% of the children in the group of dyslexics had mixed auditory laterality, compared to 7.5% in the control group (p ≤ 0.05). Body laterality was mixed in 25% of dyslexic children and in 2.5% of the control group (p ≤ 0.05), and spatial disorientation was found in 72.5% of the dyslexic group, compared with only 15% of the control group (p ≤ 0.05). The NDDTS proved to be a useful tool for demonstrating that mixed auditory laterality and auditory predominance of the left ear are linked to dyslexia; these results exceeded those obtained for body laterality. Spatial orientation is indeed altered in children with dyslexia. The importance of this finding makes it necessary to study the central auditory processes in all cases in order to define better rehabilitation strategies in Spanish-speaking children.

  18. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    PubMed

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of the processing of irrelevant speech, which presumably would have distracted from the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. "Where do auditory hallucinations come from?"--a brain morphometry study of schizophrenia patients with inner or outer space hallucinations.

    PubMed

    Plaze, Marion; Paillère-Martinot, Marie-Laure; Penttilä, Jani; Januel, Dominique; de Beaurepaire, Renaud; Bellivier, Franck; Andoh, Jamila; Galinowski, André; Gallarda, Thierry; Artiges, Eric; Olié, Jean-Pierre; Mangin, Jean-François; Martinot, Jean-Luc; Cachia, Arnaud

    2011-01-01

    Auditory verbal hallucinations are a cardinal symptom of schizophrenia. Bleuler and Kraepelin distinguished 2 main classes of hallucinations: hallucinations heard outside the head (outer space, or external, hallucinations) and hallucinations heard inside the head (inner space, or internal, hallucinations). This distinction has been confirmed by recent phenomenological studies that identified 3 independent dimensions in auditory hallucinations: language complexity, self-other misattribution, and spatial location. Brain imaging studies in schizophrenia patients with auditory hallucinations have already investigated language complexity and self-other misattribution, but the neural substrate of hallucination spatial location remains unknown. Magnetic resonance images of 45 right-handed patients with schizophrenia and persistent auditory hallucinations and 20 healthy right-handed subjects were acquired. Two homogeneous subgroups of patients were defined based on the spatial location of the hallucinations: patients with only outer space hallucinations (N=12) and patients with only inner space hallucinations (N=15). Between-group differences were then assessed using 2 complementary brain morphometry approaches: voxel-based morphometry and sulcus-based morphometry. Convergent anatomical differences were detected between the patient subgroups in the right temporoparietal junction (rTPJ). In comparison to healthy subjects, opposite deviations in white matter volumes and sulcus displacements were found in patients with inner space hallucinations and patients with outer space hallucinations. The current results indicate that spatial location of auditory hallucinations is associated with the rTPJ anatomy, a key region of the "where" auditory pathway. The detected tilt in the sulcal junction suggests deviations during early brain maturation, when the superior temporal sulcus and its anterior terminal branch appear and merge.

  20. The relationship between form and function level receptive prosodic abilities in autism.

    PubMed

    Järvinen-Pasley, Anna; Peppé, Susan; King-Smith, Gavin; Heaton, Pamela

    2008-08-01

    Prosody can be conceived as having form (auditory-perceptual characteristics) and function (pragmatic/linguistic meaning). No known studies have examined the relationship between form- and function-level prosodic skills in relation to the effects of stimulus length and/or complexity upon such abilities in autism. Research in this area is both insubstantial and inconclusive. Children with autism and controls completed the receptive tasks of the Profiling Elements of Prosodic Systems in Children (PEPS-C) test, which examines both form- and function-level skills, and a sentence-level task assessing the understanding of intonation. While children with autism were unimpaired in both form and function tasks at the single-word level, they showed significantly poorer performance in the corresponding sentence-level tasks than controls. Implications for future research are discussed.

  1. Generic HRTFs May be Good Enough in Virtual Reality. Improving Source Localization through Cross-Modal Plasticity.

    PubMed

    Berger, Christopher C; Gonzalez-Franco, Mar; Tajadura-Jiménez, Ana; Florencio, Dinei; Zhang, Zhengyou

    2018-01-01

    Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, and spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus with a spatio-temporally aligned visual counterpart enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.

  2. Spatial Cues Provided by Sound Improve Postural Stabilization: Evidence of a Spatial Auditory Map?

    PubMed Central

    Gandemer, Lennie; Parseihian, Gaetan; Kronland-Martinet, Richard; Bourdin, Christophe

    2017-01-01

    It has long been suggested that sound plays a role in the postural control process. Few studies however have explored sound and posture interactions. The present paper focuses on the specific impact of audition on posture, seeking to determine the attributes of sound that may be useful for postural purposes. We investigated the postural sway of young, healthy blindfolded subjects in two experiments involving different static auditory environments. In the first experiment, we compared effects on sway of a simple environment built from three static sound sources in two different rooms: a normal room vs. an anechoic room. In the second experiment, the same auditory environment was enriched in various ways, including the ambisonics synthesis of an immersive environment, and subjects stood on two different surfaces: a foam vs. a normal surface. The results of both experiments suggest that the spatial cues provided by sound can be used to improve postural stability. The richer the auditory environment, the better this stabilization. We interpret these results by invoking the “spatial hearing map” theory: listeners build their own mental representation of their surrounding environment, which provides them with spatial landmarks that help them to better stabilize. PMID:28694770

  3. Heterogeneity in the spatial receptive field architecture of multisensory neurons of the superior colliculus and its effects on multisensory integration

    PubMed Central

    Ghose, Dipanwita; Wallace, Mark T.

    2013-01-01

    Multisensory integration has been widely studied in neurons of the mammalian superior colliculus (SC). This has led to the description of various determinants of multisensory integration, including those based on stimulus- and neuron-specific factors. The most widely characterized of these illustrate the importance of the spatial and temporal relationships of the paired stimuli as well as their relative effectiveness in eliciting a response in determining the final integrated output. Although these stimulus-specific factors have generally been considered in isolation (i.e., manipulating stimulus location while holding all other factors constant), they have an intrinsic interdependency that has yet to be fully elucidated. For example, changes in stimulus location will likely also impact both the temporal profile of response and the effectiveness of the stimulus. The importance of better describing this interdependency is further reinforced by the fact that SC neurons have large receptive fields, and that responses at different locations within these receptive fields are far from equivalent. To address these issues, the current study was designed to examine the interdependency between the stimulus factors of space and effectiveness in dictating the multisensory responses of SC neurons. The results show that neuronal responsiveness changes dramatically with changes in stimulus location – highlighting a marked heterogeneity in the spatial receptive fields of SC neurons. More importantly, this receptive field heterogeneity played a major role in the integrative product exhibited by stimulus pairings, such that pairings at weakly responsive locations of the receptive fields resulted in the largest multisensory interactions. Together, these results provide greater insight into the interrelationship of the factors underlying multisensory integration in SC neurons, and may have important mechanistic implications for multisensory integration and the role it plays in shaping SC-mediated behaviors. PMID:24183964
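
    In this literature the integrative product at each receptive-field location is commonly quantified with an interactive index that compares the multisensory response with the most effective unisensory response; superadditive locations yield large positive values. A minimal sketch of that conventional index, with illustrative values:

        def enhancement_index(multi, visual, auditory):
            """Percent multisensory enhancement relative to the best
            unisensory response: 100 * (CM - SMmax) / SMmax."""
            sm_max = max(visual, auditory)
            return 100.0 * (multi - sm_max) / sm_max

        # Illustrative values for a weakly effective RF location:
        # enhancement_index(9.0, 4.0, 3.0) -> 125.0, showing how weakly
        # responsive locations can yield large enhancement values.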

  4. The thalamo-cortical auditory receptive fields: regulation by the states of vigilance, learning and the neuromodulatory systems.

    PubMed

    Edeline, Jean-Marc

    2003-12-01

    The goal of this review is twofold. First, it aims to describe the dynamic regulation that constantly shapes the receptive fields (RFs) and maps in the thalamo-cortical sensory systems of undrugged animals. Second, it aims to discuss several important issues that remain unresolved at the intersection between behavioral neurosciences and sensory physiology. A first section presents the RF modulations observed when an undrugged animal spontaneously shifts from waking to slow-wave sleep or to paradoxical sleep (also called REM sleep). A second section shows that, in contrast with the general changes described in the first section, behavioral training can induce selective effects which favor the stimulus that has acquired significance during learning. A third section reviews the effects triggered by two major neuromodulators of the thalamo-cortical system--acetylcholine and noradrenaline--which are traditionally involved both in the switch of vigilance states and in learning experiences. The conclusion argues that because the receptive fields and maps of an awake animal are continuously modulated from minute to minute, learning-induced sensory plasticity can be viewed as a "crystallization" of the receptive fields and maps in one of the multiple possible states. Studying the interplays between neuromodulators can help understanding the neurobiological foundations of this dynamic regulation.

  5. The Speech Intelligibility Index and the pure-tone average as predictors of lexical ability in children fit with hearing aids.

    PubMed

    Stiles, Derek J; Bentler, Ruth A; McGregor, Karla K

    2012-06-01

    To determine whether a clinically obtainable measure of audibility, the aided Speech Intelligibility Index (SII; American National Standards Institute, 2007), is more sensitive than the pure-tone average (PTA) at predicting the lexical abilities of children who wear hearing aids (CHA). School-age CHA and age-matched children with normal hearing (CNH) repeated words and nonwords, learned novel words, and completed a standardized receptive vocabulary test. Analyses of covariance allowed comparison of the 2 groups. For CHA, regression analyses determined whether SII held predictive value over and beyond PTA. CHA demonstrated poorer performance than CNH on tests of word and nonword repetition and receptive vocabulary. Groups did not differ on word learning. Aided SII was a stronger predictor of word and nonword repetition and receptive vocabulary than PTA. After accounting for PTA, aided SII remained a significant predictor of nonword repetition and receptive vocabulary. Despite wearing hearing aids, CHA performed more poorly on 3 of 4 lexical measures. Individual differences among CHA were predicted by aided SII. Unlike PTA, aided SII incorporates hearing aid amplification characteristics and speech-frequency weightings and may provide a more valid estimate of the child's access to and ability to learn from auditory input in real-world environments.
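
    The statistical logic can be sketched directly: the PTA is the mean pure-tone threshold (conventionally at 0.5, 1, and 2 kHz), and the question is whether aided SII adds predictive value, i.e. incremental R², once PTA is already in the model. A minimal ordinary-least-squares sketch; the data values are placeholders, not the study's:

        import numpy as np

        def r_squared(X, y):
            """R^2 of an OLS fit of y on X (intercept included)."""
            X = np.column_stack([np.ones(len(y)), X])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return 1 - (y - X @ beta).var() / y.var()

        pta = np.array([45.0, 60.0, 52.0, 70.0, 38.0])     # dB HL (placeholder)
        sii = np.array([0.72, 0.55, 0.68, 0.40, 0.81])     # aided SII (placeholder)
        vocab = np.array([98.0, 82.0, 95.0, 74.0, 104.0])  # receptive scores (placeholder)

        r2_pta = r_squared(pta[:, None], vocab)
        r2_both = r_squared(np.column_stack([pta, sii]), vocab)
        print(f"incremental R^2 for SII beyond PTA: {r2_both - r2_pta:.3f}")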

  6. Aurally aided visual search performance in a dynamic environment

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.

    2008-04-01

    Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.

  7. Evidence for multisensory spatial-to-motor transformations in aiming movements of children.

    PubMed

    King, Bradley R; Kagerer, Florian A; Contreras-Vidal, Jose L; Clark, Jane E

    2009-01-01

    The extant developmental literature investigating age-related differences in the execution of aiming movements has predominantly focused on visuomotor coordination, despite the fact that additional sensory modalities, such as audition and somatosensation, may contribute to motor planning, execution, and learning. The current study investigated the execution of aiming movements toward both visual and acoustic stimuli. In addition, we examined the interaction between visuomotor and auditory-motor coordination as 5- to 10-yr-old participants executed aiming movements to visual and acoustic stimuli before and after exposure to a visuomotor rotation. Children in all age groups demonstrated significant improvement in performance under the visuomotor perturbation, as indicated by decreased initial directional and root mean squared errors. Moreover, children in all age groups demonstrated significant visual aftereffects during the postexposure phase, suggesting a successful update of their spatial-to-motor transformations. Interestingly, these updated spatial-to-motor transformations also influenced auditory-motor performance, as indicated by distorted movement trajectories during the auditory postexposure phase. The distorted trajectories were present during auditory postexposure even though the auditory-motor relationship was not manipulated. Results suggest that by the age of 5 yr, children have developed a multisensory spatial-to-motor transformation for the execution of aiming movements toward both visual and acoustic targets.
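
    A visuomotor rotation of the kind used in such paradigms displaces the visual feedback by rotating the cursor position about the start location through a fixed angle, so that accurate aiming requires updating the spatial-to-motor map. A minimal sketch; the rotation magnitude is not stated in the abstract and is assumed here:

        import numpy as np

        def rotate_cursor(hand_xy, origin_xy, angle_deg=60.0):
            """Show the cursor at the hand position rotated about the origin
            (angle_deg is illustrative, not taken from the study)."""
            th = np.deg2rad(angle_deg)
            R = np.array([[np.cos(th), -np.sin(th)],
                          [np.sin(th),  np.cos(th)]])
            return origin_xy + R @ (hand_xy - origin_xy)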

  8. Supramodal Enhancement of Auditory Perceptual and Cognitive Learning by Video Game Playing.

    PubMed

    Zhang, Yu-Xuan; Tang, Ding-Lan; Moore, David R; Amitay, Sygal

    2017-01-01

    Medical rehabilitation involving behavioral training can produce highly successful outcomes, but those successes are obtained at the cost of long periods of often tedious training, reducing compliance. By contrast, arcade-style video games can be entertaining and highly motivating. We examine here the impact of video game play on contiguous perceptual training. We alternated several periods of auditory pure-tone frequency discrimination (FD) with the popular spatial visual-motor game Tetris played in silence. Tetris play alone did not produce any auditory or cognitive benefits. However, when alternated with FD training it enhanced learning of FD and auditory working memory. The learning-enhancing effects of Tetris play cannot be explained simply by the visual-spatial training involved, as the effects were gone when Tetris play was replaced with another visual-spatial task using Tetris-like stimuli but not incorporated into a game environment. The results indicate that game play enhances learning and transfer of the contiguous auditory experiences, pointing to a promising approach for increasing the efficiency and applicability of rehabilitative training.

  9. Chimaeric sounds reveal dichotomies in auditory perception

    PubMed Central

    Smith, Zachary M.; Delgutte, Bertrand; Oxenham, Andrew J.

    2008-01-01

    By Fourier's theorem [1], signals can be decomposed into a sum of sinusoids of different frequencies. This is especially relevant for hearing, because the inner ear performs a form of mechanical Fourier transform by mapping frequencies along the length of the cochlear partition. An alternative signal decomposition, originated by Hilbert [2], is to factor a signal into the product of a slowly varying envelope and a rapidly varying fine time structure. Neurons in the auditory brainstem [3–6] sensitive to these features have been found in mammalian physiological studies. To investigate the relative perceptual importance of envelope and fine structure, we synthesized stimuli that we call ‘auditory chimaeras’, which have the envelope of one sound and the fine structure of another. Here we show that the envelope is most important for speech reception, and the fine structure is most important for pitch perception and sound localization. When the two features are in conflict, the sound of speech is heard at a location determined by the fine structure, but the words are identified according to the envelope. This finding reveals a possible acoustic basis for the hypothesized ‘what’ and ‘where’ pathways in the auditory cortex [7–10]. PMID:11882898
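
    The envelope/fine-structure factorisation comes straight from the analytic signal: the Hilbert envelope of one sound is imposed on the instantaneous phase (fine structure) of the other. The published chimaeras were constructed band by band in a filter bank; the single-band sketch below shows the core operation only.

        import numpy as np
        from scipy.signal import hilbert

        def chimaera(env_donor, fs_donor):
            """Single-band auditory chimaera: envelope of env_donor on the
            fine structure of fs_donor (equal-length signals assumed; the
            full method applies this per filter band)."""
            env = np.abs(hilbert(env_donor))         # Hilbert envelope
            phase = np.angle(hilbert(fs_donor))      # instantaneous phase
            return env * np.cos(phase)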

  10. The head-centered meridian effect: auditory attention orienting in conditions of impaired visuo-spatial information.

    PubMed

    Olivetti Belardinelli, Marta; Santangelo, Valerio

    2005-07-08

    This paper examines the characteristics of spatial attention orienting in situations of visual impairment. Two groups of subjects, respectively schizophrenic and blind, with different degrees of visual spatial information impairment, were tested. In Experiment 1, the schizophrenic subjects were instructed to detect an auditory target, which was preceded by a visual cue. The cue could appear in the same location as the target, or separated from it by the vertical visual meridian (VM), the vertical head-centered meridian (HCM), or another meridian. Similarly to normal subjects tested with the same paradigm (Ferlazzo, Couyoumdjian, Padovani, and Olivetti Belardinelli, 2002), schizophrenic subjects showed slower reaction times (RTs) when cue and target locations were on opposite sides of the HCM. This HCM effect strengthens the assumption that different auditory and visual spatial maps underlie the representation of attention orienting mechanisms. In Experiment 2, blind subjects were asked to detect an auditory target, which had been preceded by an auditory cue, while staring at an imaginary point. The point was located either to the left or to the right, in order to control for ocular movements and maintain the dissociation between the HCM and the VM. Differences between crossing and no-crossing conditions of the HCM were not found. Therefore, it is possible to consider the HCM effect as a consequence of the interaction between visual and auditory modalities. Related theoretical issues are also discussed.

  11. Effects of laterality and pitch height of an auditory accessory stimulus on horizontal response selection: the Simon effect and the SMARC effect.

    PubMed

    Nishimura, Akio; Yokosawa, Kazuhiko

    2009-08-01

    In the present article, we investigated the effects of pitch height and the presented ear (laterality) of an auditory stimulus, irrelevant to the ongoing visual task, on horizontal response selection. Performance was better when the response and the stimulated ear spatially corresponded (Simon effect), and when the spatial-musical association of response codes (SMARC) correspondence was maintained, that is, when a right (left) response was paired with a high-pitched (low-pitched) tone. These findings reveal an automatic activation of spatially and musically associated responses by task-irrelevant auditory accessory stimuli. Pitch height is a sufficiently strong cue to influence horizontal responses despite the modality difference with the task target.

  12. Atypical central auditory speech-sound discrimination in children who stutter as indexed by the mismatch negativity.

    PubMed

    Jansson-Verkasalo, Eira; Eggers, Kurt; Järvenpää, Anu; Suominen, Kalervo; Van den Bergh, Bea; De Nil, Luc; Kujala, Teija

    2014-09-01

    Recent theoretical conceptualizations suggest that disfluencies in stuttering may arise from several factors, one of them being atypical auditory processing. The main purpose of the present study was to investigate whether speech sound encoding and central auditory discrimination are affected in children who stutter (CWS). Participants were 10 CWS, and 12 typically developing children with fluent speech (TDC). Event-related potentials (ERPs) for syllables and syllable changes [consonant, vowel, vowel-duration, frequency (F0), and intensity changes], which are critical in speech perception and language development, were compared between CWS and TDC. There were no significant group differences in the amplitudes or latencies of the P1 or N2 responses elicited by the standard stimuli. However, the Mismatch Negativity (MMN) amplitude was significantly smaller in CWS than in TDC. For TDC, all deviants of the linguistic multifeature paradigm elicited significant MMN amplitudes, comparable with the results found earlier with the same paradigm in 6-year-old children. In contrast, only the duration change elicited a significant MMN in CWS. The results showed that central auditory speech-sound processing was typical at the level of sound encoding in CWS. In contrast, central speech-sound discrimination, as indexed by the MMN for multiple sound features (both phonetic and prosodic), was atypical in the group of CWS. Findings were linked to existing conceptualizations of stuttering etiology. The reader will be able (a) to describe recent findings on central auditory speech-sound processing in individuals who stutter, (b) to describe the measurement of auditory reception and central auditory speech-sound discrimination, and (c) to describe the findings of central auditory speech-sound discrimination, as indexed by the mismatch negativity (MMN), in children who stutter. Copyright © 2014 Elsevier Inc. All rights reserved.
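
    The MMN itself is a difference measure, the deviant-minus-standard ERP, with amplitude typically quantified in a window around its fronto-central peak. A minimal sketch with hypothetical arrays; the latency window is a typical choice, not taken from this study:

        import numpy as np

        def mmn_amplitude(erp_deviant, erp_standard, times, window=(0.10, 0.25)):
            """Mean amplitude of the deviant-minus-standard difference wave
            within a latency window (seconds) at one electrode; the default
            window is illustrative only."""
            diff = erp_deviant - erp_standard
            sel = (times >= window[0]) & (times <= window[1])
            return diff[sel].mean()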

  13. Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise.

    PubMed

    Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P; Ahlfors, Seppo P; Huang, Samantha; Lin, Fa-Hsuan; Raij, Tommi; Sams, Mikko; Vasios, Christos E; Belliveau, John W

    2011-03-08

    How can we concentrate on relevant sounds in noisy environments? A "gain model" suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A "tuning model" suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for "frequency tagging" of attention effects on maskers. Noise masking reduced early (50-150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50-150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise.
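
    The stimulus construction combines two standard operations: amplitude modulation at distinct rates (39 vs. 41 Hz) to "frequency-tag" the steady-state response of each ear, and spectral notching of the masking noise ±1/6 octave around the tone frequency. A minimal sketch; the carrier frequency and filter order are assumptions:

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        fs = 44100
        t = np.arange(0, 1.0, 1 / fs)
        f_tone = 1000.0                                  # assumed centre frequency

        # 39-Hz amplitude modulation tags one ear (41 Hz for the other ear).
        tagged = (1 + np.sin(2 * np.pi * 39.0 * t)) / 2 * np.sin(2 * np.pi * f_tone * t)

        # White noise notch-filtered +/- 1/6 octave around the tone frequency.
        lo, hi = f_tone * 2 ** (-1 / 6), f_tone * 2 ** (1 / 6)
        sos = butter(4, [lo, hi], btype="bandstop", fs=fs, output="sos")
        masker = sosfiltfilt(sos, np.random.default_rng(0).normal(size=t.size))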

  14. Potential for using visual, auditory, and olfactory cues to manage foraging behaviour and spatial distribution of rangeland livestock

    USDA-ARS?s Scientific Manuscript database

    This paper reviews the literature and reports on the current state of knowledge regarding the potential for managers to use visual (VC), auditory (AC), and olfactory (OC) cues to manage foraging behavior and spatial distribution of rangeland livestock. We present evidence that free-ranging livestock...

  15. Early continuous white noise exposure alters l-alpha-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid receptor subunit glutamate receptor 2 and gamma-aminobutyric acid type A receptor subunit beta3 protein expression in rat auditory cortex.

    PubMed

    Xu, Jinghong; Yu, Liping; Zhang, Jiping; Cai, Rui; Sun, Xinde

    2010-02-15

    Auditory experience during the postnatal critical period is essential for the normal maturation of auditory function. Previous studies have shown that rearing infant rat pups under conditions of continuous moderate-level noise delayed the emergence of adult-like topographic representational order and the refinement of response selectivity in the primary auditory cortex (A1) beyond normal developmental benchmarks and indefinitely blocked the closure of a brief, critical-period window. To gain insight into the molecular mechanisms of these physiological changes after noise rearing, we studied expression of the AMPA receptor subunit GluR2 and GABA(A) receptor subunit beta3 in the auditory cortex after noise rearing. Our results show that continuous moderate-level noise rearing during the early stages of development decreases the expression levels of GluR2 and GABA(A)beta3. Furthermore, noise rearing also induced a significant decrease in the level of GABA(A) receptors relative to AMPA receptors. However, in adult rats, noise rearing did not have significant effects on GluR2 and GABA(A)beta3 expression or the ratio between the two units. These changes could have a role in the cellular mechanisms involved in the delayed maturation of auditory receptive field structure and topographic organization of A1 after noise rearing. Copyright 2009 Wiley-Liss, Inc.

  16. Input-Specific Gain Modulation by Local Sensory Context Shapes Cortical and Thalamic Responses to Complex Sounds.

    PubMed

    Williamson, Ross S; Ahrens, Misha B; Linden, Jennifer F; Sahani, Maneesh

    2016-07-20

    Sensory neurons are customarily characterized by one or more linearly weighted receptive fields describing sensitivity in sensory space and time. We show that in auditory cortical and thalamic neurons, the weight of each receptive field element depends on the pattern of sound falling within a local neighborhood surrounding it in time and frequency. Accounting for this change in effective receptive field with spectrotemporal context improves predictions of both cortical and thalamic responses to stationary complex sounds. Although context dependence varies among neurons and across brain areas, there are strong shared qualitative characteristics. In a spectrotemporally rich soundscape, sound elements modulate neuronal responsiveness more effectively when they coincide with sounds at other frequencies, and less effectively when they are preceded by sounds at similar frequencies. This local-context-driven lability in the representation of complex sounds (a modulation of "input-specific gain" rather than "output gain") may be a widespread motif in sensory processing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
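
    To make the distinction concrete, the sketch below implements one plausible reading of "input-specific gain" (all shapes, the context kernel, and the sigmoid nonlinearity are illustrative assumptions, not the authors' fitted model): each spectrogram element is rescaled by the sound in a local time-frequency neighborhood before a fixed linear receptive field is applied.

        import numpy as np
        from scipy.signal import convolve2d

        rng = np.random.default_rng(0)
        F, T, L = 32, 1000, 10                    # freq channels, time bins, RF lags
        spec = rng.random((F, T))                 # toy spectrogram (level per bin)
        strf = rng.standard_normal((F, L))        # linear spectrotemporal RF
        kernel = np.ones((5, 3)) / 15.0           # local 5-freq x 3-lag neighborhood

        # local context: mean sound level around each spectrogram element
        context = convolve2d(spec, kernel, mode="same")
        # input-specific gain: each element's effective strength depends on context
        gain = 1.0 / (1.0 + np.exp(-(context - context.mean())))
        effective = gain * spec

        # response: the *same* linear RF applied to the gain-modulated input
        resp = np.array([np.sum(strf * effective[:, t - L:t]) for t in range(L, T)])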

  17. A COMPARISON OF METHODS FOR TEACHING RECEPTIVE LABELING TO CHILDREN WITH AUTISM SPECTRUM DISORDERS

    PubMed Central

    Grow, Laura L; Carr, James E; Kodak, Tiffany M; Jostad, Candice M; Kisamore, April N

    2011-01-01

    Many early intervention curricular manuals recommend teaching auditory-visual conditional discriminations (i.e., receptive labeling) using the simple-conditional method in which component simple discriminations are taught in isolation and in the presence of a distracter stimulus before the learner is required to respond conditionally. Some have argued that this procedure might be susceptible to faulty stimulus control such as stimulus overselectivity (Green, 2001). Consequently, there has been a call for the use of alternative teaching procedures such as the conditional-only method, which involves conditional discrimination training from the onset of intervention. The purpose of the present study was to compare the simple-conditional and conditional-only methods for teaching receptive labeling to 3 young children diagnosed with autism spectrum disorders. The data indicated that the conditional-only method was a more reliable and efficient teaching procedure. In addition, several error patterns emerged during training using the simple-conditional method. The implications of the results with respect to current teaching practices in early intervention programs are discussed. PMID:21941380

  18. Distinctive receptive field and physiological properties of a wide-field amacrine cell in the macaque monkey retina

    PubMed Central

    Puller, Christian; Rieke, Fred; Neitz, Jay; Neitz, Maureen

    2015-01-01

    At early stages of visual processing, receptive fields are typically described as subtending local regions of space and thus performing computations on a narrow spatial scale. Nevertheless, stimulation well outside of the classical receptive field can exert clear and significant effects on visual processing. Given the distances over which they occur, the retinal mechanisms responsible for these long-range effects would certainly require signal propagation via active membrane properties. Here the physiology of a wide-field amacrine cell—the wiry cell—in macaque monkey retina is explored, revealing receptive fields that represent a striking departure from the classic structure. A single wiry cell integrates signals over wide regions of retina, 5–10 times larger than the classic receptive fields of most retinal ganglion cells. Wiry cells integrate signals over space much more effectively than predicted from passive signal propagation, and spatial integration is strongly attenuated by blockade of NMDA spikes but insensitive to blockade of NaV channels with TTX. Thus these cells appear well suited for contributing to the long-range interactions of visual signals that characterize many aspects of visual perception. PMID:26133804

  19. Spatial localization deficits and auditory cortical dysfunction in schizophrenia

    PubMed Central

    Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.

    2014-01-01

    Background Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees separation. Results For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608

  20. [Functional anatomy of the cochlear nerve and the central auditory system].

    PubMed

    Simon, E; Perrot, X; Mertens, P

    2009-04-01

    The auditory pathways are a system of afferent fibers (running in the cochlear nerve) and efferent fibers (running in the vestibular nerve) that do not merely transmit information but truly integrate the sound stimulus at each level, analyzing its three fundamental elements: frequency (pitch), intensity, and spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically according to the characteristic frequency of the sound signal that they transmit (tonotopy). Coding of sound intensity is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited around the cell whose characteristic frequency matches the stimulus). Spatial localization of the sound source is made possible by binaural hearing, by commissural pathways at each level of the auditory system, and by integration of the phase shift and intensity difference between the signals reaching the two ears. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea and adjust the peripheral hearing organ to external sound conditions, thus protecting the auditory system or increasing its sensitivity when attention is given to the signal.
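
    As a worked example of the binaural phase-shift cue mentioned above, the Woodworth spherical-head approximation gives the interaural time difference (ITD) produced by a distant source at a given azimuth; the head radius and speed of sound below are conventional textbook values, not figures from this article.

        import math

        def itd_woodworth(azimuth_deg, head_radius_m=0.0875, c=343.0):
            """ITD in seconds for a distant source; 0 deg = straight ahead."""
            theta = math.radians(azimuth_deg)
            return (head_radius_m / c) * (theta + math.sin(theta))

        for az in (0, 30, 60, 90):
            print(f"{az:3d} deg -> ITD = {itd_woodworth(az) * 1e6:.0f} us")
        # ~0, 261, 488, 656 microseconds: the cue grows with azimuth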

  1. Serial and Parallel Processing in the Primate Auditory Cortex Revisited

    PubMed Central

    Recanzone, Gregg H.; Cohen, Yale E.

    2009-01-01

    Over a decade ago it was proposed that the primate auditory cortex is organized in a serial and parallel manner in which there is a dorsal stream processing spatial information and a ventral stream processing non-spatial information. This organization is similar to the “what”/“where” processing of the primate visual cortex. This review will examine several key studies, primarily electrophysiological, that have tested this hypothesis. We also review several human imaging studies that have attempted to define these processing streams in the human auditory cortex. While there is good evidence that spatial information is processed along a particular series of cortical areas, the support for a non-spatial processing stream is not as strong. Why this should be the case and how to better test this hypothesis is also discussed. PMID:19686779

  2. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  3. Technical aspects of a demonstration tape for three-dimensional sound displays

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1990-01-01

    This document was developed to accompany an audio cassette that demonstrates work in three-dimensional auditory displays, developed at the Ames Research Center Aerospace Human Factors Division. It provides a text version of the audio material, and covers the theoretical and technical issues of spatial auditory displays in greater depth than on the cassette. The technical procedures used in the production of the audio demonstration are documented, including the methods for simulating rotorcraft radio communication, synthesizing auditory icons, and using the Convolvotron, a real-time spatialization device.
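
    The Convolvotron's core operation, convolving a source signal with a pair of head-related impulse responses (HRIRs), can be sketched offline in a few lines; the HRIRs below are crude placeholders (real ones are measured per listener and direction), so this illustrates only the signal flow, not the device.

        import numpy as np
        from scipy.signal import fftconvolve

        fs = 44100
        mono = np.random.randn(fs)                        # 1 s of toy source signal
        hrir_left = np.zeros(256); hrir_left[0] = 1.0     # placeholder HRIRs only:
        hrir_right = np.zeros(256); hrir_right[30] = 0.6  # delayed, quieter right ear

        left = fftconvolve(mono, hrir_left)[:mono.size]
        right = fftconvolve(mono, hrir_right)[:mono.size]
        binaural = np.stack([left, right], axis=1)        # 2-channel spatialized output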

  4. A selective impairment of perception of sound motion direction in peripheral space: A case study.

    PubMed

    Thaler, Lore; Paciocco, Joseph; Daley, Mark; Lesniak, Gabriella D; Purcell, David W; Fraser, J Alexander; Dutton, Gordon N; Rossit, Stephanie; Goodale, Melvyn A; Culham, Jody C

    2016-01-08

    It is still an open question whether the auditory system, like the visual system, processes auditory motion independently from other aspects of spatial hearing, such as static location. Here, we report psychophysical data from a patient (female, 42 and 44 years old at the time of two testing sessions), who suffered a bilateral occipital infarction over 12 years earlier and who has extensive damage in the occipital lobe bilaterally, extending into inferior posterior temporal cortex bilaterally and into right parietal cortex. We measured the patient's spatial hearing ability to discriminate static location, detect motion, and perceive motion direction in central auditory space (straight ahead) and in left and right peripheral auditory space (50° to either side of straight ahead). Compared to control subjects, the patient was impaired in her perception of the direction of auditory motion in peripheral auditory space, and the deficit was more pronounced on the right side. However, there was no impairment in her perception of the direction of auditory motion in central space. Furthermore, detection of motion and discrimination of static location were normal in both central and peripheral space. The patient also performed normally in a wide battery of non-spatial audiological tests. Our data are consistent with previous neuropsychological and neuroimaging results that link posterior temporal cortex and parietal cortex with the processing of auditory motion. Most importantly, however, our data break new ground by suggesting a division of auditory motion processing in terms of speed and direction and in terms of central and peripheral space. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Spatial encoding in spinal sensorimotor circuits differs in different wild type mice strains

    PubMed Central

    Thelin, Jonas; Schouenborg, Jens

    2008-01-01

    Background Previous studies in the rat have shown that the spatial organisation of the receptive fields of the nociceptive withdrawal reflex (NWR) system is functionally adapted through experience-dependent mechanisms, termed somatosensory imprinting, during postnatal development. Here we wanted to clarify 1) whether mice exhibit a spatial encoding of sensory input to the NWR similar to that previously found in the rat and 2) whether mouse strains with a poor learning capacity in various behavioural tests, associated with deficient long-term potentiation, also exhibit poor adaptation of the NWR. We examined the organisation of the NWR system in two adult wild-type mouse strains with normal hippocampal long-term potentiation (LTP) and in two adult wild-type strains exhibiting deficient LTP, and compared the results to previous findings in the rat. Receptive fields of reflexes in single hindlimb muscles were mapped with CO2 laser heat pulses. Results While the spatial organisation of the nociceptive receptive fields in mice with normal LTP was very similar to that in rats, the LTP-impaired strains exhibited NWR receptive fields with aberrant sensitivity distributions. However, no difference was found in NWR thresholds or onset C-fibre latencies, suggesting that the mechanisms determining general reflex sensitivity and somatosensory imprinting are different. Conclusion Our results thus confirm that sensory encoding in the mouse and rat NWR is similar, provided that mouse strains with a good learning capability are studied, and raise the possibility that LTP-like mechanisms are involved in somatosensory imprinting. PMID:18495020

  6. Live CT imaging of sound reception anatomy and hearing measurements in the pygmy killer whale, Feresa attenuata.

    PubMed

    Montie, Eric W; Manire, Charlie A; Mann, David A

    2011-03-15

    In June 2008, two pygmy killer whales (Feresa attenuata) were stranded alive near Boca Grande, FL, USA, and were taken into rehabilitation. We used this opportunity to learn about the peripheral anatomy of the auditory system and hearing sensitivity of these rare toothed whales. Three-dimensional (3-D) reconstructions of head structures from X-ray computed tomography (CT) images revealed mandibles that were hollow, lacked a bony lamina medial to the pan bone and contained mandibular fat bodies that extended caudally and abutted the tympanoperiotic complex. Using auditory evoked potential (AEP) procedures, the modulation rate transfer function was determined. Maximum evoked potential responses occurred at modulation frequencies of 500 and 1000 Hz. The AEP-derived audiograms were U-shaped. The lowest hearing thresholds occurred between 20 and 60 kHz, with the best hearing sensitivity at 40 kHz. The auditory brainstem response (ABR) was composed of seven waves and resembled the ABR of the bottlenose and common dolphins. By changing electrode locations, creating 3-D reconstructions of the brain from CT images and measuring the amplitude of the ABR waves, we provided evidence that the neuroanatomical sources of ABR waves I, IV and VI were the auditory nerve, inferior colliculus and the medial geniculate body, respectively. The combination of AEP testing and CT imaging provided a new synthesis of methods for studying the auditory system of cetaceans.
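
    The modulation rate transfer function reported here is built by measuring evoked-potential amplitude at each amplitude-modulation rate. A minimal sketch of that measurement on a synthetic averaged AEP (the sampling rate, epoch length, and noise level are assumptions):

        import numpy as np

        fs = 10000                        # sampling rate (Hz), assumed
        t = np.arange(0, 0.5, 1 / fs)     # 500 ms averaged epoch
        fm = 1000.0                       # modulation rate under test (Hz)
        # synthetic AEP: envelope-following response plus background noise
        aep = 0.2 * np.sin(2 * np.pi * fm * t) + 0.05 * np.random.randn(t.size)

        spectrum = np.fft.rfft(aep) / t.size
        freqs = np.fft.rfftfreq(t.size, 1 / fs)
        amp = 2 * np.abs(spectrum[np.argmin(np.abs(freqs - fm))])
        print(f"response amplitude at {fm:.0f} Hz: {amp:.3f} (true value 0.2)")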

  7. Retrosplenial Cortex Is Required for the Retrieval of Remote Memory for Auditory Cues

    ERIC Educational Resources Information Center

    Todd, Travis P.; Mehlman, Max L.; Keene, Christopher S.; DeAngeli, Nicole E.; Bucci, David J.

    2016-01-01

    The retrosplenial cortex (RSC) has a well-established role in contextual and spatial learning and memory, consistent with its known connectivity with visuo-spatial association areas. In contrast, RSC appears to have little involvement with delay fear conditioning to an auditory cue. However, all previous studies have examined the contribution of…

  8. The Use of Spatialized Speech in Auditory Interfaces for Computer Users Who Are Visually Impaired

    ERIC Educational Resources Information Center

    Sodnik, Jaka; Jakus, Grega; Tomazic, Saso

    2012-01-01

    Introduction: This article reports on a study that explored the benefits and drawbacks of using spatially positioned synthesized speech in auditory interfaces for computer users who are visually impaired (that is, are blind or have low vision). The study was a practical application of such systems--an enhanced word processing application compared…

  9. Evaluating the Use of Auditory Systems to Improve Performance in Combat Search and Rescue

    DTIC Science & Technology

    2012-03-01

    take advantage of human binaural hearing to present spatial information through auditory stimuli as it would occur in the real world. This allows the...multiple operators unambiguously and in a short amount of time. Spatial audio basics Spatial audio works with human binaural hearing to generate... binaural recordings “sound better” when heard in the same location where the recordings were made. While this appears to be related to the acoustic

  10. Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration.

    PubMed

    Wahn, Basil; König, Peter

    2015-01-01

    Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modalities. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs in a pre-attentive processing stage.

  11. Spatial learning while navigating with severely degraded viewing: The role of attention and mobility monitoring

    PubMed Central

    Rand, Kristina M.; Creem-Regehr, Sarah H.; Thompson, William B.

    2015-01-01

    The ability to navigate without getting lost is an important aspect of quality of life. In five studies, we evaluated how spatial learning is affected by the increased demands of keeping oneself safe while walking with degraded vision (mobility monitoring). We proposed that safe low-vision mobility requires attentional resources, providing competition for those needed to learn a new environment. In Experiments 1 and 2 participants navigated along paths in a real-world indoor environment with simulated degraded vision or normal vision. Memory for object locations seen along the paths was better with normal compared to degraded vision. With degraded vision, memory was better when participants were guided by an experimenter (low monitoring demands) versus unguided (high monitoring demands). In Experiments 3 and 4, participants walked while performing an auditory task. Auditory task performance was superior with normal compared to degraded vision. With degraded vision, auditory task performance was better when guided compared to unguided. In Experiment 5, participants performed both the spatial learning and auditory tasks under degraded vision. Results showed that attention mediates the relationship between mobility-monitoring demands and spatial learning. These studies suggest that more attention is required and spatial learning is impaired when navigating with degraded viewing. PMID:25706766

  12. Auditory Attentional Control and Selection during Cocktail Party Listening

    PubMed Central

    Hill, Kevin T.

    2010-01-01

    In realistic auditory environments, people rely on both attentional control and attentional selection to extract intelligible signals from a cluttered background. We used functional magnetic resonance imaging to examine auditory attention to natural speech under such high processing-load conditions. Participants attended to a single talker in a group of 3, identified by the target talker's pitch or spatial location. A catch-trial design allowed us to distinguish activity due to top-down control of attention versus attentional selection of bottom-up information in both the spatial and spectral (pitch) feature domains. For attentional control, we found a left-dominant fronto-parietal network with a bias toward spatial processing in dorsal precentral sulcus and superior parietal lobule, and a bias toward pitch in inferior frontal gyrus. During selection of the talker, attention modulated activity in left intraparietal sulcus when using talker location and in bilateral but right-dominant superior temporal sulcus when using talker pitch. We argue that these networks represent the sources and targets of selective attention in rich auditory environments. PMID:19574393

  13. Modulation of auditory spatial attention by visual emotional cues: differential effects of attentional engagement and disengagement for pleasant and unpleasant cues.

    PubMed

    Harrison, Neil R; Woodhouse, Rob

    2016-05-01

    Previous research has demonstrated that threatening, compared to neutral pictures, can bias attention towards non-emotional auditory targets. Here we investigated which subcomponents of attention contributed to the influence of emotional visual stimuli on auditory spatial attention. Participants indicated the location of an auditory target, after brief (250 ms) presentation of a spatially non-predictive peripheral visual cue. Responses to targets were faster at the location of the preceding visual cue, compared to at the opposite location (cue validity effect). The cue validity effect was larger for targets following pleasant and unpleasant cues compared to neutral cues, for right-sided targets. For unpleasant cues, the crossmodal cue validity effect was driven by delayed attentional disengagement, and for pleasant cues, it was driven by enhanced engagement. We conclude that both pleasant and unpleasant visual cues influence the distribution of attention across modalities and that the associated attentional mechanisms depend on the valence of the visual cue.

  14. Preschool speech articulation and nonword repetition abilities may help predict eventual recovery or persistence of stuttering.

    PubMed

    Spencer, Caroline; Weber-Fox, Christine

    2014-09-01

    In preschool children, we investigated whether expressive and receptive language, phonological, articulatory, and/or verbal working memory proficiencies aid in predicting eventual recovery or persistence of stuttering. Participants were 65 children: 25 children who do not stutter (CWNS) and 40 who stutter (CWS), recruited at age 3;9-5;8. At initial testing, participants were administered the Test of Auditory Comprehension of Language, 3rd edition (TACL-3), Structured Photographic Expressive Language Test, 3rd edition (SPELT-3), Bankson-Bernthal Test of Phonology-Consonant Inventory subtest (BBTOP-CI), Nonword Repetition Test (NRT; Dollaghan & Campbell, 1998), and Test of Auditory Perceptual Skills-Revised (TAPS-R) auditory number memory and auditory word memory subtests. Stuttering behaviors of CWS were assessed in subsequent years, forming groups whose stuttering eventually persisted (CWS-Per; n=19) or recovered (CWS-Rec; n=21). Proficiency scores in morphosyntactic skills, consonant production, verbal working memory for known words, and phonological working memory and speech production for novel nonwords obtained at the initial testing were analyzed for each group. CWS-Per were less proficient than CWNS and CWS-Rec in measures of consonant production (BBTOP-CI) and repetition of novel phonological sequences (NRT). In contrast, receptive language, expressive language, and verbal working memory abilities did not distinguish CWS-Rec from CWS-Per. Binary logistic regression analysis indicated that preschool BBTOP-CI scores and overall NRT proficiency significantly predicted future recovery status. Results suggest that phonological and speech articulation abilities in the preschool years should be considered with other predictive factors as part of a comprehensive risk assessment for the development of chronic stuttering. At the end of this activity the reader will be able to: (1) describe the current status of nonlinguistic and linguistic predictors for recovery and persistence of stuttering; (2) summarize current evidence regarding the potential value of consonant cluster articulation and nonword repetition abilities in helping to predict stuttering outcome in preschool children; (3) discuss the current findings in relation to potential implications for theories of developmental stuttering; (4) discuss the current findings in relation to potential considerations for the evaluation and treatment of developmental stuttering. Copyright © 2014 Elsevier Inc. All rights reserved.
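
    A schematic re-creation of the reported analysis, on synthetic data rather than the study's: a binary logistic regression predicting recovery status from the two preschool measures. Score scales and effect sizes are invented for illustration.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n = 40                                     # number of CWS
        bbtop = rng.normal(50, 10, n)              # consonant-production scores
        nrt = rng.normal(70, 10, n)                # nonword-repetition scores
        # invented ground truth: higher scores favor recovery
        p = 1 / (1 + np.exp(-(0.08 * (bbtop - 50) + 0.06 * (nrt - 70))))
        recovered = rng.random(n) < p

        model = LogisticRegression().fit(np.column_stack([bbtop, nrt]), recovered)
        print("coefficients (BBTOP-CI, NRT):", model.coef_[0])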

  15. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain.

    PubMed

    Groen, Iris I A; Silson, Edward H; Baker, Chris I

    2017-02-19

    Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).

  16. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain

    PubMed Central

    2017-01-01

    Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044013

  17. Tympanal travelling waves in migratory locusts.

    PubMed

    Windmill, James F C; Göpfert, Martin C; Robert, Daniel

    2005-01-01

    Hearing animals, including many vertebrates and insects, have the capacity to analyse the frequency composition of sound. In mammals, frequency analysis relies on the mechanical response of the basilar membrane in the cochlear duct. These vibrations take the form of a slow vibrational wave propagating along the basilar membrane from base to apex. Known as von Békésy's travelling wave, this wave displays amplitude maxima at frequency-specific locations along the basilar membrane, providing a spatial map of the frequency of sound--a tonotopy. In their structure, insect auditory systems may not be as sophisticated as those of mammals, yet some are known to perform sound frequency analysis. In the desert locust, this analysis arises from the mechanical properties of the tympanal membrane. In effect, the spatial decomposition of incident sound into discrete frequency components involves a tympanal travelling wave that funnels mechanical energy to specific tympanal locations, where distinct groups of mechanoreceptor neurones project. Notably, observed tympanal deflections differ from those predicted by drum theory. Although phenomenologically equivalent, von Békésy's and the locust's waves differ in their physical implementation. von Békésy's wave is born from interactions between the anisotropic basilar membrane and the surrounding incompressible fluids, whereas the locust's wave rides on an anisotropic membrane suspended in air. The locust's ear thus combines in one structure the functions of sound reception and frequency decomposition.

  18. Research and Studies Directory for Manpower, Personnel, and Training

    DTIC Science & Technology

    1989-05-01

    LOUIS MO 314-889-6805 CONTROL OF BIOSONAR BEHAVIOR BY THE AUDITORY CORTEX TANGNEY J AIR FORCE OFFICE OF SCIENTIFIC RESEARCH 202-767-5021 A MODEL FOR...VISUAL ATTENTION AUDITORY PERCEPTION OF COMPLEX SOUNDS CONTROL OF BIOSONAR BEHAVIOR BY THE AUDITORY CORTEX EYE MOVEMENTS AND SPATIAL PATTERN VISION EYE

  19. Persistent Thalamic Sound Processing Despite Profound Cochlear Denervation.

    PubMed

    Chambers, Anna R; Salazar, Juan J; Polley, Daniel B

    2016-01-01

    Neurons at higher stages of sensory processing can partially compensate for a sudden drop in peripheral input through a homeostatic plasticity process that increases the gain on weak afferent inputs. Even after a profound unilateral auditory neuropathy where >95% of afferent synapses between auditory nerve fibers and inner hair cells have been eliminated with ouabain, central gain can restore cortical processing and perceptual detection of basic sounds delivered to the denervated ear. In this model of profound auditory neuropathy, auditory cortex (ACtx) processing and perception recover despite the absence of an auditory brainstem response (ABR) or brainstem acoustic reflexes, and only a partial recovery of sound processing at the level of the inferior colliculus (IC), an auditory midbrain nucleus. In this study, we induced a profound cochlear neuropathy with ouabain and asked whether central gain enabled a compensatory plasticity in the auditory thalamus comparable to the full recovery of function previously observed in the ACtx, the partial recovery observed in the IC, or something different entirely. Unilateral ouabain treatment in adult mice effectively eliminated the ABR, yet robust sound-evoked activity persisted in a minority of units recorded from the contralateral medial geniculate body (MGB) of awake mice. Sound-driven MGB units could decode moderate and high-intensity sounds with accuracies comparable to sham-treated control mice, but low-intensity classification was near chance. Pure tone receptive fields and synchronization to broadband pulse trains also persisted, albeit with significantly reduced quality and precision, respectively. MGB decoding of both temporally modulated pulse trains and speech tokens was greatly impaired in ouabain-treated mice. Taken together, the absence of an ABR belied persistent auditory processing at the level of the MGB that was likely enabled by increased central gain. Compensatory plasticity at the level of the auditory thalamus was less robust overall than previous observations in cortex or midbrain. Hierarchical differences in compensatory plasticity following sensorineural hearing loss may reflect differences in GABA circuit organization within the MGB, as compared to the ACtx or IC.

  20. Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise

    PubMed Central

    Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P.; Ahlfors, Seppo P.; Huang, Samantha; Raij, Tommi; Sams, Mikko; Vasios, Christos E.; Belliveau, John W.

    2011-01-01

    How can we concentrate on relevant sounds in noisy environments? A “gain model” suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A “tuning model” suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for “frequency tagging” of attention effects on maskers. Noise masking reduced early (50–150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50–150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise. PMID:21368107
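
    A minimal sketch of the stimulus construction described above; the notch width (±1/6 octave) and tagging rates (39/41 Hz) come from the abstract, while the sampling rate, tone frequency, filter order, and relative levels are assumptions.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        fs, dur, f_tone = 16000, 1.0, 1000.0
        t = np.arange(0, dur, 1 / fs)
        tone = np.sin(2 * np.pi * f_tone * t)

        # white noise with a +/-1/6-octave notch around the tone frequency
        lo, hi = f_tone * 2 ** (-1 / 6), f_tone * 2 ** (1 / 6)
        sos = butter(4, [lo, hi], btype="bandstop", fs=fs, output="sos")
        masker = sosfiltfilt(sos, np.random.randn(t.size))

        fm = 39.0                          # 39 Hz in one ear, 41 Hz in the other
        masker *= 0.5 * (1 + np.sin(2 * np.pi * fm * t))   # frequency-tagging AM
        stimulus = tone + masker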

  1. The influence of the Japanese waving cat on the joint spatial compatibility effect: A replication and extension of Dolk, Hommel, Prinz, and Liepelt (2013).

    PubMed

    Puffe, Lydia; Dittrich, Kerstin; Klauer, Karl Christoph

    2017-01-01

    In a joint go/no-go Simon task, each of two participants is to respond to one of two non-spatial stimulus features by means of a spatially lateralized response. Stimulus position varies horizontally and responses are faster and more accurate when response side and stimulus position match (compatible trial) than when they mismatch (incompatible trial), defining the social Simon effect or joint spatial compatibility effect. This effect was originally explained in terms of action/task co-representation, assuming that the co-actor's action is automatically co-represented. Recent research by Dolk, Hommel, Prinz, and Liepelt (2013) challenged this account by demonstrating joint spatial compatibility effects in a task setting in which non-social objects like a Japanese waving cat were present, but no real co-actor. They postulated that every sufficiently salient object induces joint spatial compatibility effects. However, what makes an object sufficiently salient has so far not been well defined. To scrutinize this open question, the current study manipulated auditory and/or visual attention-attracting cues of a Japanese waving cat within an auditory (Experiment 1) and a visual joint go/no-go Simon task (Experiment 2). Results revealed that joint spatial compatibility effects occurred in the auditory Simon task only when the cat provided auditory cues, whereas no joint spatial compatibility effects were found in the visual Simon task. This demonstrates that it is not the sufficiently salient object alone that leads to joint spatial compatibility effects but rather a complex interaction between features of the object and the stimulus material of the joint go/no-go Simon task.

  2. Multisensory guidance of orienting behavior.

    PubMed

    Maier, Joost X; Groh, Jennifer M

    2009-12-01

    We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using inter-aural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, it seems that either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although these studies have found some evidence for such transformations, many differences in the way the auditory and visual system encode space exist throughout the auditory pathway. We will review these differences at the neural level, and will discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.

  3. Influence of auditory and audiovisual stimuli on the right-left prevalence effect.

    PubMed

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    When auditory stimuli are used in two-dimensional spatial compatibility tasks, where the stimulus and response configurations vary along the horizontal and vertical dimensions simultaneously, a right-left prevalence effect occurs in which horizontal compatibility dominates over vertical compatibility. The right-left prevalence effects obtained with auditory stimuli are typically larger than that obtained with visual stimuli even though less attention should be demanded from the horizontal dimension in auditory processing. In the present study, we examined whether auditory or visual dominance occurs when the two-dimensional stimuli are audiovisual, as well as whether there will be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension and vision tends to dominate audition in this two-dimensional spatial stimulus-response task.

  4. Auditory Spatial Perception: Auditory Localization

    DTIC Science & Technology

    2012-05-01

    Figure 5. Auditory pathways in the central nervous system. LE – left ear, RE – right ear, AN – auditory nerve, CN – cochlear nucleus, TB – trapezoid body, SOC – superior olivary complex, LL – lateral lemniscus, IC – inferior colliculus. Adapted from Aharonson and... Auditory nerve fibers leaving the left and right inner ear connect directly to the synaptic inputs of the cochlear nucleus (CN) on the same (ipsilateral) side of

  5. Do you see what I hear? Vantage point preference and visual dominance in a time-space synaesthete.

    PubMed

    Jarick, Michelle; Stewart, Mark T; Smilek, Daniel; Dixon, Michael J

    2013-01-01

    Time-space synaesthetes "see" time units organized in a spatial form. While the structure might be invariant for most synaesthetes, the perspective by which some view their calendar is somewhat flexible. One well-studied synaesthete L adopts different viewpoints for months seen vs. heard. Interestingly, L claims to prefer her auditory perspective, even though the month names are represented visually upside down. To verify this, we used a spatial-cueing task that included audiovisual month cues. These cues were either congruent with L's preferred "auditory" viewpoint (auditory-only and auditory + month inverted) or incongruent (upright visual-only and auditory + month upright). Our prediction was that L would show enhanced cueing effects (larger response time difference between valid and invalid targets) following the audiovisual congruent cues since both elicit the "preferred" auditory perspective. Also, when faced with conflicting cues, we predicted L would choose the preferred auditory perspective over the visual perspective. As we expected, L did show enhanced cueing effects following the audiovisual congruent cues that corresponded with her preferred auditory perspective, but that the visual perspective dominated when L was faced with both viewpoints simultaneously. The results are discussed with relation to the reification hypothesis of sequence space synaesthesia (Eagleman, 2009).

  6. A Novel Nonparametric Approach for Neural Encoding and Decoding Models of Multimodal Receptive Fields.

    PubMed

    Agarwal, Rahul; Chen, Zhe; Kloosterman, Fabian; Wilson, Matthew A; Sarma, Sridevi V

    2016-07-01

    Pyramidal neurons recorded from the rat hippocampus and entorhinal cortex, such as place and grid cells, have diverse receptive fields, which are either unimodal or multimodal. Spiking activity from these cells encodes information about the spatial position of a freely foraging rat. At fine timescales, a neuron's spike activity also depends significantly on its own spike history. However, due to limitations of current parametric modeling approaches, it remains a challenge to estimate complex, multimodal neuronal receptive fields while incorporating spike history dependence. Furthermore, efforts to decode the rat's trajectory in one- or two-dimensional space from hippocampal ensemble spiking activity have mainly focused on spike history-independent neuronal encoding models. In this letter, we address these two important issues by extending a recently introduced nonparametric neural encoding framework that allows modeling both complex spatial receptive fields and spike history dependencies. Using this extended nonparametric approach, we develop novel algorithms for decoding a rat's trajectory based on recordings of hippocampal place cells and entorhinal grid cells. Results show that both encoding and decoding models derived from our new method performed significantly better than state-of-the-art encoding and decoding models on 6 minutes of test data. In addition, our model's performance remains invariant to the apparent modality of the neuron's receptive field.
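
    For orientation, the baseline that such decoders extend is the standard history-independent Poisson population decoder; the sketch below (synthetic one-dimensional place fields, a single time bin) shows only that baseline, not the letter's nonparametric, spike-history-dependent estimator.

        import numpy as np

        rng = np.random.default_rng(2)
        n_cells, n_pos = 30, 100
        positions = np.linspace(0, 1, n_pos)
        centers = rng.random(n_cells)
        # Gaussian place fields: expected spike count per bin at each position
        rates = 0.5 + 10 * np.exp(-((positions[None, :] - centers[:, None]) ** 2) / 0.01)

        true_x = 37
        counts = rng.poisson(rates[:, true_x])     # one bin of population activity

        # Poisson log posterior over position (flat prior), maximized
        log_post = (counts[:, None] * np.log(rates) - rates).sum(axis=0)
        print(f"true bin {true_x}, decoded bin {np.argmax(log_post)}")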

  7. Linear multivariate evaluation models for spatial perception of soundscape.

    PubMed

    Deng, Zhiyong; Kang, Jian; Wang, Daiwei; Liu, Aili; Kang, Joe Zhengyu

    2015-11-01

    Soundscape is a sound environment that emphasizes the awareness of auditory perception and social or cultural understandings. Spatial perception is a significant aspect of soundscape, yet previous studies on the auditory spatial perception of soundscape environments have been limited. Based on 21 native binaural-recorded soundscape samples and a set of auditory experiments on subjective spatial perception (SSP), an analysis of the relationships among semantic parameters, the interaural cross-correlation coefficient (IACC), the A-weighted equivalent sound pressure level (Leq), dynamics (D), and SSP is introduced to verify the independent effect of each parameter and to determine their possible relationships. The results show that the more noisiness listeners perceived, the poorer their spatial awareness, whereas closer and more directional sound-source images, stronger dynamics, and larger numbers of sound sources in the soundscape improved spatial awareness. The sensations of roughness, sound intensity, and transient dynamics, and the values of Leq and IACC, each have a range within which spatial perception is best. Better spatial awareness also appears to increase listeners' preference slightly. Finally, expressing SSP as functions of the semantic parameters and of Leq, D, and IACC, two linear multivariate evaluation models of subjective spatial perception are proposed.
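
    Of the acoustic predictors used in these models, the IACC is the least standard to compute by hand; below is a sketch using its usual definition, the maximum of the normalized interaural cross-correlation over lags of about ±1 ms (the window length and lag range here are assumptions).

        import numpy as np

        def iacc(left, right, fs, max_lag_ms=1.0):
            """Max normalized interaural cross-correlation over +/-max_lag_ms."""
            max_lag = int(fs * max_lag_ms / 1000)
            denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
            best = 0.0
            for lag in range(-max_lag, max_lag + 1):
                if lag >= 0:
                    num = np.sum(left[lag:] * right[:left.size - lag])
                else:
                    num = np.sum(left[:lag] * right[-lag:])
                best = max(best, abs(num) / denom)
            return best

        fs = 48000
        noise = np.random.randn(fs)
        print(iacc(noise, noise, fs))   # identical ear signals -> IACC = 1.0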

  8. Retrosplenial cortex is required for the retrieval of remote memory for auditory cues.

    PubMed

    Todd, Travis P; Mehlman, Max L; Keene, Christopher S; DeAngeli, Nicole E; Bucci, David J

    2016-06-01

    The retrosplenial cortex (RSC) has a well-established role in contextual and spatial learning and memory, consistent with its known connectivity with visuo-spatial association areas. In contrast, RSC appears to have little involvement with delay fear conditioning to an auditory cue. However, all previous studies have examined the contribution of the RSC to recently acquired auditory fear memories. Since neocortical regions have been implicated in the permanent storage of remote memories, we examined the contribution of the RSC to remotely acquired auditory fear memories. In Experiment 1, retrieval of a remotely acquired auditory fear memory was impaired when permanent lesions (either electrolytic or neurotoxic) were made several weeks after initial conditioning. In Experiment 2, using a chemogenetic approach, we observed impairments in the retrieval of remote memory for an auditory cue when the RSC was temporarily inactivated during testing. In Experiment 3, after injection of a retrograde tracer into the RSC, we observed labeled cells in primary and secondary auditory cortices, as well as the claustrum, indicating that the RSC receives direct projections from auditory regions. Overall our results indicate the RSC has a critical role in the retrieval of remotely acquired auditory fear memories, and we suggest this is related to the quality of the memory, with less precise memories being RSC dependent. © 2016 Todd et al.; Published by Cold Spring Harbor Laboratory Press.

  9. Comparing perceived auditory width to the visual image of a performing ensemble in contrasting bi-modal environments

    PubMed Central

    Valente, Daniel L.; Braasch, Jonas; Myrbeck, Shane A.

    2012-01-01

    Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audiovisual environment in which participants were instructed to make auditory width judgments in dynamic bi-modal settings. The results of these psychophysical tests suggest the importance of congruent audio visual presentation to the ecological interpretation of an auditory scene. Supporting data were accumulated in five rooms of ascending volumes and varying reverberation times. Participants were given an audiovisual matching test in which they were instructed to pan the auditory width of a performing ensemble to a varying set of audio and visual cues in rooms. Results show that both auditory and visual factors affect the collected responses and that the two sensory modalities coincide in distinct interactions. The greatest differences between the panned audio stimuli given a fixed visual width were found in the physical space with the largest volume and the greatest source distance. These results suggest, in this specific instance, a predominance of auditory cues in the spatial analysis of the bi-modal scene. PMID:22280585

  10. Spatialized audio improves call sign recognition during multi-aircraft control.

    PubMed

    Kim, Sungbin; Miller, Michael E; Rusnock, Christina F; Elshaw, John J

    2018-07-01

    We investigated the impact of a spatialized audio display on response time, workload, and accuracy while monitoring auditory information for relevance. The human ability to differentiate sound direction implies that spatial audio may be used to encode information. Therefore, it is hypothesized that spatial audio cues can be applied to aid differentiation of critical versus noncritical verbal auditory information. We used a human performance model and a laboratory study involving 24 participants to examine the effect of applying a notional, automated parser to present audio in a particular ear depending on information relevance. Operator workload and performance were assessed while subjects listened for and responded to relevant audio cues associated with critical information among additional noncritical information. Encoding relevance through spatial location in a spatial audio display system--as opposed to monophonic, binaural presentation--significantly reduced response time and workload, particularly for noncritical information. Future auditory displays employing spatial cues to indicate relevance have the potential to reduce workload and improve operator performance in similar task domains. Furthermore, these displays have the potential to reduce the dependence of workload and performance on the number of audio cues. Published by Elsevier Ltd.
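
    A toy sketch of the display logic described above: verbal messages judged relevant by the (notional) parser are routed to one ear and all other traffic to the other, producing a stereo stream. The routing function and ear assignment are assumptions for illustration only.

        import numpy as np

        fs = 16000

        def spatialize(messages):
            """messages: list of (samples, is_relevant) tuples -> stereo array."""
            chunks = []
            for samples, is_relevant in messages:
                stereo = np.zeros((samples.size, 2))
                stereo[:, 0 if is_relevant else 1] = samples   # left = relevant
                chunks.append(stereo)
            return np.concatenate(chunks)

        msg_a = (np.random.randn(fs), True)    # critical call sign -> left ear
        msg_b = (np.random.randn(fs), False)   # routine traffic -> right ear
        stream = spatialize([msg_a, msg_b])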

  11. Sound localization by echolocating bats

    NASA Astrophysics Data System (ADS)

    Aytekin, Murat

    Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies. The model also reveals how different aspects of sound localization, such as experience-dependent acquisition, adaptation, and extra-auditory influences, can be brought together under a comprehensive framework. This thesis presents a foundation for understanding the representation of auditory space that builds upon acoustic cues, motor control, and learning dynamic associations between action and auditory inputs.

  12. Frontal Cortex Activation Causes Rapid Plasticity of Auditory Cortical Processing

    PubMed Central

    Winkowski, Daniel E.; Bandyopadhyay, Sharba; Shamma, Shihab A.

    2013-01-01

    Neurons in the primary auditory cortex (A1) can show rapid changes in receptive fields when animals are engaged in sound detection and discrimination tasks. The source of a signal to A1 that triggers these changes is suspected to be in frontal cortical areas. How or whether activity in frontal areas can influence activity and sensory processing in A1 and the detailed changes occurring in A1 on the level of single neurons and in neuronal populations remain uncertain. Using electrophysiological techniques in mice, we found that pairing orbitofrontal cortex (OFC) stimulation with sound stimuli caused rapid changes in the sound-driven activity within A1 that are largely mediated by noncholinergic mechanisms. By integrating in vivo two-photon Ca2+ imaging of A1 with OFC stimulation, we found that pairing OFC activity with sounds caused dynamic and selective changes in sensory responses of neural populations in A1. Further, analysis of changes in signal and noise correlation after OFC pairing revealed improvement in neural population-based discrimination performance within A1. This improvement was frequency specific and dependent on correlation changes. These OFC-induced influences on auditory responses resemble behavior-induced influences on auditory responses and demonstrate that OFC activity could underlie the coordination of rapid, dynamic changes in A1 to dynamic sensory environments. PMID:24227723
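
    The two population statistics compared before and after OFC pairing can be illustrated on synthetic data: signal correlation (similarity of trial-averaged tuning) and noise correlation (shared trial-to-trial variability around that tuning). Nothing below is the study's analysis code.

        import numpy as np

        rng = np.random.default_rng(3)
        n_trials, n_stim = 20, 8
        tuning_a = rng.random(n_stim)
        tuning_b = tuning_a + 0.3 * rng.random(n_stim)     # partly shared tuning
        shared = rng.standard_normal((n_trials, n_stim))   # common noise source
        resp_a = tuning_a + 0.5 * shared + 0.5 * rng.standard_normal((n_trials, n_stim))
        resp_b = tuning_b + 0.5 * shared + 0.5 * rng.standard_normal((n_trials, n_stim))

        # signal correlation: similarity of trial-averaged tuning curves
        signal_corr = np.corrcoef(resp_a.mean(0), resp_b.mean(0))[0, 1]
        # noise correlation: correlation of residuals around each mean response
        noise_corr = np.corrcoef((resp_a - resp_a.mean(0)).ravel(),
                                 (resp_b - resp_b.mean(0)).ravel())[0, 1]
        print(f"signal r = {signal_corr:.2f}, noise r = {noise_corr:.2f}")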

  13. Temporal plasticity in auditory cortex improves neural discrimination of speech sounds

    PubMed Central

    Engineer, Crystal T.; Shetake, Jai A.; Engineer, Navzer D.; Vrana, Will A.; Wolf, Jordan T.; Kilgard, Michael P.

    2017-01-01

    Background Many individuals with language learning impairments exhibit temporal processing deficits and degraded neural responses to speech sounds. Auditory training can improve both the neural and behavioral deficits, though significant deficits remain. Recent evidence suggests that vagus nerve stimulation (VNS) paired with rehabilitative therapies enhances both cortical plasticity and recovery of normal function. Objective/Hypothesis We predicted that pairing VNS with rapid tone trains would enhance the primary auditory cortex (A1) response to unpaired novel speech sounds. Methods VNS was paired with tone trains 300 times per day for 20 days in adult rats. Responses to isolated speech sounds, compressed speech sounds, word sequences, and compressed word sequences were recorded in A1 following the completion of VNS-tone train pairing. Results Pairing VNS with rapid tone trains resulted in stronger, faster, and more discriminable A1 responses to speech sounds presented at conversational rates. Conclusion This study extends previous findings by documenting that VNS paired with rapid tone trains altered the neural response to novel unpaired speech sounds. Future studies are necessary to determine whether pairing VNS with appropriate auditory stimuli could potentially be used to improve both neural responses to speech sounds and speech perception in individuals with receptive language disorders. PMID:28131520

  14. The effects of spatially separated call components on phonotaxis in túngara frogs: evidence for auditory grouping.

    PubMed

    Farris, Hamilton E; Rand, A Stanley; Ryan, Michael J

    2002-01-01

    Numerous animals across disparate taxa must identify and locate complex acoustic signals embedded in multiple overlapping signals and ambient noise. A requirement of this task is the ability to group sounds into auditory streams in which sounds are perceived as emanating from the same source. Although numerous studies over the past 50 years have examined aspects of auditory grouping in humans, surprisingly few assays have demonstrated auditory stream formation or the assignment of multicomponent signals to a single source in non-human animals. In our study, we present evidence for auditory grouping in female túngara frogs. In contrast to humans, in whom auditory grouping may be facilitated by the cues produced when sounds arrive from the same location, we show that spatial cues play a limited role in grouping, as females group discrete components of the species' complex call over wide angular separations. Furthermore, we show that, once grouped, the separate call components are weighted differently in recognizing and locating the call, the so-called 'what' and 'where' decisions, respectively. Copyright 2002 S. Karger AG, Basel

  15. The Two-Dimensional Gabor Function Adapted to Natural Image Statistics: A Model of Simple-Cell Receptive Fields and Sparse Structure in Images.

    PubMed

    Loxley, P N

    2017-10-01

    The two-dimensional Gabor function is adapted to natural image statistics, leading to a tractable probabilistic generative model that can be used to model simple cell receptive field profiles, or generate basis functions for sparse coding applications. Learning is found to be most pronounced in three Gabor function parameters representing the size and spatial frequency of the two-dimensional Gabor function and characterized by a nonuniform probability distribution with heavy tails. All three parameters are found to be strongly correlated, resulting in a basis of multiscale Gabor functions with similar aspect ratios and size-dependent spatial frequencies. A key finding is that the distribution of receptive-field sizes is scale invariant over a wide range of values, so there is no characteristic receptive field size selected by natural image statistics. The Gabor function aspect ratio is found to be approximately conserved by the learning rules and is therefore not well determined by natural image statistics. This allows for three distinct solutions: a basis of Gabor functions with sharp orientation resolution at the expense of spatial-frequency resolution, a basis of Gabor functions with sharp spatial-frequency resolution at the expense of orientation resolution, or a basis with unit aspect ratio. Arbitrary mixtures of all three cases are also possible. Two parameters controlling the shape of the marginal distributions in a probabilistic generative model fully account for all three solutions. The best-performing probabilistic generative model for sparse coding applications is found to be a gaussian copula with Pareto marginal probability density functions.
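
    For reference, the two-dimensional Gabor function at the core of this model is an oriented Gaussian envelope multiplied by a sinusoidal carrier. A minimal numpy sketch follows; the parameter names (sigma_x, sigma_y, freq, theta) are illustrative and are not the paper's notation.

    ```python
    import numpy as np

    def gabor_2d(x, y, sigma_x, sigma_y, freq, theta, phase=0.0):
        """Two-dimensional Gabor function: an oriented Gaussian envelope
        multiplying a sinusoidal carrier (parameter names are illustrative)."""
        # Rotate coordinates so the carrier runs along the x' axis.
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-0.5 * ((xr / sigma_x) ** 2 + (yr / sigma_y) ** 2))
        carrier = np.cos(2.0 * np.pi * freq * xr + phase)
        return envelope * carrier

    # A 64x64 receptive-field profile with unit aspect ratio (sigma_y/sigma_x = 1).
    xs, ys = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
    rf = gabor_2d(xs, ys, sigma_x=0.25, sigma_y=0.25, freq=3.0, theta=np.pi / 4)
    ```

    In this parameterization the aspect ratio discussed in the abstract corresponds to sigma_y/sigma_x, and the size and spatial-frequency parameters are the ones reported to be strongly correlated under natural image statistics.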

  16. Spatial limitations of fast temporal segmentation are best modeled by V1 receptive fields.

    PubMed

    Goodbourn, Patrick T; Forte, Jason D

    2013-11-22

    The fine temporal structure of events influences the spatial grouping and segmentation of visual-scene elements. Although adjacent regions flickering asynchronously at high temporal frequencies appear identical, the visual system signals a boundary between them. These "phantom contours" disappear when the gap between regions exceeds a critical value (g(max)). We used g(max) as an index of neuronal receptive-field size, compared it with known receptive-field data along the visual pathway, and thus inferred the location of the mechanism responsible for fast temporal segmentation. Observers viewed a circular stimulus reversing in luminance contrast at 20 Hz for 500 ms. A gap of constant retinal eccentricity segmented each stimulus quadrant; on each trial, participants identified a target quadrant containing counterphasing inner and outer segments. By varying the gap width, we determined g(max) at a range of retinal eccentricities. We found that g(max) increased from 0.3° to 0.8° for eccentricities from 2° to 12°. These values correspond to receptive-field diameters of neurons in primary visual cortex that have been reported in single-cell and fMRI studies and are consistent with the spatial limitations of motion detection. In a further experiment, we found that modulation sensitivity depended critically on the length of the contour and could be predicted by a simple model of spatial summation in early cortical neurons. The results suggest that temporal segmentation is achieved by neurons at the earliest cortical stages of visual processing, most likely in primary visual cortex.

  17. Heterogeneity in the spatial receptive field architecture of multisensory neurons of the superior colliculus and its effects on multisensory integration.

    PubMed

    Ghose, D; Wallace, M T

    2014-01-03

    Multisensory integration has been widely studied in neurons of the mammalian superior colliculus (SC). This has led to the description of various determinants of multisensory integration, including those based on stimulus- and neuron-specific factors. The most widely characterized of these illustrate the importance of the spatial and temporal relationships of the paired stimuli as well as their relative effectiveness in eliciting a response in determining the final integrated output. Although these stimulus-specific factors have generally been considered in isolation (i.e., manipulating stimulus location while holding all other factors constant), they have an intrinsic interdependency that has yet to be fully elucidated. For example, changes in stimulus location will likely also impact both the temporal profile of response and the effectiveness of the stimulus. The importance of better describing this interdependency is further reinforced by the fact that SC neurons have large receptive fields, and that responses at different locations within these receptive fields are far from equivalent. To address these issues, the current study was designed to examine the interdependency between the stimulus factors of space and effectiveness in dictating the multisensory responses of SC neurons. The results show that neuronal responsiveness changes dramatically with changes in stimulus location - highlighting a marked heterogeneity in the spatial receptive fields of SC neurons. More importantly, this receptive field heterogeneity played a major role in the integrative product exhibited by stimulus pairings, such that pairings at weakly responsive locations of the receptive fields resulted in the largest multisensory interactions. Together these results provide greater insight into the interrelationship of the factors underlying multisensory integration in SC neurons, and may have important mechanistic implications for multisensory integration and the role it plays in shaping SC-mediated behaviors. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
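
    The "integrative product" in this literature is commonly quantified as the percent change of the multisensory response relative to the best unisensory response. The toy calculation below, with invented response values, shows the inverse-effectiveness pattern reported here: pairings at weakly responsive receptive-field locations yield proportionally larger interactions. This is a generic index from the SC literature, not code from the study.

    ```python
    def multisensory_enhancement(multi_resp, uni_resps):
        """Interactive index commonly used in the SC literature: percent change
        of the multisensory response relative to the best unisensory response."""
        best_uni = max(uni_resps)
        return 100.0 * (multi_resp - best_uni) / best_uni

    # Inverse effectiveness in miniature: weakly effective locations
    # (small unisensory responses) yield proportionally larger enhancement.
    print(multisensory_enhancement(6.0, [2.0, 3.0]))     # 100% at a weak location
    print(multisensory_enhancement(24.0, [18.0, 20.0]))  # 20% at a strong location
    ```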

  18. The role of retinal bipolar cell in early vision: an implication with analogue networks and regularization theory.

    PubMed

    Yagi, T; Ohshima, S; Funahashi, Y

    1997-09-01

    A linear analogue network model is proposed to describe the neuronal circuit of the outer retina consisting of cones, horizontal cells, and bipolar cells. The model reflects previous physiological findings on the spatial response properties of these neurons to dim illumination and is expressed by physiological mechanisms, i.e., membrane conductances, gap-junctional conductances, and strengths of chemical synaptic interactions. Using the model, we characterized the spatial filtering properties of the bipolar cell receptive field with the standard regularization theory, in which the early vision problems are attributed to minimization of a cost function. The cost function accompanying the present characterization is derived from the linear analogue network model, and one can gain intuitive insights on how physiological mechanisms contribute to the spatial filtering properties of the bipolar cell receptive field. We also elucidated a quantitative relation between the Laplacian of Gaussian operator and the bipolar cell receptive field. From the computational point of view, the dopaminergic modulation of the gap-junctional conductance between horizontal cells is inferred to be a suitable neural adaptation mechanism for transition between photopic and mesopic vision.
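
    For readers unfamiliar with the operator, the Laplacian of Gaussian to which the bipolar-cell receptive field is related has a characteristic center-surround profile. A brief numpy sketch of the standard radially symmetric form follows; the normalization and parameter values are the textbook ones, not taken from the model.

    ```python
    import numpy as np

    def laplacian_of_gaussian(r, sigma):
        """Radially symmetric Laplacian-of-Gaussian profile (standard form)."""
        s2 = sigma ** 2
        return (-1.0 / (np.pi * s2 ** 2)) * (
            1.0 - r ** 2 / (2.0 * s2)
        ) * np.exp(-r ** 2 / (2.0 * s2))

    # Center-surround shape: one sign at r = 0, opposite-sign surround
    # beyond r = sqrt(2) * sigma, as in an antagonistic receptive field.
    r = np.linspace(0.0, 3.0, 200)
    profile = laplacian_of_gaussian(r, sigma=0.5)
    ```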

  19. Mapping feature-sensitivity and attentional modulation in human auditory cortex with functional magnetic resonance imaging

    PubMed Central

    Paltoglou, Aspasia E; Sumner, Christian J; Hall, Deborah A

    2011-01-01

    Feature-specific enhancement refers to the process by which selectively attending to a particular stimulus feature specifically increases the response in the same region of the brain that codes that stimulus property. Whereas there are many demonstrations of this mechanism in the visual system, the evidence is less clear in the auditory system. The present functional magnetic resonance imaging (fMRI) study examined this process for two complex sound features, namely frequency modulation (FM) and spatial motion. The experimental design enabled us to investigate whether selectively attending to FM and spatial motion enhanced activity in those auditory cortical areas that were sensitive to the two features. To control for attentional effort, the difficulty of the target-detection tasks was matched as closely as possible within listeners. Locations of FM-related and motion-related activation were broadly compatible with previous research. The results also confirmed a general enhancement across the auditory cortex when either feature was being attended to, as compared with passive listening. The feature-specific effects of selective attention revealed the novel finding of enhancement for the nonspatial (FM) feature, but not for the spatial (motion) feature. However, attention to spatial features also recruited several areas outside the auditory cortex. Further analyses led us to conclude that feature-specific effects of selective attention are not statistically robust, and appear to be sensitive to the choice of fMRI experimental design and localizer contrast. PMID:21447093

  20. A Feedback Model of Attention Explains the Diverse Effects of Attention on Neural Firing Rates and Receptive Field Structure.

    PubMed

    Miconi, Thomas; VanRullen, Rufin

    2016-02-01

    Visual attention has many effects on neural responses, producing complex changes in firing rates, as well as modifying the structure and size of receptive fields, in both topological and feature space. Several existing models of attention suggest that these effects arise from selective modulation of neural inputs. However, anatomical and physiological observations suggest that attentional modulation targets higher levels of the visual system (such as V4 or MT) rather than input areas (such as V1). Here we propose a simple mechanism that explains how a top-down attentional modulation, falling on higher visual areas, can produce the observed effects of attention on neural responses. Our model requires only the existence of modulatory feedback connections between areas, and short-range lateral inhibition within each area. Feedback connections redistribute the top-down modulation to lower areas, which in turn alters the inputs of other higher-area cells, including those that did not receive the initial modulation. This produces firing rate modulations and receptive field shifts. Simultaneously, short-range lateral inhibition between neighboring cells produces competitive effects that are automatically scaled to receptive field size in any given area. Our model reproduces the observed attentional effects on response rates (response gain, input gain, biased competition automatically scaled to receptive field size) and receptive field structure (shifts and resizing of receptive fields both spatially and in complex feature space), without modifying model parameters. Our model also makes the novel prediction that attentional effects on response curves should shift from response gain to contrast gain as the spatial focus of attention drifts away from the studied cell.

  1. Bone conduction reception: head sensitivity mapping.

    PubMed

    McBride, Maranda; Letowski, Tomasz; Tran, Phuong

    2008-05-01

    This study sought to identify skull locations that are highly sensitive to bone conduction (BC) auditory signal reception and could be used in the design of military radio communication headsets. In Experiment 1, pure tone signals were transmitted via BC to 11 skull locations of 14 volunteers seated in a quiet environment. In Experiment 2, the same signals were transmitted via BC to nine skull locations of 12 volunteers seated in an environment with 60 decibels of white background noise. Hearing threshold levels for each signal per location were measured. In the quiet condition, the condyle had the lowest mean threshold for all signals followed by the jaw angle, mastoid and vertex. In the white noise condition, the condyle also had the lowest mean threshold followed by the mastoid, vertex and temple. Overall results of both experiments were very similar and implicated the condyle as the most effective location.

  2. Neural correlates of auditory scene analysis and perception

    PubMed Central

    Cohen, Yale E.

    2014-01-01

    The auditory system is designed to transform acoustic information from low-level sensory representations into perceptual representations. These perceptual representations are the computational result of the auditory system's ability to group and segregate spectral, spatial and temporal regularities in the acoustic environment into stable perceptual units (i.e., sounds or auditory objects). Current evidence suggests that the cortex--specifically, the ventral auditory pathway--is responsible for the computations most closely related to perceptual representations. Here, we discuss how the transformations along the ventral auditory pathway relate to auditory percepts, with special attention paid to the processing of vocalizations and categorization, and explore recent models of how these areas may carry out these computations. PMID:24681354

  3. Multichannel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, D. R.; Erbe, T.; Wenzel, E. M. (Principal Investigator)

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degrees azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degrees azimuth positions.
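
    The adaptive staircase procedure mentioned in this and the following records is a standard psychophysical method for tracking an intelligibility threshold. The sketch below implements a generic one-up/two-down rule, which converges near the 70.7%-correct point; it illustrates the technique only and is not the exact NASA protocol, and the simulated observer is invented.

    ```python
    import random

    def staircase(respond, start_level, step, n_reversals=8):
        """Generic one-up/two-down adaptive staircase.

        `respond(level) -> bool` runs one trial and reports correctness.
        The threshold estimate is the mean level at the reversal points.
        """
        level, correct_in_row, direction = start_level, 0, 0
        reversals = []
        while len(reversals) < n_reversals:
            if respond(level):
                correct_in_row += 1
                if correct_in_row == 2:        # two correct in a row -> harder
                    correct_in_row = 0
                    if direction == +1:        # was heading up: a reversal
                        reversals.append(level)
                    direction = -1
                    level -= step
            else:                              # any error -> easier
                correct_in_row = 0
                if direction == -1:            # was heading down: a reversal
                    reversals.append(level)
                direction = +1
                level += step
        return sum(reversals) / len(reversals)

    # Invented observer whose probability of a correct response rises with level.
    est = staircase(lambda lv: random.random() < min(0.99, 0.5 + 0.05 * lv),
                    start_level=10.0, step=1.0)
    print(f"estimated threshold: {est:.1f}")
    ```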

  4. Multi-channel spatial auditory display for speech communications

    NASA Astrophysics Data System (ADS)

    Begault, Durand; Erbe, Tom

    1993-10-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximal intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.

  5. Multichannel spatial auditory display for speech communications.

    PubMed

    Begault, D R; Erbe, T

    1994-10-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degrees azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degrees azimuth positions.

  6. Multichannel Spatial Auditory Display for Speech Communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Erbe, Tom

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degree azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degree azimuth positions.

  7. Multi-channel spatial auditory display for speech communications

    NASA Technical Reports Server (NTRS)

    Begault, Durand; Erbe, Tom

    1993-01-01

    A spatial auditory display for multiple speech communications was developed at NASA-Ames Research Center. Input is spatialized by use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four letter call signs used by launch personnel at NASA, against diotic speech babble. Spatial positions at 30 deg azimuth increments were evaluated. The results from eight subjects showed a maximal intelligibility improvement of about 6 to 7 dB when the signal was spatialized to 60 deg or 90 deg azimuth positions.

  8. Using the preschool language scale, fourth edition to characterize language in preschoolers with autism spectrum disorders.

    PubMed

    Volden, Joanne; Smith, Isabel M; Szatmari, Peter; Bryson, Susan; Fombonne, Eric; Mirenda, Pat; Roberts, Wendy; Vaillancourt, Tracy; Waddell, Charlotte; Zwaigenbaum, Lonnie; Georgiades, Stelios; Duku, Eric; Thompson, Ann

    2011-08-01

    The Preschool Language Scale, Fourth Edition (PLS-4; Zimmerman, Steiner, & Pond, 2002) was used to examine syntactic and semantic language skills in preschool children with autism spectrum disorders (ASD) to determine its suitability for use with this population. We expected that PLS-4 performance would be better in more intellectually able children and that receptive skills would be relatively more impaired than expressive abilities, consistent with previous findings in the area of vocabulary. Our sample consisted of 294 newly diagnosed preschool children with ASD. Children were assessed via a battery of developmental measures, including the PLS-4. As expected, PLS-4 scores were higher in more intellectually able children with ASD, and overall, expressive communication was higher than auditory comprehension. However, this overall advantage was not stable across nonverbal developmental levels. Expressive skills were significantly better than receptive skills at the youngest developmental levels, whereas the converse applied in children with more advanced development. The PLS-4 can be used to obtain a general index of early syntax and semantic skill in young children with ASD. Longitudinal data will be necessary to determine how the developmental relationship between receptive and expressive language skills unfolds in children with ASD.

  9. Validating a Method to Assess Lipreading, Audiovisual Gain, and Integration During Speech Reception With Cochlear-Implanted and Normal-Hearing Subjects Using a Talking Head.

    PubMed

    Schreitmüller, Stefan; Frenken, Miriam; Bentz, Lüder; Ortmann, Magdalene; Walger, Martin; Meister, Hartmut

    Watching a talker's mouth is beneficial for speech reception (SR) in many communication settings, especially in noise and when hearing is impaired. Measures for audiovisual (AV) SR can be valuable in the framework of diagnosing or treating hearing disorders. This study addresses the lack of standardized methods in many languages for assessing lipreading, AV gain, and integration. A new method is validated that supplements a German speech audiometric test with visualizations of the synthetic articulation of an avatar, which makes it feasible to lip-sync auditory speech in a highly standardized way. Three hypotheses were formed according to the literature on AV SR that used live or filmed talkers. It was tested whether the respective effects could be reproduced with synthetic articulation: (1) cochlear implant (CI) users have a higher visual-only SR than normal-hearing (NH) individuals, and younger individuals obtain higher lipreading scores than older persons. (2) Both CI and NH listeners gain from AV over unimodal (auditory or visual) presentation of sentences in noise. (3) Both CI and NH listeners efficiently integrate complementary auditory and visual speech features. In a controlled, cross-sectional study with 14 experienced CI users (mean age 47.4) and 14 NH individuals (mean age 46.3, similar broad age distribution), lipreading, AV gain, and integration were assessed with a German matrix sentence test. Visual speech stimuli were synthesized by the articulation of the Talking Head system "MASSY" (Modular Audiovisual Speech Synthesizer), which displayed standardized articulation with respect to the visibility of German phones. In line with the hypotheses and previous literature, CI users had a higher mean visual-only SR than NH individuals (CI, 38%; NH, 12%; p < 0.001). Age was correlated with lipreading such that, within each group, younger individuals obtained higher visual-only scores than older persons (rCI = -0.54; p = 0.046; rNH = -0.78; p < 0.001). Both CI and NH listeners benefitted from AV over unimodal speech, as indexed by calculations of the measures visual enhancement and auditory enhancement (each p < 0.001). Both groups efficiently integrated complementary auditory and visual speech features, as indexed by calculations of the measure integration enhancement (each p < 0.005). Given the good agreement between results from the literature and the outcome of supplementing an existing validated auditory test with synthetic visual cues, the introduced method can be considered an interesting candidate for clinical and scientific applications to assess measures important for AV SR in a standardized manner. This could be beneficial for optimizing the diagnosis and treatment of individual listening and communication disorders, such as cochlear implantation.

  10. Direct Harmonic Linear Navier-Stokes Methods for Efficient Simulation of Wave Packets

    NASA Technical Reports Server (NTRS)

    Streett, C. L.

    1998-01-01

    Wave packets produced by localized disturbances play an important role in transition in three-dimensional boundary layers, such as that on a swept wing. Starting with the receptivity process, we show the effects of wave-space energy distribution on the development of packets and other three-dimensional disturbance patterns. Nonlinearity in the receptivity process is specifically addressed, including demonstration of an effect which can enhance receptivity of traveling crossflow disturbances. An efficient spatial numerical simulation method is allowing most of the simulations presented to be carried out on a workstation.

  11. Background sounds contribute to spectrotemporal plasticity in primary auditory cortex.

    PubMed

    Moucha, Raluca; Pandya, Pritesh K; Engineer, Navzer D; Rathbun, Daniel L; Kilgard, Michael P

    2005-05-01

    The mammalian auditory system evolved to extract meaningful information from complex acoustic environments. Spectrotemporal selectivity of auditory neurons provides a potential mechanism to represent natural sounds. Experience-dependent plasticity mechanisms can remodel the spectrotemporal selectivity of neurons in primary auditory cortex (A1). Electrical stimulation of the cholinergic nucleus basalis (NB) enables plasticity in A1 that parallels natural learning and is specific to acoustic features associated with NB activity. In this study, we used NB stimulation to explore how cortical networks reorganize after experience with frequency-modulated (FM) sweeps, and how background stimuli contribute to spectrotemporal plasticity in rat auditory cortex. Pairing an 8-4 kHz FM sweep with NB stimulation 300 times per day for 20 days decreased tone thresholds, frequency selectivity, and response latency of A1 neurons in the region of the tonotopic map activated by the sound. In an attempt to modify neuronal response properties across all of A1, the same NB activation was paired in a second group of rats with five downward FM sweeps, each spanning a different octave. No changes in FM selectivity or receptive field (RF) structure were observed when the neural activation was distributed across the cortical surface. However, the addition of unpaired background sweeps of different rates or direction was sufficient to alter RF characteristics across the tonotopic map in a third group of rats. These results extend earlier observations that cortical neurons can develop stimulus-specific plasticity and indicate that background conditions can strongly influence cortical plasticity.

  12. The influence of chromatic context on binocular color rivalry: Perception and neural representation

    PubMed Central

    Hong, Sang Wook; Shevell, Steven K.

    2008-01-01

    The predominance of rivalrous targets is affected by surrounding context when stimuli rival in orientation, motion or color. This study investigated the influence of chromatic context on binocular color rivalry. The predominance of rivalrous chromatic targets was measured in various surrounding contexts. The first experiment showed that a chromatic surround's influence was stronger when the surround was uniform or a grating with luminance contrast (chromatic/black grating) compared to an equiluminant grating (chromatic/white). The second experiment revealed virtually no effect of the orientation of the surrounding chromatic context, using chromatically rivalrous vertical gratings. These results are consistent with a chromatic representation of the context by a non-oriented, chromatically selective and spatially antagonistic receptive field. Neither a double-opponent receptive field nor a receptive field without spatial antagonism accounts for the influence of context on binocular color rivalry. PMID:18331750

  13. Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields.

    PubMed

    Yildiz, Izzet B; Mesgarani, Nima; Deneve, Sophie

    2016-12-07

    A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene. Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets. Whereas the linear encoding model poorly reflects the selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data. Copyright © 2016 Yildiz et al.
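
    The "explaining away" dynamic can be caricatured with a toy decoder in which two units jointly minimize the reconstruction error of a stimulus: once one unit accounts for a shared feature, the residual no longer drives the other. This is only a schematic of the idea, written by us; it is not the paper's Bayesian model.

    ```python
    import numpy as np

    # Two overlapping "features" (columns) that both partially match the input.
    D = np.array([[1.0, 1.0],
                  [1.0, 0.0]])      # feature 1 spans both samples, feature 2 one
    s = np.array([1.0, 1.0])        # the stimulus is exactly feature 1

    r = np.zeros(2)                 # unit responses (decoding coefficients)
    for _ in range(200):            # gradient descent on ||s - D r||^2
        r += 0.1 * D.T @ (s - D @ r)
        r = np.maximum(r, 0.0)      # firing rates are non-negative

    print(r)  # ~[1, 0]: unit 1 explains the stimulus away from unit 2
    ```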

  14. Lack of Multisensory Integration in Hemianopia: No Influence of Visual Stimuli on Aurally Guided Saccades to the Blind Hemifield

    PubMed Central

    Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan

    2015-01-01

    In patients with visual hemifield defects, residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952

  15. Conjunctions between motion and disparity are encoded with the same spatial resolution as disparity alone.

    PubMed

    Allenmark, Fredrik; Read, Jenny C A

    2012-10-10

    Neurons in cortical area MT respond well to transparent streaming motion in distinct depth planes, such as caused by observer self-motion, but do not contain subregions excited by opposite directions of motion. We therefore predicted that spatial resolution for transparent motion/disparity conjunctions would be limited by the size of MT receptive fields, just as spatial resolution for disparity is limited by the much smaller receptive fields found in primary visual cortex, V1. We measured this using a novel "joint motion/disparity grating," on which human observers detected motion/disparity conjunctions in transparent random-dot patterns containing dots streaming in opposite directions on two depth planes. Surprisingly, observers showed the same spatial resolution for these as for pure disparity gratings. We estimate the limiting receptive field diameter at 11 arcmin, similar to V1 and much smaller than MT. Higher internal noise for detecting joint motion/disparity produces a slightly lower high-frequency cutoff of 2.5 cycles per degree (cpd) versus 3.3 cpd for disparity. This suggests that information on motion/disparity conjunctions is available in the population activity of V1 and that this information can be decoded for perception even when it is invisible to neurons in MT.

  16. Receptivity of Hypersonic Boundary Layers to Acoustic and Vortical Disturbances

    NASA Technical Reports Server (NTRS)

    Balakamar, P.; Kegerise, Michael A.

    2011-01-01

    Boundary layer receptivity to two-dimensional acoustic disturbances at different incidence angles and to vortical disturbances is investigated by solving the Navier-Stokes equations for Mach 6 flow over a 7deg half-angle sharp-tipped wedge and a cone. Higher order spatial and temporal schemes are employed to obtain the solution. The results show that the instability waves are generated in the leading edge region and that the boundary layer is much more receptive to slow acoustic waves as compared to the fast waves. It is found that the receptivity of the boundary layer on the windward side (with respect to the acoustic forcing) decreases when the incidence angle is increased from 0 to 30 degrees. However, the receptivity coefficient for the leeward side is found to vary relatively weakly with the incidence angle. The maximum receptivity is obtained when the wave incidence angle is about 20 degrees. Vortical disturbances also generate unstable second modes; however, the receptivity coefficients are smaller than those for the acoustic waves. Vortical disturbances first generate the fast acoustic modes, which then switch to the slow mode near the continuous spectrum.

  17. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories.

    PubMed

    Karns, Christina M; Isbell, Elif; Giuliano, Ryan J; Neville, Helen J

    2015-06-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3-5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  18. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories

    PubMed Central

    Karns, Christina M.; Isbell, Elif; Giuliano, Ryan J.; Neville, Helen J.

    2015-01-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) in human children across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults using a naturalistic dichotic listening paradigm, characterizing the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. PMID:26002721

  19. Spatial and temporal relationships of electrocorticographic alpha and gamma activity during auditory processing.

    PubMed

    Potes, Cristhian; Brunner, Peter; Gunduz, Aysegul; Knight, Robert T; Schalk, Gerwin

    2014-08-15

    Neuroimaging approaches have implicated multiple brain sites in musical perception, including the posterior part of the superior temporal gyrus and adjacent perisylvian areas. However, the detailed spatial and temporal relationship of neural signals that support auditory processing is largely unknown. In this study, we applied a novel inter-subject analysis approach to electrophysiological signals recorded from the surface of the brain (electrocorticography, ECoG) in ten human subjects. This approach allowed us to reliably identify those ECoG features that were related to the processing of a complex auditory stimulus (i.e., a continuous piece of music) and to investigate their spatial, temporal, and causal relationships. Our results identified stimulus-related modulations in the alpha (8-12 Hz) and high gamma (70-110 Hz) bands at neuroanatomical locations implicated in auditory processing. Specifically, we identified stimulus-related ECoG modulations in the alpha band in areas adjacent to primary auditory cortex, which are known to receive afferent auditory projections from the thalamus (80 of a total of 15,107 tested sites). In contrast, we identified stimulus-related ECoG modulations in the high gamma band not only in areas close to primary auditory cortex but also in other perisylvian areas known to be involved in higher-order auditory processing, and in superior premotor cortex (412/15,107 sites). Across all implicated areas, modulations in the high gamma band preceded those in the alpha band by 280 ms, and activity in the high gamma band causally predicted alpha activity, but not vice versa (Granger causality, p < 1e-8). Additionally, detailed analyses using Granger causality identified causal relationships of high gamma activity between distinct locations in early auditory pathways within superior temporal gyrus (STG) and posterior STG, between posterior STG and inferior frontal cortex, and between STG and premotor cortex. Evidence suggests that these relationships reflect direct cortico-cortical connections rather than common driving input from subcortical structures such as the thalamus. In summary, our inter-subject analyses defined the spatial and temporal relationships between music-related brain activity in the alpha and high gamma bands. They provide experimental evidence supporting current theories about the putative mechanisms of alpha and gamma activity, i.e., reflections of thalamo-cortical interactions and local cortical neural activity, respectively, and the results are also in agreement with existing functional models of auditory processing. Copyright © 2014 Elsevier Inc. All rights reserved.
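
    The Granger-causality logic used here tests whether past values of one band-power envelope improve prediction of another. A minimal sketch with synthetic envelopes follows, using statsmodels' grangercausalitytests as one standard implementation; the study's actual ECoG pipeline is considerably more involved, and all signals below are invented.

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(1)
    n = 2000
    gamma = rng.standard_normal(n)   # stand-in high-gamma power envelope
    alpha = np.zeros(n)              # stand-in alpha power envelope
    for t in range(1, n):            # alpha driven by lagged gamma by construction
        alpha[t] = (0.5 * alpha[t - 1] + 0.8 * gamma[t - 1]
                    + 0.1 * rng.standard_normal())

    # Does gamma Granger-cause alpha? Columns are ordered [effect, cause].
    data = np.column_stack([alpha, gamma])
    results = grangercausalitytests(data, maxlag=2)
    ```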

  20. Do you see what I hear? Vantage point preference and visual dominance in a time-space synaesthete

    PubMed Central

    Jarick, Michelle; Stewart, Mark T.; Smilek, Daniel; Dixon, Michael J.

    2013-01-01

    Time-space synaesthetes “see” time units organized in a spatial form. While the structure might be invariant for most synaesthetes, the perspective by which some view their calendar is somewhat flexible. One well-studied synaesthete L adopts different viewpoints for months seen vs. heard. Interestingly, L claims to prefer her auditory perspective, even though the month names are represented visually upside down. To verify this, we used a spatial-cueing task that included audiovisual month cues. These cues were either congruent with L's preferred “auditory” viewpoint (auditory-only and auditory + month inverted) or incongruent (upright visual-only and auditory + month upright). Our prediction was that L would show enhanced cueing effects (larger response time difference between valid and invalid targets) following the audiovisual congruent cues since both elicit the “preferred” auditory perspective. Also, when faced with conflicting cues, we predicted L would choose the preferred auditory perspective over the visual perspective. As we expected, L did show enhanced cueing effects following the audiovisual congruent cues that corresponded with her preferred auditory perspective, but that the visual perspective dominated when L was faced with both viewpoints simultaneously. The results are discussed with relation to the reification hypothesis of sequence space synaesthesia (Eagleman, 2009). PMID:24137140

  1. The role of spatiotemporal and spectral cues in segregating short sound events: evidence from auditory Ternus display.

    PubMed

    Wang, Qingcui; Bao, Ming; Chen, Lihan

    2014-01-01

    Previous studies using auditory sequences with rapid repetition of tones revealed that spatiotemporal cues and spectral cues are important for fusing or segregating sound streams. However, the perceptual grouping was partially driven by the cognitive processing of the periodicity cues of the long sequence. Here, we investigate whether perceptual groupings (spatiotemporal grouping vs. frequency grouping) could also be applicable to short auditory sequences, where auditory perceptual organization is mainly subserved by lower levels of perceptual processing. To answer this question, we conducted two experiments using an auditory Ternus display. The display was composed of three speakers (A, B and C), with each speaker consecutively emitting one sound, forming two frames (AB and BC). Experiment 1 manipulated both spatial and temporal factors. We implemented three 'within-frame intervals' (WFIs, or intervals between A and B, and between B and C), seven 'inter-frame intervals' (IFIs, or intervals between AB and BC) and two different speaker layouts (inter-distance of speakers: near or far). Experiment 2 manipulated the differentiations of frequencies between two auditory frames, in addition to the spatiotemporal cues as in Experiment 1. Listeners were required to make a two-alternative forced choice (2AFC) to report the perception of a given Ternus display: element motion (auditory apparent motion from sound A to B to C) or group motion (auditory apparent motion from sound 'AB' to 'BC'). The results indicate that the perceptual grouping of short auditory sequences (materialized by the perceptual decisions of the auditory Ternus display) was modulated by temporal and spectral cues, with the latter contributing more to segregating auditory events. Spatial layout played a lesser role in perceptual organization. These results could be accounted for by the 'peripheral channeling' theory.

  2. Spectrotemporal Modulation Sensitivity as a Predictor of Speech-Reception Performance in Noise With Hearing Aids

    PubMed Central

    Danielsson, Henrik; Hällgren, Mathias; Stenfelt, Stefan; Rönnberg, Jerker; Lunner, Thomas

    2016-01-01

    The audiogram predicts < 30% of the variance in speech-reception thresholds (SRTs) for hearing-impaired (HI) listeners fitted with individualized frequency-dependent gain. The remaining variance could reflect suprathreshold distortion in the auditory pathways or nonauditory factors such as cognitive processing. The relationship between a measure of suprathreshold auditory function—spectrotemporal modulation (STM) sensitivity—and SRTs in noise was examined for 154 HI listeners fitted with individualized frequency-specific gain. SRTs were measured for 65-dB SPL sentences presented in speech-weighted noise or four-talker babble to an individually programmed master hearing aid, with the output of an ear-simulating coupler played through insert earphones. Modulation-depth detection thresholds were measured over headphones for STM (2 cycles/octave density, 4-Hz rate) applied to an 85-dB SPL, 2-kHz lowpass-filtered pink-noise carrier. SRTs were correlated with both the high-frequency (2–6 kHz) pure-tone average (HFA; R2 = .31) and STM sensitivity (R2 = .28). Combined with the HFA, STM sensitivity significantly improved the SRT prediction (ΔR2 = .13; total R2 = .44). The remaining unaccounted variance might be attributable to variability in cognitive function and other dimensions of suprathreshold distortion. STM sensitivity was most critical in predicting SRTs for listeners < 65 years old or with HFA < 53 dB HL. Results are discussed in the context of previous work suggesting that STM sensitivity for low rates and low-frequency carriers is impaired by a reduced ability to use temporal fine-structure information to detect dynamic spectra. STM detection is a fast test of suprathreshold auditory function for frequencies < 2 kHz that complements the HFA to predict variability in hearing-aid outcomes for speech perception in noise. PMID:27815546
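
    The incremental-variance argument here (STM sensitivity adding ΔR2 over the HFA) is ordinary hierarchical linear regression. A sketch with wholly synthetic data illustrates the computation; none of the numbers below are from the study, and the effect sizes are invented.

    ```python
    import numpy as np

    def r_squared(X, y):
        """R^2 of an ordinary least-squares fit with an intercept."""
        X1 = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        resid = y - X1 @ beta
        return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

    rng = np.random.default_rng(2)
    n = 154                                  # matching the study's sample size
    hfa = rng.normal(50, 10, n)              # synthetic high-frequency average
    stm = rng.normal(0, 1, n)                # synthetic STM detection threshold
    srt = 0.05 * hfa + 0.8 * stm + rng.normal(0, 1, n)  # synthetic SRTs

    r2_hfa = r_squared(hfa[:, None], srt)                  # HFA alone
    r2_both = r_squared(np.column_stack([hfa, stm]), srt)  # HFA + STM
    print(f"R2(HFA) = {r2_hfa:.2f}, delta R2 = {r2_both - r2_hfa:.2f}")
    ```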

  3. Interactions between the spatial and temporal stimulus factors that influence multisensory integration in human performance.

    PubMed

    Stevenson, Ryan A; Fister, Juliane Krueger; Barnett, Zachary P; Nidiffer, Aaron R; Wallace, Mark T

    2012-05-01

    In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.

  4. Good Holders, Bad Shufflers: An Examination of Working Memory Processes and Modalities in Children with and without Attention-Deficit/Hyperactivity Disorder.

    PubMed

    Simone, Ashley N; Bédard, Anne-Claude V; Marks, David J; Halperin, Jeffrey M

    2016-01-01

    The aim of this study was to examine working memory (WM) modalities (visual-spatial and auditory-verbal) and processes (maintenance and manipulation) in children with and without attention-deficit/hyperactivity disorder (ADHD). The sample consisted of 63 8-year-old children with ADHD and an age- and sex-matched non-ADHD comparison group (N=51). Auditory-verbal and visual-spatial WM were assessed using the Digit Span and Spatial Span subtests from the Wechsler Intelligence Scale for Children Integrated - Fourth Edition. WM maintenance and manipulation were assessed via forward and backward span indices, respectively. Data were analyzed using a 3-way Group (ADHD vs. non-ADHD)×Modality (Auditory-Verbal vs. Visual-Spatial)×Condition (Forward vs. Backward) Analysis of Variance (ANOVA). Secondary analyses examined differences between Combined and Predominantly Inattentive ADHD presentations. Significant Group×Condition (p=.02) and Group×Modality (p=.03) interactions indicated differentially poorer performance by those with ADHD on backward relative to forward and visual-spatial relative to auditory-verbal tasks, respectively. The 3-way interaction was not significant. Analyses targeting ADHD presentations yielded a significant Group×Condition interaction (p=.009) such that children with ADHD-Predominantly Inattentive Presentation performed differentially poorer on backward relative to forward tasks compared to the children with ADHD-Combined Presentation. Findings indicate a specific pattern of WM weaknesses (i.e., WM manipulation and visual-spatial tasks) for children with ADHD. Furthermore, differential patterns of WM performance were found for children with ADHD-Predominantly Inattentive versus Combined Presentations. (JINS, 2016, 22, 1-11).

  5. Orienting Auditory Spatial Attention Engages Frontal Eye Fields and Medial Occipital Cortex in Congenitally Blind Humans

    PubMed Central

    Garg, Arun; Schwartz, Daniel; Stevens, Alexander A.

    2007-01-01

    What happens in vision-related cortical areas when congenitally blind (CB) individuals orient attention to spatial locations? Previous neuroimaging of sighted individuals has found overlapping activation in a network of frontoparietal areas, including the frontal eye fields (FEF), during both overt (with eye movement) and covert (without eye movement) shifts of spatial attention. Since voluntary eye movement planning seems irrelevant in CB, their FEF neurons should be recruited for alternative functions if their attentional role in sighted individuals is only due to eye movement planning. Recent neuroimaging of the blind has also reported activation in medial occipital areas, normally associated with visual processing, during a diverse set of non-visual tasks, but their response to attentional shifts remains poorly understood. Here, we used event-related fMRI to explore FEF and medial occipital areas in CB individuals and sighted controls with eyes closed (SC) performing a covert attention orienting task, using endogenous verbal cues and spatialized auditory targets. We found robust stimulus-locked FEF activation of all CB subjects, similar but stronger than in SC, suggesting that FEF plays a role in endogenous orienting of covert spatial attention even in individuals in whom voluntary eye movements are irrelevant. We also found robust activation in bilateral medial occipital cortex in CB but not in SC subjects. The response decreased below baseline following endogenous verbal cues but increased following auditory targets, suggesting that the medial occipital area in CB does not directly engage during cued orienting of attention but may be recruited for processing of spatialized auditory targets. PMID:17397882

  6. Spatial working memory for locations specified by vision and audition: testing the amodality hypothesis.

    PubMed

    Loomis, Jack M; Klatzky, Roberta L; McHugh, Brendan; Giudice, Nicholas A

    2012-08-01

    Spatial working memory can maintain representations from vision, hearing, and touch, representations referred to here as spatial images. The present experiment addressed whether spatial images from vision and hearing that are simultaneously present within working memory retain modality-specific tags or are amodal. Observers were presented with short sequences of targets varying in angular direction, with the targets in a given sequence being all auditory, all visual, or a sequential mixture of the two. On two thirds of the trials, one of the locations was repeated, and observers had to respond as quickly as possible when detecting this repetition. Ancillary detection and localization tasks confirmed that the visual and auditory targets were perceptually comparable. Response latencies in the working memory task showed small but reliable costs in performance on trials involving a sequential mixture of auditory and visual targets, as compared with trials of pure vision or pure audition. These deficits were statistically reliable only for trials on which the modalities of the matching location switched from the penultimate to the final target in the sequence, indicating a switching cost. The switching cost for the pair in immediate succession means that the spatial images representing the target locations retain features of the visual or auditory representations from which they were derived. However, there was no reliable evidence of a performance cost for mixed modalities in the matching pair when the second of the two did not immediately follow the first, suggesting that more enduring spatial images in working memory may be amodal.

  7. The spatial extent of excitatory and inhibitory zones in the receptive field of superficial layer hypercomplex cells

    PubMed Central

    Sillito, A. M.

    1977-01-01

    1. An investigation has been made of the extent of inhibitory and excitatory components in the receptive field of superficial layer hypercomplex cells in the cat's striate cortex and the relation of the components to the length preference exhibited by these cells. 2. Maximal responses were produced by an optimal length stimulus moving through a restricted region of the receptive field. The length of this receptive field region was less than the total length of the excitatory zone as mapped with a very short slit. Slits of similar length to the excitatory zone produced a smaller response than an optimal length slit. 3. An increase of slit length so that it passed over receptive field regions on either side of the excitatory zone resulted in an elimination of the response. When background discharge levels were increased by the iontophoretic application of D,L-homocysteic acid, slits of this length were observed to produce a suppression of the resting discharge as they passed over the receptive field. They did not modify the resting discharge level when it was induced by the iontophoretic application of the GABA antagonist bicuculline. These data are taken to indicate that long slits activate a powerful post-synaptic inhibitory input to the cell. 4. Maximal inhibitory effects were only observed if the testing slit passed over the receptive field centre. That is, slits with a gap positioned midway along their length so as to exclude the optimal excitatory response region surprisingly tended to produce excitatory effects rather than the expected inhibitory effects. It appears that simultaneous stimulation of the receptive field centre is a precondition for activating the inhibitory effect of stimulating the regions on either side of the excitatory zone. 5. It is suggested that the interneurones mediating the inhibitory input to the superficial layer hypercomplex cells are driven both by cells in adjacent hypercolumns with receptive fields spatially displaced to either side of the excitatory zone and by cells in the same column, optimal inhibitory effects only being achieved when both sets of input to the interneurone are activated. PMID:604459

  8. Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness.

    PubMed

    Ding, Hao; Qin, Wen; Liang, Meng; Ming, Dong; Wan, Baikun; Li, Qiang; Yu, Chunshui

    2015-09-01

    Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and that activation of auditory regions may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years, age of onset of deafness < 2 years) and 40 age- and gender-matched hearing controls underwent functional magnetic resonance imaging during a visuo-spatial delayed recognition task that consisted of encoding, maintenance and recognition stages. The early deaf subjects exhibited faster reaction times on the spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitudes also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects and that these areas are involved in visuo-spatial working memory. Furthermore, amplitudes of cross-modal activation during the maintenance stage were positively correlated with the age of onset of hearing aid use and were negatively correlated with the percentage of lifetime hearing aid use in deaf subjects. These findings suggest that earlier and longer hearing aid use may inhibit cross-modal reorganization in early deaf subjects. Granger causality analysis revealed that, compared with the hearing controls, the deaf subjects had an enhanced net causal flow from the frontal eye field to the superior temporal gyrus. These findings indicate that a top-down mechanism may better account for the cross-modal activation of auditory regions in early deaf subjects. See MacSweeney and Cardin (doi:10.1093/brain/awv197) for a scientific commentary on this article. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved.
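
    As a rough illustration of the directionality analysis mentioned in this record, the following Python sketch runs a Granger causality test on two simulated time series, one frontal (FEF-like) driving one temporal (STG-like). The signal construction, coupling coefficients, and lag choice are illustrative assumptions, not the authors' preprocessing.

        import numpy as np
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(6)
        n = 2000
        fef = rng.standard_normal(n)          # stand-in FEF signal
        stg = np.zeros(n)                     # stand-in STG signal
        for t in range(2, n):
            # STG partly driven by lagged FEF, mimicking a net FEF -> STG flow
            stg[t] = 0.4 * stg[t - 1] + 0.5 * fef[t - 2] + rng.standard_normal()

        # Tests whether the 2nd column (FEF) Granger-causes the 1st (STG)
        results = grangercausalitytests(np.column_stack([stg, fef]), maxlag=4)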

  9. Cluster-based analysis improves predictive validity of spike-triggered receptive field estimates

    PubMed Central

    Malone, Brian J.

    2017-01-01

    Spectrotemporal receptive field (STRF) characterization is a central goal of auditory physiology. STRFs are often approximated by the spike-triggered average (STA), which reflects the average stimulus preceding a spike. In many cases, the raw STA is subjected to a threshold defined by gain values expected by chance. However, such correction methods have not been universally adopted, and the consequences of specific gain-thresholding approaches have not been investigated systematically. Here, we evaluate two classes of statistical correction techniques, using the resulting STRF estimates to predict responses to a novel validation stimulus. The first, more traditional technique eliminated STRF pixels (time-frequency bins) with gain values expected by chance. This correction method yielded significant increases in prediction accuracy, including when the threshold setting was optimized for each unit. The second technique was a two-step thresholding procedure wherein clusters of contiguous pixels surviving an initial gain threshold were then subjected to a cluster mass threshold based on summed pixel values. This approach significantly improved upon even the best gain-thresholding techniques. Additional analyses suggested that allowing threshold settings to vary independently for excitatory and inhibitory subfields of the STRF resulted in only marginal additional gains, at best. In summary, augmenting reverse correlation techniques with principled statistical correction choices increased prediction accuracy by over 80% for multi-unit STRFs and by over 40% for single-unit STRFs, furthering the interpretational relevance of the recovered spectrotemporal filters for auditory systems analysis. PMID:28877194
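
    To make the two correction schemes concrete, here is a minimal Python sketch on simulated data: a spike-triggered average is computed, pixels are gain-thresholded against a circular-shift null distribution, and surviving clusters of contiguous pixels are then screened by cluster mass. Stimulus dimensions, the number of null shuffles, and the mass criterion are all illustrative assumptions rather than the study's settings.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(0)
        stim = rng.standard_normal((32, 5000))   # spectrogram: 32 freq x 5000 time bins
        spikes = rng.poisson(0.1, 5000)          # spike counts per time bin
        lags = 20                                # STRF depth in time bins

        def spike_triggered_average(sp):
            sta = np.zeros((32, lags))
            for t in np.nonzero(sp)[0]:
                if t >= lags:
                    sta += sp[t] * stim[:, t - lags:t]
            return sta / max(sp[lags:].sum(), 1)

        sta = spike_triggered_average(spikes)

        # Null distribution from circularly shifted spike trains
        null = np.stack([spike_triggered_average(np.roll(spikes, rng.integers(lags, 5000)))
                         for _ in range(100)])
        gain_thresh = 2 * null.std(axis=0)       # per-pixel gain expected by chance

        # Scheme 1: zero out pixels whose gain does not exceed chance
        sta_gain = np.where(np.abs(sta) > gain_thresh, sta, 0.0)

        # Scheme 2: keep only contiguous clusters whose summed |gain| ("cluster
        # mass") exceeds a mass criterion (median cluster mass, as a placeholder)
        labels, n_clusters = ndimage.label(sta_gain != 0)
        if n_clusters:
            masses = ndimage.sum(np.abs(sta_gain), labels, index=range(1, n_clusters + 1))
            keep_ids = [i + 1 for i, m in enumerate(masses) if m >= np.median(masses)]
            sta_cluster = np.where(np.isin(labels, keep_ids), sta_gain, 0.0)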

  10. A Framework for Speech Activity Detection Using Adaptive Auditory Receptive Fields.

    PubMed

    Carlin, Michael A; Elhilali, Mounya

    2015-12-01

    One of the hallmarks of sound processing in the brain is the ability of the nervous system to adapt to changing behavioral demands and surrounding soundscapes. It can dynamically shift sensory and cognitive resources to focus on relevant sounds. Neurophysiological studies indicate that this ability is supported by adaptively retuning the shapes of cortical spectro-temporal receptive fields (STRFs) to enhance features of target sounds while suppressing those of task-irrelevant distractors. Because an important component of human communication is the ability of a listener to dynamically track speech in noisy environments, the solution obtained by auditory neurophysiology implies a useful adaptation strategy for speech activity detection (SAD). SAD is an important first step in a number of automated speech processing systems, and performance is often reduced in highly noisy environments. In this paper, we describe how task-driven adaptation is induced in an ensemble of neurophysiological STRFs, and show how speech-adapted STRFs reorient themselves to enhance spectro-temporal modulations of speech while suppressing those associated with a variety of nonspeech sounds. We then show how an adapted ensemble of STRFs can better detect speech in unseen noisy environments compared to an unadapted ensemble and a noise-robust baseline. Finally, we use a stimulus reconstruction task to demonstrate how the adapted STRF ensemble better captures the spectrotemporal modulations of attended speech in clean and noisy conditions. Our results suggest that a biologically plausible adaptation framework can be applied to speech processing systems to dynamically adapt feature representations for improving noise robustness.
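
    The detection step itself can be sketched in a few lines: each STRF in an ensemble is correlated against the incoming auditory spectrogram, and the pooled filter outputs provide frame-level evidence of speech. The filters, spectrogram, and decision rule below are simulated stand-ins; the paper's task-driven adaptation of the filter shapes is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)
        spec = rng.random((64, 1000))                   # auditory spectrogram: freq x time
        strfs = [rng.standard_normal((64, 15)) for _ in range(8)]   # stand-in ensemble

        def strf_response(spectrogram, strf):
            """Per-frame match between the STRF and the preceding spectrogram patch."""
            n_freq, n_lag = strf.shape
            out = np.zeros(spectrogram.shape[1])
            for t in range(n_lag, spectrogram.shape[1]):
                out[t] = np.sum(strf * spectrogram[:, t - n_lag:t])
            return out

        evidence = np.stack([strf_response(spec, f) for f in strfs]).mean(axis=0)
        speech_frames = evidence > np.percentile(evidence, 70)     # toy decision rule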

  11. Rendering visual events as sounds: Spatial attention capture by auditory augmented reality.

    PubMed

    Stone, Scott A; Tata, Matthew S

    2017-01-01

    Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be used to create a stable percept of the stimulus. Having access to related coincident visual and auditory information can help for spatial tasks such as localization. However, not all visual information has analogous auditory percepts, such as viewing a computer monitor. Here, we describe a system capable of detecting and augmenting salient visual events into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. Future improvements are probable, as neuromorphic devices are likely to become faster and smaller, making this system much more feasible. PMID:28792518

  12. The effect of contextual auditory stimuli on virtual spatial navigation in patients with focal hemispheric lesions.

    PubMed

    Cogné, Mélanie; Knebel, Jean-François; Klinger, Evelyne; Bindschaedler, Claire; Rapin, Pierre-André; Joseph, Pierre-Alain; Clarke, Stephanie

    2018-01-01

    Topographical disorientation is a frequent deficit among patients suffering from brain injury. Spatial navigation can be explored in this population using virtual reality environments, even in the presence of motor or sensory disorders. Furthermore, the positive or negative impact of specific stimuli can be investigated. We studied how auditory stimuli influence the performance of brain-injured patients in a navigational task, using the Virtual Action Planning-Supermarket (VAP-S) with the addition of contextual ("sonar effect" and "name of product") and non-contextual ("periodic randomised noises") auditory stimuli. The study included 22 patients with a first unilateral hemispheric brain lesion and 17 healthy age-matched control subjects. After a software familiarisation, all subjects were tested without auditory stimuli, with a sonar effect or periodic random sounds in a random order, and with the stimulus "name of product". Contextual auditory stimuli improved patient performance more than control group performance. Contextual stimuli were of most benefit to patients with severe executive dysfunction or severe unilateral neglect. These results indicate that contextual auditory stimuli are useful in the assessment of navigational abilities in brain-damaged patients and that they should be used in rehabilitation paradigms.

  13. State-space receptive fields of semicircular canal afferent neurons in the bullfrog

    NASA Technical Reports Server (NTRS)

    Paulin, M. G.; Hoffman, L. F.

    2001-01-01

    Receptive fields are commonly used to describe spatial characteristics of sensory neuron responses. They can be extended to characterize temporal or dynamical aspects by mapping neural responses in dynamical state spaces. The state-space receptive field of a neuron is the probability distribution of the dynamical state of the stimulus-generating system conditioned upon the occurrence of a spike. We have computed state-space receptive fields for semicircular canal afferent neurons in the bullfrog (Rana catesbeiana). We recorded spike times during broad-band Gaussian noise rotational velocity stimuli, computed the frequency distribution of head states at spike times, and normalized these to obtain conditional pdfs for the state. These state-space receptive fields quantify what the brain can deduce about the dynamical state of the head when a single spike arrives from the periphery. © 2001 Elsevier Science B.V. All rights reserved.
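
    In code, this construction reduces to a normalized conditional histogram: collect the stimulus state at every spike time and estimate P(state | spike). The rotational-velocity signal and the spiking rule in the sketch below are simulated for illustration only, not bullfrog data.

        import numpy as np

        rng = np.random.default_rng(2)
        fs = 1000.0                                        # sampling rate, Hz
        vel = np.convolve(rng.standard_normal(60_000),
                          np.ones(50) / 50, mode="same")   # band-limited head velocity
        acc = np.gradient(vel, 1 / fs)                     # second state variable

        # Toy afferent whose spike probability grows with velocity
        p_spike = 0.02 / (1 + np.exp(-3 * vel))
        spikes = rng.random(len(vel)) < p_spike

        # State-space receptive field: P(velocity, acceleration | spike)
        ssrf, v_edges, a_edges = np.histogram2d(
            vel[spikes], acc[spikes], bins=25, density=True
        )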

  14. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    PubMed Central

    Kamke, Marc R.; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality. PMID:24920945

  15. Vestibular evoked myogenic potential testing: Payment policy review for clinicians and payers.

    PubMed

    Fife, Terry D; Satya-Murti, Saty; Burkard, Robert F; Carey, John P

    2018-04-01

    A recent American Academy of Neurology Evidence-Based Practice Guideline on vestibular evoked myogenic potential (VEMP) testing has described superior canal dehiscence syndrome (SCDS) and evaluated the merits of VEMP in its diagnosis. SCDS is an uncommon but now well-recognized cause of dizziness and auditory symptoms. This article familiarizes health care providers with this syndrome and the utility and shortcomings of VEMP as a diagnostic test and also explores payment policies for VEMP. In carefully selected patients with a documented history compatible with SCDS, both high-resolution temporal bone CT scan and VEMP are valuable aids for diagnosis. Payers might be unfamiliar with both this syndrome and VEMP testing. It is important to raise awareness of VEMP, its possible indications, and the rationale for coverage of VEMP testing. Payers may not be readily receptive to VEMP coverage if this test is used in an undifferentiated manner for all common vestibular and auditory symptoms.

  16. Global inhibition and stimulus competition in the owl optic tectum

    PubMed Central

    Mysore, Shreesh P.; Asadollahi, Ali; Knudsen, Eric I.

    2010-01-01

    Stimulus selection for gaze and spatial attention involves competition among stimuli across sensory modalities and across all of space. We demonstrate that such cross-modal, global competition takes place in the intermediate and deep layers of the optic tectum, a structure known to be involved in gaze control and attention. A variety of either visual or auditory stimuli located anywhere outside of a neuron's receptive field (RF) were shown to suppress or completely eliminate responses to a visual stimulus located inside the RF in nitrous oxide sedated owls. The essential mechanism underlying this stimulus competition is global, divisive inhibition. Unlike the effect of the classical inhibitory surround, which decreases with distance from the RF center and shapes neuronal responses to individual stimuli, global inhibition acts across the entirety of space and modulates responses primarily in the context of multiple stimuli. Whereas the source of this global inhibition is as yet unknown, our data indicate that different networks mediate the classical surround and global inhibition. We hypothesize that this global, cross-modal inhibition, which acts automatically in a bottom-up fashion even in sedated animals, is critical to the creation of a map of stimulus salience in the optic tectum. PMID:20130182
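
    A toy divisive-normalization model captures the distinction drawn here: drive from anywhere outside the receptive field enters the denominator, scaling responses down without itself being spatially local. All parameters below are invented for illustration; this is a sketch of the proposed mechanism, not the authors' fitted model.

        def tectal_response(rf_drive, global_drive, r_max=60.0, sigma=20.0, w=1.5):
            """Firing rate to an RF stimulus under global, divisive inhibition."""
            return r_max * rf_drive / (sigma + rf_drive + w * global_drive)

        rf_stim = 40.0                            # drive from the stimulus inside the RF
        for competitor in (0.0, 20.0, 80.0):      # drive from a distant, cross-modal stimulus
            print(competitor, round(tectal_response(rf_stim, competitor), 1))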

  17. Association of Chronic Subjective Tinnitus with Neuro- Cognitive Performance.

    PubMed

    Gudwani, Sunita; Munjal, Sanjay K; Panda, Naresh K; Kohli, Adarsh

    2017-12-01

    Chronic subjective tinnitus is associated with cognitive disruptions affecting perception, thinking, language, reasoning, problem solving, memory, visual tasks (reading) and attention. The aim was to evaluate whether any association exists between tinnitus parameters and neuropsychological performance that could explain cognitive processing. The study design was prospective, comprising 25 patients with idiopathic chronic subjective tinnitus who gave informed consent before their treatment was planned. The neuropsychological profile included (i) performance on verbal information, comprehension, arithmetic and digit span; (ii) non-verbal performance for visual pattern completion analogies; (iii) memory performance for long-term, recent, delayed-recall, immediate-recall, verbal-retention, visual-retention and visual recognition; (iv) reception, interpretation and execution for visual-motor gestalt. Correlations of tinnitus onset duration and loudness perception with the neuropsychological profile were assessed by calculating Spearman's coefficient. The findings suggest that tinnitus may interfere with cognitive processing, especially performance on the digit span, verbal comprehension, mental balance, attention and concentration, immediate recall, visual recognition and visual-motor gestalt subtests. Negative correlations of neurocognitive task performance with tinnitus loudness and onset duration indicated their association. A positive correlation between tinnitus and visual-motor gestalt performance indicated brain dysfunction. The association of tinnitus with non-auditory processing of verbal, visual and visuo-spatial information suggests neuroplastic changes that need to be targeted in cognitive rehabilitation.

  18. Adaptation to stimulus statistics in the perception and neural representation of auditory space.

    PubMed

    Dahmen, Johannes C; Keating, Peter; Nodal, Fernando R; Schulz, Andreas L; King, Andrew J

    2010-06-24

    Sensory systems are known to adapt their coding strategies to the statistics of their environment, but little is still known about the perceptual implications of such adjustments. We investigated how auditory spatial processing adapts to stimulus statistics by presenting human listeners and anesthetized ferrets with noise sequences in which interaural level differences (ILD) rapidly fluctuated according to a Gaussian distribution. The mean of the distribution biased the perceived laterality of a subsequent stimulus, whereas the distribution's variance changed the listeners' spatial sensitivity. The responses of neurons in the inferior colliculus changed in line with these perceptual phenomena. Their ILD preference adjusted to match the stimulus distribution mean, resulting in large shifts in rate-ILD functions, while their gain adapted to the stimulus variance, producing pronounced changes in neural sensitivity. Our findings suggest that processing of auditory space is geared toward emphasizing relative spatial differences rather than the accurate representation of absolute position.
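
    A compact way to express the two reported adjustments: the midpoint of a sigmoidal rate-ILD function tracks the mean of the stimulus distribution, while its sensitivity scales with the inverse of the distribution's spread. The sketch below simulates the Gaussian ILD sequences and applies that descriptive rule; it is an illustration of the idea, not a fit to the recorded data.

        import numpy as np

        rng = np.random.default_rng(3)

        def ild_sequence(mean_db, sd_db, n=500):
            """Noise-burst ILDs fluctuating according to a Gaussian distribution."""
            return rng.normal(mean_db, sd_db, n)

        def rate_ild(ild, mu, sigma, r_max=100.0):
            """Sigmoidal rate-ILD function: midpoint mu, sensitivity ~ 1/sigma."""
            return r_max / (1 + np.exp(-(ild - mu) / sigma))

        seq = ild_sequence(mean_db=9.0, sd_db=4.0)
        mu_adapt, sigma_adapt = seq.mean(), seq.std()    # preference and gain adapt
        print(rate_ild(0.0, mu_adapt, sigma_adapt),      # response at midline
              rate_ild(9.0, mu_adapt, sigma_adapt))      # response at distribution mean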

  1. Local sensitivity to stimulus orientation and spatial frequency within the receptive fields of neurons in visual area 2 of macaque monkeys

    PubMed Central

    Tao, X.; Zhang, B.; Smith, E. L.; Nishimoto, S.; Ohzawa, I.

    2012-01-01

    We used dynamic dense noise stimuli and local spectral reverse correlation methods to reveal the local sensitivities of neurons in visual area 2 (V2) of macaque monkeys to orientation and spatial frequency within their receptive fields. This minimized the potentially confounding assumptions that are inherent in stimulus selections. The majority of neurons exhibited a relatively high degree of homogeneity for the preferred orientations and spatial frequencies in the spatial matrix of facilitatory subfields. However, about 20% of all neurons showed maximum orientation differences between neighboring subfields that were greater than 25 deg. The neurons preferring horizontal or vertical orientations showed less inhomogeneity in space than the neurons preferring oblique orientations. Over 50% of all units also exhibited suppressive profiles, and those were more heterogeneous than facilitatory profiles. The preferred orientation and spatial frequency of suppressive profiles differed substantially from those of facilitatory profiles, and the neurons with suppressive subfields had greater orientation selectivity than those without suppressive subfields. The peak suppression occurred with longer delays than the peak facilitation. These results suggest that the receptive field profiles of the majority of V2 neurons reflect the orderly convergence of V1 inputs over space, but that a subset of V2 neurons exhibit more complex response profiles having both suppressive and facilitatory subfields. These V2 neurons with heterogeneous subfield profiles could play an important role in the initial processing of complex stimulus features. PMID:22114163

  2. Light adaptation alters inner retinal inhibition to shape OFF retinal pathway signaling

    PubMed Central

    Mazade, Reece E.

    2016-01-01

    The retina adjusts its signaling gain over a wide range of light levels. A functional result of this is increased visual acuity at brighter luminance levels (light adaptation) due to shifts in the excitatory center-inhibitory surround receptive field parameters of ganglion cells that increases their sensitivity to smaller light stimuli. Recent work supports the idea that changes in ganglion cell spatial sensitivity with background luminance are due in part to inner retinal mechanisms, possibly including modulation of inhibition onto bipolar cells. To determine how the receptive fields of OFF cone bipolar cells may contribute to changes in ganglion cell resolution, the spatial extent and magnitude of inhibitory and excitatory inputs were measured from OFF bipolar cells under dark- and light-adapted conditions. There was no change in the OFF bipolar cell excitatory input with light adaptation; however, the spatial distributions of inhibitory inputs, including both glycinergic and GABAergic sources, became significantly narrower, smaller, and more transient. The magnitude and size of the OFF bipolar cell center-surround receptive fields as well as light-adapted changes in resting membrane potential were incorporated into a spatial model of OFF bipolar cell output to the downstream ganglion cells, which predicted an increase in signal output strength with light adaptation. We show a prominent role for inner retinal spatial signals in modulating the modeled strength of bipolar cell output to potentially play a role in ganglion cell visual sensitivity and acuity. PMID:26912599
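
    The modeling step can be illustrated with a difference-of-Gaussians receptive field in which light adaptation narrows and weakens the inhibitory surround; integrating the profile over centered spots of increasing size then predicts a stronger output under light adaptation. All parameters in this sketch are invented for illustration and are not the study's measurements.

        import numpy as np

        def dog_output(spot_radius, w_surround, sigma_surround,
                       w_center=1.0, sigma_center=0.5):
            """Integrated center-surround (DoG) response to a centered spot."""
            r = np.linspace(0.0, 3.0, 600)
            dr = r[1] - r[0]
            rf = (w_center * np.exp(-(r / sigma_center) ** 2)
                  - w_surround * np.exp(-(r / sigma_surround) ** 2))
            mask = r <= spot_radius
            return float((rf[mask] * 2 * np.pi * r[mask]).sum() * dr)

        for radius in (0.3, 1.0, 2.0):
            dark = dog_output(radius, w_surround=0.8, sigma_surround=2.0)   # broad surround
            light = dog_output(radius, w_surround=0.4, sigma_surround=1.2)  # narrow, weak
            print(radius, round(dark, 3), round(light, 3))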

  3. Action video games improve reading abilities and visual-to-auditory attentional shifting in English-speaking children with dyslexia.

    PubMed

    Franceschini, Sandro; Trevisan, Piergiorgio; Ronconi, Luca; Bertoni, Sara; Colmar, Susan; Double, Kit; Facoetti, Andrea; Gori, Simone

    2017-07-19

    Dyslexia is characterized by difficulties in learning to read and there is some evidence that action video games (AVG), without any direct phonological or orthographic stimulation, improve reading efficiency in Italian children with dyslexia. However, the cognitive mechanism underlying this improvement and the extent to which the benefits of AVG training would generalize to deep English orthography, remain two critical questions. During reading acquisition, children have to integrate written letters with speech sounds, rapidly shifting their attention from visual to auditory modality. In our study, we tested reading skills and phonological working memory, visuo-spatial attention, auditory, visual and audio-visual stimuli localization, and cross-sensory attentional shifting in two matched groups of English-speaking children with dyslexia before and after they played AVG or non-action video games. The speed of words recognition and phonological decoding increased after playing AVG, but not non-action video games. Furthermore, focused visuo-spatial attention and visual-to-auditory attentional shifting also improved only after AVG training. This unconventional reading remediation program also increased phonological short-term memory and phoneme blending skills. Our report shows that an enhancement of visuo-spatial attention and phonological working memory, and an acceleration of visual-to-auditory attentional shifting can directly translate into better reading in English-speaking children with dyslexia.

  4. Auditory connections and functions of prefrontal cortex

    PubMed Central

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  5. Auditory perception and the control of spatially coordinated action of deaf and hearing children.

    PubMed

    Savelsbergh, G J; Netelenbos, J B; Whiting, H T

    1991-03-01

    From birth onwards, auditory stimulation directs and intensifies visual orientation behaviour. In deaf children, by definition, auditory perception cannot take place and cannot, therefore, make a contribution to visual orientation to objects approaching from outside the initial field of view. In experiment 1, a difference in catching ability is demonstrated between deaf and hearing children (10-13 years of age) when the ball approached from the periphery or from outside the field of view. No differences in catching ability between the two groups occurred when the ball approached from within the field of view. A second experiment was conducted in order to determine if differences in catching ability between deaf and hearing children could be attributed to execution of slow orientating movements and/or slow reaction time as a result of the auditory loss. The deaf children showed slower reaction times. No differences were found in movement times between deaf and hearing children. Overall, the findings suggest that a lack of auditory stimulation during development can lead to deficiencies in the coordination of actions such as catching which are both spatially and temporally constrained.

  6. The effect of differential listening experience on the development of expressive and receptive language in children with bilateral cochlear implants.

    PubMed

    Hess, Christi; Zettler-Greeley, Cynthia; Godar, Shelly P; Ellis-Weismer, Susan; Litovsky, Ruth Y

    2014-01-01

    Growing evidence suggests that children who are deaf and use cochlear implants (CIs) can communicate effectively using spoken language. Research has reported that age of implantation and length of experience with the CI play an important role in predicting a child's linguistic development. In recent years, the increase in the number of children receiving bilateral CIs (BiCIs) has led to interest in new variables that may also influence the development of hearing, speech, and language abilities, such as length of bilateral listening experience and the length of time between the implantation of the two CIs. One goal of the present study was to determine how a cohort of children with BiCIs performed on standardized measures of language and nonverbal cognition. This study examined the relationship between performance on language and nonverbal intelligence quotient (IQ) tests and the ages at implantation of the first CI and second CI. This study also examined whether early bilateral activation is related to better language scores. Children with BiCIs (n = 39; ages 4 to 9 years) were tested on two standardized measures, the Test of Language Development and the Leiter International Performance Scale-Revised, to evaluate their expressive/receptive language skills and nonverbal IQ/memory. Hierarchical regression analyses were used to evaluate whether BiCI hearing experience predicts language performance. While large intersubject variability existed, on average, almost all the children with BiCIs scored within or above normal limits on measures of nonverbal cognition. Expressive and receptive language scores were highly variable, less likely to be above the normative mean, and did not correlate with Length of first CI Use, defined as length of auditory experience with one cochlear implant, or Length of second CI Use, defined as length of auditory experience with two cochlear implants. All children in the present study had BiCIs. Most IQ scores were either at or above that found in the general population of typically hearing children. However, there was greater variability in their performance on a standardized test of expressive and receptive language. This cohort of children, who are mainstreamed in schools at age-appropriate grades, whose mothers' education is high, and whose families' socioeconomic status is high, had, as a group, on average, language scores within the same range as the normative sample of hearing children. Further research identifying the predictors that contribute to the high variability in both expressive and receptive language scores in children with BiCIs will provide useful information that can aid in clinical management and decision making.

  7. Spatial selective attention in a complex auditory environment such as polyphonic music.

    PubMed

    Saupe, Katja; Koelsch, Stefan; Rübsamen, Rudolf

    2010-01-01

    To investigate the influence of spatial information in auditory scene analysis, polyphonic music (three parts in different timbres) was composed and presented in free field. Each part contained large falling interval jumps in the melody and the task of subjects was to detect these events in one part ("target part") while ignoring the other parts. All parts were either presented from the same location (0 degrees; overlap condition) or from different locations (-28 degrees, 0 degrees, and 28 degrees or -56 degrees, 0 degrees, and 56 degrees in the azimuthal plane), with the target part being presented either at 0 degrees or at one of the right-sided locations. Results showed that spatial separation of 28 degrees was sufficient for a significant improvement in target detection (i.e., in the detection of large interval jumps) compared to the overlap condition, irrespective of the position (frontal or right) of the target part. A larger spatial separation of the parts resulted in further improvements only if the target part was lateralized. These data support the notion of improvement in the suppression of interfering signals with spatial sound source separation. Additionally, the data show that the position of the relevant sound source influences auditory performance.

  8. The Impact of Early Visual Deprivation on Spatial Hearing: A Comparison between Totally and Partially Visually Deprived Children

    PubMed Central

    Cappagli, Giulia; Finocchietti, Sara; Cocchi, Elena; Gori, Monica

    2017-01-01

    The specific role of early visual deprivation on spatial hearing is still unclear, mainly due to the difficulty of comparing similar spatial skills at different ages and to the difficulty in recruiting young children blind from birth. In this study, the effects of early visual deprivation on the development of auditory spatial localization have been assessed in a group of seven 3- to 5-year-old children with congenital blindness (n = 2; light perception or no perception of light) or low vision (n = 5; visual acuity range 1.1–1.7 LogMAR), with the main aim of understanding whether visual experience is fundamental to the development of specific spatial skills. Our study led to three main findings: firstly, totally blind children performed overall more poorly than sighted and low vision children in all the spatial tasks performed; secondly, low vision children performed as well as or better than sighted children in the same auditory spatial tasks; thirdly, higher residual levels of visual acuity were positively correlated with better spatial performance in the dynamic condition of the auditory localization task, indicating that the more residual vision, the better the spatial performance. These results suggest that early visual experience has an important role in the development of spatial cognition, even when the visual input during the critical period of visual calibration is partially degraded, as in the case of low vision children. Overall, these results shed light on the importance of early assessment of spatial impairments in visually impaired children and early intervention to prevent the risk of isolation and social exclusion. PMID:28443040

  9. Spatial Disorientation Training - Demonstration and Avoidance (entraînement à la désorientation spatiale - Démonstration et réponse)

    DTIC Science & Technology

    2008-10-01

    INTRODUCTION RTO-TR-HFM-118 1 - 7 1.2.1.2 Acoustic HUD A three-dimensional auditory display presents sound from arbitrary directions spanning a...through the visual channel, use of the auditory channel shortens reaction times and is expected to reduce pilot workload, thus improving the overall...of vestibular stimulation using a rotating chair or suitable disorientation device to provide each student with a personal experience of some of

  10. The ventriloquist in periphery: impact of eccentricity-related reliability on audio-visual localization.

    PubMed

    Charbonneau, Geneviève; Véronneau, Marie; Boudrias-Fournier, Colin; Lepore, Franco; Collignon, Olivier

    2013-10-28

    The relative reliability of separate sensory estimates influences the way they are merged into a unified percept. We investigated how eccentricity-related changes in reliability of auditory and visual stimuli influence their integration across the entire frontal space. First, we surprisingly found that despite a strong decrease in auditory and visual unisensory localization abilities in periphery, the redundancy gain resulting from the congruent presentation of audio-visual targets was not affected by stimuli eccentricity. This result therefore contrasts with the common prediction that a reduction in sensory reliability necessarily induces an enhanced integrative gain. Second, we demonstrate that the visual capture of sounds observed with spatially incongruent audio-visual targets (ventriloquist effect) steadily decreases with eccentricity, paralleling a lowering of the relative reliability of unimodal visual over unimodal auditory stimuli in periphery. Moreover, at all eccentricities, the ventriloquist effect positively correlated with a weighted combination of the spatial resolution obtained in unisensory conditions. These findings support and extend the view that the localization of audio-visual stimuli relies on an optimal combination of auditory and visual information according to their respective spatial reliability. All together, these results evidence that the external spatial coordinates of multisensory events relative to an observer's body (e.g., eyes' or head's position) influence how this information is merged, and therefore determine the perceptual outcome.
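
    The optimal-combination account invoked here has a standard closed form: the fused estimate weights each modality by its inverse variance, so visual capture of the sound is predicted to weaken as visual variance grows with eccentricity. A minimal sketch with illustrative numbers (not the study's measured reliabilities):

        def fuse(x_visual, var_visual, x_auditory, var_auditory):
            """Maximum-likelihood combination of two location estimates."""
            w_visual = (1 / var_visual) / (1 / var_visual + 1 / var_auditory)
            estimate = w_visual * x_visual + (1 - w_visual) * x_auditory
            return estimate, w_visual

        # Central vision is precise -> strong visual capture of the sound
        print(fuse(x_visual=0.0, var_visual=1.0, x_auditory=5.0, var_auditory=16.0))
        # In periphery, visual variance inflates -> capture weakens
        print(fuse(x_visual=0.0, var_visual=25.0, x_auditory=5.0, var_auditory=16.0))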

  11. Peripheral hearing loss reduces the ability of children to direct selective attention during multi-talker listening.

    PubMed

    Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin

    2017-07-01

    Restoring normal hearing requires knowledge of how peripheral and central auditory processes are affected by hearing loss. Previous research has focussed primarily on peripheral changes following sensorineural hearing loss, whereas consequences for central auditory processing have received less attention. We examined the ability of hearing-impaired children to direct auditory attention to a voice of interest (based on the talker's spatial location or gender) in the presence of a common form of background noise: the voices of competing talkers (i.e. during multi-talker, or "Cocktail Party" listening). We measured brain activity using electro-encephalography (EEG) when children prepared to direct attention to the spatial location or gender of an upcoming target talker who spoke in a mixture of three talkers. Compared to normally-hearing children, hearing-impaired children showed significantly less evidence of preparatory brain activity when required to direct spatial attention. This finding is consistent with the idea that hearing-impaired children have a reduced ability to prepare spatial attention for an upcoming talker. Moreover, preparatory brain activity was not restored when hearing-impaired children listened with their acoustic hearing aids. An implication of these findings is that steps to improve auditory attention alongside acoustic hearing aids may be required to improve the ability of hearing-impaired children to understand speech in the presence of competing talkers. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Arabinogalactan-protein secretion is associated with the acquisition of stigmatic receptivity in the apple flower

    PubMed Central

    Losada, Juan M.; Herrero, María

    2012-01-01

    Background and Aims Stigmatic receptivity plays a clear role in pollination dynamics; however, little is known about the factors that confer to a stigma the competence to be receptive for the germination of pollen grains. In this work, a developmental approach is used to evaluate the acquisition of stigmatic receptivity and its relationship with a possible change in arabinogalactan-proteins (AGPs). Methods Flowers of the domestic apple, Malus × domestica, were assessed for their capacity to support pollen germination at different developmental stages. Stigmas from these same stages were characterized morphologically and different AGP epitopes detected by immunocytochemistry. Key Results Acquisition of stigmatic receptivity and the secretion of classical AGPs from stigmatic cells occurred concurrently and following the same spatial distribution. While in unpollinated stigmas AGPs appeared unaltered, in cross-pollinated stigmas AGPs epitopes vanished as pollen tubes passed by. Conclusions The concurrent secretion of AGPs with the acquisition of stigmatic receptivity, together with the differential response in unpollinated and cross-pollinated pistils point out a role of AGPs in supporting pollen tube germination and strongly suggest that secretion of AGPs is associated with the acquisition of stigma receptivity. PMID:22652420

  13. Background sounds contribute to spectrotemporal plasticity in primary auditory cortex

    PubMed Central

    Moucha, Raluca; Pandya, Pritesh K.; Engineer, Navzer D.; Rathbun, Daniel L.

    2010-01-01

    The mammalian auditory system evolved to extract meaningful information from complex acoustic environments. Spectrotemporal selectivity of auditory neurons provides a potential mechanism to represent natural sounds. Experience-dependent plasticity mechanisms can remodel the spectrotemporal selectivity of neurons in primary auditory cortex (A1). Electrical stimulation of the cholinergic nucleus basalis (NB) enables plasticity in A1 that parallels natural learning and is specific to acoustic features associated with NB activity. In this study, we used NB stimulation to explore how cortical networks reorganize after experience with frequency-modulated (FM) sweeps, and how background stimuli contribute to spectrotemporal plasticity in rat auditory cortex. Pairing an 8–4 kHz FM sweep with NB stimulation 300 times per day for 20 days decreased tone thresholds, frequency selectivity, and response latency of A1 neurons in the region of the tonotopic map activated by the sound. In an attempt to modify neuronal response properties across all of A1, the same NB activation was paired in a second group of rats with five downward FM sweeps, each spanning a different octave. No changes in FM selectivity or receptive field (RF) structure were observed when the neural activation was distributed across the cortical surface. However, the addition of unpaired background sweeps of different rates or direction was sufficient to alter RF characteristics across the tonotopic map in a third group of rats. These results extend earlier observations that cortical neurons can develop stimulus-specific plasticity and indicate that background conditions can strongly influence cortical plasticity. PMID:15616812

  14. School performance and wellbeing of children with CI in different communicative-educational environments.

    PubMed

    Langereis, Margreet; Vermeulen, Anneke

    2015-06-01

    This study aimed to evaluate the long-term effects of CI on the auditory, language, educational and social-emotional development of deaf children in different educational-communicative settings. The outcomes of 58 children with profound hearing loss and normal non-verbal cognition were analyzed after 60 months of CI use. At testing, the children were enrolled in three different educational settings: mainstream education, where spoken language is used; hard-of-hearing education, where sign-supported spoken language is used; and bilingual deaf education, with Sign Language of the Netherlands and Sign Supported Dutch. Children were assessed on auditory speech perception, receptive language, educational attainment and wellbeing. The auditory speech perception of children with CI in mainstream education enables them to acquire language and educational levels that are comparable to those of their normal-hearing peers. Although the children in mainstream and hard-of-hearing settings show similar speech perception abilities, language development in children in hard-of-hearing settings lags significantly behind. Speech perception, language and educational attainments of children in deaf education remained extremely poor. Furthermore, more children in mainstream and hard-of-hearing environments are resilient than in deaf educational settings. Regression analyses showed an important influence of educational setting. Children with CI who are placed in early intervention environments that facilitate auditory development are able to achieve good auditory speech perception, language and educational levels in the long term. Most parents of these children report no social-emotional concerns. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  15. Auditory Temporal Acuity Probed With Cochlear Implant Stimulation and Cortical Recording

    PubMed Central

    Kirby, Alana E.

    2010-01-01

    Cochlear implants stimulate the auditory nerve with amplitude-modulated (AM) electric pulse trains. Pulse rates >2,000 pulses per second (pps) have been hypothesized to enhance transmission of temporal information. Recent studies, however, have shown that higher pulse rates impair phase locking to sinusoidal AM in the auditory cortex and impair perceptual modulation detection. Here, we investigated the effects of high pulse rates on the temporal acuity of transmission of pulse trains to the auditory cortex. In anesthetized guinea pigs, signal-detection analysis was used to measure the thresholds for detection of gaps in pulse trains at rates of 254, 1,017, and 4,069 pps and in acoustic noise. Gap-detection thresholds decreased by an order of magnitude with increases in pulse rate from 254 to 4,069 pps. Such a pulse-rate dependence would likely influence speech reception through clinical speech processors. To elucidate the neural mechanisms of gap detection, we measured recovery from forward masking after a 196.6-ms pulse train. Recovery from masking was faster at higher carrier pulse rates and masking increased linearly with current level. We fit the data with a dual-exponential recovery function, consistent with a peripheral and a more central process. High-rate pulse trains evoked less central masking, possibly due to adaptation of the response in the auditory nerve. Neither gap detection nor forward masking varied with cortical depth, indicating that these processes are likely subcortical. These results indicate that gap detection and modulation detection are mediated by two separate neural mechanisms. PMID:19923242
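
    The signal-detection readout can be sketched as follows: compare spike-count distributions for trains with and without a gap, and take the shortest gap reaching d' >= 1 as the threshold. The counts and the gap-response relation below are simulated assumptions, not the recorded cortical data.

        import numpy as np

        rng = np.random.default_rng(4)

        def dprime(signal, noise):
            """d' between two response-count distributions."""
            pooled_sd = np.sqrt(0.5 * (signal.var() + noise.var()))
            return (signal.mean() - noise.mean()) / pooled_sd

        no_gap = rng.poisson(20, 200)                   # counts for continuous pulse train
        threshold_ms = None
        for gap_ms in (1, 2, 4, 8, 16, 32):
            with_gap = rng.poisson(20 + 0.6 * gap_ms, 200)   # toy gap-evoked increment
            if threshold_ms is None and dprime(with_gap, no_gap) >= 1.0:
                threshold_ms = gap_ms
        print("gap-detection threshold (ms):", threshold_ms)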

  16. Linear and nonlinear auditory response properties of interneurons in a high-order avian vocal motor nucleus during wakefulness.

    PubMed

    Raksin, Jonathan N; Glaze, Christopher M; Smith, Sarah; Schmidt, Marc F

    2012-04-01

    Motor-related forebrain areas in higher vertebrates also show responses to passively presented sensory stimuli. However, sensory tuning properties in these areas, especially during wakefulness, and their relation to perception, are poorly understood. In the avian song system, HVC (proper name) is a vocal-motor structure with auditory responses well defined under anesthesia but poorly characterized during wakefulness. We used a large set of stimuli including the bird's own song (BOS) and many conspecific songs (CON) to characterize auditory tuning properties in putative interneurons (HVC(IN)) during wakefulness. Our findings suggest that HVC contains a diversity of responses that vary in overall excitability to auditory stimuli, as well as bias in spike rate increases to BOS over CON. We used statistical tests to classify cells in order to further probe auditory responses, yielding one-third of neurons that were either unresponsive or suppressed and two-thirds with excitatory responses to one or more stimuli. A subset of excitatory neurons were tuned exclusively to BOS and showed very low linearity as measured by spectrotemporal receptive field analysis (STRF). The remaining excitatory neurons responded well to CON stimuli, although many cells still expressed a bias toward BOS. These findings suggest the concurrent presence of a nonlinear and a linear component to responses in HVC, even within the same neuron. These characteristics are consistent with perceptual deficits in distinguishing BOS from CON stimuli following lesions of HVC and other song nuclei and suggest mirror neuronlike qualities in which "self" (here BOS) is used as a referent to judge "other" (here CON).

  17. Single-trial classification of auditory event-related potentials elicited by stimuli from different spatial directions.

    PubMed

    Cabrera, Alvaro Fuentes; Hoffmann, Pablo Faundez

    2010-01-01

    This study focused on the single-trial classification of auditory event-related potentials elicited by sound stimuli from different spatial directions. Five naïve subjects were asked to localize a sound stimulus reproduced over one of 8 loudspeakers placed in a circular array, equally spaced by 45°. The subject was seated in the center of the circular array. Due to the complexity of an eight-class classification, our approach consisted of feeding our classifier with two classes, or spatial directions, at a time. The seven chosen pairs were 0°, which was the loudspeaker directly in front of the subject, paired with each of the other seven directions. The discrete wavelet transform was used to extract features in the time-frequency domain and a support vector machine performed the classification procedure. The average accuracy over all subjects and all pairs of spatial directions was 76.5%, σ = 3.6. The results of this study provide evidence that the direction of a sound is encoded in single-trial auditory event-related potentials.
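
    A minimal version of this pipeline can be written with discrete-wavelet features and a pairwise SVM; the epochs below are simulated single-channel signals standing in for EEG, and the wavelet family, decomposition level, and kernel are assumptions rather than the study's settings.

        import numpy as np
        import pywt
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(5)
        n_trials, n_samples = 120, 512

        # Two classes of simulated epochs (e.g., frontal vs. lateral sources)
        X_raw = rng.standard_normal((n_trials, n_samples))
        y = np.repeat([0, 1], n_trials // 2)
        X_raw[y == 1] += 0.4 * np.sin(np.linspace(0, 6 * np.pi, n_samples))

        def dwt_features(epoch, wavelet="db4", level=4):
            """Concatenated approximation and detail coefficients."""
            return np.concatenate(pywt.wavedec(epoch, wavelet, level=level))

        X = np.array([dwt_features(e) for e in X_raw])
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        print("pairwise CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())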

  18. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture

    PubMed Central

    Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L

    2017-01-01

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation—acoustic frequency—might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. PMID:29109238

  19. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture.

    PubMed

    Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L

    2017-12-13

    Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation—acoustic frequency—might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. Copyright © 2017 Dick et al.

  20. Auditory-visual integration modulates location-specific repetition suppression of auditory responses.

    PubMed

    Shrem, Talia; Murray, Micah M; Deouell, Leon Y

    2017-11-01

    Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.

  1. Engineering Data Compendium. Human Perception and Performance. Volume 2

    DTIC Science & Technology

    1988-01-01

    [Index excerpt; only entry titles are recoverable from the scanned record:] 5.1004 Auditory Detection in the Presence of Visual Stimulation; 5.1005 Tactual Detection and Discrimination in the Presence of Accessory Stimulation; 5.1006 Tactile Versus Auditory Localization of Sound; 5.1007 Spatial Localization in the Presence of Inter-…

  2. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    PubMed Central

    Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten

    2016-01-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli. PMID:27853290
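
    For context, the headphone reproduction technique referred to above is commonly implemented by convolving a source signal with binaural room impulse responses (BRIRs) measured at the listener's ear canals. A minimal sketch, with the function names and the joint normalization being illustrative assumptions:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def binaural_render(mono_source, brir_left, brir_right):
        """Recreate ear-canal sound pressure by convolving a mono source
        with left/right binaural room impulse responses (BRIRs)."""
        left = fftconvolve(mono_source, brir_left)
        right = fftconvolve(mono_source, brir_right)
        # Normalize both channels by the same factor so interaural
        # level differences are preserved.
        peak = max(np.max(np.abs(left)), np.max(np.abs(right)))
        return left / peak, right / peak
    ```

    When the BRIRs come from a different room than the one the listener sits in, the auditory room cues become incongruent, which is the mismatch manipulated in this study.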

  3. Perceptual consequences of normal and abnormal peripheral compression: Potential links between psychoacoustics and speech perception

    NASA Astrophysics Data System (ADS)

    Oxenham, Andrew J.; Rosengard, Peninah S.; Braida, Louis D.

    2004-05-01

    Cochlear damage can lead to a reduction in the overall amount of peripheral auditory compression, presumably due to outer hair cell (OHC) loss or dysfunction. The perceptual consequences of functional OHC loss include loudness recruitment and reduced dynamic range, poorer frequency selectivity, and poorer effective temporal resolution. These in turn may lead to a reduced ability to make use of spectral and temporal fluctuations in background noise when listening to a target sound, such as speech. We tested the effect of OHC function on speech reception in hearing-impaired listeners by comparing psychoacoustic measures of cochlear compression and sentence recognition in a variety of noise backgrounds. In line with earlier studies, we found weak (nonsignificant) correlations between the psychoacoustic tasks and speech reception thresholds in quiet or in steady-state noise. However, when spectral and temporal fluctuations were introduced in the masker, speech reception improved to an extent that was well predicted by the psychoacoustic measures. Thus, our initial results suggest a strong relationship between measures of cochlear compression and the ability of listeners to take advantage of spectral and temporal masker fluctuations in recognizing speech. [Work supported by NIH Grants Nos. R01DC03909, T32DC00038, and R01DC00117.]

  4. Differences in neural activation between preterm and full term born adolescents on a sentence comprehension task: implications for educational accommodations.

    PubMed

    Barde, Laura H F; Yeatman, Jason D; Lee, Eliana S; Glover, Gary; Feldman, Heidi M

    2012-02-15

    Adolescent survivors of preterm birth experience persistent functional problems that negatively impact academic outcomes, even when standardized measures of cognition and language suggest normal ability. In this fMRI study, we compared the neural activation supporting auditory sentence comprehension in two groups of adolescents (ages 9-16 years); sentences varied in length and syntactic difficulty. Preterms (n=18, mean gestational age 28.8 weeks) and full terms (n=14) had scores on verbal IQ, receptive vocabulary, and receptive language tests that were within or above normal limits and similar between groups. In early and late phases of the trial, we found interactions by group and length; in the late phase, we also found a group by syntactic difficulty interaction. Post hoc tests revealed that preterms demonstrated significant activation in the left and right middle frontal gyri as syntactic difficulty increased. ANCOVA showed that the interactions could not be attributed to differences in age, receptive language skill, or reaction time. Results are consistent with the hypothesis that preterm birth modulates brain-behavior relations in sentence comprehension as task demands increase. We suggest preterms' differences in neural processing may indicate a need for educational accommodations, even when formal test scores indicate normal academic achievement. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Trimodal speech perception: how residual acoustic hearing supplements cochlear-implant consonant recognition in the presence of visual cues.

    PubMed

    Sheffield, Benjamin M; Schuchman, Gerald; Bernstein, Joshua G W

    2015-01-01

    As cochlear implant (CI) acceptance increases and candidacy criteria are expanded, these devices are increasingly recommended for individuals with less than profound hearing loss. As a result, many individuals who receive a CI also retain acoustic hearing, often in the low frequencies, in the nonimplanted ear (i.e., bimodal hearing) and in some cases in the implanted ear (i.e., hybrid hearing) which can enhance the performance achieved by the CI alone. However, guidelines for clinical decisions pertaining to cochlear implantation are largely based on expectations for postsurgical speech-reception performance with the CI alone in auditory-only conditions. A more comprehensive prediction of postimplant performance would include the expected effects of residual acoustic hearing and visual cues on speech understanding. An evaluation of auditory-visual performance might be particularly important because of the complementary interaction between the speech information relayed by visual cues and that contained in the low-frequency auditory signal. The goal of this study was to characterize the benefit provided by residual acoustic hearing to consonant identification under auditory-alone and auditory-visual conditions for CI users. Additional information regarding the expected role of residual hearing in overall communication performance by a CI listener could potentially lead to more informed decisions regarding cochlear implantation, particularly with respect to recommendations for or against bilateral implantation for an individual who is functioning bimodally. Eleven adults 23 to 75 years old with a unilateral CI and air-conduction thresholds in the nonimplanted ear equal to or better than 80 dB HL for at least one octave frequency between 250 and 1000 Hz participated in this study. Consonant identification was measured for conditions involving combinations of electric hearing (via the CI), acoustic hearing (via the nonimplanted ear), and speechreading (visual cues). The results suggest that the benefit to CI consonant-identification performance provided by the residual acoustic hearing is even greater when visual cues are also present. An analysis of consonant confusions suggests that this is because the voicing cues provided by the residual acoustic hearing are highly complementary with the mainly place-of-articulation cues provided by the visual stimulus. These findings highlight the need for a comprehensive prediction of trimodal (acoustic, electric, and visual) postimplant speech-reception performance to inform implantation decisions. The increased influence of residual acoustic hearing under auditory-visual conditions should be taken into account when considering surgical procedures or devices that are intended to preserve acoustic hearing in the implanted ear. This is particularly relevant when evaluating the candidacy of a current bimodal CI user for a second CI (i.e., bilateral implantation). Although recent developments in CI technology and surgical techniques have increased the likelihood of preserving residual acoustic hearing, preservation cannot be guaranteed in each individual case. Therefore, the potential gain to be derived from bilateral implantation needs to be weighed against the possible loss of the benefit provided by residual acoustic hearing.

  6. Audiovisual integration in hemianopia: A neurocomputational account based on cortico-collicular interaction.

    PubMed

    Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro

    2016-10-01

    Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. The plastic ear and perceptual relearning in auditory spatial perception

    PubMed Central

    Carlile, Simon

    2014-01-01

    The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues that result in significant degradation in localization performance. Following chronic exposure (10–60 days) performance recovers to some extent and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what teaching signal drives this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of motor state on auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5–10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses. PMID:25147497

  8. Development of orientation tuning in simple cells of primary visual cortex

    PubMed Central

    Moore, Bartlett D.

    2012-01-01

    Orientation selectivity and its development are basic features of visual cortex. The original model of orientation selectivity proposes that elongated simple cell receptive fields are constructed from convergent input of an array of lateral geniculate nucleus neurons. However, orientation selectivity of simple cells in the visual cortex is generally greater than predicted from linear contributions based on projections of spatial receptive field profiles. This implies that additional selectivity may arise from intracortical mechanisms. The hierarchical processing idea implies mainly linear connections, whereas cortical contributions are generally considered to be nonlinear. We have explored development of orientation selectivity in visual cortex with a focus on linear and nonlinear factors in a population of anesthetized 4-wk postnatal kittens and adult cats. Linear contributions are estimated from receptive field maps by which orientation tuning curves are generated and bandwidth is quantified. Nonlinear components are estimated as the magnitude of the power function relationship between responses measured from drifting sinusoidal gratings and those predicted from the spatial receptive field. Measured bandwidths for kittens are slightly larger than those in adults, whereas predicted bandwidths are substantially broader. These results suggest that relatively strong nonlinearities in early postnatal stages are substantially involved in the development of orientation tuning in visual cortex. PMID:22323631
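
    One plausible reading of the nonlinearity estimate described above is a power-law fit between measured responses and responses linearly predicted from the receptive-field map; a sketch under that assumption (the fitting method is illustrative, not taken from the paper):

    ```python
    import numpy as np

    def power_function_exponent(predicted, measured, eps=1e-9):
        """Fit measured ~ a * predicted**n by linear regression in
        log-log coordinates; the exponent n indexes the magnitude of
        the nonlinearity (n = 1 would mean a purely linear relation)."""
        x = np.log(np.clip(predicted, eps, None))
        y = np.log(np.clip(measured, eps, None))
        n, log_a = np.polyfit(x, y, 1)  # slope = exponent
        return n
    ```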

  9. Improving visual spatial working memory in younger and older adults: effects of cross-modal cues.

    PubMed

    Curtis, Ashley F; Turner, Gary R; Park, Norman W; Murtha, Susan J E

    2017-11-06

    Spatially informative auditory and vibrotactile (cross-modal) cues can facilitate attention but little is known about how similar cues influence visual spatial working memory (WM) across the adult lifespan. We investigated the effects of cues (spatially informative or alerting pre-cues vs. no cues), cue modality (auditory vs. vibrotactile vs. visual), memory array size (four vs. six items), and maintenance delay (900 vs. 1800 ms) on visual spatial location WM recognition accuracy in younger adults (YA) and older adults (OA). We observed a significant interaction between spatially informative pre-cue type, array size, and delay. OA and YA benefitted equally from spatially informative pre-cues, suggesting that attentional orienting prior to WM encoding, regardless of cue modality, is preserved with age.  Contrary to predictions, alerting pre-cues generally impaired performance in both age groups, suggesting that maintaining a vigilant state of arousal by facilitating the alerting attention system does not help visual spatial location WM.

  10. Switching auditory attention using spatial and non-spatial features recruits different cortical networks.

    PubMed

    Larson, Eric; Lee, Adrian K C

    2014-01-01

    Switching attention between different stimuli of interest based on particular task demands is important in many everyday settings. In audition in particular, switching attention between different speakers of interest that are talking concurrently is often necessary for effective communication. Recently, it has been shown by multiple studies that auditory selective attention suppresses the representation of unwanted streams in auditory cortical areas in favor of the target stream of interest. However, the neural processing that guides this selective attention process is not well understood. Here we investigated the cortical mechanisms involved in switching attention based on two different types of auditory features. By combining magneto- and electro-encephalography (M-EEG) with an anatomical MRI constraint, we examined the cortical dynamics involved in switching auditory attention based on either spatial or pitch features. We designed a paradigm where listeners were cued in the beginning of each trial to switch or maintain attention halfway through the presentation of concurrent target and masker streams. By allowing listeners time to switch during a gap in the continuous target and masker stimuli, we were able to isolate the mechanisms involved in endogenous, top-down attention switching. Our results show a double dissociation between the involvement of right temporoparietal junction (RTPJ) and the left inferior parietal supramarginal part (LIPSP) in tasks requiring listeners to switch attention based on space and pitch features, respectively, suggesting that switching attention based on these features involves at least partially separate processes or behavioral strategies. © 2013 Elsevier Inc. All rights reserved.

  11. Noise-induced tinnitus using individualized gap detection analysis and its relationship with hyperacusis, anxiety, and spatial cognition.

    PubMed

    Pace, Edward; Zhang, Jinsheng

    2013-01-01

    Tinnitus has a complex etiology that involves auditory and non-auditory factors and may be accompanied by hyperacusis, anxiety and cognitive changes. Thus far, investigations of the interrelationship between tinnitus and auditory and non-auditory impairment have yielded conflicting results. To further address this issue, we noise exposed rats and assessed them for tinnitus using a gap detection behavioral paradigm combined with statistically-driven analysis to diagnose tinnitus in individual rats. We also tested rats for hearing detection, responsivity, and loss using prepulse inhibition and auditory brainstem response, and for spatial cognition and anxiety using Morris water maze and elevated plus maze. We found that our tinnitus diagnosis method reliably separated noise-exposed rats into tinnitus(+) and tinnitus(-) groups and detected no evidence of tinnitus in tinnitus(-) and control rats. In addition, the tinnitus(+) group demonstrated enhanced startle amplitude, indicating hyperacusis-like behavior. Despite these results, neither tinnitus, hyperacusis nor hearing loss yielded any significant effects on spatial learning and memory or anxiety, though a majority of rats with the highest anxiety levels had tinnitus. These findings showed that we were able to develop a clinically relevant tinnitus(+) group and that our diagnosis method is sound. At the same time, like clinical studies, we found that tinnitus does not always result in cognitive-emotional dysfunction, although tinnitus may predispose subjects to certain impairment like anxiety. Other behavioral assessments may be needed to further define the relationship between tinnitus and anxiety, cognitive deficits, and other impairments.

  12. Neuromagnetic recordings reveal the temporal dynamics of auditory spatial processing in the human cortex.

    PubMed

    Tiitinen, Hannu; Salminen, Nelli H; Palomäki, Kalle J; Mäkinen, Ville T; Alku, Paavo; May, Patrick J C

    2006-03-20

    In an attempt to delineate the assumed 'what' and 'where' processing streams, we studied the processing of spatial sound in the human cortex by using magnetoencephalography in the passive and active recording conditions and two kinds of spatial stimuli: individually constructed, highly realistic spatial (3D) stimuli and stimuli containing interaural time difference (ITD) cues only. The auditory P1m, N1m, and P2m responses of the event-related field were found to be sensitive to the direction of sound source in the azimuthal plane. In general, the right-hemispheric responses to spatial sounds were more prominent than the left-hemispheric ones. The right-hemispheric P1m and N1m responses peaked earlier for sound sources in the contralateral than for sources in the ipsilateral hemifield and the peak amplitudes of all responses reached their maxima for contralateral sound sources. The amplitude of the right-hemispheric P2m response reflected the degree of spatiality of sound, being twice as large for the 3D than ITD stimuli. The results indicate that the right hemisphere is specialized in the processing of spatial cues in the passive recording condition. Minimum current estimate (MCE) localization revealed that temporal areas were activated both in the active and passive condition. This initial activation, taking place at around 100 ms, was followed by parietal and frontal activity at 180 and 200 ms, respectively. The latter activations, however, were specific to attentional engagement and motor responding. This suggests that parietal activation reflects active responding to a spatial sound rather than auditory spatial processing as such.

  13. Effects of methylphenidate on working memory components: influence of measurement.

    PubMed

    Bedard, Anne-Claude; Jain, Umesh; Johnson, Sheilah Hogg; Tannock, Rosemary

    2007-09-01

    To investigate the effects of methylphenidate (MPH) on components of working memory (WM) in attention-deficit hyperactivity disorder (ADHD) and determine the responsiveness of WM measures to MPH. Participants were a clinical sample of 50 children and adolescents with ADHD, aged 6 to 16 years old, who participated in an acute randomized, double-blind, placebo-controlled, crossover trial with single challenges of three MPH doses. Four components of WM were investigated, which varied in processing demands (storage versus manipulation of information) and modality (auditory-verbal; visual-spatial), each of which was indexed by a minimum of two separate measures. MPH improved the ability to store visual-spatial information irrespective of instrument used, but had no effects on the storage of auditory-verbal information. By contrast, MPH enhanced the ability to manipulate both auditory-verbal and visual-spatial information, although effects were instrument specific in both cases. MPH effects on WM are selective: they vary as a function of WM component and measurement.

  14. Role of semantic paradigms for optimization of language mapping in clinical FMRI studies.

    PubMed

    Zacà, D; Jarso, S; Pillai, J J

    2013-10-01

    The optimal paradigm choice for language mapping in clinical fMRI studies is challenging due to the variability in activation among different paradigms, the contribution to activation of cognitive processes other than language, and the difficulties in monitoring patient performance. In this study, we compared language localization and lateralization between 2 commonly used clinical language paradigms and 3 newly designed dual-choice semantic paradigms to define a streamlined and adequate language-mapping protocol. Twelve healthy volunteers performed 5 language paradigms: Silent Word Generation, Sentence Completion, Visual Antonym Pair, Auditory Antonym Pair, and Noun-Verb Association. Group analysis was performed to assess statistically significant differences in fMRI percentage signal change and lateralization index among these paradigms in 5 ROIs: inferior frontal gyrus, superior frontal gyrus, and middle frontal gyrus for expressive language activation, and middle temporal gyrus and superior temporal gyrus for receptive language activation. In the expressive ROIs, Silent Word Generation was the most robust and best lateralizing paradigm (greater percentage signal change and lateralization index than semantic paradigms at P < .01 and P < .05 levels, respectively). In the receptive ROIs, Sentence Completion and Noun-Verb Association were the most robust activators (greater percentage signal change than the other paradigms, P < .01). All paradigms except Auditory Antonym Pair lateralized well (the lateralization index for Auditory Antonym Pair was significantly lower than for the other paradigms, P < .05). The combination of Silent Word Generation and ≥1 visual semantic paradigm, such as Sentence Completion and Noun-Verb Association, is adequate to determine language localization and lateralization; Noun-Verb Association has the additional advantage of objective monitoring of patient performance.
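
    For reference, the lateralization index used in such studies is conventionally computed from activation in homologous left- and right-hemisphere ROIs; a minimal sketch of that standard definition (the voxel-count inputs are an assumed convention, not this paper's exact procedure):

    ```python
    def lateralization_index(left_activation, right_activation):
        """Standard LI = (L - R) / (L + R), ranging from -1 (fully
        right-lateralized) to +1 (fully left-lateralized). Inputs are
        typically suprathreshold voxel counts or summed signal change."""
        return ((left_activation - right_activation)
                / (left_activation + right_activation))

    # Example: 480 suprathreshold voxels on the left vs. 120 on the
    # right gives LI = 0.6, conventionally read as left-lateralized.
    print(lateralization_index(480, 120))
    ```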

  15. Optimal Audiovisual Integration in the Ventriloquism Effect But Pervasive Deficits in Unisensory Spatial Localization in Amblyopia.

    PubMed

    Richards, Michael D; Goltz, Herbert C; Wong, Agnes M F

    2018-01-01

    Classically understood as a deficit in spatial vision, amblyopia is increasingly recognized to also impair audiovisual multisensory processing. Studies to date, however, have not determined whether the audiovisual abnormalities reflect a failure of multisensory integration, or an optimal strategy in the face of unisensory impairment. We use the ventriloquism effect and the maximum-likelihood estimation (MLE) model of optimal integration to investigate integration of audiovisual spatial information in amblyopia. Participants with unilateral amblyopia (n = 14; mean age 28.8 years; 7 anisometropic, 3 strabismic, 4 mixed mechanism) and visually normal controls (n = 16, mean age 29.2 years) localized brief unimodal auditory, unimodal visual, and bimodal (audiovisual) stimuli during binocular viewing using a location discrimination task. A subset of bimodal trials involved the ventriloquism effect, an illusion in which auditory and visual stimuli originating from different locations are perceived as originating from a single location. Localization precision and bias were determined by psychometric curve fitting, and the observed parameters were compared with predictions from the MLE model. Spatial localization precision was significantly reduced in the amblyopia group compared with the control group for unimodal visual, unimodal auditory, and bimodal stimuli. Analyses of localization precision and bias for bimodal stimuli showed no significant deviations from the MLE model in either the amblyopia group or the control group. Despite pervasive deficits in localization precision for visual, auditory, and audiovisual stimuli, audiovisual integration remains intact and optimal in unilateral amblyopia.
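
    The MLE model referenced above makes concrete quantitative predictions: each cue is weighted by its inverse variance, and the predicted bimodal variance is smaller than either unimodal variance. A minimal sketch of these standard formulas (variable names are illustrative):

    ```python
    import numpy as np

    def mle_bimodal(loc_a, sigma_a, loc_v, sigma_v):
        """Maximum-likelihood (inverse-variance weighted) combination of
        an auditory and a visual location estimate. Returns the predicted
        bimodal location and its standard deviation."""
        w_v = sigma_a**2 / (sigma_a**2 + sigma_v**2)   # weight on vision
        loc_av = w_v * loc_v + (1 - w_v) * loc_a
        sigma_av = np.sqrt(sigma_a**2 * sigma_v**2
                           / (sigma_a**2 + sigma_v**2))
        return loc_av, sigma_av
    ```

    Comparing observed bimodal precision and bias against these predictions, as done here via psychometric fits, is what distinguishes optimal integration from a mere failure to integrate.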

  16. Evidence for enhanced discrimination of virtual auditory distance among blind listeners using level and direct-to-reverberant cues.

    PubMed

    Kolarik, Andrew J; Cirstea, Silvia; Pardhan, Shahina

    2013-02-01

    Totally blind listeners often demonstrate better than normal capabilities when performing spatial hearing tasks. Accurate representation of three-dimensional auditory space requires the processing of available distance information between the listener and the sound source; however, auditory distance cues vary greatly depending upon the acoustic properties of the environment, and it is not known which distance cues are important to totally blind listeners. Our data show that totally blind listeners display better performance compared to sighted age-matched controls for distance discrimination tasks in anechoic and reverberant virtual rooms simulated using a room-image procedure. Totally blind listeners use two major auditory distance cues to stationary sound sources, level and direct-to-reverberant ratio, more effectively than sighted controls for many of the virtual distances tested. These results show that significant compensation among totally blind listeners for virtual auditory spatial distance leads to benefits across a range of simulated acoustic environments. No significant differences in performance were observed between listeners with partial non-correctable visual losses and sighted controls, suggesting that sensory compensation for virtual distance does not occur for listeners with partial vision loss.
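
    For context, the direct-to-reverberant ratio cue named above is conventionally computed from a room impulse response by splitting the energy a few milliseconds after the direct-path arrival; a sketch of that convention (the 2.5 ms split is an assumed, commonly used value):

    ```python
    import numpy as np

    def direct_to_reverberant_ratio(ir, fs, direct_ms=2.5):
        """DRR in dB: energy up to a few ms after the direct-path peak
        of a room impulse response vs. all later (reverberant) energy."""
        peak = np.argmax(np.abs(ir))
        split = peak + int(direct_ms * 1e-3 * fs)
        direct_energy = np.sum(ir[:split] ** 2)
        reverb_energy = np.sum(ir[split:] ** 2)
        return 10 * np.log10(direct_energy / reverb_energy)
    ```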

  17. Local and Global Spatial Organization of Interaural Level Difference and Frequency Preferences in Auditory Cortex

    PubMed Central

    Panniello, Mariangela; King, Andrew J; Dahmen, Johannes C; Walker, Kerry M M

    2018-01-01

    Despite decades of microelectrode recordings, fundamental questions remain about how auditory cortex represents sound-source location. Here, we used in vivo 2-photon calcium imaging to measure the sensitivity of layer II/III neurons in mouse primary auditory cortex (A1) to interaural level differences (ILDs), the principal spatial cue in this species. Although most ILD-sensitive neurons preferred ILDs favoring the contralateral ear, neurons with either midline or ipsilateral preferences were also present. An opponent-channel decoder accurately classified ILDs using the difference in responses between populations of neurons that preferred contralateral-ear-greater and ipsilateral-ear-greater stimuli. We also examined the spatial organization of binaural tuning properties across the imaged neurons with unprecedented resolution. Neurons driven exclusively by contralateral ear stimuli or by binaural stimulation occasionally formed local clusters, but their binaural categories and ILD preferences were not spatially organized on a more global scale. In contrast, the sound frequency preferences of most neurons within local cortical regions fell within a restricted frequency range, and a tonotopic gradient was observed across the cortical surface of individual mice. These results indicate that the representation of ILDs in mouse A1 is comparable to that of most other mammalian species, and appears to lack systematic or consistent spatial order. PMID:29136122
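
    A minimal sketch of the opponent-channel readout described above: the decoder input is the difference between the mean activity of contralateral-preferring and ipsilateral-preferring populations, which a simple classifier then maps to ILD classes. The array names and classifier choice are assumptions, not the authors' implementation:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def opponent_channel(responses, prefers_contra):
        """responses: (n_trials, n_neurons); prefers_contra: boolean mask
        flagging contralateral-preferring neurons. Returns the per-trial
        contra-minus-ipsi population difference as a column feature."""
        contra = responses[:, prefers_contra].mean(axis=1)
        ipsi = responses[:, ~prefers_contra].mean(axis=1)
        return (contra - ipsi).reshape(-1, 1)

    # Hypothetical usage: X holds trial-by-neuron responses, y the ILD
    # class of each trial, mask the contralateral-preferring neurons.
    # clf = LinearDiscriminantAnalysis().fit(opponent_channel(X, mask), y)
    ```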

  18. Compatibility of motion facilitates visuomotor synchronization.

    PubMed

    Hove, Michael J; Spivey, Michael J; Krumhansl, Carol L

    2010-12-01

    Prior research indicates that synchronized tapping performance is very poor with flashing visual stimuli compared with auditory stimuli. Three finger-tapping experiments compared flashing visual metronomes with visual metronomes containing a spatial component, either compatible, incompatible, or orthogonal to the tapping action. In Experiment 1, synchronization success rates increased dramatically for spatiotemporal sequences of both geometric and biological forms over flashing sequences. In Experiment 2, synchronization performance was best when target sequences and movements were directionally compatible (i.e., simultaneously down), followed by orthogonal stimuli, and was poorest for incompatible moving stimuli and flashing stimuli. In Experiment 3, synchronization performance was best with auditory sequences, followed by compatible moving stimuli, and was worst for flashing and fading stimuli. Results indicate that visuomotor synchronization improves dramatically with compatible spatial information. However, an auditory advantage in sensorimotor synchronization persists.

  19. Enhanced auditory spatial localization in blind echolocators.

    PubMed

    Vercillo, Tiziana; Milne, Jennifer L; Gori, Monica; Goodale, Melvyn A

    2015-01-01

    Echolocation is the extraordinary ability to represent the external environment by using reflected sound waves from self-generated auditory pulses. Blind human expert echolocators show extremely precise spatial acuity and high accuracy in determining the shape and motion of objects by using echoes. In the current study, we investigated whether or not the use of echolocation would improve the representation of auditory space, which is severely compromised in congenitally blind individuals (Gori et al., 2014). The performance of three blind expert echolocators was compared to that of 6 blind non-echolocators and 11 sighted participants. Two tasks were performed: (1) a space bisection task in which participants judged whether the second of a sequence of three sounds was closer in space to the first or the third sound and (2) a minimum audible angle task in which participants reported which of two sounds presented successively was located more to the right. The blind non-echolocating group showed a severe impairment only in the space bisection task compared to the sighted group. Remarkably, the three blind expert echolocators performed both spatial tasks with similar or even better precision and accuracy than the sighted group. These results suggest that echolocation may improve the general sense of auditory space, most likely through a process of sensory calibration. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Neural Correlates of Sound Localization in Complex Acoustic Environments

    PubMed Central

    Zündorf, Ida C.; Lewald, Jörg; Karnath, Hans-Otto

    2013-01-01

    Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment in healthy subjects. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in posterior superior temporal gyrus bilaterally, anterior insula, supplementary motor area, and frontoparietal network. Moreover, the results indicated critical roles of left planum temporale in extracting the sound of interest among acoustical distracters and the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for analysis of spectrotemporal sound features. Furthermore, the precuneus, a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects, also seems to be a crucial area for accurately determining locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality. PMID:23691185

  1. A virtual display system for conveying three-dimensional acoustic information

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Wightman, Frederic L.; Foster, Scott H.

    1988-01-01

    The development of a three-dimensional auditory display system is discussed. Theories of human sound localization and techniques for synthesizing various features of auditory spatial perceptions are examined. Psychophysical data validating the system are presented. The human factors applications of the system are considered.

  2. Neuronal nonlinearity explains greater visual spatial resolution for darks than lights.

    PubMed

    Kremkow, Jens; Jin, Jianzhong; Komban, Stanley J; Wang, Yushi; Lashgari, Reza; Li, Xiaobing; Jansen, Michael; Zaidi, Qasim; Alonso, Jose-Manuel

    2014-02-25

    Astronomers and physicists noticed centuries ago that visual spatial resolution is higher for dark than light stimuli, but the neuronal mechanisms for this perceptual asymmetry remain unknown. Here we demonstrate that the asymmetry is caused by a neuronal nonlinearity in the early visual pathway. We show that neurons driven by darks (OFF neurons) increase their responses roughly linearly with luminance decrements, independent of the background luminance. However, neurons driven by lights (ON neurons) saturate their responses with small increases in luminance and need bright backgrounds to approach the linearity of OFF neurons. We show that, as a consequence of this difference in linearity, receptive fields are larger in ON than OFF thalamic neurons, and cortical neurons are more strongly driven by darks than lights at low spatial frequencies. This ON/OFF asymmetry in linearity could be demonstrated in the visual cortex of cats, monkeys, and humans and in the cat visual thalamus. Furthermore, in the cat visual thalamus, we show that the neuronal nonlinearity is present at the ON receptive field center of ON-center neurons and ON receptive field surround of OFF-center neurons, suggesting an origin at the level of the photoreceptor. These results demonstrate a fundamental difference in visual processing between ON and OFF channels and reveal a competitive advantage for OFF neurons over ON neurons at low spatial frequencies, which could be important during cortical development when retinal images are blurred by immature optics in infant eyes.

  3. Comparisons of MRI images, and auditory-related and vocal-related protein expressions in the brain of echolocation bats and rodents.

    PubMed

    Hsiao, Chun-Jen; Hsu, Chih-Hsiang; Lin, Ching-Lung; Wu, Chung-Hsin; Jen, Philip Hung-Sun

    2016-08-17

    Although echolocating bats and other mammals share the basic design of laryngeal apparatus for sound production and auditory system for sound reception, they have a specialized laryngeal mechanism for ultrasonic sound emissions as well as a highly developed auditory system for processing species-specific sounds. Because the sounds used by bats for echolocation and rodents for communication are quite different, there must be differences between them in the central nervous system structures devoted to producing and processing species-specific sounds. The present study examines differences in the relative size of several brain structures and in the expression of auditory-related and vocal-related proteins in the central nervous system of echolocating bats and rodents. Here, we report that bats using constant frequency-frequency-modulated sounds for echolocation (CF-FM bats) and FM bats have a larger volume of midbrain nuclei (inferior and superior colliculi) and cerebellum relative to the size of the brain than rodents (mice and rats). However, the former have a smaller volume of the cerebrum and olfactory bulb, but greater expression of otoferlin and forkhead box protein P2 than the latter. Although the size of both midbrain colliculi is comparable in both CF-FM and FM bats, CF-FM bats have a larger cerebrum and greater expression of otoferlin and forkhead box protein P2 than FM bats. These differences in brain structure and protein expression are discussed in relation to their biologically relevant sounds and foraging behavior.

  4. Extrinsic Embryonic Sensory Stimulation Alters Multimodal Behavior and Cellular Activation

    PubMed Central

    Markham, Rebecca G.; Shimizu, Toru; Lickliter, Robert

    2009-01-01

    Embryonic vision is generated and maintained by spontaneous neuronal activation patterns, yet extrinsic stimulation also sculpts sensory development. Because the sensory and motor systems are interconnected in embryogenesis, how extrinsic sensory activation guides multimodal differentiation is an important topic. Further, it is unknown whether extrinsic stimulation experienced near sensory sensitivity onset contributes to persistent brain changes, ultimately affecting postnatal behavior. To determine the effects of extrinsic stimulation on multimodal development, we delivered auditory stimulation to bobwhite quail groups during early, middle, or late embryogenesis, and then tested postnatal behavioral responsiveness to auditory or visual cues. Auditory preference tendencies were more consistently toward the conspecific stimulus for animals stimulated during late embryogenesis. Groups stimulated during middle or late embryogenesis showed altered postnatal species-typical visual responsiveness, demonstrating a persistent multimodal effect. We also examined whether auditory-related brain regions are receptive to extrinsic input during middle embryogenesis by measuring postnatal cellular activation. Stimulated birds showed a greater number of ZENK-immunopositive cells per unit volume of brain tissue in deep optic tectum, a midbrain region strongly implicated in multimodal function. We observed similar results in the medial and caudomedial nidopallia in the telencephalon. There were no ZENK differences between groups in inferior colliculus or in caudolateral nidopallium, avian analog to prefrontal cortex. To our knowledge, these are the first results linking extrinsic stimulation delivered so early in embryogenesis to changes in postnatal multimodal behavior and cellular activation. The potential role of competitive interactions between the sensory and motor systems is discussed. PMID:18777564

  5. Spatial Attention and Audiovisual Interactions in Apparent Motion

    ERIC Educational Resources Information Center

    Sanabria, Daniel; Soto-Faraco, Salvador; Spence, Charles

    2007-01-01

    In this study, the authors combined the cross-modal dynamic capture task (involving the horizontal apparent movement of visual and auditory stimuli) with spatial cuing in the vertical dimension to investigate the role of spatial attention in cross-modal interactions during motion perception. Spatial attention was manipulated endogenously, either…

  6. Multisensory Integration Affects Visuo-Spatial Working Memory

    ERIC Educational Resources Information Center

    Botta, Fabiano; Santangelo, Valerio; Raffone, Antonino; Sanabria, Daniel; Lupianez, Juan; Belardinelli, Marta Olivetti

    2011-01-01

    In the present study, we investigate how spatial attention, driven by unisensory and multisensory cues, can bias the access of information into visuo-spatial working memory (VSWM). In a series of four experiments, we compared the effectiveness of spatially-nonpredictive visual, auditory, or audiovisual cues in capturing participants' spatial…

  7. Characterizing spatial tuning functions of neurons in the auditory cortex of young and aged monkeys: a new perspective on old data

    PubMed Central

    Engle, James R.; Recanzone, Gregg H.

    2012-01-01

    Age-related hearing deficits are a leading cause of disability among the aged. While some forms of hearing deficits are peripheral in origin, others are centrally mediated. One such deficit is in the ability to localize sounds, a critical component for segregating different acoustic objects and events, which is dependent on the auditory cortex. Recent evidence indicates that in aged animals the normal sharpening of spatial tuning from neurons in primary auditory cortex to those in the caudal lateral field does not occur as it does in younger animals. As a decrease in inhibition with aging is common in the ascending auditory system, it is possible that this lack of spatial tuning sharpening is due to a decrease in inhibition at different periods within the response. It is also possible that spatial tuning was decreased as a consequence of reduced inhibition at non-best locations. In this report we found that aged animals had greater activity throughout the response period, but primarily during the onset of the response. This was most prominent at non-best directions, which is consistent with the hypothesis that inhibition is a primary mechanism for sharpening spatial tuning curves. We also noted that in aged animals the latency of the response was much shorter than in younger animals, which is consistent with a decrease in pre-onset inhibition. These results can be interpreted in the context of a failure of the timing and efficiency of feed-forward thalamo-cortical and cortico-cortical circuits in aged animals. Such a mechanism, if generalized across cortical areas, could play a major role in age-related cognitive decline. PMID:23316160

  8. Behavioral detection of intra-cortical microstimulation in the primary and secondary auditory cortex of cats

    PubMed Central

    Zhao, Zhenling; Liu, Yongchun; Ma, Lanlan; Sato, Yu; Qin, Ling

    2015-01-01

    Although neural responses to sound stimuli have been thoroughly investigated in various areas of the auditory cortex, the results of electrophysiological recordings cannot establish a causal link between neural activation and brain function. Electrical microstimulation, which can selectively perturb neural activity in specific parts of the nervous system, is an important tool for exploring the organization and function of brain circuitry. To date, the studies describing the behavioral effects of electrical stimulation have largely been conducted in the primary auditory cortex. In this study, to investigate the potential differences in the effects of electrical stimulation on different cortical areas, we measured the behavioral performance of cats in detecting intra-cortical microstimulation (ICMS) delivered in the primary and secondary auditory fields (A1 and A2, respectively). After being trained to perform a Go/No-Go task cued by sounds, we found that cats could also learn to perform the task cued by ICMS; furthermore, the detection of the ICMS was similarly sensitive in A1 and A2. Presenting wideband noise together with ICMS substantially decreased the performance of cats in detecting ICMS in A1 and A2, consistent with a noise masking effect on the sensation elicited by the ICMS. In contrast, presenting ICMS with pure-tones in the spectral receptive field of the electrode-implanted cortical site reduced ICMS detection performance in A1 but not A2. Therefore, activation of A1 and A2 neurons may produce different qualities of sensation. Overall, our study revealed that ICMS-induced neural activity could be easily integrated into an animal’s behavioral decision process and has implications for the development of cortical auditory prosthetics. PMID:25964744

  9. Behavioral detection of intra-cortical microstimulation in the primary and secondary auditory cortex of cats.

    PubMed

    Zhao, Zhenling; Liu, Yongchun; Ma, Lanlan; Sato, Yu; Qin, Ling

    2015-01-01

    Although neural responses to sound stimuli have been thoroughly investigated in various areas of the auditory cortex, the results of electrophysiological recordings cannot establish a causal link between neural activation and brain function. Electrical microstimulation, which can selectively perturb neural activity in specific parts of the nervous system, is an important tool for exploring the organization and function of brain circuitry. To date, the studies describing the behavioral effects of electrical stimulation have largely been conducted in the primary auditory cortex. In this study, to investigate the potential differences in the effects of electrical stimulation on different cortical areas, we measured the behavioral performance of cats in detecting intra-cortical microstimulation (ICMS) delivered in the primary and secondary auditory fields (A1 and A2, respectively). After being trained to perform a Go/No-Go task cued by sounds, we found that cats could also learn to perform the task cued by ICMS; furthermore, the detection of the ICMS was similarly sensitive in A1 and A2. Presenting wideband noise together with ICMS substantially decreased the performance of cats in detecting ICMS in A1 and A2, consistent with a noise masking effect on the sensation elicited by the ICMS. In contrast, presenting ICMS with pure-tones in the spectral receptive field of the electrode-implanted cortical site reduced ICMS detection performance in A1 but not A2. Therefore, activation of A1 and A2 neurons may produce different qualities of sensation. Overall, our study revealed that ICMS-induced neural activity could be easily integrated into an animal's behavioral decision process and has implications for the development of cortical auditory prosthetics.

  10. Implicit semantic priming in Spanish-speaking children and adults: an auditory lexical decision task.

    PubMed

    Girbau, Dolors; Schwartz, Richard G

    2011-05-01

    Although receptive priming has long been used as a way to examine lexical access in adults, few studies have applied this method to children and rarely in an auditory modality. We compared auditory associative priming in children and adults. A testing battery and a Lexical Decision (LD) task were administered to 42 adults and 27 children (ages 8;1-10;11) from Spain. They listened to Spanish word pairs (semantically related/unrelated word pairs and word-pseudoword pairs), and tone pairs. Then participants pressed one key for word pairs, and another for pairs with a word and a pseudoword. They also had to press the two keys alternately for tone pairs as a basic auditory control. Both groups of participants, children and adults, exhibited semantic priming, with significantly faster Reaction Times (RTs) to semantically related word pairs than to unrelated pairs and to the two word-pseudoword sets. The priming effect was twice as large in adults as in children, and the children (not the adults) were significantly slower in their response to word-pseudoword pairs than to the unrelated word pairs. Moreover, accuracy was somewhat higher in adults than children for each word pair type, but especially in the word-pseudoword pairs. As expected, children were significantly slower than adults in the RTs for all stimulus types, and their RTs decreased significantly from 8 to 10 years of age and also in relation to the development of some language abilities (e.g., comprehension of relative clauses). In both age groups, the average Reaction Time for tone pairs was lower than for speech pairs, but only adults obtained 100% accuracy (accuracy was slightly lower in children). Auditory processing and semantic networks are still developing in 8-10 year old children.

  11. Effect of leading-edge geometry on boundary-layer receptivity to freestream sound

    NASA Technical Reports Server (NTRS)

    Lin, Nay; Reed, Helen L.; Saric, W. S.

    1991-01-01

    The receptivity to freestream sound of the laminar boundary layer over a semi-infinite flat plate with an elliptic leading edge is simulated numerically. The incompressible flow past the flat plate is computed by solving the full Navier-Stokes equations in general curvilinear coordinates. A finite-difference method which is second-order accurate in space and time is used. Spatial and temporal developments of the Tollmien-Schlichting wave in the boundary layer, due to small-amplitude time-harmonic oscillations of the freestream velocity that closely simulate a sound wave travelling parallel to the plate, are observed. The effect of leading-edge curvature is studied by varying the aspect ratio of the ellipse. The boundary layer over the flat plate with a sharper leading edge is found to be less receptive. The relative contribution of the discontinuity in curvature at the ellipse-flat-plate juncture to receptivity is investigated by smoothing the juncture with a polynomial. Continuous curvature leads to less receptivity. A new geometry of the leading edge, a modified super ellipse, which provides continuous curvature at the juncture with the flat plate, is used to study the effect of continuous curvature and inherent pressure gradient on receptivity.

  12. Adaptive spatial filtering improves speech reception in noise while preserving binaural cues.

    PubMed

    Bissmeyer, Susan R S; Goldsworthy, Raymond L

    2017-09-01

    Hearing loss greatly reduces an individual's ability to comprehend speech in the presence of background noise. Over the past decades, numerous signal-processing algorithms have been developed to improve speech reception in these situations for cochlear implant and hearing aid users. One challenge is to reduce background noise without introducing interaural distortion that would degrade binaural hearing. The present study evaluates a noise reduction algorithm, referred to as binaural Fennec, that was designed to improve speech reception in background noise while preserving binaural cues. Speech reception thresholds were measured for normal-hearing listeners in a simulated environment with target speech generated in front of the listener and background noise originating 90° to the right of the listener. Lateralization thresholds were also measured in the presence of background noise. These measures were conducted in anechoic and reverberant environments. Results indicate that the algorithm improved speech reception thresholds, even in highly reverberant environments. The algorithm also improved lateralization thresholds in the anechoic environment while leaving lateralization thresholds in the reverberant environments unaffected. These results provide clear evidence that this algorithm can improve speech reception in background noise while preserving the binaural cues used to lateralize sound.
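
    The abstract does not describe the internals of the binaural Fennec algorithm. One standard way to reduce noise while preserving binaural cues, however, is to estimate a single real-valued time-frequency gain and apply it identically to the left and right channels, leaving the ITD and ILD within each bin untouched. The sketch below illustrates that general idea only; the Wiener gain, the gain floor, and the noise-PSD input are assumptions, not the published algorithm:

        import numpy as np
        from scipy.signal import stft, istft

        def common_gain_denoise(left, right, noise_psd, fs=16000, nfft=512):
            """Apply one shared Wiener-style gain to both ears.

            Because the gain is real and identical across channels, interaural
            time and level differences inside each bin are preserved.
            noise_psd: per-frequency noise power estimate of shape
            (nfft // 2 + 1,), e.g. from a speech-free segment; this input
            is an assumption, not part of the Fennec algorithm.
            """
            _, _, L = stft(left, fs=fs, nperseg=nfft)
            _, _, R = stft(right, fs=fs, nperseg=nfft)
            mix_psd = 0.5 * (np.abs(L) ** 2 + np.abs(R) ** 2)
            snr = np.maximum(mix_psd / noise_psd[:, None] - 1.0, 0.0)
            gain = snr / (snr + 1.0)          # Wiener gain, shared by both ears
            gain = np.maximum(gain, 0.1)      # gain floor limits musical noise
            _, left_out = istft(gain * L, fs=fs, nperseg=nfft)
            _, right_out = istft(gain * R, fs=fs, nperseg=nfft)
            return left_out, right_out

    Because both ears are scaled by the same real number in every time-frequency bin, the complex ratio between the channels, and hence the interaural cues, is unchanged.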

  13. Unconscious Cross-Modal Priming of Auditory Sound Localization by Visual Words

    ERIC Educational Resources Information Center

    Ansorge, Ulrich; Khalid, Shah; Laback, Bernhard

    2016-01-01

    Little is known about the cross-modal integration of unconscious and conscious information. In the current study, we therefore tested whether the spatial meaning of an unconscious visual word, such as "up", influences the perceived location of a subsequently presented auditory target. Although cross-modal integration of unconscious…

  14. Estimating the Intended Sound Direction of the User: Toward an Auditory Brain-Computer Interface Using Out-of-Head Sound Localization

    PubMed Central

    Nambu, Isao; Ebisawa, Masashi; Kogure, Masumi; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro

    2013-01-01

    The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As a cue, auditory BCIs can deal with many of the characteristics of stimuli such as tone, pitch, and voices. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction, through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify whether the subject attended the direction of a presented stimulus from EEG signals. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables us to provide a high-performance and loudspeaker-less portable BCI system. PMID:23437338
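
    The classification step described above, deciding from EEG features whether the subject attended the cued direction, can be sketched with an off-the-shelf linear SVM. The feature layout, placeholder data, and trial-averaging helper below are illustrative assumptions; only the general recipe (a linear SVM on 200-500 ms ERP features, with k-trial averaging to raise the signal-to-noise ratio) follows the abstract:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # X: (n_trials, n_channels * n_timepoints) ERP features, e.g. the
        # 200-500 ms post-stimulus window; y: 1 = attended direction, 0 = not.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((300, 64 * 30))   # placeholder data
        y = rng.integers(0, 2, 300)

        clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        print("single-trial CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

        def average_trials(X, y, k):
            """Average k same-class trials; higher ERP SNR explains why the
            reported accuracy climbs from ~70% (single trial) to ~89.5%
            (10-trial averaging)."""
            out_X, out_y = [], []
            for label in np.unique(y):
                grp = X[y == label]
                for i in range(0, len(grp) - k + 1, k):
                    out_X.append(grp[i:i + k].mean(axis=0))
                    out_y.append(label)
            return np.array(out_X), np.array(out_y)

        Xa, ya = average_trials(X, y, k=10)
        print("10-trial-average CV accuracy:", cross_val_score(clf, Xa, ya, cv=5).mean())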

  15. Cellular and Molecular Underpinnings of Neuronal Assembly in the Central Auditory System during Mouse Development

    PubMed Central

    Di Bonito, Maria; Studer, Michèle

    2017-01-01

    During development, the organization of the auditory system into distinct functional subcircuits depends on the spatially and temporally ordered sequence of neuronal specification, differentiation, migration and connectivity. Regional patterning along the antero-posterior axis and neuronal subtype specification along the dorso-ventral axis intersect to determine proper neuronal fate and assembly of rhombomere-specific auditory subcircuits. By taking advantage of the increasing number of transgenic mouse lines, recent studies have expanded the knowledge of developmental mechanisms involved in the formation and refinement of the auditory system. Here, we summarize several findings dealing with the molecular and cellular mechanisms that underlie the assembly of central auditory subcircuits during mouse development, focusing primarily on the rhombomeric and dorso-ventral origin of auditory nuclei and their associated molecular genetic pathways. PMID:28469562

  16. Musicians and non-musicians are equally adept at perceiving masked speech

    PubMed Central

    Boebinger, Dana; Evans, Samuel; Scott, Sophie K.; Rosen, Stuart; Lima, César F.; Manly, Tom

    2015-01-01

    There is much interest in the idea that musicians perform better than non-musicians in understanding speech in background noise. Research in this area has often used energetic maskers, which have their effects primarily at the auditory periphery. However, masking interference can also occur at more central auditory levels, known as informational masking. This experiment extends existing research by using multiple maskers that vary in their informational content and similarity to speech, in order to examine differences in perception of masked speech between trained musicians (n = 25) and non-musicians (n = 25). Although musicians outperformed non-musicians on a measure of frequency discrimination, they showed no advantage in perceiving masked speech. Further analysis revealed that nonverbal IQ, rather than musicianship, significantly predicted speech reception thresholds in noise. The results strongly suggest that the contribution of general cognitive abilities needs to be taken into account in any investigations of individual variability for perceiving speech in noise. PMID:25618067

  17. Inverted-U Function Relating Cortical Plasticity and Task Difficulty

    PubMed Central

    Engineer, Navzer D.; Engineer, Crystal T.; Reed, Amanda C.; Pandya, Pritesh K.; Jakkamsetti, Vikram; Moucha, Raluca; Kilgard, Michael P.

    2012-01-01

    Many psychological and physiological studies with simple stimuli have suggested that perceptual learning specifically enhances the response of primary sensory cortex to task-relevant stimuli. The aim of this study was to determine whether auditory discrimination training on complex tasks enhances primary auditory cortex responses to a target sequence relative to non-target and novel sequences. We collected responses from more than 2,000 sites in 31 rats trained on one of six discrimination tasks that differed primarily in the similarity of the target and distractor sequences. Unlike training with simple stimuli, long-term training with complex stimuli did not generate target specific enhancement in any of the groups. Instead, cortical receptive field size decreased, latency decreased, and paired pulse depression decreased in rats trained on the tasks of intermediate difficulty while tasks that were too easy or too difficult either did not alter or degraded cortical responses. These results suggest an inverted-U function relating neural plasticity and task difficulty. PMID:22249158

  18. Spatial auditory processing in pinnipeds

    NASA Astrophysics Data System (ADS)

    Holt, Marla M.

    Given the biological importance of sound for a variety of activities, pinnipeds must be able to obtain spatial information about their surroundings through acoustic input in the absence of other sensory cues. The three chapters of this dissertation address the spatial auditory processing capabilities of pinnipeds in air, given that these amphibious animals use acoustic signals for reproduction and survival on land. Two chapters are comparative lab-based studies that utilized psychophysical approaches conducted in an acoustic chamber. Chapter 1 addressed the frequency-dependent sound localization abilities in azimuth of three pinniped species (the harbor seal, Phoca vitulina, the California sea lion, Zalophus californianus, and the northern elephant seal, Mirounga angustirostris). While the performances of the sea lion and harbor seal were consistent with the duplex theory of sound localization, the elephant seal, a low-frequency hearing specialist, showed a decreased ability to localize the highest frequencies tested. In Chapter 2, spatial release from masking (SRM), which occurs when a signal and masker are spatially separated, resulting in improved signal detectability relative to conditions in which they are co-located, was determined in a harbor seal and a sea lion. Absolute and masked thresholds were measured at three frequencies and azimuths to determine the detection advantages afforded by this type of spatial auditory processing. Results showed that hearing sensitivity was enhanced by up to 19 and 12 dB in the harbor seal and sea lion, respectively, when the signal and masker were spatially separated. Chapter 3 was a field-based study that quantified both sender and receiver variables of the directional properties of male northern elephant seal calls produced within a communication system that serves to delineate dominance status. This included measuring call directivity patterns, observing male-male vocally mediated interactions, and an acoustic playback study. Results showed that males produce highly directional calls, which, together with social status, influenced the responses of receivers. The playback study confirmed that the isolated acoustic components of this display elicited similar responses among males. These three chapters provide further information about comparative aspects of spatial auditory processing in pinnipeds.

  19. Noise-Induced Tinnitus Using Individualized Gap Detection Analysis and Its Relationship with Hyperacusis, Anxiety, and Spatial Cognition

    PubMed Central

    Pace, Edward; Zhang, Jinsheng

    2013-01-01

    Tinnitus has a complex etiology that involves auditory and non-auditory factors and may be accompanied by hyperacusis, anxiety and cognitive changes. Thus far, investigations of the interrelationship between tinnitus and auditory and non-auditory impairment have yielded conflicting results. To further address this issue, we noise-exposed rats and assessed them for tinnitus using a gap detection behavioral paradigm combined with statistically driven analysis to diagnose tinnitus in individual rats. We also tested rats for hearing detection, responsivity, and loss using prepulse inhibition and auditory brainstem response, and for spatial cognition and anxiety using the Morris water maze and elevated plus maze. We found that our tinnitus diagnosis method reliably separated noise-exposed rats into tinnitus(+) and tinnitus(−) groups and detected no evidence of tinnitus in tinnitus(−) and control rats. In addition, the tinnitus(+) group demonstrated enhanced startle amplitude, indicating hyperacusis-like behavior. Despite these results, neither tinnitus, hyperacusis nor hearing loss yielded any significant effects on spatial learning and memory or anxiety, though a majority of the rats with the highest anxiety levels had tinnitus. These findings showed that we were able to develop a clinically relevant tinnitus(+) group and that our diagnosis method is sound. At the same time, as in clinical studies, we found that tinnitus does not always result in cognitive-emotional dysfunction, although tinnitus may predispose subjects to certain impairments such as anxiety. Other behavioral assessments may be needed to further define the relationship between tinnitus and anxiety, cognitive deficits, and other impairments.

  20. Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.

    PubMed

    Vercillo, Tiziana; Gori, Monica

    2015-01-01

    The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three stimuli (unisensory or multisensory sounds and vibrations, spatially conflicting or non-conflicting, arranged along the horizontal axis) were presented sequentially. In the primary task, a space bisection task, participants evaluated the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task they had to report occasional changes in the duration of the second auditory stimulus. In the non-attentional condition participants performed only the primary task (space bisection). Our results showed enhanced auditory precision (and higher auditory weights) in the attentional condition relative to the non-attentional control condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
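
    The MLE model referred to above makes a concrete quantitative prediction: when the auditory and tactile position estimates are combined optimally, each cue is weighted by its inverse variance (its reliability). In generic notation (the symbols are standard, not the paper's):

        \hat{x}_{AT} = w_A \hat{x}_A + w_T \hat{x}_T,
        \qquad w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_T^2}, \quad w_T = 1 - w_A,
        \qquad \sigma_{AT}^2 = \frac{\sigma_A^2 \, \sigma_T^2}{\sigma_A^2 + \sigma_T^2} \le \min(\sigma_A^2, \sigma_T^2).

    On this account, attention that improves auditory precision (smaller sigma_A) automatically raises the auditory weight w_A, which is the reweighting the study reports.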

  1. The opponent channel population code of sound location is an efficient representation of natural binaural sounds.

    PubMed

    Młynarski, Wiktor

    2015-05-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
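
    As a rough formal sketch of the two-layer model described above (the notation is simplified and mine, not the paper's): the first layer encodes each ear's waveform with complex coefficients that factor into amplitude and phase, and the second layer learns a sparse joint code over the amplitudes and the interaural phase differences:

        x_L(t) \approx \sum_j \mathrm{Re}\!\left[ a_j^L \, \phi_j(t) \right],
        \qquad a_j^L = r_j^L e^{i \theta_j^L} \quad (\text{likewise for } x_R),

        \mathbf{u} = \left( \log r_j^L, \; \log r_j^R, \; \theta_j^L - \theta_j^R \right)_j \approx \mathbf{B}\,\mathbf{s},
        \qquad \mathbf{s} \ \text{sparse}.

    A second-layer unit's spatial tuning curve is then obtained by recording its activation s_k while the model is driven by natural sounds recorded at different azimuths.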

  2. Cross-modal links among vision, audition, and touch in complex environments.

    PubMed

    Ferris, Thomas K; Sarter, Nadine B

    2008-02-01

    This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets, but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and for each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.

  3. Cortical Inhibition Reduces Information Redundancy at Presentation of Communication Sounds in the Primary Auditory Cortex

    PubMed Central

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris

    2013-01-01

    In all sensory modalities, intracortical inhibition shapes the functional properties of cortical neurons but also influences the responses to natural stimuli. Studies performed in various species have revealed that auditory cortex neurons respond to conspecific vocalizations with temporal spike patterns displaying a high trial-to-trial reliability, which might result from precise timing between excitation and inhibition. Studying the guinea pig auditory cortex, we show that partial blockage of GABAA receptors by gabazine (GBZ) application (10 μm, a concentration that promotes expansion of cortical receptive fields) increased the evoked firing rate and the spike-timing reliability during presentation of communication sounds (conspecific and heterospecific vocalizations), whereas GABAB receptor antagonists [10 μm saclofen; 10–50 μm CGP55845 (p-3-aminopropyl-p-diethoxymethyl phosphoric acid)] had nonsignificant effects. Computing mutual information (MI) from the responses to vocalizations using either the evoked firing rate or the temporal spike patterns revealed that GBZ application increased the MI derived from the activity of single cortical sites but did not change the MI derived from population activity. In addition, quantification of information redundancy showed that GBZ significantly increased redundancy at the population level. This result suggests that a potential role of intracortical inhibition is to reduce information redundancy during the processing of natural stimuli.
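
    The mutual information measure used above has a compact standard form; for discrete stimuli s (the vocalizations) and discretized responses r (firing-rate or temporal-pattern classes), MI in bits is

        MI(S;R) = \sum_{s,r} p(s,r) \, \log_2 \frac{p(s,r)}{p(s)\,p(r)},

    and a common redundancy measure for a population, consistent with the comparison reported here, is the summed single-site information minus the joint information, \sum_i MI(S;R_i) - MI(S;R_1,\dots,R_N), which grows as sites convey increasingly overlapping information. (These are the standard definitions; the paper's exact estimators are not specified in the abstract.)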

  4. Auditory verbal habilitation is associated with improved outcome for children with cochlear implant.

    PubMed

    Percy-Smith, Lone; Tønning, Tenna Lindbjerg; Josvassen, Jane Lignel; Mikkelsen, Jeanette Hølledig; Nissen, Lena; Dieleman, Eveline; Hallstrøm, Maria; Cayé-Thomasen, Per

    2018-01-01

    To study the impact of (re)habilitation strategy on speech-language outcomes for early cochlear-implanted children enrolled in different intervention programmes post implant. Data relate to a total of 130 children representing two pediatric cohorts consisting of 94 and 36 subjects, respectively. The two cohorts had different speech and language intervention following cochlear implantation, i.e. standard habilitation vs. auditory verbal (AV) intervention. Three tests of speech and language were applied, covering the language areas of receptive and productive vocabulary and language understanding. Children in AV intervention outperformed children in standard habilitation on all three tests of speech and language. When the effect of intervention was adjusted for other covariates, children in AV intervention still had higher odds of performing at age-equivalent speech and language levels. Compared to standard intervention, AV intervention is associated with improved outcome for children with CI. Based on this finding, we recommend that all children with hearing impairment (HI) be offered this intervention; it is therefore highly relevant for national boards of health and social affairs to recommend basing habilitation on principles from AV practice. It should be noted that a minority of children use spoken language with sign support. For this group it is, however, still important that educational services provide auditory skills training.

  5. Anesthetic state modulates excitability but not spectral tuning or neural discrimination in single auditory midbrain neurons

    PubMed Central

    Schumacher, Joseph W.; Schneider, David M.

    2011-01-01

    The majority of sensory physiology experiments have used anesthesia to facilitate the recording of neural activity. Current techniques allow researchers to study sensory function in the context of varying behavioral states. To reconcile results across multiple behavioral and anesthetic states, it is important to consider how and to what extent anesthesia plays a role in shaping neural response properties. The role of anesthesia has been the subject of much debate, but the extent to which sensory coding properties are altered by anesthesia has yet to be fully defined. In this study we asked how urethane, an anesthetic commonly used for avian and mammalian sensory physiology, affects the coding of complex communication vocalizations (songs) and simple artificial stimuli in the songbird auditory midbrain. We measured spontaneous and song-driven spike rates, spectrotemporal receptive fields, and neural discriminability from responses to songs in single auditory midbrain neurons. In the same neurons, we recorded responses to pure tone stimuli ranging in frequency and intensity. Finally, we assessed the effect of urethane on population-level representations of birdsong. Results showed that intrinsic neural excitability is significantly depressed by urethane but that spectral tuning, single neuron discriminability, and population representations of song do not differ significantly between unanesthetized and anesthetized animals. PMID:21543752

  6. Executive Function, Visual Attention and the Cocktail Party Problem in Musicians and Non-Musicians.

    PubMed

    Clayton, Kameron K; Swaminathan, Jayaganesh; Yazdanbakhsh, Arash; Zuk, Jennifer; Patel, Aniruddh D; Kidd, Gerald

    2016-01-01

    The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, "cocktail-party" like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the "cocktail party problem".

  7. Executive Function, Visual Attention and the Cocktail Party Problem in Musicians and Non-Musicians

    PubMed Central

    Clayton, Kameron K.; Swaminathan, Jayaganesh; Yazdanbakhsh, Arash; Zuk, Jennifer; Patel, Aniruddh D.; Kidd, Gerald

    2016-01-01

    The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, “cocktail-party” like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the “cocktail party problem”. PMID:27384330

  8. Effects of speech intelligibility level on concurrent visual task performance.

    PubMed

    Payne, D G; Peters, L J; Birkmire, D P; Bonto, M A; Anastasi, J S; Wenger, M J

    1994-09-01

    Four experiments were performed to determine if changes in the level of speech intelligibility in an auditory task have an impact on performance in concurrent visual tasks. The auditory task used in each experiment was a memory search task in which subjects memorized a set of words and then decided whether auditorily presented probe items were members of the memorized set. The visual tasks used were an unstable tracking task, a spatial decision-making task, a mathematical reasoning task, and a probability monitoring task. Results showed that performance on the unstable tracking and probability monitoring tasks was unaffected by the level of speech intelligibility on the auditory task, whereas accuracy in the spatial decision-making and mathematical processing tasks was significantly worse at low speech intelligibility levels. The findings are interpreted within the framework of multiple resource theory.

  9. Evoked potential correlates of selective attention with multi-channel auditory inputs

    NASA Technical Reports Server (NTRS)

    Schwent, V. L.; Hillyard, S. A.

    1975-01-01

    Ten subjects were presented with random, rapid sequences of four auditory tones which were separated in pitch and apparent spatial position. The N1 component of the auditory vertex evoked potential (EP) measured relative to a baseline was observed to increase with attention. It was concluded that the N1 enhancement reflects a finely tuned selective attention to one stimulus channel among several concurrent, competing channels. This EP enhancement probably increases with increased information load on the subject.

  10. Converging Modalities Ground Abstract Categories: The Case of Politics

    PubMed Central

    Farias, Ana Rita; Garrido, Margarida V.; Semin, Gün R.

    2013-01-01

    Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal. PMID:23593360

  11. Converging modalities ground abstract categories: the case of politics.

    PubMed

    Farias, Ana Rita; Garrido, Margarida V; Semin, Gün R

    2013-01-01

    Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal.

  12. Auditory rehabilitation after stroke: treatment of auditory processing disorders in stroke patients with personal frequency-modulated (FM) systems.

    PubMed

    Koohi, Nehzat; Vickers, Deborah; Chandrashekar, Hoskote; Tsang, Benjamin; Werring, David; Bamiou, Doris-Eva

    2017-03-01

    Auditory disability due to impaired auditory processing (AP) despite normal pure-tone thresholds is common after stroke, and it leads to isolation, reduced quality of life and physical decline. There are currently no proven remedial interventions for AP deficits in stroke patients. This is the first study to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP. Fifty stroke patients had baseline audiological assessments, AP tests and completed the (modified) Amsterdam Inventory for Auditory Disability and Hearing Handicap Inventory for Elderly questionnaires. Nine out of these 50 patients were diagnosed with disordered AP based on severe deficits in understanding speech in background noise but with normal pure-tone thresholds. These nine patients underwent spatial speech-in-noise testing in a sound-attenuating chamber (the "crescent of sound") with and without FM systems. The signal-to-noise ratio (SNR) for 50% correct speech recognition performance was measured with speech presented from 0° azimuth and competing babble from ±90° azimuth. Spatial release from masking (SRM) was defined as the difference between SNRs measured with co-located speech and babble and SNRs measured with spatially separated speech and babble. The SRM significantly improved when babble was spatially separated from target speech, while the patients had the FM systems in their ears compared to without the FM systems. Personal FM systems may substantially improve speech-in-noise deficits in stroke patients who are not eligible for conventional hearing aids. FMs are feasible in stroke patients and show promise to address impaired AP after stroke. Implications for Rehabilitation: This is the first study to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP. All cases significantly improved speech perception in noise with the FM systems, when noise was spatially separated from the speech signal by 90° compared with unaided listening. Personal FM systems are feasible in stroke patients, and may be of benefit in just under 20% of this population, who are not eligible for conventional hearing aids.

  13. Bilateral cochlear implants in children: Effects of auditory experience and deprivation on auditory perception

    PubMed Central

    Litovsky, Ruth Y.; Gordon, Karen

    2017-01-01

    Spatial hearing skills are essential for children as they grow, learn and play. They provide critical cues for determining the locations of sources in the environment, and enable segregation of important sources, such as speech, from background maskers or interferers. Spatial hearing depends on availability of monaural cues and binaural cues. The latter result from integration of inputs arriving at the two ears from sounds that vary in location. The binaural system has exquisite mechanisms for capturing differences between the ears in both time of arrival and intensity. The major cues that are thus referred to as being vital for binaural hearing are: interaural differences in time (ITDs) and interaural differences in levels (ILDs). In children with normal hearing (NH), spatial hearing abilities are fairly well developed by age 4–5 years. In contrast, children who are deaf and hear through cochlear implants (CIs) do not have an opportunity to experience normal, binaural acoustic hearing early in life. These children may function by having to utilize auditory cues that are degraded with regard to numerous stimulus features. In recent years there has been a notable increase in the number of children receiving bilateral CIs, and evidence suggests that while having two CIs helps them function better than when listening through a single CI, they generally perform worse than their NH peers. This paper reviews some of the recent work on bilaterally implanted children. The focus is on measures of spatial hearing, including sound localization, release from masking for speech understanding in noise and binaural sensitivity using research processors. Data from behavioral and electrophysiological studies are included, with a focus on the recent work of the authors and their collaborators. The effects of auditory plasticity and deprivation on the emergence of binaural and spatial hearing are discussed along with evidence for reorganized processing from both behavioral and electrophysiological studies. The consequences of both unilateral and bilateral auditory deprivation during development suggest that the relevant set of issues is highly complex with regard to successes and the limitations experienced by children receiving bilateral cochlear implants. PMID:26828740

  14. Dependence of auditory spatial updating on vestibular, proprioceptive, and efference copy signals

    PubMed Central

    Genzel, Daria; Firzlaff, Uwe; Wiegrebe, Lutz

    2016-01-01

    Humans localize sounds by comparing inputs across the two ears, resulting in a head-centered representation of sound-source position. When the head moves, information about head movement must be combined with the head-centered estimate to correctly update the world-centered sound-source position. Spatial updating has been extensively studied in the visual system, but less is known about how head movement signals interact with binaural information during auditory spatial updating. In the current experiments, listeners compared the world-centered azimuthal position of two sound sources presented before and after a head rotation that depended on condition. In the active condition, subjects rotated their head by ∼35° to the left or right, following a pretrained trajectory. In the passive condition, subjects were rotated along the same trajectory in a rotating chair. In the cancellation condition, subjects rotated their head as in the active condition, but the chair was counter-rotated on the basis of head-tracking data such that the head effectively remained fixed in space while the body rotated beneath it. Subjects updated most accurately in the passive condition but erred in the active and cancellation conditions. Performance is interpreted as reflecting the accuracy of perceived head rotation across conditions, which is modeled as a linear combination of proprioceptive/efference copy signals and vestibular signals. Resulting weights suggest that auditory updating is dominated by vestibular signals but with significant contributions from proprioception/efference copy. Overall, results shed light on the interplay of sensory and motor signals that determine the accuracy of auditory spatial updating. PMID:27169504

  15. Dependence of auditory spatial updating on vestibular, proprioceptive, and efference copy signals.

    PubMed

    Genzel, Daria; Firzlaff, Uwe; Wiegrebe, Lutz; MacNeilage, Paul R

    2016-08-01

    Humans localize sounds by comparing inputs across the two ears, resulting in a head-centered representation of sound-source position. When the head moves, information about head movement must be combined with the head-centered estimate to correctly update the world-centered sound-source position. Spatial updating has been extensively studied in the visual system, but less is known about how head movement signals interact with binaural information during auditory spatial updating. In the current experiments, listeners compared the world-centered azimuthal position of two sound sources presented before and after a head rotation that depended on condition. In the active condition, subjects rotated their head by ∼35° to the left or right, following a pretrained trajectory. In the passive condition, subjects were rotated along the same trajectory in a rotating chair. In the cancellation condition, subjects rotated their head as in the active condition, but the chair was counter-rotated on the basis of head-tracking data such that the head effectively remained fixed in space while the body rotated beneath it. Subjects updated most accurately in the passive condition but erred in the active and cancellation conditions. Performance is interpreted as reflecting the accuracy of perceived head rotation across conditions, which is modeled as a linear combination of proprioceptive/efference copy signals and vestibular signals. Resulting weights suggest that auditory updating is dominated by vestibular signals but with significant contributions from proprioception/efference copy. Overall, results shed light on the interplay of sensory and motor signals that determine the accuracy of auditory spatial updating. Copyright © 2016 the American Physiological Society.
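
    The linear-combination model mentioned in this abstract can be stated in one line. With generic symbols (not the authors' notation), the perceived head rotation is

        \hat{\theta} = w_{\mathrm{vest}} \, \theta_{\mathrm{vest}} + w_{\mathrm{prop}} \, \theta_{\mathrm{prop}},

    where the vestibular term carries the true rotation in the active and passive conditions, and the proprioceptive/efference-copy term carries it in the active and cancellation conditions. Fitting the pattern of updating errors across the three conditions yields the two weights, with the vestibular weight reported as dominant.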

  16. Sensory Substitution: The Spatial Updating of Auditory Scenes "Mimics" the Spatial Updating of Visual Scenes.

    PubMed

    Pasqualotto, Achille; Esenkaya, Tayfun

    2016-01-01

    Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or "soundscapes". Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices (SSD).

  17. Emergent Spatial Patterns of Excitatory and Inhibitory Synaptic Strengths Drive Somatotopic Representational Discontinuities and their Plasticity in a Computational Model of Primary Sensory Cortical Area 3b

    PubMed Central

    Grajski, Kamil A.

    2016-01-01

    Mechanisms underlying the emergence and plasticity of representational discontinuities in the mammalian primary somatosensory cortical representation of the hand are investigated in a computational model. The model consists of an input lattice organized as a three-digit hand forward-connected to a lattice of cortical columns each of which contains a paired excitatory and inhibitory cell. Excitatory and inhibitory synaptic plasticity of feedforward and lateral connection weights is implemented as a simple covariance rule and competitive normalization. Receptive field properties are computed independently for excitatory and inhibitory cells and compared within and across columns. Within digit representational zones intracolumnar excitatory and inhibitory receptive field extents are concentric, single-digit, small, and unimodal. Exclusively in representational boundary-adjacent zones, intracolumnar excitatory and inhibitory receptive field properties diverge: excitatory cell receptive fields are single-digit, small, and unimodal; and the paired inhibitory cell receptive fields are bimodal, double-digit, and large. In simulated syndactyly (webbed fingers), boundary-adjacent intracolumnar receptive field properties reorganize to within-representation type; divergent properties are reacquired following syndactyly release. This study generates testable hypotheses for assessment of cortical laminar-dependent receptive field properties and plasticity within and between cortical representational zones. For computational studies, present results suggest that concurrent excitatory and inhibitory plasticity may underlie novel emergent properties. PMID:27504086
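
    A minimal sketch of the plasticity scheme named above, a Hebbian covariance update followed by competitive normalization of each cell's afferent weights, written generically (the layer sizes, learning rate, and normalization target are illustrative assumptions, not the model's published parameters):

        import numpy as np

        def covariance_update(W, x, y, x_mean, y_mean, lr=0.01, w_total=1.0):
            """One plasticity step for a weight matrix W (inputs x -> cells y).

            Covariance rule: weights grow when pre- and postsynaptic activity
            deviate from their means in the same direction.
            Competitive normalization: each cell's afferent weights are rescaled
            to a fixed total, so strengthened inputs win at others' expense.
            """
            dW = lr * np.outer(y - y_mean, x - x_mean)   # covariance term
            W = np.clip(W + dW, 0.0, None)               # keep weights non-negative
            W *= w_total / (W.sum(axis=1, keepdims=True) + 1e-12)
            return W

        # Toy usage: 64 skin-surface inputs feeding 32 cortical cells.
        rng = np.random.default_rng(1)
        W = rng.random((32, 64))
        x = rng.random(64)                # input activity pattern
        y = W @ x                         # simple feedforward response
        W = covariance_update(W, x, y, x_mean=x.mean(), y_mean=y.mean())

    The normalization step is what makes the rule competitive: inputs that strengthen do so at the expense of the same cell's other afferents, one standard way representational borders can sharpen or move, as in the syndactyly simulations described above.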

  18. An analysis of neural receptive field plasticity by point process adaptive filtering

    PubMed Central

    Brown, Emery N.; Nguyen, David P.; Frank, Loren M.; Wilson, Matthew A.; Solo, Victor

    2001-01-01

    Neural receptive fields are plastic: with experience, neurons in many brain regions change their spiking responses to relevant stimuli. Analysis of receptive field plasticity from experimental measurements is crucial for understanding how neural systems adapt their representations of relevant biological information. Current analysis methods using histogram estimates of spike rate functions in nonoverlapping temporal windows do not track the evolution of receptive field plasticity on a fine time scale. Adaptive signal processing is an established engineering paradigm for estimating time-varying system parameters from experimental measurements. We present an adaptive filter algorithm for tracking neural receptive field plasticity based on point process models of spike train activity. We derive an instantaneous steepest descent algorithm by using as the criterion function the instantaneous log likelihood of a point process spike train model. We apply the point process adaptive filter algorithm in a study of spatial (place) receptive field properties of simulated and actual spike train data from rat CA1 hippocampal neurons. A stability analysis of the algorithm is sketched in the Appendix. The adaptive algorithm can update the place field parameter estimates on a millisecond time scale. It reliably tracked the migration, changes in scale, and changes in maximum firing rate characteristic of hippocampal place fields in a rat running on a linear track. Point process adaptive filtering offers an analytic method for studying the dynamics of neural receptive fields. PMID:11593043
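
    The core update can be written compactly. Discretizing time into bins of width \Delta, with n_k the spike count in bin k, \lambda(t_k \mid \theta) the conditional intensity of the point process, and \varepsilon a learning-rate parameter, an instantaneous steepest-descent step has the form (notation adapted; details may differ from the paper):

        \theta_k = \theta_{k-1} + \varepsilon \left. \frac{\partial \log \lambda(t_k \mid \theta)}{\partial \theta} \right|_{\theta_{k-1}} \left[ n_k - \lambda(t_k \mid \theta_{k-1}) \, \Delta \right],

    a gradient step on the instantaneous point-process log likelihood in which the innovation n_k - \lambda\Delta compares observed with expected spiking. Because an update is made in every bin, parameters such as a place field's center, scale, and peak rate can be tracked on a millisecond time scale.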

  19. The Encoding of Sound Source Elevation in the Human Auditory Cortex.

    PubMed

    Trapeau, Régis; Schönwiesner, Marc

    2018-03-28

    Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT: This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the tuning functions in low-level auditory cortex underlie the perceived elevation of a sound source. Copyright © 2018 the authors.

  20. Auditory Space Perception in Left- and Right-Handers

    ERIC Educational Resources Information Center

    Ocklenburg, Sebastian; Hirnstein, Marco; Hausmann, Markus; Lewald, Jorg

    2010-01-01

    Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via…

  1. The Effects of Auditory Information on 4-Month-Old Infants' Perception of Trajectory Continuity

    ERIC Educational Resources Information Center

    Bremner, J. Gavin; Slater, Alan M.; Johnson, Scott P.; Mason, Uschi C.; Spring, Jo

    2012-01-01

    Young infants perceive an object's trajectory as continuous across occlusion provided the temporal or spatial gap in perception is small. In 3 experiments involving 72 participants the authors investigated the effects of different forms of auditory information on 4-month-olds' perception of trajectory continuity. Provision of dynamic auditory…

  2. Describing the trajectory of language development in the presence of severe-to-profound hearing loss: a closer look at children with cochlear implants versus hearing aids.

    PubMed

    Yoshinaga-Itano, Christine; Baca, Rosalinda L; Sedey, Allison L

    2010-10-01

    The objective of this investigation was to describe the language growth of children with severe or profound hearing loss with cochlear implants versus children with the same degree of hearing loss using hearing aids. A prospective longitudinal observation and analysis. University of Colorado Department of Speech Language and Hearing Sciences. There were 87 children with severe-to-profound hearing loss from 48 to 87 months of age. All children received early intervention services through the Colorado Home Intervention Program. Most children received intervention services from a certified auditory-verbal therapist or an auditory-oral therapist and weekly sign language instruction from an instructor who was deaf or hard of hearing and native or fluent in American Sign Language. The Test of Auditory Comprehension of Language, 3rd Edition, and the Expressive One Word Picture Vocabulary Test, 3rd Edition, were the assessment tools for children 4 to 7 years of age. The expressive language subscale of the Minnesota Child Development was used in the infant/toddler period (birth to 36 mo). Average language estimates at 84 months of age were nearly identical to the normative sample for receptive language and 7 months delayed for expressive vocabulary. Children demonstrated a mean rate of growth from 4 years through 7 years on these 2 assessments that was equivalent to their normal-hearing peers. As a group, children with hearing aids deviated more from the age-equivalent trajectory on the Test of Auditory Comprehension of Language, 3rd Edition, and the Expressive One Word Picture Vocabulary Test, 3rd Edition, than children with cochlear implants. When a subset of children was divided into performance categories, we found that children with cochlear implants were more likely to be "gap closers" and less likely to be "gap openers," whereas the reverse was true for the children with hearing aids on both measures. Children who are educated through oral-aural instruction combined with sign language instruction can achieve age-appropriate language levels on expressive vocabulary and receptive syntax from ages 4 through 7 years. However, maintaining a constant rate of development from birth through 84 months of age proved easier than accelerating; this pattern characterized approximately 80% of our sample. However, acceleration of language development is possible in some children and could result from cochlear implantation.

  3. Intentional switching in auditory selective attention: Exploring age-related effects in a spatial setup requiring speech perception.

    PubMed

    Oberem, Josefa; Koch, Iring; Fels, Janina

    2017-06-01

    Using a binaural-listening paradigm, age-related differences in the ability to intentionally switch auditory selective attention between two speakers, defined by their spatial location, were examined. To this end, 40 normal-hearing participants (20 young, mean age 24.8 years; 20 older, mean age 67.8 years) were tested. Spatial reproduction of the stimuli was provided by headphones using head-related transfer functions of an artificial head. Spoken number words from two speakers were presented simultaneously to participants from two out of eight locations on the horizontal plane. Guided by a visual cue indicating the spatial location of the target speaker, participants were asked to categorize the target's number word as smaller vs. greater than five while ignoring the distractor's speech. Results showed significantly higher reaction times and error rates for older participants. The relative influence of a spatial switch of the target speaker (switch or repetition of the speaker's direction in space) was identical across age groups. Congruency effects (stimuli spoken by target and distractor may evoke the same answer or different answers) were increased for older participants and depended on the target's position. Results suggest that the ability to intentionally switch auditory attention to a new cued location was unimpaired, whereas it was generally harder for older participants to suppress processing of the distractor's speech. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Exploring the temporal dynamics of sustained and transient spatial attention using steady-state visual evoked potentials.

    PubMed

    Zhang, Dan; Hong, Bo; Gao, Shangkai; Röder, Brigitte

    2017-05-01

    While the behavioral dynamics as well as the functional networks of sustained and transient attention have been studied extensively, their underlying neural mechanisms have most often been investigated in separate experiments. In the present study, participants were instructed to perform an audio-visual spatial attention task. They were asked to attend to either the left or the right hemifield and to respond to deviant transient stimuli, either auditory or visual. Steady-state visual evoked potentials (SSVEPs), elicited by two task-irrelevant pattern-reversing checkerboards flickering at 10 and 15 Hz in the left and the right hemifields, respectively, were used to continuously monitor the locus of spatial attention. The amplitude and phase of the SSVEPs were extracted for single trials and analyzed separately. Sustained attention to one hemifield (spatial attention) as well as to the auditory modality (intermodal attention) increased the inter-trial phase locking of the SSVEP responses, whereas briefly presented visual and auditory stimuli decreased the single-trial SSVEP amplitude between 200 and 500 ms post-stimulus. This transient change in single-trial amplitude was restricted to the SSVEPs elicited by the reversing checkerboard in the spatially attended hemifield and thus might reflect a transient re-orienting of attention towards the brief stimuli. The present results therefore demonstrate independent but interacting neural mechanisms of sustained and transient attentional orienting.
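
    A sketch of the two single-trial SSVEP measures used above, amplitude at the tagging frequency and inter-trial phase locking, computed from one complex Fourier coefficient per trial (the sampling rate, frequencies, and array shapes are illustrative):

        import numpy as np

        def ssvep_measures(trials, fs, f_tag):
            """trials: (n_trials, n_samples) EEG epochs from one channel.

            Returns per-trial amplitude at f_tag and the inter-trial phase
            coherence (ITC): the magnitude of the mean unit phase vector
            across trials, 1 for perfectly locked phases, ~0 for random ones.
            """
            n = trials.shape[1]
            t = np.arange(n) / fs
            # Complex Fourier coefficient at exactly the tagging frequency:
            coef = trials @ np.exp(-2j * np.pi * f_tag * t) / n
            amplitude = 2 * np.abs(coef)                 # per-trial amplitude
            itc = np.abs(np.mean(coef / np.abs(coef)))   # phase locking across trials
            return amplitude, itc

        # e.g., a 10 Hz tag (left-hemifield checkerboard) in 2 s epochs at 250 Hz:
        rng = np.random.default_rng(2)
        trials = np.sin(2 * np.pi * 10 * np.arange(500) / 250) \
                 + 0.5 * rng.standard_normal((40, 500))
        amp, itc = ssvep_measures(trials, fs=250, f_tag=10.0)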

  5. Primitive Auditory Memory Is Correlated with Spatial Unmasking That Is Based on Direct-Reflection Integration

    PubMed Central

    Li, Huahui; Kong, Lingzhi; Wu, Xihong; Li, Liang

    2013-01-01

    In reverberant rooms with multiple-people talking, spatial separation between speech sources improves recognition of attended speech, even though both the head-shadowing and interaural-interaction unmasking cues are limited by numerous reflections. It is the perceptual integration between the direct wave and its reflections that bridges the direct-reflection temporal gaps and results in the spatial unmasking under reverberant conditions. This study further investigated (1) the temporal dynamic of the direct-reflection-integration-based spatial unmasking as a function of the reflection delay, and (2) whether this temporal dynamic is correlated with the listeners’ auditory ability to temporally retain raw acoustic signals (i.e., the fast decaying primitive auditory memory, PAM). The results showed that recognition of the target speech against the speech-masker background is a descending exponential function of the delay of the simulated target reflection. In addition, the temporal extent of PAM is frequency dependent and markedly longer than that for perceptual fusion. More importantly, the temporal dynamic of the speech-recognition function is significantly correlated with the temporal extent of the PAM of low-frequency raw signals. Thus, we propose that a chain process, which links the earlier-stage PAM with the later-stage correlation computation, perceptual integration, and attention facilitation, plays a role in spatially unmasking target speech under reverberant conditions. PMID:23658664
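
    The descending exponential dependence reported above can be captured by a generic parameterization (not the authors' fitted equation):

        R(\tau) = R_{\infty} + (R_0 - R_{\infty}) \, e^{-\tau / \tau_c},

    where R(\tau) is target-speech recognition at reflection delay \tau and the time constant \tau_c summarizes how quickly the benefit of direct-reflection integration decays. The correlation reported here is between this temporal dynamic and the temporal extent of low-frequency primitive auditory memory.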

  6. Perceptual and academic patterns of learning-disabled/gifted students.

    PubMed

    Waldron, K A; Saphire, D G

    1992-04-01

    This research explored the ways gifted children with learning disabilities perceive and recall auditory and visual input and apply this information to reading, mathematics, and spelling. Twenty-four learning-disabled/gifted children and a matched control group of normally achieving gifted students were tested for oral reading, word recognition and analysis, listening comprehension, and spelling. In mathematics, they were tested for numeration, mental and written computation, word problems, and numerical reasoning. To explore perception and memory skills, students were administered formal tests of visual and auditory memory as well as auditory discrimination of sounds. Their responses to reading and to mathematical computations were further examined for evidence of problems in visual discrimination, visual sequencing, and visual-spatial areas. Analyses indicated that these learning-disabled/gifted students were significantly weaker than controls in decoding skills, in spelling, and in most areas of mathematics. They were also significantly weaker in auditory discrimination and memory, and in visual discrimination, sequencing, and spatial abilities. The conclusion is that these underlying perceptual and memory deficits may be related to the students' academic problems.

  7. Short-Term Memory for Space and Time Flexibly Recruit Complementary Sensory-Biased Frontal Lobe Attention Networks.

    PubMed

    Michalka, Samantha W; Kong, Lingqiang; Rosen, Maya L; Shinn-Cunningham, Barbara G; Somers, David C

    2015-08-19

    The frontal lobes control wide-ranging cognitive functions; however, functional subdivisions of human frontal cortex are only coarsely mapped. Here, functional magnetic resonance imaging reveals two distinct visual-biased attention regions in lateral frontal cortex, the superior precentral sulcus (sPCS) and the inferior precentral sulcus (iPCS), anatomically interdigitated with two auditory-biased attention regions, the transverse gyrus intersecting the precentral sulcus (tgPCS) and the caudal inferior frontal sulcus (cIFS). Intrinsic functional connectivity analysis demonstrates that sPCS and iPCS fall within a broad visual-attention network, while tgPCS and cIFS fall within a broad auditory-attention network. Interestingly, we observe that spatial and temporal short-term memory (STM), respectively, recruit the visual and auditory attention networks in the frontal lobe, independent of sensory modality. These findings not only demonstrate that both sensory modality and information domain influence frontal lobe functional organization, but also show that spatial processing co-localizes with visual processing and temporal processing co-localizes with auditory processing in lateral frontal cortex. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Intracranial mapping of auditory perception: event-related responses and electrocortical stimulation.

    PubMed

    Sinai, A; Crone, N E; Wied, H M; Franaszczuk, P J; Miglioretti, D; Boatman-Reich, D

    2009-01-01

    We compared intracranial recordings of auditory event-related responses with electrocortical stimulation mapping (ESM) to determine their functional relationship. Intracranial recordings and ESM were performed, using speech and tones, in adult epilepsy patients with subdural electrodes implanted over lateral left cortex. Evoked N1 responses and induced spectral power changes were obtained by trial averaging and time-frequency analysis. ESM impaired perception and comprehension of speech, not tones, at electrode sites in the posterior temporal lobe. There was high spatial concordance between ESM sites critical for speech perception and the largest spectral power (100% concordance) and N1 (83%) responses to speech. N1 responses showed good sensitivity (0.75) and specificity (0.82), but poor positive predictive value (0.32). Conversely, increased high-frequency power (>60 Hz) showed high specificity (0.98), but poorer sensitivity (0.67) and positive predictive value (0.67). Stimulus-related differences were observed in the spatial-temporal patterns of event-related responses. Intracranial auditory event-related responses to speech were associated with cortical sites critical for auditory perception and comprehension of speech. These results suggest that the distribution and magnitude of intracranial auditory event-related responses to speech reflect the functional significance of the underlying cortical regions and may be useful for pre-surgical functional mapping.
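
    The sensitivity, specificity, and positive predictive values quoted above follow from a 2 × 2 comparison between ESM-critical sites and response-positive electrodes. The sketch below shows the arithmetic; the confusion counts are hypothetical, chosen merely to roughly reproduce the reported N1 figures.

        def diagnostics(tp, fp, tn, fn):
            sensitivity = tp / (tp + fn)   # responsive fraction of truly critical sites
            specificity = tn / (tn + fp)   # silent fraction of non-critical sites
            ppv = tp / (tp + fp)           # fraction of responsive sites that are critical
            return sensitivity, specificity, ppv

        # e.g., 6 hits, 2 misses, 13 false positives, 59 correct rejections
        print(diagnostics(tp=6, fp=13, tn=59, fn=2))   # ~ (0.75, 0.82, 0.32)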

  9. Intracranial mapping of auditory perception: Event-related responses and electrocortical stimulation

    PubMed Central

    Sinai, A.; Crone, N.E.; Wied, H.M.; Franaszczuk, P.J.; Miglioretti, D.; Boatman-Reich, D.

    2010-01-01

    Objective: We compared intracranial recordings of auditory event-related responses with electrocortical stimulation mapping (ESM) to determine their functional relationship. Methods: Intracranial recordings and ESM were performed, using speech and tones, in adult epilepsy patients with subdural electrodes implanted over lateral left cortex. Evoked N1 responses and induced spectral power changes were obtained by trial averaging and time-frequency analysis. Results: ESM impaired perception and comprehension of speech, not tones, at electrode sites in the posterior temporal lobe. There was high spatial concordance between ESM sites critical for speech perception and the largest spectral power (100% concordance) and N1 (83%) responses to speech. N1 responses showed good sensitivity (0.75) and specificity (0.82), but poor positive predictive value (0.32). Conversely, increased high-frequency power (>60 Hz) showed high specificity (0.98), but poorer sensitivity (0.67) and positive predictive value (0.67). Stimulus-related differences were observed in the spatial-temporal patterns of event-related responses. Conclusions: Intracranial auditory event-related responses to speech were associated with cortical sites critical for auditory perception and comprehension of speech. Significance: These results suggest that the distribution and magnitude of intracranial auditory event-related responses to speech reflect the functional significance of the underlying cortical regions and may be useful for pre-surgical functional mapping. PMID:19070540

  10. Cross-modal detection using various temporal and spatial configurations.

    PubMed

    Schirillo, James A

    2011-01-01

    To better understand temporal and spatial cross-modal interactions, two signal detection experiments were conducted in which an auditory target was sometimes accompanied by an irrelevant flash of light. In the first, a psychometric function for detecting a unisensory auditory target in varying signal-to-noise ratios (SNRs) was derived. Then auditory target detection was measured while an irrelevant light was presented with light/sound stimulus onset asynchronies (SOAs) between 0 and ±700 ms. When the light preceded the sound by 100 ms or was coincident, target detection (d') improved for low SNR conditions. In contrast, for larger SOAs (350 and 700 ms), the behavioral gain resulted from a change in both d' and response criterion (β). However, when the light followed the sound, performance changed little. In the second experiment, observers detected multimodal target sounds at eccentricities of ±8° and ±24°. Sensitivity benefits occurred at both locations, with a larger change at the more peripheral location. Thus, both temporal and spatial factors affect signal detection measures, effectively parsing sensory and decision-making processes.
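
    Both measures reported above, sensitivity d' and response criterion β, are computed from the hit and false-alarm rates of each condition. A minimal sketch, with made-up rates:

        from scipy.stats import norm

        def sdt_measures(hit_rate, fa_rate):
            """d' is the separation of signal and noise distributions in z units;
            beta is the likelihood-ratio criterion at the observer's cutoff."""
            z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
            d_prime = z_hit - z_fa
            beta = norm.pdf(z_hit) / norm.pdf(z_fa)
            return d_prime, beta

        print(sdt_measures(hit_rate=0.80, fa_rate=0.30))  # hypothetical rates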

  11. Comparative physiology of sound localization in four species of owls.

    PubMed

    Volman, S F; Konishi, M

    1990-01-01

    Bilateral ear asymmetry is found in some, but not all, species of owls. We investigated the neural basis of sound localization in symmetrical and asymmetrical species, to deduce how ear asymmetry might have evolved from the ancestral condition, by comparing the response properties of neurons in the external nucleus of the inferior colliculus (ICx) of the symmetrical burrowing owl and asymmetrical long-eared owl with previous findings in the symmetrical great horned owl and asymmetrical barn owl. In the ICx of all of these owls, the neurons had spatially restricted receptive fields, and auditory space was topographically mapped. In the symmetrical owls, ICx units were not restricted in elevation, and only azimuth was mapped in ICx. In the barn owl, the space map is two-dimensional, with elevation forming the second dimension. Receptive fields in the long-eared owl were somewhat restricted in elevation, but their tuning was not sharp enough to determine if elevation is mapped. In every species, the primary cue for azimuth was interaural time difference, although ICx units were also tuned for interaural intensity difference (IID). In the barn owl, the IIDs of sounds with frequencies between about 5 and 8 kHz vary systematically with elevation, and the IID selectivity of ICx neurons primarily encodes elevation. In the symmetrical owls, whose ICx neurons do not respond to frequencies above about 5 kHz, IID appears to be a supplementary cue for azimuth. We hypothesize that ear asymmetry can be exploited by owls that have evolved the higher-frequency hearing necessary to generate elevation cues. Thus, the IID selectivity of ICx neurons in symmetrical owls may preadapt them for asymmetry; the neural circuitry that underlies IID selectivity is already present in symmetrical owls, but because IID is not absolutely required to encode azimuth it can come to encode elevation in asymmetrical owls.

  12. Double dissociation of 'what' and 'where' processing in auditory cortex.

    PubMed

    Lomber, Stephen G; Malhotra, Shveta

    2008-05-01

    Studies of cortical connections or neuronal function in different cerebral areas support the hypothesis that parallel cortical processing streams, similar to those identified in visual cortex, may exist in the auditory system. However, this model has not yet been behaviorally tested. We used reversible cooling deactivation to investigate whether the individual regions in cat nonprimary auditory cortex that are responsible for processing the pattern of an acoustic stimulus or localizing a sound in space could be doubly dissociated in the same animal. We found that bilateral deactivation of the posterior auditory field resulted in deficits in a sound-localization task, whereas bilateral deactivation of the anterior auditory field resulted in deficits in a pattern-discrimination task, but not vice versa. These findings support a model of cortical organization that proposes that identifying an acoustic stimulus ('what') and its spatial location ('where') are processed in separate streams in auditory cortex.

  13. An association between auditory-visual synchrony processing and reading comprehension: Behavioral and electrophysiological evidence

    PubMed Central

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2016-01-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension. PMID:28129060

  14. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    PubMed

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.

  15. Spatial structure of cone inputs to receptive fields in primate lateral geniculate nucleus

    NASA Astrophysics Data System (ADS)

    Reid, R. Clay; Shapley, Robert M.

    1992-04-01

    Human colour vision depends on three classes of cone photoreceptors, those sensitive to short (S), medium (M) or long (L) wavelengths, and on how signals from these cones are combined by neurons in the retina and brain. Macaque monkey colour vision is similar to human, and the receptive fields of macaque visual neurons have been used as an animal model of human colour processing1. P retinal ganglion cells and parvocellular neurons are colour-selective neurons in macaque retina and lateral geniculate nucleus. Interactions between cone signals feeding into these neurons are still unclear. On the basis of experimental results with chromatic adaptation, excitatory and inhibitory inputs from L and M cones onto P cells (and parvocellular neurons) were thought to be quite specific2,3 (Fig. 1a). But these experiments with spatially diffuse adaptation did not rule out the 'mixed-surround' hypothesis: that there might be one cone-specific mechanism, the receptive field centre, and a surround mechanism connected to all cone types indiscriminately (Fig. 1e). Recent work has tended to support the mixed-surround hypothesis4-8. We report here the development of new stimuli to measure spatial maps of the linear L-, M- and S-cone inputs to test the hypothesis definitively. Our measurements contradict the mixed-surround hypothesis and imply cone specificity in both centre and surround.

  16. Receptive-field subfields of V2 neurons in macaque monkeys are adult-like near birth.

    PubMed

    Zhang, Bin; Tao, Xiaofeng; Shen, Guofu; Smith, Earl L; Ohzawa, Izumi; Chino, Yuzo M

    2013-02-06

    Infant primates can discriminate texture-defined form despite their relatively low visual acuity. The neuronal mechanisms underlying this remarkable visual capacity of infants have not been studied in nonhuman primates. Since many V2 neurons in adult monkeys can extract the local features in complex stimuli that are required for form vision, we used two-dimensional dynamic noise stimuli and local spectral reverse correlation to measure whether the spatial map of receptive-field subfields in individual V2 neurons is sufficiently mature near birth to capture local features. As in adults, most V2 neurons in 4-week-old monkeys showed a relatively high degree of homogeneity in the spatial matrix of facilitatory subfields. However, ∼25% of V2 neurons had subfield maps in which neighboring facilitatory subfields differed substantially in their preferred orientations and spatial frequencies. Over 80% of V2 neurons in both infants and adults had "tuned" suppressive profiles in their subfield maps that could alter the tuning properties of facilitatory profiles. The differences in the preferred orientations between facilitatory and suppressive profiles were relatively large but extended over a broad range. Response immaturities in infants were mild; the overall strength of facilitatory subfield responses was lower than that in adults, and the optimal correlation delay ("latency") was longer in 4-week-old infants. These results suggest that as early as 4 weeks of age, the spatial receptive-field structure of V2 neurons is as complex as in adults and the ability of V2 neurons to compare local features of neighboring stimulus elements is nearly adult-like.
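
    The subfield mapping above rests on reverse correlation between the noise stimulus and the neuron's spikes. The sketch below shows only the generic spike-triggered-average step on which such analyses build; the study's local spectral variant additionally Fourier-analyzes local stimulus patches at each delay. Shapes and rates here are assumptions.

        import numpy as np

        def spike_triggered_average(frames, spike_counts, delay=2):
            """frames: (n_frames, h, w) dynamic noise movie; spike_counts: (n_frames,).
            Averages the frame `delay` steps before each spike."""
            sta = np.zeros(frames.shape[1:])
            total = 0
            for t in range(delay, len(frames)):
                if spike_counts[t] > 0:
                    sta += spike_counts[t] * frames[t - delay]
                    total += spike_counts[t]
            return sta / max(total, 1)

        frames = np.random.randn(10000, 16, 16)   # hypothetical noise stimulus
        spikes = np.random.poisson(0.1, 10000)    # hypothetical spike counts
        sta = spike_triggered_average(frames, spikes)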

  17. Capturing contextual effects in spectro-temporal receptive fields.

    PubMed

    Westö, Johan; May, Patrick J C

    2016-09-01

    Spectro-temporal receptive fields (STRFs) are thought to provide descriptive images of the computations performed by neurons along the auditory pathway. However, their validity can be questioned because they rely on a set of assumptions that are probably not fulfilled by real neurons exhibiting contextual effects, that is, nonlinear interactions in the time or frequency dimension that cannot be described with a linear filter. We used a novel approach to investigate how a variety of contextual effects, due to facilitating nonlinear interactions and synaptic depression, affect different STRF models, and whether these effects can be captured with a context field (CF). Contextual effects were incorporated in simulated networks of spiking neurons, allowing one to define the true STRFs of the neurons. This, in turn, made it possible to evaluate the performance of each STRF model by comparing the estimations with the true STRFs. We found that currently used STRF models are particularly poor at estimating inhibitory regions. Specifically, contextual effects make estimated STRFs dependent on stimulus density in a contrasting fashion: inhibitory regions are underestimated at lower densities while artificial inhibitory regions emerge at higher densities. The CF was found to provide a solution to this dilemma, but only when used together with a generalized linear model. Our results therefore highlight the limitations of the traditional STRF approach and provide useful recipes for how different STRF models and stimuli can be used to arrive at reliable quantifications of neural computations in the presence of contextual effects. The results therefore push the purpose of STRF analysis from simply finding an optimal stimulus toward describing context-dependent computations of neurons along the auditory pathway. Copyright © 2016 Elsevier B.V. All rights reserved.
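
    For orientation, the classical linear STRF that the study evaluates can be estimated by regularized regression of firing rate on the time-lagged spectrogram. The sketch below is that baseline model only, a ridge-regression estimate with assumed shapes; it does not implement the paper's context field or generalized linear model.

        import numpy as np

        def lag_matrix(spec, n_lags):
            """spec: (n_times, n_freqs) spectrogram -> (n_times, n_lags * n_freqs)."""
            n_t, n_f = spec.shape
            X = np.zeros((n_t, n_lags * n_f))
            for lag in range(n_lags):
                X[lag:, lag * n_f:(lag + 1) * n_f] = spec[:n_t - lag]
            return X

        def estimate_strf(spec, rate, n_lags=20, lam=1.0):
            X = lag_matrix(spec, n_lags)
            w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ rate)
            return w.reshape(n_lags, spec.shape[1])   # STRF as a (lag, frequency) image

        spec = np.random.rand(5000, 32)                     # hypothetical stimulus
        rate = np.random.poisson(2.0, 5000).astype(float)   # hypothetical response
        strf = estimate_strf(spec, rate)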

  18. Time domain multiplexed spatial division multiplexing receiver.

    PubMed

    van Uden, Roy G H; Okonkwo, Chigo M; Chen, Haoshuo; de Waardt, Hugo; Koonen, Antonius M J

    2014-05-19

    A novel time-domain multiplexed (TDM) spatial division multiplexing (SDM) receiver, which allows more than one dual-polarization mode to be received with a single coherent receiver and corresponding 4-port oscilloscope, is experimentally demonstrated. Received by two coherent receivers and respective 4-port oscilloscopes, a 3-mode transmission of 28 GBaud QPSK, 8, 16, and 32QAM over 41.7 km of few-mode fiber demonstrates the performance of the TDM-SDM receiver with respect to back-to-back operation. In addition, using carrier phase estimation employing one digital phase-locked loop per output, the frequency offset between the transmitter laser and the local oscillator is shown to perform similarly to previous work, which employed 3 coherent receivers and 4-port oscilloscopes dedicated to the reception of each of the three modes.

  19. Sonic morphology: Aesthetic dimensional auditory spatial awareness

    NASA Astrophysics Data System (ADS)

    Whitehouse, Martha M.

    The sound and ceramic sculpture installation, "Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrète, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.

  20. Differential occipital responses in early- and late-blind individuals during a sound-source discrimination task.

    PubMed

    Voss, Patrice; Gougoux, Frederic; Zatorre, Robert J; Lassonde, Maryse; Lepore, Franco

    2008-04-01

    Blind individuals do not necessarily receive more auditory stimulation than sighted individuals. However, to interact effectively with their environment, they have to rely on non-visual cues (in particular auditory) to a greater extent. Often benefiting from cerebral reorganization, they not only learn to rely more on such cues but also may process them better and, as a result, demonstrate exceptional abilities in auditory spatial tasks. Here we examine the effects of blindness on brain activity, using positron emission tomography (PET), during a sound-source discrimination task (SSDT) in both early- and late-onset blind individuals. This should not only provide an answer to the question of whether the blind manifest changes in brain activity but also allow a direct comparison of the two subgroups performing an auditory spatial task. The task was presented under two listening conditions: one binaural and one monaural. The binaural task did not show any significant behavioural differences between groups, but it demonstrated striate and extrastriate activation in the early-blind groups. A subgroup of early-blind individuals, on the other hand, performed significantly better than all the other groups during the monaural task, and these enhanced skills were correlated with elevated activity within the left dorsal extrastriate cortex. Surprisingly, activation of the right ventral visual pathway, which was significantly activated in the late-blind individuals during the monaural task, was negatively correlated with performance. This suggests the possibility that not all cross-modal plasticity is beneficial. Overall, our results not only support previous findings showing that occipital cortex of early-blind individuals is functionally engaged in spatial auditory processing but also shed light on the impact the age of onset of blindness can have on the ensuing cross-modal plasticity.

  1. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves closely match the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373
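
    The efficient-coding principle invoked above can be illustrated with a generic single-layer sparse-coding loop: infer sparse coefficients for each input (here by ISTA), then nudge the dictionary to reduce the reconstruction residual. This is only a real-valued toy version of the principle; the study's model is hierarchical and complex-valued, separating phase and amplitude.

        import numpy as np

        rng = np.random.default_rng(0)
        D = rng.standard_normal((64, 128))      # dictionary: 64-d inputs, 128 atoms
        D /= np.linalg.norm(D, axis=0)

        def ista(x, D, lam=0.1, n_iter=100):
            """Sparse coefficients a minimizing 0.5*||x - D a||^2 + lam * ||a||_1."""
            L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
            a = np.zeros(D.shape[1])
            for _ in range(n_iter):
                a = a + D.T @ (x - D @ a) / L                          # gradient step
                a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
            return a

        for _ in range(200):                    # dictionary-learning loop
            x = rng.standard_normal(64)         # stand-in for a binaural sound snippet
            a = ista(x, D)
            D += 0.01 * np.outer(x - D @ a, a)  # move active atoms toward the residual
            D /= np.linalg.norm(D, axis=0)      # keep atoms unit-norm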

  2. The effect of spatial auditory landmarks on ambulation.

    PubMed

    Karim, Adham M; Rumalla, Kavelin; King, Laurie A; Hullar, Timothy E

    2018-02-01

    The maintenance of balance and posture is a result of the collaborative efforts of vestibular, proprioceptive, and visual sensory inputs, but a fourth neural input, audition, may also improve balance. Here, we tested the hypothesis that auditory inputs function as environmental spatial landmarks whose effectiveness depends on sound localization ability during ambulation. Eight blindfolded normal young subjects performed the Fukuda-Unterberger test in three auditory conditions: silence, white noise played through headphones (head-referenced condition), and white noise played through a loudspeaker placed directly in front of the subject, 135 cm from the ear at ear height (earth-referenced condition). For the earth-referenced condition, an additional experiment tested the effect of moving the speaker to azimuthal positions of 45, 90, 135, and 180°. Subjects performed significantly better in the earth-referenced condition than in the head-referenced or silent conditions. Performance progressively decreased over the range from 0° to 135°, but all subjects then improved slightly at 180° compared to 135°. These results suggest that the presence of sound dramatically improves the ability to ambulate when vision is limited, but that sound sources must be located in the external environment in order to improve balance. This supports the hypothesis that they act by providing spatial landmarks against which head and body movement and orientation may be compared and corrected. Balance improvement in the azimuthal plane mirrors sensitivity to sound movement at similar positions, indicating that similar auditory mechanisms may underlie both processes. These results may help optimize the use of auditory cues to improve balance in particular patient populations. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Auditory Brainstem Response to Complex Sounds Predicts Self-Reported Speech-in-Noise Performance

    ERIC Educational Resources Information Center

    Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina

    2013-01-01

    Purpose: To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette,…

  4. Audiovisual Perception of Noise Vocoded Speech in Dyslexic and Non-Dyslexic Adults: The Role of Low-Frequency Visual Modulations

    ERIC Educational Resources Information Center

    Megnin-Viggars, Odette; Goswami, Usha

    2013-01-01

    Visual speech inputs can enhance auditory speech information, particularly in noisy or degraded conditions. The natural statistics of audiovisual speech highlight the temporal correspondence between visual and auditory prosody, with lip, jaw, cheek and head movements conveying information about the speech envelope. Low-frequency spatial and…

  5. Sexual Hearing: The influence of sex hormones on acoustic communication in frogs

    PubMed Central

    Arch, Victoria S.; Narins, Peter M.

    2009-01-01

    The majority of anuran amphibians (frogs and toads) use acoustic communication to mediate sexual behavior and reproduction. Generally, females find and select their mates using acoustic cues provided by males in the form of conspicuous advertisement calls. In these species, vocal signal production and reception are intimately tied to successful reproduction. Research with anurans has demonstrated that acoustic communication is modulated by reproductive hormones, including gonadal steroids and peptide neuromodulators. Most of these studies have focused on the ways in which hormonal systems influence vocal signal production; however, here we will concentrate on a growing body of literature that examines hormonal modulation of call reception. This literature suggests that reproductive hormones contribute to the coordination of reproductive behaviors between signaler and receiver by modulating sensitivity and spectral filtering of the anuran auditory system. It has become evident that the hormonal systems that influence reproductive behaviors are highly conserved among vertebrate taxa, thus studying the endocrine and neuromodulatory bases of acoustic communication in frogs and toads can lead to insights of broader applicability to hormonal modulation of vertebrate sensory physiology and behavior. PMID:19272318

  6. Prosody perception and musical pitch discrimination in adults using cochlear implants.

    PubMed

    Kalathottukaren, Rose Thomas; Purdy, Suzanne C; Ballard, Elaine

    2015-07-01

    This study investigated prosodic perception and musical pitch discrimination in adults using cochlear implants (CI), and examined the relationship between prosody perception scores and non-linguistic auditory measures, demographic variables, and speech recognition scores. Participants were given four subtests of the PEPS-C (profiling elements of prosody in speech-communication), the adult paralanguage subtest of the DANVA 2 (diagnostic analysis of non verbal accuracy 2), and the contour and interval subtests of the MBEA (Montreal battery of evaluation of amusia). Twelve CI users aged 25;5 to 78;0 years participated. CI participants performed significantly below normative values for New Zealand adults on the PEPS-C turn-end, affect, and contrastive stress reception subtests, but were not different from the norm on the chunking reception subtest. Performance on the DANVA 2 adult paralanguage subtest was lower than the normative mean reported by Saindon (2010). Most of the CI participants performed at chance level on both MBEA subtests. CI users have difficulty perceiving prosodic information accurately. Difficulty in understanding different aspects of prosody and music may be associated with reduced pitch perception ability.

  7. Link between orientation and retinotopic maps in primary visual cortex

    PubMed Central

    Paik, Se-Bum; Ringach, Dario L.

    2012-01-01

    Maps representing the preference of neurons for the location and orientation of a stimulus on the visual field are a hallmark of primary visual cortex. It is not yet known how these maps develop and what function they play in visual processing. One hypothesis postulates that orientation maps are initially seeded by the spatial interference of ON- and OFF-center retinal receptive field mosaics. Here we show that such a mechanism predicts a link between the layout of orientation preferences around singularities of different signs and the cardinal axes of the retinotopic map. Moreover, we confirm the predicted relationship holds in tree shrew primary visual cortex. These findings provide additional support for the notion that spatially structured input from the retina may provide a blueprint for the early development of cortical maps and receptive fields. More broadly, it raises the possibility that spatially structured input from the periphery may shape the organization of primary sensory cortex of other modalities as well. PMID:22509015

  8. Spatial Release From Masking in Simulated Cochlear Implant Users With and Without Access to Low-Frequency Acoustic Hearing

    PubMed Central

    Dietz, Mathias; Hohmann, Volker; Jürgens, Tim

    2015-01-01

    For normal-hearing listeners, speech intelligibility improves if speech and noise are spatially separated. While this spatial release from masking has already been quantified in normal-hearing listeners in many studies, it is less clear how spatial release from masking changes in cochlear implant listeners with and without access to low-frequency acoustic hearing. Spatial release from masking depends on differences in access to speech cues due to hearing status and hearing device. To investigate the influence of these factors on speech intelligibility, the present study measured speech reception thresholds in spatially separated speech and noise for 10 different listener types. A vocoder was used to simulate cochlear implant processing and low-frequency filtering was used to simulate residual low-frequency hearing. These forms of processing were combined to simulate cochlear implant listening, listening based on low-frequency residual hearing, and combinations thereof. Simulated cochlear implant users with additional low-frequency acoustic hearing showed better speech intelligibility in noise than simulated cochlear implant users without acoustic hearing and had access to more spatial speech cues (e.g., higher binaural squelch). Cochlear implant listener types showed higher spatial release from masking with bilateral access to low-frequency acoustic hearing than without. A binaural speech intelligibility model with normal binaural processing showed overall good agreement with measured speech reception thresholds, spatial release from masking, and spatial speech cues. This indicates that differences in speech cues available to listener types are sufficient to explain the changes of spatial release from masking across these simulated listener types. PMID:26721918
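
    A noise vocoder of the kind used above to simulate cochlear implant processing divides speech into bands, extracts each band's envelope, and reimposes the envelopes on band-limited noise carriers. The sketch below is a minimal, generic version; the channel count, filter order, and band edges are assumptions rather than the study's parameters.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        fs = 16000
        edges = np.geomspace(100, 7000, num=9)   # 8 log-spaced analysis bands (Hz)

        def vocode(speech, fs, edges):
            out = np.zeros_like(speech)
            for lo, hi in zip(edges[:-1], edges[1:]):
                sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
                band = sosfiltfilt(sos, speech)
                env = np.abs(hilbert(band))       # envelope of the analysis band
                carrier = sosfiltfilt(sos, np.random.randn(len(speech)))
                out += env * carrier              # envelope-modulated noise carrier
            return out

        speech = np.random.randn(fs)              # stand-in for one second of speech
        vocoded = vocode(speech, fs, edges)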

  9. Directional harmonic theory: a computational Gestalt model to account for illusory contour and vertex formation.

    PubMed

    Lehar, Steven

    2003-01-01

    Visual illusions and perceptual grouping phenomena offer an invaluable tool for probing the computational mechanism of low-level visual processing. Some illusions, like the Kanizsa figure, reveal illusory contours that form edges collinear with the inducing stimulus. This kind of illusory contour has been modeled by neural network models by way of cells equipped with elongated spatial receptive fields designed to detect and complete the collinear alignment. There are, however, other illusory groupings which are not so easy to account for in neural network terms. The Ehrenstein illusion exhibits an illusory contour that forms a contour orthogonal to the stimulus instead of collinear with it. Other perceptual grouping effects reveal illusory contours that exhibit a sharp corner or vertex, and still others take the form of vertices defined by the intersection of three, four, or more illusory contours that meet at a point. A direct extension of the collinear completion models to account for these phenomena tends towards a combinatorial explosion, because it would suggest cells with specialized receptive fields configured to perform each of those completion types, each of which would have to be replicated at every location and every orientation across the visual field. These phenomena therefore challenge the adequacy of the neural network approach to account for these diverse perceptual phenomena. I have proposed elsewhere an alternative paradigm of neurocomputation in the harmonic resonance theory (Lehar 1999, see website), whereby pattern recognition and completion are performed by spatial standing waves across the neural substrate. The standing waves perform a computational function analogous to that of the spatial receptive fields of the neural network approach, except that, unlike that paradigm, a single resonance mechanism performs a function equivalent to a whole array of spatial receptive fields of different spatial configurations and of different orientations, and thereby avoids the combinatorial explosion inherent in the older paradigm. The present paper presents the directional harmonic model, a more specific development of the harmonic resonance theory, designed to account for specific perceptual grouping phenomena. Computer simulations of the directional harmonic model show that it can account for collinear contours as observed in the Kanizsa figure, orthogonal contours as seen in the Ehrenstein illusion, and a number of illusory vertex percepts composed of two, three, or more illusory contours that meet in a variety of configurations.

  10. Humpback whale bioacoustics: From form to function

    NASA Astrophysics Data System (ADS)

    Mercado, Eduardo, III

    This thesis investigates how humpback whales produce, perceive, and use sounds from a comparative and computational perspective. Biomimetic models are developed within a systems-theoretic framework and then used to analyze the properties of humpback whale sounds. First, sound transmission is considered in terms of possible production mechanisms and the propagation characteristics of shallow water environments frequented by humpback whales. A standard source-filter model (used to describe human sound production) is shown to be well suited for characterizing sound production by humpback whales. Simulations of sound propagation based on normal mode theory reveal that optimal frequencies for long range propagation are higher than the frequencies used most often by humpbacks, and that sounds may contain spectral information indicating how far they have propagated. Next, sound reception is discussed. A model of human auditory processing is modified to emulate humpback whale auditory processing as suggested by cochlear anatomical dimensions. This auditory model is used to generate visual representations of humpback whale sounds that more clearly reveal what features are likely to be salient to listening whales. Additionally, the possibility that an unusual sensory organ (the tubercle) plays a role in acoustic processing is assessed. Spatial distributions of tubercles are described that suggest tubercles may be useful for localizing sound sources. Finally, these models are integrated with self-organizing feature maps to create a biomimetic sound classification system, and a detailed analysis of individual sounds and sound patterns in humpback whale 'songs' is performed. This analysis provides evidence that song sounds and sound patterns vary substantially in terms of detectability and propagation potential, suggesting that they do not all serve the same function. New quantitative techniques are also presented that allow for more objective characterizations of the long term acoustic features of songs. The quantitative framework developed in this thesis provides a basis for theoretical consideration of how humpback whales (and other cetaceans) might use sound. Evidence is presented suggesting that vocalizing humpbacks could use sounds not only to convey information to other whales, but also to collect information about other whales. In particular, it is suggested that some sounds currently believed to be primarily used as communicative signals, might be primarily used as sonar signals. This theoretical framework is shown to be generalizable to other baleen whales and to toothed whales.

  11. The Coupling between Ca2+ Channels and the Exocytotic Ca2+ Sensor at Hair Cell Ribbon Synapses Varies Tonotopically along the Mature Cochlea

    PubMed Central

    Cho, Soyoun

    2017-01-01

    The cochlea processes auditory signals over a wide range of frequencies and intensities. However, the transfer characteristics at hair cell ribbon synapses are still poorly understood at different frequency locations along the cochlea. Using recordings from mature gerbils, we report here a surprisingly strong block of exocytosis by the slow Ca2+ buffer EGTA (10 mM) in basal hair cells tuned to high frequencies (∼30 kHz). In addition, using recordings from gerbil, mouse, and bullfrog auditory organs, we find that the spatial coupling between Ca2+ influx and exocytosis changes from nanodomain in low-frequency tuned hair cells (∼<2 kHz) to progressively more microdomain in high-frequency cells (∼>2 kHz). Hair cell synapses have thus developed remarkable frequency-dependent tuning of exocytosis: accurate low-latency encoding of onset and offset of sound intensity in the cochlea's base and submillisecond encoding of membrane receptor potential fluctuations in the apex for precise phase-locking to sound signals. We also found that synaptic vesicle pool recovery from depletion was sensitive to high concentrations of EGTA, suggesting that intracellular Ca2+ buffers play an important role in vesicle recruitment in both low- and high-frequency hair cells. In conclusion, our results indicate that microdomain coupling is important for exocytosis in high-frequency hair cells, suggesting a novel hypothesis for why these cells are more susceptible to sound-induced damage than low-frequency cells; high-frequency inner hair cells must have a low Ca2+ buffer capacity to sustain exocytosis, thus making them more prone to Ca2+-induced cytotoxicity. SIGNIFICANCE STATEMENT In the inner ear, sensory hair cells signal reception of sound. They do this by converting the sound-induced movement of their hair bundles present at the top of these cells, into an electrical current. This current depolarizes the hair cell and triggers the calcium-induced release of the neurotransmitter glutamate that activates the postsynaptic auditory fibers. The speed and precision of this process enables the brain to perceive the vital components of sound, such as frequency and intensity. We show that the coupling strength between calcium channels and the exocytosis calcium sensor at inner hair cell synapses changes along the mammalian cochlea such that the timing and/or intensity of sound is encoded with high precision. PMID:28154149

  12. Boundary-layer receptivity due to distributed surface imperfections of a deterministic or random nature

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan

    1992-01-01

    Acoustic receptivity of a Blasius boundary layer in the presence of distributed surface irregularities is investigated analytically. It is shown that, out of the entire spatial spectrum of the surface irregularities, only a small band of Fourier components can lead to an efficient conversion of the acoustic input at any given frequency to an unstable eigenmode of the boundary-layer flow. The location and width of this most receptive band of wavenumbers correspond to a relative detuning of $O(R_{lb}^{-3/8})$ with respect to the lower-neutral instability wavenumber at the frequency under consideration, where $R_{lb}$ is the Reynolds number based on a typical boundary-layer thickness at the lower branch of the neutral stability curve. Surface imperfections in the form of discrete-mode waviness in this range of wavenumbers lead to initial instability amplitudes that are $O(R_{lb}^{3/8})$ larger than those caused by a single, isolated roughness element. In contrast, irregularities with a continuous spatial spectrum produce much smaller instability amplitudes, even compared to the isolated case, since the increase due to the resonant nature of the response is more than compensated for by the asymptotically small bandwidth of the receptivity process. Analytical expressions for the maximum possible instability amplitudes, as well as their expectation for an ensemble of statistically irregular surfaces with random phase distributions, are also presented.

  13. Sensory Substitution: The Spatial Updating of Auditory Scenes “Mimics” the Spatial Updating of Visual Scenes

    PubMed Central

    Pasqualotto, Achille; Esenkaya, Tayfun

    2016-01-01

    Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or "soundscapes". Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications for improving training methods with sensory substitution devices (SSD). PMID:27148000

  14. Binding of Verbal and Spatial Features in Auditory Working Memory

    ERIC Educational Resources Information Center

    Maybery, Murray T.; Clissa, Peter J.; Parmentier, Fabrice B. R.; Leung, Doris; Harsa, Grefin; Fox, Allison M.; Jones, Dylan M.

    2009-01-01

    The present study investigated the binding of verbal identity and spatial location in the retention of sequences of spatially distributed acoustic stimuli. Study stimuli varying in verbal content and spatial location (e.g., V1S1, V2S2, V3S3, V4S4) were…

  15. Neural time course of visually enhanced echo suppression.

    PubMed

    Bishop, Christopher W; London, Sam; Miller, Lee M

    2012-10-01

    Auditory spatial perception plays a critical role in day-to-day communication. For instance, listeners utilize acoustic spatial information to segregate individual talkers into distinct auditory "streams" to improve speech intelligibility. However, spatial localization is an exceedingly difficult task in everyday listening environments with numerous distracting echoes from nearby surfaces, such as walls. Listeners' brains overcome this unique challenge by relying on acoustic timing and, quite surprisingly, visual spatial information to suppress short-latency (1-10 ms) echoes through a process known as "the precedence effect" or "echo suppression." In the present study, we employed electroencephalography (EEG) to investigate the neural time course of echo suppression both with and without the aid of coincident visual stimulation in human listeners. We find that echo suppression is a multistage process initialized during the auditory N1 (70-100 ms) and followed by space-specific suppression mechanisms from 150 to 250 ms. Additionally, we find a robust correlate of listeners' spatial perception (i.e., suppressing or not suppressing the echo) over central electrode sites from 300 to 500 ms. Contrary to our hypothesis, vision's powerful contribution to echo suppression occurs late in processing (250-400 ms), suggesting that vision contributes primarily during late sensory or decision making processes. Together, our findings support growing evidence that echo suppression is a slow, progressive mechanism modifiable by visual influences during late sensory and decision making stages. Furthermore, our findings suggest that audiovisual interactions are not limited to early, sensory-level modulations but extend well into late stages of cortical processing.

  16. Realigning thunder and lightning: temporal adaptation to spatiotemporally distant events.

    PubMed

    Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel

    2013-01-01

    The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment task (SJ) was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiment 1 and 2, when comparing the participants' SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events).

  17. Effects of training and motivation on auditory P300 brain-computer interface performance.

    PubMed

    Baykara, E; Ruf, C A; Fioravanti, C; Käthner, I; Simon, N; Kleih, S C; Kübler, A; Halder, S

    2016-01-01

    Brain-computer interface (BCI) technology aims at helping end-users with severe motor paralysis to communicate with their environment without using the natural output pathways of the brain. For end-users in complete paralysis, loss of gaze control may necessitate non-visual BCI systems. The present study investigated the effect of training on performance with an auditory P300 multi-class speller paradigm. For half of the participants, spatial cues were added to the auditory stimuli to see whether performance could be further optimized. The influence of motivation, mood, and workload on performance and the P300 component was also examined. In five sessions, 16 healthy participants were instructed to spell several words by attending to animal sounds representing the rows and columns of a 5 × 5 letter matrix. Eighty-one percent of the participants achieved an average online accuracy of ⩾ 70%. From the first to the fifth session, information transfer rates increased from 3.72 bits/min to 5.63 bits/min. Motivation significantly influenced P300 amplitude and online ITR. No significant facilitative effect of spatial cues on performance was observed. Training improves performance in an auditory BCI paradigm. Motivation influences performance and P300 amplitude. The described auditory BCI system may help end-users to communicate independently of gaze control with their environment. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
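
    The information transfer rates quoted above are conventionally computed with the Wolpaw formula from the number of selectable items, the selection accuracy, and the selection rate. A minimal sketch, using the study's 5 × 5 matrix but otherwise assumed values:

        import math

        def wolpaw_itr(n_classes, p, selections_per_min):
            """Bits per selection times selections per minute."""
            bits = (math.log2(n_classes)
                    + p * math.log2(p)
                    + (1 - p) * math.log2((1 - p) / (n_classes - 1)))
            return bits * selections_per_min

        # 25 letters, 80% accuracy, 2 selections/min (accuracy and rate assumed)
        print(f"{wolpaw_itr(25, 0.80, 2.0):.2f} bits/min")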

  18. Perceptual integration of faces and voices depends on the interaction of emotional content and spatial frequency.

    PubMed

    Kokinous, Jenny; Tavano, Alessandro; Kotz, Sonja A; Schröger, Erich

    2017-02-01

    The role of spatial frequencies (SF) is highly debated in emotion perception, but previous work suggests the importance of low SFs for detecting emotion in faces. Furthermore, emotion perception essentially relies on the rapid integration of multimodal information from faces and voices. We used EEG to test the functional relevance of SFs in the integration of emotional and non-emotional audiovisual stimuli. While viewing dynamic face-voice pairs, participants were asked to identify auditory interjections, and the EEG was recorded. Audiovisual integration was measured as auditory facilitation, indexed by the extent of the auditory N1 amplitude suppression in the audiovisual compared to an auditory-only condition. We found an interaction of SF filtering and emotion in the auditory response suppression. For neutral faces, larger N1 suppression ensued in the unfiltered and high SF conditions as compared to the low SF condition. Angry face perception led to a larger N1 suppression in the low SF condition. While the results for the neutral faces indicate that perceptual quality in terms of SF content plays a major role in audiovisual integration, the results for angry faces suggest that early multisensory integration of emotional information favors low SF neural processing pathways, overruling the predictive value of the visual signal per se. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. The Influence of Eye Closure on Somatosensory Discrimination: A Trade-off Between Simple Perception and Discrimination.

    PubMed

    Götz, Theresa; Hanke, David; Huonker, Ralph; Weiss, Thomas; Klingner, Carsten; Brodoehl, Stefan; Baumbach, Philipp; Witte, Otto W

    2017-06-01

    We often close our eyes to improve perception. Recent results have shown a decrease of perception thresholds accompanied by an increase in somatosensory activity after eye closure. However, does somatosensory spatial discrimination also benefit from eye closure? We previously showed that spatial discrimination is accompanied by a reduction of somatosensory activity. Using magnetoencephalography, we analyzed the magnitude of primary somatosensory (somatosensory P50m) and primary auditory activity (auditory P50m) during a one-back discrimination task in 21 healthy volunteers. In complete darkness, participants were requested to pay attention to either the somatosensory or auditory stimulation and asked to open or close their eyes every 6.5 min. The somatosensory P50m was reduced during a task requiring discrimination of changes in stimulus location at the distal phalanges of different fingers. The somatosensory P50m was further reduced and detection performance was higher with eyes open. A similar reduction was found for the auditory P50m during a task requiring discrimination of changing tones. The function of eye closure is more than controlling visual input. It might be advantageous for perception because it is an effective way to reduce interference from other modalities, but disadvantageous for spatial discrimination because it requires at least one top-down processing stage. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. Global versus local adaptation in fly motion-sensitive neurons

    PubMed Central

    Neri, Peter; Laughlin, Simon B

    2005-01-01

    Flies, like humans, experience a well-known consequence of adaptation to visual motion, the waterfall illusion. Direction-selective neurons in the fly lobula plate permit a detailed analysis of the mechanisms responsible for motion adaptation and their function. Most of these neurons are spatially non-opponent: they sum responses to motion in the preferred direction across their entire receptive field, and adaptation depresses responses by subtraction and by reducing contrast gain. When we adapted a small area of the receptive field to motion in its anti-preferred direction, we discovered that directional gain at unadapted regions was enhanced. This novel phenomenon shows that neuronal responses to the direction of stimulation in one area of the receptive field are dynamically adjusted to the history of stimulation both within and outside that area. PMID:16191636
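
    The two adaptation effects named above, subtraction and contrast-gain reduction, can be illustrated with a toy response function. A hedged sketch; the parameter names are our own choices, not the authors':

```python
def adapted_response(contrast, gain=1.0, c50=0.2, subtract=0.0):
    """Toy direction-selective response. Adaptation lowers `gain`
    (contrast-gain reduction) and/or raises `subtract` (subtractive
    suppression); both depress the response, but with different shapes."""
    return max(gain * contrast / (contrast + c50) - subtract, 0.0)

# Unadapted response vs. the two adaptation regimes at 50% contrast:
print(adapted_response(0.5))                  # baseline
print(adapted_response(0.5, gain=0.6))        # reduced contrast gain
print(adapted_response(0.5, subtract=0.2))    # subtractive depression
```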

  1. Can an aircraft be piloted via sonification with an acceptable attentional cost? A comparison of blind and sighted pilots.

    PubMed

    Valéry, Benoît; Scannella, Sébastien; Peysakhovich, Vsevolod; Barone, Pascal; Causse, Mickaël

    2017-07-01

    In the aeronautics field, some authors have suggested that sonification of an aircraft's attitude could be used by pilots to cope with spatial disorientation. Such a system is currently used by blind pilots to control the attitude of their aircraft. However, given the suspected higher auditory attentional capacities of blind people, whether sighted individuals can use this system remains an open question. For example, its introduction may overload the auditory channel, which may in turn alter the responsiveness of pilots to infrequent but critical auditory warnings. In this study, two groups of pilots (blind versus sighted) performed a simulated flight experiment consisting of successive aircraft maneuvers, solely on the basis of aircraft sonification. Maneuver difficulty was varied while we assessed flight performance along with subjective and electroencephalographic (EEG) measures of workload. The results showed that both groups of participants reached target attitudes with good accuracy. However, more complex maneuvers increased subjective workload and impaired brain responsiveness toward unexpected auditory stimuli, as demonstrated by lower N1 and P3 amplitudes. Although the EEG signal showed a clear reorganization of the brain in the blind participants (higher alpha power), brain responsiveness to unexpected auditory stimuli was not significantly different between the two groups. The results suggest that an auditory display might provide useful additional information to spatially disoriented pilots with normal vision. However, its use should be restricted to critical situations and simple recovery or guidance maneuvers. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Preattentive representation of feature conjunctions for concurrent spatially distributed auditory objects.

    PubMed

    Takegata, Rika; Brattico, Elvira; Tervaniemi, Mari; Varyagina, Olga; Näätänen, Risto; Winkler, István

    2005-09-01

    The role of attention in conjoining features of an object has been a topic of much debate. Studies using the mismatch negativity (MMN), an index of detecting acoustic deviance, suggested that the conjunctions of auditory features are preattentively represented in the brain. These studies, however, used sequentially presented sounds and thus are not directly comparable with visual studies of feature integration. Therefore, the current study presented an array of spatially distributed sounds to determine whether the auditory features of concurrent sounds are correctly conjoined without focal attention directed to the sounds. Two types of sounds differing from each other in timbre and pitch were repeatedly presented together while subjects were engaged in a visual n-back working-memory task and ignored the sounds. Occasional reversals of the frequent pitch-timbre combinations elicited MMNs of a very similar amplitude and latency irrespective of the task load. This result suggested preattentive integration of auditory features. However, performance in a subsequent target-search task with the same stimuli indicated the occurrence of illusory conjunctions. The discrepancy between the results obtained with and without focal attention suggests that illusory conjunctions may occur during voluntary access to the preattentively encoded object representations.

  3. Decoding auditory spatial and emotional information encoding using multivariate versus univariate techniques.

    PubMed

    Kryklywy, James H; Macpherson, Ewan A; Mitchell, Derek G V

    2018-04-01

    Emotion can have diverse effects on behaviour and perception, modulating function in some circumstances, and sometimes having little effect. Recently, it was identified that part of the heterogeneity of emotional effects could be due to a dissociable representation of emotion in dual-pathway models of sensory processing. Our previous fMRI experiment using traditional univariate analyses showed that emotion modulated processing in the auditory 'what' but not 'where' processing pathway. The current study aims to further investigate this dissociation using a multi-voxel pattern analysis (MVPA) searchlight approach. While undergoing fMRI, participants localized sounds of varying emotional content. A searchlight MVPA was conducted to identify activity patterns predictive of sound location and/or emotion. Relative to the prior univariate analysis, MVPA indicated larger overlapping spatial and emotional representations of sound within early secondary regions associated with auditory localization. However, consistent with the univariate analysis, these two dimensions were increasingly segregated in late secondary and tertiary regions of the auditory processing streams. These results, while complementary to our original univariate analyses, highlight the utility of multiple analytic approaches for neuroimaging, particularly for neural processes with known representations dependent on population coding.
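
    A searchlight analysis of the kind described scores a classifier within a small sphere centred on each voxel in turn. A minimal, self-contained sketch using generic numpy/scikit-learn tools, not the authors' pipeline; the array shapes are illustrative assumptions:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def searchlight(data, labels, coords, radius=2.0, cv=5):
    """Cross-validated decoding accuracy around each voxel.

    data:   (n_samples, n_voxels) activity patterns
    coords: (n_voxels, 3) voxel coordinates
    Returns one accuracy per voxel (the score of its sphere)."""
    scores = np.empty(len(coords))
    for i, centre in enumerate(coords):
        sphere = np.linalg.norm(coords - centre, axis=1) <= radius
        scores[i] = cross_val_score(LinearSVC(), data[:, sphere],
                                    labels, cv=cv).mean()
    return scores
```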

  4. Multisensory connections of monkey auditory cerebral cortex

    PubMed Central

    Smiley, John F.; Falchier, Arnaud

    2009-01-01

    Functional studies have demonstrated multisensory responses in auditory cortex, even in the primary and early auditory association areas. The features of somatosensory and visual responses in auditory cortex suggest that they are involved in multiple processes including spatial, temporal and object-related perception. Tract tracing studies in monkeys have demonstrated several potential sources of somatosensory and visual inputs to auditory cortex. These include potential somatosensory inputs from the retroinsular (RI) and granular insula (Ig) cortical areas, and from the thalamic posterior (PO) nucleus. Potential sources of visual responses include peripheral field representations of areas V2 and prostriata, as well as the superior temporal polysensory area (STP) in the superior temporal sulcus, and the magnocellular medial geniculate thalamic nucleus (MGm). Besides these sources, there are several other thalamic, limbic and cortical association structures that have multisensory responses and may contribute cross-modal inputs to auditory cortex. These connections demonstrated by tract tracing provide a list of potential inputs, but in most cases their significance has not been confirmed by functional experiments. It is possible that the somatosensory and visual modulation of auditory cortex are each mediated by multiple extrinsic sources. PMID:19619628

  5. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    PubMed

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. Auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space; the primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, can also affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of sound level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.

  6. Functional MRI Representational Similarity Analysis Reveals a Dissociation between Discriminative and Relative Location Information in the Human Visual System.

    PubMed

    Roth, Zvi N

    2016-01-01

    Neural responses in visual cortex are governed by a topographic mapping from retinal locations to cortical responses. Moreover, at the voxel population level early visual cortex (EVC) activity enables accurate decoding of stimuli locations. However, in many cases information enabling one to discriminate between locations (i.e., discriminative information) may be less relevant than information regarding the relative location of two objects (i.e., relative information). For example, when planning to grab a cup, determining whether the cup is located at the same retinal location as the hand is hardly relevant, whereas the location of the cup relative to the hand is crucial for performing the action. We have previously used multivariate pattern analysis techniques to measure discriminative location information, and found the highest levels in EVC, in line with other studies. Here we show, using representational similarity analysis, that availability of discriminative information in fMRI activation patterns does not entail availability of relative information. Specifically, we find that relative location information can be reliably extracted from activity patterns in posterior intraparietal sulcus (pIPS), but not from EVC, where we find the spatial representation to be warped. We further show that this variability in relative information levels between regions can be explained by a computational model based on an array of receptive fields. Moreover, when the model's receptive fields are extended to include inhibitory surround regions, the model can account for the spatial warping in EVC. These results demonstrate how size and shape properties of receptive fields in human visual cortex contribute to the transformation of discriminative spatial representations into relative spatial representations along the visual stream.

  8. A comprehensive three-dimensional cortical map of vowel space.

    PubMed

    Scharinger, Mathias; Idsardi, William J; Poe, Samantha

    2011-12-01

    Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space of a language (Turkish) onto cortical locations by using the magnetic N1 (M100), an auditory-evoked component that peaks approximately 100 msec after auditory stimulus onset. We found that dipole locations could be structured into two distinct maps, one for vowels produced with the tongue positioned toward the front of the mouth (front vowels) and one for vowels produced in the back of the mouth (back vowels). Furthermore, we found spatial gradients in lateral-medial, anterior-posterior, and inferior-superior dimensions that encoded the phonetic, categorical distinctions between all the vowels of Turkish. Statistical model comparisons of the dipole locations suggest that the spatial encoding scheme is not entirely based on acoustic bottom-up information but crucially involves featural-phonetic top-down modulation. Thus, multiple areas of excitation along the unidimensional basilar membrane are mapped into higher dimensional representations in auditory cortex.

  9. Missing a trick: Auditory load modulates conscious awareness in audition.

    PubMed

    Fairnie, Jake; Moore, Brian C J; Remington, Anna

    2016-07-01

    In the visual domain there is considerable evidence supporting the Load Theory of Attention and Cognitive Control, which holds that conscious perception of background stimuli depends on the level of perceptual load involved in a primary task. However, literature on the applicability of this theory to the auditory domain is limited and, in many cases, inconsistent. Here we present a novel "auditory search task" that allows systematic investigation of the impact of auditory load on auditory conscious perception. An array of simultaneous, spatially separated sounds was presented to participants. On half the trials, a critical stimulus was presented concurrently with the array. Participants were asked to detect which of 2 possible targets was present in the array (primary task), and whether the critical stimulus was present or absent (secondary task). Increasing the auditory load of the primary task (raising the number of sounds in the array) consistently reduced the ability to detect the critical stimulus. This indicates that, at least in certain situations, load theory applies in the auditory domain. The implications of this finding are discussed both with respect to our understanding of typical audition and for populations with altered auditory processing. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  10. A possible role for a paralemniscal auditory pathway in the coding of slow temporal information

    PubMed Central

    Abrams, Daniel A.; Nicol, Trent; Zecker, Steven; Kraus, Nina

    2010-01-01

    Low-frequency temporal information present in speech is critical for normal perception; however, the neural mechanism underlying the differentiation of slow rates in acoustic signals is not known. Data from the rat trigeminal system suggest that the paralemniscal pathway may be specifically tuned to code low-frequency temporal information. We tested whether this phenomenon occurs in the auditory system by measuring the representation of temporal rate in lemniscal and paralemniscal auditory thalamus and cortex in guinea pig. Similar to the trigeminal system, responses measured in auditory thalamus indicate that slow rates are differentially represented in a paralemniscal pathway. In cortex, both lemniscal and paralemniscal neurons indicated sensitivity to slow rates. We speculate that a paralemniscal pathway in the auditory system may be specifically tuned to code low-frequency temporal information present in acoustic signals. These data suggest that somatosensory and auditory modalities have parallel sub-cortical pathways that separately process slow rates and the spatial representation of the sensory periphery. PMID:21094680

  11. Thinking about touch facilitates tactile but not auditory processing.

    PubMed

    Anema, Helen A; de Haan, Alyanne M; Gebuis, Titia; Dijkerman, H Chris

    2012-05-01

    Mental imagery is considered to be important for normal conscious experience. It is most frequently investigated in the visual, auditory and motor domain (imagination of movement), while studies on tactile imagery (imagination of touch) are scarce. The current study investigated the effect of tactile and auditory imagery on the left/right discrimination of tactile and auditory stimuli. In line with our hypothesis, we observed that after tactile imagery, tactile stimuli were responded to faster than auditory stimuli, and vice versa. On average, tactile stimuli were responded to faster than auditory stimuli, and stimuli in the imagery condition were on average responded to more slowly than in baseline performance (left/right discrimination without an imagery assignment). The former is probably due to the spatial and somatotopic proximity of the fingers receiving the taps and the thumbs performing the response (button press), the latter to a dual-task cost. Together, these results provide the first evidence of a behavioural effect of a tactile imagery assignment on the perception of real tactile stimuli.

  12. Development of Mobile Accessible Pedestrian Signals (MAPS) for Blind Pedestrians at Signalized Intersections

    DOT National Transportation Integrated Search

    2011-06-01

    People with vision impairment have different perception and spatial cognition as compared to the sighted people. Blind pedestrians primarily rely on auditory, olfactory, or tactile feedback to determine spatial location and find their way. They gener...

  13. Reproductive Hormones Modify Reception of Species-Typical Communication Signals in a Female Anuran

    PubMed Central

    Lynch, Kathleen S.; Wilczynski, Walter

    2008-01-01

    In many vertebrates, the production and reception of species-typical courtship signals occurs when gonadotropin and gonadal hormone levels are elevated. These hormones may modify sensory processing in the signal receiver in a way that enhances behavioral responses to the signal. We examined this possibility in female túngara frogs (Physalaemus pustulosus) by treating them with either gonadotropin (which elevated estradiol) or saline and exposing them to either mate choruses or silence. Expression of an activity-dependent gene, egr-1, was quantified within two sub-nuclei of the auditory midbrain to investigate whether gonadotropin plus chorus exposure induced greater egr-1 induction than either of these stimuli alone. The laminar nucleus (LN), a sub-nucleus of the torus semicircularis that contains steroid receptors, exhibited elevated egr-1 induction in response to chorus exposure and gonadotropin treatment. Further analysis revealed that neither chorus exposure nor gonadotropin treatment alone elevated egr-1 expression in comparison to baseline levels whereas gonadotropin + chorus exposure did. This suggests that mate signals and hormones together produce an additive effect so that together they induce more egr-1 expression than either alone. Our previously published studies of female túngara frogs reveal that (1) gonadotropin-induced estradiol elevations also increase behavioral responses to male signals, and (2) reception of male signals elevates estradiol levels in the female. Here, we report data that reveal a novel mechanism by which males exploit female sensory processing to increase behavioral responses to their courtship signals. PMID:18032889

  14. Spatial updating in area LIP is independent of saccade direction.

    PubMed

    Heiser, Laura M; Colby, Carol L

    2006-05-01

    We explore the world around us by making rapid eye movements to objects of interest. Remarkably, these eye movements go unnoticed, and we perceive the world as stable. Spatial updating is one of the neural mechanisms that contributes to this perception of spatial constancy. Previous studies in macaque lateral intraparietal cortex (area LIP) have shown that individual neurons update, or "remap," the locations of salient visual stimuli at the time of an eye movement. The existence of remapping implies that neurons have access to visual information from regions far beyond the classically defined receptive field. We hypothesized that neurons have access to information located anywhere in the visual field. We tested this by recording the activity of LIP neurons while systematically varying the direction in which a stimulus location must be updated. Our primary finding is that individual neurons remap stimulus traces in multiple directions, indicating that LIP neurons have access to information throughout the visual field. At the population level, stimulus traces are updated in conjunction with all saccade directions, even when we consider direction as a function of receptive field location. These results show that spatial updating in LIP is effectively independent of saccade direction. Our findings support the hypothesis that the activity of LIP neurons contributes to the maintenance of spatial constancy throughout the visual field.

  15. Acoustic Receptivity of Mach 4.5 Boundary Layer with Leading-Edge Bluntness

    NASA Technical Reports Server (NTRS)

    Malik, Mujeeb R.; Balakumar, Ponnampalam

    2007-01-01

    Boundary-layer receptivity to two-dimensional slow and fast acoustic waves is investigated by solving the Navier-Stokes equations for Mach 4.5 flow over a flat plate with a finite-thickness leading edge. Higher-order spatial and temporal schemes are employed to obtain the solution, with the flat-plate leading-edge region resolved by a sufficiently refined grid. The results show that the instability waves are generated in the leading-edge region and that the boundary layer is much more receptive to slow acoustic waves (by almost a factor of 20) than to fast waves. Hence, this leading-edge receptivity mechanism is expected to be more relevant in the transition process for high Mach number flows where second-mode instability is dominant. Computations are performed to investigate the effect of leading-edge thickness, and it is found that bluntness tends to stabilize the boundary layer. Furthermore, the relative significance of fast acoustic waves is enhanced in the presence of bluntness. The effect of acoustic wave incidence angle is also studied, and it is found that the receptivity of the boundary layer on the windward side (with respect to the acoustic forcing) decreases by more than a factor of 4 when the incidence angle is increased from 0 to 45 deg. However, the receptivity coefficient for the leeward side is found to vary relatively weakly with the incidence angle.

  16. Prediction of consonant recognition in quiet for listeners with normal and impaired hearing using an auditory model.

    PubMed

    Jürgens, Tim; Ewert, Stephan D; Kollmeier, Birger; Brand, Thomas

    2014-03-01

    Consonant recognition was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in quiet as a function of speech level using a nonsense logatome test. Average recognition scores were analyzed and compared to recognition scores of a speech recognition model. In contrast to commonly used spectral speech recognition models operating on long-term spectra, a "microscopic" model operating in the time domain was used. Variations of the model (accounting for hearing impairment) and different model parameters (reflecting cochlear compression) were tested. Using these model variations this study examined whether speech recognition performance in quiet is affected by changes in cochlear compression, namely, a linearization, which is often observed in HI listeners. Consonant recognition scores for HI listeners were poorer than for NH listeners. The model accurately predicted the speech reception thresholds of the NH and most HI listeners. A partial linearization of the cochlear compression in the auditory model, while keeping audibility constant, produced higher recognition scores and improved the prediction accuracy. However, including listener-specific information about the exact form of the cochlear compression did not improve the prediction further.

  17. Comparative assessment of amphibious hearing in pinnipeds.

    PubMed

    Reichmuth, Colleen; Holt, Marla M; Mulsow, Jason; Sills, Jillian M; Southall, Brandon L

    2013-06-01

    Auditory sensitivity in pinnipeds is influenced by the need to balance efficient sound detection in two vastly different physical environments. Previous comparisons between aerial and underwater hearing capabilities have considered media-dependent differences relative to auditory anatomy, acoustic communication, ecology, and amphibious life history. New data for several species, including recently published audiograms and previously unreported measurements obtained in quiet conditions, necessitate a re-evaluation of amphibious hearing in pinnipeds. Several findings related to underwater hearing are consistent with earlier assessments, including an expanded frequency range of best hearing in true seals that spans at least six octaves. The most notable new results indicate markedly better aerial sensitivity in two seals (Phoca vitulina and Mirounga angustirostris) and one sea lion (Zalophus californianus), likely attributable to improved ambient noise control in test enclosures. An updated comparative analysis alters conventional views and demonstrates that these amphibious pinnipeds have not necessarily sacrificed aerial hearing capabilities in favor of enhanced underwater sound reception. Despite possessing underwater hearing that is nearly as sensitive as fully aquatic cetaceans and sirenians, many seals and sea lions have retained acute aerial hearing capabilities rivaling those of terrestrial carnivores.

  18. Head-Up Auditory Displays for Traffic Collision Avoidance System Advisories: A Preliminary Investigation

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.

    1993-01-01

    The advantage of a head-up auditory display was evaluated in a preliminary experiment designed to measure and compare the acquisition time for capturing visual targets under two auditory conditions: standard one-earpiece presentation and two-earpiece three-dimensional (3D) audio presentation. Twelve commercial airline crews were tested under full mission simulation conditions at the NASA-Ames Man-Vehicle Systems Research Facility advanced concepts flight simulator. Scenario software generated visual targets corresponding to aircraft that would activate a traffic collision avoidance system (TCAS) aural advisory; the spatial auditory position was linked to the visual position with 3D audio presentation. Results showed that crew members using a 3D auditory display acquired targets approximately 2.2 s faster than did crew members who used one-earpiece headsets, but there was no significant difference in the number of targets acquired.

  19. Cortical depth dependent population receptive field attraction by spatial attention in human V1.

    PubMed

    Klein, Barrie P; Fracasso, Alessio; van Dijk, Jelle A; Paffen, Chris L E; Te Pas, Susan F; Dumoulin, Serge O

    2018-04-27

    Visual spatial attention concentrates neural resources at the attended location. Recently, we demonstrated that voluntary spatial attention attracts population receptive fields (pRFs) toward its location throughout the visual hierarchy. Theoretically, both a feed forward or feedback mechanism could underlie pRF attraction in a given cortical area. Here, we use sub-millimeter ultra-high field functional MRI to measure pRF attraction across cortical depth and assess the contribution of feed forward and feedback signals to pRF attraction. In line with previous findings, we find consistent attraction of pRFs with voluntary spatial attention in V1. When assessed as a function of cortical depth, we find pRF attraction in every cortical portion (deep, center and superficial), although the attraction is strongest in deep cortical portions (near the gray-white matter boundary). Following the organization of feed forward and feedback processing across V1, we speculate that a mixture of feed forward and feedback processing underlies pRF attraction in V1. Specifically, we propose that feedback processing contributes to the pRF attraction in deep cortical portions. Copyright © 2018. Published by Elsevier Inc.

  20. The RetroX auditory implant for high-frequency hearing loss.

    PubMed

    Garin, P; Genard, F; Galle, C; Jamart, J

    2004-07-01

    The objective of this study was to analyze the subjective satisfaction and measure the hearing gain provided by the RetroX (Auric GmbH, Rheine, Germany), an auditory implant of the external ear. We conducted a retrospective case review. We conducted this study at a tertiary referral center at a university hospital. We studied 10 adults with high-frequency sensori-neural hearing loss (ski-slope audiogram). The RetroX consists of an electronic unit sited in the postaural sulcus connected to a titanium tube implanted under the auricle between the sulcus and the entrance of the external auditory canal. Implanting requires only minor surgery under local anesthesia. Main outcome measures were a satisfaction questionnaire, pure-tone audiometry in quiet, speech audiometry in quiet, speech audiometry in noise, and azimuth audiometry (hearing threshold as a function of sound source location within the horizontal plane at ear level). Subjectively, all 10 patients are satisfied or even extremely satisfied with the hearing improvement provided by the RetroX. They wear the implant daily, from morning to evening. We observe a statistically significant improvement of pure-tone thresholds at 1, 2, and 4 kHz. In quiet, the speech reception threshold improves by 9 dB. Speech audiometry in noise shows that intelligibility improves by 26% for a signal-to-noise ratio of -5 dB, by 18% for a signal-to-noise ratio of 0 dB, and by 13% for a signal-to-noise ratio of +5 dB. Localization audiometry indicates that the skull masks sound contralateral to the implanted ear. Of the 10 patients, one had acoustic feedback and one presented with a granulomatous reaction to the foreign body that necessitated removing the implant. The RetroX auditory implant is a semi-implantable hearing aid without occlusion of the external auditory canal. It provides a new therapeutic alternative for managing high-frequency hearing loss.

  1. Similarity in Spatial Origin of Information Facilitates Cue Competition and Interference

    ERIC Educational Resources Information Center

    Amundson, Jeffrey C.; Miller, Ralph R.

    2007-01-01

    Two lick suppression studies were conducted with water-deprived rats to investigate the influence of spatial similarity in cue interaction. Experiment 1 assessed the influence of similarity of the spatial origin of competing cues in a blocking procedure. Greater blocking was observed in the condition in which the auditory blocking cue and the…

  2. Ergodic channel capacity of spatially correlated multiple-input multiple-output free-space optical links using multipulse pulse-position modulation

    NASA Astrophysics Data System (ADS)

    Wang, Huiqin; Wang, Xue; Cao, Minghua

    2017-02-01

    Spatial correlation is widespread in multiple-input multiple-output (MIMO) free-space optical (FSO) communication systems due to channel fading and antenna space limitations. Wilkinson's method was used to investigate the impact of spatial correlation on a MIMO FSO communication system employing multipulse pulse-position modulation. Simulation results show that spatial correlation reduces the ergodic channel capacity, and that reception diversity helps to counteract this performance degradation.
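
    Ergodic capacity under spatial correlation is commonly estimated by Monte Carlo averaging over channel draws. The sketch below uses a Kronecker correlation model with Rayleigh fading for simplicity; the cited system (FSO with multipulse PPM and atmospheric turbulence) differs in its fading statistics, so this is only a structural illustration:

```python
import numpy as np

def ergodic_capacity(snr_db, n_tx=2, n_rx=2, rho=0.5, trials=5000, seed=0):
    """Monte Carlo ergodic capacity (bits/s/Hz) of a spatially correlated
    MIMO channel: Kronecker model with exponential correlation `rho`."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    R_rx = rho ** np.abs(np.subtract.outer(np.arange(n_rx), np.arange(n_rx)))
    R_tx = rho ** np.abs(np.subtract.outer(np.arange(n_tx), np.arange(n_tx)))
    L_rx, L_tx = np.linalg.cholesky(R_rx), np.linalg.cholesky(R_tx)
    total = 0.0
    for _ in range(trials):
        H = (rng.standard_normal((n_rx, n_tx))
             + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2.0)
        H = L_rx @ H @ L_tx.conj().T          # impose spatial correlation
        M = np.eye(n_rx) + (snr / n_tx) * H @ H.conj().T
        total += np.log2(np.linalg.det(M).real)
    return total / trials

# Correlation lowers capacity, consistent with the result reported above:
print(ergodic_capacity(10, rho=0.0), ergodic_capacity(10, rho=0.8))
```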

  3. Beethoven's Last Piano Sonata and Those Who Follow Crocodiles: Cross-Domain Mappings of Auditory Pitch in a Musical Context

    ERIC Educational Resources Information Center

    Eitan, Zohar; Timmers, Renee

    2010-01-01

    Though auditory pitch is customarily mapped in Western cultures onto spatial verticality (high-low), both anthropological reports and cognitive studies suggest that pitch may be mapped onto a wide variety of other domains. We collected a total of 35 pitch mappings and investigated in four experiments how these mappings are used and…

  4. Near-Independent Capacities and Highly Constrained Output Orders in the Simultaneous Free Recall of Auditory-Verbal and Visuo-Spatial Stimuli

    ERIC Educational Resources Information Center

    Cortis Mack, Cathleen; Dent, Kevin; Ward, Geoff

    2018-01-01

    Three experiments examined the immediate free recall (IFR) of auditory-verbal and visuospatial materials from single-modality and dual-modality lists. In Experiment 1, we presented participants with between 1 and 16 spoken words, with between 1 and 16 visuospatial dot locations, or with between 1 and 16 words "and" dots with synchronized…

  5. Spatiotemporal profiles of receptive fields of neurons in the lateral posterior nucleus of the cat LP-pulvinar complex.

    PubMed

    Piché, Marilyse; Thomas, Sébastien; Casanova, Christian

    2015-10-01

    The pulvinar is the largest extrageniculate thalamic visual nucleus in mammals. It establishes reciprocal connections with virtually all visual cortical areas and likely plays a role in transthalamic cortico-cortical communication. In cats, the lateral posterior nucleus (LP) of the LP-pulvinar complex can be subdivided into two subregions, the lateral (LPl) and medial (LPm) parts, which receive a predominant input from the striate cortex and the superior colliculus, respectively. Here, we revisit the receptive field structure of LPl and LPm cells in anesthetized cats by determining their first-order spatiotemporal profiles through reverse correlation analysis of responses to sparse-noise stimulation. Our data reveal the existence of previously unidentified receptive field profiles in the LP nucleus in both the space and time domains. While some cells responded to only one stimulus polarity, the majority of neurons had receptive fields comprising bright and dark responsive subfields. For these neurons, the dark subfields were larger than the bright subfields. A variety of receptive field spatial organization types were identified, ranging from totally overlapping to segregated bright and dark subfields. In the time domain, a large spectrum of activity overlap was found, from cells with temporally coinciding subfield activity to neurons with distinct, time-dissociated subfield peak activity windows. We also found LP neurons with space-time inseparable receptive fields and neurons with multiple activity periods. Finally, a substantial degree of homology was found between LPl and LPm first-order receptive field spatiotemporal profiles, suggesting a high degree of integration of cortical and subcortical inputs within the LP-pulvinar complex. Copyright © 2015 the American Physiological Society.
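
    First-order spatiotemporal profiles of this kind are typically obtained by spike-triggered averaging of the stimulus at a range of time lags. A minimal generic sketch, not the authors' code; in practice bright and dark sparse-noise frames would be analysed separately:

```python
import numpy as np

def spike_triggered_average(stimulus, spikes, n_lags=10):
    """First-order receptive-field estimate by reverse correlation.

    stimulus: (n_frames, n_pixels) stimulus frames
    spikes:   (n_frames,) spike counts aligned to the frames
    Returns a (n_lags, n_pixels) spatiotemporal kernel."""
    n_frames, n_pixels = stimulus.shape
    sta = np.zeros((n_lags, n_pixels))
    for lag in range(n_lags):
        # Average the stimulus `lag` frames before each spike.
        sta[lag] = spikes[lag:] @ stimulus[:n_frames - lag]
        sta[lag] /= max(spikes[lag:].sum(), 1)
    return sta
```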

  6. Source analysis of auditory steady-state responses in acoustic and electric hearing.

    PubMed

    Luke, Robert; De Vos, Astrid; Wouters, Jan

    2017-02-15

    Speech is a complex signal containing a broad variety of acoustic information. For accurate speech reception, the listener must perceive modulations over a range of envelope frequencies. Perception of these modulations is particularly important for cochlear implant (CI) users, as all commercial devices use envelope coding strategies. Prolonged deafness affects the auditory pathway. However, little is known of how cochlear implantation affects the neural processing of modulated stimuli. This study investigates and contrasts the neural processing of envelope-rate-modulated signals in acoustic and CI listeners. Auditory steady-state responses (ASSRs) are used to study the neural processing of amplitude-modulated (AM) signals. A beamforming technique is applied to determine the increase in neural activity relative to a control condition, with particular attention paid to defining the accuracy and precision of this technique relative to other tomographies. In a cohort of 44 acoustic listeners, the location, activity and hemispheric lateralisation of ASSRs are characterised while systematically varying the modulation rate (4, 10, 20, 40 and 80 Hz) and stimulation ear (right, left and bilateral). We demonstrate a complex pattern of laterality depending on both modulation rate and stimulation ear that is consistent with, and extends, the existing literature. We present a novel extension to the beamforming method which facilitates source analysis of electrically evoked auditory steady-state responses (EASSRs). In a cohort of 5 right-implanted unilateral CI users, the neural activity is determined for the 40 Hz rate and compared to the acoustic cohort. Results indicate that CI users activate typical thalamic locations for 40 Hz stimuli. However, complementary to studies of transient stimuli, the CI population has atypical hemispheric laterality, preferentially activating the contralateral hemisphere. Copyright © 2016. Published by Elsevier Inc.
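
    Before any source analysis, an ASSR is usually quantified in the frequency domain as the power at the modulation rate relative to neighbouring bins. A generic detection sketch, not the beamforming pipeline described above:

```python
import numpy as np

def assr_snr(eeg, fs, f_mod=40.0, n_neighbours=10):
    """Spectral SNR of a steady-state response: power at the modulation
    frequency divided by the mean power of nearby frequency bins."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_mod)))
    noise = np.r_[spectrum[k - n_neighbours:k],
                  spectrum[k + 1:k + 1 + n_neighbours]]
    return spectrum[k] / noise.mean()
```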

  7. Effects of sound intensity on temporal properties of inhibition in the pallid bat auditory cortex.

    PubMed

    Razak, Khaleel A

    2013-01-01

    Auditory neurons in bats that use frequency-modulated (FM) sweeps for echolocation are selective for the behaviorally relevant rates and direction of frequency change. Such selectivity arises through spectrotemporal interactions between excitatory and inhibitory components of the receptive field. In the pallid bat auditory system, the relationship between FM sweep direction/rate selectivity and the spectral and temporal properties of sideband inhibition has been characterized. Of note is the temporal asymmetry in sideband inhibition, with low-frequency inhibition (LFI) exhibiting faster arrival times than high-frequency inhibition (HFI). Using the two-tone inhibition over time (TTI) stimulus paradigm, this study investigated the interactions between two sound parameters in shaping sideband inhibition: intensity and time. Specifically, the impact of changing the relative intensities of the excitatory and inhibitory tones on the arrival time of inhibition was studied. Using this stimulation paradigm, single-unit data from the auditory cortex of pentobarbital-anesthetized pallid bats show that the threshold for LFI is on average ~8 dB lower than for HFI. For equal-intensity tones near threshold, LFI is stronger than HFI. When the inhibitory tone intensity is increased further from threshold, the strength asymmetry decreases. The temporal asymmetry in LFI versus HFI arrival time is strongest when the excitatory and inhibitory tones are of equal intensity or the excitatory tone is louder. As inhibitory tone intensity is increased, the temporal asymmetry decreases, suggesting that the relative magnitude of excitatory and inhibitory inputs shapes the arrival time of inhibition and FM sweep rate and direction selectivity. Given that most FM bats use downward sweeps as echolocation calls, a similar asymmetry in the threshold and strength of LFI versus HFI may be a general adaptation to enhance direction selectivity while maintaining sweep-rate-selective responses to downward sweeps.

  8. The feature-weighted receptive field: an interpretable encoding model for complex feature spaces.

    PubMed

    St-Yves, Ghislain; Naselaris, Thomas

    2017-06-20

    We introduce the feature-weighted receptive field (fwRF), an encoding model designed to balance expressiveness, interpretability and scalability. The fwRF is organized around the notion of a feature map: a transformation of visual stimuli into visual features that preserves the topology of visual space (but not necessarily the native resolution of the stimulus). The key assumption of the fwRF model is that activity in each voxel encodes variation in a spatially localized region across multiple feature maps. This region is fixed for all feature maps; however, the contribution of each feature map to voxel activity is weighted. Thus, the model has two separable sets of parameters: "where" parameters that characterize the location and extent of pooling over visual features, and "what" parameters that characterize tuning to visual features. The "where" parameters are analogous to classical receptive fields, while the "what" parameters are analogous to classical tuning functions. By treating these as separable parameters, the fwRF model complexity is independent of the resolution of the underlying feature maps. This makes it possible to estimate models with thousands of high-resolution feature maps from relatively small amounts of data. Once a fwRF model has been estimated from data, spatial pooling and feature tuning can be read off directly with no (or very little) additional post-processing or in-silico experimentation. We describe an optimization algorithm for estimating fwRF models from data acquired during standard visual neuroimaging experiments. We then demonstrate the model's application to two distinct sets of features: Gabor wavelets and features supplied by a deep convolutional neural network. We show that when Gabor feature maps are used, the fwRF model recovers receptive fields and spatial frequency tuning functions consistent with known organizational principles of the visual cortex. We also show that a fwRF model can be used to regress entire deep convolutional networks against brain activity. The ability to use whole networks in a single encoding model yields state-of-the-art prediction accuracy. Our results suggest a wide variety of uses for the feature-weighted receptive field model, from retinotopic mapping with natural scenes to regressing the activities of whole deep neural networks onto measured brain activity. Copyright © 2017. Published by Elsevier Inc.
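
    Reading the model as described, a voxel's predicted response is a Gaussian "where" pooling over each feature map followed by a weighted "what" sum. A schematic re-implementation under that reading, not the authors' released code:

```python
import numpy as np

def fwrf_predict(feature_maps, x0, y0, sigma, weights):
    """Feature-weighted receptive field prediction for one voxel.

    feature_maps:      (n_features, H, W) feature maps for one stimulus
    (x0, y0, sigma):   shared Gaussian pooling field ("where" parameters)
    weights:           (n_features,) feature tuning ("what" parameters)"""
    n_features, H, W = feature_maps.shape
    yy, xx = np.mgrid[0:H, 0:W]
    pool = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * sigma ** 2))
    pool /= pool.sum()
    pooled = (feature_maps * pool).sum(axis=(1, 2))   # one value per map
    return float(weights @ pooled)
```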

  9. A pilot study of working memory and academic achievement in college students with ADHD.

    PubMed

    Gropper, Rachel J; Tannock, Rosemary

    2009-05-01

    To investigate working memory (WM), academic achievement, and their relationship in university students with attention-deficit/hyperactivity disorder (ADHD). Participants were university students with previously confirmed diagnoses of ADHD (n = 16) and normal control (NC) students (n = 30). Participants completed 3 auditory-verbal WM measures, 2 visual-spatial WM measures, and 1 control executive function task. Also, they self-reported grade point averages (GPAs) based on university courses. The ADHD group displayed significant weaknesses on auditory-verbal WM tasks and 1 visual-spatial task. They also showed a nonsignificant trend for lower GPAs. Within the entire sample, there was a significant relationship between GPA and auditory-verbal WM. WM impairments are evident in a subgroup of the ADHD population attending university. WM abilities are linked with, and thus may compromise, academic attainment. Parents and physicians are advised to counsel university-bound students with ADHD to contact the university accessibility services to provide them with academic guidance.

  10. Development of a test battery for evaluating speech perception in complex listening environments.

    PubMed

    Brungart, Douglas S; Sheffield, Benjamin M; Kubli, Lina R

    2014-08-01

    In the real world, spoken communication occurs in complex environments that involve audiovisual speech cues, spatially separated sound sources, reverberant listening spaces, and other complicating factors that influence speech understanding. However, most clinical tools for assessing speech perception are based on simplified listening environments that do not reflect the complexities of real-world listening. In this study, speech materials from the QuickSIN speech-in-noise test by Killion, Niquette, Gudmundsen, Revit, and Banerjee [J. Acoust. Soc. Am. 116, 2395-2405 (2004)] were modified to simulate eight listening conditions spanning the range of auditory environments listeners encounter in everyday life. The standard QuickSIN test method was used to estimate 50% speech reception thresholds (SRT50) in each condition. A method of adjustment procedure was also used to obtain subjective estimates of the lowest signal-to-noise ratio (SNR) where the listeners were able to understand 100% of the speech (SRT100) and the highest SNR where they could detect the speech but could not understand any of the words (SRT0). The results show that the modified materials maintained most of the efficiency of the QuickSIN test procedure while capturing performance differences across listening conditions comparable to those reported in previous studies that have examined the effects of audiovisual cues, binaural cues, room reverberation, and time compression on the intelligibility of speech.
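
    SRTs of the kind estimated here correspond to points on a psychometric function relating intelligibility to SNR. The fit below is an illustration only: QuickSIN derives SRT50 from keyword counts with its own scoring rule, and the data points are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, srt50, slope):
    """Proportion correct as a logistic function of SNR (dB)."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - srt50)))

snrs = np.array([-10.0, -5.0, 0.0, 5.0, 10.0, 15.0])    # hypothetical
p_correct = np.array([0.05, 0.20, 0.50, 0.80, 0.95, 0.99])
(srt50, slope), _ = curve_fit(logistic, snrs, p_correct, p0=(0.0, 0.5))
print(f"SRT50 = {srt50:.1f} dB SNR")
```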

  11. Brain Activation Patterns in Response to Conspecific and Heterospecific Social Acoustic Signals in Female Plainfin Midshipman Fish, Porichthys notatus.

    PubMed

    Mohr, Robert A; Chang, Yiran; Bhandiwad, Ashwin A; Forlano, Paul M; Sisneros, Joseph A

    2018-01-01

    While the peripheral auditory system of fish has been well studied, less is known about how the fish's brain and central auditory system process complex social acoustic signals. The plainfin midshipman fish, Porichthys notatus, has become a good species for investigating the neural basis of acoustic communication because the production and reception of acoustic signals is paramount for this species' reproductive success. Nesting males produce long-duration advertisement calls that females detect and localize among the noise in the intertidal zone to successfully find mates and spawn. How female midshipman are able to discriminate male advertisement calls from environmental noise and other acoustic stimuli is unknown. Using the immediate early gene product cFos as a marker for neural activity, we quantified neural activation of the ascending auditory pathway in female midshipman exposed to conspecific advertisement calls, heterospecific white seabass calls, or ambient environment noise. We hypothesized that auditory hindbrain nuclei would be activated by general acoustic stimuli (ambient noise and other biotic acoustic stimuli) whereas auditory neurons in the midbrain and forebrain would be selectively activated by conspecific advertisement calls. We show that neural activation in two regions of the auditory hindbrain, i.e., the rostral intermediate division of the descending octaval nucleus and the ventral division of the secondary octaval nucleus, did not differ via cFos immunoreactive (cFos-ir) activity when exposed to different acoustic stimuli. In contrast, female midshipman exposed to conspecific advertisement calls showed greater cFos-ir in the nucleus centralis of the midbrain torus semicircularis compared to fish exposed only to ambient noise. No difference in cFos-ir was observed in the torus semicircularis of animals exposed to conspecific versus heterospecific calls. However, cFos-ir was greater in two forebrain structures that receive auditory input, i.e., the central posterior nucleus of the thalamus and the anterior tuberal hypothalamus, when exposed to conspecific calls versus either ambient noise or heterospecific calls. Our results suggest that higher-order neurons in the female midshipman midbrain torus semicircularis, thalamic central posterior nucleus, and hypothalamic anterior tuberal nucleus may be necessary for the discrimination of complex social acoustic signals. Furthermore, neurons in the central posterior and anterior tuberal nuclei are differentially activated by exposure to conspecific versus other acoustic stimuli. © 2018 S. Karger AG, Basel.

  12. Task relevance modulates the behavioural and neural effects of sensory predictions

    PubMed Central

    Friston, Karl J.; Nobre, Anna C.

    2017-01-01

    The brain is thought to generate internal predictions to optimize behaviour. However, it is unclear whether prediction signalling is an automatic brain function or depends on task demands. Here, we manipulated the spatial/temporal predictability of visual targets, and the relevance of spatial/temporal information provided by auditory cues. We used magnetoencephalography (MEG) to measure participants’ brain activity during task performance. Task relevance modulated the influence of predictions on behaviour: spatial/temporal predictability improved spatial/temporal discrimination accuracy, but not vice versa. To explain these effects, we used behavioural responses to estimate subjective predictions under an ideal-observer model. Model-based time-series of predictions and prediction errors (PEs) were associated with dissociable neural responses: predictions correlated with cue-induced beta-band activity in auditory regions and alpha-band activity in visual regions, while stimulus-bound PEs correlated with gamma-band activity in posterior regions. Crucially, task relevance modulated these spectral correlates, suggesting that current goals influence PE and prediction signalling. PMID:29206225

  13. Spatial band-pass filtering aids decoding musical genres from auditory cortex 7T fMRI.

    PubMed

    Sengupta, Ayan; Pollmann, Stefan; Hanke, Michael

    2018-01-01

    Spatial filtering strategies, combined with multivariate decoding analysis of BOLD images, have been used to investigate the nature of the neural signal underlying the discriminability of brain activity patterns evoked by sensory stimulation, primarily in the visual cortex. Reported evidence indicates that such signals are spatially broadband in nature and are not primarily comprised of fine-grained activation patterns. However, it is unclear whether this is a general property of the BOLD signal or whether it is specific to the details of the employed analyses and stimuli. Here we performed an analysis of publicly available, high-resolution 7T fMRI data on the BOLD response to musical genres in primary auditory cortex that matches a previously conducted study on decoding visual orientation from V1. The results show that the pattern of decoding accuracies with respect to different types and levels of spatial filtering is comparable to that obtained from V1, despite considerable differences in the respective cortical circuitry.
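
    A common way to realize the spatial filtering strategy mentioned above is a difference of Gaussians, which retains pattern scales between two smoothing widths. A sketch; the study's exact filter construction may differ:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_bandpass(volume, sigma_small, sigma_large):
    """Band-pass a BOLD volume: keep spatial scales between the two
    Gaussian widths (in voxels); requires sigma_small < sigma_large."""
    return (gaussian_filter(volume, sigma_small)
            - gaussian_filter(volume, sigma_large))
```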

  14. Intercepting a sound without vision

    PubMed Central

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2017-01-01

    Visual information is extremely important for generating internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities may nevertheless be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early-blind individuals’ performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a slight bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939

  15. Realigning Thunder and Lightning: Temporal Adaptation to Spatiotemporally Distant Events

    PubMed Central

    Navarra, Jordi; Fernández-Prieto, Irune; Garcia-Morera, Joel

    2013-01-01

    The brain is able to realign asynchronous signals that approximately coincide in both space and time. Given that many experience-based links between visual and auditory stimuli are established in the absence of spatiotemporal proximity, we investigated whether or not temporal realignment arises in these conditions. Participants received a 3-min exposure to visual and auditory stimuli that were separated by 706 ms and appeared either from the same (Experiment 1) or from different spatial positions (Experiment 2). A simultaneity judgment task (SJ) was administered right afterwards. Temporal realignment between vision and audition was observed, in both Experiment 1 and 2, when comparing the participants’ SJs after this exposure phase with those obtained after a baseline exposure to audiovisual synchrony. However, this effect was present only when the visual stimuli preceded the auditory stimuli during the exposure to asynchrony. A similar pattern of results (temporal realignment after exposure to visual-leading asynchrony but not after exposure to auditory-leading asynchrony) was obtained using temporal order judgments (TOJs) instead of SJs (Experiment 3). Taken together, these results suggest that temporal recalibration still occurs for visual and auditory stimuli that fall clearly outside the so-called temporal window for multisensory integration and appear from different spatial positions. This temporal realignment may be modulated by long-term experience with the kind of asynchrony (vision-leading) that we most frequently encounter in the outside world (e.g., while perceiving distant events). PMID:24391928

  16. Influence of aging on human sound localization

    PubMed Central

    Dobreva, Marina S.; O'Neill, William E.

    2011-01-01

    Errors in sound localization, associated with age-related changes in peripheral and central auditory function, can pose threats to self and others in a commonly encountered environment such as a busy traffic intersection. This study aimed to quantify the accuracy and precision (repeatability) of free-field human sound localization as a function of advancing age. Head-fixed young, middle-aged, and elderly listeners localized band-passed targets using visually guided manual laser pointing in a darkened room. Targets were presented in the frontal field by a robotically controlled loudspeaker assembly hidden behind a screen. Broadband targets (0.1–20 kHz) activated all auditory spatial channels, whereas low-pass and high-pass targets selectively isolated interaural time and intensity difference cues (ITDs and IIDs) for azimuth and high-frequency spectral cues for elevation. In addition, to assess the upper frequency limit of ITD utilization across age groups more thoroughly, narrowband targets were presented at 250-Hz intervals from 250 Hz up to ∼2 kHz. Young subjects generally showed horizontal overestimation (overshoot) and vertical underestimation (undershoot) of auditory target location, and this effect varied with frequency band. Accuracy and/or precision worsened in older individuals for broadband, high-pass, and low-pass targets, reflective of peripheral but also central auditory aging. In addition, compared with young adults, middle-aged, and elderly listeners showed pronounced horizontal localization deficiencies (imprecision) for narrowband targets within 1,250–1,575 Hz, congruent with age-related central decline in auditory temporal processing. Findings underscore the distinct neural processing of the auditory spatial cues in sound localization and their selective deterioration with advancing age. PMID:21368004
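
    The ITD cue isolated by the low-pass targets is well approximated, for a distant source and a spherical head, by the classic Woodworth formula. A worked example; the head radius is a textbook value, not a measurement from this study:

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Interaural time difference (s) for a far-field source at a given
    azimuth (0 deg = straight ahead), Woodworth spherical-head model:
    ITD = (a / c) * (theta + sin(theta))."""
    theta = np.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + np.sin(theta))

print(woodworth_itd(90.0) * 1e6, "microseconds")   # roughly 655 us
```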

  17. Auditory Training Effects on the Listening Skills of Children With Auditory Processing Disorder.

    PubMed

    Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva

    2016-01-01

    Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediate postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects. The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the Listening in Spatialized Noise test, which assesses sentence perception in various configurations of masking speech, and in which the target speakers and test materials were unrelated to the training materials; (2) the Children's Auditory Performance Scale that assesses listening skills, completed by the children's teachers; and (3) the Clinical Evaluation of Language Fundamentals-4 pragmatic profile that assesses pragmatic language use, completed by parents. All outcome measures significantly improved at immediate postintervention in the intervention group only, with effect sizes ranging from 0.76 to 1.7. Improvements in speech-in-noise performance correlated with improved scores in the Children's Auditory Performance Scale questionnaire in the trained group only. Baseline language and cognitive assessments did not predict better training outcome. Improvements in speech-in-noise performance were sustained 3 months postintervention. Broad speech-based auditory training led to improved auditory processing skills as reflected in speech-in-noise test performance and in better functional listening in real life. The observed correlation between improved functional listening and improved speech-in-noise perception in the trained group suggests that improved listening was a direct generalization of the auditory training.

  18. Space-time codependence of retinal ganglion cells can be explained by novel and separable components of their receptive fields.

    PubMed

    Cowan, Cameron S; Sabharwal, Jasdeep; Wu, Samuel M

    2016-09-01

    Reverse correlation methods such as spike-triggered averaging consistently identify the spatial center in the linear receptive fields (RFs) of retinal ganglion cells (GCs). However, the spatial antagonistic surround observed in classical experiments has proven more elusive. Tests for the antagonistic surround have heretofore relied on models that make questionable simplifying assumptions such as space-time separability and radial homogeneity/symmetry. We circumvented these, along with other common assumptions, and observed a linear antagonistic surround in 754 of 805 mouse GCs. By characterizing the RF's space-time structure, we found the overall linear RF's inseparability could be accounted for both by tuning differences between the center and surround and differences within the surround. Finally, we applied this approach to characterize spatial asymmetry in the RF surround. These results shed new light on the spatiotemporal organization of GC linear RFs and highlight a major contributor to its inseparability. © 2016 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of the American Physiological Society and The Physiological Society.
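
    Space-time separability of a linear RF is commonly assessed by arranging the spike-triggered average (STA) as a time-by-space matrix and asking how much of its power a rank-1 (outer-product) approximation captures. Below is a minimal sketch of that generic approach on simulated data; it is not the authors' pipeline, which deliberately avoids such simplifying assumptions:

```python
# Sketch: spike-triggered average (STA) and a rank-1 separability test via SVD.
# Illustrative of the generic reverse-correlation approach, with simulated
# data; not the authors' actual analysis code.
import numpy as np

def spike_triggered_average(stimulus, spike_counts, n_lags):
    """stimulus: (T, n_pixels) array; spike_counts: (T,) array.
    Returns an (n_lags, n_pixels) space-time STA (row 0 = most recent lag)."""
    sta = np.zeros((n_lags, stimulus.shape[1]))
    for t in np.nonzero(spike_counts)[0]:
        if t >= n_lags:
            sta += spike_counts[t] * stimulus[t - n_lags:t][::-1]
    return sta / max(spike_counts.sum(), 1)

def separability_index(sta):
    """Fraction of STA power captured by the best space-time-separable
    (rank-1) approximation; 1.0 means perfectly separable."""
    s = np.linalg.svd(sta, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

rng = np.random.default_rng(0)
stim = rng.standard_normal((20000, 64))   # white-noise stimulus movie
spikes = rng.poisson(0.1, size=20000)     # toy spike counts
sta = spike_triggered_average(stim, spikes, n_lags=15)
print(f"separability index: {separability_index(sta):.2f}")
```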

  19. The School Office: An Overview.

    ERIC Educational Resources Information Center

    Hudson, Randy

    2000-01-01

    Presents an overview of the fundamental principles of school office design that remain constant despite changes in building technologies and in technological and spatial flexibility. Principles discussed include the school office and traffic patterns, security, and visitor reception requirements. (GR)

  20. Efficient receptive field tiling in primate V1

    PubMed Central

    Nauhaus, Ian; Nielsen, Kristina J.; Callaway, Edward M.

    2017-01-01

    The primary visual cortex (V1) encodes a diverse set of visual features, including orientation, ocular dominance (OD) and spatial frequency (SF), whose joint organization must be precisely structured to optimize coverage within the retinotopic map. Prior experiments have only identified efficient coverage based on orthogonal maps. Here, we used two-photon calcium imaging to reveal an alternative arrangement for OD and SF maps in macaque V1; their gradients run parallel but with unique spatial periods, whereby low SF regions coincide with monocular regions. Next, we mapped receptive fields and found surprisingly precise micro-retinotopy that yields a smaller point-image and requires a more efficient inter-map geometry, thus underscoring the significance of map relationships. While smooth retinotopy is constraining, studies suggest that it improves both wiring economy and the readout of the V1 population code downstream. Altogether, these data indicate that connectivity within V1 is finely tuned and precise at the level of individual neurons. PMID:27499086

  1. Laser Stimulation of Single Auditory Nerve Fibers

    PubMed Central

    Littlefield, Philip D.; Vujanovic, Irena; Mundi, Jagmeet; Matic, Agnella Izzo; Richter, Claus-Peter

    2011-01-01

    Objectives/Hypothesis One limitation with cochlear implants is the difficulty stimulating spatially discrete spiral ganglion cell groups because of electrode interactions. Multipolar electrodes have improved on this some, but at the cost of much higher device power consumption. Recently, it has been shown that spatially selective stimulation of the auditory nerve is possible with a mid-infrared laser aimed at the spiral ganglion via the round window. However, these neurons must be driven at adequate rates for optical radiation to be useful in cochlear implants. We herein use single-fiber recordings to characterize the responses of auditory neurons to optical radiation. Study Design In vivo study using normal-hearing adult gerbils. Methods Two diode lasers were used for stimulation of the auditory nerve. They operated between 1.844 μm and 1.873 μm, with pulse durations of 35 μs to 1,000 μs, and at repetition rates up to 1,000 pulses per second (pps). The laser outputs were coupled to a 200-μm-diameter optical fiber placed against the round window membrane and oriented toward the spiral ganglion. The auditory nerve was exposed through a craniotomy, and recordings were taken from single fibers during acoustic and laser stimulation. Results Action potentials occurred 2.5 ms to 4.0 ms after the laser pulse. The latency jitter was up to 3 ms. Maximum rates of discharge averaged 97 ± 52.5 action potentials per second. The neurons did not respond reliably to the laser at stimulation rates over 100 pps. Conclusions Auditory neurons can be stimulated by a laser beam passing through the round window membrane and driven at rates sufficient for useful auditory information. Optical stimulation and electrical stimulation have different characteristics, which could be selectively exploited in future cochlear implants. Level of Evidence Not applicable. PMID:20830761

  2. Crossmodal attention switching: auditory dominance in temporal discrimination tasks.

    PubMed

    Lukas, Sarah; Philipp, Andrea M; Koch, Iring

    2014-11-01

    Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Eye-movements intervening between two successive sounds disrupt comparisons of auditory location

    PubMed Central

    Pavani, Francesco; Husain, Masud; Driver, Jon

    2008-01-01

    Summary Many studies have investigated how saccades may affect the internal representation of visual locations across eye-movements. Here we studied instead whether eye-movements can affect auditory spatial cognition. In two experiments, participants judged the relative azimuth (same/different) of two successive sounds presented from a horizontal array of loudspeakers, separated by a 2.5-s delay. Eye-position was either held constant throughout the trial (being directed in a fixed manner to the far left or right of the loudspeaker array), or had to be shifted to the opposite side of the array during the retention delay between the two sounds, after the first sound but before the second. Loudspeakers were either visible (Experiment 1) or occluded from sight (Experiment 2). In both cases, shifting eye-position during the silent delay-period affected auditory performance in the successive auditory comparison task, even though the auditory inputs to be judged were equivalent. Sensitivity (d′) for the auditory discrimination was disrupted, specifically when the second sound shifted in the opposite direction to the intervening eye-movement with respect to the first sound. These results indicate that eye-movements affect internal representation of auditory location. PMID:18566808

  4. Eye-movements intervening between two successive sounds disrupt comparisons of auditory location.

    PubMed

    Pavani, Francesco; Husain, Masud; Driver, Jon

    2008-08-01

    Many studies have investigated how saccades may affect the internal representation of visual locations across eye-movements. Here, we studied, instead, whether eye-movements can affect auditory spatial cognition. In two experiments, participants judged the relative azimuth (same/different) of two successive sounds presented from a horizontal array of loudspeakers, separated by a 2.5-s delay. Eye-position was either held constant throughout the trial (being directed in a fixed manner to the far left or right of the loudspeaker array) or had to be shifted to the opposite side of the array during the retention delay between the two sounds, after the first sound but before the second. Loudspeakers were either visible (Experiment 1) or occluded from sight (Experiment 2). In both cases, shifting eye-position during the silent delay-period affected auditory performance in the successive auditory comparison task, even though the auditory inputs to be judged were equivalent. Sensitivity (d') for the auditory discrimination was disrupted, specifically when the second sound shifted in the opposite direction to the intervening eye-movement with respect to the first sound. These results indicate that eye-movements affect internal representation of auditory location.

  5. An investigation of spatial representation of pitch in individuals with congenital amusia.

    PubMed

    Lu, Xuejing; Sun, Yanan; Thompson, William Forde

    2017-09-01

    Spatial representation of pitch plays a central role in auditory processing. However, it is unknown whether impaired auditory processing is associated with impaired pitch-space mapping. Experiment 1 examined spatial representation of pitch in individuals with congenital amusia using a stimulus-response compatibility (SRC) task. For amusic and non-amusic participants, pitch classification was faster and more accurate when correct responses involved a physical action that was spatially congruent with the pitch height of the stimulus than when it was incongruent. However, this spatial representation of pitch was not as stable in amusic individuals, as revealed by slower response times compared with control individuals. One explanation is that the SRC effect in amusics reflects a linguistic association, requiring additional time to link pitch height and spatial location. To test this possibility, Experiment 2 employed a colour-classification task. Participants judged colour while ignoring a concurrent pitch by pressing one of two response keys positioned vertically to be congruent or incongruent with the pitch. The association between pitch and space was found in both groups, with comparable response times in the two groups, suggesting that amusic individuals are only slower to respond in tasks involving explicit judgments of pitch.

  6. Spatial and identity negative priming in audition: evidence of feature binding in auditory spatial memory.

    PubMed

    Mayr, Susanne; Buchner, Axel; Möller, Malte; Hauke, Robert

    2011-08-01

    Two experiments are reported with identical auditory stimulation in three-dimensional space but with different instructions. Participants localized a cued sound (Experiment 1) or identified a sound at a cued location (Experiment 2). A distractor sound at another location had to be ignored. The prime distractor and the probe target sound were manipulated with respect to sound identity (repeated vs. changed) and location (repeated vs. changed). The localization task revealed a symmetric pattern of partial repetition costs: Participants were impaired on trials with identity-location mismatches between the prime distractor and probe target, that is, when either the sound was repeated but not the location or vice versa. The identification task revealed an asymmetric pattern of partial repetition costs: Responding was slowed down when the prime distractor sound was repeated as the probe target, but at another location; identity changes at the same location were not impaired. Additionally, there was evidence of retrieval of incompatible prime responses in the identification task. It is concluded that feature binding of auditory prime distractor information takes place regardless of whether the task is to identify or locate a sound. Instructions determine the kind of identity-location mismatch that is detected. Identity information predominates over location information in auditory memory.

  7. Hearing shapes our perception of time: temporal discrimination of tactile stimuli in deaf people.

    PubMed

    Bolognini, Nadia; Cecchetto, Carlo; Geraci, Carlo; Maravita, Angelo; Pascual-Leone, Alvaro; Papagno, Costanza

    2012-02-01

    Confronted with the loss of one type of sensory input, we compensate using information conveyed by other senses. However, losing one type of sensory information at specific developmental times may lead to deficits across all sensory modalities. We addressed the effect of auditory deprivation on the development of tactile abilities, taking into account changes occurring at the behavioral and cortical level. Congenitally deaf and hearing individuals performed two tactile tasks, the first requiring the discrimination of the temporal duration of touches and the second requiring the discrimination of their spatial length. Compared with hearing individuals, deaf individuals were impaired only in tactile temporal processing. To explore the neural substrate of this difference, we ran a TMS experiment. In deaf individuals, the auditory association cortex was involved in temporal and spatial tactile processing, with the same chronometry as the primary somatosensory cortex. In hearing participants, the involvement of auditory association cortex occurred at a later stage and selectively for temporal discrimination. The different chronometry in the recruitment of the auditory cortex in deaf individuals correlated with the tactile temporal impairment. Thus, early hearing experience seems to be crucial to develop an efficient temporal processing across modalities, suggesting that plasticity does not necessarily result in behavioral compensation.

  8. Spatial Attention Modulates the Precedence Effect

    ERIC Educational Resources Information Center

    London, Sam; Bishop, Christopher W.; Miller, Lee M.

    2012-01-01

    Communication and navigation in real environments rely heavily on the ability to distinguish objects in acoustic space. However, auditory spatial information is often corrupted by conflicting cues and noise such as acoustic reflections. Fortunately the brain can apply mechanisms at multiple levels to emphasize target information and mitigate such…

  9. Cerebral responses to local and global auditory novelty under general anesthesia

    PubMed Central

    Uhrig, Lynn; Janssen, David; Dehaene, Stanislas; Jarraya, Béchir

    2017-01-01

    Primate brains can detect a variety of unexpected deviations in auditory sequences. The local-global paradigm dissociates two hierarchical levels of auditory predictive coding by examining the brain responses to first-order (local) and second-order (global) sequence violations. Using the macaque model, we previously demonstrated that, in the awake state, local violations cause focal auditory responses while global violations activate a brain circuit comprising prefrontal, parietal and cingulate cortices. Here we used the same local-global auditory paradigm to clarify the encoding of the hierarchical auditory regularities in anesthetized monkeys and compared their brain responses to those obtained in the awake state as measured with fMRI. Both propofol, a GABAA agonist, and ketamine, an NMDA antagonist, left intact or even enhanced the cortical response to auditory inputs. The local effect vanished during propofol anesthesia and shifted spatially during ketamine anesthesia compared with wakefulness. Under increasing levels of propofol, we observed a progressive disorganization of the global effect in prefrontal, parietal and cingulate cortices and its complete suppression under ketamine anesthesia. Anesthesia also suppressed thalamic activations to the global effect. These results suggest that anesthesia preserves initial auditory processing, but disturbs both short-term and long-term auditory predictive coding mechanisms. The disorganization of auditory novelty processing under anesthesia relates to a loss of thalamic responses to novelty and to a disruption of higher-order functional cortical networks in parietal, prefrontal and cingulate cortices. PMID:27502046

  10. A novel hybrid auditory BCI paradigm combining ASSR and P300.

    PubMed

    Kaongoen, Netiwit; Jo, Sungho

    2017-03-01

    Brain-computer interface (BCI) is a technology that provides an alternative way of communication by translating brain activities into digital commands. Because patients with visual impairment cannot use vision-dependent BCIs, auditory stimuli have been used to substitute for the conventional visual stimuli. This paper introduces a hybrid auditory BCI that utilizes and combines auditory steady state response (ASSR) and spatial-auditory P300 BCI to improve the performance of the auditory BCI system. The system works by simultaneously presenting auditory stimuli with different pitches and amplitude modulation (AM) frequencies to the user, with beep sounds occurring randomly between all sound sources. Attention to different auditory stimuli yields different ASSRs, and beep sounds trigger the P300 response when they occur in the target channel, so the system can utilize both features for classification. The proposed ASSR/P300-hybrid auditory BCI system achieves 85.33% accuracy with a 9.11 bits/min information transfer rate (ITR) in a binary classification problem. The proposed system outperformed the P300 BCI system (74.58% accuracy with 4.18 bits/min ITR) and the ASSR BCI system (66.68% accuracy with 2.01 bits/min ITR) on the binary-class problem. The system is completely vision-independent. This work demonstrates that combining ASSR and P300 BCI into a hybrid system could result in better performance and could help in the development of future auditory BCIs. Copyright © 2017 Elsevier B.V. All rights reserved.
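
    Information transfer rates in BCI studies are conventionally computed with Wolpaw's formula from the number of classes, the classification accuracy, and the time per selection. The abstract does not report the selection time, so it is a free parameter in this sketch; the value used below is an assumption chosen only so that the binary example roughly reproduces the quoted figure:

```python
# Sketch: Wolpaw's information transfer rate (ITR) formula, the convention in
# BCI work. The per-selection duration is an assumption (not reported in the
# abstract).
import math

def itr_bits_per_min(n_classes, accuracy, seconds_per_selection):
    """Bits per selection times selections per minute."""
    p, n = accuracy, n_classes
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / seconds_per_selection)

# Hybrid ASSR/P300 system: 85.33% binary accuracy; ~2.6 s per selection
# (assumed) gives roughly the quoted 9.11 bits/min.
print(itr_bits_per_min(2, 0.8533, seconds_per_selection=2.6))  # ~9.2 bits/min
```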

  11. Spatial integration and cortical dynamics.

    PubMed

    Gilbert, C D; Das, A; Ito, M; Kapadia, M; Westheimer, G

    1996-01-23

    Cells in adult primary visual cortex are capable of integrating information over much larger portions of the visual field than was originally thought. Moreover, their receptive field properties can be altered by the context within which local features are presented and by changes in visual experience. The substrate for both spatial integration and cortical plasticity is likely to be found in a plexus of long-range horizontal connections, formed by cortical pyramidal cells, which link cells within each cortical area over distances of 6-8 mm. The relationship between horizontal connections and cortical functional architecture suggests a role in visual segmentation and spatial integration. The distribution of lateral interactions within striate cortex was visualized with optical recording, and their functional consequences were explored by using comparable stimuli in human psychophysical experiments and in recordings from alert monkeys. They may represent the substrate for perceptual phenomena such as illusory contours, surface fill-in, and contour saliency. The dynamic nature of receptive field properties and cortical architecture has been seen over time scales ranging from seconds to months. One can induce a remapping of the topography of visual cortex by making focal binocular retinal lesions. Shorter-term plasticity of cortical receptive fields was observed following brief periods of visual stimulation. The mechanisms involved entail altering the effectiveness of existing cortical connections for the short-term changes, and sprouting of axon collaterals and synaptogenesis for the long-term changes. The mutability of cortical function implies a continual process of calibration and normalization of the perception of visual attributes that is dependent on sensory experience throughout adulthood and might further represent the mechanism of perceptual learning.

  12. Nonverbal spatially selective attention in 4- and 5-year-old children.

    PubMed

    Sanders, Lisa D; Zobel, Benjamin H

    2012-07-01

    Under some conditions 4- and 5-year-old children can differentially process sounds from attended and unattended locations. In fact, the latency of spatially selective attention effects on auditory processing as measured with event-related potentials (ERPs) is quite similar in young children and adults. However, it is not clear if developmental differences in the polarity, distribution, and duration of attention effects are best attributed to acoustic characteristics, availability of non-spatial attention cues, task demands, or domain. In the current study adults and children were instructed to attend to one of two simultaneously presented soundscapes (e.g., city sounds or night sounds) to detect targets (e.g., car horn or owl hoot) in the attended channel only. Probes presented from the same location as the attended soundscape elicited a larger negativity by 80 ms after onset in both adults and children. This initial negative difference (Nd) was followed by a larger positivity for attended probes in adults and another negativity for attended probes in children. The results indicate that the neural systems by which attention modulates early auditory processing are available for young children even when presented with nonverbal sounds. They also suggest important interactions between attention, acoustic characteristics, and maturity on auditory evoked potentials. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Audio-Visual, Visuo-Tactile and Audio-Tactile Correspondences in Preschoolers.

    PubMed

    Nava, Elena; Grassi, Massimo; Turati, Chiara

    2016-01-01

    Interest in crossmodal correspondences has recently seen a renaissance thanks to numerous studies in human adults. Yet, still very little is known about crossmodal correspondences in children, particularly in sensory pairings other than audition and vision. In the current study, we investigated whether 4-5-year-old children match auditory pitch to the spatial motion of visual objects (audio-visual condition). In addition, we investigated whether this correspondence extends to touch, i.e., whether children also match auditory pitch to the spatial motion of touch (audio-tactile condition) and the spatial motion of visual objects to touch (visuo-tactile condition). In two experiments, two different groups of children were asked to indicate which of two stimuli fitted best with a centrally located third stimulus (Experiment 1), or to report whether two presented stimuli fitted together well (Experiment 2). We found sensitivity to the congruency of all of the sensory pairings only in Experiment 2, suggesting that only under specific circumstances can these correspondences be observed. Our results suggest that pitch-height correspondences for audio-visual and audio-tactile combinations may still be weak in preschool children; we speculate that this could be due to linguistic and auditory cues that are still developing at age five.

  14. Preconditioning of Spatial and Auditory Cues: Roles of the Hippocampus, Frontal Cortex, and Cue-Directed Attention

    PubMed Central

    Talk, Andrew C.; Grasby, Katrina L.; Rawson, Tim; Ebejer, Jane L.

    2016-01-01

    Loss of function of the hippocampus or frontal cortex is associated with reduced performance on memory tasks, in which subjects are incidentally exposed to cues at specific places in the environment and are subsequently asked to recollect the location at which the cue was experienced. Here, we examined the roles of the rodent hippocampus and frontal cortex in cue-directed attention during encoding of memory for the location of a single incidentally experienced cue. During a spatial sensory preconditioning task, rats explored an elevated platform while an auditory cue was incidentally presented at one corner. The opposite corner acted as an unpaired control location. The rats demonstrated recollection of location by avoiding the paired corner after the auditory cue was in turn paired with shock. Damage to either the dorsal hippocampus or the frontal cortex impaired this memory ability. However, we also found that hippocampal lesions enhanced attention directed towards the cue during the encoding phase, while frontal cortical lesions reduced cue-directed attention. These results suggest that the deficit in spatial sensory preconditioning caused by frontal cortical damage may be mediated by inattention to the location of cues during the latent encoding phase, while deficits following hippocampal damage must be related to other mechanisms such as generation of neural plasticity. PMID:27999366

  15. Human sensitivity to differences in the rate of auditory cue change.

    PubMed

    Maloff, Erin S; Grantham, D Wesley; Ashmead, Daniel H

    2013-05-01

    Measurement of sensitivity to differences in the rate of change of auditory signal parameters is complicated by confounds among duration, extent, and velocity of the changing signal. Dooley and Moore [(1988) J. Acoust. Soc. Am. 84(4), 1332-1337] proposed a method for measuring sensitivity to rate of change using a duration discrimination task. They reported improved duration discrimination when an additional intensity or frequency change cue was present. The current experiments were an attempt to use this method to measure sensitivity to the rate of change in intensity and spatial position. Experiment 1 investigated whether duration discrimination was enhanced when additional cues of rate of intensity change, rate of spatial position change, or both were provided. Experiment 2 determined whether participant listening experience or the testing environment influenced duration discrimination task performance. Experiment 3 assessed whether duration discrimination could be used to measure sensitivity to rates of changes in intensity and spatial position for stimuli with lower rates of change, as well as emphasizing the constancy of the velocity cue. Results of these experiments showed that duration discrimination was impaired rather than enhanced by the additional velocity cues. The findings are discussed in terms of the demands of listening to concurrent changes along multiple auditory dimensions.

  16. SeaTouch: A Haptic and Auditory Maritime Environment for Non Visual Cognitive Mapping of Blind Sailors

    NASA Astrophysics Data System (ADS)

    Simonnet, Mathieu; Jacobson, Dan; Vieilledent, Stephane; Tisseau, Jacques

    Navigating consists of coordinating egocentric and allocentric spatial frames of reference. Virtual environments have provided researchers in the spatial community with tools to investigate the learning of space. The issue of the transfer between virtual and real situations is not trivial. A central question is the role of frames of reference in mediating spatial knowledge transfer to external surroundings, as is the effect of different sensory modalities accessed in simulated and real worlds. This challenges the capacity of blind people to use virtual reality to explore a scene without graphics. The present experiment involves a haptic and auditory maritime virtual environment. In triangulation tasks, we measured systematic errors; preliminary results show an ability to learn configurational knowledge and to navigate through it without vision. Subjects appeared to take advantage of getting lost in an egocentric “haptic” view in the virtual environment to improve performance in the real environment.

  17. Psychophysical spectro-temporal receptive fields in an auditory task.

    PubMed

    Shub, Daniel E; Richards, Virginia M

    2009-05-01

    Psychophysical relative weighting functions, which provide information about the importance of different regions of a stimulus in forming decisions, are traditionally estimated using trial-based procedures, where a single stimulus is presented and a single response is recorded. Everyday listening is much more "free-running" in that we often must detect randomly occurring signals in the presence of a continuous background. Psychophysical relative weighting functions have not been measured with free-running paradigms. Here, we combine a free-running paradigm with the reverse correlation technique used to estimate physiological spectro-temporal receptive fields (STRFs) to generate psychophysical relative weighting functions that are analogous to physiological STRFs. The psychophysical task required the detection of a fixed target signal (a sequence of spectro-temporally coherent tone pips with a known frequency) in the presence of a continuously presented informational masker (spectro-temporally random tone pips). A comparison of psychophysical relative weighting functions estimated with the current free-running paradigm and trial-based paradigms suggests that in informational-masking tasks subjects' decision strategies are similar in both free-running and trial-based paradigms. For more cognitively challenging tasks there may be differences in the decision strategies with free-running and trial-based paradigms.
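
    In a free-running task, the response-triggered analog of the spike-triggered average can be computed by averaging the masker's recent time-frequency content around each detection response and subtracting the overall mean. A minimal sketch of that idea, with illustrative function names and simulated data (not the authors' code):

```python
# Sketch: a psychophysical "STRF" from a free-running detection task, computed
# as a response-triggered average of tone-pip density minus its overall mean.
# An illustrative analogy to physiological reverse correlation.
import numpy as np

def response_triggered_strf(pip_density, response_times, n_lags):
    """pip_density: (T, n_freq) matrix of masker energy per time bin and
    frequency channel; response_times: indices of button presses.
    Returns an (n_lags, n_freq) relative weighting function."""
    triggered = [pip_density[t - n_lags:t][::-1]
                 for t in response_times if t >= n_lags]
    rta = np.mean(triggered, axis=0)       # response-triggered average
    return rta - pip_density.mean(axis=0)  # deviation from chance

rng = np.random.default_rng(1)
density = rng.binomial(1, 0.2, size=(50000, 12)).astype(float)  # random pips
presses = np.sort(rng.choice(np.arange(100, 50000), size=300, replace=False))
w = response_triggered_strf(density, presses, n_lags=25)
print(w.shape)  # (25, 12): time lag before the response x frequency channel
```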

  18. Temporary conductive hearing loss in early life impairs spatial memory of rats in adulthood.

    PubMed

    Zhao, Han; Wang, Li; Chen, Liang; Zhang, Jinsheng; Sun, Wei; Salvi, Richard J; Huang, Yi-Na; Wang, Ming; Chen, Lin

    2018-05-31

    It is known that an interruption of acoustic input in early life will result in abnormal development of the auditory system. Here, we further show that this negative impact actually spans beyond the auditory system to the hippocampus, a system critical for spatial memory. We induced a temporary conductive hearing loss (TCHL) in P14 rats by perforating the eardrum and allowing it to heal. The Morris water maze and Y-maze tests were deployed to evaluate spatial memory of the rats. Electrophysiological recordings and anatomical analysis were made to evaluate functional and structural changes in the hippocampus following TCHL. The rats with the TCHL had nearly normal hearing at P42 but showed decreased performance on the Morris water maze and Y-maze tests compared with the control group. A functional deficit in the hippocampus of the rats with the TCHL was found, as revealed by the depressed long-term potentiation and the reduced NMDA receptor-mediated postsynaptic current. A structural deficit in the hippocampus of those animals was also found, as revealed by the abnormal expression of the NMDA receptors, the decreased number of dendritic spines, the reduced postsynaptic density, and the reduced level of neurogenesis. Our study demonstrates that even temporary auditory sensory deprivation in early life of rats results in abnormal development of the hippocampus and consequently impairs spatial memory in adulthood. © 2018 The Authors. Brain and Behavior published by Wiley Periodicals, Inc.

  19. How does experience modulate auditory spatial processing in individuals with blindness?

    PubMed

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-jia; Li, Jian-jun; Ting, Kin-hung; Wang, Jun; Lee, Tatia M C

    2015-05-01

    Comparing early- and late-onset blindness in individuals offers a unique model for studying the influence of visual experience on neural processing. This study investigated how prior visual experience would modulate auditory spatial processing among blind individuals. BOLD responses of early- and late-onset blind participants were captured while they performed a sound localization task. The task required participants to listen to novel "Bat-ears" sounds, analyze the spatial information embedded in the sounds, and specify out of 15 locations where the sound would have been emitted. In addition to sound localization, participants were assessed on visuospatial working memory and general intellectual abilities. The results revealed common increases in BOLD responses in the middle occipital gyrus, superior frontal gyrus, precuneus, and precentral gyrus during sound localization for both groups. Between-group dissociations, however, were found in the right middle occipital gyrus and left superior frontal gyrus. The BOLD responses in the left superior frontal gyrus were significantly correlated with accuracy on sound localization and visuospatial working memory abilities among the late-onset blind participants. In contrast, accuracy on sound localization correlated only with BOLD responses in the right middle occipital gyrus among their early-onset counterparts. The findings support the notion that early-onset blind individuals rely more on the occipital areas as a result of cross-modal plasticity for auditory spatial processing, while late-onset blind individuals rely more on the prefrontal areas which subserve visuospatial working memory.

  20. Cortical activation patterns to spatially presented pure tone stimuli with different intensities measured by functional near-infrared spectroscopy.

    PubMed

    Bauernfeind, Günther; Wriessnegger, Selina C; Haumann, Sabine; Lenarz, Thomas

    2018-03-08

    Functional near-infrared spectroscopy (fNIRS) is an emerging technique for the assessment of functional activity of the cerebral cortex. Recently, fNIRS has also been envisaged as a novel neuroimaging approach for measuring auditory cortex activity in the field of auditory diagnostics. This study aimed to investigate differences in brain activity related to spatially presented sounds with different intensities in 10 subjects by means of fNIRS. We found pronounced cortical activation patterns in the temporal and frontal regions of both hemispheres. In contrast to these activation patterns, we found deactivation patterns in central and parietal regions of both hemispheres. Furthermore, our results showed an influence of the spatial presentation and intensity of the presented sounds on brain activity in related regions of interest. These findings are in line with previous fMRI studies which also reported systematic changes of activation in temporal and frontal areas with increasing sound intensity. Although clear evidence for contralaterality effects and hemispheric asymmetries was absent in the group data, these effects were partially visible at the single-subject level. In conclusion, fNIRS is sensitive enough to capture differences in brain responses during the spatial presentation of sounds with different intensities in several cortical regions. Our results may serve as a valuable contribution for further basic research and the future use of fNIRS in the area of central auditory diagnostics. © 2018 Wiley Periodicals, Inc.

  1. Fear Conditioning is Disrupted by Damage to the Postsubiculum

    PubMed Central

    Robinson, Siobhan; Bucci, David J.

    2011-01-01

    The hippocampus plays a central role in spatial and contextual learning and memory; however, relatively little is known about the specific contributions of parahippocampal structures that interface with the hippocampus. The postsubiculum (PoSub) is reciprocally connected with a number of hippocampal, parahippocampal and subcortical structures that are involved in spatial learning and memory. In addition, behavioral data suggest that PoSub is needed for optimal performance during tests of spatial memory. Together, these data suggest that PoSub plays a prominent role in spatial navigation. Currently it is unknown whether the PoSub is needed for other forms of learning and memory that also require the formation of associations among multiple environmental stimuli. To address this gap in the literature we investigated the role of PoSub in Pavlovian fear conditioning. In Experiment 1, male rats received either PoSub lesions or sham surgery prior to training in a classical fear conditioning procedure. On the training day a tone was paired with foot shock three times. Conditioned fear to the training context was evaluated 24 hr later by placing rats back into the conditioning chamber without presenting any tones or shocks. Auditory fear was assessed on the third day by presenting the auditory stimulus in a novel environment (no shock). PoSub-lesioned rats exhibited impaired acquisition of the conditioned fear response as well as impaired expression of contextual and auditory fear conditioning. In Experiment 2, PoSub lesions were made 1 day after training to specifically assess the role of PoSub in fear memory. No deficits in the expression of contextual fear were observed, but freezing to the tone was significantly reduced in PoSub-lesioned rats compared to shams. Together, these results indicate that PoSub is necessary for normal acquisition of conditioned fear, and that PoSub contributes to the expression of auditory but not contextual fear memory. PMID:22076971

  2. Large-Scale Brain Networks Supporting Divided Attention across Spatial Locations and Sensory Modalities

    PubMed Central

    Santangelo, Valerio

    2018-01-01

    Higher-order cognitive processes have been shown to rely on the interplay between large-scale neural networks. However, the brain networks involved in the capability to split attentional resources over multiple spatial locations and multiple stimuli or sensory modalities have been largely unexplored to date. Here I re-analyzed data from Santangelo et al. (2010) to explore the causal interactions between large-scale brain networks during divided attention. During fMRI scanning, participants monitored streams of visual and/or auditory stimuli in one or two spatial locations for detection of occasional targets. This design allowed comparing a condition in which participants monitored one stimulus/modality (either visual or auditory) in two spatial locations vs. a condition in which participants monitored two stimuli/modalities (both visual and auditory) in one spatial location. The analysis of the independent components (ICs) revealed that dividing attentional resources across two spatial locations necessitated a brain network involving the left ventro- and dorso-lateral prefrontal cortex plus the posterior parietal cortex, including the intraparietal sulcus (IPS) and the angular gyrus, bilaterally. The analysis of Granger causality highlighted that the activity of lateral prefrontal regions was predictive of the activity of all of the posterior parietal nodes. By contrast, dividing attention across two sensory modalities necessitated a brain network including nodes belonging to the dorsal frontoparietal network, i.e., the bilateral frontal eye-fields (FEF) and IPS, plus nodes belonging to the salience network, i.e., the anterior cingulate cortex and the left and right anterior insular cortex (aIC). The analysis of Granger causality also highlighted a tight interdependence between the dorsal frontoparietal and salience nodes in trials requiring divided attention between different sensory modalities. The current findings therefore highlight a dissociation among brain networks implicated during divided attention across spatial locations and sensory modalities, pointing out the importance of investigating the effective connectivity of large-scale brain networks supporting complex behavior. PMID:29535614

  3. Large-Scale Brain Networks Supporting Divided Attention across Spatial Locations and Sensory Modalities.

    PubMed

    Santangelo, Valerio

    2018-01-01

    Higher-order cognitive processes have been shown to rely on the interplay between large-scale neural networks. However, the brain networks involved in the capability to split attentional resources over multiple spatial locations and multiple stimuli or sensory modalities have been largely unexplored to date. Here I re-analyzed data from Santangelo et al. (2010) to explore the causal interactions between large-scale brain networks during divided attention. During fMRI scanning, participants monitored streams of visual and/or auditory stimuli in one or two spatial locations for detection of occasional targets. This design allowed comparing a condition in which participants monitored one stimulus/modality (either visual or auditory) in two spatial locations vs. a condition in which participants monitored two stimuli/modalities (both visual and auditory) in one spatial location. The analysis of the independent components (ICs) revealed that dividing attentional resources across two spatial locations necessitated a brain network involving the left ventro- and dorso-lateral prefrontal cortex plus the posterior parietal cortex, including the intraparietal sulcus (IPS) and the angular gyrus, bilaterally. The analysis of Granger causality highlighted that the activity of lateral prefrontal regions was predictive of the activity of all of the posterior parietal nodes. By contrast, dividing attention across two sensory modalities necessitated a brain network including nodes belonging to the dorsal frontoparietal network, i.e., the bilateral frontal eye-fields (FEF) and IPS, plus nodes belonging to the salience network, i.e., the anterior cingulate cortex and the left and right anterior insular cortex (aIC). The analysis of Granger causality also highlighted a tight interdependence between the dorsal frontoparietal and salience nodes in trials requiring divided attention between different sensory modalities. The current findings therefore highlight a dissociation among brain networks implicated during divided attention across spatial locations and sensory modalities, pointing out the importance of investigating the effective connectivity of large-scale brain networks supporting complex behavior.
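
    Granger causality, as used here, asks whether the past of one signal improves prediction of another beyond that signal's own past. A minimal bivariate sketch based on least-squares autoregressive fits (a textbook illustration, not the connectivity pipeline used in the study):

```python
# Sketch: pairwise Granger causality via ordinary least-squares AR fits.
# x2 "Granger-causes" x1 if adding x2's past reduces the residual variance
# of predicting x1 from its own past. Textbook illustration only.
import numpy as np

def lagged(x, order):
    """Columns are x[t-1], ..., x[t-order], aligned to targets x[order:]."""
    return np.column_stack([x[order - k: len(x) - k]
                            for k in range(1, order + 1)])

def residual_variance(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

def granger_causality(x1, x2, order=5):
    """GC index for x2 -> x1: log(var_restricted / var_full); > 0 means
    the past of x2 improves prediction of x1."""
    y = x1[order:]
    X_restricted = lagged(x1, order)
    X_full = np.column_stack([X_restricted, lagged(x2, order)])
    return np.log(residual_variance(X_restricted, y) /
                  residual_variance(X_full, y))

# Toy example: x2 drives x1 with a one-sample delay.
rng = np.random.default_rng(2)
x2 = rng.standard_normal(5000)
x1 = 0.8 * np.roll(x2, 1) + 0.2 * rng.standard_normal(5000)
print(granger_causality(x1, x2), granger_causality(x2, x1))  # large vs. ~0
```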

  4. Brainstem transcription of speech is disrupted in children with autism spectrum disorders

    PubMed Central

    Russo, Nicole; Nicol, Trent; Trommer, Barbara; Zecker, Steve; Kraus, Nina

    2009-01-01

    Language impairment is a hallmark of autism spectrum disorders (ASD). The origin of the deficit is poorly understood although deficiencies in auditory processing have been detected in both perception and cortical encoding of speech sounds. Little is known about the processing and transcription of speech sounds at earlier (brainstem) levels or about how background noise may impact this transcription process. Unlike cortical encoding of sounds, brainstem representation preserves stimulus features with a degree of fidelity that enables a direct link between acoustic components of the speech syllable (e.g., onsets) to specific aspects of neural encoding (e.g., waves V and A). We measured brainstem responses to the syllable /da/, in quiet and background noise, in children with and without ASD. Children with ASD exhibited deficits in both the neural synchrony (timing) and phase locking (frequency encoding) of speech sounds, despite normal click-evoked brainstem responses. They also exhibited reduced magnitude and fidelity of speech-evoked responses and inordinate degradation of responses by background noise in comparison to typically developing controls. Neural synchrony in noise was significantly related to measures of core and receptive language ability. These data support the idea that abnormalities in the brainstem processing of speech contribute to the language impairment in ASD. Because it is both passively-elicited and malleable, the speech-evoked brainstem response may serve as a clinical tool to assess auditory processing as well as the effects of auditory training in the ASD population. PMID:19635083

  5. A Case Study Assessing the Auditory and Speech Development of Four Children Implanted with Cochlear Implants by the Chronological Age of 12 Months

    PubMed Central

    2013-01-01

    Children with severe hearing loss most likely receive the greatest benefit from a cochlear implant (CI) when implanted at less than 2 years of age. Children with a hearing loss may also benefit more from binaural sensory stimulation. Four children who received their first CI under 12 months of age were included in this study. Effects on auditory development were determined using the German LittlEARS Auditory Questionnaire, closed- and open-set monosyllabic word tests, aided free-field, the Mainzer and Göttinger speech discrimination tests, Monosyllabic-Trochee-Polysyllabic (MTP), and Listening Progress Profile (LiP). Speech production and grammar development were evaluated using a German language speech development test (SETK), reception of grammar test (TROG-D) and active vocabulary test (AWST-R). The data showed that children implanted under 12 months of age reached open-set monosyllabic word discrimination at an age of 24 months. LiP results improved over time, and children recognized 100% of words in the MTP test after 12 months. All children performed as well as or better than their hearing peers in speech production and grammar development. SETK showed that the speech development of these children was in general age appropriate. The data suggests that early hearing loss intervention benefits speech and language development and supports the trend towards early cochlear implantation. Furthermore, the data emphasizes the potential benefits associated with bilateral implantation. PMID:23509653

  6. A case study assessing the auditory and speech development of four children implanted with cochlear implants by the chronological age of 12 months.

    PubMed

    May-Mederake, Birgit; Shehata-Dieler, Wafaa

    2013-01-01

    Children with severe hearing loss most likely receive the greatest benefit from a cochlear implant (CI) when implanted at less than 2 years of age. Children with a hearing loss may also benefit more from binaural sensory stimulation. Four children who received their first CI under 12 months of age were included in this study. Effects on auditory development were determined using the German LittlEARS Auditory Questionnaire, closed- and open-set monosyllabic word tests, aided free-field, the Mainzer and Göttinger speech discrimination tests, Monosyllabic-Trochee-Polysyllabic (MTP), and Listening Progress Profile (LiP). Speech production and grammar development were evaluated using a German language speech development test (SETK), reception of grammar test (TROG-D) and active vocabulary test (AWST-R). The data showed that children implanted under 12 months of age reached open-set monosyllabic word discrimination at an age of 24 months. LiP results improved over time, and children recognized 100% of words in the MTP test after 12 months. All children performed as well as or better than their hearing peers in speech production and grammar development. SETK showed that the speech development of these children was in general age appropriate. The data suggests that early hearing loss intervention benefits speech and language development and supports the trend towards early cochlear implantation. Furthermore, the data emphasizes the potential benefits associated with bilateral implantation.

  7. Natural stimulation of the nonclassical receptive field increases information transmission efficiency in V1.

    PubMed

    Vinje, William E; Gallant, Jack L

    2002-04-01

    We have investigated how the nonclassical receptive field (nCRF) affects information transmission by V1 neurons during simulated natural vision in awake, behaving macaques. Stimuli were centered over the classical receptive field (CRF) and stimulus size was varied from one to four times the diameter of the CRF. Stimulus movies reproduced the spatial and temporal stimulus dynamics of natural vision while maintaining constant CRF stimulation across all sizes. In individual neurons, stimulation of the nCRF significantly increases the information rate, the information per spike, and the efficiency of information transmission. Furthermore, the population averages of these quantities also increase significantly with nCRF stimulation. These data demonstrate that the nCRF increases the sparseness of the stimulus representation in V1, suggesting that the nCRF tunes V1 neurons to match the highly informative components of the natural world.
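
    Sparseness in this line of work is commonly quantified with the index of Rolls and Tovee (1995), also used by Vinje and Gallant: S = (1 - A)/(1 - 1/n), where A is the squared mean response divided by the mean squared response across n stimulus frames. A short sketch with illustrative values (not data from the study):

```python
# Sketch: a standard sparseness index. S near 1 indicates a sparse (highly
# selective) response; S near 0 a dense one. Example values are illustrative.
import numpy as np

def sparseness(rates):
    r = np.asarray(rates, dtype=float)
    n = r.size
    a = r.mean() ** 2 / np.mean(r ** 2)  # A = (mean)^2 / mean of squares
    return (1.0 - a) / (1.0 - 1.0 / n)

dense = np.full(100, 5.0)    # same firing rate in every frame
sparse = np.zeros(100)
sparse[::20] = 25.0          # rare, large responses
print(sparseness(dense), sparseness(sparse))  # ~0.0 vs. close to 1.0
```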

  8. Visual and auditory accessory stimulus offset and the Simon effect.

    PubMed

    Nishimura, Akio; Yokosawa, Kazuhiko

    2010-10-01

    We investigated the effect on the right and left responses of the disappearance of a task-irrelevant stimulus located on the right or left side. Participants pressed a right or left response key on the basis of the color of a centrally located visual target. Visual (Experiment 1) or auditory (Experiment 2) task-irrelevant accessory stimuli appeared or disappeared at locations to the right or left of the central target. In Experiment 1, responses were faster when onset or offset of the visual accessory stimulus was spatially congruent with the response. In Experiment 2, responses were again faster when onset of the auditory accessory stimulus and the response were on the same side. However, responses were slightly slower when offset of the auditory accessory stimulus and the response were on the same side than when they were on opposite sides. These findings indicate that transient change information is crucial for a visual Simon effect, whereas sustained stimulation from an ongoing stimulus also contributes to an auditory Simon effect.

  9. Relationships Among Peripheral and Central Electrophysiological Measures of Spatial and Spectral Selectivity and Speech Perception in Cochlear Implant Users.

    PubMed

    Scheperle, Rachel A; Abbas, Paul J

    2015-01-01

    The ability to perceive speech is related to the listener's ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli. Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. Speech-perception measures included vowel discrimination and the Bamford-Kowal-Bench Speech-in-Noise test. Spatial and spectral selectivity and speech perception were expected to be poorest with MAP 1 (closest electrode spacing) and best with MAP 3 (widest electrode spacing). Relationships among the electrophysiological and speech-perception measures were evaluated using mixed-model and simple linear regression analyses. All electrophysiological measures were significantly correlated with each other and with speech scores for the mixed-model analysis, which takes into account multiple measures per person (i.e., experimental MAPs). The ECAP measures were the best predictor. In the simple linear regression analysis on MAP 3 data, only the cortical measures were significantly correlated with speech scores; spectral auditory change complex amplitude was the strongest predictor. The results suggest that both peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about speech perception. Clinically, it is often desirable to optimize performance for individual CI users. These results suggest that ECAP measures may be most useful for within-subject applications when multiple measures are performed to make decisions about processor options. They also suggest that if the goal is to compare performance across individuals based on a single measure, then processing central to the auditory nerve (specifically, cortical measures of discriminability) should be considered.

  10. Tinnitus. I: Auditory mechanisms: a model for tinnitus and hearing impairment.

    PubMed

    Hazell, J W; Jastreboff, P J

    1990-02-01

    A model is proposed for tinnitus and sensorineural hearing loss involving cochlear pathology. As tinnitus is defined as a cortical perception of sound in the absence of an appropriate external stimulus, it must result from a generator in the auditory system whose signal undergoes extensive auditory processing before it is perceived. The concept of spatial nonlinearity in the cochlea is presented as a cause of tinnitus generation controlled by the efferents. Various clinical presentations of tinnitus and the way in which they respond to changes in the environment are discussed with respect to this control mechanism. The concept of auditory retraining as part of the habituation process, together with interactions with the prefrontal cortex and limbic system, is presented as a central model which emphasizes the importance of the emotional significance and meaning of tinnitus.

  11. Fast and robust estimation of spectro-temporal receptive fields using stochastic approximations.

    PubMed

    Meyer, Arne F; Diepenbrock, Jan-Philipp; Ohl, Frank W; Anemüller, Jörn

    2015-05-15

    The receptive field (RF) represents the signal preferences of sensory neurons and is the primary analysis method for understanding sensory coding. While it is essential to estimate a neuron's RF, finding numerical solutions to increasingly complex RF models can become computationally intensive, in particular for high-dimensional stimuli or when many neurons are involved. Here we propose an optimization scheme based on stochastic approximations that facilitates this task. The basic idea is to derive solutions on a random subset rather than computing the full solution on the available data set. To test this, we applied different optimization schemes based on stochastic gradient descent (SGD) to both the generalized linear model (GLM) and a recently developed classification-based RF estimation approach. Using simulated and recorded responses, we demonstrate that RF parameter optimization based on state-of-the-art SGD algorithms produces robust estimates of the spectro-temporal receptive field (STRF). Results on recordings from the auditory midbrain demonstrate that stochastic approximations preserve both predictive power and tuning properties of STRFs. A correlation of 0.93 with the STRF derived from the full solution may be obtained in less than 10% of the full solution's estimation time. We also present an on-line algorithm that allows simultaneous monitoring of STRF properties of more than 30 neurons on a single computer. The proposed approach may not only prove helpful for large-scale recordings but may also provide a more comprehensive characterization of neural tuning in experiments than standard tuning curves. Copyright © 2015 Elsevier B.V. All rights reserved.
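
    A minimal numpy sketch of the core idea, using a simulated linear-Gaussian response rather than the paper's GLM or classification-based models: estimate the STRF with plain mini-batch SGD on random subsets and compare it to the full least-squares solution by correlation.

        import numpy as np

        rng = np.random.default_rng(1)
        n_samples, n_freq, n_lags = 20000, 16, 10
        dim = n_freq * n_lags
        X = rng.normal(size=(n_samples, dim))        # flattened spectrogram patches
        strf_true = rng.normal(size=dim)
        y = X @ strf_true + rng.normal(scale=2.0, size=n_samples)

        # Full solution on all data, for reference.
        w_full, *_ = np.linalg.lstsq(X, y, rcond=None)

        # Mini-batch SGD on squared error: each step sees a random subset only.
        w = np.zeros(dim)
        lr, batch = 5e-3, 256
        for step in range(2000):
            idx = rng.integers(0, n_samples, batch)
            grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch
            w -= lr * grad

        print(np.corrcoef(w, w_full)[0, 1])          # close to 1 on this toy problem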

  12. Auditory observation of stepping actions can cue both spatial and temporal components of gait in Parkinson's disease patients.

    PubMed

    Young, William R; Rodger, Matthew W M; Craig, Cathy M

    2014-05-01

    A common behavioural symptom of Parkinson's disease (PD) is reduced step length (SL). Whilst sensory cueing strategies can be effective in increasing SL and reducing gait variability, current cueing strategies conveying spatial or temporal information are generally confined to the use of either visual or auditory cue modalities, respectively. We describe a novel cueing strategy using ecologically-valid 'action-related' sounds (footsteps on gravel) that convey both spatial and temporal parameters of a specific action within a single cue. The current study used a real-time imitation task to examine whether PD affects the ability to re-enact changes in spatial characteristics of stepping actions, based solely on auditory information. In a second experimental session, these procedures were repeated using synthesized sounds derived from recordings of the kinetic interactions between the foot and walking surface. A third experimental session examined whether adaptations observed when participants walked to action-sounds were preserved when participants imagined either real recorded or synthesized sounds. Whilst healthy control participants were able to re-enact significant changes in SL in all cue conditions, these adaptations, in conjunction with reduced variability of SL were only observed in the PD group when walking to, or imagining the recorded sounds. The findings show that while recordings of stepping sounds convey action information to allow PD patients to re-enact and imagine spatial characteristics of gait, synthesis of sounds purely from gait kinetics is insufficient to evoke similar changes in behaviour, perhaps indicating that PD patients have a higher threshold to cue sensorimotor resonant responses. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. Lateralization of Frequency-Specific Networks for Covert Spatial Attention to Auditory Stimuli

    PubMed Central

    Thorpe, Samuel; D'Zmura, Michael

    2011-01-01

    We conducted a cued spatial attention experiment to investigate the time–frequency structure of human EEG induced by attentional orientation of an observer in external auditory space. Seven subjects participated in a task in which attention was cued to one of two spatial locations at left and right. Subjects were instructed to report the speech stimulus at the cued location and to ignore a simultaneous speech stream originating from the uncued location. EEG was recorded from the onset of the directional cue through the offset of the inter-stimulus interval (ISI), during which attention was directed toward the cued location. Each frequency band of the wavelet spectrum was then normalized by the mean level of power observed in the early part of the cue interval to obtain a measure of induced power related to the deployment of attention. Topographies of band-specific induced power during the cue and inter-stimulus intervals showed peaks over symmetric bilateral scalp areas. We used a bootstrap analysis of a lateralization measure defined for symmetric groups of channels in each band to identify specific lateralization events throughout the ISI. Our results suggest that the deployment and maintenance of spatially oriented attention throughout a period of 1,100 ms is marked by distinct episodes of reliable hemispheric lateralization ipsilateral to the direction in which attention is oriented. An early theta lateralization was evident over posterior parietal electrodes and was sustained throughout the ISI. In the alpha and mu bands, punctuated episodes of parietal power lateralization were observed roughly 500 ms after attentional deployment, consistent with previous studies of visual attention. In the beta band, these episodes showed similar patterns of lateralization over frontal motor areas. These results indicate that spatial attention involves similar mechanisms in the auditory and visual modalities. PMID:21630112
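
    A sketch of the induced-power and lateralization computations, with a band-pass/Hilbert envelope standing in for the wavelet spectrum; eeg, fs, left_chs, right_chs, and baseline are hypothetical inputs (a trials-by-channels-by-time array, its sampling rate, symmetric channel groups, and a slice covering the early cue interval).

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def band_power(eeg, fs, lo, hi):
            # Instantaneous power in one frequency band.
            b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return np.abs(hilbert(filtfilt(b, a, eeg, axis=-1), axis=-1)) ** 2

        def lateralization(power, left_chs, right_chs, baseline):
            # Normalize by mean power early in the cue interval, then contrast
            # symmetric channel groups; one value per trial and time point.
            norm = power / power[..., baseline].mean(axis=-1, keepdims=True)
            l = norm[:, left_chs].mean(axis=1)
            r = norm[:, right_chs].mean(axis=1)
            return (r - l) / (r + l)

        def bootstrap_ci(li, n_boot=2000, alpha=0.05, seed=2):
            # Percentile bootstrap over trials for the mean lateralization trace.
            rng = np.random.default_rng(seed)
            boots = np.array([li[rng.integers(0, len(li), len(li))].mean(axis=0)
                              for _ in range(n_boot)])
            return np.quantile(boots, [alpha / 2, 1 - alpha / 2], axis=0)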

  14. Linguistic Input, Electronic Media, and Communication Outcomes of Toddlers with Hearing Loss

    PubMed Central

    Ambrose, Sophie E.; VanDam, Mark; Moeller, Mary Pat

    2013-01-01

    Objectives The objectives of this study were to examine the quantity of adult words, adult-child conversational turns, and electronic media in the auditory environments of toddlers who are hard of hearing (HH) and to examine whether these variables contributed to variability in children’s communication outcomes. Design Participants were 28 children with mild to severe hearing loss. Full-day recordings of children’s auditory environments were collected within 6 months of their 2nd birthdays by utilizing LENA (Language ENvironment Analysis) technology. The system analyzes full-day acoustic recordings, yielding estimates of the quantity of adult words, conversational turns, and electronic media exposure in the recordings. Children’s communication outcomes were assessed via the receptive and expressive scales of the Mullen Scales of Early Learning at 2 years of age and the Comprehensive Assessment of Spoken Language at 3 years of age. Results On average, the HH toddlers were exposed to approximately 1400 adult words per hour and participated in approximately 60 conversational turns per hour. An average of 8% of each recording was classified as electronic media. However, there was considerable within-group variability on all three measures. Frequency of conversational turns, but not adult words, was positively associated with children’s communication outcomes at 2 and 3 years of age. Amount of electronic media exposure was negatively associated with 2-year-old receptive language abilities; however, regression results indicate that the relationship was fully mediated by the quantity of conversational turns. Conclusions HH toddlers who were engaged in more conversational turns demonstrated stronger linguistic outcomes than HH toddlers who were engaged in fewer conversational turns. The frequency of these interactions was found to be decreased in households with high rates of electronic media exposure. Optimal language-learning environments for HH toddlers include frequent linguistic interactions between parents and children. To support this goal, parents should be encouraged to reduce their children’s exposure to electronic media. PMID:24441740

  15. Linguistic input, electronic media, and communication outcomes of toddlers with hearing loss.

    PubMed

    Ambrose, Sophie E; VanDam, Mark; Moeller, Mary Pat

    2014-01-01

    The objectives of this study were to examine the quantity of adult words, adult-child conversational turns, and electronic media in the auditory environments of toddlers who are hard of hearing (HH) and to examine whether these factors contributed to variability in children's communication outcomes. Participants were 28 children with mild to severe hearing loss. Full-day recordings of children's auditory environments were collected within 6 months of their second birthdays by using Language ENvironment Analysis technology. The system analyzes full-day acoustic recordings, yielding estimates of the quantity of adult words, conversational turns, and electronic media exposure in the recordings. Children's communication outcomes were assessed via the receptive and expressive scales of the Mullen Scales of Early Learning at 2 years of age and the Comprehensive Assessment of Spoken Language at 3 years of age. On average, the HH toddlers were exposed to approximately 1400 adult words per hour and participated in approximately 60 conversational turns per hour. An average of 8% of each recording was classified as electronic media. However, there was considerable within-group variability on all three measures. Frequency of conversational turns, but not adult words, was positively associated with children's communication outcomes at 2 and 3 years of age. Amount of electronic media exposure was negatively associated with 2-year-old receptive language abilities; however, regression results indicate that the relationship was fully mediated by the quantity of conversational turns. HH toddlers who were engaged in more conversational turns demonstrated stronger linguistic outcomes than HH toddlers who were engaged in fewer conversational turns. The frequency of these interactions was found to be decreased in households with high rates of electronic media exposure. Optimal language-learning environments for HH toddlers include frequent linguistic interactions between parents and children. To support this goal, parents should be encouraged to reduce their children's exposure to electronic media.
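
    A minimal product-of-coefficients mediation check with a percentile bootstrap, in the spirit of the regression result above; the DataFrame columns media (exposure), turns (mediator), and language (outcome) are hypothetical names, and the authors' actual analysis may differ.

        import numpy as np
        import statsmodels.formula.api as smf

        def indirect_effect(df):
            a = smf.ols("turns ~ media", df).fit().params["media"]
            b = smf.ols("language ~ turns + media", df).fit().params["turns"]
            return a * b                         # mediated (indirect) path

        def bootstrap_indirect(df, n_boot=5000, seed=3):
            rng = np.random.default_rng(seed)
            est = [indirect_effect(df.sample(len(df), replace=True,
                                             random_state=int(rng.integers(1e9))))
                   for _ in range(n_boot)]
            return np.quantile(est, [0.025, 0.975])

        # Full mediation is suggested when the direct path (the media coefficient
        # in the outcome model) is near zero while the indirect-effect CI excludes zero.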

  16. The African cichlid fish Astatotilapia burtoni uses acoustic communication for reproduction: sound production, hearing, and behavioral significance.

    PubMed

    Maruska, Karen P; Ung, Uyhun S; Fernald, Russell D

    2012-01-01

    Sexual reproduction in all animals depends on effective communication between signalers and receivers. Many fish species, especially the African cichlids, are well known for their bright coloration and the importance of visual signaling during courtship and mate choice, but little is known about what role acoustic communication plays during mating and how it contributes to sexual selection in this phenotypically diverse group of vertebrates. Here we examined acoustic communication during reproduction in the social cichlid fish, Astatotilapia burtoni. We characterized the sounds and associated behaviors produced by dominant males during courtship, tested for differences in hearing ability associated with female reproductive state and male social status, and then tested the hypothesis that female mate preference is influenced by male sound production. We show that dominant males produce intentional courtship sounds in close proximity to females, and that sounds are spectrally similar to their hearing abilities. Females were 2-5-fold more sensitive to low frequency sounds in the spectral range of male courtship sounds when they were sexually-receptive compared to during the mouthbrooding parental phase. Hearing thresholds were also negatively correlated with circulating sex-steroid levels in females but positively correlated in males, suggesting a potential role for steroids in reproductive-state auditory plasticity. Behavioral experiments showed that receptive females preferred to affiliate with males that were associated with playback of courtship sounds compared to noise controls, indicating that acoustic information is likely important for female mate choice. These data show for the first time in a Tanganyikan cichlid that acoustic communication is important during reproduction as part of a multimodal signaling repertoire, and that perception of auditory information changes depending on the animal's internal physiological state. Our results highlight the importance of examining non-visual sensory modalities as potential substrates for sexual selection contributing to the incredible phenotypic diversity of African cichlid fishes.

  17. The African Cichlid Fish Astatotilapia burtoni Uses Acoustic Communication for Reproduction: Sound Production, Hearing, and Behavioral Significance

    PubMed Central

    Maruska, Karen P.; Ung, Uyhun S.; Fernald, Russell D.

    2012-01-01

    Sexual reproduction in all animals depends on effective communication between signalers and receivers. Many fish species, especially the African cichlids, are well known for their bright coloration and the importance of visual signaling during courtship and mate choice, but little is known about what role acoustic communication plays during mating and how it contributes to sexual selection in this phenotypically diverse group of vertebrates. Here we examined acoustic communication during reproduction in the social cichlid fish, Astatotilapia burtoni. We characterized the sounds and associated behaviors produced by dominant males during courtship, tested for differences in hearing ability associated with female reproductive state and male social status, and then tested the hypothesis that female mate preference is influenced by male sound production. We show that dominant males produce intentional courtship sounds in close proximity to females, and that sounds are spectrally similar to their hearing abilities. Females were 2–5-fold more sensitive to low frequency sounds in the spectral range of male courtship sounds when they were sexually-receptive compared to during the mouthbrooding parental phase. Hearing thresholds were also negatively correlated with circulating sex-steroid levels in females but positively correlated in males, suggesting a potential role for steroids in reproductive-state auditory plasticity. Behavioral experiments showed that receptive females preferred to affiliate with males that were associated with playback of courtship sounds compared to noise controls, indicating that acoustic information is likely important for female mate choice. These data show for the first time in a Tanganyikan cichlid that acoustic communication is important during reproduction as part of a multimodal signaling repertoire, and that perception of auditory information changes depending on the animal's internal physiological state. Our results highlight the importance of examining non-visual sensory modalities as potential substrates for sexual selection contributing to the incredible phenotypic diversity of African cichlid fishes. PMID:22624055

  18. Interhemispheric coupling between the posterior sylvian regions impacts successful auditory temporal order judgment.

    PubMed

    Bernasconi, Fosco; Grivel, Jeremy; Murray, Micah M; Spierer, Lucas

    2010-07-01

    Accurate perception of the temporal order of sensory events is a prerequisite in numerous functions ranging from language comprehension to motor coordination. We investigated the spatio-temporal brain dynamics of auditory temporal order judgment (aTOJ) using electrical neuroimaging analyses of auditory evoked potentials (AEPs) recorded while participants completed a near-threshold task requiring spatial discrimination of left-right and right-left sound sequences. AEPs to sound pairs modulated topographically as a function of aTOJ accuracy over the 39-77 ms post-stimulus period, indicating the engagement of distinct configurations of brain networks during early auditory processing stages. Source estimations revealed that accurate and inaccurate performance were linked to activity in bilateral posterior sylvian regions (PSR). However, activity within left, but not right, PSR predicted behavioral performance, suggesting that left PSR activity during early encoding phases of pairs of auditory spatial stimuli appears critical for the perception of their order of occurrence. Correlation analyses of source estimations further revealed that activity between left and right PSR was significantly correlated in the inaccurate but not accurate condition, indicating that aTOJ accuracy depends on the functional decoupling between homotopic PSR areas. These results support a model of temporal order processing wherein behaviorally relevant temporal information--i.e. a temporal 'stamp'--is extracted within the early stages of cortical processes within left PSR but critically modulated by inputs from right PSR. We discuss our results with regard to current models of temporal order processing, namely gating and latency mechanisms. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
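
    A sketch of the coupling analysis suggested by the abstract: correlate left- and right-PSR source amplitudes across trials, separately for accurate and inaccurate trials; left_psr and right_psr (per-trial mean source amplitude in the 39-77 ms window) and the boolean mask accurate are hypothetical inputs.

        import numpy as np
        from scipy.stats import pearsonr

        def psr_coupling(left_psr, right_psr, accurate):
            r_acc, p_acc = pearsonr(left_psr[accurate], right_psr[accurate])
            r_inacc, p_inacc = pearsonr(left_psr[~accurate], right_psr[~accurate])
            # Decoupling account: a reliable correlation is expected only for
            # inaccurate trials.
            return {"accurate": (r_acc, p_acc), "inaccurate": (r_inacc, p_inacc)}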

  19. Probing the limits of alpha power lateralisation as a neural marker of selective attention in middle-aged and older listeners.

    PubMed

    Tune, Sarah; Wöstmann, Malte; Obleser, Jonas

    2018-02-11

    In recent years, hemispheric lateralisation of alpha power has emerged as a neural mechanism thought to underpin spatial attention across sensory modalities. Yet, how healthy ageing, beginning in middle adulthood, impacts the modulation of lateralised alpha power supporting auditory attention remains poorly understood. In the current electroencephalography study, middle-aged and older adults (N = 29; ~40-70 years) performed a dichotic listening task that simulates a challenging, multitalker scenario. We examined the extent to which the modulation of 8-12 Hz alpha power would serve as a neural marker of listening success across age. Given the increase in interindividual variability with age, we also examined an extensive battery of behavioural, perceptual, and neural measures. Similar to findings on younger adults, middle-aged and older listeners' auditory spatial attention induced robust lateralisation of alpha power, which synchronised with the speech rate. Notably, the observed relationship between this alpha lateralisation and task performance did not co-vary with age. Instead, task performance was strongly related to an individual's attentional and working memory capacity. Multivariate analyses revealed a separation of neural and behavioural variables independent of age. Our results suggest that in age-varying samples such as the present one, the lateralisation of alpha power is neither a sufficient nor a necessary neural strategy for an individual's auditory spatial attention, as higher age might come with increased use of alternative, compensatory mechanisms. Our findings emphasise that explaining interindividual variability will be key to understanding the role of alpha oscillations in auditory attention in the ageing listener. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
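
    A sketch of an alpha lateralisation index and its relation to behaviour; eeg (trials by channels by time), fs, the symmetric channel groups left_chs/right_chs, and the per-participant accuracy scores are hypothetical inputs, and Welch power stands in for the study's time-resolved estimates.

        import numpy as np
        from scipy.signal import welch
        from scipy.stats import spearmanr

        def alpha_power(eeg, fs, lo=8.0, hi=12.0):
            f, pxx = welch(eeg, fs=fs, nperseg=int(fs), axis=-1)
            band = (f >= lo) & (f <= hi)
            return pxx[..., band].mean(axis=-1)      # (n_trials, n_channels)

        def alpha_lateralisation(eeg, fs, left_chs, right_chs):
            # For attend-right trials the right group is ipsilateral; an analogous
            # index with the roles swapped applies to attend-left trials.
            p = alpha_power(eeg, fs)
            ipsi = p[:, right_chs].mean(axis=1)
            contra = p[:, left_chs].mean(axis=1)
            return (ipsi - contra) / (ipsi + contra)

        # Brain-behaviour relation across participants:
        # rho, pval = spearmanr(mean_index_per_subject, accuracy)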

  20. Auditory and Cognitive Factors Associated with Speech-in-Noise Complaints following Mild Traumatic Brain Injury.

    PubMed

    Hoover, Eric C; Souza, Pamela E; Gallun, Frederick J

    2017-04-01

    Auditory complaints following mild traumatic brain injury (MTBI) are common, but few studies have addressed the role of auditory temporal processing in speech recognition complaints. In this study, deficits understanding speech in a background of speech noise following MTBI were evaluated with the goal of comparing the relative contributions of auditory and nonauditory factors. A matched-groups design was used in which a group of listeners with a history of MTBI were compared to a group matched in age and pure-tone thresholds, as well as a control group of young listeners with normal hearing (YNH). Of the 33 listeners who participated in the study, 13 were included in the MTBI group (mean age = 46.7 yr), 11 in the Matched group (mean age = 49 yr), and 9 in the YNH group (mean age = 20.8 yr). Speech-in-noise deficits were evaluated using subjective measures as well as monaural word (Words-in-Noise test) and sentence (Quick Speech-in-Noise test) tasks, and a binaural spatial release task. Performance on these measures was compared to psychophysical tasks evaluating monaural and binaural temporal fine-structure sensitivity and spectral resolution. Cognitive measures of attention, processing speed, and working memory were evaluated as possible causes of differences between MTBI and Matched groups that might contribute to speech-in-noise perception deficits. A high proportion of listeners in the MTBI group reported difficulty understanding speech in noise (84%) compared to the Matched group (9.1%), and listeners who reported difficulty were more likely to have abnormal results on objective measures of speech in noise. No significant group differences were found between the MTBI and Matched listeners on any of the measures reported, but the number of abnormal tests differed across groups. Regression analysis revealed that a combination of auditory and cognitive factors contributed to monaural speech-in-noise scores, but the benefit of spatial separation was related to a combination of working memory and peripheral auditory factors across all listeners in the study. The results of this study are consistent with previous findings that a subset of listeners with MTBI has objective auditory deficits. Speech-in-noise performance was related to a combination of auditory and nonauditory factors, confirming the important role of audiology in MTBI rehabilitation. Further research is needed to evaluate the prevalence and causal relationship of auditory deficits following MTBI. American Academy of Audiology
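
    A brief sketch of two quantities named above: spatial release from masking as a threshold difference, and a multiple regression of speech-in-noise scores on auditory and cognitive predictors; all inputs are hypothetical per-listener arrays.

        import numpy as np

        def spatial_release(srt_colocated, srt_separated):
            # Lower SRT is better, so a positive difference is the benefit
            # gained from spatial separation.
            return np.asarray(srt_colocated) - np.asarray(srt_separated)

        def fit_predictors(scores, predictors):
            # predictors: (n_listeners, n_factors) array, e.g. columns for
            # temporal fine-structure sensitivity, working memory, pure-tone average.
            X = np.column_stack([np.ones(len(scores)), predictors])
            beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
            return beta                              # intercept followed by weights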
