Sample records for auditory ventral stream

  1. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    PubMed

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  2. Mapping a lateralization gradient within the ventral stream for auditory speech perception.

    PubMed

    Specht, Karsten

    2013-01-01

    Recent models on speech perception propose a dual-stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend toward the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus. This article describes and reviews the results from a series of complementary functional magnetic resonance imaging studies that aimed to trace the hierarchical processing network for speech comprehension within the left and right hemisphere with a particular focus on the temporal lobe and the ventral stream. As hypothesized, the results demonstrate a bilateral involvement of the temporal lobes in the processing of speech signals. However, an increasing leftward asymmetry was detected from auditory-phonetic to lexico-semantic processing and along the posterior-anterior axis, thus forming a "lateralization" gradient. This increasing leftward lateralization was particularly evident for the left superior temporal sulcus and more anterior parts of the temporal lobe.

  3. Mapping a lateralization gradient within the ventral stream for auditory speech perception

    PubMed Central

    Specht, Karsten

    2013-01-01

    Recent models on speech perception propose a dual-stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend toward the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus. This article describes and reviews the results from a series of complementary functional magnetic resonance imaging studies that aimed to trace the hierarchical processing network for speech comprehension within the left and right hemisphere with a particular focus on the temporal lobe and the ventral stream. As hypothesized, the results demonstrate a bilateral involvement of the temporal lobes in the processing of speech signals. However, an increasing leftward asymmetry was detected from auditory–phonetic to lexico-semantic processing and along the posterior–anterior axis, thus forming a “lateralization” gradient. This increasing leftward lateralization was particularly evident for the left superior temporal sulcus and more anterior parts of the temporal lobe. PMID:24106470

  4. Dual-stream accounts bridge the gap between monkey audition and human language processing. Comment on "Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain" by Michael Arbib

    NASA Astrophysics Data System (ADS)

    Garrod, Simon; Pickering, Martin J.

    2016-03-01

    Over the last few years there has been a resurgence of interest in dual-stream dorsal-ventral accounts of language processing [4]. This has led to recent attempts to bridge the gap between the neurobiology of primate audition and human language processing with the dorsal auditory stream assumed to underlie time-dependent (and syntactic) processing and the ventral to underlie some form of time-independent (and semantic) analysis of the auditory input [3,10]. Michael Arbib [1] considers these developments in relation to his earlier Mirror System Hypothesis about the origins of human language processing [11].

  5. White matter anisotropy in the ventral language pathway predicts sound-to-word learning success

    PubMed Central

    Wong, Francis C. K.; Chandrasekaran, Bharath; Garibaldi, Kyla; Wong, Patrick C. M.

    2011-01-01

    According to the dual stream model of auditory language processing, the dorsal stream is responsible for mapping sound to articulation while the ventral stream plays the role of mapping sound to meaning. Most researchers agree that the arcuate fasciculus (AF) is the neuroanatomical correlate of the dorsal stream; however, less is known about what constitutes the ventral one. Nevertheless, two hypotheses exist: one suggests that the segment of the AF that terminates in the middle temporal gyrus corresponds to the ventral stream, and the other suggests that it is the extreme capsule that underlies this sound-to-meaning pathway. The goal of this study is to evaluate these two competing hypotheses. We trained participants with a sound-to-word learning paradigm in which they learned to use a foreign phonetic contrast for signaling word meaning. Using diffusion tensor imaging (DTI), a brain imaging tool to investigate white matter connectivity in humans, we found that fractional anisotropy in the left parietal-temporal region positively correlated with performance in sound-to-word learning. In addition, fiber tracking revealed a ventral pathway, composed of the extreme capsule and the inferior longitudinal fasciculus, that mediated auditory comprehension. Our findings provide converging evidence supporting the importance of the ventral stream, an extreme capsule system, in the frontal-temporal language network. Implications for current models of speech processing will also be discussed. PMID:21677162

  6. Damage to ventral and dorsal language pathways in acute aphasia

    PubMed Central

    Hartwigsen, Gesa; Kellmeyer, Philipp; Glauche, Volkmar; Mader, Irina; Klöppel, Stefan; Suchan, Julia; Karnath, Hans-Otto; Weiller, Cornelius; Saur, Dorothee

    2013-01-01

    Converging evidence from neuroimaging studies and computational modelling suggests an organization of language in a dual dorsal–ventral brain network: a dorsal stream connects temporoparietal with frontal premotor regions through the superior longitudinal and arcuate fasciculus and integrates sensorimotor processing, e.g. in repetition of speech. A ventral stream connects temporal and prefrontal regions via the extreme capsule and mediates meaning, e.g. in auditory comprehension. The aim of our study was to test, in a large sample of 100 aphasic stroke patients, how well acute impairments of repetition and comprehension correlate with lesions of either the dorsal or ventral stream. We combined voxelwise lesion-behaviour mapping with the dorsal and ventral white matter fibre tracts determined by probabilistic fibre tracking in our previous study in healthy subjects. We found that repetition impairments were mainly associated with lesions located in the posterior temporoparietal region with a statistical lesion maximum in the periventricular white matter in projection of the dorsal superior longitudinal and arcuate fasciculus. In contrast, lesions associated with comprehension deficits were found more ventral-anterior in the temporoprefrontal region with a statistical lesion maximum between the insular cortex and the putamen in projection of the ventral extreme capsule. Individual lesion overlap with the dorsal fibre tract showed a significant negative correlation with repetition performance, whereas lesion overlap with the ventral fibre tract revealed a significant negative correlation with comprehension performance. To summarize, our results from patients with acute stroke lesions support the claim that language is organized along two segregated dorsal–ventral streams. 
    In particular, this is the first lesion study demonstrating that task performance on auditory comprehension measures requires an interaction between temporal and prefrontal brain regions via the ventral extreme capsule pathway. PMID:23378217

  7. Neuronal basis of speech comprehension.

    PubMed

    Specht, Karsten

    2014-01-01

    Verbal communication does not rely only on the simple perception of auditory signals. It is rather a parallel and integrative processing of linguistic and non-linguistic information, involving temporal and frontal areas in particular. This review describes the inherent complexity of auditory speech comprehension from a functional-neuroanatomical perspective. The review is divided into two parts. In the first part, structural and functional asymmetry of language-relevant structures will be discussed. The second part of the review will discuss recent neuroimaging studies, which coherently demonstrate that speech comprehension processes rely on a hierarchical network involving the temporal, parietal, and frontal lobes. Further, the results support the dual-stream model for speech comprehension, with a dorsal stream for auditory-motor integration, and a ventral stream for extracting meaning but also the processing of sentences and narratives. Specific patterns of functional asymmetry between the left and right hemisphere can also be demonstrated. The review article concludes with a discussion of interactions between the dorsal and ventral streams, particularly the involvement of motor-related areas in speech perception processes, and outlines some remaining unresolved issues. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Dual streams of auditory afferents target multiple domains in the primate prefrontal cortex

    PubMed Central

    Romanski, L. M.; Tian, B.; Fritz, J.; Mishkin, M.; Goldman-Rakic, P. S.; Rauschecker, J. P.

    2009-01-01

    ‘What’ and ‘where’ visual streams define ventrolateral object and dorsolateral spatial processing domains in the prefrontal cortex of nonhuman primates. We looked for similar streams for auditory–prefrontal connections in rhesus macaques by combining microelectrode recording with anatomical tract-tracing. Injection of multiple tracers into physiologically mapped regions AL, ML and CL of the auditory belt cortex revealed that anterior belt cortex was reciprocally connected with the frontal pole (area 10), rostral principal sulcus (area 46) and ventral prefrontal regions (areas 12 and 45), whereas the caudal belt was mainly connected with the caudal principal sulcus (area 46) and frontal eye fields (area 8a). Thus separate auditory streams originate in caudal and rostral auditory cortex and target spatial and non-spatial domains of the frontal lobe, respectively. PMID:10570492

  9. Intrinsic Connections of the Core Auditory Cortical Regions and Rostral Supratemporal Plane in the Macaque Monkey

    PubMed Central

    Scott, Brian H.; Leccese, Paul A.; Saleem, Kadharbatcha S.; Kikuchi, Yukiko; Mullarkey, Matthew P.; Fukushima, Makoto; Mishkin, Mortimer; Saunders, Richard C.

    2017-01-01

    In the ventral stream of the primate auditory cortex, cortico-cortical projections emanate from the primary auditory cortex (AI) along 2 principal axes: one mediolateral, the other caudorostral. Connections in the mediolateral direction from core, to belt, to parabelt, have been well described, but less is known about the flow of information along the supratemporal plane (STP) in the caudorostral dimension. Neuroanatomical tracers were injected throughout the caudorostral extent of the auditory core and rostral STP by direct visualization of the cortical surface. Auditory cortical areas were distinguished by SMI-32 immunostaining for neurofilament, in addition to established cytoarchitectonic criteria. The results describe a pathway comprising step-wise projections from AI through the rostral and rostrotemporal fields of the core (R and RT), continuing to the recently identified rostrotemporal polar field (RTp) and the dorsal temporal pole. Each area was strongly and reciprocally connected with the areas immediately caudal and rostral to it, though deviations from strictly serial connectivity were observed. In RTp, inputs converged from core, belt, parabelt, and the auditory thalamus, as well as higher order cortical regions. The results support a rostrally directed flow of auditory information with complex and recurrent connections, similar to the ventral stream of macaque visual cortex. PMID:26620266

  10. Auditory pathways: anatomy and physiology.

    PubMed

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream, and a dorsal mainly pattern recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus in the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.

  11. Intrinsic Connections of the Core Auditory Cortical Regions and Rostral Supratemporal Plane in the Macaque Monkey.

    PubMed

    Scott, Brian H; Leccese, Paul A; Saleem, Kadharbatcha S; Kikuchi, Yukiko; Mullarkey, Matthew P; Fukushima, Makoto; Mishkin, Mortimer; Saunders, Richard C

    2017-01-01

    In the ventral stream of the primate auditory cortex, cortico-cortical projections emanate from the primary auditory cortex (AI) along 2 principal axes: one mediolateral, the other caudorostral. Connections in the mediolateral direction from core, to belt, to parabelt, have been well described, but less is known about the flow of information along the supratemporal plane (STP) in the caudorostral dimension. Neuroanatomical tracers were injected throughout the caudorostral extent of the auditory core and rostral STP by direct visualization of the cortical surface. Auditory cortical areas were distinguished by SMI-32 immunostaining for neurofilament, in addition to established cytoarchitectonic criteria. The results describe a pathway comprising step-wise projections from AI through the rostral and rostrotemporal fields of the core (R and RT), continuing to the recently identified rostrotemporal polar field (RTp) and the dorsal temporal pole. Each area was strongly and reciprocally connected with the areas immediately caudal and rostral to it, though deviations from strictly serial connectivity were observed. In RTp, inputs converged from core, belt, parabelt, and the auditory thalamus, as well as higher order cortical regions. The results support a rostrally directed flow of auditory information with complex and recurrent connections, similar to the ventral stream of macaque visual cortex. Published by Oxford University Press 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  12. Serial and Parallel Processing in the Primate Auditory Cortex Revisited

    PubMed Central

    Recanzone, Gregg H.; Cohen, Yale E.

    2009-01-01

    Over a decade ago it was proposed that the primate auditory cortex is organized in a serial and parallel manner in which there is a dorsal stream processing spatial information and a ventral stream processing non-spatial information. This organization is similar to the “what”/“where” processing of the primate visual cortex. This review will examine several key studies, primarily electrophysiological, that have tested this hypothesis. We also review several human imaging studies that have attempted to define these processing streams in the human auditory cortex. While there is good evidence that spatial information is processed along a particular series of cortical areas, the support for a non-spatial processing stream is not as strong. Why this should be the case and how to better test this hypothesis is also discussed. PMID:19686779

  13. Neural practice effect during cross-modal selective attention: Supra-modal and modality-specific effects.

    PubMed

    Xia, Jing; Zhang, Wei; Jiang, Yizhou; Li, You; Chen, Qi

    2018-05-16

    Practice and experiences gradually shape the central nervous system, from the synaptic level to large-scale neural networks. In a natural multisensory environment, even when inundated by streams of information from multiple sensory modalities, our brain does not give equal weight to different modalities. Rather, visual information more frequently receives preferential processing and eventually dominates consciousness and behavior, i.e., visual dominance. It remains unknown, however, whether cross-modal selective attention shows supra-modal and modality-specific practice effects, and whether any such practice effect shows the same modality preferences as the visual dominance effect in the multisensory environment. To answer these two questions, we adopted a cross-modal selective attention paradigm in conjunction with a hybrid fMRI design. Behaviorally, visual performance significantly improved while auditory performance remained constant with practice, indicating that visual attention adapted behavior more flexibly with practice than auditory attention. At the neural level, the practice effect was associated with decreasing neural activity in the frontoparietal executive network and increasing activity in the default mode network, which occurred independently of the modality attended, i.e., supra-modal mechanisms. On the other hand, functional decoupling between the auditory and visual systems was observed with the progress of practice, which varied as a function of the modality attended. The auditory system was functionally decoupled from both the dorsal and ventral visual streams during auditory attention, but was decoupled only from the ventral visual stream during visual attention. To efficiently suppress irrelevant visual information with practice, auditory attention needs to additionally decouple the auditory system from the dorsal visual stream. The modality-specific mechanisms, together with the behavioral effect, thus support the visual dominance model in terms of the practice effect during cross-modal selective attention. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. On the organization of the perisylvian cortex: Insights from the electrophysiology of language. Comment on "Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain" by M.A. Arbib

    NASA Astrophysics Data System (ADS)

    Brouwer, Harm; Crocker, Matthew W.

    2016-03-01

    The Mirror System Hypothesis (MSH) on the evolution of the language-ready brain draws upon the parallel dorsal-ventral stream architecture for vision [1]. The dorsal 'how' stream provides a mapping of parietally-mediated affordances onto the motor system (supporting preshape), whereas the ventral 'what' stream engages in object recognition and visual scene analysis (supporting pantomime and verbal description). Arbib attempts to integrate this MSH perspective with a recent conceptual dorsal-ventral stream model of auditory language comprehension [5] (henceforth, the B&S model). In the B&S model, the dorsal stream engages in time-dependent combinatorial processing, which subserves syntactic structuring and linkage to action, whereas the ventral stream performs time-independent unification of conceptual schemata. These streams are integrated in the left Inferior Frontal Gyrus (lIFG), which is assumed to subserve cognitive control but no linguistic processing functions. Arbib criticizes the B&S model on two grounds: (i) the time-independence of semantic processing in the ventral stream (arguing that semantic processing is just as time-dependent as syntactic processing), and (ii) the absence of linguistic processing in the lIFG (reconciling syntactic and semantic representations is very much linguistic processing proper). Here, we provide further support for these two points of criticism on the basis of insights from the electrophysiology of language. In the course of our argument, we also sketch the contours of an alternative model that may prove better suited for integration with the MSH.

  15. Adaptations to vision-for-action in primate brain evolution: Comment on "Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain" by Michael A. Arbib

    NASA Astrophysics Data System (ADS)

    Hecht, Erin

    2016-03-01

    As Arbib [1] notes, the two-streams hypothesis [5] has provided a powerful explanatory framework for understanding visual processing. The inferotemporal ventral stream recognizes objects and agents - 'what' one is seeing. The dorsal 'how' or 'where' stream through parietal cortex processes motion, spatial location, and visuo-proprioceptive relationships - 'vision for action.' Hickok and Poeppel's [3] extension of this model to the auditory system raises the question of deeper, multi- or supra-sensory themes in dorsal vs. ventral processing. Petrides and Pandya [10] postulate that the evolution of language may have been influenced by the fact that the dorsal stream terminates in posterior Broca's area (BA44) while the ventral stream terminates in anterior Broca's area (BA45). In an intriguing potential parallel, a recent ALE meta-analysis of 54 fMRI studies found that semantic processing is located more anteriorly and superiorly than syntactic processing in Broca's area [13]. But clearly, macaques do not have language, nor other likely pre- or co-adaptations to language, such as complex imitation and tool use. What changed in the brain that enabled these functions to evolve?

  16. Neurobiological roots of language in primate audition: common computational properties.

    PubMed

    Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias; Small, Steven L; Rauschecker, Josef P

    2015-03-01

    Here, we present a new perspective on an old question: how does the neurobiology of human language relate to brain systems in nonhuman primates? We argue that higher-order language combinatorics, including sentence and discourse processing, can be situated in a unified, cross-species dorsal-ventral streams architecture for higher auditory processing, and that the functions of the dorsal and ventral streams in higher-order language processing can be grounded in their respective computational properties in primate audition. This view challenges an assumption, common in the cognitive sciences, that a nonhuman primate model forms an inherently inadequate basis for modeling higher-level language functions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. An Expanded Role for the Dorsal Auditory Pathway in Sensorimotor Control and Integration

    PubMed Central

    Rauschecker, Josef P.

    2010-01-01

    The dual-pathway model of auditory cortical processing assumes that two largely segregated processing streams originating in the lateral belt subserve the two main functions of hearing: identification of auditory “objects”, including speech; and localization of sounds in space (Rauschecker and Tian, 2000). Evidence has accumulated, chiefly from work in humans and nonhuman primates, that an antero-ventral pathway supports the former function, whereas a postero-dorsal stream supports the latter, i.e. processing of space and motion-in-space. In addition, the postero-dorsal stream has also been postulated to subserve some functions of speech and language in humans. A recent review (Rauschecker and Scott, 2009) has proposed the possibility that both functions of the postero-dorsal pathway can be subsumed under the same structural forward model: an efference copy sent from prefrontal and premotor cortex provides the basis for “optimal state estimation” in the inferior parietal lobe and in sensory areas of the posterior auditory cortex. The current article corroborates this model by adding and discussing recent evidence. PMID:20850511

  18. Impairment of Auditory-Motor Timing and Compensatory Reorganization after Ventral Premotor Cortex Stimulation

    PubMed Central

    Kornysheva, Katja; Schubotz, Ricarda I.

    2011-01-01

    Integrating auditory and motor information often requires precise timing, as in speech and music. In humans, the position of the ventral premotor cortex (PMv) in the dorsal auditory stream renders this area a node for auditory-motor integration. Yet, it remains unknown whether the PMv is critical for auditory-motor timing and which activity increases help to preserve task performance following its disruption. Sixteen healthy volunteers participated in two sessions with fMRI measured at baseline and following repetitive transcranial magnetic stimulation (rTMS) of either the left PMv or a control region. Subjects synchronized left or right finger tapping to sub-second beat rates of auditory rhythms in the experimental task, and produced self-paced tapping during spectrally matched auditory stimuli in the control task. Left PMv rTMS impaired auditory-motor synchronization accuracy in the first sub-block following stimulation (p<0.01, Bonferroni corrected), but spared motor timing and attention to task. Task-related activity increased in the homologue right PMv, but did not predict the behavioral effect of rTMS. In contrast, the anterior midline cerebellum revealed the most pronounced activity increase in less impaired subjects. The present findings suggest a critical role of the left PMv in feed-forward computations enabling accurate auditory-motor timing, which can be compensated by activity modulations in the cerebellum, but not in the homologue region contralateral to stimulation. PMID:21738657

  19. Interaction between dorsal and ventral processing streams: where, when and how?

    PubMed

    Cloutman, Lauren L

    2013-11-01

    The execution of complex visual, auditory, and linguistic behaviors requires a dynamic interplay between spatial ('where/how') and non-spatial ('what') information processed along the dorsal and ventral processing streams. However, while it is acknowledged that there must be some degree of interaction between the two processing networks, how they interact, both anatomically and functionally, is a question which remains little explored. The current review examines the anatomical, temporal, and behavioral evidence regarding three potential models of dual stream interaction: (1) computations along the two pathways proceed independently and in parallel, reintegrating within shared target brain regions; (2) processing along the separate pathways is modulated by the existence of recurrent feedback loops; and (3) information is transferred directly between the two pathways at multiple stages and locations along their trajectories. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. Emotion modulates activity in the 'what' but not 'where' auditory processing pathway.

    PubMed

    Kryklywy, James H; Macpherson, Ewan A; Greening, Steven G; Mitchell, Derek G V

    2013-11-15

    Auditory cortices can be separated into dissociable processing pathways similar to those observed in the visual domain. Emotional stimuli elicit enhanced neural activation within sensory cortices when compared to neutral stimuli. This effect is particularly notable in the ventral visual stream. Little is known, however, about how emotion interacts with dorsal processing streams, and essentially nothing is known about the impact of emotion on auditory stimulus localization. In the current study, we used fMRI in concert with individualized auditory virtual environments to investigate the effect of emotion during an auditory stimulus localization task. Surprisingly, participants were significantly slower to localize emotional relative to neutral sounds. A separate localizer scan was performed to isolate neural regions sensitive to stimulus location independent of emotion. When applied to the main experimental task, a significant main effect of location, but not emotion, was found in this ROI. A whole-brain analysis of the data revealed that posterior-medial regions of auditory cortex were modulated by sound location; however, additional anterior-lateral areas of auditory cortex demonstrated enhanced neural activity to emotional compared to neutral stimuli. The latter region resembled areas described in dual pathway models of auditory processing as the 'what' processing stream, prompting a follow-up task to generate an identity-sensitive ROI (the 'what' pathway) independent of location and emotion. Within this region, significant main effects of location and emotion were identified, as well as a significant interaction. These results suggest that emotion modulates activity in the 'what,' but not the 'where,' auditory processing pathway. Copyright © 2013 Elsevier Inc. All rights reserved.

  21. Auditory Working Memory Load Impairs Visual Ventral Stream Processing: Toward a Unified Model of Attentional Load

    ERIC Educational Resources Information Center

    Klemen, Jane; Buchel, Christian; Buhler, Mira; Menz, Mareike M.; Rose, Michael

    2010-01-01

    Attentional interference between tasks performed in parallel is known to have strong and often undesired effects. As yet, however, the mechanisms by which interference operates remain elusive. A better knowledge of these processes may facilitate our understanding of the effects of attention on human performance and the debilitating consequences…

  22. Dissociated repetition deficits in aphasia can reflect flexible interactions between left dorsal and ventral streams and gender-dimorphic architecture of the right dorsal stream

    PubMed Central

    Berthier, Marcelo L.; Froudist Walsh, Seán; Dávila, Guadalupe; Nabrozidis, Alejandro; Juárez y Ruiz de Mier, Rocío; Gutiérrez, Antonio; De-Torres, Irene; Ruiz-Cruces, Rafael; Alfaro, Francisco; García-Casares, Natalia

    2013-01-01

    Assessment of brain-damaged subjects presenting with dissociated repetition deficits after selective injury to either the left dorsal or ventral auditory pathways can provide further insight into their respective roles in verbal repetition. We evaluated repetition performance and its neural correlates using multimodal imaging (anatomical MRI, DTI, fMRI, and 18FDG-PET) in a female patient with transcortical motor aphasia (TCMA) and in a male patient with conduction aphasia (CA) who had small, contiguous but non-overlapping left perisylvian infarctions. Repetition in the TCMA patient was fully preserved except for a mild impairment in nonwords and digits, whereas the CA patient had impaired repetition of nonwords, digits and word triplet lists. Sentence repetition was impaired, but he repeated novel sentences significantly better than clichés. The TCMA patient had tissue damage and reduced metabolism in the left sensorimotor cortex and insula. DTI showed damage to the left temporo-frontal and parieto-frontal segments of the arcuate fasciculus (AF) and part of the left ventral stream, together with well-developed right dorsal and ventral streams, as has been reported in more than one-third of females. The CA patient had tissue damage and reduced metabolic activity in the left temporoparietal cortex with additional metabolic decrements in the left frontal lobe. DTI showed damage to the left temporo-parietal and temporo-frontal segments of the AF, but the ventral stream was spared. The direct segment of the AF in the right hemisphere was also absent, with only vestigial remains of the other dorsal subcomponents present, as is often found in males. fMRI during word and nonword repetition revealed bilateral perisylvian activation in the TCMA patient, suggesting recruitment of spared segments of the left dorsal stream and the right dorsal stream, with propagation of signals to temporal lobe structures suggesting a compensatory reallocation of resources via the ventral streams. 
The CA patient showed greater activation of these cortical areas than the TCMA patient, but these changes did not result in normal performance. Repetition of word triplet lists activated bilateral perisylvian cortices in both patients, but activation in the CA patient, whose performance was very poor, was restricted to small frontal and posterior temporal foci bilaterally. These findings suggest that the dissociated repetition deficits in our cases probably rely on flexible interactions between the left dorsal stream (spared segments, remnants of short tracts) and the left ventral stream, and on gender-dimorphic architecture of the right dorsal stream. PMID:24391569

  3. Dissociated repetition deficits in aphasia can reflect flexible interactions between left dorsal and ventral streams and gender-dimorphic architecture of the right dorsal stream.

    PubMed

    Berthier, Marcelo L; Froudist Walsh, Seán; Dávila, Guadalupe; Nabrozidis, Alejandro; Juárez Y Ruiz de Mier, Rocío; Gutiérrez, Antonio; De-Torres, Irene; Ruiz-Cruces, Rafael; Alfaro, Francisco; García-Casares, Natalia

    2013-01-01

    Assessment of brain-damaged subjects presenting with dissociated repetition deficits after selective injury to either the left dorsal or ventral auditory pathways can provide further insight into their respective roles in verbal repetition. We evaluated repetition performance and its neural correlates using multimodal imaging (anatomical MRI, DTI, fMRI, and (18)FDG-PET) in a female patient with transcortical motor aphasia (TCMA) and in a male patient with conduction aphasia (CA) who had small, contiguous but non-overlapping left perisylvian infarctions. Repetition in the TCMA patient was fully preserved except for a mild impairment in nonwords and digits, whereas the CA patient had impaired repetition of nonwords, digits and word triplet lists. Sentence repetition was impaired, but he repeated novel sentences significantly better than clichés. The TCMA patient had tissue damage and reduced metabolism in the left sensorimotor cortex and insula. DTI showed damage to the left temporo-frontal and parieto-frontal segments of the arcuate fasciculus (AF) and part of the left ventral stream, together with well-developed right dorsal and ventral streams, as has been reported in more than one-third of females. The CA patient had tissue damage and reduced metabolic activity in the left temporoparietal cortex with additional metabolic decrements in the left frontal lobe. DTI showed damage to the left temporo-parietal and temporo-frontal segments of the AF, but the ventral stream was spared. The direct segment of the AF in the right hemisphere was also absent, with only vestigial remains of the other dorsal subcomponents present, as is often found in males. fMRI during word and nonword repetition revealed bilateral perisylvian activation in the TCMA patient, suggesting recruitment of spared segments of the left dorsal stream and the right dorsal stream, with propagation of signals to temporal lobe structures suggesting a compensatory reallocation of resources via the ventral streams. 
The CA patient showed greater activation of these cortical areas than the TCMA patient, but these changes did not result in normal performance. Repetition of word triplet lists activated bilateral perisylvian cortices in both patients, but activation in the CA patient, whose performance was very poor, was restricted to small frontal and posterior temporal foci bilaterally. These findings suggest that the dissociated repetition deficits in our cases probably rely on flexible interactions between the left dorsal stream (spared segments, remnants of short tracts) and the left ventral stream, and on gender-dimorphic architecture of the right dorsal stream.

  4. Emergence of Spatial Stream Segregation in the Ascending Auditory Pathway.

    PubMed

    Yao, Justin D; Bremen, Peter; Middlebrooks, John C

    2015-12-09

    Stream segregation enables a listener to disentangle multiple competing sequences of sounds. A recent study from our laboratory demonstrated that cortical neurons in anesthetized cats exhibit spatial stream segregation (SSS) by synchronizing preferentially to one of two sequences of noise bursts that alternate between two source locations. Here, we examine the emergence of SSS along the ascending auditory pathway. Extracellular recordings were made in anesthetized rats from the inferior colliculus (IC), the nucleus of the brachium of the IC (BIN), the medial geniculate body (MGB), and the primary auditory cortex (A1). Stimuli consisted of interleaved sequences of broadband noise bursts that alternated between two source locations. At stimulus presentation rates of 5 and 10 bursts per second, at which human listeners report robust SSS, neural SSS is weak in the central nucleus of the IC (ICC), appears in the BIN and in approximately two-thirds of neurons in the ventral MGB (MGBv), and is prominent throughout A1. The enhancement of SSS at the cortical level reflects both increased spatial sensitivity and increased forward suppression. We demonstrate that forward suppression in A1 does not result from synaptic inhibition at the cortical level; instead, it might reflect synaptic depression in the thalamocortical projection. Together, our findings indicate that auditory streams are increasingly segregated along the ascending auditory pathway as distinct, mutually synchronized neural populations. Listeners are capable of disentangling multiple competing sequences of sounds that originate from distinct sources. This stream segregation is aided by differences in spatial location between the sources. A possible substrate of spatial stream segregation (SSS) has been described in the auditory cortex, but the mechanisms leading to those cortical responses are unknown. 
Here, we investigated SSS in three levels of the ascending auditory pathway with extracellular unit recordings in anesthetized rats. We found that neural SSS emerges within the ascending auditory pathway as a consequence of sharpening of spatial sensitivity and increasing forward suppression. Our results highlight brainstem mechanisms that culminate in SSS at the level of the auditory cortex. Copyright © 2015 Yao et al.

  5. Differential coding of conspecific vocalizations in the ventral auditory cortical stream.

    PubMed

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2014-03-26

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields of macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher-order auditory cortex along the supratemporal plane in three animals using chronically implanted high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials to conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway.

  6. Differential Coding of Conspecific Vocalizations in the Ventral Auditory Cortical Stream

    PubMed Central

    Saunders, Richard C.; Leopold, David A.; Mishkin, Mortimer; Averbeck, Bruno B.

    2014-01-01

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields of macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher-order auditory cortex along the supratemporal plane in three animals using chronically implanted high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials to conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway. PMID:24672012

  7. A Dual-Stream Neuroanatomy of Singing

    PubMed Central

    Loui, Psyche

    2015-01-01

    Singing requires effortless and efficient use of auditory and motor systems that center around the perception and production of the human voice. Although perception and production are usually tightly coupled functions, occasional mismatches between the two systems inform us of dissociable pathways in the brain systems that enable singing. Here I review the literature on perception and production in the auditory modality, and propose a dual-stream neuroanatomical model that subserves singing. I will discuss studies surrounding the neural functions of feedforward, feedback, and efference systems that control vocal monitoring, as well as the white matter pathways that connect frontal and temporal regions that are involved in perception and production. I will also consider disruptions of the perception-production network that are evident in tone-deaf individuals and poor pitch singers. Finally, by comparing expert singers against other musicians and nonmusicians, I will evaluate the possibility that singing training might offer rehabilitation from these disruptions through neuroplasticity of the perception-production network. Taken together, the best available evidence supports a model of dorsal and ventral pathways in auditory-motor integration that enables singing and is shared with language, music, speech, and human interactions in the auditory environment. PMID:26120242

  8. A Dual-Stream Neuroanatomy of Singing.

    PubMed

    Loui, Psyche

    2015-02-01

    Singing requires effortless and efficient use of auditory and motor systems that center around the perception and production of the human voice. Although perception and production are usually tightly coupled functions, occasional mismatches between the two systems inform us of dissociable pathways in the brain systems that enable singing. Here I review the literature on perception and production in the auditory modality, and propose a dual-stream neuroanatomical model that subserves singing. I will discuss studies surrounding the neural functions of feedforward, feedback, and efference systems that control vocal monitoring, as well as the white matter pathways that connect frontal and temporal regions that are involved in perception and production. I will also consider disruptions of the perception-production network that are evident in tone-deaf individuals and poor pitch singers. Finally, by comparing expert singers against other musicians and nonmusicians, I will evaluate the possibility that singing training might offer rehabilitation from these disruptions through neuroplasticity of the perception-production network. Taken together, the best available evidence supports a model of dorsal and ventral pathways in auditory-motor integration that enables singing and is shared with language, music, speech, and human interactions in the auditory environment.

  9. Cerebral Processing of Voice Gender Studied Using a Continuous Carryover fMRI Design

    PubMed Central

    Pernet, Cyril; Latinus, Marianne; Crabbe, Frances; Belin, Pascal

    2013-01-01

    Normal listeners effortlessly determine a person's gender by voice, but the cerebral mechanisms underlying this ability remain unclear. Here, we demonstrate 2 stages of cerebral processing during voice gender categorization. Using voice morphing along with an adaptation-optimized functional magnetic resonance imaging design, we found that secondary auditory cortex including the anterior part of the temporal voice areas in the right hemisphere responded primarily to acoustical distance with the previously heard stimulus. In contrast, a network of bilateral regions involving inferior prefrontal and anterior and posterior cingulate cortex reflected perceived stimulus ambiguity. These findings suggest that voice gender recognition involves neuronal populations along the auditory ventral stream responsible for auditory feature extraction, functioning in pair with the prefrontal cortex in voice gender perception. PMID:22490550

  10. Contributions of local speech encoding and functional connectivity to audio-visual speech perception

    PubMed Central

    Giordano, Bruno L; Ince, Robin A A; Gross, Joachim; Schyns, Philippe G; Panzeri, Stefano; Kayser, Christoph

    2017-01-01

    Seeing a speaker’s face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR, strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker’s face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments. DOI: http://dx.doi.org/10.7554/eLife.24763.001 PMID:28590903

  11. Neural correlates of auditory short-term memory in rostral superior temporal cortex

    PubMed Central

    Scott, Brian H.; Mishkin, Mortimer; Yin, Pingbo

    2014-01-01

    Background: Auditory short-term memory (STM) in the monkey is less robust than visual STM and may depend on a retained sensory trace, which is likely to reside in the higher-order cortical areas of the auditory ventral stream. Results: We recorded from the rostral superior temporal cortex as monkeys performed serial auditory delayed-match-to-sample (DMS). A subset of neurons exhibited modulations of their firing rate during the delay between sounds, during the sensory response, or both. This distributed subpopulation carried a predominantly sensory signal modulated by the mnemonic context of the stimulus. Excitatory and suppressive effects on match responses were dissociable in their timing, and in their resistance to sounds intervening between the sample and match. Conclusions: Like the monkeys’ behavioral performance, these neuronal effects differ from those reported in the same species during visual DMS, suggesting different neural mechanisms for retaining dynamic sounds and static images in STM. PMID:25456448

  12. Functional MRI of the vocalization-processing network in the macaque brain

    PubMed Central

    Ortiz-Rios, Michael; Kuśmierek, Paweł; DeWitt, Iain; Archakov, Denis; Azevedo, Frederico A. C.; Sams, Mikko; Jääskeläinen, Iiro P.; Keliris, Georgios A.; Rauschecker, Josef P.

    2015-01-01

    Using functional magnetic resonance imaging in awake behaving monkeys we investigated how species-specific vocalizations are represented in auditory and auditory-related regions of the macaque brain. We found clusters of active voxels along the ascending auditory pathway that responded to various types of complex sounds: inferior colliculus (IC), medial geniculate nucleus (MGN), auditory core, belt, and parabelt cortex, and other parts of the superior temporal gyrus (STG) and sulcus (STS). Regions sensitive to monkey calls were most prevalent in the anterior STG, but some clusters were also found in frontal and parietal cortex on the basis of comparisons between responses to calls and environmental sounds. Surprisingly, we found that spectrotemporal control sounds derived from the monkey calls (“scrambled calls”) also activated the parietal and frontal regions. Taken together, our results demonstrate that species-specific vocalizations in rhesus monkeys activate preferentially the auditory ventral stream, and in particular areas of the antero-lateral belt and parabelt. PMID:25883546

  13. Tonic effects of the dopaminergic ventral midbrain on the auditory cortex of awake macaque monkeys.

    PubMed

    Huang, Ying; Mylius, Judith; Scheich, Henning; Brosch, Michael

    2016-03-01

    This study shows that ongoing electrical stimulation of the dopaminergic ventral midbrain can modify neuronal activity in the auditory cortex of awake primates for several seconds. This was reflected in a decrease of the spontaneous firing and in a bidirectional modification of the power of auditory evoked potentials. We consider that both effects are due to an increase in the dopamine tone in auditory cortex induced by the electrical stimulation. Thus, the dopaminergic ventral midbrain may contribute to the tonic activity in auditory cortex that has been proposed to be involved in associating events of auditory tasks (Brosch et al. Hear Res 271:66-73, 2011) and may modulate the signal-to-noise ratio of the responses to auditory stimuli.

  14. Focused attention in a simple dichotic listening task: an fMRI experiment.

    PubMed

    Jäncke, Lutz; Specht, Karsten; Shah, N Jon; Hugdahl, Kenneth

    2003-04-01

    Whole-head functional magnetic resonance imaging (fMRI) was used in nine neurologically intact subjects to measure the hemodynamic responses in the context of dichotic listening (DL). In order to eliminate the influence of verbal information processing, tones of different frequencies were used as stimuli. Three different dichotic listening tasks were used: the subjects were instructed to concentrate on the stimuli presented in both ears (DIV), or only in the left (FL) or right (FR) ear, and to monitor the auditory input for a specific target tone. When the target tone was detected, the subjects were required to indicate this by pressing a response button. Compared to the resting state, all dichotic listening tasks evoked strong hemodynamic responses within a distributed network comprising temporal, parietal, and frontal brain areas. Thus, it is clear that dichotic listening makes use of various cognitive functions located within the dorsal and ventral streams of auditory information processing (i.e., the 'what' and 'where' streams). Comparing the three dichotic listening conditions with each other revealed significant differences only in the pre-SMA and within the left planum temporale area. The pre-SMA was generally more strongly activated during the DIV condition than during the FR and FL conditions. Within the planum temporale, the strongest activation was found during the FR condition and the weakest during the DIV condition. These findings were taken as evidence that even a simple dichotic listening task such as the one used here makes use of a distributed neural network comprising the dorsal and ventral streams of auditory information processing. In addition, these results support the previously made assumption that planum temporale activation is modulated by attentional strategies. 
Finally, the present findings revealed that the pre-SMA, which is mostly thought to be involved in higher-order motor control processes, is also involved in cognitive processes operative during dichotic listening.

  15. Neuroimaging investigations of dorsal stream processing and effects of stimulus synchrony in schizophrenia.

    PubMed

    Sanfratello, Lori; Aine, Cheryl; Stephen, Julia

    2018-05-25

    Impairments in auditory and visual processing are common in schizophrenia (SP). In the unisensory realm, visual deficits are primarily noted for the dorsal visual stream. In addition, insensitivity to timing offsets between stimuli is widely reported for SP. The aim of the present study was to test, at the physiological level, differences in dorsal/ventral stream visual processing and timing sensitivity between SP and healthy controls (HC) using MEG and a simple auditory/visual task utilizing a variety of multisensory conditions. The paradigm included all combinations of synchronous/asynchronous and central/peripheral stimuli, yielding four task conditions. Both HC and SP groups showed activation in parietal areas (dorsal visual stream) during all multisensory conditions, with parietal areas showing decreased activation for SP relative to HC, and a significantly delayed peak of activation for SP in the intraparietal sulcus (IPS). We also observed a differential effect of stimulus synchrony on HC and SP parietal responses. Furthermore, a negative correlation was found between SP positive symptoms and activity in the IPS. Taken together, our results provide evidence of impairment of the dorsal visual stream in SP during a multisensory task, along with an altered response to timing offsets between presented multisensory stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Neural correlates of auditory short-term memory in rostral superior temporal cortex.

    PubMed

    Scott, Brian H; Mishkin, Mortimer; Yin, Pingbo

    2014-12-01

    Auditory short-term memory (STM) in the monkey is less robust than visual STM and may depend on a retained sensory trace, which is likely to reside in the higher-order cortical areas of the auditory ventral stream. We recorded from the rostral superior temporal cortex as monkeys performed serial auditory delayed match-to-sample (DMS). A subset of neurons exhibited modulations of their firing rate during the delay between sounds, during the sensory response, or during both. This distributed subpopulation carried a predominantly sensory signal modulated by the mnemonic context of the stimulus. Excitatory and suppressive effects on match responses were dissociable in their timing and in their resistance to sounds intervening between the sample and match. Like the monkeys' behavioral performance, these neuronal effects differ from those reported in the same species during visual DMS, suggesting different neural mechanisms for retaining dynamic sounds and static images in STM. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Mismatch negativity (MMN) reveals inefficient auditory ventral stream function in chronic auditory comprehension impairments.

    PubMed

    Robson, Holly; Cloutman, Lauren; Keidel, James L; Sage, Karen; Drakesmith, Mark; Welbourne, Stephen

    2014-10-01

    Auditory discrimination is significantly impaired in Wernicke's aphasia (WA) and is thought to be causatively related to the language comprehension impairment which characterises the condition. This study used mismatch negativity (MMN) to investigate the neural responses corresponding to successful and impaired auditory discrimination in WA. Behavioural auditory discrimination thresholds of consonant-vowel-consonant (CVC) syllables and pure tones (PTs) were measured in WA (n = 7) and control (n = 7) participants. Threshold results were used to develop multiple-deviant MMN oddball paradigms containing deviants which were either perceptibly or non-perceptibly different from the standard stimuli. MMN analysis investigated differences associated with group, condition and perceptibility, as well as the relationship between MMN responses and comprehension (within which behavioural auditory discrimination profiles were examined). MMN waveforms were observable in response to both perceptible and non-perceptible auditory changes. Perceptibility was distinguished by MMN amplitude only in the PT condition. The WA group could be distinguished from controls by an increase in MMN response latency to CVC stimulus change. Correlation analyses displayed a relationship between behavioural CVC discrimination and MMN amplitude in the control group, where greater amplitude corresponded to better discrimination. The WA group displayed the inverse effect: both discrimination accuracy and auditory comprehension scores were reduced with increased MMN amplitude. In the WA group, a further correlation was observed between the lateralisation of the MMN response and CVC discrimination accuracy: the greater the bilateral involvement, the better the discrimination accuracy. 
The results from this study provide further evidence for the nature of auditory comprehension impairment in WA and indicate that the auditory discrimination deficit is grounded in a reduced ability to engage in efficient hierarchical processing and the construction of invariant auditory objects. Correlation results suggest that people with chronic WA may rely on an inefficient, noisy right hemisphere auditory stream when attempting to process speech stimuli.

  18. Skill dependent audiovisual integration in the fusiform induces repetition suppression.

    PubMed

    McNorgan, Chris; Booth, James R

    2015-02-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience-driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task on word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest that learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Skill Dependent Audiovisual Integration in the Fusiform Induces Repetition Suppression

    PubMed Central

    McNorgan, Chris; Booth, James R.

    2015-01-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience-driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task on word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest that learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. PMID:25585276

  20. Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain.

    PubMed

    Arbib, Michael A

    2016-03-01

    We make the case for developing a Computational Comparative Neuroprimatology to inform the analysis of the function and evolution of the human brain. First, we update the mirror system hypothesis on the evolution of the language-ready brain by (i) modeling action, action recognition, and opportunistic scheduling in macaque brains to hypothesize the nature of the last common ancestor of macaque and human (LCA-m); we then (ii) introduce dynamic brain modeling to show how apes could acquire gesture through ontogenetic ritualization, hypothesizing the nature of evolution from LCA-m to the last common ancestor of chimpanzee and human (LCA-c). We then (iii) hypothesize the role of imitation, pantomime, protosign and protospeech in biological and cultural evolution from LCA-c to Homo sapiens with a language-ready brain. Second, we suggest how cultural evolution in Homo sapiens led from protolanguages to full languages with grammar and compositional semantics. Third, we assess the similarities and differences between the dorsal and ventral streams in audition and vision as the basis for presenting and comparing two models of language processing in the human brain: models of (i) the auditory dorsal and ventral streams in sentence comprehension and (ii) the visual dorsal and ventral streams in defining "what language is about" in both production and perception of utterances related to visual scenes, which together provide the basis for (iii) a first step towards a synthesis and a look at challenges for further research. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain

    NASA Astrophysics Data System (ADS)

    Arbib, Michael A.

    2016-03-01

    We make the case for developing a Computational Comparative Neuroprimatology to inform the analysis of the function and evolution of the human brain. First, we update the mirror system hypothesis on the evolution of the language-ready brain by (i) modeling action and action recognition and opportunistic scheduling of macaque brains to hypothesize the nature of the last common ancestor of macaque and human (LCA-m); and then we (ii) introduce dynamic brain modeling to show how apes could acquire gesture through ontogenetic ritualization, hypothesizing the nature of evolution from LCA-m to the last common ancestor of chimpanzee and human (LCA-c). We then (iii) hypothesize the role of imitation, pantomime, protosign and protospeech in biological and cultural evolution from LCA-c to Homo sapiens with a language-ready brain. Second, we suggest how cultural evolution in Homo sapiens led from protolanguages to full languages with grammar and compositional semantics. Third, we assess the similarities and differences between the dorsal and ventral streams in audition and vision as the basis for presenting and comparing two models of language processing in the human brain: models of (i) the auditory dorsal and ventral streams in sentence comprehension and (ii) the visual dorsal and ventral streams in defining "what language is about" in both production and perception of utterances related to visual scenes provide the basis for (iii) a first step towards a synthesis and a look at challenges for further research.

  2. Dynamic speech representations in the human temporal lobe.

    PubMed

    Leonard, Matthew K; Chang, Edward F

    2014-09-01

    Speech perception requires rapid integration of acoustic input with context-dependent knowledge. Recent methodological advances have allowed researchers to identify underlying information representations in primary and secondary auditory cortex and to examine how context modulates these representations. We review recent studies that focus on contextual modulations of neural activity in the superior temporal gyrus (STG), a major hub for spectrotemporal encoding. Recent findings suggest a highly interactive flow of information processing through the auditory ventral stream, including influences of higher-level linguistic and metalinguistic knowledge, even within individual areas. Such mechanisms may give rise to more abstract representations, such as those for words. We discuss the importance of characterizing representations of context-dependent and dynamic patterns of neural activity in the approach to speech perception research. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Neural correlates of auditory scene analysis and perception

    PubMed Central

    Cohen, Yale E.

    2014-01-01

    The auditory system is designed to transform acoustic information from low-level sensory representations into perceptual representations. These perceptual representations are the computational result of the auditory system's ability to group and segregate spectral, spatial and temporal regularities in the acoustic environment into stable perceptual units (i.e., sounds or auditory objects). Current evidence suggests that the cortex, specifically the ventral auditory pathway, is responsible for the computations most closely related to perceptual representations. Here, we discuss how the transformations along the ventral auditory pathway relate to auditory percepts, with special attention paid to the processing of vocalizations and categorization, and explore recent models of how these areas may carry out these computations. PMID:24681354

  4. Interactions between dorsal and ventral streams for controlling skilled grasp

    PubMed Central

    van Polanen, Vonne; Davare, Marco

    2015-01-01

    The two visual systems hypothesis suggests processing of visual information into two distinct routes in the brain: a dorsal stream for the control of actions and a ventral stream for the identification of objects. Recently, increasing evidence has shown that the dorsal and ventral streams are not strictly independent, but do interact with each other. In this paper, we argue that the interactions between dorsal and ventral streams are important for controlling complex object-oriented hand movements, especially skilled grasp. Anatomical studies have reported the existence of direct connections between dorsal and ventral stream areas. These physiological interconnections appear to be gradually more active as the precision demands of the grasp become higher. It is hypothesised that the dorsal stream needs to retrieve detailed information about object identity, stored in ventral stream areas, when the object properties require complex fine-tuning of the grasp. In turn, the ventral stream might receive up to date grasp-related information from dorsal stream areas to refine the object internal representation. Future research will provide direct evidence for which specific areas of the two streams interact, the timing of their interactions and in which behavioural context they occur. PMID:26169317

  5. Multiple brain networks underpinning word learning from fluent speech revealed by independent component analysis.

    PubMed

    López-Barroso, Diana; Ripollés, Pablo; Marco-Pallarés, Josep; Mohammadi, Bahram; Münte, Thomas F; Bachoud-Lévi, Anne-Catherine; Rodriguez-Fornells, Antoni; de Diego-Balaguer, Ruth

    2015-04-15

    Although neuroimaging studies using standard subtraction-based analysis from functional magnetic resonance imaging (fMRI) have suggested that frontal and temporal regions are involved in word learning from fluent speech, the possible contribution of different brain networks during this type of learning is still largely unknown. Indeed, univariate fMRI analyses cannot identify the full extent of distributed networks that are engaged by a complex task such as word learning. Here we used Independent Component Analysis (ICA) to characterize the different brain networks subserving word learning from an artificial language speech stream. Results were replicated in a second cohort of participants with a different linguistic background. Four spatially independent networks were associated with the task in both cohorts: (i) a dorsal Auditory-Premotor network; (ii) a dorsal Sensory-Motor network; (iii) a dorsal Fronto-Parietal network; and (iv) a ventral Fronto-Temporal network. The level of engagement of these networks varied through the learning period with only the dorsal Auditory-Premotor network being engaged across all blocks. In addition, the connectivity strength of this network in the second block of the learning phase correlated with the individual variability in word learning performance. These findings suggest that: (i) word learning relies on segregated connectivity patterns involving dorsal and ventral networks; and (ii) specifically, the dorsal auditory-premotor network connectivity strength is directly correlated with word learning performance. Copyright © 2015 Elsevier Inc. All rights reserved.
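The Independent Component Analysis used in this record separates fMRI data into maximally independent components. A toy FastICA sketch on synthetic sources (the mixing matrix, source count, and tanh contrast function are illustrative assumptions, not the authors' actual group-ICA pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical "network" time courses (super-Gaussian sources) mixed
# into four observed voxel time series: a toy stand-in for ICA on fMRI.
n = 2000
S = rng.laplace(size=(2, n))                 # independent sources
A = np.array([[1.0, 0.5], [0.4, 1.0],
              [0.3, 0.8], [0.9, 0.2]])       # mixing matrix (voxels x sources)
X = A @ S                                    # observed data

# Whiten: project onto the two leading principal components, unit variance.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
K = E[:, -2:] / np.sqrt(d[-2:])
Z = K.T @ X                                  # whitened data (2 x n)

# FastICA with deflation and the tanh nonlinearity.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        g = np.tanh(Z.T @ w)
        w_new = Z @ g / n - (1 - g**2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # decorrelate from earlier components
        w_new /= np.linalg.norm(w_new)
        done = abs(abs(w_new @ w) - 1) < 1e-8
        w = w_new
        if done:
            break
    W[i] = w

S_hat = W @ Z                                # recovered sources (up to sign/order)
match = np.abs(np.corrcoef(np.vstack([S_hat, S]))[:2, 2:])
print(match)
```

Each recovered component should correlate strongly (in absolute value) with exactly one true source; sign and ordering are arbitrary, as in any ICA decomposition.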

  6. Resting state functional connectivity of the ventral auditory pathway in musicians with absolute pitch.

    PubMed

    Kim, Seung-Goo; Knösche, Thomas R

    2017-08-01

    Absolute pitch (AP) is the ability to recognize pitch chroma of tonal sound without external references, providing a unique model of the human auditory system (Zatorre: Nat Neurosci 6 () 692-695). In a previous study (Kim and Knösche: Hum Brain Mapp () 3486-3501), we identified enhanced intracortical myelination in the right planum polare (PP) in musicians with AP, which could be a potential site for perceptional processing of pitch chroma information. We speculated that this area, which initiates the ventral auditory pathway, might be crucially involved in the perceptual stage of the AP process in the context of the "dual pathway hypothesis" that suggests the role of the ventral pathway in processing nonspatial information related to the identity of an auditory object (Rauschecker: Eur J Neurosci 41 () 579-585). To test our conjecture on the ventral pathway, we investigated resting state functional connectivity (RSFC) using functional magnetic resonance imaging (fMRI) from musicians with varying degrees of AP. Should our hypothesis be correct, RSFC via the ventral pathway is expected to be stronger in musicians with AP, whereas no such group effect is predicted for RSFC via the dorsal pathway. In the current data, we found greater RSFC between the right PP and bilateral anteroventral auditory cortices in musicians with AP. In contrast, we did not find any group difference in the RSFC of the planum temporale (PT) between musicians with and without AP. We believe that these findings support our conjecture on the critical role of the ventral pathway in AP recognition. Hum Brain Mapp 38:3899-3916, 2017. © 2017 Wiley Periodicals, Inc.
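Seed-based resting state functional connectivity of the kind tested in this record reduces to correlating a seed time series with a target time series, usually Fisher z-transformed before group statistics. A minimal sketch with synthetic time series (the region names, TR count, and signal model are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical resting-state time series (200 TRs) for a right planum polare
# seed and an anteroventral auditory-cortex target sharing a common signal.
common = rng.normal(size=200)
seed = common + 0.5 * rng.normal(size=200)
target = common + 0.5 * rng.normal(size=200)

r = np.corrcoef(seed, target)[0, 1]   # Pearson RSFC between the two regions
z = np.arctanh(r)                     # Fisher z, the usual group-level unit
print(f"r = {r:.2f}, z = {z:.2f}")
```

The Fisher transform makes correlation values approximately normally distributed, which is why group comparisons (AP vs. non-AP musicians) are typically run on z rather than on r.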

  7. Stream-related preferences of inputs to the superior colliculus from areas of dorsal and ventral streams of mouse visual cortex.

    PubMed

    Wang, Quanxin; Burkhalter, Andreas

    2013-01-23

    Previous studies of intracortical connections in mouse visual cortex have revealed two subnetworks that resemble the dorsal and ventral streams in primates. Although calcium imaging studies have shown that many areas of the ventral stream have high spatial acuity whereas areas of the dorsal stream are highly sensitive for transient visual stimuli, there are some functional inconsistencies that challenge a simple grouping into "what/perception" and "where/action" streams known in primates. The superior colliculus (SC) is a major center for processing of multimodal sensory information and the motor control of orienting the eyes, head, and body. Visual processing is performed in superficial layers, whereas premotor activity is generated in deep layers of the SC. Because the SC is known to receive input from visual cortex, we asked whether the projections from 10 visual areas of the dorsal and ventral streams terminate in differential depth profiles within the SC. We found that inputs from primary visual cortex are by far the strongest. Projections from the ventral stream were substantially weaker, whereas the sparsest input originated from areas of the dorsal stream. Importantly, we found that ventral stream inputs terminated in superficial layers, whereas dorsal stream inputs tended to be patchy and either projected equally to superficial and deep layers or strongly preferred deep layers. The results suggest that the anatomically defined ventral and dorsal streams contain areas that belong to distinct functional systems, specialized for the processing of visual information and visually guided action, respectively.

  8. Cellular and Molecular Underpinnings of Neuronal Assembly in the Central Auditory System during Mouse Development

    PubMed Central

    Di Bonito, Maria; Studer, Michèle

    2017-01-01

    During development, the organization of the auditory system into distinct functional subcircuits depends on the spatially and temporally ordered sequence of neuronal specification, differentiation, migration and connectivity. Regional patterning along the antero-posterior axis and neuronal subtype specification along the dorso-ventral axis intersect to determine proper neuronal fate and assembly of rhombomere-specific auditory subcircuits. By taking advantage of the increasing number of transgenic mouse lines, recent studies have expanded the knowledge of developmental mechanisms involved in the formation and refinement of the auditory system. Here, we summarize several findings dealing with the molecular and cellular mechanisms that underlie the assembly of central auditory subcircuits during mouse development, focusing primarily on the rhombomeric and dorso-ventral origin of auditory nuclei and their associated molecular genetic pathways. PMID:28469562

  9. Deep brain stimulation of the ventral hippocampus restores deficits in processing of auditory evoked potentials in a rodent developmental disruption model of schizophrenia.

    PubMed

    Ewing, Samuel G; Grace, Anthony A

    2013-02-01

    Existing antipsychotic drugs are most effective at treating the positive symptoms of schizophrenia, but their relative efficacy is low and they are associated with considerable side effects. In this study deep brain stimulation of the ventral hippocampus was performed in a rodent model of schizophrenia (MAM-E17) in an attempt to alleviate one set of neurophysiological alterations observed in this disorder. Bipolar stimulating electrodes were fabricated and implanted, bilaterally, into the ventral hippocampus of rats. High-frequency stimulation was delivered bilaterally via a custom-made stimulation device, and both spectral analysis (power and coherence) of resting state local field potentials and the amplitude of auditory evoked potential components during a standard inhibitory gating paradigm were examined. MAM rats exhibited alterations in specific components of the auditory evoked potential in the infralimbic cortex, the core of the nucleus accumbens, the mediodorsal thalamic nucleus, and the ventral hippocampus in the left hemisphere only. DBS was effective in reversing these evoked deficits in the infralimbic cortex and the mediodorsal thalamic nucleus of MAM-treated rats to levels similar to those observed in control animals. In contrast, stimulation did not alter evoked potentials in control rats. No deficits or stimulation-induced alterations were observed in the prelimbic and orbitofrontal cortices, the shell of the nucleus accumbens or the ventral tegmental area. These data indicate a normalization of deficits in generating auditory evoked potentials, induced by a developmental disruption, by acute high-frequency electrical stimulation of the ventral hippocampus. Copyright © 2012 Elsevier B.V. All rights reserved.
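The spectral analysis named in this record (power and coherence of resting-state local field potentials) can be sketched with segment-averaged FFT estimates. All signal parameters below are invented, and real pipelines normally use tapered windows (Welch or multitaper) rather than the rectangular segments used here for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n = 1000, 10_000                        # 10 s of simulated LFP at 1 kHz

# Two hypothetical LFP channels sharing a 40 Hz component plus independent noise.
t = np.arange(n) / fs
shared = np.sin(2 * np.pi * 40 * t)
x = shared + rng.normal(size=n)
y = shared + rng.normal(size=n)

def csd(a, b, nper=1000):
    """Cross-spectral density averaged over non-overlapping segments."""
    segs = n // nper
    acc = 0
    for k in range(segs):
        fa = np.fft.rfft(a[k * nper:(k + 1) * nper])
        fb = np.fft.rfft(b[k * nper:(k + 1) * nper])
        acc = acc + fa * np.conj(fb)
    return acc / segs

freqs = np.fft.rfftfreq(1000, 1 / fs)       # bin grid matching nper=1000
pxx, pyy, pxy = csd(x, x), csd(y, y), csd(x, y)
coherence = np.abs(pxy) ** 2 / (pxx.real * pyy.real)

peak = freqs[np.argmax(pxx.real[1:]) + 1]   # strongest non-DC power bin
print(f"power peak at {peak:.0f} Hz, coherence there = {coherence[40]:.2f}")
```

Power identifies where each channel has energy; magnitude-squared coherence (bounded between 0 and 1) identifies frequencies at which the two channels covary, here the shared 40 Hz component.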

  10. Deep brain stimulation of the ventral hippocampus restores deficits in processing of auditory evoked potentials in a rodent developmental disruption model of schizophrenia

    PubMed Central

    Ewing, Samuel G.; Grace, Anthony A.

    2012-01-01

    Existing antipsychotic drugs are most effective at treating the positive symptoms of schizophrenia, but their relative efficacy is low and they are associated with considerable side effects. In this study deep brain stimulation of the ventral hippocampus was performed in a rodent model of schizophrenia (MAM-E17) in an attempt to alleviate one set of neurophysiological alterations observed in this disorder. Bipolar stimulating electrodes were fabricated and implanted, bilaterally, into the ventral hippocampus of rats. High-frequency stimulation was delivered bilaterally via a custom-made stimulation device, and both spectral analysis (power and coherence) of resting state local field potentials and the amplitude of auditory evoked potential components during a standard inhibitory gating paradigm were examined. MAM rats exhibited alterations in specific components of the auditory evoked potential in the infralimbic cortex, the core of the nucleus accumbens, the mediodorsal thalamic nucleus, and the ventral hippocampus in the left hemisphere only. DBS was effective in reversing these evoked deficits in the infralimbic cortex and the mediodorsal thalamic nucleus of MAM-treated rats to levels similar to those observed in control animals. In contrast, stimulation did not alter evoked potentials in control rats. No deficits or stimulation-induced alterations were observed in the prelimbic and orbitofrontal cortices, the shell of the nucleus accumbens or the ventral tegmental area. These data indicate a normalization of deficits in generating auditory evoked potentials, induced by a developmental disruption, by acute high-frequency electrical stimulation of the ventral hippocampus. PMID:23269227

  11. Feature integration and object representations along the dorsal stream visual hierarchy

    PubMed Central

    Perry, Carolyn Jeane; Fallah, Mazyar

    2014-01-01

    The visual system is split into two processing streams: a ventral stream that receives color and form information and a dorsal stream that receives motion information. Each stream processes that information hierarchically, with each stage building upon the previous. In the ventral stream this leads to the formation of object representations that ultimately allow for object recognition regardless of changes in the surrounding environment. In the dorsal stream, this hierarchical processing has classically been thought to lead to the computation of complex motion in three dimensions. However, there is evidence to suggest that there is integration of both dorsal and ventral stream information into motion computation processes, giving rise to intermediate object representations, which facilitate object selection and decision making mechanisms in the dorsal stream. First we review the hierarchical processing of motion along the dorsal stream and the building up of object representations along the ventral stream. Then we discuss recent work on the integration of ventral and dorsal stream features that lead to intermediate object representations in the dorsal stream. Finally we propose a framework describing how and at what stage different features are integrated into dorsal visual stream object representations. Determining the integration of features along the dorsal stream is necessary to understand not only how the dorsal stream builds up an object representation but also which computations are performed on object representations instead of local features. PMID:25140147

  12. Posterior Parietal Cortex Drives Inferotemporal Activations During Three-Dimensional Object Vision.

    PubMed

    Van Dromme, Ilse C; Premereur, Elsie; Verhoef, Bram-Ernst; Vanduffel, Wim; Janssen, Peter

    2016-04-01

    The primate visual system consists of a ventral stream, specialized for object recognition, and a dorsal visual stream, which is crucial for spatial vision and actions. However, little is known about the interactions and information flow between these two streams. We investigated these interactions within the network processing three-dimensional (3D) object information, comprising both the dorsal and ventral stream. Reversible inactivation of the macaque caudal intraparietal area (CIP) during functional magnetic resonance imaging (fMRI) reduced fMRI activations in posterior parietal cortex in the dorsal stream and, surprisingly, also in the inferotemporal cortex (ITC) in the ventral visual stream. Moreover, CIP inactivation caused a perceptual deficit in a depth-structure categorization task. CIP-microstimulation during fMRI further suggests that CIP projects via posterior parietal areas to the ITC in the ventral stream. To our knowledge, these results provide the first causal evidence for the flow of visual 3D information from the dorsal stream to the ventral stream, and identify CIP as a key area for depth-structure processing. Thus, combining reversible inactivation and electrical microstimulation during fMRI provides a detailed view of the functional interactions between the two visual processing streams.

  13. Posterior Parietal Cortex Drives Inferotemporal Activations During Three-Dimensional Object Vision

    PubMed Central

    Van Dromme, Ilse C.; Premereur, Elsie; Verhoef, Bram-Ernst; Vanduffel, Wim; Janssen, Peter

    2016-01-01

    The primate visual system consists of a ventral stream, specialized for object recognition, and a dorsal visual stream, which is crucial for spatial vision and actions. However, little is known about the interactions and information flow between these two streams. We investigated these interactions within the network processing three-dimensional (3D) object information, comprising both the dorsal and ventral stream. Reversible inactivation of the macaque caudal intraparietal area (CIP) during functional magnetic resonance imaging (fMRI) reduced fMRI activations in posterior parietal cortex in the dorsal stream and, surprisingly, also in the inferotemporal cortex (ITC) in the ventral visual stream. Moreover, CIP inactivation caused a perceptual deficit in a depth-structure categorization task. CIP-microstimulation during fMRI further suggests that CIP projects via posterior parietal areas to the ITC in the ventral stream. To our knowledge, these results provide the first causal evidence for the flow of visual 3D information from the dorsal stream to the ventral stream, and identify CIP as a key area for depth-structure processing. Thus, combining reversible inactivation and electrical microstimulation during fMRI provides a detailed view of the functional interactions between the two visual processing streams. PMID:27082854

  14. Degraded speech sound processing in a rat model of fragile X syndrome

    PubMed Central

    Engineer, Crystal T.; Centanni, Tracy M.; Im, Kwok W.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Kilgard, Michael P.

    2014-01-01

    Fragile X syndrome is the most common inherited form of intellectual disability and the leading genetic cause of autism. Impaired phonological processing in fragile X syndrome interferes with the development of language skills. Although auditory cortex responses are known to be abnormal in fragile X syndrome, it is not clear how these differences impact speech sound processing. This study provides the first evidence that the cortical representation of speech sounds is impaired in Fmr1 knockout rats, despite normal speech discrimination behavior. Evoked potentials and spiking activity in response to speech sounds, noise burst trains, and tones were significantly degraded in the primary auditory cortex, the anterior auditory field, and the ventral auditory field. Neurometric analysis of speech evoked activity using a pattern classifier confirmed that activity in these fields contains significantly less information about speech sound identity in Fmr1 knockout rats compared to control rats. Responses were normal in the posterior auditory field, which is associated with sound localization. The greatest impairment was observed in the ventral auditory field, which is related to emotional regulation. Dysfunction in the ventral auditory field may contribute to poor emotional regulation in fragile X syndrome and may help explain the observation that later auditory evoked responses are more disturbed in fragile X syndrome compared to earlier responses. Rodent models of fragile X syndrome are likely to prove useful for understanding the biological basis of fragile X syndrome and for testing candidate therapies. PMID:24713347
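The neurometric pattern-classifier analysis mentioned in this record can be illustrated with a nearest-centroid decoder on synthetic population responses (the neuron counts, stimulus pair, and noise levels are invented, and the original study's classifier details differ; the point is only how "information about speech sound identity" becomes a decoding accuracy):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical trial-by-trial responses of 20 neurons to two speech sounds.
def simulate(template, trials=40, noise=1.0):
    return template + noise * rng.normal(size=(trials, template.size))

t_dad = rng.uniform(0, 5, 20)                # mean response pattern, sound 1
t_bad = rng.uniform(0, 5, 20)                # mean response pattern, sound 2
X = np.vstack([simulate(t_dad), simulate(t_bad)])
labels = np.array([0] * 40 + [1] * 40)

# Leave-one-out nearest-centroid classification: a simple neurometric decoder.
correct = 0
for i in range(len(labels)):
    mask = np.ones(len(labels), bool)
    mask[i] = False                          # hold out the test trial
    c0 = X[mask & (labels == 0)].mean(axis=0)
    c1 = X[mask & (labels == 1)].mean(axis=0)
    pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
    correct += pred == labels[i]

accuracy = correct / len(labels)
print(f"decoding accuracy: {accuracy:.2f}")
```

In this framing, "degraded" cortical representations correspond to noisier or less separated response patterns, which directly lower the classifier's accuracy.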

  15. Cortical Representations of Speech in a Multitalker Auditory Scene.

    PubMed

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. 
We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene, with both attended and unattended speech streams represented with almost equal fidelity. We also show that higher-order auditory cortical areas, by contrast, represent an attended speech stream separately from, and with significantly higher fidelity than, unattended speech streams. Furthermore, the unattended background streams are represented as a single undivided background object rather than as distinct background objects. Copyright © 2017 the authors.
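The systems-theoretic stimulus reconstruction used in this record is, in its simplest form, a regularized linear mapping from multichannel neural responses back to the stimulus. A toy ridge-regression sketch with synthetic data (the channel count, noise level, and instantaneous linear model are simplifying assumptions; real MEG decoders use time-lagged response features):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical stimulus envelope and 30 "MEG channels", each a noisy,
# linearly weighted copy of the envelope.
n = 3000
envelope = rng.normal(size=n)
weights = rng.normal(size=30)
R = np.outer(envelope, weights) + 2.0 * rng.normal(size=(n, 30))

# Fit a ridge decoder on the first half, reconstruct on the second half.
train, test = slice(0, 1500), slice(1500, 3000)
lam = 10.0
g = np.linalg.solve(R[train].T @ R[train] + lam * np.eye(30),
                    R[train].T @ envelope[train])
reconstruction = R[test] @ g

r = np.corrcoef(reconstruction, envelope[test])[0, 1]
print(f"reconstruction accuracy r = {r:.2f}")
```

Reconstruction fidelity (the correlation r between decoded and actual stimulus) is the quantity such studies compare across attended and unattended streams, and across cortical processing stages.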

  16. Tracking Training-Related Plasticity by Combining fMRI and DTI: The Right Hemisphere Ventral Stream Mediates Musical Syntax Processing.

    PubMed

    Oechslin, Mathias S; Gschwind, Markus; James, Clara E

    2018-04-01

    Neuroimaging studies have evidenced the involvement of right prefrontal regions in musical syntax processing, a functional homolog of left-hemispheric syntax processing in language, but the underlying white matter connectivity has so far remained unexplored. In the current experiment, we investigated the underlying pathway architecture in subjects with 3 levels of musical expertise. Employing diffusion tensor imaging tractography, seeded from our previous functional magnetic resonance imaging study on music syntax processing in the same participants, we identified a pathway in the right ventral stream that connects the middle temporal lobe with the inferior frontal cortex via the extreme capsule, and corresponds to the left hemisphere ventral stream, classically attributed to syntax processing in language comprehension. Additional morphometric consistency analyses allowed dissociating tract core from more dispersed fiber portions. Musical expertise was related to higher tract consistency of the right ventral stream pathway. Specifically, tract consistency in this pathway predicted the sensitivity for musical syntax violations. We conclude that enduring musical practice sculpts ventral stream architecture. Our results suggest that training-related pathway plasticity facilitates the right hemisphere ventral stream information transfer, supporting an improved sound-to-meaning mapping in music.

  17. Two Visual Pathways in Primates Based on Sampling of Space: Exploitation and Exploration of Visual Information.

    PubMed

    Sheth, Bhavin R; Young, Ryan

    2016-01-01

    Evidence is strong that the visual pathway is segregated into two distinct streams: ventral and dorsal. Two proposals theorize that the pathways are segregated in function: the ventral stream processes information about object identity, whereas the dorsal stream, according to one model, processes information about object location and, according to another, is responsible for executing movements under visual control. The models are influential; however, recent experimental evidence challenges them, e.g., the ventral stream is not solely responsible for object recognition; conversely, its function is not strictly limited to object vision; the dorsal stream is not responsible by itself for spatial vision or visuomotor control; conversely, its function extends beyond vision or visuomotor control. In their place, we suggest a robust dichotomy consisting of a ventral stream selectively sampling high-resolution/focal spaces, and a dorsal stream sampling nearly all of space with reduced foveal bias. The proposal hews closely to the theme of embodied cognition: Function arises as a consequence of an extant sensory underpinning. A continuous, not sharp, segregation based on function emerges, and carries with it an undercurrent of an exploitation-exploration dichotomy. Under this interpretation, cells of the ventral stream, which individually have more punctate receptive fields that generally include the fovea or parafovea, provide detailed information about object shapes and features and lead to the systematic exploitation of said information; cells of the dorsal stream, which individually have large receptive fields, contribute to visuospatial perception, provide information about the presence/absence of salient objects and their locations for novel exploration and subsequent exploitation by the ventral stream or, under certain conditions, the dorsal stream. 
We leverage the dichotomy to unify neuropsychological cases under a common umbrella, account for the increased prevalence of multisensory integration in the dorsal stream under a Bayesian framework, predict conditions under which object recognition utilizes the ventral or dorsal stream, and explain why cells of the dorsal stream drive sensorimotor control and motion processing and have poorer feature selectivity. Finally, the model speculates on a dynamic interaction between the two streams that underscores a unified, seamless perception. Existing theories are subsumed under our proposal.

  18. Two Visual Pathways in Primates Based on Sampling of Space: Exploitation and Exploration of Visual Information

    PubMed Central

    Sheth, Bhavin R.; Young, Ryan

    2016-01-01

    Evidence is strong that the visual pathway is segregated into two distinct streams: ventral and dorsal. Two proposals theorize that the pathways are segregated in function: the ventral stream processes information about object identity, whereas the dorsal stream, according to one model, processes information about object location and, according to another, is responsible for executing movements under visual control. The models are influential; however, recent experimental evidence challenges them, e.g., the ventral stream is not solely responsible for object recognition; conversely, its function is not strictly limited to object vision; the dorsal stream is not responsible by itself for spatial vision or visuomotor control; conversely, its function extends beyond vision or visuomotor control. In their place, we suggest a robust dichotomy consisting of a ventral stream selectively sampling high-resolution/focal spaces, and a dorsal stream sampling nearly all of space with reduced foveal bias. The proposal hews closely to the theme of embodied cognition: Function arises as a consequence of an extant sensory underpinning. A continuous, not sharp, segregation based on function emerges, and carries with it an undercurrent of an exploitation-exploration dichotomy. Under this interpretation, cells of the ventral stream, which individually have more punctate receptive fields that generally include the fovea or parafovea, provide detailed information about object shapes and features and lead to the systematic exploitation of said information; cells of the dorsal stream, which individually have large receptive fields, contribute to visuospatial perception, provide information about the presence/absence of salient objects and their locations for novel exploration and subsequent exploitation by the ventral stream or, under certain conditions, the dorsal stream. 
We leverage the dichotomy to unify neuropsychological cases under a common umbrella, account for the increased prevalence of multisensory integration in the dorsal stream under a Bayesian framework, predict conditions under which object recognition utilizes the ventral or dorsal stream, and explain why cells of the dorsal stream drive sensorimotor control and motion processing and have poorer feature selectivity. Finally, the model speculates on a dynamic interaction between the two streams that underscores a unified, seamless perception. Existing theories are subsumed under our proposal. PMID:27920670

  19. Behavioral Measures of Auditory Streaming in Ferrets (Mustela putorius)

    PubMed Central

    Ma, Ling; Yin, Pingbo; Micheyl, Christophe; Oxenham, Andrew J.; Shamma, Shihab A.

    2015-01-01

    An important aspect of the analysis of auditory “scenes” relates to the perceptual organization of sound sequences into auditory “streams.” In this study, we adapted two auditory perception tasks, used in recent human psychophysical studies, to obtain behavioral measures of auditory streaming in ferrets (Mustela putorius). One task involved the detection of shifts in the frequency of tones within an alternating tone sequence. The other task involved the detection of a stream of regularly repeating target tones embedded within a randomly varying multitone background. In both tasks, performance was measured as a function of various stimulus parameters, which previous psychophysical studies in humans have shown to influence auditory streaming. Ferret performance in the two tasks was found to vary as a function of these parameters in a way that is qualitatively consistent with the human data. These results suggest that auditory streaming occurs in ferrets, and that the two tasks described here may provide a valuable tool in future behavioral and neurophysiological studies of the phenomenon. PMID:20695663
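    The alternating tone sequences used in streaming tasks like the first one above are typically A-B-A-B patterns of pure tones whose frequency separation controls whether listeners hear one stream or two. A minimal sketch of such a stimulus (sample rate, durations, and separation are illustrative choices, not the parameters used in this study):

    ```python
    import numpy as np

    def tone(freq_hz, dur_s, sr=16000, ramp_s=0.005):
        """Pure tone with raised-cosine onset/offset ramps to avoid clicks."""
        t = np.arange(int(dur_s * sr)) / sr
        y = np.sin(2 * np.pi * freq_hz * t)
        n_ramp = int(ramp_s * sr)
        env = np.ones_like(y)
        ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
        env[:n_ramp] = ramp
        env[-n_ramp:] = ramp[::-1]
        return y * env

    def alternating_sequence(f_a=1000.0, df_semitones=6, n_pairs=10,
                             tone_dur=0.1, gap=0.05, sr=16000):
        """A-B-A-B... sequence; larger df_semitones favors hearing two streams."""
        f_b = f_a * 2 ** (df_semitones / 12)  # B tone df semitones above A
        silence = np.zeros(int(gap * sr))
        parts = []
        for _ in range(n_pairs):
            parts += [tone(f_a, tone_dur, sr), silence,
                      tone(f_b, tone_dur, sr), silence]
        return np.concatenate(parts)

    seq = alternating_sequence()
    ```

    With small frequency separations the A and B tones tend to group into a single galloping stream; increasing the separation (or the tempo) promotes segregation into two streams, which is the perceptual variable these tasks measure.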

  20. Differential modulation of visual object processing in dorsal and ventral stream by stimulus visibility.

    PubMed

    Ludwig, Karin; Sterzer, Philipp; Kathmann, Norbert; Hesselmann, Guido

    2016-10-01

    As a functional organization principle in cortical visual information processing, the influential 'two visual systems' hypothesis proposes a division of labor between a dorsal "vision-for-action" and a ventral "vision-for-perception" stream. A core assumption of this model is that the two visual streams are differentially involved in visual awareness: ventral stream processing is closely linked to awareness, while dorsal stream processing is not. In this functional magnetic resonance imaging (fMRI) study with human observers, we directly probed the stimulus-related information encoded in fMRI response patterns in both visual streams as a function of stimulus visibility. We parametrically modulated the visibility of face and tool stimuli by varying the contrasts of the masks in a continuous flash suppression (CFS) paradigm. We found that visibility - operationalized by objective and subjective measures - decreased proportionally with increasing log CFS mask contrast. Neuronally, this relationship was closely matched by ventral visual areas, which showed a linear decrease of stimulus-related information with increasing mask contrast. Stimulus-related information in dorsal areas also depended on mask contrast, but the decrease followed a step function rather than a linear function. Together, our results suggest that both the ventral and the dorsal visual stream are linked to visual awareness, but neural activity in ventral areas more closely reflects graded differences in awareness than does activity in dorsal areas. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Electrophysiological Evidence for Ventral Stream Deficits in Schizophrenia Patients

    PubMed Central

    Plomp, Gijs; Roinishvili, Maya; Chkonia, Eka; Kapanadze, George; Kereselidze, Maia; Brand, Andreas; Herzog, Michael H.

    2013-01-01

    Schizophrenic patients suffer from many deficits including visual, attentional, and cognitive ones. Visual deficits are of particular interest because they are at the fore-end of information processing and can provide clear examples of interactions between sensory, perceptual, and higher cognitive functions. Visual deficits in schizophrenic patients are often attributed to impairments in the dorsal (where) rather than the ventral (what) stream of visual processing. We used a visual-masking paradigm in which patients and matched controls discriminated small vernier offsets. We analyzed the evoked electroencephalography (EEG) responses and applied distributed electrical source imaging techniques to estimate activity differences between conditions and groups throughout the brain. Compared with controls, patients showed strongly reduced discrimination accuracy, confirming previous work. The behavioral deficits corresponded to pronounced decreases in the evoked EEG response at around 200 ms after stimulus onset. At this latency, patients showed decreased activity for targets in left parietal cortex (dorsal stream), but the decrease was most pronounced in lateral occipital cortex (in the ventral stream). These deficiencies occurred at latencies that reflect object processing and fine shape discriminations. We relate the reduced ventral stream activity to deficient top-down processing of target stimuli and provide a framework for relating the commonly observed dorsal stream deficiencies with the currently observed ventral stream deficiencies. PMID:22258884

  3. Dorsal and ventral stream contributions to form-from-motion perception in a patient with form-from-motion deficit: a case report.

    PubMed

    Mercier, Manuel R; Schwartz, Sophie; Spinelli, Laurent; Michel, Christoph M; Blanke, Olaf

    2017-03-01

    The main model of visual processing in primates proposes an anatomo-functional distinction between the dorsal stream, specialized in spatio-temporal information, and the ventral stream, processing essentially form information. However, these two pathways also communicate to share much visual information. These dorso-ventral interactions have been studied using form-from-motion (FfM) stimuli, revealing that FfM perception first activates dorsal regions (e.g., MT+/V5), followed by successive activations of ventral regions (e.g., LOC). However, relatively little is known about how focal damage to visual areas affects these dorso-ventral interactions. In the present case report, we investigated the dynamics of dorsal and ventral activations related to FfM perception (using topographical ERP analysis and electrical source imaging) in a patient suffering from a deficit in FfM perception due to right extrastriate brain damage in the ventral stream. Despite the patient's FfM impairment, both successful (observed for the highest level of FfM signal) and absent/failed FfM perception evoked the same temporal sequence of three processing states observed previously in healthy subjects. During the first period, brain source localization revealed cortical activations along the dorsal stream, consistent with preserved elementary motion processing. During the latter two periods, the patterns of activity differed from normal subjects: activations were observed in the ventral stream (as reported for normal subjects), but also in the dorsal pathway, with the strongest and most sustained activity localized in the parieto-occipital regions. On the other hand, absent/failed FfM perception was characterized by weaker brain activity, restricted to the more lateral regions. 
This study shows that in the present case report, successful FfM perception, while following the same temporal sequence of processing steps as in normal subjects, evoked different patterns of brain activity. By revealing a brain circuit involving the most rostral part of the dorsal pathway, this study provides further support for neuro-imaging studies and brain lesion investigations that have suggested the existence of different brain circuits associated with different profiles of interaction between the dorsal and the ventral streams.

  4. Assembly of the Auditory Circuitry by a Hox Genetic Network in the Mouse Brainstem

    PubMed Central

    Di Bonito, Maria; Narita, Yuichi; Avallone, Bice; Sequino, Luigi; Mancuso, Marta; Andolfi, Gennaro; Franzè, Anna Maria; Puelles, Luis; Rijli, Filippo M.; Studer, Michèle

    2013-01-01

    Rhombomeres (r) contribute to brainstem auditory nuclei during development. Hox genes are determinants of rhombomere-derived fate and neuronal connectivity. Little is known about the contribution of individual rhombomeres and their associated Hox codes to auditory sensorimotor circuitry. Here, we show that r4 contributes to functionally linked sensory and motor components, including the ventral nucleus of lateral lemniscus, posterior ventral cochlear nuclei (VCN), and motor olivocochlear neurons. Assembly of the r4-derived auditory components is involved in sound perception and depends on regulatory interactions between Hoxb1 and Hoxb2. Indeed, in Hoxb1 and Hoxb2 mutant mice the transmission of low-level auditory stimuli is lost, resulting in hearing impairments. On the other hand, Hoxa2 regulates the Rig1 axon guidance receptor and controls contralateral projections from the anterior VCN to the medial nucleus of the trapezoid body, a circuit involved in sound localization. Thus, individual rhombomeres and their associated Hox codes control the assembly of distinct functionally segregated sub-circuits in the developing auditory brainstem. PMID:23408898

  6. Objects, Numbers, Fingers, Space: Clustering of Ventral and Dorsal Functions in Young Children and Adults

    ERIC Educational Resources Information Center

    Chinello, Alessandro; Cattani, Veronica; Bonfiglioli, Claudia; Dehaene, Stanislas; Piazza, Manuela

    2013-01-01

    In the primate brain, sensory information is processed along two partially segregated cortical streams: the ventral stream, mainly coding for objects' shape and identity, and the dorsal stream, mainly coding for objects' quantitative information (including size, number, and spatial position). Neurophysiological measures indicate that such…

  7. Increased functional connectivity in the ventral and dorsal streams during retrieval of novel words in professional musicians.

    PubMed

    Dittinger, Eva; Valizadeh, Seyed Abolfazl; Jäncke, Lutz; Besson, Mireille; Elmer, Stefan

    2018-02-01

    Current models of speech and language processing postulate the involvement of two parallel processing streams (the dual stream model): a ventral stream involved in mapping sensory and phonological representations onto lexical and conceptual representations and a dorsal stream contributing to sound-to-motor mapping, articulation, and to how verbal information is encoded and manipulated in memory. Based on previous evidence showing that music training has an influence on language processing, cognitive functions, and word learning, we examined EEG-based intracranial functional connectivity in the ventral and dorsal streams while musicians and nonmusicians learned the meaning of novel words through picture-word associations. In accordance with the dual stream model, word learning was generally associated with increased beta functional connectivity in the ventral stream compared to the dorsal stream. In addition, in the linguistically most demanding "semantic task," musicians outperformed nonmusicians, and this behavioral advantage was accompanied by increased left-hemispheric theta connectivity in both streams. Moreover, theta coherence in the left dorsal pathway was positively correlated with the number of years of music training. These results provide evidence for a complex interplay within a network of brain regions involved in semantic processing and verbal memory functions, and suggest that intensive music training can modify its functional architecture leading to advantages in novel word learning. © 2017 Wiley Periodicals, Inc.

  8. Opposing dorsal/ventral stream dynamics during figure-ground segregation.

    PubMed

    Wokke, Martijn E; Scholte, H Steven; Lamme, Victor A F

    2014-02-01

    The visual system has been commonly subdivided into two segregated visual processing streams: The dorsal pathway processes mainly spatial information, and the ventral pathway specializes in object perception. Recent findings, however, indicate that different forms of interaction (cross-talk) exist between the dorsal and the ventral stream. Here, we used TMS and concurrent EEG recordings to explore these interactions between the dorsal and ventral stream during figure-ground segregation. In two separate experiments, we used repetitive TMS and single-pulse TMS to disrupt processing in the dorsal (V5/HMT⁺) and the ventral (lateral occipital area) stream during a motion-defined figure discrimination task. We presented stimuli that made it possible to differentiate between relatively low-level (figure boundary detection) from higher-level (surface segregation) processing steps during figure-ground segregation. Results show that disruption of V5/HMT⁺ impaired performance related to surface segregation; this effect was mainly found when V5/HMT⁺ was perturbed in an early time window (100 msec) after stimulus presentation. Surprisingly, disruption of the lateral occipital area resulted in increased performance scores and enhanced neural correlates of surface segregation. This facilitatory effect was also mainly found in an early time window (100 msec) after stimulus presentation. These results suggest a "push-pull" interaction in which dorsal and ventral extrastriate areas are being recruited or inhibited depending on stimulus category and task demands.

  9. Specialization along the left superior temporal sulcus for auditory categorization.

    PubMed

    Liebenthal, Einat; Desai, Rutvik; Ellingson, Michael M; Ramachandran, Brinda; Desai, Anjali; Binder, Jeffrey R

    2010-12-01

    The affinity and temporal course of functional fields in middle and posterior superior temporal cortex for the categorization of complex sounds were examined using functional magnetic resonance imaging (fMRI) and event-related potentials (ERPs) recorded simultaneously. Data were compared before and after subjects were trained to categorize a continuum of unfamiliar nonphonemic auditory patterns with speech-like properties (NP) and a continuum of familiar phonemic patterns (P). fMRI activation for NP increased after training in left posterior superior temporal sulcus (pSTS). The ERP P2 response to NP also increased with training, and its scalp topography was consistent with left posterior superior temporal generators. In contrast, the left middle superior temporal sulcus (mSTS) showed fMRI activation only for P, and this response was not affected by training. The P2 response to P was also independent of training, and its estimated source was more anterior in left superior temporal cortex. Results are consistent with a role for left pSTS in short-term representation of relevant sound features that provide the basis for identifying newly acquired sound categories. Categorization of highly familiar phonemic patterns is mediated by long-term representations in left mSTS. Results provide new insight regarding the function of ventral and dorsal auditory streams.

  10. The mismatch negativity as a measure of auditory stream segregation in a simulated "cocktail-party" scenario: effect of age.

    PubMed

    Getzmann, Stephan; Näätänen, Risto

    2015-11-01

    With age, the ability to understand speech in multitalker environments usually deteriorates. The central auditory system has to perceptually segregate and group the acoustic input into sequences of distinct auditory objects. The present study used electrophysiological measures to study effects of age on auditory stream segregation in a multitalker scenario. Younger and older adults were presented with streams of short speech stimuli. When a single target stream was presented, the occurrence of a rare (deviant) syllable among repetitions of a frequent (standard) syllable elicited the mismatch negativity (MMN), an electrophysiological correlate of automatic deviance detection. The presence of a second, concurrent stream consisting of the deviant syllable of the target stream reduced the MMN amplitude, especially when it was located near the target stream. The decrease in MMN amplitude indicates that the rare syllable of the target stream was perceived as less deviant, suggesting reduced stream segregation with decreasing stream distance. Moreover, the presence of a concurrent stream increased the MMN peak latency of the older group but not that of the younger group. The results provide neurophysiological evidence for the effects of concurrent speech on auditory processing in older adults, suggesting that older adults need more time for stream segregation in the presence of concurrent speech. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. From Mimicry to Language: A Neuroanatomically Based Evolutionary Model of the Emergence of Vocal Language

    PubMed Central

    Poliva, Oren

    2016-01-01

    The auditory cortex communicates with the frontal lobe via the middle temporal gyrus (auditory ventral stream; AVS) or the inferior parietal lobule (auditory dorsal stream; ADS). Whereas the AVS is ascribed only with sound recognition, the ADS is ascribed with sound localization, voice detection, prosodic perception/production, lip-speech integration, phoneme discrimination, articulation, repetition, phonological long-term memory and working memory. Previously, I interpreted the juxtaposition of sound localization, voice detection, audio-visual integration and prosodic analysis, as evidence that the behavioral precursor to human speech is the exchange of contact calls in non-human primates. Herein, I interpret the remaining ADS functions as evidence of additional stages in language evolution. According to this model, the role of the ADS in vocal control enabled early Homo (Hominans) to name objects using monosyllabic calls, and allowed children to learn their parents' calls by imitating their lip movements. Initially, the calls were forgotten quickly but gradually were remembered for longer periods. Once the representations of the calls became permanent, mimicry was limited to infancy, and older individuals encoded in the ADS a lexicon for the names of objects (phonological lexicon). Consequently, sound recognition in the AVS was sufficient for activating the phonological representations in the ADS and mimicry became independent of lip-reading. Later, by developing inhibitory connections between acoustic-syllabic representations in the AVS and phonological representations of subsequent syllables in the ADS, Hominans became capable of concatenating the monosyllabic calls for repeating polysyllabic words (i.e., developed working memory). Finally, due to strengthening of connections between phonological representations in the ADS, Hominans became capable of encoding several syllables as a single representation (chunking). 
Consequently, Hominans began vocalizing and mimicking/rehearsing lists of words (sentences). PMID:27445676

  12. Insights from event-related potentials into the temporal and hierarchical organization of the ventral and dorsal streams of the visual system in selective attention.

    PubMed

    Martín-Loeches, M; Hinojosa, J A; Rubia, F J

    1999-11-01

    The temporal and hierarchical relationships between the dorsal and the ventral streams in selective attention are known only in relation to the use of spatial location as the attentional cue mediated by the dorsal stream. To improve this state of affairs, event-related brain potentials were recorded while subjects attended simultaneously to motion direction (mediated by the dorsal stream) and to a property mediated by the ventral stream (color or shape). At about the same time, a selection positivity (SP) started for attention mediated by both streams. However, the SP for color and shape peaked about 60 ms later than motion SP. Subsequently, a selection negativity (SN) followed by a late positive component (LPC) were found simultaneously for attention mediated by both streams. A hierarchical relationship between the two streams was not observed, but neither SN nor LPC for one property was completely insensitive to the values of the other property.

  13. Blindness alters the microstructure of the ventral but not the dorsal visual stream.

    PubMed

    Reislev, Nina L; Kupers, Ron; Siebner, Hartwig R; Ptito, Maurice; Dyrby, Tim B

    2016-07-01

    Visual deprivation from birth leads to reorganisation of the brain through cross-modal plasticity. Although there is a general agreement that the primary afferent visual pathways are altered in congenitally blind individuals, our knowledge about microstructural changes within the higher-order visual streams, and how this is affected by onset of blindness, remains scant. We used diffusion tensor imaging and tractography to investigate microstructural features in the dorsal (superior longitudinal fasciculus) and ventral (inferior longitudinal and inferior fronto-occipital fasciculi) visual pathways in 12 congenitally blind, 15 late blind and 15 normal sighted controls. We also studied six prematurely born individuals with normal vision to control for the effects of prematurity on brain connectivity. Our data revealed a reduction in fractional anisotropy in the ventral but not the dorsal visual stream for both congenitally and late blind individuals. Prematurely born individuals, with normal vision, did not differ from normal sighted controls, born at term. Our data suggest that although the visual streams are structurally developing without normal visual input from the eyes, blindness selectively affects the microstructure of the ventral visual stream regardless of the time of onset. We suggest that the decreased fractional anisotropy of the ventral stream in the two groups of blind subjects is the combined result of both degenerative and cross-modal compensatory processes, affecting normal white matter development.

  14. Premotor cortex is sensitive to auditory-visual congruence for biological motion.

    PubMed

    Wuerger, Sophie M; Parkes, Laura; Lewis, Penelope A; Crocker-Buque, Alex; Rutschmann, Roland; Meyer, Georg F

    2012-03-01

    The auditory and visual perception systems have developed special processing strategies for ecologically valid motion stimuli, utilizing some of the statistical properties of the real world. A well-known example is the perception of biological motion, for example, the perception of a human walker. The aim of the current study was to identify the cortical network involved in the integration of auditory and visual biological motion signals. We first determined the cortical regions of auditory and visual coactivation (Experiment 1); a conjunction analysis based on unimodal brain activations identified four regions: middle temporal area, inferior parietal lobule, ventral premotor cortex, and cerebellum. The brain activations arising from bimodal motion stimuli (Experiment 2) were then analyzed within these regions of coactivation. Auditory footsteps were presented concurrently with either an intact visual point-light walker (biological motion) or a scrambled point-light walker; auditory and visual motion in depth (walking direction) could either be congruent or incongruent. Our main finding is that motion incongruency (across modalities) increases the activity in the ventral premotor cortex, but only if the visual point-light walker is intact. Our results extend our current knowledge by providing new evidence consistent with the idea that the premotor area assimilates information across the auditory and visual modalities by comparing the incoming sensory input with an internal representation.

  15. Face and location processing in children with early unilateral brain injury.

    PubMed

    Paul, Brianna; Appelbaum, Mark; Carapetian, Stephanie; Hesselink, John; Nass, Ruth; Trauner, Doris; Stiles, Joan

    2014-07-01

    Human visuospatial functions are commonly divided into those dependent on the ventral visual stream (ventral occipitotemporal regions), which allows for processing the 'what' of an object, and the dorsal visual stream (dorsal occipitoparietal regions), which allows for processing 'where' an object is in space. Information about the development of each of the two streams has been accumulating, but very little is known about the effects of injury, particularly very early injury, on this developmental process. Using a set of computerized dorsal and ventral stream tasks matched for stimuli, required response, and difficulty (for typically-developing individuals), we sought to compare the differential effects of injury to the two systems by examining performance in individuals with perinatal brain injury (PBI), who present with selective deficits in visuospatial processing from a young age. Thirty participants (mean=15.1 years) with early unilateral brain injury (15 right hemisphere PBI, 15 left hemisphere PBI) and 16 matched controls participated. On our tasks children with PBI performed more poorly than controls (lower accuracy and longer response times), and this was particularly prominent for the ventral stream task. Lateralization of PBI was also a factor, as the dorsal stream task did not seem to be associated with lateralized deficits, with both PBI groups showing only subtle decrements in performance, while the ventral stream task elicited deficits from RPBI children that do not appear to improve with age. Our findings suggest that early injury results in lesion-specific visuospatial deficits that persist into adolescence. 
Further, as the stimuli used in our ventral stream task were faces, our findings are consistent with what is known about the neural systems for face processing, namely, that they are established relatively early, follow a comparatively rapid developmental trajectory (conferring a vulnerability to early insult), and are biased toward the right hemisphere. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Toward a Neurophysiological Theory of Auditory Stream Segregation

    ERIC Educational Resources Information Center

    Snyder, Joel S.; Alain, Claude

    2007-01-01

    Auditory stream segregation (or streaming) is a phenomenon in which 2 or more repeating sounds differing in at least 1 acoustic attribute are perceived as 2 or more separate sound sources (i.e., streams). This article selectively reviews psychophysical and computational studies of streaming and comprehensively reviews more recent…

  17. Integration and segregation in auditory scene analysis

    NASA Astrophysics Data System (ADS)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements into perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  18. Multistability in auditory stream segregation: a predictive coding view

    PubMed Central

    Winkler, István; Denham, Susan; Mill, Robert; Bőhm, Tamás M.; Bendixen, Alexandra

    2012-01-01

    Auditory stream segregation involves linking temporally separate acoustic events into one or more coherent sequences. For any non-trivial sequence of sounds, many alternative descriptions can be formed, only one or very few of which emerge in awareness at any time. Evidence from studies showing bi-/multistability in auditory streaming suggests that some, perhaps many, of the alternative descriptions are represented in the brain in parallel and that they continuously vie for conscious perception. Here, based on a predictive coding view, we consider the nature of these sound representations and how they compete with each other. Predictive processing helps to maintain perceptual stability by signalling the continuation of previously established patterns as well as the emergence of new sound sources. It also provides a measure of how well each of the competing representations describes the current acoustic scene. This account of auditory stream segregation has been tested on perceptual data obtained in the auditory streaming paradigm. PMID:22371621

  19. What Role Does "Elongation" Play in "Tool-Specific" Activation and Connectivity in the Dorsal and Ventral Visual Streams?

    PubMed

    Chen, Juan; Snow, Jacqueline C; Culham, Jody C; Goodale, Melvyn A

    2018-04-01

    Images of tools induce stronger activation than images of nontools in a left-lateralized network that includes ventral-stream areas implicated in tool identification and dorsal-stream areas implicated in tool manipulation. Importantly, however, graspable tools tend to be elongated rather than stubby, and so the tool-selective responses in some of these areas may, to some extent, reflect sensitivity to elongation rather than "toolness" per se. Using functional magnetic resonance imaging, we investigated the role of elongation in driving tool-specific activation in the 2 streams and their interconnections. We showed that in some "tool-selective" areas, the coding of toolness and elongation coexisted, but in others, elongation and toolness were coded independently. Psychophysiological interaction analysis revealed that toolness, but not elongation, strongly modulated the connectivity between the ventral and dorsal streams. Dynamic causal modeling revealed that viewing tools (either elongated or stubby) increased the connectivity from the ventral- to the dorsal-stream tool-selective areas, but only viewing elongated tools increased the reciprocal connectivity between these areas. Overall, these data disentangle how toolness and elongation affect the activation and connectivity of the tool network and help to resolve recent controversies regarding the relative contribution of "toolness" versus elongation in driving dorsal-stream "tool-selective" areas.

  20. Language Learning Variability within the Dorsal and Ventral Streams as a Cue for Compensatory Mechanisms in Aphasia Recovery

    PubMed Central

    López-Barroso, Diana; de Diego-Balaguer, Ruth

    2017-01-01

    Dorsal and ventral pathways connecting perisylvian language areas have been shown to be functionally and anatomically segregated. Whereas the dorsal pathway integrates the sensory-motor information required for verbal repetition, the ventral pathway has classically been associated with semantic processes. The great individual differences characterizing language learning through life partly correlate with brain structure and function within these dorsal and ventral language networks. Variability and plasticity within these networks also underlie inter-individual differences in the recovery of linguistic abilities in aphasia. Despite the division of labor of the dorsal and ventral streams, studies in healthy individuals have shown how their interaction and the redundancy in the areas they connect allow for compensatory strategies in functions that are usually segregated. In this mini-review we highlight the need to examine compensatory mechanisms between streams in healthy individuals as a helpful guide to choosing the most appropriate rehabilitation strategies, using spared functions and targeting preserved compensatory networks for brain plasticity. PMID:29021751

  1. Deep Neural Networks Reveal a Gradient in the Complexity of Neural Representations across the Ventral Stream.

    PubMed

    Güçlü, Umut; van Gerven, Marcel A J

    2015-07-08

    Converging evidence suggests that the primate ventral visual pathway encodes increasingly complex stimulus features in downstream areas. We quantitatively show that there indeed exists an explicit gradient for feature complexity in the ventral pathway of the human brain. This was achieved by mapping thousands of stimulus features of increasing complexity across the cortical sheet using a deep neural network. Our approach also revealed a fine-grained functional specialization of downstream areas of the ventral stream. Furthermore, it allowed decoding of representations from human brain activity at an unsurpassed degree of accuracy, confirming the quality of the developed approach. Stimulus features that successfully explained neural responses indicate that population receptive fields were explicitly tuned for object categorization. This provides strong support for the hypothesis that object categorization is a guiding principle in the functional organization of the primate ventral stream.

  2. Reconciling Time, Space and Function: A New Dorsal-Ventral Stream Model of Sentence Comprehension

    ERIC Educational Resources Information Center

    Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias

    2013-01-01

    We present a new dorsal-ventral stream framework for language comprehension which unifies basic neurobiological assumptions (Rauschecker & Scott, 2009) with a cross-linguistic neurocognitive sentence comprehension model (eADM; Bornkessel & Schlesewsky, 2006). The dissociation between (time-dependent) syntactic structure-building and…

  3. Ventral and Dorsal Visual Stream Contributions to the Perception of Object Shape and Object Location

    PubMed Central

    Zachariou, Valentinos; Klatzky, Roberta; Behrmann, Marlene

    2017-01-01

    Growing evidence suggests that the functional specialization of the two cortical visual pathways may not be as distinct as originally proposed. Here, we explore possible contributions of the dorsal “where/how” visual stream to shape perception and, conversely, contributions of the ventral “what” visual stream to location perception in human adults. Participants performed a shape detection task and a location detection task while undergoing fMRI. For shape detection, comparable BOLD activation in the ventral and dorsal visual streams was observed, and the magnitude of this activation was correlated with behavioral performance. For location detection, cortical activation was significantly stronger in the dorsal than ventral visual pathway and did not correlate with the behavioral outcome. This asymmetry in cortical profile across tasks is particularly noteworthy given that the visual input was identical and that the tasks were matched for difficulty in performance. We confirmed the asymmetry in a subsequent psychophysical experiment in which participants detected changes in either object location or shape, while ignoring the other, task-irrelevant dimension. Detection of a location change was slowed by an irrelevant shape change matched for difficulty, but the reverse did not hold. We conclude that both ventral and dorsal visual streams contribute to shape perception, but that location processing appears to be essentially a function of the dorsal visual pathway. PMID:24001005

  4. Alpha-band rhythm modulation under the condition of subliminal face presentation: MEG study.

    PubMed

    Sakuraba, Satoshi; Kobayashi, Hana; Sakai, Shinya; Yokosawa, Koichi

    2013-01-01

    The human brain has two streams to process visual information: a dorsal stream and a ventral stream. Negative potential N170 or its magnetic counterpart M170 is known as the face-specific signal originating from the ventral stream. It is possible to present a visual image unconsciously by using continuous flash suppression (CFS), which is a visual masking technique adopting binocular rivalry. In this work, magnetoencephalograms were recorded during presentation of three invisible images: face images, which are processed by the ventral stream; tool images, which could be processed by the dorsal stream; and a blank image. Alpha-band activities detected by sensors that are sensitive to M170 were compared. The alpha-band rhythm was suppressed more during presentation of face images than during presentation of the blank image (p=.028). The suppression remained for about 1 s after the presentations ended. However, no significant difference was observed between tool images and the other images. These results suggest that alpha-band rhythm can be modulated also by unconscious visual images.

  5. Investigating category- and shape-selective neural processing in ventral and dorsal visual stream under interocular suppression.

    PubMed

    Ludwig, Karin; Kathmann, Norbert; Sterzer, Philipp; Hesselmann, Guido

    2015-01-01

    Recent behavioral and neuroimaging studies using continuous flash suppression (CFS) have suggested that action-related processing in the dorsal visual stream might be independent of perceptual awareness, in line with the "vision-for-perception" versus "vision-for-action" distinction of the influential dual-stream theory. It remains controversial whether evidence suggesting exclusive dorsal stream processing of tool stimuli under CFS can be explained by their elongated shape alone or by action-relevant category representations in dorsal visual cortex. To approach this question, we investigated category- and shape-selective functional magnetic resonance imaging (fMRI) blood-oxygen-level-dependent responses in both visual streams using images of faces and tools. Multivariate pattern analysis showed enhanced decoding of elongated relative to non-elongated tools, both in the ventral and dorsal visual stream. The second aim of our study was to investigate whether the depth of interocular suppression might differentially affect processing in dorsal and ventral areas. However, parametric modulation of suppression depth by varying the CFS mask contrast did not yield any evidence for differential modulation of category-selective activity. Together, our data provide evidence for shape-selective processing under CFS in both dorsal and ventral stream areas and, therefore, do not support the notion that dorsal "vision-for-action" processing is exclusively preserved under interocular suppression.

  6. Relation between Working Memory Capacity and Auditory Stream Segregation in Children with Auditory Processing Disorder.

    PubMed

    Lotfi, Yones; Mehrkian, Saiedeh; Moossavi, Abdollah; Zadeh, Soghrat Faghih; Sadjedi, Hamed

    2016-03-01

    This study assessed the relationship between working memory capacity and auditory stream segregation by using the concurrent minimum audible angle in children with a diagnosed auditory processing disorder (APD). The participants in this cross-sectional, comparative study were 20 typically developing children and 15 children with a diagnosed APD (age, 9-11 years) according to the subtests of multiple-processing auditory assessment. Auditory stream segregation was investigated using the concurrent minimum audible angle. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Nonparametric statistics were utilized to compare the between-group differences. The Pearson correlation was employed to measure the degree of association between working memory capacity and the localization tests between the 2 groups. The group with APD had significantly lower scores than did the typically developing subjects in auditory stream segregation and working memory capacity. There were significant negative correlations between working memory capacity and the concurrent minimum audible angle in the most frontal reference location (0° azimuth) and lower negative correlations in the most lateral reference location (60° azimuth) in the children with APD. The study revealed a relationship between working memory capacity and auditory stream segregation in children with APD. The research suggests that lower working memory capacity in children with APD may be the possible cause of the inability to segregate and group incoming information.
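    The Pearson correlation used in this record to relate working memory capacity to the concurrent minimum audible angle is straightforward to compute. The sketch below is generic, with made-up example scores (not the study's data); the negative correlation in the example mirrors the direction of the reported finding.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores: larger working memory span paired with a smaller
# (better) minimum audible angle, i.e., a negative correlation.
span = [3, 4, 5, 6, 7]
angle_deg = [20, 16, 13, 9, 6]
r = pearson_r(span, angle_deg)  # r < 0
```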

  7. Probing neural mechanisms underlying auditory stream segregation in humans by transcranial direct current stimulation (tDCS).

    PubMed

    Deike, Susann; Deliano, Matthias; Brechmann, André

    2016-10-01

    One hypothesis concerning the neural underpinnings of auditory streaming states that frequency tuning of tonotopically organized neurons in primary auditory fields in combination with physiological forward suppression is necessary for the separation of representations of high-frequency A and low-frequency B tones. The extent of spatial overlap between the tonotopic activations of A and B tones is thought to underlie the perceptual organization of streaming sequences into one coherent or two separate streams. The present study attempts to interfere with these mechanisms by transcranial direct current stimulation (tDCS) and to probe behavioral outcomes reflecting the perception of ABAB streaming sequences. We hypothesized that tDCS by modulating cortical excitability causes a change in the separateness of the representations of A and B tones, which leads to a change in the proportions of one-stream and two-stream percepts. To test this, 22 subjects were presented with ambiguous ABAB sequences of three different frequency separations (∆F) and had to decide on their current percept after receiving sham, anodal, or cathodal tDCS over the left auditory cortex. We could confirm our hypothesis at the most ambiguous ∆F condition of 6 semitones. For anodal compared with sham and cathodal stimulation, we found a significant decrease in the proportion of two-stream perception and an increase in the proportion of one-stream perception. The results demonstrate the feasibility of using tDCS to probe mechanisms underlying auditory streaming through the use of various behavioral measures. Moreover, this approach allows one to probe the functions of auditory regions and their interactions with other processing stages.
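    The ∆F conditions in such streaming paradigms are specified in semitones; under equal temperament the B-tone frequency follows from the A tone as f_B = f_A · 2^(∆F/12). A minimal sketch of an ABAB frequency sequence, with an arbitrary 440 Hz A tone and cycle count chosen purely for illustration (not taken from the study):

```python
def delta_f_to_freq(f_a, semitones):
    """Frequency of the B tone, `semitones` above the A tone (equal temperament)."""
    return f_a * 2 ** (semitones / 12)

def abab_sequence(f_a=440.0, semitones=6, n_cycles=4):
    """Alternating A/B tone frequencies as used in auditory streaming paradigms.
    The 440 Hz base and cycle count are illustrative defaults, not study values."""
    f_b = delta_f_to_freq(f_a, semitones)
    return [f for _ in range(n_cycles) for f in (f_a, f_b)]
```

For the 6-semitone condition this places the B tone roughly half an octave above the A tone (about 622 Hz for a 440 Hz base).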

  8. Effects of selective attention on the electrophysiological representation of concurrent sounds in the human auditory cortex.

    PubMed

    Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier

    2007-08-29

    In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as when following one particular conversation at a cocktail party. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights into the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.

  9. Neural spike-timing patterns vary with sound shape and periodicity in three auditory cortical fields

    PubMed Central

    Lee, Christopher M.; Osman, Ahmad F.; Volgushev, Maxim; Escabí, Monty A.

    2016-01-01

    Mammals perceive a wide range of temporal cues in natural sounds, and the auditory cortex is essential for their detection and discrimination. The rat primary (A1), ventral (VAF), and caudal suprarhinal (cSRAF) auditory cortical fields have separate thalamocortical pathways that may support unique temporal cue sensitivities. To explore this, we record responses of single neurons in the three fields to variations in envelope shape and modulation frequency of periodic noise sequences. Spike rate, relative synchrony, and first-spike latency metrics have previously been used to quantify neural sensitivities to temporal sound cues; however, such metrics do not measure absolute spike timing of sustained responses to sound shape. To address this, in this study we quantify two forms of spike-timing precision, jitter, and reliability. In all three fields, we find that jitter decreases logarithmically with increase in the basis spline (B-spline) cutoff frequency used to shape the sound envelope. In contrast, reliability decreases logarithmically with increase in sound envelope modulation frequency. In A1, jitter and reliability vary independently, whereas in ventral cortical fields, jitter and reliability covary. Jitter time scales increase (A1 < VAF < cSRAF) and modulation frequency upper cutoffs decrease (A1 > VAF > cSRAF) with ventral progression from A1. These results suggest a transition from independent encoding of shape and periodicity sound cues on short time scales in A1 to a joint encoding of these same cues on longer time scales in ventral nonprimary cortices. PMID:26843599
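    The jitter and reliability metrics in this record quantify two aspects of spike-timing precision. One common operationalization (not necessarily the authors' exact definitions) is the trial-to-trial standard deviation of spike latency for jitter, and the fraction of trials with a spike inside a fixed response window for reliability:

```python
import statistics

def jitter_ms(first_spike_latencies):
    """Spike-timing jitter as the across-trial SD of first-spike latency (ms).
    A common operationalization; the study's exact metric may differ."""
    return statistics.pstdev(first_spike_latencies)

def reliability(spike_trains, window=(5.0, 50.0)):
    """Fraction of trials with at least one spike inside the response window (ms).
    The window bounds here are arbitrary illustrative values."""
    lo, hi = window
    hits = sum(any(lo <= t <= hi for t in trial) for trial in spike_trains)
    return hits / len(spike_trains)
```

On this reading, the reported results correspond to jitter growing with envelope cutoff frequency and reliability falling with modulation frequency.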

  10. Auditory connections and functions of prefrontal cortex

    PubMed Central

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  11. Relation between Working Memory Capacity and Auditory Stream Segregation in Children with Auditory Processing Disorder

    PubMed Central

    Lotfi, Yones; Mehrkian, Saiedeh; Moossavi, Abdollah; Zadeh, Soghrat Faghih; Sadjedi, Hamed

    2016-01-01

    Background: This study assessed the relationship between working memory capacity and auditory stream segregation by using the concurrent minimum audible angle in children with a diagnosed auditory processing disorder (APD). Methods: The participants in this cross-sectional, comparative study were 20 typically developing children and 15 children with a diagnosed APD (age, 9–11 years) according to the subtests of multiple-processing auditory assessment. Auditory stream segregation was investigated using the concurrent minimum audible angle. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Nonparametric statistics were utilized to compare the between-group differences. The Pearson correlation was employed to measure the degree of association between working memory capacity and the localization tests between the 2 groups. Results: The group with APD had significantly lower scores than did the typically developing subjects in auditory stream segregation and working memory capacity. There were significant negative correlations between working memory capacity and the concurrent minimum audible angle in the most frontal reference location (0° azimuth) and lower negative correlations in the most lateral reference location (60° azimuth) in the children with APD. Conclusion: The study revealed a relationship between working memory capacity and auditory stream segregation in children with APD. The research suggests that lower working memory capacity in children with APD may be the possible cause of the inability to segregate and group incoming information. PMID:26989281

  12. Neural correlates of consciousness: a definition of the dorsal and ventral streams and their relation to phenomenology.

    PubMed

    Vakalopoulos, Costa

    2005-01-01

    The paper presents a hypothesis for a neural correlate of consciousness. A proposal is made that both the dorsal and ventral streams must be concurrently active to generate conscious awareness and that V1 (striate cortex) provides a serial link between them. An argument is presented against a true extrastriate communication between the dorsal and ventral streams. Secondly, a detailed theory is developed for the structure of the visual hierarchy. Premotor theory states that each organism-object interaction can be described by the two quantitative measures of torque and change in joint position served by the basal ganglia and cerebellum, respectively. This leads to a component theory of motor efference copy providing a fundamental tool for categorizing dorsal and ventral stream networks. The rationale for this is that the dorsal stream specifies spatial coordinates of the external world, which can be coded by the reafference of changes in joint position. The ventral stream is concerned with object recognition and is coded for by forces exerted on the world during a developmental exploratory phase of the organism. The proposed pathways for a component motor efference copy from both the cerebellum and basal ganglia converge on the thalamus and modulate thalamocortical projections via the thalamic reticular nucleus. The origin of the corticopontine projections, which are a massive pathway for cortical information to reach the cerebellum, coincides with the area typically considered as part of the dorsal stream, whereas the entire cortex projects to the striatum. This adds empirical support for a new conceptualization of the visual streams. The model also presents a solution to the binding problem of a neural correlate of consciousness, that is, how a distributed neural network synchronizes its activity during a cognitive event. It represents a reinterpretation of the current status of the visual hierarchy.

  13. Neural Systems Involved When Attending to a Speaker

    PubMed Central

    Kamourieh, Salwa; Braga, Rodrigo M.; Leech, Robert; Newbould, Rexford D.; Malhotra, Paresh; Wise, Richard J. S.

    2015-01-01

    Remembering what a speaker said depends on attention. During conversational speech, the emphasis is on working memory, but listening to a lecture encourages episodic memory encoding. With simultaneous interference from background speech, the need for auditory vigilance increases. We recreated these context-dependent demands on auditory attention in 2 ways. The first was to require participants to attend to one speaker in either the absence or presence of a distracting background speaker. The second was to alter the task demand, requiring either an immediate or delayed recall of the content of the attended speech. Across 2 fMRI studies, common activated regions associated with segregating attended from unattended speech were the right anterior insula and adjacent frontal operculum (aI/FOp), the left planum temporale, and the precuneus. In contrast, activity in a ventral right frontoparietal system was dependent on both the task demand and the presence of a competing speaker. Additional multivariate analyses identified other domain-general frontoparietal systems, where activity increased during attentive listening but was modulated little by the need for speech stream segregation in the presence of 2 speakers. These results make predictions about impairments in attentive listening in different communicative contexts following focal or diffuse brain pathology. PMID:25596592

  14. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans

    PubMed Central

    Poliva, Oren

    2017-01-01

    In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobe (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound-localization, voice detection and integration of calls with faces. I propose that the primary role of the ADS in non-human primates is the detection and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Detection of contact calls occurs by the ADS identifying a voice, localizing it, and verifying that the corresponding face is out of sight. Once a contact call is detected, the primate produces a contact call in return via descending connections from the frontal lobe to a network of limbic and brainstem regions. Because the ADS of present day humans also performs speech production, I further propose an evolutionary course for the transition from contact call exchange to an early form of speech. In accordance with this model, structural changes to the ADS endowed early members of the genus Homo with partial vocal control. This development was beneficial as it enabled offspring to modify their contact calls with intonations for signaling high or low levels of distress to their mother. Eventually, individuals were capable of participating in yes-no question-answer conversations. In these conversations the offspring emitted a low-level distress call for inquiring about the safety of objects (e.g., food), and his/her mother responded with a high- or low-level distress call to signal approval or disapproval of the interaction. Gradually, the ADS and its connections with brainstem motor regions became more robust and vocal control became more volitional. Speech emerged once vocal control was sufficient for inventing novel calls. PMID:28928931

  15. Testing the dual-pathway model for auditory processing in human cortex.

    PubMed

    Zündorf, Ida C; Lewald, Jörg; Karnath, Hans-Otto

    2016-01-01

    Analogous to the visual system, auditory information has been proposed to be processed in two largely segregated streams: an anteroventral ("what") pathway mainly subserving sound identification and a posterodorsal ("where") stream mainly subserving sound localization. Despite the popularity of this assumption, the degree of separation of spatial and non-spatial auditory information processing in cortex is still under discussion. In the present study, a statistical approach was implemented to investigate potential behavioral dissociations for spatial and non-spatial auditory processing in stroke patients, and voxel-wise lesion analyses were used to uncover their neural correlates. The results generally provided support for anatomically and functionally segregated auditory networks. However, some degree of anatomo-functional overlap between "what" and "where" aspects of processing was found in the superior pars opercularis of right inferior frontal gyrus (Brodmann area 44), suggesting the potential existence of a shared target area of both auditory streams in this region. Moreover, beyond the typically defined posterodorsal stream (i.e., posterior superior temporal gyrus, inferior parietal lobule, and superior frontal sulcus), occipital lesions were found to be associated with sound localization deficits. These results, indicating anatomically and functionally complex cortical networks for spatial and non-spatial auditory processing, are roughly consistent with the dual-pathway model of auditory processing in its original form, but argue for the need to refine and extend this widely accepted hypothesis.

  16. Manipulating Instructions Strategically Affects Reliance on the Ventral-Lexical Reading Stream: Converging Evidence from Neuroimaging and Reaction Time

    ERIC Educational Resources Information Center

    Cummine, Jacqueline; Gould, Layla; Zhou, Crystal; Hrybouski, Stan; Siddiqi, Zohaib; Chouinard, Brea; Borowsky, Ron

    2013-01-01

    Neurobiology of reading research has yet to explore whether reliance on the ventral-lexical stream during word reading can be enhanced by the instructed reading strategy, or whether it is impervious to such strategies. We examined Instructions: "name all" vs. "name words" (based on spelling), Word Type: "regular words" vs. "exception words", and…

  17. Auditory Stream Segregation in Autism Spectrum Disorder: Benefits and Downsides of Superior Perceptual Processes

    ERIC Educational Resources Information Center

    Bouvet, Lucie; Mottron, Laurent; Valdois, Sylviane; Donnadieu, Sophie

    2016-01-01

    Auditory stream segregation allows us to organize our sound environment, by focusing on specific information and ignoring what is unimportant. One previous study reported difficulty in stream segregation ability in children with Asperger syndrome. In order to investigate this question further, we used an interleaved melody recognition task with…

  18. Modelling the Emergence and Dynamics of Perceptual Organisation in Auditory Streaming

    PubMed Central

    Mill, Robert W.; Bőhm, Tamás M.; Bendixen, Alexandra; Winkler, István; Denham, Susan L.

    2013-01-01

    Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives, a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success, and conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the dynamics of human perception in auditory streaming. PMID:23516340
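The two-stage account sketched in this abstract (pattern discovery, then competition between incompatible representations) can be illustrated with a toy simulation. Everything below, the update rule, gains, and noise level, is an illustrative assumption, not the published model:

```python
import random

def simulate_streaming(n_steps=2000, gain=0.02, noise=0.05, seed=1):
    """Toy competition between two candidate perceptual organisations:
    'integrated' (one ABAB stream) vs. 'segregated' (separate A and B
    streams). Each representation is strengthened toward its maximum,
    suppressed in proportion to its competitor's strength, and jittered
    by noise; the stronger one is taken as the current percept."""
    rng = random.Random(seed)
    strength = {"integrated": 0.55, "segregated": 0.45}
    dominant = []
    for _ in range(n_steps):
        for name in strength:
            other = "segregated" if name == "integrated" else "integrated"
            delta = gain * (1.0 - strength[name]) - gain * strength[other]
            strength[name] = min(max(strength[name] + delta
                                     + rng.gauss(0.0, noise), 0.0), 1.0)
        dominant.append(max(strength, key=strength.get))
    switches = sum(a != b for a, b in zip(dominant, dominant[1:]))
    return dominant, switches
```

With these (arbitrary) parameters the dominant organisation switches back and forth over time, the bi- or multi-stability described above; deterministic competition without noise would instead lock into a single organisation.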

  19. Revealing the dual streams of speech processing.

    PubMed

    Fridriksson, Julius; Yourganov, Grigori; Bonilha, Leonardo; Basilakos, Alexandra; Den Ouden, Dirk-Bart; Rorden, Christopher

    2016-12-27

    Several dual route models of human speech processing have been proposed, suggesting a large-scale anatomical division between cortical regions that support motor-phonological aspects vs. lexical-semantic aspects of speech processing. However, to date, there is no complete agreement on what areas subserve each route or the nature of interactions across these routes that enables human speech processing. Relying on an extensive behavioral and neuroimaging assessment of a large sample of stroke survivors, we used a data-driven approach, principal components analysis of lesion-symptom mapping, to identify brain regions crucial for performance on clusters of behavioral tasks without a priori separation into task types. Distinct anatomical boundaries were revealed between a dorsal frontoparietal stream and a ventral temporal-frontal stream associated with separate components. Collapsing over the tasks primarily supported by these streams, we characterize the dorsal stream as a form-to-articulation pathway and the ventral stream as a form-to-meaning pathway. This characterization of the division in the data reflects both the overlap between tasks supported by the two streams and the observation that there is a bias for phonological production tasks supported by the dorsal stream and lexical-semantic comprehension tasks supported by the ventral stream. As such, our findings show a division between two processing routes that underlie human speech processing and provide an empirical foundation for studying potential computational differences that distinguish between the two routes.

  20. Segregation and Integration of Auditory Streams when Listening to Multi-Part Music

    PubMed Central

    Ragert, Marie; Fairhurst, Merle T.; Keller, Peter E.

    2014-01-01

    In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to make an assessment of the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in western music, in which the melody generally dominates (leads) the accompaniment. Moreover, the assessment varies as a function of cognitive load, as shown through both difficulty ratings and the interaction of the temporal and structural relationship factors. Neurally, we see that the temporal relationship between parts, as one important cue for stream segregation, revealed distinct neural activity in the planum temporale. By contrast, integration used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus resulted in activation of the intraparietal sulcus. These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of auditory streams, respectively. PMID:24475030

  2. Left ventral occipitotemporal activation during orthographic and semantic processing of auditory words.

    PubMed

    Ludersdorfer, Philipp; Wimmer, Heinz; Richlan, Fabio; Schurz, Matthias; Hutzler, Florian; Kronbichler, Martin

    2016-01-01

    The present fMRI study investigated the hypothesis that activation of the left ventral occipitotemporal cortex (vOT) in response to auditory words can be attributed to lexical orthographic rather than lexico-semantic processing. To this end, we presented auditory words in both an orthographic ("three or four letter word?") and a semantic ("living or nonliving?") task. In addition, an auditory control condition presented tones in a pitch evaluation task. The results showed that the left vOT exhibited higher activation for orthographic relative to semantic processing of auditory words, with a peak in the posterior part of vOT. Comparisons to the auditory control condition revealed that orthographic processing of auditory words elicited activation in a large vOT cluster. In contrast, activation for semantic processing was only weak and restricted to the middle part of vOT. We interpret our findings as evidence for orthographic processing in left vOT. In particular, we suggest that activation in left middle vOT can be attributed to accessing orthographic whole-word representations. While activation of such representations was experimentally ascertained in the orthographic task, it might have also occurred automatically in the semantic task. Activation in the more posterior vOT region, on the other hand, may reflect the generation of explicit images of word-specific letter sequences required by the orthographic but not the semantic task. In addition, based on cross-modal suppression, the finding of marked deactivations in response to the auditory tones is taken to reflect the visual nature of representations and processes in left vOT. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Auditory stream segregation in children with Asperger syndrome

    PubMed Central

    Lepistö, T.; Kuitunen, A.; Sussman, E.; Saalasti, S.; Jansson-Verkasalo, E.; Nieminen-von Wendt, T.; Kujala, T.

    2009-01-01

    Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related brain potentials were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically developed for studying stream segregation. Differences in the amplitudes of ERP components were found between groups only in the stream segregation conditions and not for simple feature discrimination. The results indicated that children with AS have difficulties in segregating concurrent sound streams, which ultimately may contribute to the difficulties in speech-in-noise perception. PMID:19751798

  4. Integration and segregation in auditory streaming

    NASA Astrophysics Data System (ADS)

    Almonte, Felix; Jirsa, Viktor K.; Large, Edward W.; Tuller, Betty

    2005-12-01

    We aim to capture the perceptual dynamics of auditory streaming using a neurally inspired model of auditory processing. Traditional approaches view streaming as a competition of streams, realized within a tonotopically organized neural network. In contrast, we view streaming to be a dynamic integration process which resides at locations other than the sensory specific neural subsystems. This process finds its realization in the synchronization of neural ensembles or in the existence of informational convergence zones. Our approach uses two interacting dynamical systems, in which the first system responds to incoming acoustic stimuli and transforms them into a spatiotemporal neural field dynamics. The second system is a classification system coupled to the neural field and evolves to a stationary state. These states are identified with a single perceptual stream or multiple streams. Several results in human perception are modelled including temporal coherence and fission boundaries [L.P.A.S. van Noorden, Temporal coherence in the perception of tone sequences, Ph.D. Thesis, Eindhoven University of Technology, The Netherlands, 1975], and crossing of motions [A.S. Bregman, Auditory Scene Analysis: The Perceptual Organization of Sound, MIT Press, 1990]. Our model predicts phenomena such as the existence of two streams with the same pitch, which cannot be explained by the traditional stream competition models. An experimental study is performed to provide proof of existence of this phenomenon. The model elucidates possible mechanisms that may underlie perceptual phenomena.

  5. Endogenous Delta/Theta Sound-Brain Phase Entrainment Accelerates the Buildup of Auditory Streaming.

    PubMed

    Riecke, Lars; Sack, Alexander T; Schroeder, Charles E

    2015-12-21

    In many natural listening situations, meaningful sounds (e.g., speech) fluctuate in slow rhythms among other sounds. When a slow rhythmic auditory stream is selectively attended, endogenous delta (1‒4 Hz) oscillations in auditory cortex may shift their timing so that higher-excitability neuronal phases become aligned with salient events in that stream [1, 2]. As a consequence of this stream-brain phase entrainment [3], these events are processed and perceived more readily than temporally non-overlapping events [4-11], essentially enhancing the neural segregation between the attended stream and temporally noncoherent streams [12]. Stream-brain phase entrainment is robust to acoustic interference [13-20] provided that target stream-evoked rhythmic activity can be segregated from noncoherent activity evoked by other sounds [21], a process that usually builds up over time [22-27]. However, it has remained unclear whether stream-brain phase entrainment functionally contributes to this buildup of rhythmic streams or whether it is merely an epiphenomenon of it. Here, we addressed this issue directly by experimentally manipulating endogenous stream-brain phase entrainment in human auditory cortex with non-invasive transcranial alternating current stimulation (TACS) [28-30]. We assessed the consequences of these manipulations on the perceptual buildup of the target stream (the time required to recognize its presence in a noisy background), using behavioral measures in 20 healthy listeners performing a naturalistic listening task. Experimentally induced cyclic 4-Hz variations in stream-brain phase entrainment reliably caused a cyclic 4-Hz pattern in perceptual buildup time. Our findings demonstrate that strong endogenous delta/theta stream-brain phase entrainment accelerates the perceptual emergence of task-relevant rhythmic streams in noisy environments. Copyright © 2015 Elsevier Ltd. All rights reserved.
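Recovering a cyclic pattern in buildup time as a function of the 4-Hz stimulation phase, as reported here, amounts to fitting a cosine across phase bins. The least-squares sketch below uses synthetic numbers, not the study's data; the function name and all values are illustrative:

```python
import numpy as np

def fit_cosine(phases, values):
    """Least-squares fit of values ~ a*cos(phase) + b*sin(phase) + c;
    returns the cyclic modulation amplitude sqrt(a^2 + b^2) and mean c."""
    X = np.column_stack([np.cos(phases), np.sin(phases),
                         np.ones_like(phases)])
    coef, *_ = np.linalg.lstsq(X, values, rcond=None)
    a, b, c = coef
    return float(np.hypot(a, b)), float(c)

# hypothetical buildup times (s) at six TACS-to-sound phase lags
phases = np.linspace(0, 2 * np.pi, 6, endpoint=False)
buildup = 2.0 + 0.5 * np.cos(phases)   # synthetic cyclic 4-Hz pattern
amp, mean = fit_cosine(phases, buildup)
```

A nonzero fitted amplitude across phase lags is what distinguishes a genuinely cyclic effect of entrainment from a flat (phase-independent) buildup time.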

  6. Normal form from biological motion despite impaired ventral stream function.

    PubMed

    Gilaie-Dotan, S; Bentin, S; Harel, M; Rees, G; Saygin, A P

    2011-04-01

    We explored the extent to which biological motion perception depends on ventral stream integration by studying LG, an unusual case of developmental visual agnosia. LG has significant ventral stream processing deficits but no discernable structural cortical abnormality. LG's intermediate visual areas and object-sensitive regions exhibit abnormal activation during visual object perception, in contrast to area V5/MT+ which responds normally to visual motion (Gilaie-Dotan, Perry, Bonneh, Malach, & Bentin, 2009). Here, in three studies we used point light displays, which require visual integration, in adaptive threshold experiments to examine LG's ability to detect form from biological and non-biological motion cues. LG's ability to detect and discriminate form from biological motion was similar to healthy controls. In contrast, he was significantly deficient in processing form from non-biological motion. Thus, LG can rely on biological motion cues to perceive human forms, but is considerably impaired in extracting form from non-biological motion. Finally, we found that while LG viewed biological motion, activity in a network of brain regions associated with processing biological motion was functionally correlated with his V5/MT+ activity, indicating that normal inputs from V5/MT+ might suffice to activate his action perception system. These results indicate that processing of biologically moving form can dissociate from other form processing in the ventral pathway. Furthermore, the present results indicate that integrative ventral stream processing is necessary for uncompromised processing of non-biological form from motion. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Feature assignment in perception of auditory figure.

    PubMed

    Gregg, Melissa K; Samuel, Arthur G

    2012-08-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory "objects" (relatively punctate events, such as a dog's bark) and auditory "streams" (sounds involving a pattern over time, such as a galloping rhythm). In Experiments 1 and 2, on each trial 2 sounds, an object (a vowel) and a stream (a series of tones), were presented with 1 target feature that could be perceptually grouped with either source. In each block of these experiments, listeners were required to attend to 1 of the 2 sounds, and report its perceived category. Across several experimental manipulations, listeners were more likely to allocate the feature to an impoverished object if the result of the grouping was a good, identifiable object. Perception of objects was quite sensitive to feature variation (noise masking), whereas perception of streams was more robust to feature variation. In Experiment 3, the number of sound sources competing for the feature was increased to 3. This produced a shift toward relying more on spatial cues than on the potential contribution of the feature to an object's perceptual quality. The results support a distinction between auditory objects and streams, and provide new information about the way that the auditory world is parsed. (c) 2012 APA, all rights reserved.

  8. Colour discrimination and categorisation in Williams syndrome.

    PubMed

    Farran, Emily K; Cranwell, Matthew B; Alvarez, James; Franklin, Anna

    2013-10-01

    Individuals with Williams syndrome (WS) present with impaired functioning of the dorsal visual stream relative to the ventral visual stream. As such, little attention has been given to ventral stream functions in WS. We investigated colour processing, a predominantly ventral stream function, for the first time in nineteen individuals with Williams syndrome. Colour discrimination was assessed using the Farnsworth-Munsell 100 hue test. Colour categorisation was assessed using a match-to-sample test and a colour naming task. A visual search task was also included as a measure of sensitivity to the size of perceptual colour difference. Results showed that individuals with WS have reduced colour discrimination relative to typically developing participants matched for chronological age; performance was commensurate with a typically developing group matched for non-verbal ability. In contrast, categorisation was typical in WS, although there was some evidence that sensitivity to the size of perceptual colour differences was reduced in this group. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. The spectrotemporal filter mechanism of auditory selective attention

    PubMed Central

    Lakatos, Peter; Musacchia, Gabriella; O’Connell, Monica N.; Falchier, Arnaud Y.; Javitt, Daniel C.; Schroeder, Charles E.

    2013-01-01

    While we have convincing evidence that attention to auditory stimuli modulates neuronal responses at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms are unknown. We found that attending to rhythmic auditory streams resulted in the entrainment of ongoing oscillatory activity reflecting rhythmic excitability fluctuations in A1. Strikingly, while the rhythm of the entrained oscillations in A1 neuronal ensembles reflected the temporal structure of the attended stream, the phase depended on the attended frequency content. Counter-phase entrainment across differently tuned A1 regions resulted in both the amplification and sharpening of responses at attended time points, in essence acting as a spectrotemporal filter mechanism. Our data suggest that selective attention generates a dynamically evolving model of attended auditory stimulus streams in the form of modulatory subthreshold oscillations across tonotopically organized neuronal ensembles in A1 that enhances the representation of attended stimuli. PMID:23439126

  10. Auditory stream segregation in monkey auditory cortex: effects of frequency separation, presentation rate, and tone duration

    NASA Astrophysics Data System (ADS)

    Fishman, Yonatan I.; Arezzo, Joseph C.; Steinschneider, Mitchell

    2004-09-01

    Auditory stream segregation refers to the organization of sequential sounds into "perceptual streams" reflecting individual environmental sound sources. In the present study, sequences of alternating high and low tones, "...ABAB...," similar to those used in psychoacoustic experiments on stream segregation, were presented to awake monkeys while neural activity was recorded in primary auditory cortex (A1). Tone frequency separation (ΔF), tone presentation rate (PR), and tone duration (TD) were systematically varied to examine whether neural responses correlate with effects of these variables on perceptual stream segregation. "A" tones were fixed at the best frequency of the recording site, while "B" tones were displaced in frequency from "A" tones by an amount equal to ΔF. As PR increased, "B" tone responses decreased in amplitude to a greater extent than "A" tone responses, yielding neural response patterns dominated by "A" tone responses occurring at half the alternation rate. Increasing TD facilitated the differential attenuation of "B" tone responses. These findings parallel psychoacoustic data and suggest a physiological model of stream segregation whereby increasing ΔF, PR, or TD enhances spatial differentiation of "A" tone and "B" tone responses along the tonotopic map in A1.
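The alternating-tone stimulus described here is straightforward to synthesize. A minimal sketch, with arbitrary example values for the base frequency, ΔF (in semitones), presentation rate, and tone duration:

```python
import math

def abab_sequence(base_hz=1000.0, delta_f_semitones=6, rate_hz=10.0,
                  tone_dur_s=0.05, n_tones=20, sr=16000):
    """Synthesize an alternating high/low tone sequence '...ABAB...'.
    'A' tones are at base_hz; 'B' tones are displaced upward by
    delta_f_semitones. rate_hz is the tone presentation rate
    (onset-to-onset); silence fills the gap between tone offsets."""
    b_hz = base_hz * 2.0 ** (delta_f_semitones / 12.0)
    onset_step = int(sr / rate_hz)       # samples between tone onsets
    tone_len = int(sr * tone_dur_s)      # samples per tone
    assert tone_len <= onset_step, "tone duration exceeds onset interval"
    out = [0.0] * (onset_step * n_tones)
    for i in range(n_tones):
        f = base_hz if i % 2 == 0 else b_hz
        start = i * onset_step
        for n in range(tone_len):
            out[start + n] = math.sin(2.0 * math.pi * f * n / sr)
    return out
```

Sweeping delta_f_semitones, rate_hz, and tone_dur_s in such a generator reproduces the ΔF, PR, and TD manipulations described above (onset/offset ramps, omitted here for brevity, would normally be added to avoid spectral splatter).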

  11. Left Superior Temporal Gyrus Is Coupled to Attended Speech in a Cocktail-Party Auditory Scene.

    PubMed

    Vander Ghinst, Marc; Bourguignon, Mathieu; Op de Beeck, Marc; Wens, Vincent; Marty, Brice; Hassid, Sergio; Choufani, Georges; Jousmäki, Veikko; Hari, Riitta; Van Bogaert, Patrick; Goldman, Serge; De Tiège, Xavier

    2016-02-03

    Using a continuous listening task, we evaluated the coupling between the listener's cortical activity and the temporal envelopes of different sounds in a multitalker auditory scene using magnetoencephalography and corticovocal coherence analysis. Neuromagnetic signals were recorded from 20 right-handed healthy adult humans who listened to five different recorded stories (attended speech streams), one without any multitalker background (No noise) and four mixed with a "cocktail party" multitalker background noise at four signal-to-noise ratios (5, 0, -5, and -10 dB) to produce speech-in-noise mixtures, here referred to as Global scene. Coherence analysis revealed that the modulations of the attended speech stream, presented without multitalker background, were coupled at ∼0.5 Hz to the activity of both superior temporal gyri, whereas the modulations at 4-8 Hz were coupled to the activity of the right supratemporal auditory cortex. In cocktail party conditions, with the multitalker background noise, the coupling at both frequencies was stronger for the attended speech stream than for the unattended Multitalker background. The coupling strengths decreased as the Multitalker background increased. During the cocktail party conditions, the ∼0.5 Hz coupling became left-hemisphere dominant, compared with bilateral coupling without the multitalker background, whereas the 4-8 Hz coupling remained right-hemisphere lateralized in both conditions. The brain activity was not coupled to the multitalker background or to its individual talkers. The results highlight the key role of the listener's left superior temporal gyri in extracting the slow ∼0.5 Hz modulations, likely reflecting the attended speech stream within a multitalker auditory scene. When people listen to one person in a "cocktail party," their auditory cortex mainly follows the attended speech stream rather than the entire auditory scene. However, how the brain extracts the attended speech stream from the whole auditory scene and how increasing background noise corrupts this process is still debated. In this magnetoencephalography study, subjects had to attend a speech stream with or without multitalker background noise. Results argue for frequency-dependent cortical tracking mechanisms for the attended speech stream. The left superior temporal gyrus tracked the ∼0.5 Hz modulations of the attended speech stream only when the speech was embedded in multitalker background, whereas the right supratemporal auditory cortex tracked 4-8 Hz modulations during both noiseless and cocktail-party conditions.
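Envelope-to-brain coupling of the kind reported here is commonly quantified as magnitude-squared coherence between a sound's temporal envelope and the neural signal. The numpy sketch below shows the estimator on simulated (not MEG) data; all parameters are arbitrary illustrations, not the study's pipeline:

```python
import numpy as np

def msc(x, y, fs, seg_len):
    """Magnitude-squared coherence |<Sxy>|^2 / (<Sxx><Syy>), averaged
    over non-overlapping segments (Welch-style, rectangular window)."""
    n_seg = len(x) // seg_len
    X = np.fft.rfft(x[:n_seg * seg_len].reshape(n_seg, seg_len), axis=1)
    Y = np.fft.rfft(y[:n_seg * seg_len].reshape(n_seg, seg_len), axis=1)
    num = np.abs((X * np.conj(Y)).mean(axis=0)) ** 2
    den = (np.abs(X) ** 2).mean(axis=0) * (np.abs(Y) ** 2).mean(axis=0)
    return np.fft.rfftfreq(seg_len, 1.0 / fs), num / den

rng = np.random.default_rng(0)
fs = 100                                  # Hz, envelope sampling rate
t = np.arange(fs * 60) / fs               # 60 s of data
envelope = np.sin(2 * np.pi * 0.5 * t)    # ~0.5 Hz speech-like rhythm
brain = envelope + rng.standard_normal(t.size)  # noisy simulated signal
freqs, coh = msc(envelope, brain, fs, seg_len=1000)
# coherence peaks at the 0.5 Hz bin, where the 'brain' tracks the envelope
```

Averaging over segments is what makes the estimate meaningful: with a single segment the magnitude-squared coherence is identically 1 at every frequency.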

  12. A spatiotemporal profile of visual system activation revealed by current source density analysis in the awake macaque.

    PubMed

    Schroeder, C E; Mehta, A D; Givre, S J

    1998-01-01

    We investigated the spatiotemporal activation pattern, produced by one visual stimulus, across cerebral cortical regions in awake monkeys. Laminar profiles of postsynaptic potentials and action potentials were indexed with current source density (CSD) and multiunit activity profiles respectively. Locally, we found contrasting activation profiles in dorsal and ventral stream areas. The former, like V1 and V2, exhibit a 'feedforward' profile, with excitation beginning at the depth of Lamina 4, followed by activation of the extragranular laminae. The latter often displayed a multilaminar/columnar profile, with initial responses distributed across the laminae and reflecting modulation rather than excitation; CSD components were accompanied by either no changes or by suppression of action potentials. System-wide, response latencies indicated a large dorsal/ventral stream latency advantage, which generalizes across a wide range of methods. This predicts a specific temporal ordering of dorsal and ventral stream components of visual analysis, as well as specific patterns of dorsal-ventral stream interaction. Our findings support a hierarchical model of cortical organization that combines serial and parallel elements. Critical in such a model is the recognition that processing within a location typically entails multiple temporal components or 'waves' of activity, driven by input conveyed over heterogeneous pathways from the retina.

  13. Lateral prefrontal cortex is organized into parallel dorsal and ventral streams along the rostro-caudal axis.

    PubMed

    Blumenfeld, Robert S; Nomura, Emi M; Gratton, Caterina; D'Esposito, Mark

    2013-10-01

    Anatomical connectivity differences between the dorsal and ventral lateral prefrontal cortex (PFC) of the non-human primate strongly suggests that these regions support different functions. However, after years of study, it remains unclear whether these regions are functionally distinct. In contrast, there has been a groundswell of recent studies providing evidence for a rostro-caudal functional organization, along the lateral as well as dorsomedial frontal cortex. Thus, it is not known whether dorsal and ventral regions of lateral PFC form distinct functional networks and how to reconcile any dorso-ventral organization with the medio-lateral and rostro-caudal axes. Here, we used resting-state connectivity data to identify parallel dorsolateral and ventrolateral streams of intrinsic connectivity with the dorsomedial frontal cortex. Moreover, we show that this connectivity follows a rostro-caudal gradient. Our results provide evidence for a novel framework for the intrinsic organization of the frontal cortex that incorporates connections between medio-lateral, dorso-ventral, and rostro-caudal axes.

  14. Diversion of the urine stream by surgical modification of the preputial ostium in a dog.

    PubMed

    Pavletic, Michael M; Brum, Douglas E

    2009-11-01

    A 1.4-year-old sexually intact male Standard Poodle was evaluated with a history of urinating on its left forelimb and lower portion of the thorax. Physical examination revealed that the dog had an unusually elevated (tucked) abdominal wall and prominent dome-shaped thoracic wall. These anatomic changes altered the angle of the urine stream, resulting in the dog's soiling the xiphoid region of the thorax and left forelimb. The dorsal half of the preputial ostium was closed surgically to divert the urine stream in a ventral direction. The ventral portion of the ostium was reciprocally enlarged. Postoperatively, the dog urinated in a downward direction, eliminating urine contact with the body. The preputial orifice (ostium) plays an important role in the shape and direction of the urine stream exiting the penile urethra. Dogs with an elevated abdominal wall and prominent dome-shaped thorax may be prone to contamination of the lower portion of the thorax and forelimbs with urine during normal micturition. Partial closure of the dorsal preputial ostium, with reciprocal enlargement of the lower half of the orifice, can create a deflective barrier that effectively diverts the urine stream in a ventral direction.

  15. Prevention and Treatment of Noise-Induced Tinnitus. Revision

    DTIC Science & Technology

    2013-07-01

    CTBP2 immunolabeling) for their loss following noise. Sub-Task 1c: Assessment of Auditory Nerve (VGLUT1 immunolabel) terminals on neurons in Ventral...and Dorsal Cochlear Nucleus (VCN, DCN) for their loss following noise. Sub-Task 1d: Assessment of VGLUT2, VAT & VGAT immunolabeled terminals in VCN...significant reduction in connections compared to animals without noise exposure. Sub-Task 1c: Assessment of Auditory Nerve (VGLUT1 immunolabel

  16. Infant auditory short-term memory for non-linguistic sounds.

    PubMed

    Ross-Sheehy, Shannon; Newman, Rochelle S

    2015-04-01

    This research explores auditory short-term memory (STM) capacity for non-linguistic sounds in 10-month-old infants. Infants were presented with auditory streams composed of repeating sequences of either 2 or 4 unique instruments (e.g., flute, piano, cello; 350 or 700 ms in duration) followed by a 500-ms retention interval. These instrument sequences either stayed the same for every repetition (Constant) or changed by 1 instrument per sequence (Varying). Using the head-turn preference procedure, infant listening durations were recorded for each stream type (2- or 4-instrument sequences composed of 350- or 700-ms notes). Preference for the Varying stream was taken as evidence of auditory STM because detection of the novel instrument required memory for all of the instruments in a given sequence. Results demonstrate that infants listened longer to Varying streams for 2-instrument sequences, but not 4-instrument sequences, composed of 350-ms notes (Experiment 1), although this effect did not hold when note durations were increased to 700 ms (Experiment 2). Experiment 3 replicates and extends results from Experiments 1 and 2 and provides support for a duration account of capacity limits in infant auditory STM. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Auditory evoked potentials to abrupt pitch and timbre change of complex tones: electrophysiological evidence of 'streaming'?

    PubMed

    Jones, S J; Longe, O; Vaz Pato, M

    1998-03-01

    Examination of the cortical auditory evoked potentials to complex tones changing in pitch and timbre suggests a useful new method for investigating higher auditory processes, in particular those concerned with 'streaming' and auditory object formation. The main conclusions were: (i) the N1 evoked by a sudden change in pitch or timbre was more posteriorly distributed than the N1 at the onset of the tone, indicating at least partial segregation of the neuronal populations responsive to sound onset and spectral change; (ii) the T-complex was consistently larger over the right hemisphere, consistent with clinical and PET evidence for particular involvement of the right temporal lobe in the processing of timbral and musical material; (iii) responses to timbral change were relatively unaffected by increasing the rate of interspersed changes in pitch, suggesting a mechanism for detecting the onset of a new voice in a constantly modulated sound stream; (iv) responses to onset, offset and pitch change of complex tones were relatively unaffected by interfering tones when the latter were of a different timbre, suggesting these responses must be generated subsequent to auditory stream segregation.

  18. Right Occipital Cortex Activation Correlates with Superior Odor Processing Performance in the Early Blind

    PubMed Central

    Grandin, Cécile B.; Dricot, Laurence; Plaza, Paula; Lerens, Elodie; Rombaux, Philippe; De Volder, Anne G.

    2013-01-01

    Using functional magnetic resonance imaging (fMRI) in ten early blind humans, we found robust occipital activation during two odor-processing tasks (discrimination or categorization of fruit and flower odors), as well as during control auditory-verbal conditions (discrimination or categorization of fruit and flower names). We also found evidence for reorganization and specialization of the ventral part of the occipital cortex, with dissociation according to stimulus modality: the right fusiform gyrus was most activated during olfactory conditions, while part of the left ventral lateral occipital complex showed a preference for auditory-verbal processing. Little occipital activation was found in sighted subjects, but overall the same right-olfactory/left-auditory-verbal hemispheric lateralization was present in their brains. This difference between the groups was mirrored by superior performance of the blind in various odor-processing tasks. Moreover, the level of right fusiform gyrus activation during the olfactory conditions was highly correlated with individual scores in a variety of odor recognition tests, indicating that the additional occipital activation may play a functional role in odor processing. PMID:23967263

  19. Functional Dissociations within the Ventral Object Processing Pathway: Cognitive Modules or a Hierarchical Continuum?

    ERIC Educational Resources Information Center

    Cowell, Rosemary A.; Bussey, Timothy J.; Saksida, Lisa M.

    2010-01-01

    We examined the organization and function of the ventral object processing pathway. The prevailing theoretical approach in this field holds that the ventral object processing stream has a modular organization, in which visual perception is carried out in posterior regions and visual memory is carried out, independently, in the anterior temporal…

  20. Degraded Auditory Processing in a Rat Model of Autism Limits the Speech Representation in Non-primary Auditory Cortex

    PubMed Central

    Engineer, C.T.; Centanni, T.M.; Im, K.W.; Borland, M.S.; Moreno, N.A.; Carraway, R.S.; Wilson, L.G.; Kilgard, M.P.

    2014-01-01

    Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments which may contribute to the deficits in communication observed in individuals with autism. In this study, we document auditory cortex responses in rats prenatally exposed to VPA. We recorded local field potentials and multiunit responses to speech sounds in primary auditory cortex, anterior auditory field, ventral auditory field, and posterior auditory field in VPA exposed and control rats. Prenatal VPA exposure severely degrades the precise spatiotemporal patterns evoked by speech sounds in secondary, but not primary, auditory cortex. This result parallels findings in humans and suggests that secondary auditory fields may be more sensitive to environmental disturbances and may provide insight into possible mechanisms related to auditory deficits in individuals with autism. PMID:24639033

  1. A cross-validated cytoarchitectonic atlas of the human ventral visual stream.

    PubMed

    Rosenke, Mona; Weiner, Kevin S; Barnett, Michael A; Zilles, Karl; Amunts, Katrin; Goebel, Rainer; Grill-Spector, Kalanit

    2018-04-15

    The human ventral visual stream consists of several areas that are considered processing stages essential for perception and recognition. A fundamental microanatomical feature differentiating areas is cytoarchitecture, which refers to the distribution, size, and density of cells across cortical layers. Because cytoarchitectonic structure is measured in 20-micron-thick histological slices of postmortem tissue, it is difficult to assess (a) how anatomically consistent these areas are across brains and (b) how they relate to brain parcellations obtained with prevalent neuroimaging methods, acquired at the millimeter and centimeter scale. Therefore, the goal of this study was (a) to generate a cross-validated cytoarchitectonic atlas of the human ventral visual stream on a whole brain template that is commonly used in neuroimaging studies and (b) to compare this atlas to a recently published retinotopic parcellation of visual cortex (Wang et al., 2014). To achieve this goal, we generated an atlas of eight cytoarchitectonic areas: four areas in the occipital lobe (hOc1-hOc4v) and four in the fusiform gyrus (FG1-FG4), and then tested how different alignment techniques affect the accuracy of the resulting atlas. Results show that both cortex-based alignment (CBA) and nonlinear volumetric alignment (NVA) generate an atlas with better cross-validation performance than affine volumetric alignment (AVA). Additionally, CBA outperformed NVA in 6/8 of the cytoarchitectonic areas. Finally, the comparison of the cytoarchitectonic atlas to a retinotopic atlas shows a clear correspondence between cytoarchitectonic and retinotopic areas in the ventral visual stream. The successful performance of CBA suggests a coupling between cytoarchitectonic areas and macroanatomical landmarks in the human ventral visual stream, and furthermore, that this coupling can be utilized for generating an accurate group atlas. In addition, the coupling between cytoarchitecture and retinotopy highlights the potential use of this atlas in understanding how anatomical features contribute to brain function. We make this cytoarchitectonic atlas freely available in both BrainVoyager and FreeSurfer formats (http://vpnl.stanford.edu/vcAtlas). The availability of this atlas will enable future studies to link cytoarchitectonic organization to other parcellations of the human ventral visual stream with potential to advance the understanding of this pathway in typical and atypical populations. Copyright © 2017 Elsevier Inc. All rights reserved.
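As an illustrative aside (not from the paper), the leave-one-out cross-validation logic behind such a group atlas can be sketched with a simple overlap score. The function names and the Dice-based metric are assumptions for illustration; the abstract does not state the exact scoring used:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks (1 = voxel inside the area)."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 0.0

def cross_validate(masks, threshold=0.5):
    """Leave-one-out: predict each brain's area from the other brains'
    group probability map, then score the overlap with the held-out mask."""
    scores = []
    for i, held_out in enumerate(masks):
        group = np.mean([m for j, m in enumerate(masks) if j != i], axis=0)
        scores.append(dice(group >= threshold, held_out))
    return float(np.mean(scores))
```

Under this sketch, an alignment method that brings an area into closer register across brains (e.g., CBA vs. AVA) would yield a higher mean held-out overlap.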

  2. The time-course of activation in the dorsal and ventral visual streams during landmark cueing and perceptual discrimination tasks.

    PubMed

    Lambert, Anthony J; Wootton, Adrienne

    2017-08-01

    Different patterns of high-density EEG activity were elicited by the same peripheral stimuli in the context of Landmark Cueing and Perceptual Discrimination tasks. The C1 component of the visual event-related potential (ERP) at parietal-occipital electrode sites was larger in the Landmark Cueing task, and source localisation suggested greater activation in the superior parietal lobule (SPL) in this task compared to the Perceptual Discrimination task, indicating stronger early recruitment of the dorsal visual stream. In the Perceptual Discrimination task, source localisation suggested widespread activation of the inferior temporal gyrus (ITG) and fusiform gyrus (FFG), structures associated with the ventral visual stream, during the early phase of the P1 ERP component. Moreover, during a later epoch (171-270 ms after stimulus onset), increased temporal-occipital negativity and stronger recruitment of the ITG and FFG were observed in the Perceptual Discrimination task. These findings illuminate the contrasting functions of the dorsal and ventral visual streams in supporting rapid shifts of attention in response to contextual landmarks, and conscious discrimination, respectively. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Perception of shapes targeting local and global processes in autism spectrum disorders.

    PubMed

    Grinter, Emma J; Maybery, Murray T; Pellicano, Elizabeth; Badcock, Johanna C; Badcock, David R

    2010-06-01

    Several researchers have found evidence for impaired global processing in the dorsal visual stream in individuals with autism spectrum disorders (ASDs). However, support for a similar pattern of visual processing in the ventral visual stream is less consistent. Critical to resolving the inconsistency is the assessment of local and global form processing ability. Within the visual domain, radial frequency (RF) patterns, shapes formed by sinusoidally varying the radius of a circle to add a given number of 'bumps', can be used to examine local and global form perception. Typically developing children and children with an ASD discriminated between circles and RF patterns that are processed either locally (RF24) or globally (RF3). Children with an ASD required greater shape deformation to identify RF3 shapes compared to typically developing children, consistent with difficulty in global processing in the ventral stream. No group difference was observed for RF24 shapes, suggesting intact local ventral-stream processing. These outcomes support the position that a deficit in global visual processing is present in ASDs, consistent with the notion of Weak Central Coherence.
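A minimal sketch of the RF-pattern geometry described above, assuming sinusoidal modulation of a unit-radius circle (parameter names and the example amplitudes are illustrative, not values from the study):

```python
import numpy as np

def rf_pattern(n_lobes, amplitude, mean_radius=1.0, n_points=360):
    """Radial frequency (RF) pattern: a circle whose radius is sinusoidally
    modulated to produce n_lobes 'bumps'. Returns Cartesian contour points."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    r = mean_radius * (1.0 + amplitude * np.sin(n_lobes * theta))
    return r * np.cos(theta), r * np.sin(theta)

# amplitude = 0 gives a perfect circle; small amplitudes give the
# near-threshold deformations used in discrimination tasks
x3, y3 = rf_pattern(n_lobes=3, amplitude=0.05)    # globally processed RF3
x24, y24 = rf_pattern(n_lobes=24, amplitude=0.05) # locally processed RF24
```

Raising `amplitude` until the shape is discriminable from a circle yields the deformation threshold the study compares across groups.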

  4. A new neural framework for visuospatial processing.

    PubMed

    Kravitz, Dwight J; Saleem, Kadharbatcha S; Baker, Chris I; Mishkin, Mortimer

    2011-04-01

    The division of cortical visual processing into distinct dorsal and ventral streams is a key framework that has guided visual neuroscience. The characterization of the ventral stream as a 'What' pathway is relatively uncontroversial, but the nature of dorsal stream processing is less clear. Originally proposed as mediating spatial perception ('Where'), more recent accounts suggest it primarily serves non-conscious visually guided action ('How'). Here, we identify three pathways emerging from the dorsal stream that consist of projections to the prefrontal and premotor cortices, and a major projection to the medial temporal lobe that courses both directly and indirectly through the posterior cingulate and retrosplenial cortices. These three pathways support both conscious and non-conscious visuospatial processing, including spatial working memory, visually guided action and navigation, respectively.

  5. Listening to Rhythmic Music Reduces Connectivity within the Basal Ganglia and the Reward System.

    PubMed

    Brodal, Hans P; Osnes, Berge; Specht, Karsten

    2017-01-01

    Music can trigger emotional responses in a more direct way than any other stimulus. In particular, music-evoked pleasure involves brain networks that are part of the reward system. Furthermore, rhythmic music stimulates the basal ganglia and may trigger involuntary movements to the beat. In the present study, we created a continuously playing rhythmic, dance floor-like composition in which the ambient noise from the MR scanner was incorporated as an additional instrument of rhythm. By treating this continuous stimulation paradigm as a variant of resting-state, the data were analyzed with stochastic dynamic causal modeling (sDCM), which was used for exploring functional dependencies and interactions between core areas of auditory perception, rhythm processing, and reward processing. The sDCM model was a fully connected model with the following areas: auditory cortex, putamen/pallidum, and ventral striatum/nucleus accumbens of both hemispheres. The resulting estimated parameters were compared to ordinary resting-state data without an additional continuous stimulation. Besides reduced connectivity within the basal ganglia, the results indicated reduced functional connectivity of the reward system, namely of the right ventral striatum/nucleus accumbens to and from the basal ganglia and the auditory network, while listening to rhythmic music. In addition, the right ventral striatum/nucleus accumbens also demonstrated a change in its hemodynamic parameter, reflecting an increased level of activation. These converging results may indicate that the dopaminergic reward system reduces its functional connectivity and relinquishes its constraints on other areas when we listen to rhythmic music.

  7. Did You Listen to the Beat? Auditory Steady-State Responses in the Human Electroencephalogram at 4 and 7 Hz Modulation Rates Reflect Selective Attention.

    PubMed

    Jaeger, Manuela; Bleichner, Martin G; Bauer, Anna-Katharina R; Mirkovic, Bojana; Debener, Stefan

    2018-02-27

    The acoustic envelope of human speech correlates with the syllabic rate (4-8 Hz) and carries important information for intelligibility, which is typically compromised in multi-talker, noisy environments. In order to better understand the dynamics of selective auditory attention to low-frequency modulated sound sources, we conducted a two-stream auditory steady-state response (ASSR) selective attention electroencephalogram (EEG) study. The two streams consisted of 4 and 7 Hz amplitude- and frequency-modulated sounds presented from the left and right side. One of the two streams had to be attended while the other had to be ignored. The attended stream always contained a target, allowing for behavioral confirmation of the attention manipulation. EEG ASSR power analysis revealed a significant increase in 7 Hz power for the attend compared to the ignore conditions. There was no significant difference in 4 Hz power when the 4 Hz stream had to be attended compared to when it had to be ignored. This lack of 4 Hz attention modulation could be explained by a distracting effect of a third frequency at 3 Hz (the beat frequency) perceivable when the 4 and 7 Hz streams are presented simultaneously. Taken together, our results show that responses to low-frequency modulations at the syllabic rate are modulated by selective spatial attention. Whether attention effects act as enhancement of the attended stream or suppression of the to-be-ignored stream may depend on how well auditory streams can be segregated.
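The ASSR power measure described above can be approximated with a windowed FFT on a single channel; this is a hedged sketch (the study's actual analysis pipeline is not specified in the abstract, and the function name is illustrative):

```python
import numpy as np

def assr_power(eeg, fs, freqs=(4.0, 7.0)):
    """Spectral power at the stimulation (modulation) frequencies.
    eeg: 1-D single-channel signal; fs: sampling rate in Hz."""
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n))) ** 2 / n
    fft_freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # pick the FFT bin nearest each modulation frequency
    return {f: spectrum[np.argmin(np.abs(fft_freqs - f))] for f in freqs}
```

Comparing the 7 Hz value between attend and ignore conditions corresponds to the power contrast reported in the abstract.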

  8. Object representations in ventral and dorsal visual streams: fMRI repetition effects depend on attention and part–whole configuration

    PubMed Central

    Thoma, Volker; Henson, Richard N.

    2011-01-01

    The effects of attention and object configuration on the neural responses to short-lag visual image repetition were investigated with fMRI. Attention to one of two object images in a prime display was cued spatially. The images were either intact or split vertically; a manipulation that negates the influence of view-based representations. A subsequent single intact probe image was named covertly. Behavioural priming, observed as faster button presses, was found for attended primes in both intact and split configurations, but for uncued primes only in the intact configuration. In a voxel-wise analysis, fMRI repetition suppression (RS) was observed in a left mid-fusiform region for attended primes, both intact and split, whilst a right intraparietal region showed repetition enhancement (RE) for intact primes, regardless of attention. In a factorial analysis across regions of interest (ROIs) defined from independent localiser contrasts, RS for attended objects in the ventral stream was significantly left-lateralised, whilst repetition effects in ventral and dorsal ROIs correlated with the amount of priming in specific conditions. These fMRI results extend hybrid theories of object recognition, implicating left ventral stream regions in analytic processing (requiring attention), consistent with prior hypotheses about hemispheric specialisation, and implicating dorsal stream regions in holistic processing (independent of attention). PMID:21554967

  9. Somatosensory Projections to Cochlear Nucleus are Up-regulated after Unilateral Deafness

    PubMed Central

    Zeng, Chunhua; Yang, Ziheng; Shreve, Lauren; Bledsoe, Sanford; Shore, Susan

    2012-01-01

    The cochlear nucleus (CN) receives innervation from auditory and somatosensory structures, which can be identified using vesicular glutamate transporters, VGLUT1 and VGLUT2. VGLUT1 is highly expressed in the magnocellular ventral CN (VCN), which receives auditory nerve inputs. VGLUT2 is predominantly expressed in the granule cell domain (GCD), which receives non-auditory inputs from somatosensory nuclei, including spinal trigeminal nucleus (Sp5) and cuneate nucleus (Cu). Two weeks after unilateral deafening, VGLUT1 is significantly decreased in ipsilateral VCN while VGLUT2 is significantly increased in the ipsilateral GCD (Zeng et al., 2009), putatively reflecting decreased inputs from auditory nerve and increased inputs from non-auditory structures in guinea pigs. Here we wished to determine whether the upregulation of VGLUT2 represents increases in the number of somatosensory projections to the CN that are maintained for longer periods of time. Thus, we examined concurrent changes in VGLUT levels and somatosensory projections in the CN using immunohistochemistry combined with anterograde tract tracing three and six weeks following unilateral deafening. The data reveal that unilateral deafness leads to increased numbers of VGLUT2-colabeled Sp5 and Cu projections to the ventral and dorsal CN. These findings suggest that Sp5 and Cu play significant and unique roles in cross-modal compensation and that, unlike after shorter term deafness, neurons in the magnocellular regions also participate in the compensation. The enhanced glutamatergic somatosensory projections to the CN may play a role in neural spontaneous hyperactivity associated with tinnitus. PMID:23136418

  10. Segregating the neural correlates of physical and perceived change in auditory input using the change deafness effect.

    PubMed

    Puschmann, Sebastian; Weerda, Riklef; Klump, Georg; Thiel, Christiane M

    2013-05-01

    Psychophysical experiments show that auditory change detection can be disturbed in situations in which listeners have to monitor complex auditory input. We made use of this change deafness effect to segregate the neural correlates of physical change in auditory input from brain responses related to conscious change perception in an fMRI experiment. Participants listened to two successively presented complex auditory scenes, which consisted of six auditory streams, and had to decide whether scenes were identical or whether the frequency of one stream was changed between presentations. Our results show that physical changes in auditory input, independent of successful change detection, are represented at the level of auditory cortex. Activations related to conscious change perception, independent of physical change, were found in the insula and the anterior cingulate cortex (ACC). Moreover, our data provide evidence for significant effective connectivity between auditory cortex and the insula in the case of correctly detected auditory changes, but not for missed changes. This underlines the importance of the insula/anterior cingulate network for conscious change detection.

  11. Brain networks of social action-outcome contingency: The role of the ventral striatum in integrating signals from the sensory cortex and medial prefrontal cortex.

    PubMed

    Sumiya, Motofumi; Koike, Takahiko; Okazaki, Shuntaro; Kitada, Ryo; Sadato, Norihiro

    2017-10-01

    Social interactions can be facilitated by action-outcome contingency, in which self-actions result in relevant responses from others. Research has indicated that the striatal reward system plays a role in generating action-outcome contingency signals. However, the neural mechanisms wherein signals regarding self-action and others' responses are integrated to generate the contingency signal remain poorly understood. We conducted a functional MRI study to test the hypothesis that brain activity representing the self modulates connectivity between the striatal reward system and sensory regions involved in the processing of others' responses. We employed a contingency task in which participants made the listener laugh by telling jokes. Participants reported more pleasure when greater laughter followed their own jokes than those of another. Self-relevant listener's responses produced stronger activation in the medial prefrontal cortex (mPFC). Laughter was associated with activity in the auditory cortex. The ventral striatum exhibited stronger activation when participants made listeners laugh than when another did. In physio-physiological interaction analyses, the ventral striatum showed interaction effects for signals extracted from the mPFC and auditory cortex. These results support the hypothesis that the mPFC, which is implicated in self-related processing, gates sensory input associated with others' responses during value processing in the ventral striatum. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  12. Delayed action does not always require the ventral stream: a study on a patient with visual form agnosia.

    PubMed

    Hesse, Constanze; Schenk, Thomas

    2014-05-01

    It has been suggested that while movements directed at visible targets are processed within the dorsal stream, movements executed after delay rely on the visual representations of the ventral stream (Milner & Goodale, 2006). This interpretation is supported by the observation that a patient with ventral stream damage (D.F.) has trouble performing accurate movements after a delay, but performs normally when the target is visible during movement programming. We tested D.F.'s visuomotor performance in a letter-posting task whilst varying the amount of visual feedback available. Additionally, we also varied whether D.F. received tactile feedback at the end of each trial (posting through a letter box vs posting on a screen) and whether environmental cues were available during the delay period (removing the target only vs suppressing vision completely with shutter glasses). We found that in the absence of environmental cues patient D.F. was unaffected by the introduction of delay and performed as accurately as healthy controls. However, when environmental cues and vision of the moving hand were available during and after the delay period, D.F.'s visuomotor performance was impaired. Thus, while healthy controls benefit from the availability of environmental landmarks and/or visual feedback of the moving hand, such cues seem less beneficial to D.F. Taken together, our findings suggest that ventral stream damage does not always impact the ability to make delayed movements but compromises the ability to use environmental landmarks and visual feedback efficiently. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Visual and visuomotor processing of hands and tools as a case study of cross talk between the dorsal and ventral streams.

    PubMed

    Almeida, Jorge; Amaral, Lénia; Garcea, Frank E; Aguiar de Sousa, Diana; Xu, Shan; Mahon, Bradford Z; Martins, Isabel Pavão

    2018-05-24

    A major principle of organization of the visual system is between a dorsal stream that processes visuomotor information and a ventral stream that supports object recognition. Most research has focused on dissociating processing across these two streams. Here we focus on how the two streams interact. We tested neurologically-intact and impaired participants in an object categorization task over two classes of objects that depend on processing within both streams-hands and tools. We measured how unconscious processing of images from one of these categories (e.g., tools) affects the recognition of images from the other category (i.e., hands). Our findings with neurologically-intact participants demonstrated that processing an image of a hand hampers the subsequent processing of an image of a tool, and vice versa. These results were not present in apraxic patients (N = 3). These findings suggest local and global inhibitory processes working in tandem to co-register information across the two streams.

  14. The ‘Ventral Organs’ of Pycnogonida (Arthropoda) Are Neurogenic Niches of Late Embryonic and Post-Embryonic Nervous System Development

    PubMed Central

    Brenneis, Georg; Scholtz, Gerhard

    2014-01-01

    Early neurogenesis in arthropods has been the focus of numerous studies, its cellular basis, spatio-temporal dynamics and underlying genetic network being by now comparatively well characterized for representatives of chelicerates, myriapods, hexapods and crustaceans. By contrast, neurogenesis during late embryonic and/or post-embryonic development has received less attention, especially in myriapods and chelicerates. Here, we apply (i) immunolabeling, (ii) histology and (iii) scanning electron microscopy to study post-embryonic ventral nerve cord development in Pseudopallene sp., a representative of the sea spiders (Pycnogonida), the presumable sister group of the remaining chelicerates. During early post-embryonic development, large neural stem cells give rise to additional ganglion cell material in segmentally paired invaginations in the ventral ectoderm. These ectodermal cell regions – traditionally designated as ‘ventral organs’ – detach from the surface into the interior and persist as apical cell clusters on the ventral ganglion side. Each cluster is a post-embryonic neurogenic niche that features a tiny central cavity and initially still houses larger neural stem cells. The cluster stays connected to the underlying ganglionic somata cortex via an anterior and a posterior cell stream. Cell proliferation remains restricted to the cluster and streams, and migration of newly produced cells along the streams seems to account for increasing ganglion cell numbers in the cortex. The pycnogonid cluster-stream systems show striking similarities to the life-long neurogenic system of decapod crustaceans, and due to their close vicinity to glomerulus-like neuropils, we consider their possible involvement in post-embryonic (perhaps even adult) replenishment of olfactory neurons – as in decapods. An instance of a potentially similar post-embryonic/adult neurogenic system in the arthropod outgroup Onychophora is discussed. 
Additionally, we document two transient posterior ganglia in the ventral nerve cord of Pseudopallene sp. and evaluate this finding in light of the often discussed reduction of a segmented ‘opisthosoma’ during pycnogonid evolution. PMID:24736377

  15. Damage to white matter bottlenecks contributes to language impairments after left hemispheric stroke.

    PubMed

    Griffis, Joseph C; Nenert, Rodolphe; Allendorfer, Jane B; Szaflarski, Jerzy P

    2017-01-01

    Damage to the white matter underlying the left posterior temporal lobe leads to deficits in multiple language functions. The posterior temporal white matter may correspond to a bottleneck where both dorsal and ventral language pathways are vulnerable to simultaneous damage. Damage to a second putative white matter bottleneck in the left deep prefrontal white matter involving projections associated with ventral language pathways and thalamo-cortical projections has recently been proposed as a source of semantic deficits after stroke. Here, we first used white matter atlases to identify the previously described white matter bottlenecks in the posterior temporal and deep prefrontal white matter. We then assessed the effects of damage to each region on measures of verbal fluency, picture naming, and auditory semantic decision-making in 43 chronic left hemispheric stroke patients. Damage to the posterior temporal bottleneck predicted deficits on all tasks, while damage to the anterior bottleneck only significantly predicted deficits in verbal fluency. Importantly, the effects of damage to the bottleneck regions were not attributable to lesion volume, lesion loads on the tracts traversing the bottlenecks, or damage to nearby cortical language areas. Multivariate lesion-symptom mapping revealed additional lesion predictors of deficits. Post-hoc fiber tracking of the peak white matter lesion predictors using a publicly available tractography atlas revealed evidence consistent with the results of the bottleneck analyses. Together, our results provide support for the proposal that spatially specific white matter damage affecting bottleneck regions, particularly in the posterior temporal lobe, contributes to chronic language deficits after left hemispheric stroke. This may reflect the simultaneous disruption of signaling in dorsal and ventral language processing streams.

  16. Sensory Intelligence for Extraction of an Abstract Auditory Rule: A Cross-Linguistic Study.

    PubMed

    Guo, Xiao-Tao; Wang, Xiao-Dong; Liang, Xiu-Yuan; Wang, Ming; Chen, Lin

    2018-02-21

    In a complex linguistic environment, while speech sounds can greatly vary, some shared features are often invariant. These invariant features constitute so-called abstract auditory rules. Our previous study has shown that with auditory sensory intelligence, the human brain can automatically extract the abstract auditory rules in the speech sound stream, presumably serving as the neural basis for speech comprehension. However, whether the sensory intelligence for extraction of abstract auditory rules in speech is inherent or experience-dependent remains unclear. To address this issue, we constructed a complex speech sound stream using auditory materials in Mandarin Chinese, in which syllables had a flat lexical tone but differed in other acoustic features to form an abstract auditory rule. This rule was occasionally and randomly violated by syllables with a rising, dipping, or falling tone. We found that both Chinese and foreign speakers detected the violations of the abstract auditory rule in the speech sound stream at a pre-attentive stage, as revealed by the whole-head recordings of mismatch negativity (MMN) in a passive paradigm. However, MMNs peaked earlier in Chinese speakers than in foreign speakers. Furthermore, Chinese speakers showed different MMN peak latencies for the three deviant types, which paralleled recognition points. These findings indicate that the sensory intelligence for extraction of abstract auditory rules in speech sounds is innate but shaped by language experience. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  17. Visual processing affects the neural basis of auditory discrimination.

    PubMed

    Kislyuk, Daniel S; Möttönen, Riikka; Sams, Mikko

    2008-12-01

    The interaction between auditory and visual speech streams is a seamless and surprisingly effective process. An intriguing example is the "McGurk effect": The acoustic syllable /ba/ presented simultaneously with a mouth articulating /ga/ is typically heard as /da/ [McGurk, H., & MacDonald, J. Hearing lips and seeing voices. Nature, 264, 746-748, 1976]. Previous studies have demonstrated the interaction of auditory and visual streams at the auditory cortex level, but the importance of these interactions for the qualitative perception change remained unclear because the change could result from interactions at higher processing levels as well. In our electroencephalogram experiment, we combined the McGurk effect with mismatch negativity (MMN), a response that is elicited in the auditory cortex at a latency of 100-250 msec by any above-threshold change in a sequence of repetitive sounds. An "odd-ball" sequence of acoustic stimuli consisting of frequent /va/ syllables (standards) and infrequent /ba/ syllables (deviants) was presented to 11 participants. Deviant stimuli in the unisensory acoustic stimulus sequence elicited a typical MMN, reflecting discrimination of acoustic features in the auditory cortex. When the acoustic stimuli were dubbed onto a video of a mouth constantly articulating /va/, the deviant acoustic /ba/ was heard as /va/ due to the McGurk effect and was indistinguishable from the standards. Importantly, such deviants did not elicit MMN, indicating that the auditory cortex failed to discriminate between the acoustic stimuli. Our findings show that the visual stream can qualitatively change the auditory percept at the auditory cortex level, profoundly influencing the auditory cortex mechanisms underlying early sound discrimination.

  18. Anatomy of aphasia revisited.

    PubMed

    Fridriksson, Julius; den Ouden, Dirk-Bart; Hillis, Argye E; Hickok, Gregory; Rorden, Chris; Basilakos, Alexandra; Yourganov, Grigori; Bonilha, Leonardo

    2018-01-17

    In most cases, aphasia is caused by strokes involving the left hemisphere, with more extensive damage typically being associated with more severe aphasia. The classical model of aphasia commonly adhered to in the Western world is the Wernicke-Lichtheim model. The model has been in existence for over a century, and classification of aphasic symptomatology continues to rely on it. However, far more detailed models of speech and language localization in the brain have been formulated. In this regard, the dual stream model of cortical brain organization proposed by Hickok and Poeppel is particularly influential. Their model describes two processing routes, a dorsal stream and a ventral stream, that roughly support speech production and speech comprehension, respectively, in normal subjects. Despite the strong influence of the dual stream model in current neuropsychological research, there has been relatively limited focus on explaining aphasic symptoms in the context of this model. Given that the dual stream model represents a more nuanced picture of cortical speech and language organization, cortical damage that causes aphasic impairment should map clearly onto the dual processing streams. Here, we present a follow-up study to our previous work that used lesion data to reveal the anatomical boundaries of the dorsal and ventral streams supporting speech and language processing. Specifically, by emphasizing clinical measures, we examine the effect of cortical damage and disconnection involving the dorsal and ventral streams on aphasic impairment. The results reveal that measures of motor speech impairment mostly involve damage to the dorsal stream, whereas measures of impaired speech comprehension are more strongly associated with ventral stream involvement. Equally important, many clinical tests that target behaviours such as naming, speech repetition, or grammatical processing rely on interactions between the two streams. 
This latter finding explains why patients with seemingly disparate lesion locations often experience similar impairments on given subtests. Namely, these individuals' cortical damage, although dissimilar, affects a broad cortical network that plays a role in carrying out a given speech or language task. The current data suggest this is a more accurate characterization than ascribing specific lesion locations as responsible for specific language deficits. © The Author(s) (2018). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. Auditory scene analysis in school-aged children with developmental language disorders

    PubMed Central

    Sussman, E.; Steinschneider, M.; Lee, W.; Lawson, K.

    2014-01-01

    Natural sound environments are dynamic, with overlapping acoustic input originating from simultaneously active sources. A key function of the auditory system is to integrate sensory inputs that belong together and segregate those that come from different sources. We hypothesized that this skill is impaired in individuals with phonological processing difficulties. There is considerable disagreement about whether phonological impairments observed in children with developmental language disorders can be attributed to specific linguistic deficits or to more general acoustic processing deficits. However, most tests of general auditory abilities have been conducted with a single set of sounds. We assessed the ability of school-aged children (7–15 years) to parse complex auditory non-speech input, and determined whether the presence of phonological processing impairments was associated with stream perception performance. A key finding was that children with language impairments did not show the same developmental trajectory for stream perception as typically developing children. In addition, children with language impairments required larger frequency separations between sounds to hear distinct streams compared to age-matched peers. Furthermore, phonological processing ability was a significant predictor of stream perception measures, but only in the older age groups. No such association was found in the youngest children. These results indicate that children with language impairments have difficulty parsing sound streams, or identifying individual sound events when there are competing sound sources. We conclude that language group differences may in part reflect fundamental maturational disparities in the analysis of complex auditory scenes. PMID:24548430

  20. Ventral and Dorsal Pathways Relate Differently to Visual Awareness of Body Postures under Continuous Flash Suppression

    PubMed Central

    Goebel, Rainer

    2018-01-01

    Visual perception includes ventral and dorsal stream processes. However, it is still unclear whether the former is predominantly related to conscious and the latter to nonconscious visual perception as argued in the literature. In this study upright and inverted body postures were rendered either visible or invisible under continuous flash suppression (CFS), while brain activity of human participants was measured with functional MRI (fMRI). Activity in the ventral body-sensitive areas was higher during visible conditions. In comparison, activity in the posterior part of the bilateral intraparietal sulcus (IPS) showed a significant interaction of stimulus orientation and visibility. Our results provide evidence that dorsal stream areas are less associated with visual awareness. PMID:29445766

  1. A new neural framework for visuospatial processing

    PubMed Central

    Kravitz, Dwight J.; Saleem, Kadharbatcha S.; Baker, Chris I.; Mishkin, Mortimer

    2012-01-01

    The division of cortical visual processing into distinct dorsal and ventral streams is a key framework that has guided visual neuroscience. The characterization of the ventral stream as a ‘What’ pathway is relatively uncontroversial, but the nature of dorsal stream processing is less clear. Originally proposed as mediating spatial perception (‘Where’), more recent accounts suggest it primarily serves non-conscious visually guided action (‘How’). Here, we identify three pathways emerging from the dorsal stream that consist of projections to the prefrontal and premotor cortices, and a major projection to the medial temporal lobe that courses both directly and indirectly through the posterior cingulate and retrosplenial cortices. These three pathways support both conscious and non-conscious visuospatial processing, including spatial working memory, visually guided action and navigation, respectively. PMID:21415848

  2. Communication and control by listening: toward optimal design of a two-class auditory streaming brain-computer interface.

    PubMed

    Hill, N Jeremy; Moinuddin, Aisha; Häuser, Ann-Katrin; Kienzle, Stephan; Schalk, Gerwin

    2012-01-01

    Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two simultaneously presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80 and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one's eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.
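
    The two timing schemes compared above can be sketched numerically. The periods, phase offset, and trial duration below are illustrative assumptions, not the published stimulus parameters; the point is only that equal periods with opposite phase yield a constant offset between streams, while unequal periods make the relative phase drift:

```python
def stream_onsets(period, phase, duration):
    """Onset times (in seconds) of the tones in one auditory stream."""
    onsets = []
    t = phase
    while t < duration:
        onsets.append(t)
        t += period
    return onsets

# Fixed-phase (FP) design: equal periods, opposite phase,
# so the two streams keep a constant temporal offset.
fp_left = stream_onsets(period=0.8, phase=0.0, duration=8.0)
fp_right = stream_onsets(period=0.8, phase=0.4, duration=8.0)

# Drifting-phase (DP) design: unequal periods, so the relative
# phase of the two streams drifts across the trial.
dp_left = stream_onsets(period=0.8, phase=0.0, duration=8.0)
dp_right = stream_onsets(period=0.9, phase=0.0, duration=8.0)
```

    Pairing up successive left/right onsets shows the difference directly: the FP offsets are all (numerically) identical, whereas the DP offsets grow from one tone pair to the next.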

  3. Sensitivity to timing and order in human visual cortex

    PubMed Central

    Singer, Jedediah M.; Madsen, Joseph R.; Anderson, William S.

    2014-01-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. PMID:25429116

  4. Inactivation of Primate Prefrontal Cortex Impairs Auditory and Audiovisual Working Memory.

    PubMed

    Plakke, Bethany; Hwang, Jaewon; Romanski, Lizabeth M

    2015-07-01

    The prefrontal cortex is associated with cognitive functions that include planning, reasoning, decision-making, working memory, and communication. Neurophysiology and neuropsychology studies have established that dorsolateral prefrontal cortex is essential in spatial working memory while the ventral frontal lobe processes language and communication signals. Single-unit recordings in nonhuman primates have shown that ventrolateral prefrontal (VLPFC) neurons integrate face and vocal information and are active during audiovisual working memory. However, whether VLPFC is essential in remembering face and voice information is unknown. We therefore trained nonhuman primates in an audiovisual working memory paradigm using naturalistic face-vocalization movies as memoranda. We inactivated VLPFC, with reversible cortical cooling, and examined performance when faces, vocalizations, or both had to be remembered. We found that VLPFC inactivation impaired subjects' performance in audiovisual and auditory-alone versions of the task. In contrast, VLPFC inactivation did not disrupt visual working memory. Our studies demonstrate the importance of VLPFC in auditory and audiovisual working memory for social stimuli but suggest a different role for VLPFC in unimodal visual processing. The ventral frontal lobe, or inferior frontal gyrus, plays an important role in audiovisual communication in the human brain. Studies with nonhuman primates have found that neurons within ventrolateral prefrontal cortex (VLPFC) encode both faces and vocalizations and that VLPFC is active when animals need to remember these social stimuli. In the present study, we temporarily inactivated VLPFC by cooling the cortex while nonhuman primates performed a working memory task. This impaired the ability of subjects to remember a face and vocalization pair or just the vocalization alone.
Our work highlights the importance of the primate VLPFC in the processing of faces and vocalizations in a manner that is similar to the inferior frontal gyrus in the human brain. Copyright © 2015 the authors.

  5. Tuning In to Sound: Frequency-Selective Attentional Filter in Human Primary Auditory Cortex

    PubMed Central

    Da Costa, Sandra; van der Zwaag, Wietske; Miller, Lee M.; Clarke, Stephanie

    2013-01-01

    Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency-content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency-tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low (250 Hz)- and high (4000 Hz)-frequency streams of tones presented at the same time (dual-stream) and were instructed to focus attention onto one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high, and when attention switched the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand. PMID:23365225

  6. Representations of Invariant Musical Categories Are Decodable by Pattern Analysis of Locally Distributed BOLD Responses in Superior Temporal and Intraparietal Sulci

    PubMed Central

    Klein, Mike E.; Zatorre, Robert J.

    2015-01-01

    In categorical perception (CP), continuous physical signals are mapped to discrete perceptual bins: mental categories not found in the physical world. CP has been demonstrated across multiple sensory modalities and, in audition, for certain over-learned speech and musical sounds. The neural basis of auditory CP, however, remains ambiguous, including its robustness in nonspeech processes and the relative roles of left/right hemispheres; primary/nonprimary cortices; and ventral/dorsal perceptual processing streams. Here, highly trained musicians listened to 2-tone musical intervals, which they perceive categorically while undergoing functional magnetic resonance imaging. Multivariate pattern analyses were performed after grouping sounds by interval quality (determined by frequency ratio between tones) or pitch height (perceived noncategorically, frequency ratios remain constant). Distributed activity patterns in spheres of voxels were used to determine sound sample identities. For intervals, significant decoding accuracy was observed in the right superior temporal and left intraparietal sulci, with smaller peaks observed homologously in contralateral hemispheres. For pitch height, no significant decoding accuracy was observed, consistent with the non-CP of this dimension. These results suggest that similar mechanisms are operative for nonspeech categories as for speech; espouse roles for 2 segregated processing streams; and support hierarchical processing models for CP. PMID:24488957

  7. Development of visual category selectivity in ventral visual cortex does not require visual experience

    PubMed Central

    van den Hurk, Job; Van Baelen, Marc; Op de Beeck, Hans P.

    2017-01-01

    To what extent does functional brain organization rely on sensory input? Here, we show that for the penultimate visual-processing region, ventral-temporal cortex (VTC), visual experience is not the origin of its fundamental organizational property, category selectivity. In the fMRI study reported here, we presented 14 congenitally blind participants with face-, body-, scene-, and object-related natural sounds and presented 20 healthy controls with both auditory and visual stimuli from these categories. Using macroanatomical alignment, response mapping, and surface-based multivoxel pattern analysis, we demonstrated that VTC in blind individuals shows robust discriminatory responses elicited by the four categories and that these patterns of activity in blind subjects could successfully predict the visual categories in sighted controls. These findings were confirmed in a subset of blind participants born without eyes and thus deprived of all light perception since conception. The sounds also could be decoded in primary visual and primary auditory cortex, but these regions did not sustain generalization across modalities. Surprisingly, although not as strong as visual responses, selectivity for auditory stimulation in visual cortex was stronger in blind individuals than in controls. The opposite was observed in primary auditory cortex. Overall, we demonstrated a striking similarity in the cortical response layout of VTC in blind individuals and sighted controls, demonstrating that the overall category-selective map in extrastriate cortex develops independently from visual experience. PMID:28507127

  8. An ALE meta-analysis on the audiovisual integration of speech signals.

    PubMed

    Erickson, Laura C; Heeg, Elizabeth; Rauschecker, Josef P; Turkeltaub, Peter E

    2014-11-01

    The brain improves speech processing through the integration of audiovisual (AV) signals. Situations involving AV speech integration may be crudely dichotomized into those where auditory and visual inputs contain (1) equivalent, complementary signals (validating AV speech) or (2) inconsistent, different signals (conflicting AV speech). This simple framework may allow the systematic examination of broad commonalities and differences between AV neural processes engaged by various experimental paradigms frequently used to study AV speech integration. We conducted an activation likelihood estimation meta-analysis of 22 functional imaging studies comprising 33 experiments, 311 subjects, and 347 foci examining "conflicting" versus "validating" AV speech. Experimental paradigms included content congruency, timing synchrony, and perceptual measures, such as the McGurk effect or synchrony judgments, across AV speech stimulus types (sublexical to sentence). Colocalization of conflicting AV speech experiments revealed consistency across at least two contrast types (e.g., synchrony and congruency) in a network of dorsal stream regions in the frontal, parietal, and temporal lobes. There was consistency across all contrast types (synchrony, congruency, and percept) in the bilateral posterior superior/middle temporal cortex. Although fewer studies were available, validating AV speech experiments were localized to other regions, such as ventral stream visual areas in the occipital and inferior temporal cortex. These results suggest that while equivalent, complementary AV speech signals may evoke activity in regions related to the corroboration of sensory input, conflicting AV speech signals recruit widespread dorsal stream areas likely involved in the resolution of conflicting sensory signals. Copyright © 2014 Wiley Periodicals, Inc.

  9. Double dissociation of 'what' and 'where' processing in auditory cortex.

    PubMed

    Lomber, Stephen G; Malhotra, Shveta

    2008-05-01

    Studies of cortical connections or neuronal function in different cerebral areas support the hypothesis that parallel cortical processing streams, similar to those identified in visual cortex, may exist in the auditory system. However, this model has not yet been behaviorally tested. We used reversible cooling deactivation to investigate whether the individual regions in cat nonprimary auditory cortex that are responsible for processing the pattern of an acoustic stimulus or localizing a sound in space could be doubly dissociated in the same animal. We found that bilateral deactivation of the posterior auditory field resulted in deficits in a sound-localization task, whereas bilateral deactivation of the anterior auditory field resulted in deficits in a pattern-discrimination task, but not vice versa. These findings support a model of cortical organization that proposes that identifying an acoustic stimulus ('what') and its spatial location ('where') are processed in separate streams in auditory cortex.

  10. [Symptoms and lesion localization in visual agnosia].

    PubMed

    Suzuki, Kyoko

    2004-11-01

    There are two cortical visual processing streams, the ventral and dorsal streams. The ventral visual stream plays the major role in constructing our perceptual representation of the visual world and the objects within it. Disturbance of visual processing at any stage of the ventral stream could result in impairment of visual recognition. Thus we need systematic investigations to diagnose visual agnosia and its type. Two types of category-selective visual agnosia, prosopagnosia and landmark agnosia, are different from others in that patients could recognize a face as a face and buildings as buildings, but could not identify an individual person or building. The neuronal bases of prosopagnosia and landmark agnosia are distinct. The importance of the right fusiform gyrus for face recognition was confirmed by both clinical and neuroimaging studies. Landmark agnosia is related to lesions in the right parahippocampal gyrus. Enlarged lesions including both the right fusiform and parahippocampal gyri can result in prosopagnosia and landmark agnosia at the same time. Category non-selective visual agnosia is related to bilateral occipito-temporal lesions, which is in agreement with the results of neuroimaging studies that revealed activation of the bilateral occipito-temporal cortex during object recognition tasks.

  11. Corollary discharge inhibition of wind-sensitive cercal giant interneurons in the singing field cricket

    PubMed Central

    Hedwig, Berthold

    2014-01-01

    Crickets carry wind-sensitive mechanoreceptors on their cerci which, in response to the airflow produced by approaching predators, trigger escape reactions via ascending giant interneurons (GIs). Males also activate their cercal system with the air currents generated by the wing movements underlying sound production. Singing males still respond to external wind stimulation, but are not startled by the self-generated airflow. To investigate how the nervous system discriminates sensory responses to self-generated and external airflow, we intracellularly recorded wind-sensitive afferents and ventral GIs of the cercal escape pathway in fictively singing crickets, a situation lacking any self-stimulation. GI spiking was reduced whenever cercal wind stimulation coincided with singing motor activity. The axonal terminals of cercal afferents showed no indication of presynaptic inhibition during singing. In two ventral GIs, however, a corollary discharge inhibition occurred strictly in phase with the singing motor pattern. Paired intracellular recordings revealed that this inhibition was not mediated by the activity of the previously identified corollary discharge interneuron (CDI) that rhythmically inhibits the auditory pathway during singing. Cercal wind stimulation, however, reduced the spike activity of this CDI by postsynaptic inhibition. Our study reveals how precisely timed corollary discharge inhibition of ventral GIs can prevent self-generated airflow from triggering inadvertent escape responses in singing crickets. The results indicate that the responsiveness of the auditory and wind-sensitive pathway is modulated by distinct CDIs in singing crickets and that the corollary discharge inhibition in the auditory pathway can be attenuated by cercal wind stimulation. PMID:25318763

  12. Auditory and visual connectivity gradients in frontoparietal cortex

    PubMed Central

    Hellyer, Peter J.; Wise, Richard J. S.; Leech, Robert

    2016-01-01

    A frontoparietal network of brain regions is often implicated in both auditory and visual information processing. Although it is possible that the same set of multimodal regions subserves both modalities, there is increasing evidence that there is a differentiation of sensory function within frontoparietal cortex. Magnetic resonance imaging (MRI) in humans was used to investigate whether different frontoparietal regions showed intrinsic biases in connectivity with visual or auditory modalities. Structural connectivity was assessed with diffusion tractography and functional connectivity was tested using functional MRI. A dorsal–ventral gradient of function was observed, where connectivity with visual cortex dominates dorsal frontal and parietal connections, while connectivity with auditory cortex dominates ventral frontal and parietal regions. A gradient was also observed along the posterior–anterior axis, although in opposite directions in prefrontal and parietal cortices. The results suggest that the location of neural activity within frontoparietal cortex may be influenced by these intrinsic biases toward visual and auditory processing. Thus, the location of activity in frontoparietal cortex may be influenced as much by stimulus modality as the cognitive demands of a task. It was concluded that stimulus modality was spatially encoded throughout frontal and parietal cortices, and was speculated that such an arrangement allows for top–down modulation of modality‐specific information to occur within higher‐order cortex. This could provide a potentially faster and more efficient pathway by which top–down selection between sensory modalities could occur, by constraining modulations to within frontal and parietal regions, rather than long‐range connections to sensory cortices. Hum Brain Mapp 38:255–270, 2017. © 2016 Wiley Periodicals, Inc. PMID:27571304

  13. Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment

    PubMed Central

    Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873

  14. Neural circuits in Auditory and Audiovisual Memory

    PubMed Central

    Plakke, B.; Romanski, L.M.

    2016-01-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years there have been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integrating, and retaining of communication information. PMID:26656069

  15. Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.

    PubMed

    Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-11-01

    Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. Here we present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward-predicting and nonreward-predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.

  16. On the cyclic nature of perception in vision versus audition

    PubMed Central

    VanRullen, Rufin; Zoefel, Benedikt; Ilhan, Barkin

    2014-01-01

    Does our perceptual awareness consist of a continuous stream, or a discrete sequence of perceptual cycles, possibly associated with the rhythmic structure of brain activity? This has been a long-standing question in neuroscience. We review recent psychophysical and electrophysiological studies indicating that part of our visual awareness proceeds in approximately 7–13 Hz cycles rather than continuously. On the other hand, experimental attempts at applying similar tools to demonstrate the discreteness of auditory awareness have been largely unsuccessful. We argue and demonstrate experimentally that visual and auditory perception are not equally affected by temporal subsampling of their respective input streams: video sequences remain intelligible at sampling rates of two to three frames per second, whereas audio inputs lose their fine temporal structure, and thus all significance, below 20–30 samples per second. This does not mean, however, that our auditory perception must proceed continuously. Instead, we propose that audition could still involve perceptual cycles, but the periodic sampling should happen only after the stage of auditory feature extraction. In addition, although visual perceptual cycles can follow one another at a spontaneous pace largely independent of the visual input, auditory cycles may need to sample the input stream more flexibly, by adapting to the temporal structure of the auditory inputs. PMID:24639585

  17. Neurodynamics for auditory stream segregation: tracking sounds in the mustached bat's natural environment.

    PubMed

    Kanwal, Jagmeet S; Medvedev, Andrei V; Micheyl, Christophe

    2003-08-01

    During navigation and the search phase of foraging, mustached bats emit approximately 25 ms long echolocation pulses (at 10-40 Hz) that contain multiple harmonics of a constant frequency (CF) component followed by a short (3 ms) downward frequency modulation. In the context of auditory stream segregation, therefore, bats may either perceive a coherent pulse-echo sequence (PEPE...), or segregated pulse and echo streams (P-P-P... and E-E-E...). To identify the neural mechanisms for stream segregation in bats, we developed a simple yet realistic neural network model with seven layers and 420 nodes. Our model required recurrent and lateral inhibition to enable output nodes in the network to 'latch-on' to a single tone (corresponding to a CF component in either the pulse or echo), i.e., exhibit differential suppression by the alternating two tones presented at a high rate (> 10 Hz). To test the applicability of our model to echolocation, we obtained neurophysiological data from the primary auditory cortex of awake mustached bats. Event-related potentials reliably reproduced the latching behaviour observed at output nodes in the network. Pulse as well as nontarget (clutter) echo CFs facilitated this latching. Individual single unit responses were erratic, but when summed over several recording sites, they also exhibited reliable latching behaviour even at 40 Hz. On the basis of these findings, we propose that a neural correlate of auditory stream segregation is present within localized synaptic activity in the mustached bat's auditory cortex and this mechanism may enhance the perception of echolocation sounds in the natural environment.

  18. Auditory Stream Segregation Improves Infants' Selective Attention to Target Tones Amid Distracters

    ERIC Educational Resources Information Center

    Smith, Nicholas A.; Trainor, Laurel J.

    2011-01-01

    This study examined the role of auditory stream segregation in the selective attention to target tones in infancy. Using a task adapted from Bregman and Rudnicky's 1975 study and implemented in a conditioned head-turn procedure, infant and adult listeners had to discriminate the temporal order of 2,200 and 2,400 Hz target tones presented alone,…

  19. Brainstem origins for cortical 'what' and 'where' pathways in the auditory system.

    PubMed

    Kraus, Nina; Nicol, Trent

    2005-04-01

    We have developed a data-driven conceptual framework that links two areas of science: the source-filter model of acoustics and cortical sensory processing streams. The source-filter model describes the mechanics behind speech production: the identity of the speaker is carried largely in the vocal cord source and the message is shaped by the ever-changing filters of the vocal tract. Sensory processing streams, popularly called 'what' and 'where' pathways, are well established in the visual system as a neural scheme for separately carrying different facets of visual objects, namely their identity and their position/motion, to the cortex. A similar functional organization has been postulated in the auditory system. Both speaker identity and the spoken message, which are simultaneously conveyed in the acoustic structure of speech, can be disentangled into discrete brainstem response components. We argue that these two response classes are early manifestations of auditory 'what' and 'where' streams in the cortex. This brainstem link forges a new understanding of the relationship between the acoustics of speech and cortical processing streams, unites two hitherto separate areas in science, and provides a model for future investigations of auditory function.

  20. Prior Knowledge Guides Speech Segregation in Human Auditory Cortex.

    PubMed

    Wang, Yuanye; Zhang, Jianfeng; Zou, Jiajie; Luo, Huan; Ding, Nai

    2018-05-18

    Segregating concurrent sound streams is a computationally challenging task that requires integrating bottom-up acoustic cues (e.g. pitch) and top-down prior knowledge about sound streams. In a multi-talker environment, the brain can segregate different speakers in about 100 ms in auditory cortex. Here, we used magnetoencephalographic (MEG) recordings to investigate the temporal and spatial signature of how the brain utilizes prior knowledge to segregate 2 speech streams from the same speaker, which can hardly be separated based on bottom-up acoustic cues. In a primed condition, the participants knew the target speech stream in advance, while in an unprimed condition no such prior knowledge was available. Neural encoding of each speech stream was characterized by the MEG responses tracking the speech envelope. We demonstrate an effect in bilateral superior temporal gyrus and superior temporal sulcus that is much stronger in the primed condition than in the unprimed condition. Priming effects are observed at about 100 ms latency and last more than 600 ms. Interestingly, prior knowledge about the target stream facilitates speech segregation mainly by suppressing the neural tracking of the non-target speech stream. In sum, prior knowledge leads to reliable speech segregation in auditory cortex, even in the absence of reliable bottom-up speech segregation cues.

  1. Hierarchical auditory processing directed rostrally along the monkey's supratemporal plane.

    PubMed

    Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer

    2010-09-29

    Connectional anatomical evidence suggests that the auditory core, containing the tonotopic areas A1, R, and RT, constitutes the first stage of auditory cortical processing, with feedforward projections from core outward, first to the surrounding auditory belt and then to the parabelt. Connectional evidence also raises the possibility that the core itself is serially organized, with feedforward projections from A1 to R and with additional projections, although of unknown feed direction, from R to RT. We hypothesized that area RT together with more rostral parts of the supratemporal plane (rSTP) form the anterior extension of a rostrally directed stimulus quality processing stream originating in the auditory core area A1. Here, we analyzed auditory responses of single neurons in three different sectors distributed caudorostrally along the supratemporal plane (STP): sector I, mainly area A1; sector II, mainly area RT; and sector III, principally RTp (the rostrotemporal polar area), including cortex located 3 mm from the temporal tip. Mean onset latency of excitation responses and stimulus selectivity to monkey calls and other sounds, both simple and complex, increased progressively from sector I to III. Also, whereas cells in sector I responded with significantly higher firing rates to the "other" sounds than to monkey calls, those in sectors II and III responded at the same rate to both stimulus types. The pattern of results supports the proposal that the STP contains a rostrally directed, hierarchically organized auditory processing stream, with gradually increasing stimulus selectivity, and that this stream extends from the primary auditory area to the temporal pole.

  2. Neural circuits in auditory and audiovisual memory.

    PubMed

    Plakke, B; Romanski, L M

    2016-06-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years there have been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integration, and retention of communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Dissociation and Convergence of the Dorsal and Ventral Visual Streams in the Human Prefrontal Cortex

    PubMed Central

    Takahashi, Emi; Ohki, Kenichi; Kim, Dae-Shik

    2012-01-01

    Visual information is largely processed through two pathways in the primate brain: an object pathway from the primary visual cortex to the temporal cortex (ventral stream) and a spatial pathway to the parietal cortex (dorsal stream). Whether and to what extent dissociation exists in the human prefrontal cortex (PFC) has long been debated. We examined anatomical connections from functionally defined areas in the temporal and parietal cortices to the PFC, using noninvasive functional and diffusion-weighted magnetic resonance imaging. The right inferior frontal gyrus (IFG) received converging input from both streams, while the right superior frontal gyrus received input only from the dorsal stream. Interstream functional connectivity to the IFG was dynamically recruited only when both object and spatial information were processed. These results suggest that the human PFC receives dissociated and converging visual pathways, and that the right IFG region serves as an integrator of the two types of information. PMID:23063444

  4. Defining the cortical visual systems: "what", "where", and "how"

    NASA Technical Reports Server (NTRS)

    Creem, S. H.; Proffitt, D. R.; Kaiser, M. K. (Principal Investigator)

    2001-01-01

    The visual system historically has been defined as consisting of at least two broad subsystems subserving object and spatial vision. These visual processing streams have been organized both structurally as two distinct pathways in the brain, and functionally for the types of tasks that they mediate. The classic definition by Ungerleider and Mishkin labeled a ventral "what" stream to process object information and a dorsal "where" stream to process spatial information. More recently, Goodale and Milner redefined the two visual systems with a focus on the different ways in which visual information is transformed for different goals. They relabeled the dorsal stream as a "how" system for transforming visual information using an egocentric frame of reference in preparation for direct action. This paper reviews recent research from psychophysics, neurophysiology, neuropsychology and neuroimaging to define the roles of the ventral and dorsal visual processing streams. We discuss a possible solution that allows for both "where" and "how" systems that are functionally and structurally organized within the posterior parietal lobe.

  5. Neural network retuning and neural predictors of learning success associated with cello training.

    PubMed

    Wollman, Indiana; Penhune, Virginia; Segado, Melanie; Carpentier, Thibaut; Zatorre, Robert J

    2018-06-26

    The auditory and motor neural systems are closely intertwined, enabling people to carry out tasks such as playing a musical instrument whose mapping between action and sound is extremely sophisticated. While the dorsal auditory stream has been shown to mediate these audio-motor transformations, little is known about how such mapping emerges with training. Here, we use longitudinal training on a cello as a model for brain plasticity during the acquisition of specific complex skills, including continuous and many-to-one audio-motor mapping, and we investigate individual differences in learning. We trained participants with no musical background to play on a specially designed MRI-compatible cello and scanned them before and after 1 and 4 wk of training. Activation of the auditory-to-motor dorsal cortical stream emerged rapidly during the training and was similarly activated during passive listening and cello performance of trained melodies. This network activation was independent of performance accuracy and therefore appears to be a prerequisite of music playing. In contrast, greater recruitment of regions involved in auditory encoding and motor control over the training was related to better musical proficiency. Additionally, pre-supplementary motor area activity and its connectivity with the auditory cortex during passive listening before training was predictive of final training success, revealing the integrative function of this network in auditory-motor information processing. Together, these results clarify the critical role of the dorsal stream and its interaction with auditory areas in complex audio-motor learning.

  6. Sensitivity to timing and order in human visual cortex.

    PubMed

    Singer, Jedediah M; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel

    2015-03-01

    Visual recognition takes a small fraction of a second and relies on the cascade of signals along the ventral visual stream. Given the rapid path through multiple processing steps between photoreceptors and higher visual areas, information must progress from stage to stage very quickly. This rapid progression of information suggests that fine temporal details of the neural response may be important to the brain's encoding of visual signals. We investigated how changes in the relative timing of incoming visual stimulation affect the representation of object information by recording intracranial field potentials along the human ventral visual stream while subjects recognized objects whose parts were presented with varying asynchrony. Visual responses along the ventral stream were sensitive to timing differences as small as 17 ms between parts. In particular, there was a strong dependency on the temporal order of stimulus presentation, even at short asynchronies. From these observations we infer that the neural representation of complex information in visual cortex can be modulated by rapid dynamics on scales of tens of milliseconds. Copyright © 2015 the American Physiological Society.

  7. Network analysis of corticocortical connections reveals ventral and dorsal processing streams in mouse visual cortex

    PubMed Central

    Wang, Quanxin; Sporns, Olaf; Burkhalter, Andreas

    2012-01-01

    Much of the information used for visual perception and visually guided actions is processed in complex networks of connections within the cortex. To understand how this works in the normal brain and to determine the impact of disease, mice are promising models. In primate visual cortex, information is processed in a dorsal stream specialized for visuospatial processing and guided action and a ventral stream for object recognition. Here, we traced the outputs of 10 visual areas and used quantitative graph analytic tools of modern network science to determine, from the projection strengths in 39 cortical targets, the community structure of the network. We found a high density of the cortical graph that exceeded that previously shown in monkey. Each source area showed a unique distribution of projection weights across its targets (i.e. connectivity profile) that was well-fit by a lognormal function. Importantly, the community structure was strongly dependent on the location of the source area: outputs from medial/anterior extrastriate areas were more strongly linked to parietal, motor and limbic cortex, whereas lateral extrastriate areas were preferentially connected to temporal and parahippocampal cortex. These two subnetworks resemble dorsal and ventral cortical streams in primates, demonstrating that the basic layout of cortical networks is conserved across species. PMID:22457489

  8. A Network Model of Observation and Imitation of Speech

    PubMed Central

    Mashal, Nira; Solodkin, Ana; Dick, Anthony Steven; Chen, E. Elinor; Small, Steven L.

    2012-01-01

    Much evidence has now accumulated demonstrating and quantifying the extent of shared regional brain activation for observation and execution of speech. However, the nature of the actual networks that implement these functions, i.e., both the brain regions and the connections among them, and the similarities and differences across these networks has not been elucidated. The current study aims to characterize formally a network for observation and imitation of syllables in the healthy adult brain and to compare their structure and effective connectivity. Eleven healthy participants observed or imitated audiovisual syllables spoken by a human actor. We constructed four structural equation models to characterize the networks for observation and imitation in each of the two hemispheres. Our results show that the network models for observation and imitation comprise the same essential structure but differ in important ways from each other (in both hemispheres) based on connectivity. In particular, our results show that the connections from posterior superior temporal gyrus and sulcus to ventral premotor, ventral premotor to dorsal premotor, and dorsal premotor to primary motor cortex in the left hemisphere are stronger during imitation than during observation. The first two connections are implicated in a putative dorsal stream of speech perception, thought to involve translating auditory speech signals into motor representations. Thus, the current results suggest that flow of information during imitation, starting at the posterior superior temporal cortex and ending in the motor cortex, enhances input to the motor cortex in the service of speech execution. PMID:22470360

  9. Functional significance of the electrocorticographic auditory responses in the premotor cortex.

    PubMed

    Tanji, Kazuyo; Sakurada, Kaori; Funiu, Hayato; Matsuda, Kenichiro; Kayama, Takamasa; Ito, Sayuri; Suzuki, Kyoko

    2015-01-01

    Other than the well-known motor activity in the precentral gyrus, functional magnetic resonance imaging (fMRI) studies have found that the ventral part of the precentral gyrus is activated in response to linguistic auditory stimuli. It has been proposed that the premotor cortex in the precentral gyrus is responsible for the comprehension of speech, but the precise function of this area is still debated because patients with frontal lesions that include the precentral gyrus do not exhibit disturbances in speech comprehension. We report on a patient who underwent resection of a tumor in the precentral gyrus with electrocorticographic recordings while she performed the verb generation task during awake brain craniotomy. Consistent with previous fMRI studies, high-gamma band auditory activity was observed in the precentral gyrus. Due to the location of the tumor, the patient underwent resection of the auditory-responsive precentral area, which resulted in the post-operative expression of a characteristic articulatory disturbance known as apraxia of speech (AOS). The language function of the patient was otherwise preserved and she exhibited intact comprehension of both spoken and written language. The present findings demonstrate that a lesion restricted to the ventral precentral gyrus is sufficient for the expression of AOS and suggest that the auditory-responsive area plays an important role in the execution of fluent speech rather than the comprehension of speech. These findings also confirm that the function of the premotor area is predominantly motor in nature and that its sensory responses are more consistent with the "sensory theory of speech production," in which it was proposed that sensory representations are used to guide motor-articulatory processes.

  10. Cannabis Dampens the Effects of Music in Brain Regions Sensitive to Reward and Emotion

    PubMed Central

    Pope, Rebecca A; Wall, Matthew B; Bisby, James A; Luijten, Maartje; Hindocha, Chandni; Mokrysz, Claire; Lawn, Will; Moss, Abigail; Bloomfield, Michael A P; Morgan, Celia J A; Nutt, David J; Curran, H Valerie

    2018-01-01

    Abstract Background Despite the current shift towards permissive cannabis policies, few studies have investigated the pleasurable effects users seek. Here, we investigate the effects of cannabis on listening to music, a rewarding activity that frequently occurs in the context of recreational cannabis use. We additionally tested how these effects are influenced by cannabidiol, which may offset cannabis-related harms. Methods Across 3 sessions, 16 cannabis users inhaled cannabis with cannabidiol, cannabis without cannabidiol, and placebo. We compared their response to music relative to control excerpts of scrambled sound during functional Magnetic Resonance Imaging within regions identified in a meta-analysis of music-evoked reward and emotion. All results were False Discovery Rate corrected (P<.05). Results Compared with placebo, cannabis without cannabidiol dampened response to music in bilateral auditory cortex (right: P=.005, left: P=.008), right hippocampus/parahippocampal gyrus (P=.025), right amygdala (P=.025), and right ventral striatum (P=.033). Across all sessions, the effects of music in this ventral striatal region correlated with pleasure ratings (P=.002) and increased functional connectivity with auditory cortex (right: P<.001, left: P<.001), supporting its involvement in music reward. Functional connectivity between right ventral striatum and auditory cortex was increased by cannabidiol (right: P=.003, left: P=.030), and cannabis with cannabidiol did not differ from placebo on any functional Magnetic Resonance Imaging measures. Both types of cannabis increased ratings of wanting to listen to music (P<.002) and enhanced sound perception (P<.001). Conclusions Cannabis dampens the effects of music in brain regions sensitive to reward and emotion. These effects were offset by a key cannabis constituent, cannabidiol. PMID:29025134

  11. Temporal coherence for pure tones in budgerigars (Melopsittacus undulatus) and humans (Homo sapiens).

    PubMed

    Neilans, Erikson G; Dent, Micheal L

    2015-02-01

    Auditory scene analysis has been suggested as a universal process that exists across all animals. Relative to humans, however, little work has been devoted to how animals perceptually isolate different sound sources. Frequency separation of sounds is arguably the most common parameter studied in auditory streaming, but it is not the only factor contributing to how the auditory scene is perceived. Researchers have found that in humans, even at large frequency separations, synchronous tones are heard as a single auditory stream, whereas asynchronous tones with the same frequency separations are perceived as 2 distinct sounds. These findings demonstrate how both the timing and frequency separation of sounds are important for auditory scene analysis. It is unclear how animals, such as budgerigars (Melopsittacus undulatus), perceive synchronous and asynchronous sounds. In this study, budgerigars and humans (Homo sapiens) were tested on their perception of synchronous, asynchronous, and partially overlapping pure tones using the same psychophysical procedures. Species differences were found between budgerigars and humans in how partially overlapping sounds were perceived, with budgerigars more likely to segregate overlapping sounds and humans more apt to fuse the 2 sounds together. The results also illustrated that temporal cues are particularly important for stream segregation of overlapping sounds. Lastly, budgerigars were found to segregate partially overlapping sounds in a manner predicted by computational models of streaming, whereas humans were not. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  12. Transient human auditory cortex activation during volitional attention shifting

    PubMed Central

    Uhlig, Christian Harm; Gutschalk, Alexander

    2017-01-01

    While strong activation of auditory cortex is generally found for exogenous orienting of attention, endogenous, intra-modal shifting of auditory attention has not yet been demonstrated to evoke transient activation of the auditory cortex. Here, we used fMRI to test if endogenous shifting of attention is also associated with transient activation of the auditory cortex. In contrast to previous studies, attention shifts were completely self-initiated and not cued by transient auditory or visual stimuli. Stimuli were two dichotic, continuous streams of tones, whose perceptual grouping was not ambiguous. Participants were instructed to continuously focus on one of the streams and switch between the two after a while, indicating the time and direction of each attentional shift by pressing one of two response buttons. The BOLD response around the time of the button presses revealed robust activation of the auditory cortex, along with activation of a distributed task network. To test if the transient auditory cortex activation was specifically related to auditory orienting, a self-paced motor task was added, where participants were instructed to ignore the auditory stimulation while they pressed the response buttons in alternation and at a similar pace. Results showed that attentional orienting produced stronger activity in auditory cortex, but auditory cortex activation was also observed for button presses without focused attention to the auditory stimulus. The response related to attention shifting was stronger contralateral to the side where attention was shifted to. Contralateral-dominant activation was also observed in dorsal parietal cortex areas, confirming previous observations for auditory attention shifting in studies that used auditory cues. PMID:28273110

  13. An Objective Measurement of the Build-Up of Auditory Streaming and of Its Modulation by Attention

    ERIC Educational Resources Information Center

    Thompson, Sarah K.; Carlyon, Robert P.; Cusack, Rhodri

    2011-01-01

    Three experiments studied auditory streaming using sequences of alternating "ABA" triplets, where "A" and "B" were 50-ms tones differing in frequency by [delta]f semitones and separated by 75-ms gaps. Experiment 1 showed that detection of a short increase in the gap between a B tone and the preceding A tone, imposed on one ABA triplet, was better…

  14. Neural bases of imitation and pantomime in acute stroke patients: distinct streams for praxis.

    PubMed

    Hoeren, Markus; Kümmerer, Dorothee; Bormann, Tobias; Beume, Lena; Ludwig, Vera M; Vry, Magnus-Sebastian; Mader, Irina; Rijntjes, Michel; Kaller, Christoph P; Weiller, Cornelius

    2014-10-01

    Apraxia is a cognitive disorder of skilled movements that characteristically affects the ability to imitate meaningless gestures, or to pantomime the use of tools. Despite substantial research, the neural underpinnings of imitation and pantomime have remained debated. An influential model states that higher motor functions are supported by different processing streams. A dorso-dorsal stream may mediate movements based on physical object properties, like reaching or grasping, whereas skilled tool use or pantomime rely on action representations stored within a ventro-dorsal stream. However, given variable results of past studies, the role of the two streams for imitation of meaningless gestures has remained uncertain, and the importance of the ventro-dorsal stream for pantomime of tool use has been questioned. To clarify the involvement of ventral and dorsal streams in imitation and pantomime, we performed voxel-based lesion-symptom mapping in a sample of 96 consecutive left-hemisphere stroke patients (mean age ± SD, 63.4 ± 14.8 years, 56 male). Patients were examined in the acute phase after ischaemic stroke (after a mean of 5.3 days, maximum 10 days) to minimize, as far as possible, interference of brain reorganization with reliable lesion-symptom mapping. Patients were asked to imitate 20 meaningless hand and finger postures, and to pantomime the use of 14 common tools depicted as line drawings. Following the distinction between movement engrams and action semantics, pantomime errors were characterized as either movement or content errors, respectively. Whereas movement errors referred to incorrect spatio-temporal features of overall recognizable movements, content errors reflected an inability to associate tools with their prototypical actions. Both imitation and pantomime deficits were associated with lesions within the lateral occipitotemporal cortex, posterior inferior parietal lobule, posterior intraparietal sulcus and superior parietal lobule. 
However, the areas specifically related to the dorso-dorsal stream, i.e. posterior intraparietal sulcus and superior parietal lobule, were more strongly associated with imitation. Conversely, in contrast to imitation, pantomime deficits were associated with ventro-dorsal regions such as the supramarginal gyrus, as well as brain structures assigned to the ventral stream, such as the extreme capsule. Ventral stream involvement was especially clear for content errors, which were related to anterior temporal damage. However, movement errors were not consistently associated with a specific lesion location. In summary, our results indicate that imitation mainly relies on the dorso-dorsal stream for visuo-motor conversion and on-line movement control. Conversely, pantomime additionally requires ventro-dorsal and ventral streams for access to stored action engrams and retrieval of tool-action relationships. © The Author (2014). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. Neural Entrainment to Rhythmically Presented Auditory, Visual, and Audio-Visual Speech in Children

    PubMed Central

    Power, Alan James; Mead, Natasha; Barnes, Lisa; Goswami, Usha

    2012-01-01

    Auditory cortical oscillations have been proposed to play an important role in speech perception. It is suggested that the brain may take temporal “samples” of information from the speech stream at different rates, phase resetting ongoing oscillations so that they are aligned with similar frequency bands in the input (“phase locking”). Information from these frequency bands is then bound together for speech perception. To date, there are no explorations of neural phase locking and entrainment to speech input in children. However, it is clear from studies of language acquisition that infants use both visual speech information and auditory speech information in learning. In order to study neural entrainment to speech in typically developing children, we use a rhythmic entrainment paradigm (underlying 2 Hz or delta rate) based on repetition of the syllable “ba,” presented in either the auditory modality alone, the visual modality alone, or as auditory-visual speech (via a “talking head”). To ensure attention to the task, children aged 13 years were asked to press a button as fast as possible when the “ba” stimulus violated the rhythm for each stream type. Rhythmic violation depended on delaying the occurrence of a “ba” in the isochronous stream. Neural entrainment was demonstrated for all stream types, and individual differences in standardized measures of language processing were related to auditory entrainment at the theta rate. Further, there was significant modulation of the preferred phase of auditory entrainment in the theta band when visual speech cues were present, indicating cross-modal phase resetting. The rhythmic entrainment paradigm developed here offers a method for exploring individual differences in oscillatory phase locking during development. 
In particular, a method for assessing neural entrainment and cross-modal phase resetting would be useful for exploring developmental learning difficulties thought to involve temporal sampling, such as dyslexia. PMID:22833726
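The phase-locking measures described in this record can be illustrated in a few lines. Below is a minimal sketch, not the authors' actual pipeline: band-pass the EEG, take the analytic phase via the Hilbert transform, and summarize the phases at stimulus onsets with a circular mean, whose magnitude is the inter-trial phase coherence and whose angle is the preferred phase. All function names and parameters here are invented for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking(eeg, fs, band, onsets):
    """Inter-trial phase coherence (ITPC) and preferred phase of a
    band-limited signal at a set of stimulus-onset samples.
    A standard entrainment measure; the study's own analysis may differ."""
    # band-pass filter (zero-phase), then analytic phase via Hilbert transform
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, eeg)))
    vecs = np.exp(1j * phase[onsets])            # unit phase vector per trial
    mean_vec = vecs.mean()
    return np.abs(mean_vec), np.angle(mean_vec)  # ITPC in [0, 1], preferred phase

# toy check: a 4 Hz sinusoid sampled at onsets one theta cycle apart
# should be almost perfectly phase locked
fs = 500
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 4 * t)
onsets = np.arange(fs, 9 * fs, fs // 4)          # every 0.25 s = one 4 Hz cycle
itpc, phi = phase_locking(eeg, fs, (3, 7), onsets)
```

On real EEG, the coherence and preferred phase would then be compared across the auditory, visual, and audio-visual streams and related to the language measures.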

  16. Widespread Brain Areas Engaged during a Classical Auditory Streaming Task Revealed by Intracranial EEG

    PubMed Central

    Dykstra, Andrew R.; Halgren, Eric; Thesen, Thomas; Carlson, Chad E.; Doyle, Werner; Madsen, Joseph R.; Eskandar, Emad N.; Cash, Sydney S.

    2011-01-01

The auditory system must constantly decompose the complex mixture of sound arriving at the ear into perceptually independent streams constituting accurate representations of individual sources in the acoustic environment. How the brain accomplishes this task is not well understood. The present study combined a classic behavioral paradigm with direct cortical recordings from neurosurgical patients with epilepsy in order to further describe the neural correlates of auditory streaming. Participants listened to sequences of pure tones alternating in frequency and indicated whether they heard one or two “streams.” The intracranial EEG was simultaneously recorded from sub-dural electrodes placed over temporal, frontal, and parietal cortex. Like healthy subjects, patients heard one stream when the frequency separation between tones was small and two when it was large. Robust evoked-potential correlates of frequency separation were observed over widespread brain areas. Waveform morphology was highly variable across individual electrode sites both within and across gross brain regions. Surprisingly, few evoked-potential correlates of perceptual organization were observed after controlling for physical stimulus differences. The results indicate that the cortical areas engaged during the streaming task are more complex and widespread than has been demonstrated by previous work, and that, by and large, correlates of bistability during streaming are probably located on a spatial scale not assessed – or in a brain area not examined – by the present study. PMID:21886615
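The alternating-tone paradigm used in this record can be made concrete with a toy stimulus generator. This is a sketch of the classic A-B-A triplet sequence; the specific frequencies, durations, and levels below are illustrative assumptions, not the parameters of the study. The A-B frequency separation is the variable that flips perception between one stream and two.

```python
import numpy as np

def aba_sequence(f_a=500.0, df_semitones=6, n_triplets=10,
                 tone_dur=0.1, gap=0.02, fs=44100):
    """A-B-A- triplet sequence for auditory streaming experiments:
    a small df tends to be heard as one stream, a large df as two.
    All parameter values are illustrative."""
    f_b = f_a * 2 ** (df_semitones / 12)        # B tone df semitones above A
    t = np.arange(int(tone_dur * fs)) / fs
    ramp = np.minimum(1, np.minimum(t, t[::-1]) / 0.005)   # 5 ms on/off ramps
    tone = lambda f: np.sin(2 * np.pi * f * t) * ramp
    silence = np.zeros(int(gap * fs))
    # one triplet: A B A followed by a tone-length pause ("A-B-A-")
    triplet = np.concatenate([tone(f_a), silence, tone(f_b), silence,
                              tone(f_a), silence,
                              np.zeros(int((tone_dur + gap) * fs))])
    return np.tile(triplet, n_triplets)

seq = aba_sequence()
```

Sweeping `df_semitones` from small to large while listeners report "one stream" or "two streams" reproduces the behavioral side of the paradigm.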

  17. VGLUT1 and VGLUT2 mRNA expression in the primate auditory pathway

    PubMed Central

    Hackett, Troy A.; Takahata, Toru; Balaram, Pooja

    2011-01-01

    The vesicular glutamate transporters (VGLUTs) regulate storage and release of glutamate in the brain. In adult animals, the VGLUT1 and VGLUT2 isoforms are widely expressed and differentially distributed, suggesting that neural circuits exhibit distinct modes of glutamate regulation. Studies in rodents suggest that VGLUT1 and VGLUT2 mRNA expression patterns are partly complementary, with VGLUT1 expressed at higher levels in cortex and VGLUT2 prominent subcortically, but with overlapping distributions in some nuclei. In primates, VGLUT gene expression has not been previously studied in any part of the brain. The purposes of the present study were to document the regional expression of VGLUT1 and VGLUT2 mRNA in the auditory pathway through A1 in cortex, and to determine whether their distributions are comparable to rodents. In situ hybridization with antisense riboprobes revealed that VGLUT2 was strongly expressed by neurons in the cerebellum and most major auditory nuclei, including the dorsal and ventral cochlear nuclei, medial and lateral superior olivary nuclei, central nucleus of the inferior colliculus, sagulum, and all divisions of the medial geniculate. VGLUT1 was densely expressed in the hippocampus and ventral cochlear nuclei, and at reduced levels in other auditory nuclei. In auditory cortex, neurons expressing VGLUT1 were widely distributed in layers II – VI of the core, belt and parabelt regions. VGLUT2 was most strongly expressed by neurons in layers IIIb and IV, weakly by neurons in layers II – IIIa, and at very low levels in layers V – VI. The findings indicate that VGLUT2 is strongly expressed by neurons at all levels of the subcortical auditory pathway, and by neurons in the middle layers of cortex, whereas VGLUT1 is strongly expressed by most if not all glutamatergic neurons in auditory cortex and at variable levels among auditory subcortical nuclei. 
These patterns imply that VGLUT2 is the main vesicular glutamate transporter in subcortical and thalamocortical (TC) circuits, whereas VGLUT1 is dominant in cortico-cortical (CC) and cortico-thalamic (CT) systems of projections. The results also suggest that VGLUT mRNA expression patterns in primates are similar to those in rodents, and establish a baseline for detailed studies of these transporters in selected circuits of the auditory system. PMID:21111036

  18. VGLUT1 and VGLUT2 mRNA expression in the primate auditory pathway.

    PubMed

    Hackett, Troy A; Takahata, Toru; Balaram, Pooja

    2011-04-01

    The vesicular glutamate transporters (VGLUTs) regulate the storage and release of glutamate in the brain. In adult animals, the VGLUT1 and VGLUT2 isoforms are widely expressed and differentially distributed, suggesting that neural circuits exhibit distinct modes of glutamate regulation. Studies in rodents suggest that VGLUT1 and VGLUT2 mRNA expression patterns are partly complementary, with VGLUT1 expressed at higher levels in the cortex and VGLUT2 prominent subcortically, but with overlapping distributions in some nuclei. In primates, VGLUT gene expression has not been previously studied in any part of the brain. The purposes of the present study were to document the regional expression of VGLUT1 and VGLUT2 mRNA in the auditory pathway through A1 in the cortex, and to determine whether their distributions are comparable to rodents. In situ hybridization with antisense riboprobes revealed that VGLUT2 was strongly expressed by neurons in the cerebellum and most major auditory nuclei, including the dorsal and ventral cochlear nuclei, medial and lateral superior olivary nuclei, central nucleus of the inferior colliculus, sagulum, and all divisions of the medial geniculate. VGLUT1 was densely expressed in the hippocampus and ventral cochlear nuclei, and at reduced levels in other auditory nuclei. In the auditory cortex, neurons expressing VGLUT1 were widely distributed in layers II-VI of the core, belt and parabelt regions. VGLUT2 was expressed most strongly by neurons in layers IIIb and IV, weakly by neurons in layers II-IIIa, and at very low levels in layers V-VI. The findings indicate that VGLUT2 is strongly expressed by neurons at all levels of the subcortical auditory pathway, and by neurons in the middle layers of the cortex, whereas VGLUT1 is strongly expressed by most if not all glutamatergic neurons in the auditory cortex and at variable levels among auditory subcortical nuclei. 
These patterns imply that VGLUT2 is the main vesicular glutamate transporter in subcortical and thalamocortical (TC) circuits, whereas VGLUT1 is dominant in corticocortical (CC) and corticothalamic (CT) systems of projections. The results also suggest that VGLUT mRNA expression patterns in primates are similar to those in rodents, and establish a baseline for detailed studies of these transporters in selected circuits of the auditory system. Copyright © 2010 Elsevier B.V. All rights reserved.

  19. Audio Watermark Embedding Technique Applying Auditory Stream Segregation: "G-encoder Mark" Able to Be Extracted by Mobile Phone

    NASA Astrophysics Data System (ADS)

    Modegi, Toshio

We are developing audio watermarking techniques that enable embedded data to be extracted by mobile phones. This requires embedding data in frequency ranges where auditory sensitivity is high, so embedding tends to introduce audible noise. We previously proposed exploiting a two-channel stereo playback feature, in which the noise generated by a data-embedded left-channel signal is canceled by the right-channel signal. However, that approach has the practical drawback of restricting where the extracting terminal can be located. In this paper, we propose synthesizing the noise-reducing right-channel signal with the left-channel signal, canceling the noise completely by inducing an auditory stream segregation phenomenon in the listener. This new proposal makes a separate noise-reducing right channel unnecessary and supports monaural playback. Moreover, we propose a wide-band embedding method that produces dual auditory stream segregation phenomena, enabling data embedding across the whole public telephone frequency range and stable extraction with 3G mobile phones. With these proposals, extraction precision is higher than with the previously proposed method, while the quality degradation of the embedded signals is smaller. We present an overview of the newly proposed method and experimental results comparing it with the previously proposed method.
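As a toy illustration of embedding data in audible frequency ranges, one can key each bit to one of two tones in successive frames and recover it by comparing spectral energy at the two key frequencies. This is only a generic frequency-keying sketch: the actual G-encoder Mark scheme is far more elaborate and relies on auditory stream segregation to mask the payload. Every name and parameter below is hypothetical.

```python
import numpy as np

def embed_bits(signal, bits, fs=8000, f0=1200.0, f1=1600.0,
               frame=0.05, amp=0.01):
    """Add a faint tone at f0 (bit 0) or f1 (bit 1) over each frame.
    Illustrative only; not the paper's embedding method."""
    out = signal.copy()
    n = int(frame * fs)
    t = np.arange(n) / fs
    for i, bit in enumerate(bits):
        f = f1 if bit else f0
        out[i * n:(i + 1) * n] += amp * np.sin(2 * np.pi * f * t)
    return out

def extract_bits(signal, n_bits, fs=8000, f0=1200.0, f1=1600.0, frame=0.05):
    """Recover each bit by comparing spectral energy at the two key tones
    (a single-bin DFT per frequency, per frame)."""
    n = int(frame * fs)
    t = np.arange(n) / fs
    bits = []
    for i in range(n_bits):
        seg = signal[i * n:(i + 1) * n]
        e0 = abs(np.dot(seg, np.exp(-2j * np.pi * f0 * t)))
        e1 = abs(np.dot(seg, np.exp(-2j * np.pi * f1 * t)))
        bits.append(int(e1 > e0))
    return bits

carrier = np.zeros(8000)          # 1 s of silence as a stand-in host signal
marked = embed_bits(carrier, [1, 0, 1, 1, 0], fs=8000)
recovered = extract_bits(marked, 5, fs=8000)
```

With a real host signal, robust extraction would additionally require synchronization and error correction, which is where schemes like the one in this record invest their effort.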

  20. Glycinergic Pathways of the Central Auditory System and Adjacent Reticular Formation of the Rat.

    NASA Astrophysics Data System (ADS)

    Hunter, Chyren

The development of techniques to visualize and identify specific transmitters of neuronal circuits has stimulated work on the characterization of pathways in the rat central nervous system that utilize the inhibitory amino acid glycine as their neurotransmitter. Glycine is a major inhibitory transmitter in the spinal cord and brainstem of vertebrates, where it satisfies the major criteria for neurotransmitter action. Some of these characteristics are: uneven distribution in the brain, high-affinity reuptake mechanisms, inhibitory neurophysiological actions on certain neuronal populations, uneven receptor distribution, and the specific antagonism of its actions by the convulsant alkaloid strychnine. Behaviorally, antagonism of glycinergic neurotransmission in the medullary reticular formation is linked to the development of myoclonus and seizures, which may be initiated by auditory as well as other stimuli. In the present study, decreases in the concentration of glycine as well as in the density of glycine receptors in the medulla with aging were found and may be responsible for the lowered threshold for strychnine seizures observed in older rats. Neuroanatomical pathways in the central auditory system and medullary and pontine reticular formation (RF) were investigated using retrograde transport of tritiated glycine to identify glycinergic pathways; immunohistochemical techniques were used to corroborate the location of glycine neurons. Within the central auditory system, retrograde transport studies using tritiated glycine demonstrated an ipsilateral glycinergic pathway linking nuclei of the ascending auditory system. This pathway has its cell bodies in the medial nucleus of the trapezoid body (MNTB) and projects to the ventrocaudal division of the ventral nucleus of the lateral lemniscus (VLL). Collaterals of this glycinergic projection terminate in the ipsilateral lateral superior olive (LSO).
Other glycinergic pathways found were afferent to the VLL and have their origin in the ventral and lateral nuclei of the trapezoid body (MVPO and LVPO). Bilateral projections from the nucleus reticularis pontis oralis (RPOo), to the VLL were also identified as glycinergic. This projection may link motor output systems to ascending auditory input, generating the auditory behavioral responses seen with glycine antagonism in animal models of myoclonus and seizure.

  1. The effects of spatially separated call components on phonotaxis in túngara frogs: evidence for auditory grouping.

    PubMed

    Farris, Hamilton E; Rand, A Stanley; Ryan, Michael J

    2002-01-01

Numerous animals across disparate taxa must identify and locate complex acoustic signals embedded in multiple overlapping signals and ambient noise. A requirement of this task is the ability to group sounds into auditory streams in which sounds are perceived as emanating from the same source. Although numerous studies over the past 50 years have examined aspects of auditory grouping in humans, surprisingly few assays have demonstrated auditory stream formation or the assignment of multicomponent signals to a single source in non-human animals. In our study, we present evidence for auditory grouping in female túngara frogs. In contrast to humans, in which auditory grouping may be facilitated by the cues produced when sounds arrive from the same location, we show that spatial cues play a limited role in grouping, as females group discrete components of the species' complex call over wide angular separations. Furthermore, we show that, once grouped, the separate call components are weighted differently in recognizing and locating the call, so-called 'what' and 'where' decisions, respectively. Copyright 2002 S. Karger AG, Basel

  2. Does a Flatter General Gradient of Visual Attention Explain Peripheral Advantages and Central Deficits in Deaf Adults?

    PubMed Central

    Samar, Vincent J.; Berger, Lauren

    2017-01-01

    Individuals deaf from early age often outperform hearing individuals in the visual periphery on attention-dependent dorsal stream tasks (e.g., spatial localization or movement detection), but sometimes show central visual attention deficits, usually on ventral stream object identification tasks. It has been proposed that early deafness adaptively redirects attentional resources from central to peripheral vision to monitor extrapersonal space in the absence of auditory cues, producing a more evenly distributed attention gradient across visual space. However, little direct evidence exists that peripheral advantages are functionally tied to central deficits, rather than determined by independent mechanisms, and previous studies using several attention tasks typically report peripheral advantages or central deficits, not both. To test the general altered attentional gradient proposal, we employed a novel divided attention paradigm that measured target localization performance along a gradient from parafoveal to peripheral locations, independent of concurrent central object identification performance in prelingually deaf and hearing groups who differed in access to auditory input. Deaf participants without cochlear implants (No-CI), with cochlear implants (CI), and hearing participants identified vehicles presented centrally, and concurrently reported the location of parafoveal (1.4°) and peripheral (13.3°) targets among distractors. No-CI participants but not CI participants showed a central identification accuracy deficit. However, all groups displayed equivalent target localization accuracy at peripheral and parafoveal locations and nearly parallel parafoveal-peripheral gradients. Furthermore, the No-CI group’s central identification deficit remained after statistically controlling peripheral performance; conversely, the parafoveal and peripheral group performance equivalencies remained after controlling central identification accuracy. 
These results suggest that, in the absence of auditory input, reduced central attentional capacity is not necessarily associated with enhanced peripheral attentional capacity or with flattening of a general attention gradient. Our findings converge with earlier studies suggesting that a general graded trade-off of attentional resources across the visual field does not adequately explain the complex task-dependent spatial distribution of deaf-hearing performance differences reported in the literature. Rather, growing evidence suggests that the spatial distribution of attention-mediated performance in deaf people is determined by sophisticated cross-modal plasticity mechanisms that recruit specific sensory and polymodal cortex to achieve specific compensatory processing goals. PMID:28559861

  3. Weighing the evidence for a dorsal processing bias under continuous flash suppression.

    PubMed

    Ludwig, Karin; Hesselmann, Guido

    2015-09-01

With the introduction of continuous flash suppression (CFS) as a method to render stimuli invisible and study unconscious visual processing, a novel hypothesis has gained popularity. It states that processes typically ascribed to the dorsal visual stream can escape CFS and remain functional, while ventral stream processes are suppressed when stimuli are invisible under CFS. This notion of a CFS-specific "dorsal processing bias" has been argued to be in line with core characteristics of the influential dual-stream hypothesis of visual processing, which proposes a dissociation between dorsally mediated vision-for-action and ventrally mediated vision-for-perception. Here, we provide an overview of neuroimaging and behavioral studies that either examine this dorsal processing bias or base their conclusions on it. We show that evidence both for preserved ventral processing and for a lack of dorsal processing can be found in studies using CFS. To reconcile the diverging results, differences in the paradigms and their effects are worthy of future research. We conclude that, given the current level of information, a dorsal processing bias under CFS cannot be universally assumed. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Body-part-specific representations of semantic noun categories.

    PubMed

    Carota, Francesca; Moseley, Rachel; Pulvermüller, Friedemann

    2012-06-01

    Word meaning processing in the brain involves ventrolateral temporal cortex, but a semantic contribution of the dorsal stream, especially frontocentral sensorimotor areas, has been controversial. We here examine brain activation during passive reading of object-related nouns from different semantic categories, notably animal, food, and tool words, matched for a range of psycholinguistic features. Results show ventral stream activation in temporal cortex along with category-specific activation patterns in both ventral and dorsal streams, including sensorimotor systems and adjacent pFC. Precentral activation reflected action-related semantic features of the word categories. Cortical regions implicated in mouth and face movements were sparked by food words, and hand area activation was seen for tool words, consistent with the actions implicated by the objects the words are used to speak about. Furthermore, tool words specifically activated the right cerebellum, and food words activated the left orbito-frontal and fusiform areas. We discuss our results in the context of category-specific semantic deficits in the processing of words and concepts, along with previous neuroimaging research, and conclude that specific dorsal and ventral areas in frontocentral and temporal cortex index visual and affective-emotional semantic attributes of object-related nouns and action-related affordances of their referent objects.

  5. Auditory and audio-vocal responses of single neurons in the monkey ventral premotor cortex.

    PubMed

    Hage, Steffen R

    2018-03-20

    Monkey vocalization is a complex behavioral pattern, which is flexibly used in audio-vocal communication. A recently proposed dual neural network model suggests that cognitive control might be involved in this behavior, originating from a frontal cortical network in the prefrontal cortex and mediated via projections from the rostral portion of the ventral premotor cortex (PMvr) and motor cortex to the primary vocal motor network in the brainstem. For the rapid adjustment of vocal output to external acoustic events, strong interconnections between vocal motor and auditory sites are needed, which are present at cortical and subcortical levels. However, the role of the PMvr in audio-vocal integration processes remains unclear. In the present study, single neurons in the PMvr were recorded in rhesus monkeys (Macaca mulatta) while volitionally producing vocalizations in a visual detection task or passively listening to monkey vocalizations. Ten percent of randomly selected neurons in the PMvr modulated their discharge rate in response to acoustic stimulation with species-specific calls. More than four-fifths of these auditory neurons showed an additional modulation of their discharge rates either before and/or during the monkeys' motor production of the vocalization. Based on these audio-vocal interactions, the PMvr might be well positioned to mediate higher order auditory processing with cognitive control of the vocal motor output to the primary vocal motor network. Such audio-vocal integration processes in the premotor cortex might constitute a precursor for the evolution of complex learned audio-vocal integration systems, ultimately giving rise to human speech. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Early musical training is linked to gray matter structure in the ventral premotor cortex and auditory-motor rhythm synchronization performance.

    PubMed

    Bailey, Jennifer Anne; Zatorre, Robert J; Penhune, Virginia B

    2014-04-01

    Evidence in animals and humans indicates that there are sensitive periods during development, times when experience or stimulation has a greater influence on behavior and brain structure. Sensitive periods are the result of an interaction between maturational processes and experience-dependent plasticity mechanisms. Previous work from our laboratory has shown that adult musicians who begin training before the age of 7 show enhancements in behavior and white matter structure compared with those who begin later. Plastic changes in white matter and gray matter are hypothesized to co-occur; therefore, the current study investigated possible differences in gray matter structure between early-trained (ET; <7) and late-trained (LT; >7) musicians, matched for years of experience. Gray matter structure was assessed using voxel-wise analysis techniques (optimized voxel-based morphometry, traditional voxel-based morphometry, and deformation-based morphometry) and surface-based measures (cortical thickness, surface area and mean curvature). Deformation-based morphometry analyses identified group differences between ET and LT musicians in right ventral premotor cortex (vPMC), which correlated with performance on an auditory motor synchronization task and with age of onset of musical training. In addition, cortical surface area in vPMC was greater for ET musicians. These results are consistent with evidence that premotor cortex shows greatest maturational change between the ages of 6-9 years and that this region is important for integrating auditory and motor information. We propose that the auditory and motor interactions required by musical practice drive plasticity in vPMC and that this plasticity is greatest when maturation is near its peak.

  7. Separating pitch chroma and pitch height in the human brain

    PubMed Central

    Warren, J. D.; Uppenkamp, S.; Patterson, R. D.; Griffiths, T. D.

    2003-01-01

    Musicians recognize pitch as having two dimensions. On the keyboard, these are illustrated by the octave and the cycle of notes within the octave. In perception, these dimensions are referred to as pitch height and pitch chroma, respectively. Pitch chroma provides a basis for presenting acoustic patterns (melodies) that do not depend on the particular sound source. In contrast, pitch height provides a basis for segregation of notes into streams to separate sound sources. This paper reports a functional magnetic resonance experiment designed to search for distinct mappings of these two types of pitch change in the human brain. The results show that chroma change is specifically represented anterior to primary auditory cortex, whereas height change is specifically represented posterior to primary auditory cortex. We propose that tracking of acoustic information streams occurs in anterior auditory areas, whereas the segregation of sound objects (a crucial aspect of auditory scene analysis) depends on posterior areas. PMID:12909719
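The two pitch dimensions this record maps can be made concrete with a small helper (an illustration, not the authors' analysis): height is the continuous log-frequency position relative to a reference, and chroma is that position taken modulo one octave. The reference frequency below (middle C) is an assumption for the sketch.

```python
import math

def pitch_height_and_chroma(freq_hz, ref_hz=261.626):
    """Decompose a frequency into pitch height (octaves above the
    reference, a continuous value) and pitch chroma (position within
    the octave, in [0, 1))."""
    height = math.log2(freq_hz / ref_hz)
    chroma = height % 1.0
    return height, chroma

# an octave jump changes height by one octave but leaves chroma unchanged
h1, c1 = pitch_height_and_chroma(261.626)
h2, c2 = pitch_height_and_chroma(523.252)
```

The paper's stimuli exploit exactly this dissociation: melodies can move in chroma at fixed height, or in height at fixed chroma, allowing the two dimensions to be mapped separately.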

  8. Separating pitch chroma and pitch height in the human brain.

    PubMed

    Warren, J D; Uppenkamp, S; Patterson, R D; Griffiths, T D

    2003-08-19

    Musicians recognize pitch as having two dimensions. On the keyboard, these are illustrated by the octave and the cycle of notes within the octave. In perception, these dimensions are referred to as pitch height and pitch chroma, respectively. Pitch chroma provides a basis for presenting acoustic patterns (melodies) that do not depend on the particular sound source. In contrast, pitch height provides a basis for segregation of notes into streams to separate sound sources. This paper reports a functional magnetic resonance experiment designed to search for distinct mappings of these two types of pitch change in the human brain. The results show that chroma change is specifically represented anterior to primary auditory cortex, whereas height change is specifically represented posterior to primary auditory cortex. We propose that tracking of acoustic information streams occurs in anterior auditory areas, whereas the segregation of sound objects (a crucial aspect of auditory scene analysis) depends on posterior areas.

  9. Do Object-Category Selective Regions in the Ventral Visual Stream Represent Perceived Distance Information?

    ERIC Educational Resources Information Center

    Amit, Elinor; Mehoudar, Eyal; Trope, Yaacov; Yovel, Galit

    2012-01-01

    It is well established that scenes and objects elicit a highly selective response in specific brain regions in the ventral visual cortex. An inherent difference between these categories that has not been explored yet is their perceived distance from the observer (i.e. scenes are distal whereas objects are proximal). The current study aimed to test…

  10. Mirror System Activity for Action and Language Is Embedded in the Integration of Dorsal and Ventral Pathways

    ERIC Educational Resources Information Center

    Arbib, Michael A.

    2010-01-01

    We develop the view that the involvement of mirror neurons in embodied experience grounds brain structures that underlie language, but that many other brain regions are involved. We stress the cooperation between the dorsal and ventral streams in praxis and language. Both have perceptual and motor schemas but the perceptual schemas in the dorsal…

  11. Acoustic imprinting leads to differential 2-deoxy-D-glucose uptake in the chick forebrain.

    PubMed Central

    Maier, V; Scheich, H

    1983-01-01

This report describes experiments in which successful acoustic imprinting correlates with differential uptake of D-2-deoxy[14C]glucose in particular forebrain areas that are not considered primarily auditory. Newly hatched guinea chicks (Numida meleagris meleagris) were imprinted by playing 1.8-kHz or 2.5-kHz tone bursts for prolonged periods. Chicks were considered imprinted if they approached the imprinting stimulus (emitted from a loudspeaker) and preferred it over a new stimulus in a simultaneous discrimination test. In the 2-deoxy-D-glucose experiment all chicks, imprinted and naive, were exposed to 1.8-kHz tone bursts for 1 hr. As shown by the autoradiographic analysis of the brains, neurons in the 1.8-kHz isofrequency plane of the auditory "cortex" (field L) were activated in all chicks, whether imprinted or not. However, in the most rostral forebrain striking differences were found. Imprinted chicks showed an increased 2-deoxy-D-glucose uptake in three areas, as compared to naive chicks: (i) the lateral neostriatum and hyperstriatum ventrale, (ii) a medial magnocellular field (medial neostriatum/hyperstriatum ventrale), and (iii) the most dorsal layers of the hyperstriatum. Based on these findings we conclude that these areas are involved in the processing of auditory stimuli once they have become meaningful through experience. PMID:6574519

  12. Vision for perception and vision for action in the primate brain.

    PubMed

    Goodale, M A

    1998-01-01

Visual systems first evolved not to enable animals to see, but to provide distal sensory control of their movements. Vision as 'sight' is a relative newcomer to the evolutionary landscape, but its emergence has enabled animals to carry out complex cognitive operations on perceptual representations of the world. The two streams of visual processing that have been identified in the primate cerebral cortex are a reflection of these two functions of vision. The dorsal 'action' stream, projecting from the primary visual cortex to the posterior parietal cortex, provides flexible control of more ancient subcortical visuomotor modules for the production of motor acts. The ventral 'perceptual' stream, projecting from the primary visual cortex to the temporal lobe, provides the rich and detailed representation of the world required for cognitive operations. Both streams process information about the structure of objects and about their spatial locations, and both are subject to the modulatory influences of attention. Each stream, however, uses visual information in different ways. Transformations carried out in the ventral stream permit the formation of perceptual representations that embody the enduring characteristics of objects and their relations; those carried out in the dorsal stream, which utilize moment-to-moment information about objects within egocentric frames of reference, mediate the control of skilled actions. Both streams work together in the production of goal-directed behaviour.

  13. Identification of a pathway for intelligible speech in the left temporal lobe

    PubMed Central

    Scott, Sophie K.; Blank, C. Catrin; Rosen, Stuart; Wise, Richard J. S.

    2017-01-01

It has been proposed that the identification of sounds, including species-specific vocalizations, by primates depends on anterior projections from the primary auditory cortex, an auditory pathway analogous to the ventral route proposed for the visual identification of objects. We have identified a similar route in the human brain for understanding intelligible speech. Using PET imaging to identify separable neural subsystems within the human auditory cortex, we used a variety of speech and speech-like stimuli with equivalent acoustic complexity but varying intelligibility. We have demonstrated that the left superior temporal sulcus responds to the presence of phonetic information, but its anterior part only responds if the stimulus is also intelligible. This novel observation demonstrates a left anterior temporal pathway for speech comprehension. PMID:11099443

  14. Object Recognition in Williams Syndrome: Uneven Ventral Stream Activation

    ERIC Educational Resources Information Center

    O'Hearn, Kirsten; Roth, Jennifer K.; Courtney, Susan M.; Luna, Beatriz; Street, Whitney; Terwillinger, Robert; Landau, Barbara

    2011-01-01

    Williams syndrome (WS) is a genetic disorder associated with severe visuospatial deficits, relatively strong language skills, heightened social interest, and increased attention to faces. On the basis of the visuospatial deficits, this disorder has been characterized primarily as a deficit of the dorsal stream, the occipitoparietal brain regions…

  15. Speech training alters consonant and vowel responses in multiple auditory cortex fields

    PubMed Central

    Engineer, Crystal T.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Fink, Melyssa K.; Kilgard, Michael P.

    2015-01-01

    Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. PMID:25827927

  16. A Mediating Role of the Premotor Cortex in Phoneme Segmentation

    ERIC Educational Resources Information Center

    Sato, Marc; Tremblay, Pascale; Gracco, Vincent L.

    2009-01-01

    Consistent with a functional role of the motor system in speech perception, disturbing the activity of the left ventral premotor cortex by means of repetitive transcranial magnetic stimulation (rTMS) has been shown to impair auditory identification of syllables that were masked with white noise. However, whether this region is crucial for speech…

  17. The Process of Auditory Distraction: Disrupted Attention and Impaired Recall in a Simulated Lecture Environment

    ERIC Educational Resources Information Center

    Zeamer, Charlotte; Fox Tree, Jean E.

    2013-01-01

    Literature on auditory distraction has generally focused on the effects of particular kinds of sounds on attention to target stimuli. In support of extensive previous findings that have demonstrated the special role of language as an auditory distractor, we found that a concurrent speech stream impaired recall of a short lecture, especially for…

  18. BMP regulates regional gene expression in the dorsal otocyst through canonical and non-canonical intracellular pathways

    PubMed Central

    2016-01-01

    The inner ear consists of two otocyst-derived, structurally and functionally distinct components: the dorsal vestibular and ventral auditory compartments. BMP signaling is required to form the vestibular compartment, but how it complements other required signaling molecules and acts intracellularly is unknown. Using spatially and temporally controlled delivery of signaling pathway regulators to developing chick otocysts, we show that BMP signaling regulates the expression of Dlx5 and Hmx3, both of which encode transcription factors essential for vestibular formation. However, although BMP regulates Dlx5 through the canonical SMAD pathway, surprisingly, it regulates Hmx3 through a non-canonical pathway involving both an increase in cAMP-dependent protein kinase A activity and the GLI3R to GLI3A ratio. Thus, both canonical and non-canonical BMP signaling establish the precise spatiotemporal expression of Dlx5 and Hmx3 during dorsal vestibular development. The identification of the non-canonical pathway suggests an intersection point between BMP and SHH signaling, which is required for ventral auditory development. PMID:27151948

  19. Frequency organization and responses to complex sounds in the medial geniculate body of the mustached bat.

    PubMed

    Wenstrup, J J

    1999-11-01

    The auditory cortex of the mustached bat (Pteronotus parnellii) displays some of the most highly developed physiological and organizational features described in mammalian auditory cortex. This study examines response properties and organization in the medial geniculate body (MGB) that may contribute to these features of auditory cortex. About 25% of 427 auditory responses had simple frequency tuning with single excitatory tuning curves. The remainder displayed more complex frequency tuning when tested with two-tone or noise stimuli. Most of these were combination-sensitive, responsive to combinations of different frequency bands within sonar or social vocalizations. They included FM-FM neurons, responsive to different harmonic elements of the frequency modulated (FM) sweep in the sonar signal, and H1-CF neurons, responsive to combinations of the bat's first sonar harmonic (H1) and a higher harmonic of the constant frequency (CF) sonar signal. Most combination-sensitive neurons (86%) showed facilitatory interactions. Neurons tuned to frequencies outside the biosonar range also displayed combination-sensitive responses, perhaps related to analyses of social vocalizations. Complex spectral responses were distributed throughout dorsal and ventral divisions of the MGB, forming a major feature of this bat's analysis of complex sounds. The auditory sector of the thalamic reticular nucleus also was dominated by complex spectral responses to sounds. The ventral division was organized tonotopically, based on best frequencies of singly tuned neurons and higher best frequencies of combination-sensitive neurons. Best frequencies were lowest ventrolaterally, increasing dorsally and then ventromedially. However, representations of frequencies associated with higher harmonics of the FM sonar signal were greatly reduced. Frequency organization in the dorsal division was not tonotopic; within the middle one-third of the MGB, combination-sensitive responses to second and third harmonic CF sonar signals (60-63 and 90-94 kHz) occurred in adjacent regions. In the rostral one-third, combination-sensitive responses to second, third, and fourth harmonic FM frequency bands predominated. These FM-FM neurons, thought to be selective for the delay between an emitted pulse and its echo, showed some organization of delay selectivity. The organization of frequency sensitivity in the MGB suggests a major rewiring of the output of the central nucleus of the inferior colliculus, by which collicular neurons tuned to the bat's FM sonar signals mostly project to the dorsal, not the ventral, division. Because physiological differences between collicular and MGB neurons are minor, a major role of the tecto-thalamic projection in the mustached bat may be the reorganization of responses to provide for cortical representations of sonar target features.

  20. Semaphorin6A acts as a gate keeper between the central and the peripheral nervous system

    PubMed Central

    Mauti, Olivier; Domanitskaya, Elena; Andermatt, Irwin; Sadhu, Rejina; Stoeckli, Esther T

    2007-01-01

    Background: During spinal cord development, expression of chicken SEMAPHORIN6A (SEMA6A) is almost exclusively found in the boundary caps at the ventral motor axon exit point and at the dorsal root entry site. The boundary cap cells are derived from a population of late migrating neural crest cells. They form a transient structure at the transition zone between the peripheral nervous system (PNS) and the central nervous system (CNS). Ablation of the boundary cap resulted in emigration of motoneurons from the ventral spinal cord along the ventral roots. Based on its very restricted expression in boundary cap cells, we tested for a role of Sema6A as a gate keeper between the CNS and the PNS.

    Results: Downregulation of Sema6A in boundary cap cells by in ovo RNA interference resulted in motoneurons streaming out of the spinal cord along the ventral roots, and in the failure of dorsal roots to form and segregate properly. PlexinAs interact with class 6 semaphorins and are expressed by both motoneurons and sensory neurons. Knockdown of PlexinA1 reproduced the phenotype seen after loss of Sema6A function both at the ventral motor exit point and at the dorsal root entry site of the lumbosacral spinal cord. Loss of either PlexinA4 or Sema6D function had an effect only at the dorsal root entry site but not at the ventral motor axon exit point.

    Conclusion: Sema6A acts as a gate keeper between the PNS and the CNS both ventrally and dorsally. It is required for the clustering of boundary cap cells at the PNS/CNS interface and, thus, prevents motoneurons from streaming out of the ventral spinal cord. At the dorsal root entry site it organizes the segregation of dorsal roots. PMID:18088409

  1. Spatiotemporal dynamics underlying object completion in human ventral visual cortex.

    PubMed

    Tang, Hanlin; Buia, Calin; Madhavan, Radhika; Crone, Nathan E; Madsen, Joseph R; Anderson, William S; Kreiman, Gabriel

    2014-08-06

    Natural vision often involves recognizing objects from partial information. Recognition of objects from parts presents a significant challenge for theories of vision because it requires spatial integration and extrapolation from prior knowledge. Here we recorded intracranial field potentials of 113 visually selective electrodes from epilepsy patients in response to whole and partial objects. Responses along the ventral visual stream, particularly the inferior occipital and fusiform gyri, remained selective even when only 9%-25% of the object area was shown. However, these visually selective signals emerged ∼100 ms later for partial versus whole objects. These processing delays were particularly pronounced in higher visual areas within the ventral stream. This latency difference persisted when controlling for changes in contrast, signal amplitude, and the strength of selectivity. These results argue against a purely feedforward explanation of recognition from partial information, and provide spatiotemporal constraints on theories of object recognition that involve recurrent processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Differential Tuning of Ventral and Dorsal Streams during the Generation of Common and Uncommon Tool Uses.

    PubMed

    Matheson, Heath E; Buxbaum, Laurel J; Thompson-Schill, Sharon L

    2017-11-01

    Our use of tools is situated in different contexts. Prior evidence suggests that diverse regions within the ventral and dorsal streams represent information supporting common tool use. However, given the flexibility of object concepts, these regions may be tuned to different types of information when generating novel or uncommon uses of tools. To investigate this, we collected fMRI data from participants who reported common or uncommon tool uses in response to visually presented familiar objects. We performed a pattern dissimilarity analysis in which we correlated cortical patterns with behavioral measures of visual, action, and category information. The results showed that evoked cortical patterns within the dorsal tool use network reflected action and visual information to a greater extent in the uncommon use group, whereas evoked neural patterns within the ventral tool use network reflected categorical information more strongly in the common use group. These results reveal the flexibility of cortical representations of tool use and the situated nature of cortical representations more generally.

  3. A right-ear bias of auditory selective attention is evident in alpha oscillations.

    PubMed

    Payne, Lisa; Rogers, Chad S; Wingfield, Arthur; Sekuler, Robert

    2017-04-01

    Auditory selective attention makes it possible to pick out one speech stream that is embedded in a multispeaker environment. We adapted a cued dichotic listening task to examine suppression of a speech stream lateralized to the nonattended ear, and to evaluate the effects of attention on the right ear's well-known advantage in the perception of linguistic stimuli. After being cued to attend to input from either their left or right ear, participants heard two different four-word streams presented simultaneously to the separate ears. Following each dichotic presentation, participants judged whether a spoken probe word had been in the attended ear's stream. We used EEG signals to track participants' spatial lateralization of auditory attention, which is marked by interhemispheric differences in EEG alpha (8-14 Hz) power. A right-ear advantage (REA) was evident in faster response times and greater sensitivity in distinguishing attended from unattended words. Consistent with the REA, we found strongest parietal and right frontotemporal alpha modulation during the attend-right condition. These findings provide evidence for a link between selective attention and the REA during directed dichotic listening. © 2016 Society for Psychophysiological Research.

  4. Attentional Gain Control of Ongoing Cortical Speech Representations in a “Cocktail Party”

    PubMed Central

    Kerlin, Jess R.; Shahin, Antoine J.; Miller, Lee M.

    2010-01-01

    Normal listeners possess the remarkable perceptual ability to select a single speech stream among many competing talkers. However, few studies of selective attention have addressed the unique nature of speech as a temporally extended and complex auditory object. We hypothesized that sustained selective attention to speech in a multi-talker environment would act as gain control on the early auditory cortical representations of speech. Using high-density electroencephalography and a template-matching analysis method, we found selective gain to the continuous speech content of an attended talker, greatest at a frequency of 4–8 Hz, in auditory cortex. In addition, the difference in alpha power (8–12 Hz) at parietal sites across hemispheres indicated the direction of auditory attention to speech, as has been previously found in visual tasks. The strength of this hemispheric alpha lateralization, in turn, predicted an individual’s attentional gain of the cortical speech signal. These results support a model of spatial speech stream segregation, mediated by a supramodal attention mechanism, enabling selection of the attended representation in auditory cortex. PMID:20071526
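
    The hemispheric alpha-lateralization measure described in this record can be illustrated with a toy computation (an illustrative sketch, not the authors' analysis pipeline; the sampling rate, channel data, and exact band edges below are assumptions for demonstration):

```python
import numpy as np

def band_power(signal, fs, lo=8.0, hi=12.0):
    """Mean periodogram power of `signal` in the [lo, hi] Hz (alpha) band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def alpha_lateralization_index(left_chan, right_chan, fs):
    """(right - left) / (right + left) alpha power across hemispheres.
    Positive values indicate more alpha power over the right channel."""
    p_left = band_power(left_chan, fs)
    p_right = band_power(right_chan, fs)
    return (p_right - p_left) / (p_right + p_left)

# Toy demo: a 10 Hz (alpha) oscillation that is stronger on the right channel.
fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
left = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
right = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
ali = alpha_lateralization_index(left, right, fs)
print(round(ali, 2))  # positive: alpha power lateralized toward the right channel
```

    In the study, an index of this general form (computed over parietal electrode pairs) predicted each individual's attentional gain of the cortical speech signal.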

  5. Neural Integration in Body Perception.

    PubMed

    Ramsey, Richard

    2018-06-19

    The perception of other people is instrumental in guiding social interactions. For example, the appearance of the human body cues a wide range of inferences regarding sex, age, health, and personality, as well as emotional state and intentions, which influence social behavior. To date, most neuroscience research on body perception has aimed to characterize the functional contribution of segregated patches of cortex in the ventral visual stream. In light of the growing prominence of network architectures in neuroscience, the current article reviews neuroimaging studies that measure functional integration between different brain regions during body perception. The review demonstrates that body perception is not restricted to processing in the ventral visual stream but instead reflects a functional alliance between the ventral visual stream and extended neural systems associated with action perception, executive functions, and theory of mind. Overall, these findings demonstrate how body percepts are constructed through interactions in distributed brain networks and underscore that functional segregation and integration should be considered together when formulating neurocognitive theories of body perception. Insight from such an updated model of body perception generalizes to inform the organizational structure of social perception and cognition more generally and also informs disorders of body image, such as anorexia nervosa, which may rely on atypical integration of body-related information.

  6. Concurrent visuomotor behaviour improves form discrimination in a patient with visual form agnosia.

    PubMed

    Schenk, Thomas; Milner, A David

    2006-09-01

    It is now well established that the visual brain is divided into two visual streams, the ventral and the dorsal stream. Milner and Goodale have suggested that the ventral stream is dedicated for processing vision for perception and the dorsal stream vision for action [A.D. Milner & M.A. Goodale (1995) The Visual Brain in Action, Oxford University Press, Oxford]. However, it is possible that ongoing processes in the visuomotor stream will nevertheless have an effect on perceptual processes. This possibility was examined in the present study. We have examined the visual form-discrimination performance of the form-agnosic patient D.F. with and without a concurrent visuomotor task, and found that her performance was significantly improved in the former condition. This suggests that the visuomotor behaviour provides cues that enhance her ability to recognize the form of the target object. In control experiments we have ruled out proprioceptive and efferent cues, and therefore propose that D.F. can, to a significant degree, access the object's visuomotor representation in the dorsal stream. Moreover, we show that the grasping-induced perceptual improvement disappears if the target objects only differ with respect to their shape but not their width. This suggests that shape information per se is not used for this grasping task.

  7. Mapping the cortical representation of speech sounds in a syllable repetition task.

    PubMed

    Markiewicz, Christopher J; Bohland, Jason W

    2016-11-01

    Speech repetition relies on a series of distributed cortical representations and functional pathways. A speaker must map auditory representations of incoming sounds onto learned speech items, maintain an accurate representation of those items in short-term memory, interface that representation with the motor output system, and fluently articulate the target sequence. A "dorsal stream" consisting of posterior temporal, inferior parietal and premotor regions is thought to mediate auditory-motor representations and transformations, but the nature and activation of these representations for different portions of speech repetition tasks remain unclear. Here we mapped the correlates of phonetic and/or phonological information related to the specific phonemes and syllables that were heard, remembered, and produced using a series of cortical searchlight multi-voxel pattern analyses trained on estimates of BOLD responses from individual trials. Based on responses linked to input events (auditory syllable presentation), predictive vowel-level information was found in the left inferior frontal sulcus, while syllable prediction revealed significant clusters in the left ventral premotor cortex and central sulcus and the left mid superior temporal sulcus. Responses linked to output events (the GO signal cueing overt production) revealed strong clusters of vowel-related information bilaterally in the mid to posterior superior temporal sulcus. For the prediction of onset and coda consonants, input-linked responses yielded distributed clusters in the superior temporal cortices, which were further informative for classifiers trained on output-linked responses. Output-linked responses in the Rolandic cortex made strong predictions for the syllables and consonants produced, but their predictive power was reduced for vowels. The results of this study provide a systematic survey of how cortical response patterns covary with the identity of speech sounds, which will help to constrain and guide theoretical models of speech perception, speech production, and phonological working memory. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Switching auditory attention using spatial and non-spatial features recruits different cortical networks.

    PubMed

    Larson, Eric; Lee, Adrian K C

    2014-01-01

    Switching attention between different stimuli of interest based on particular task demands is important in many everyday settings. In audition in particular, switching attention between different speakers of interest that are talking concurrently is often necessary for effective communication. Recently, it has been shown by multiple studies that auditory selective attention suppresses the representation of unwanted streams in auditory cortical areas in favor of the target stream of interest. However, the neural processing that guides this selective attention process is not well understood. Here we investigated the cortical mechanisms involved in switching attention based on two different types of auditory features. By combining magneto- and electro-encephalography (M-EEG) with an anatomical MRI constraint, we examined the cortical dynamics involved in switching auditory attention based on either spatial or pitch features. We designed a paradigm where listeners were cued in the beginning of each trial to switch or maintain attention halfway through the presentation of concurrent target and masker streams. By allowing listeners time to switch during a gap in the continuous target and masker stimuli, we were able to isolate the mechanisms involved in endogenous, top-down attention switching. Our results show a double dissociation between the involvement of right temporoparietal junction (RTPJ) and the left inferior parietal supramarginal part (LIPSP) in tasks requiring listeners to switch attention based on space and pitch features, respectively, suggesting that switching attention based on these features involves at least partially separate processes or behavioral strategies. © 2013 Elsevier Inc. All rights reserved.

  9. Perception of Shapes Targeting Local and Global Processes in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Grinter, Emma J.; Maybery, Murray T.; Pellicano, Elizabeth; Badcock, Johanna C.; Badcock, David R.

    2010-01-01

    Background: Several researchers have found evidence for impaired global processing in the dorsal visual stream in individuals with autism spectrum disorders (ASDs). However, support for a similar pattern of visual processing in the ventral visual stream is less consistent. Critical to resolving the inconsistency is the assessment of local and…

  10. Working Memory Impairment in People with Williams Syndrome: Effects of Delay, Task and Stimuli

    ERIC Educational Resources Information Center

    O'Hearn, Kirsten; Courtney, Susan; Street, Whitney; Landau, Barbara

    2009-01-01

    Williams syndrome (WS) is a neurodevelopmental disorder associated with impaired visuospatial representations subserved by the dorsal stream and relatively strong object recognition abilities subserved by the ventral stream. There is conflicting evidence on whether this uneven pattern in WS extends to working memory (WM). The present studies…

  11. Pattern Specificity in the Effect of Prior Δf on Auditory Stream Segregation

    ERIC Educational Resources Information Center

    Snyder, Joel S.; Weintraub, David M.

    2011-01-01

    During repeating sequences of low (A) and high (B) tones, perception of two separate streams ("streaming") increases with greater frequency separation (Δf) between the A and B tones; in contrast, a prior context with large Δf results in less streaming during a subsequent test pattern. The purpose of the present study was to…

  12. Modulations of neural activity in auditory streaming caused by spectral and temporal alternation in subsequent stimuli: a magnetoencephalographic study.

    PubMed

    Chakalov, Ivan; Draganova, Rossitza; Wollbrink, Andreas; Preissl, Hubert; Pantev, Christo

    2012-06-20

    The aim of the present study was to identify a specific neuronal correlate underlying the pre-attentive auditory stream segregation of subsequent sound patterns alternating in spectral or temporal cues. Fifteen participants with normal hearing were presented with series of two consecutive ABA auditory tone-triplet sequences, the initial triplets being the Adaptation sequence and the subsequent triplets being the Test sequence. In the first experiment, the frequency separation (delta-f) between A and B tones in the sequences was varied by 2, 4 and 10 semitones. In the second experiment, a constant delta-f of 6 semitones was maintained but the Inter-Stimulus Intervals (ISIs) between A and B tones were varied. Auditory evoked magnetic fields (AEFs) were recorded using magnetoencephalography (MEG). Participants watched a muted video of their choice and ignored the auditory stimuli. In a subsequent behavioral study both MEG experiments were replicated to provide information about the participants' perceptual state. MEG measurements showed a significant increase in the amplitude of the B-tone-related P1 component of the AEFs as delta-f increased. This effect was seen predominantly in the left hemisphere. A significant increase in the amplitude of the N1 component was only obtained for a Test sequence delta-f of 10 semitones with a prior Adaptation sequence of 2 semitones. This effect was more pronounced in the right hemisphere. The additional behavioral data indicated an increased probability of two-stream perception for delta-f = 4 and delta-f = 10 semitones with a preceding Adaptation sequence of 2 semitones. However, neither the neural activity nor the perception of the successive streaming sequences was modulated when the ISIs were alternated. Our MEG experiment demonstrated differences in the behavior of the P1 and N1 components during the automatic segregation of sounds when induced by an initial Adaptation sequence. The P1 component appeared enhanced in all Test conditions, demonstrating the preceding context effect, whereas N1 was specifically modulated only by large delta-f Test sequences preceded by a small delta-f Adaptation sequence. These results suggest that the P1 and N1 components represent at least partially different systems that underlie the neural representation of auditory streaming.

  13. Syntactic processing in music and language: Effects of interrupting auditory streams with alternating timbres.

    PubMed

    Fiveash, Anna; Thompson, William Forde; Badcock, Nicholas A; McArthur, Genevieve

    2018-07-01

    Music and language both rely on the processing of spectral (pitch, timbre) and temporal (rhythm) information to create structure and meaning from incoming auditory streams. Behavioral results have shown that interrupting a melodic stream with unexpected changes in timbre leads to reduced syntactic processing. Such findings suggest that syntactic processing is conditional on successful streaming of incoming sequential information. The current study used event-related potentials (ERPs) to investigate whether (1) the effect of alternating timbres on syntactic processing is reflected in a reduced brain response to syntactic violations, and (2) the phenomenon is similar for music and language. Participants listened to melodies and sentences with either one timbre (piano or one voice) or three timbres (piano, guitar, and vibraphone, or three different voices). Half the stimuli contained syntactic violations: an out-of-key note in the melodies, and a phrase-structure violation in the sentences. We found smaller ERPs to syntactic violations in music in the three-timbre compared to the one-timbre condition, reflected in a reduced early right anterior negativity (ERAN). A similar but non-significant pattern was observed for language stimuli in both the early left anterior negativity (ELAN) and the left anterior negativity (LAN) ERPs. The results suggest that disruptions to auditory streaming may interfere with syntactic processing, especially for melodic sequences. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. A role for descending auditory cortical projections in songbird vocal learning

    PubMed Central

    Mandelblat-Cerf, Yael; Las, Liora; Denisenko, Natalia; Fee, Michale S

    2014-01-01

    Many learned motor behaviors are acquired by comparing ongoing behavior with an internal representation of correct performance, rather than using an explicit external reward. For example, juvenile songbirds learn to sing by comparing their song with the memory of a tutor song. At present, the brain regions subserving song evaluation are not known. In this study, we report several findings suggesting that song evaluation involves an avian 'cortical' area previously shown to project to the dopaminergic midbrain and other downstream targets. We find that this ventral portion of the intermediate arcopallium (AIV) receives inputs from auditory cortical areas, and that lesions of AIV result in significant deficits in vocal learning. Additionally, AIV neurons exhibit fast responses to disruptive auditory feedback presented during singing, but not during nonsinging periods. Our findings suggest that auditory cortical areas may guide learning by transmitting song evaluation signals to the dopaminergic midbrain and/or other subcortical targets. DOI: http://dx.doi.org/10.7554/eLife.02152.001 PMID:24935934

  15. The midline metathoracic ear of the praying mantis, Mantis religiosa.

    PubMed

    Yager, D D; Hoy, R R

    1987-12-01

    The praying mantis, Mantis religiosa, is unique in possessing a single, tympanal auditory organ located in the ventral midline of its body between the metathoracic coxae. The ear is in a deep groove and consists of two tympana facing each other and backed by large air sacs. Neural transduction takes place in a structure at the anterior end of the groove. This tympanal organ contains 32 chordotonal sensilla organized into three groups, two of which are 180 degrees out of line with the one attaching directly to the tympanum. Innervation is provided by Nerve root 7 from the metathoracic ganglion. Cobalt backfills show that the auditory neuropile is a series of finger-like projections terminating ipsilaterally near the midline, primarily near DC III and SMC. The auditory neuropile thus differs from the pattern common to all other insects previously studied.

  16. Taking Attention Away from the Auditory Modality: Context-dependent Effects on Early Sensory Encoding of Speech.

    PubMed

    Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath

    2018-05-24

    Increasing visual perceptual load can reduce pre-attentive auditory cortical activity to sounds, a reflection of the limited and shared attentional resources for sensory processing across modalities. Here, we demonstrate that modulating visual perceptual load can impact the early sensory encoding of speech sounds, and that the impact of visual load is highly dependent on the predictability of the incoming speech stream. Participants (n = 20, 9 females) performed a visual search task of high (target similar to distractors) and low (target dissimilar to distractors) perceptual load, while early auditory electrophysiological responses were recorded to native speech sounds. Speech sounds were presented either in a 'repetitive context', or a less predictable 'variable context'. Independent of auditory stimulus context, pre-attentive auditory cortical activity was reduced during high visual load, relative to low visual load. We applied a data-driven machine learning approach to decode speech sounds from the early auditory electrophysiological responses. Decoding performance was found to be poorer under conditions of high (relative to low) visual load, when the incoming acoustic stream was predictable. When the auditory stimulus context was less predictable, decoding performance was substantially greater for the high (relative to low) visual load conditions. Our results provide support for shared attentional resources between visual and auditory modalities that substantially influence the early sensory encoding of speech signals in a context-dependent manner. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  17. Sustained selective attention to competing amplitude-modulations in human auditory cortex.

    PubMed

    Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander

    2014-01-01

    Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control.

  19. Finding and recognizing objects in natural scenes: complementary computations in the dorsal and ventral visual systems

    PubMed Central

    Rolls, Edmund T.; Webb, Tristan J.

    2014-01-01

    Searching for and recognizing objects in complex natural scenes is implemented by multiple saccades until the eyes reach within the reduced receptive field sizes of inferior temporal cortex (IT) neurons. We analyze and model how the dorsal and ventral visual streams both contribute to this. Saliency detection in the dorsal visual system including area LIP is modeled by graph-based visual saliency, and allows the eyes to fixate potential objects within several degrees. Visual information at the fixated location subtending approximately 9° corresponding to the receptive fields of IT neurons is then passed through a four layer hierarchical model of the ventral cortical visual system, VisNet. We show that VisNet can be trained using a synaptic modification rule with a short-term memory trace of recent neuronal activity to capture both the required view and translation invariances, allowing the model to achieve approximately 90% correct object recognition for 4 objects shown in any view across a range of 135° anywhere in a scene. The model was able to generalize correctly within the four trained views and the 25 trained translations. This approach analyzes the principles by which complementary computations in the dorsal and ventral visual cortical streams enable objects to be located and recognized in complex natural scenes. PMID:25161619
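VisNet's "synaptic modification rule with a short-term memory trace" is usually given as a trace rule, with the weight update proportional to a decaying average of recent post-synaptic activity rather than the instantaneous output, which binds temporally adjacent views of an object onto the same neuron. A toy single-neuron sketch under that assumption (the constants, the toy "views", and the normalization step are all illustrative):

```python
import numpy as np

def trace_learning(inputs, alpha=0.1, eta=0.8):
    """One pass of a trace rule over a temporal sequence of input vectors.

    w     : weight vector of a single model neuron
    y     : its instantaneous (linear) output
    y_bar : exponentially decaying trace of recent activity
    """
    n = inputs.shape[1]
    w = np.full(n, 1.0 / n)
    y_bar = 0.0
    for x in inputs:                      # successive transformed views
        y = float(w @ x)                  # post-synaptic activation
        y_bar = (1 - eta) * y + eta * y_bar
        w = w + alpha * y_bar * x         # trace-modulated Hebbian update
        w /= np.linalg.norm(w)            # keep weights bounded
    return w

# Toy sequence: five noisy nonnegative "views" sharing a common pattern,
# standing in for transformed views of one object.
rng = np.random.default_rng(2)
pattern = rng.uniform(0.0, 1.0, 20)
views = pattern + rng.normal(0, 0.2, (5, 20))
w = trace_learning(views)
cos = float(w @ pattern / np.linalg.norm(pattern))
print(f"alignment with the shared pattern: {cos:.2f}")
```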

  20. Temporal Structure and Complexity Affect Audio-Visual Correspondence Detection

    PubMed Central

    Denison, Rachel N.; Driver, Jon; Ruff, Christian C.

    2013-01-01

    Synchrony between events in different senses has long been considered the critical temporal cue for multisensory integration. Here, using rapid streams of auditory and visual events, we demonstrate how humans can use temporal structure (rather than mere temporal coincidence) to detect multisensory relatedness. We find psychophysically that participants can detect matching auditory and visual streams via shared temporal structure for crossmodal lags of up to 200 ms. Performance on this task reproduced features of past findings based on explicit timing judgments but did not show any special advantage for perfectly synchronous streams. Importantly, the complexity of temporal patterns influences sensitivity to correspondence. Stochastic, irregular streams – with richer temporal pattern information – led to higher audio-visual matching sensitivity than predictable, rhythmic streams. Our results reveal that temporal structure and its complexity are key determinants for human detection of audio-visual correspondence. The distinctive emphasis of our new paradigms on temporal patterning could be useful for studying special populations with suspected abnormalities in audio-visual temporal perception and multisensory integration. PMID:23346067
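The advantage of irregular streams can be intuited with a toy cross-correlation sketch: a stochastic event stream identifies its own lagged copy at a unique lag, whereas a rhythmic stream matches equally well at aliases of the true lag. The sampling grid, event rate, and lag below are arbitrary choices, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000                                   # samples, e.g. 1 s at 1 kHz (arbitrary)

def lagged_copy(stream, lag):
    """'Visual' stream: the 'auditory' stream delayed by `lag` samples."""
    out = np.zeros_like(stream)
    out[lag:] = stream[:-lag]
    return out

def best_lag(a, v, max_lag=300):
    """Lag maximizing the cross-correlation between the two event streams."""
    m = len(a)
    scores = [float(np.sum(a[:m - lag] * v[lag:])) for lag in range(max_lag)]
    return int(np.argmax(scores))

# Stochastic (irregular) vs rhythmic (one event every 50 samples) streams.
stoch = (rng.random(n) < 0.05).astype(float)
rhythm = np.zeros(n)
rhythm[::50] = 1.0

lag = 120
est_stoch = best_lag(stoch, lagged_copy(stoch, lag))
est_rhythm = best_lag(rhythm, lagged_copy(rhythm, lag))
print(est_stoch)    # recovers the true 120-sample lag
print(est_rhythm)   # ambiguous: periodic streams match at aliases of the lag
```

The richer temporal pattern of the stochastic stream is exactly what makes the correspondence unambiguous, mirroring the higher matching sensitivity reported above.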

  1. Emergence of neural encoding of auditory objects while listening to competing speakers

    PubMed Central

    Ding, Nai; Simon, Jonathan Z.

    2012-01-01

    A visual scene is perceived in terms of visual objects. Similar ideas have been proposed for the analogous case of auditory scene analysis, although their hypothesized neural underpinnings have not yet been established. Here, we address this question by recording from subjects selectively listening to one of two competing speakers, either of different or the same sex, using magnetoencephalography. Individual neural representations are seen for the speech of the two speakers, with each being selectively phase locked to the rhythm of the corresponding speech stream and from which can be exclusively reconstructed the temporal envelope of that speech stream. The neural representation of the attended speech dominates responses (with latency near 100 ms) in posterior auditory cortex. Furthermore, when the intensity of the attended and background speakers is separately varied over an 8-dB range, the neural representation of the attended speech adapts only to the intensity of that speaker but not to the intensity of the background speaker, suggesting an object-level intensity gain control. In summary, these results indicate that concurrent auditory objects, even if spectrotemporally overlapping and not resolvable at the auditory periphery, are neurally encoded individually in auditory cortex and emerge as fundamental representational units for top-down attentional modulation and bottom-up neural adaptation. PMID:22753470

  2. Auditory Scene Analysis: An Attention Perspective

    PubMed Central

    2017-01-01

    Purpose This review article provides a new perspective on the role of attention in auditory scene analysis. Method A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal hearing are summarized to demonstrate attention effects on auditory perception—from passive processes that organize unattended input to attention effects that act at different levels of the system. Data will show that attention can sharpen stream organization toward behavioral goals, identify auditory events obscured by noise, and limit passive processing capacity. Conclusions A model of attention is provided that illustrates how the auditory system performs multilevel analyses that involve interactions between stimulus-driven input and top-down processes. Overall, these studies show that (a) stream segregation occurs automatically and sets the basis for auditory event formation; (b) attention interacts with automatic processing to facilitate task goals; and (c) information about unattended sounds is not lost when selecting one organization over another. Our results support a neural model that allows multiple sound organizations to be held in memory and accessed simultaneously through a balance of automatic and task-specific processes, allowing flexibility for navigating noisy environments with competing sound sources. Presentation Video http://cred.pubs.asha.org/article.aspx?articleid=2601618 PMID:29049599

  3. Temporal properties of responses to sound in the ventral nucleus of the lateral lemniscus.

    PubMed

    Recio-Spinoso, Alberto; Joris, Philip X

    2014-02-01

    Besides the rapid fluctuations in pressure that constitute the "fine structure" of a sound stimulus, slower fluctuations in the sound's envelope represent an important temporal feature. At various stages in the auditory system, neurons exhibit tuning to envelope frequency and have been described as modulation filters. We examine such tuning in the ventral nucleus of the lateral lemniscus (VNLL) of the pentobarbital-anesthetized cat. The VNLL is a large but poorly accessible auditory structure that provides a massive inhibitory input to the inferior colliculus. We test whether envelope filtering effectively applies to the envelope spectrum when multiple envelope components are simultaneously present. We find two broad classes of response with often complementary properties. The firing rate of onset neurons is tuned to a band of modulation frequencies, over which they also synchronize strongly to the envelope waveform. Although most sustained neurons show little firing rate dependence on modulation frequency, some of them are weakly tuned. The latter neurons are usually band-pass or low-pass tuned in synchronization, and a reverse-correlation approach demonstrates that their modulation tuning is preserved to nonperiodic, noisy envelope modulations of a tonal carrier. Modulation tuning to this type of stimulus is weaker for onset neurons. In response to broadband noise, sustained and onset neurons tend to filter out envelope components over a frequency range consistent with their modulation tuning to periodically modulated tones. The results support a role for VNLL in providing temporal reference signals to the auditory midbrain.

  4. The anatomy of object recognition--visual form agnosia caused by medial occipitotemporal stroke.

    PubMed

    Karnath, Hans-Otto; Rüter, Johannes; Mandler, André; Himmelbach, Marc

    2009-05-06

    The influential model on visual information processing by Milner and Goodale (1995) has suggested a dissociation between action- and perception-related processing in a dorsal versus ventral stream projection. It was inspired substantially by the observation of a double dissociation of disturbed visual action versus perception in patients with optic ataxia on the one hand and patients with visual form agnosia (VFA) on the other. Unfortunately, almost all cases with VFA reported so far suffered from inhalational intoxication, the majority with carbon monoxide (CO). Since CO induces a diffuse and widespread pattern of neuronal and white matter damage throughout the whole brain, precise conclusions from these patients with VFA on the selective role of ventral stream structures for shape and orientation perception were difficult to draw. Here, we report patient J.S., who demonstrated VFA after a well circumscribed brain lesion due to stroke etiology. Like the famous patient D.F. with VFA after CO intoxication studied by Milner, Goodale, and coworkers (Goodale et al., 1991, 1994; Milner et al., 1991; Servos et al., 1995; Mon-Williams et al., 2001a,b; Wann et al., 2001; Westwood et al., 2002; McIntosh et al., 2004; Schenk and Milner, 2006), J.S. showed an obvious dissociation between disturbed visual perception of shape and orientation information on the one side and preserved visuomotor abilities based on the same information on the other. In both hemispheres, damage primarily affected the fusiform and the lingual gyri as well as the adjacent posterior cingulate gyrus. We conclude that these medial structures of the ventral occipitotemporal cortex are integral to the normal flow of shape and contour information into the ventral stream system, allowing objects to be recognized.

  5. Structural and functional integration between dorsal and ventral language streams as revealed by blunt dissection and direct electrical stimulation.

    PubMed

    Sarubbo, Silvio; De Benedictis, Alessandro; Merler, Stefano; Mandonnet, Emmanuel; Barbareschi, Mattia; Dallabona, Monica; Chioffi, Franco; Duffau, Hugues

    2016-11-01

    The most accepted framework of language processing includes a dorsal phonological and a ventral semantic pathway, connecting a wide network of distributed cortical hubs. However, the cortico-subcortical connectivity and the reciprocal anatomical relationships of this dual-stream system are not completely clarified. We performed an original blunt microdissection of 10 hemispheres with exposure of the locoregional short fibers and six long-range fascicles involved in language elaboration. Special attention was addressed to the analysis of termination sites and anatomical relationships between long- and short-range fascicles. We correlated these anatomical findings with a topographical analysis of 93 functional responses located at the terminal sites of the language bundles, collected by direct electrical stimulation in 108 right-handers. The locations of phonological and semantic paraphasias, verbal apraxia, speech arrest, pure anomia, and alexia were statistically analyzed, and the respective barycenters were computed in the MNI space. We found that terminations of the main language bundles and functional responses have a wider distribution with respect to the classical definition of language territories. Our analysis showed that the dorsal and ventral streams have a similar anatomical layer organization. These pathways are parallel and relatively segregated over their subcortical course, while their terminal fibers are strictly overlapped at the cortical level. Finally, the anatomical features of the U-fibers suggested a role of locoregional integration between the phonological, semantic, and executive subnetworks of language, in particular within the inferoventral frontal lobe and the temporoparietal junction, which proved to be the main criss-cross regions between the dorsal and ventral pathways. Hum Brain Mapp 37:3858-3872, 2016.

  6. The Central Role of Recognition in Auditory Perception: A Neurobiological Model

    ERIC Educational Resources Information Center

    McLachlan, Neil; Wilson, Sarah

    2010-01-01

    The model presents neurobiologically plausible accounts of sound recognition (including absolute pitch), neural plasticity involved in pitch, loudness and location information integration, and streaming and auditory recall. It is proposed that a cortical mechanism for sound identification modulates the spectrotemporal response fields of inferior…

  7. Sparing of Sensitivity to Biological Motion but Not of Global Motion after Early Visual Deprivation

    ERIC Educational Resources Information Center

    Hadad, Bat-Sheva; Maurer, Daphne; Lewis, Terri L.

    2012-01-01

    Patients deprived of visual experience during infancy by dense bilateral congenital cataracts later show marked deficits in the perception of global motion (dorsal visual stream) and global form (ventral visual stream). We expected that they would also show marked deficits in sensitivity to biological motion, which is normally processed in the…

  8. 'What' and 'where' in the human brain.

    PubMed

    Ungerleider, L G; Haxby, J V

    1994-04-01

    Multiple visual areas in the cortex of nonhuman primates are organized into two hierarchically organized and functionally specialized processing pathways, a 'ventral stream' for object vision and a 'dorsal stream' for spatial vision. Recent findings from positron emission tomography activation studies have localized these pathways within the human brain, yielding insights into cortical hierarchies, specialization of function, and attentional mechanisms.

  9. Increased ventral-striatal activity during monetary decision making is a marker of problem poker gambling severity.

    PubMed

    Brevers, Damien; Noël, Xavier; He, Qinghua; Melrose, James A; Bechara, Antoine

    2016-05-01

    The aim of this study was to examine the impact of different neural systems on monetary decision making in frequent poker gamblers, who vary in their degree of problem gambling. Fifteen frequent poker players, ranging from non-problem to high-problem gambling, and 15 non-gambler controls were scanned using functional magnetic resonance imaging (fMRI) while performing the Iowa Gambling Task (IGT). During IGT deck selection, between-group fMRI analyses showed that frequent poker gamblers exhibited higher ventral-striatal but lower dorsolateral prefrontal and orbitofrontal activations as compared with controls. Moreover, using functional connectivity analyses, we observed higher ventral-striatal connectivity in poker players, and in regions involved in attentional/motor control (posterior cingulate), visual (occipital gyrus) and auditory (temporal gyrus) processing. In poker gamblers, scores of problem gambling severity were positively associated with ventral-striatal activations and with the connectivity between the ventral-striatum seed and the occipital fusiform gyrus and the middle temporal gyrus. Present results are consistent with findings from recent brain imaging studies showing that gambling disorder is associated with heightened motivational-reward processes during monetary decision making, which may hamper the ability to moderate levels of monetary risk taking.

  10. Auditory motion-specific mechanisms in the primate brain

    PubMed Central

    Baumann, Simon; Dheerendra, Pradeep; Joly, Olivier; Hunter, David; Balezeau, Fabien; Sun, Li; Rees, Adrian; Petkov, Christopher I.; Thiele, Alexander; Griffiths, Timothy D.

    2017-01-01

    This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream. PMID:28472038

  11. The Extrastriate Body Area Computes Desired Goal States during Action Planning

    PubMed Central

    2016-01-01

    Abstract How do object perception and action interact at a neural level? Here we test the hypothesis that perceptual features, processed by the ventral visuoperceptual stream, are used as priors by the dorsal visuomotor stream to specify goal-directed grasping actions. We present three main findings, which were obtained by combining time-resolved transcranial magnetic stimulation and kinematic tracking of grasp-and-rotate object manipulations, in a group of healthy human participants (N = 22). First, the extrastriate body area (EBA), in the ventral stream, provides an initial structure to motor plans, based on current and desired states of a grasped object and of the grasping hand. Second, the contributions of EBA are earlier in time than those of a caudal intraparietal region known to specify the action plan. Third, the contributions of EBA are particularly important when desired and current object configurations differ, and multiple courses of actions are possible. These findings specify the temporal and functional characteristics for a mechanism that integrates perceptual processing with motor planning. PMID:27066535

  12. Attention Effects on Neural Population Representations for Shape and Location Are Stronger in the Ventral than Dorsal Stream

    PubMed Central

    2018-01-01

    Abstract We examined how attention causes neural population representations of shape and location to change in ventral stream (AIT) and dorsal stream (LIP). Monkeys performed two identical delayed-match-to-sample (DMTS) tasks, attending either to shape or location. In AIT, shapes were more discriminable when directing attention to shape rather than location, measured by an increase in mean distance between population response vectors. In LIP, attending to location rather than shape did not increase the discriminability of different stimulus locations. Even when factoring out the change in mean vector response distance, multidimensional scaling (MDS) still showed a significant task difference in AIT, but not LIP, indicating that beyond increasing discriminability, attention also causes a nonlinear warping of representation space in AIT. Despite single-cell attentional modulations in both areas, our data show that attentional modulations of population representations are weaker in LIP, likely due to a need to maintain veridical representations for visuomotor control. PMID:29876521

  13. Balanced increases in selectivity and tolerance produce constant sparseness along the ventral visual stream

    PubMed Central

    Rust, Nicole C.; DiCarlo, James J.

    2012-01-01

    While popular accounts suggest that neurons along the ventral visual processing stream become increasingly selective for particular objects, this appears at odds with the fact that inferior temporal cortical (IT) neurons are broadly tuned. To explore this apparent contradiction, we compared processing in two ventral stream stages (V4 and IT) in the rhesus macaque monkey. We confirmed that IT neurons are indeed more selective for conjunctions of visual features than V4 neurons, and that this increase in feature conjunction selectivity is accompanied by an increase in tolerance (“invariance”) to identity-preserving transformations (e.g. shifting, scaling) of those features. We report here that V4 and IT neurons are, on average, tightly matched in their tuning breadth for natural images (“sparseness”), and that the average V4 or IT neuron will produce a robust firing rate response (over 50% of its peak observed firing rate) to ~10% of all natural images. We also observed that sparseness was positively correlated with conjunction selectivity and negatively correlated with tolerance within both V4 and IT, consistent with selectivity-building and invariance-building computations that offset one another to produce sparseness. Our results imply that the conjunction-selectivity-building and invariance-building computations necessary to support object recognition are implemented in a balanced fashion to maintain sparseness at each stage of processing. PMID:22836252
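The tuning-breadth ("sparseness") statistic referred to in this abstract is commonly quantified with the Rolls-Treves measure; assuming that form, a minimal sketch of how it separates one-hot from fully distributed response profiles:

```python
import numpy as np

def sparseness(rates):
    """Rolls-Treves sparseness of a response profile across images:
    a = (sum(r)/n)^2 / (sum(r^2)/n).  a -> 1/n for one-hot tuning,
    a -> 1 for a uniform (fully distributed) profile."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    return float((r.sum() / n) ** 2 / (np.sum(r ** 2) / n))

n_images = 100
one_hot = np.zeros(n_images)
one_hot[0] = 50.0                          # fires to exactly 1 of 100 images
uniform = np.full(n_images, 12.0)          # fires equally to all images
print(sparseness(one_hot))                 # 0.01  (= 1/n)
print(sparseness(uniform))                 # 1.0
```

The abstract's finding is that this quantity stays roughly constant from V4 to IT, even as conjunction selectivity and tolerance each increase.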

  14. Cognitive And Neural Sciences Division 1992 Programs

    DTIC Science & Technology

    1992-08-01

    Thalamic short-term plasticity in the auditory system: Associative retuning of receptive fields in the ventral medial geniculate body. Behavioral ... prediction and enhancement of human performance in training and operational environments. A second goal is to understand the neurobiological constraints and ... such complex, structured bodies of knowledge and skill are acquired. Fourth, to provide a precise theory of instruction, founded on cognitive theory

  15. Auditory brainstem responses of CBA/J mice with neonatal conductive hearing losses and treatment with GM1 ganglioside.

    PubMed

    Money, M K; Pippin, G W; Weaver, K E; Kirsch, J P; Webster, D B

    1995-07-01

    Exogenous administration of GM1 ganglioside to CBA/J mice with a neonatal conductive hearing loss ameliorates the atrophy of spiral ganglion neurons, ventral cochlear nucleus neurons, and ventral cochlear nucleus volume. The present investigation demonstrates the extent of a conductive loss caused by atresia and tests the hypothesis that GM1 ganglioside treatment will ameliorate the conductive hearing loss. Auditory brainstem responses were recorded from four groups of seven mice each: two groups received daily subcutaneous injections of saline (one group had normal hearing; the other had a conductive hearing loss); the other two groups received daily subcutaneous injections of GM1 ganglioside (one group had normal hearing; the other had a conductive hearing loss). In mice with a conductive loss, decreases in hearing sensitivity were greatest at high frequencies. The decreases were determined by comparing mean ABR thresholds of the conductive loss mice with those of normal hearing mice. The conductive hearing loss induced in the mice in this study was similar to that seen in humans with congenital aural atresias. GM1 ganglioside treatment had no significant effect on ABR wave I thresholds or latencies in either group.

  16. Hox2 Genes Are Required for Tonotopic Map Precision and Sound Discrimination in the Mouse Auditory Brainstem.

    PubMed

    Karmakar, Kajari; Narita, Yuichi; Fadok, Jonathan; Ducret, Sebastien; Loche, Alberto; Kitazawa, Taro; Genoud, Christel; Di Meglio, Thomas; Thierry, Raphael; Bacelo, Joao; Lüthi, Andreas; Rijli, Filippo M

    2017-01-03

    Tonotopy is a hallmark of auditory pathways and provides the basis for sound discrimination. Little is known about the involvement of transcription factors in brainstem cochlear neurons orchestrating the tonotopic precision of pre-synaptic input. We found that in the absence of Hoxa2 and Hoxb2 function in Atoh1-derived glutamatergic bushy cells of the anterior ventral cochlear nucleus, broad input topography and sound transmission were largely preserved. However, fine-scale synaptic refinement and sharpening of isofrequency bands of cochlear neuron activation upon pure tone stimulation were impaired in Hox2 mutants, resulting in defective sound-frequency discrimination in behavioral tests. These results establish a role for Hox factors in tonotopic refinement of connectivity and in ensuring the precision of sound transmission in the mammalian auditory circuit.

  17. Reality of auditory verbal hallucinations.

    PubMed

    Raij, Tuukka T; Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-11-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis, but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including Broca's language region. Furthermore, how real the hallucinations felt to the subjects depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency.

  19. Competing streams at the cocktail party: Exploring the mechanisms of attention and temporal integration

    PubMed Central

    Xiang, Juanjuan; Simon, Jonathan; Elhilali, Mounya

    2010-01-01

    Processing of complex acoustic scenes depends critically on the temporal integration of sensory information as sounds evolve naturally over time. It has been previously speculated that this process is guided both by innate mechanisms of temporal processing in the auditory system and by top-down mechanisms of attention, and possibly other schema-based processes. In an effort to unravel the neural underpinnings of these processes and their role in scene analysis, we combine magnetoencephalography (MEG) with behavioral measures in humans in the context of polyrhythmic tone sequences. While maintaining unchanged sensory input, we manipulate subjects' attention to one of two competing rhythmic streams in the same sequence. The results reveal that the neural representation of the attended rhythm is significantly enhanced both in its steady-state power and spatial phase coherence relative to its unattended state, closely correlating with its perceptual detectability for each listener. Interestingly, the data reveal a differential efficiency of rhythmic rates on the order of a few hertz during the streaming process, closely following known neural and behavioral measures of temporal modulation sensitivity in the auditory system. These findings establish a direct link between known temporal modulation tuning in the auditory system (particularly at the level of auditory cortex) and the temporal integration of perceptual features in a complex acoustic scene, as mediated by processes of attention. PMID:20826671
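The "phase coherence" enhancement reported here belongs to the family of inter-trial phase-coherence measures. A simplified single-sensor sketch (simulated trials; the rhythm frequency, noise level, and trial count are illustrative assumptions, and the study's measure was computed across sensors rather than trials at one sensor):

```python
import numpy as np

def phase_coherence(trials, fs, f):
    """Inter-trial phase coherence at frequency f: magnitude of the mean
    unit phasor of the Fourier component across trials (1 = perfectly
    phase-locked, near 0 = random phase)."""
    spec = np.fft.rfft(trials, axis=1)
    freqs = np.fft.rfftfreq(trials.shape[1], 1 / fs)
    k = int(np.argmin(np.abs(freqs - f)))
    phasors = spec[:, k] / np.abs(spec[:, k])
    return float(np.abs(phasors.mean()))

rng = np.random.default_rng(4)
fs, dur, f = 200.0, 2.0, 5.0                  # hypothetical: 5 Hz rhythm
t = np.arange(0, dur, 1 / fs)

# "Attended": same phase every trial; "unattended": random phase per trial.
locked = np.array([np.sin(2 * np.pi * f * t) + rng.normal(0, 0.3, t.size)
                   for _ in range(30)])
random_ph = np.array([np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                      + rng.normal(0, 0.3, t.size) for _ in range(30)])

pc_locked = phase_coherence(locked, fs, f)
pc_random = phase_coherence(random_ph, fs, f)
print(f"phase-locked: {pc_locked:.2f}, random-phase: {pc_random:.2f}")
```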

  20. Spontaneous in-flight accommodation of hand orientation to unseen grasp targets: A case of action blindsight.

    PubMed

    Prentiss, Emily K; Schneider, Colleen L; Williams, Zoë R; Sahin, Bogachan; Mahon, Bradford Z

    2018-03-15

    The division of labour between the dorsal and ventral visual pathways is well established. The ventral stream supports object identification, while the dorsal stream supports online processing of visual information in the service of visually guided actions. Here, we report a case of an individual with a right inferior quadrantanopia who exhibited accurate spontaneous rotation of his wrist when grasping a target object in his blind visual field. His accurate wrist orientation was observed despite the fact that he exhibited no sensitivity to the orientation of the handle in a perceptual matching task. These findings indicate that non-geniculostriate visual pathways process basic volumetric information relevant to grasping, and reinforce the observation that phenomenal awareness is not necessary for an object's volumetric properties to influence visuomotor performance.

  1. Thalamic connections of the core auditory cortex and rostral supratemporal plane in the macaque monkey.

    PubMed

    Scott, Brian H; Saleem, Kadharbatcha S; Kikuchi, Yukiko; Fukushima, Makoto; Mishkin, Mortimer; Saunders, Richard C

    2017-11-01

    In the primate auditory cortex, information flows serially in the mediolateral dimension from core, to belt, to parabelt. In the caudorostral dimension, stepwise serial projections convey information through the primary, rostral, and rostrotemporal (AI, R, and RT) core areas on the supratemporal plane, continuing to the rostrotemporal polar area (RTp) and adjacent auditory-related areas of the rostral superior temporal gyrus (STGr) and temporal pole. In addition to this cascade of corticocortical connections, the auditory cortex receives parallel thalamocortical projections from the medial geniculate nucleus (MGN). Previous studies have examined the projections from MGN to auditory cortex, but most have focused on the caudal core areas AI and R. In this study, we investigated the full extent of connections between MGN and AI, R, RT, RTp, and STGr using retrograde and anterograde anatomical tracers. Both AI and R received nearly 90% of their thalamic inputs from the ventral subdivision of the MGN (MGv; the primary/lemniscal auditory pathway). By contrast, RT received only ∼45% from MGv, and an equal share from the dorsal subdivision (MGd). Area RTp received ∼25% of its inputs from MGv, but received additional inputs from multisensory areas outside the MGN (30% in RTp vs. 1-5% in core areas). The MGN input to RTp distinguished this rostral extension of auditory cortex from the adjacent auditory-related cortex of the STGr, which received 80% of its thalamic input from multisensory nuclei (primarily medial pulvinar). Anterograde tracers identified complementary descending connections by which highly processed auditory information may modulate thalamocortical inputs. © 2017 Wiley Periodicals, Inc.

  2. Exposures to fine particulate matter (PM2.5) and ozone above USA standards are associated with auditory brainstem dysmorphology and abnormal auditory brainstem evoked potentials in healthy young dogs.

    PubMed

    Calderón-Garcidueñas, Lilian; González-González, Luis O; Kulesza, Randy J; Fech, Tatiana M; Pérez-Guillé, Gabriela; Luna, Miguel Angel Jiménez-Bravo; Soriano-Rosales, Rosa Eugenia; Solorio, Edelmira; Miramontes-Higuera, José de Jesús; Gómez-Maqueo Chew, Aline; Bernal-Morúa, Alexia F; Mukherjee, Partha S; Torres-Jardón, Ricardo; Mills, Paul C; Wilson, Wayne J; Pérez-Guillé, Beatriz; D'Angiulli, Amedeo

    2017-10-01

    Delayed central conduction times in the auditory brainstem have been observed in Mexico City (MC) healthy children exposed to fine particulate matter (PM2.5) and ozone (O3) above the current United States Environmental Protection Agency (US-EPA) standards. MC children have α-synuclein brainstem accumulation and medial superior olivary complex (MSO) dysmorphology. The present study used a dog model to investigate the potential effects of air pollution on the function and morphology of the auditory brainstem. Twenty-four dogs living in clean air versus MC, average age 37.1 ± 26.3 months, underwent brainstem auditory evoked potential (BAEP) measurements. Eight dogs (4 MC, 4 controls) were analysed for auditory brainstem morphology and histopathology. MC dogs showed ventral cochlear nuclei hypotrophy and MSO dysmorphology with a significant decrease in cell body size, decreased neuronal packing density with regions in the nucleus devoid of neurons, and marked gliosis. MC dogs showed significantly delayed BAEP absolute wave I, III and V latencies compared to controls. MC dogs show auditory nuclei dysmorphology and BAEPs consistent with an alteration of the generator sites of the auditory brainstem response waveform. This study demonstrates the usefulness of BAEPs for studying auditory brainstem neurodegenerative changes associated with air pollution in dogs. Recognition of the role of non-invasive BAEPs in urban dogs is warranted to elucidate novel neurodegenerative pathways linked to air pollution, and as a promising early diagnostic strategy for Alzheimer's disease. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Task-specific reorganization of the auditory cortex in deaf humans

    PubMed Central

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-01

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior–lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain. PMID:28069964

  4. Task-specific reorganization of the auditory cortex in deaf humans.

    PubMed

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-24

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.

  5. Discovering Structure in Auditory Input: Evidence from Williams Syndrome

    ERIC Educational Resources Information Center

    Elsabbagh, Mayada; Cohen, Henri; Karmiloff-Smith, Annette

    2010-01-01

    We examined auditory perception in Williams syndrome by investigating strategies used in organizing sound patterns into coherent units. In Experiment 1, we investigated the streaming of sound sequences into perceptual units, on the basis of pitch cues, in a group of children and adults with Williams syndrome compared to typical controls. We showed…

  6. Cross-Modal Interactions during Perception of Audiovisual Speech and Nonspeech Signals: An fMRI Study

    ERIC Educational Resources Information Center

    Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann

    2011-01-01

    During speech communication, visual information may interact with the auditory system at various processing stages. Most noteworthy, recent magnetoencephalography (MEG) data provided first evidence for early and preattentive phonetic/phonological encoding of the visual data stream--prior to its fusion with auditory phonological features [Hertrich,…

  7. Auditory Stream Segregation and the Perception of Across-Frequency Synchrony

    ERIC Educational Resources Information Center

    Micheyl, Christophe; Hunter, Cynthia; Oxenham, Andrew J.

    2010-01-01

    This study explored the extent to which sequential auditory grouping affects the perception of temporal synchrony. In Experiment 1, listeners discriminated between 2 pairs of asynchronous "target" tones at different frequencies, A and B, in which the B tone either led or lagged. Thresholds were markedly higher when the target tones were temporally…

  8. Concentric scheme of monkey auditory cortex

    NASA Astrophysics Data System (ADS)

    Kosaki, Hiroko; Saunders, Richard C.; Mishkin, Mortimer

    2003-04-01

    The cytoarchitecture of the rhesus monkey's auditory cortex was examined using immunocytochemical staining with parvalbumin, calbindin-D28K, and SMI32, as well as staining for cytochrome oxidase (CO). The results suggest that Kaas and Hackett's scheme of the auditory cortices can be extended to include five concentric rings surrounding an inner core. The inner core, containing areas A1 and R, is the most densely stained with parvalbumin and CO and can be separated on the basis of laminar patterns of SMI32 staining into lateral and medial subdivisions. From the inner core to the fifth (outermost) ring, parvalbumin staining gradually decreases and calbindin staining gradually increases. The first ring corresponds to Kaas and Hackett's auditory belt, and the second, to their parabelt. SMI32 staining revealed a clear border between these two. Rings 2 through 5 extend laterally into the dorsal bank of the superior temporal sulcus. The results also suggest that the rostral tip of the outermost ring adjoins the rostroventral part of the insula (area Pro) and the temporal pole, while the caudal tip adjoins the ventral part of area 7a.

  9. A functional MRI study of happy and sad affective states induced by classical music.

    PubMed

    Mitterschiffthaler, Martina T; Fu, Cynthia H Y; Dalton, Jeffrey A; Andrew, Christopher M; Williams, Steven C R

    2007-11-01

    The present study investigated the functional neuroanatomy of transient mood changes in response to Western classical music. In a pilot experiment, 53 healthy volunteers (mean age: 32.0; SD = 9.6) evaluated their emotional responses to 60 classical musical pieces using a visual analogue scale (VAS) ranging from 0 (sad) through 50 (neutral) to 100 (happy). Twenty pieces were found to accurately induce the intended emotional states with good reliability, consisting of 5 happy, 5 sad, and 10 emotionally unevocative, neutral musical pieces. In a subsequent functional magnetic resonance imaging (fMRI) study, the blood oxygenation level dependent (BOLD) signal contrast was measured in response to the mood state induced by each musical stimulus in a separate group of 16 healthy participants (mean age: 29.5; SD = 5.5). Mood state ratings during scanning were made by a VAS, which confirmed the emotional valence of the selected stimuli. Increased BOLD signal contrast during presentation of happy music was found in the ventral and dorsal striatum, anterior cingulate, parahippocampal gyrus, and auditory association areas. With sad music, increased BOLD signal responses were noted in the hippocampus/amygdala and auditory association areas. Presentation of neutral music was associated with increased BOLD signal responses in the insula and auditory association areas. Our findings suggest that an emotion processing network in response to music integrates the ventral and dorsal striatum, areas involved in reward experience and movement; the anterior cingulate, which is important for targeting attention; and medial temporal areas, traditionally found in the appraisal and processing of emotions. Copyright 2006 Wiley-Liss, Inc.

  10. Neural pathways for visual speech perception

    PubMed Central

    Bernstein, Lynne E.; Liebenthal, Einat

    2014-01-01

    This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA. PMID:25520611

  11. Reverberation impairs brainstem temporal representations of voiced vowel sounds: challenging “periodicity-tagged” segregation of competing speech in rooms

    PubMed Central

    Sayles, Mark; Stasiak, Arkadiusz; Winter, Ian M.

    2015-01-01

    The auditory system typically processes information from concurrently active sound sources (e.g., two voices speaking at once), in the presence of multiple delayed, attenuated and distorted sound-wave reflections (reverberation). Brainstem circuits help segregate these complex acoustic mixtures into “auditory objects.” Psychophysical studies demonstrate a strong interaction between reverberation and fundamental-frequency (F0) modulation, leading to impaired segregation of competing vowels when segregation is on the basis of F0 differences. Neurophysiological studies of complex-sound segregation have concentrated on sounds with steady F0s, in anechoic environments. However, F0 modulation and reverberation are quasi-ubiquitous. We examine the ability of 129 single units in the ventral cochlear nucleus (VCN) of the anesthetized guinea pig to segregate the concurrent synthetic vowel sounds /a/ and /i/, based on temporal discharge patterns under closed-field conditions. We address the effects of added real-room reverberation, F0 modulation, and the interaction of these two factors, on brainstem neural segregation of voiced speech sounds. A firing-rate representation of single vowels' spectral envelopes is robust to the combination of F0 modulation and reverberation: local firing-rate maxima and minima across the tonotopic array code vowel-formant structure. However, single-vowel F0-related periodicity information in shuffled inter-spike interval distributions is significantly degraded in the combined presence of reverberation and F0 modulation. Hence, segregation of double vowels' spectral energy into two streams (corresponding to the two vowels), on the basis of temporal discharge patterns, is impaired by reverberation, specifically when F0 is modulated. All unit types (primary-like, chopper, onset) are similarly affected. These results offer neurophysiological insights into the perceptual organization of complex acoustic scenes under realistically challenging listening conditions. PMID:25628545

  12. Auditory working memory load impairs visual ventral stream processing: toward a unified model of attentional load.

    PubMed

    Klemen, Jane; Büchel, Christian; Bühler, Mira; Menz, Mareike M; Rose, Michael

    2010-03-01

    Attentional interference between tasks performed in parallel is known to have strong and often undesired effects. As yet, however, the mechanisms by which interference operates remain elusive. A better knowledge of these processes may facilitate our understanding of the effects of attention on human performance and the debilitating consequences that disruptions to attention can have. According to the load theory of cognitive control, processing of task-irrelevant stimuli is increased by attending in parallel to a relevant task with high cognitive demands. This is due to the relevant task engaging cognitive control resources that are, hence, unavailable to inhibit the processing of task-irrelevant stimuli. However, it has also been demonstrated that a variety of types of load (perceptual and emotional) can result in a reduction of the processing of task-irrelevant stimuli, suggesting a uniform effect of increased load irrespective of the type of load. In the present study, we concurrently presented a relevant auditory matching task [n-back working memory (WM)] of low or high cognitive load (1-back or 2-back WM) and task-irrelevant images at one of three object visibility levels (0%, 50%, or 100%). fMRI activation during the processing of the task-irrelevant visual stimuli was measured in the lateral occipital cortex and found to be reduced under high, compared to low, WM load. In combination with previous findings, this result is suggestive of a more generalized load theory, whereby cognitive load, as well as other types of load (e.g., perceptual), can result in a reduction of the processing of task-irrelevant stimuli, in line with a uniform effect of increased load irrespective of the type of load.
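
    The load manipulation above uses the n-back rule: a stimulus is a target when it matches the item presented n steps earlier, so 2-back demands more working memory than 1-back. A minimal sketch of that target rule (the function name and stimulus sequence are illustrative, not from the study):

```python
def n_back_targets(stimuli, n):
    # Indices at which the current item matches the item n steps back.
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

seq = ["A", "B", "A", "A", "C", "A", "C"]
assert n_back_targets(seq, 1) == [3]        # low load (1-back)
assert n_back_targets(seq, 2) == [2, 5, 6]  # high load (2-back)
```

    Note that the same sequence yields more targets, and a harder comparison, under the 2-back rule: the participant must continuously maintain and update the last n items.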

  13. Activity in Human Auditory Cortex Represents Spatial Separation Between Concurrent Sounds.

    PubMed

    Shiell, Martha M; Hausfeld, Lars; Formisano, Elia

    2018-05-23

    The primary and posterior auditory cortex (AC) are known for their sensitivity to spatial information, but how this information is processed is not yet understood. AC that is sensitive to spatial manipulations is also modulated by the number of auditory streams present in a scene (Smith et al., 2010), suggesting that spatial and nonspatial cues are integrated for stream segregation. We reasoned that, if this is the case, then it is the distance between sounds rather than their absolute positions that is essential. To test this hypothesis, we measured human brain activity in response to spatially separated concurrent sounds with fMRI at 7 tesla in five men and five women. Stimuli were spatialized amplitude-modulated broadband noises recorded for each participant via in-ear microphones before scanning. Using a linear support vector machine classifier, we investigated whether sound location and/or location plus spatial separation between sounds could be decoded from the activity in Heschl's gyrus and the planum temporale. The classifier was successful only when comparing patterns associated with the conditions that had the largest difference in perceptual spatial separation. Our pattern of results suggests that the representation of spatial separation is not merely the combination of single locations, but rather is an independent feature of the auditory scene. SIGNIFICANCE STATEMENT Often, when we think of auditory spatial information, we think of where sounds are coming from; that is, the process of localization. However, this information can also be used in scene analysis, the process of grouping and segregating features of a soundwave into objects. Essentially, when sounds are further apart, they are more likely to be segregated into separate streams. Here, we provide evidence that activity in the human auditory cortex represents the spatial separation between sounds rather than their absolute locations, indicating that scene analysis and localization processes may be independent. Copyright © 2018 the authors 0270-6474/18/384977-08$15.00/0.
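
    The decoding analysis above rests on a standard pattern-classification recipe: train a linear SVM on voxel activity patterns and test, by cross-validation, whether condition labels can be predicted above chance. A minimal sketch with synthetic "voxel" data (the dimensions and signal are invented for illustration; they are not the study's data or pipeline):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 50

# Two conditions whose mean activity differs in a subset of "voxels".
labels = np.repeat([0, 1], n_trials // 2)
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :10] += 1.0  # condition-specific signal

# Cross-validated decoding accuracy; chance level is 0.5.
scores = cross_val_score(LinearSVC(dual=False), patterns, labels, cv=5)
assert scores.mean() > 0.5
```

    In the study, the analogous patterns come from Heschl's gyrus and the planum temporale, and above-chance accuracy for a condition pair is the evidence that the contrasted feature is represented there.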

  14. Selective Entrainment of Theta Oscillations in the Dorsal Stream Causally Enhances Auditory Working Memory Performance.

    PubMed

    Albouy, Philippe; Weiss, Aurélien; Baillet, Sylvain; Zatorre, Robert J

    2017-04-05

    The implication of the dorsal stream in manipulating auditory information in working memory has been recently established. However, the oscillatory dynamics within this network and its causal relationship with behavior remain undefined. Using simultaneous MEG/EEG, we show that theta oscillations in the dorsal stream predict participants' manipulation abilities during memory retention in a task requiring the comparison of two patterns differing in temporal order. We investigated the causal relationship between brain oscillations and behavior by applying theta-rhythmic TMS combined with EEG over the MEG-identified target (left intraparietal sulcus) during the silent interval between the two stimuli. Rhythmic TMS entrained theta oscillation and boosted participants' accuracy. TMS-induced oscillatory entrainment scaled with behavioral enhancement, and both gains varied with participants' baseline abilities. These effects were not seen for a melody-comparison control task and were not observed for arrhythmic TMS. These data establish theta activity in the dorsal stream as causally related to memory manipulation. VIDEO ABSTRACT. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Spatial selective auditory attention in the presence of reverberant energy: individual differences in normal-hearing listeners.

    PubMed

    Ruggles, Dorea; Shinn-Cunningham, Barbara

    2011-06-01

    Listeners can selectively attend to a desired target by directing attention to known target source features, such as location or pitch. Reverberation, however, reduces the reliability of the cues that allow a target source to be segregated and selected from a sound mixture. Given this, it is likely that reverberant energy interferes with selective auditory attention. Anecdotal reports suggest that the ability to focus spatial auditory attention degrades even with early aging, yet there is little evidence that middle-aged listeners have behavioral deficits on tasks requiring selective auditory attention. The current study was designed to look for individual differences in selective attention ability and to see if any such differences correlate with age. Normal-hearing adults, ranging in age from 18 to 55 years, were asked to report a stream of digits located directly ahead in a simulated rectangular room. Simultaneous, competing masker digit streams were simulated at locations 15° left and right of center. The level of reverberation was varied to alter task difficulty by interfering with localization cues (increasing localization blur). Overall, performance was best in the anechoic condition and worst in the high-reverberation condition. Listeners nearly always reported a digit from one of the three competing streams, showing that reverberation did not render the digits unintelligible. Importantly, inter-subject differences were extremely large. These differences, however, were not significantly correlated with age, memory span, or hearing status. These results show that listeners with audiometrically normal pure tone thresholds differ in their ability to selectively attend to a desired source, a task important in everyday communication. Further work is necessary to determine if these differences arise from differences in peripheral auditory function or in more central function.

  16. Auditory processing, speech perception and phonological ability in pre-school children at high-risk for dyslexia: a longitudinal study of the auditory temporal processing theory.

    PubMed

    Boets, Bart; Wouters, Jan; van Wieringen, Astrid; Ghesquière, Pol

    2007-04-09

    This study investigates whether the core bottleneck of literacy impairment should be situated at the phonological level or at a more basic sensory level, as postulated by supporters of the auditory temporal processing theory. Phonological ability, speech perception and low-level auditory processing were assessed in a group of 5-year-old pre-school children at high family risk for dyslexia, compared to a group of well-matched low-risk control children. Based on family risk status and first-grade literacy achievement, children were categorized in groups and pre-school data were retrospectively reanalyzed. On average, children showing both increased family risk and literacy impairment at the end of first grade presented significant pre-school deficits in phonological awareness, rapid automatized naming, speech-in-noise perception and frequency modulation detection. The concurrent presence of these deficits before receiving any formal reading instruction might suggest a causal relation with problematic literacy development. However, a closer inspection of the individual data indicates that the core of the literacy problem is situated at the level of higher-order phonological processing. Although auditory and speech perception problems are relatively over-represented in literacy-impaired subjects and might possibly aggravate the phonological and literacy problem, it is unlikely that they would be at the basis of these problems. At a neurobiological level, the results are interpreted as evidence for dysfunctional processing along the auditory-to-articulation stream that is implicated in phonological processing, in combination with a relatively intact or inconsistently impaired functioning of the auditory-to-meaning stream that subserves auditory processing and speech perception.

  17. Assessing the validity of subjective reports in the auditory streaming paradigm.

    PubMed

    Farkas, Dávid; Denham, Susan L; Bendixen, Alexandra; Winkler, István

    2016-04-01

    While subjective reports provide a direct measure of perception, their validity is not self-evident. Here, the authors tested three possible biasing effects on perceptual reports in the auditory streaming paradigm: errors due to imperfect understanding of the instructions, voluntary perceptual biasing, and susceptibility to implicit expectations. (1) Analysis of the responses to catch trials separately promoting each of the possible percepts allowed the authors to exclude participants who likely have not fully understood the instructions. (2) Explicit biasing instructions led to markedly different behavior than the conventional neutral-instruction condition, suggesting that listeners did not voluntarily bias their perception in a systematic way under the neutral instructions. Comparison with a random response condition further supported this conclusion. (3) No significant relationship was found between social desirability, a scale-based measure of susceptibility to implicit social expectations, and any of the perceptual measures extracted from the subjective reports. This suggests that listeners did not significantly bias their perceptual reports due to possible implicit expectations present in the experimental context. In sum, these results suggest that valid perceptual data can be obtained from subjective reports in the auditory streaming paradigm.

  18. Auditory Stream Segregation in Autism Spectrum Disorder: Benefits and Downsides of Superior Perceptual Processes.

    PubMed

    Bouvet, Lucie; Mottron, Laurent; Valdois, Sylviane; Donnadieu, Sophie

    2016-05-01

    Auditory stream segregation allows us to organize our sound environment by focusing on specific information and ignoring what is unimportant. One previous study reported impaired stream segregation ability in children with Asperger syndrome. To investigate this question further, we used an interleaved melody recognition task with children with autism spectrum disorder (ASD). In this task, a probe melody is followed by a mixed sequence, made up of a target melody interleaved with a distractor melody. These two melodies have either the same [0 semitone (ST)] or a different mean frequency (6, 12 or 24 ST separation conditions). Children have to identify whether the probe melody is present in the mixed sequence. Children with ASD performed better than typical children when melodies were completely embedded. Conversely, they were impaired in the ST separation conditions. Our results confirm the difficulty of children with ASD in using a frequency cue to organize auditory perceptual information. However, superior performance in the completely embedded condition may result from superior perceptual processes in autism. We propose that this atypical pattern of results might reflect the expression of a single cognitive feature in autism.
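
    The semitone separations above map onto frequency ratios via the equal-tempered rule f' = f · 2^(ST/12): a 12-ST separation doubles the mean frequency (one octave), and 24 ST quadruples it. A small sketch (the helper name and 440 Hz reference are illustrative):

```python
def shift_semitones(freq_hz, semitones):
    # Equal-tempered shift: each semitone multiplies frequency by 2**(1/12).
    return freq_hz * 2.0 ** (semitones / 12.0)

# A 12-ST separation places the distractor an octave above a 440 Hz target;
# 24 ST places it two octaves above.
assert shift_semitones(440.0, 12) == 880.0
assert shift_semitones(440.0, 24) == 1760.0
```

    The larger the separation, the easier it normally is to hear the target and distractor as two streams, which is why the 0-ST (completely embedded) condition is the hardest for typical listeners.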

  19. Investigating the dynamics of the brain response to music: A central role of the ventral striatum/nucleus accumbens.

    PubMed

    Mueller, Karsten; Fritz, Thomas; Mildner, Toralf; Richter, Maxi; Schulze, Katrin; Lepsien, Jöran; Schroeter, Matthias L; Möller, Harald E

    2015-08-01

    Ventral striatal activity has been previously shown to correspond well to reward value mediated by music. Here, we investigate the dynamic brain response to music and manipulated counterparts using functional magnetic resonance imaging (fMRI). Counterparts of musical excerpts were produced by either manipulating the consonance/dissonance of the musical fragments or playing them backwards (or both). Results show a greater involvement of the ventral striatum/nucleus accumbens both when contrasting listening to music that is perceived as pleasant and listening to a manipulated version perceived as unpleasant (backward dissonant), as well as in a parametric analysis for increasing pleasantness. Notably, both analyses yielded a ventral striatal response that was strongest during an early phase of stimulus presentation. A hippocampal response to the musical stimuli was also observed, and was largely mediated by processing differences between listening to forward and backward music. This hippocampal involvement was again strongest during the early response to the music. Auditory cortex activity was more strongly evoked by the original (pleasant) music compared to its manipulated counterparts, but did not display a similar decline of activation over time as subcortical activity. These findings rather suggest that the ventral striatal/nucleus accumbens response during music listening is strongest in the first seconds and then declines. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Content congruency and its interplay with temporal synchrony modulate integration between rhythmic audiovisual streams.

    PubMed

    Su, Yi-Huang

    2014-01-01

    Both lower-level stimulus factors (e.g., temporal proximity) and higher-level cognitive factors (e.g., content congruency) are known to influence multisensory integration. The former can direct attention in a converging manner, and the latter can indicate whether information from the two modalities belongs together. The present research investigated whether and how these two factors interacted in the perception of rhythmic, audiovisual (AV) streams derived from a human movement scenario. Congruency here was based on sensorimotor correspondence pertaining to rhythm perception. Participants attended to bimodal stimuli consisting of a humanlike figure moving regularly to a sequence of auditory beats, and detected a possible auditory temporal deviant. The figure moved either downwards (congruently) or upwards (incongruently) to the downbeat, while in both situations the movement was either synchronous with the beat, or lagging behind it. Greater cross-modal binding was expected to hinder deviant detection. Results revealed poorer detection for congruent than for incongruent streams, suggesting stronger integration in the former. False alarms increased in asynchronous stimuli only for congruent streams, indicating greater tendency for deviant report due to visual capture of asynchronous auditory events. In addition, a greater increase in perceived synchrony was associated with a greater reduction in false alarms for congruent streams, while the pattern was reversed for incongruent ones. These results demonstrate that content congruency as a top-down factor not only promotes integration, but also modulates bottom-up effects of synchrony. Results are also discussed regarding how theories of integration and attentional entrainment may be combined in the context of rhythmic multisensory stimuli.

  1. Ventral and dorsal streams for choosing word order during sentence production

    PubMed Central

    Thothathiri, Malathi; Rattinger, Michelle

    2015-01-01

    Proficient language use requires speakers to vary word order and choose between different ways of expressing the same meaning. Prior statistical associations between individual verbs and different word orders are known to influence speakers’ choices, but the underlying neural mechanisms are unknown. Here we show that distinct neural pathways are used for verbs with different statistical associations. We manipulated statistical experience by training participants in a language containing novel verbs and two alternative word orders (agent-before-patient, AP; patient-before-agent, PA). Some verbs appeared exclusively in AP, others exclusively in PA, and yet others in both orders. Subsequently, we used sparse sampling neuroimaging to examine the neural substrates as participants generated new sentences in the scanner. Behaviorally, participants showed an overall preference for AP order, but also increased PA order for verbs experienced in that order, reflecting statistical learning. Functional activation and connectivity analyses revealed distinct networks underlying the increased PA production. Verbs experienced in both orders during training preferentially recruited a ventral stream, indicating the use of conceptual processing for mapping meaning to word order. In contrast, verbs experienced solely in PA order recruited dorsal pathways, indicating the use of selective attention and sensorimotor integration for choosing words in the right order. These results show that the brain tracks the structural associations of individual verbs and that the same structural output may be achieved via ventral or dorsal streams, depending on the type of regularities in the input. PMID:26621706

  2. Here, there and everywhere: higher visual function and the dorsal visual stream.

    PubMed

    Cooper, Sarah Anne; O'Sullivan, Michael

    2016-06-01

    The dorsal visual stream, often referred to as the 'where' stream, represents the pathway taken by visual information from the primary visual cortex to the posterior parietal lobe and onwards. It partners the ventral or 'what' stream, the subject of a previous review and largely a temporal-based system. Here, we consider the dorsal stream disorders of perception (simultanagnosia, akinetopsia) along with their consequences on action (eg, optic ataxia and oculomotor apraxia, along with Balint's syndrome). The role of the dorsal stream in blindsight and hemispatial neglect is also considered. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  3. Stereotactically-guided Ablation of the Rat Auditory Cortex, and Localization of the Lesion in the Brain.

    PubMed

    Lamas, Verónica; Estévez, Sheila; Pernía, Marianni; Plaza, Ignacio; Merchán, Miguel A

    2017-10-11

    The rat auditory cortex (AC) is becoming popular among auditory neuroscience investigators who are interested in experience-dependent plasticity, auditory perceptual processes, and cortical control of sound processing in the subcortical auditory nuclei. To address new challenges, a procedure to accurately locate and surgically expose the auditory cortex would expedite this research effort. Stereotactic neurosurgery is routinely used in pre-clinical research in animal models to engraft a needle or electrode at a pre-defined location within the auditory cortex. In the following protocol, we use stereotactic methods in a novel way. We identify four coordinate points over the surface of the temporal bone of the rat to define a window that, once opened, accurately exposes both the primary (A1) and secondary (Dorsal and Ventral) cortices of the AC. Using this method, we then perform a surgical ablation of the AC. After such a manipulation is performed, it is necessary to assess the localization, size, and extension of the lesions made in the cortex. Thus, we also describe a method to easily locate the AC ablation postmortem using a coordinate map constructed by transferring the cytoarchitectural limits of the AC to the surface of the brain. The combination of the stereotactically-guided location and ablation of the AC with the localization of the injured area in a coordinate map postmortem facilitates the validation of information obtained from the animal, and leads to a better analysis and comprehension of the data.

  4. Cortico-Cortical Connectivity Within Ferret Auditory Cortex.

    PubMed

    Bizley, Jennifer K; Bajo, Victoria M; Nodal, Fernando R; King, Andrew J

    2015-10-15

    Despite numerous studies of auditory cortical processing in the ferret (Mustela putorius), very little is known about the connections between the different regions of the auditory cortex that have been characterized cytoarchitectonically and physiologically. We examined the distribution of retrograde and anterograde labeling after injecting tracers into one or more regions of ferret auditory cortex. Injections of different tracers at frequency-matched locations in the core areas, the primary auditory cortex (A1) and anterior auditory field (AAF), of the same animal revealed the presence of reciprocal connections with overlapping projections to and from discrete regions within the posterior pseudosylvian and suprasylvian fields (PPF and PSF), suggesting that these connections are frequency specific. In contrast, projections from the primary areas to the anterior dorsal field (ADF) on the anterior ectosylvian gyrus were scattered and non-overlapping, consistent with the non-tonotopic organization of this field. The relative strength of the projections originating in each of the primary fields differed, with A1 predominantly targeting the posterior bank fields PPF and PSF, which in turn project to the ventral posterior field, whereas AAF projects more heavily to the ADF, which then projects to the anteroventral field and the pseudosylvian sulcal cortex. These findings suggest that parallel anterior and posterior processing networks may exist, although the connections between different areas often overlap and interactions were present at all levels. © 2015 Wiley Periodicals, Inc.

  5. Bioacoustic Signal Classification in Cat Auditory Cortex

    DTIC Science & Technology

    1994-01-01

    Brashear, H.R., and Heilman, K.M. Pure word deafness after bilateral primary auditory cortex infarcts. Neurology 34: 347-352, 1984.

  6. Anatomy of the auditory thalamocortical system in the Mongolian gerbil: nuclear origins and cortical field-, layer-, and frequency-specificities.

    PubMed

    Saldeitis, Katja; Happel, Max F K; Ohl, Frank W; Scheich, Henning; Budinger, Eike

    2014-07-01

    Knowledge of the anatomical organization of the auditory thalamocortical (TC) system is fundamental for the understanding of auditory information processing in the brain. In the Mongolian gerbil (Meriones unguiculatus), a valuable model species in auditory research, the detailed anatomy of this system has not yet been worked out. Here, we investigated the projections from the three subnuclei of the medial geniculate body (MGB), namely, its ventral (MGv), dorsal (MGd), and medial (MGm) divisions, as well as from several of their subdivisions (MGv: pars lateralis [LV], pars ovoidea [OV], rostral pole [RP]; MGd: deep dorsal nucleus [DD]), to the auditory cortex (AC) by stereotaxic pressure injections and electrophysiologically guided iontophoretic injections of the anterograde tract tracer biocytin. Our data reveal highly specific features of the TC connections regarding their nuclear origin in the subdivisions of the MGB and their termination patterns in the auditory cortical fields and layers. In addition to tonotopically organized projections, primarily of the LV, OV, and DD to the AC, a large number of axons diverge across the tonotopic gradient. These originate mainly from the RP, MGd (proper), and MGm. In particular, neurons of the MGm project in a columnar fashion to several auditory fields, forming small- and medium-sized boutons, and also hitherto unknown giant terminals. The distinctive layer-specific distribution of axonal endings within the AC indicates that each of the TC connectivity systems has a specific function in auditory cortical processing. Copyright © 2014 Wiley Periodicals, Inc.

  7. A bilateral cortical network responds to pitch perturbations in speech feedback

    PubMed Central

    Kort, Naomi S.; Nagarajan, Srikantan S.; Houde, John F.

    2014-01-01

    Auditory feedback is used to monitor and correct for errors in speech production, and one of the clearest demonstrations of this is the pitch perturbation reflex. During ongoing phonation, speakers respond rapidly to shifts of the pitch of their auditory feedback, altering their pitch production to oppose the direction of the applied pitch shift. In this study, we examine the timing of activity within a network of brain regions thought to be involved in mediating this behavior. To isolate auditory feedback processing relevant for motor control of speech, we used magnetoencephalography (MEG) to compare neural responses to speech onset and to transient (400 ms) pitch feedback perturbations during speaking with responses to identical acoustic stimuli during passive listening. We found overlapping, but distinct bilateral cortical networks involved in monitoring speech onset and feedback alterations in ongoing speech. Responses to speech onset during speaking were suppressed in bilateral auditory and left ventral supramarginal gyrus/posterior superior temporal sulcus (vSMG/pSTS). In contrast, during pitch perturbations, activity was enhanced in bilateral vSMG/pSTS, bilateral premotor cortex, right primary auditory cortex, and left higher order auditory cortex. We also found speaking-induced delays in responses to both unaltered and altered speech in bilateral primary and secondary auditory regions, the left vSMG/pSTS and right premotor cortex. The network dynamics reveal the cortical processing involved in both detecting the speech error and updating the motor plan to create the new pitch output. These results implicate vSMG/pSTS as critical in both monitoring auditory feedback and initiating rapid compensation to feedback errors. PMID:24076223

  8. Predictive cues for auditory stream formation in humans and monkeys.

    PubMed

    Aggelopoulos, Nikolaos C; Deike, Susann; Selezneva, Elena; Scheich, Henning; Brechmann, André; Brosch, Michael

    2017-12-18

    Auditory perception is improved when stimuli are predictable, and this effect is evident in a modulation of the activity of neurons in the auditory cortex as shown previously. Human listeners can better predict the presence of duration deviants embedded in stimulus streams with fixed interonset interval (isochrony) and repeated duration pattern (regularity), and neurons in the auditory cortex of macaque monkeys have stronger sustained responses in the 60-140 ms post-stimulus time window under these conditions. Subsequently, the question has arisen whether isochrony or regularity in the sensory input contributed to the enhancement of the neuronal and behavioural responses. Therefore, we varied the two factors isochrony and regularity independently and measured the ability of human subjects to detect deviants embedded in these sequences, as well as measuring the responses of neurons in the primary auditory cortex of macaque monkeys during presentations of the sequences. The performance of humans in detecting deviants was significantly increased by regularity. Isochrony enhanced detection only in the presence of the regularity cue. In monkeys, regularity increased the sustained component of neuronal tone responses in auditory cortex while isochrony had no consistent effect. Although both regularity and isochrony can be considered as parameters that would make a sequence of sounds more predictable, our results from the human and monkey experiments converge in that regularity has a greater influence on behavioural performance and neuronal responses. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
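
    The two factors can be crossed independently when generating stimulus sequences: isochrony fixes the inter-onset interval, while regularity fixes a repeating duration pattern. A sketch of such a generator (parameter values are illustrative, not the study's):

```python
import numpy as np

def make_sequence(n_tones=12, base_ioi=0.35, pattern=(0.1, 0.2, 0.1, 0.3),
                  isochronous=True, regular=True, rng=None):
    """Onset times and tone durations (in seconds) for a sequence in
    which isochrony and regularity are manipulated independently."""
    rng = np.random.default_rng(rng)
    if isochronous:
        iois = np.full(n_tones - 1, base_ioi)       # fixed inter-onset interval
    else:
        iois = rng.uniform(0.7, 1.3, n_tones - 1) * base_ioi  # jittered IOIs
    onsets = np.concatenate([[0.0], np.cumsum(iois)])
    if regular:
        durations = np.resize(pattern, n_tones)     # repeating duration pattern
    else:
        durations = rng.permutation(np.resize(pattern, n_tones))  # shuffled
    return onsets, durations
```

    Crossing the two boolean flags yields the four sequence types whose deviant-detection performance the study compares.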

  9. Comparative Study on Interaction of Form and Motion Processing Streams by Applying Two Different Classifiers in Mechanism for Recognition of Biological Movement

    PubMed Central

    2014-01-01

    Research in psychophysics, neurophysiology, and functional imaging indicates that biological movement is represented along two pathways. The visual perception of biological movement is formed through two processing streams of the visual system, the dorsal and the ventral. The ventral stream extracts form information; the dorsal stream, by contrast, provides motion information. The active basic model (ABM), a hierarchical representation of the human form, introduced a novelty in the form pathway by applying a Gabor-based supervised object-recognition method, increasing biological plausibility while preserving similarity to the original model. A fuzzy inference system processes motion-pattern information in the motion pathway, making the recognition process more robust. The interaction of these pathways is intriguing and has been considered by many studies in various fields; here, that interaction is investigated to obtain more appropriate results. An extreme learning machine (ELM) serves as the classification unit of this model: it retains the main properties of artificial neural networks while substantially reducing the training-time burden. Two configurations, interaction via a synergetic neural network and via an ELM, are compared in terms of accuracy and compatibility. PMID:25276860
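
    An ELM trains quickly because its hidden layer is random and fixed; only the output weights are fit, by linear least squares. A minimal sketch of the idea (not the paper's implementation):

```python
import numpy as np

def elm_train(X, y, n_hidden=32, rng=None):
    # Random, fixed hidden layer; only the output weights are solved for,
    # which is why ELM training is fast compared to backpropagation.
    rng = np.random.default_rng(rng)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                 # hidden-layer activations
    T = np.eye(y.max() + 1)[y]             # one-hot class targets
    beta = np.linalg.pinv(H) @ T           # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```

    The pseudo-inverse solve replaces iterative weight updates, which is the source of the training-time advantage the abstract refers to.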

  10. Auditory attention strategy depends on target linguistic properties and spatial configuration

    PubMed Central

    McCloy, Daniel R.; Lee, Adrian K. C.

    2015-01-01

    Whether crossing a busy intersection or attending a large dinner party, listeners sometimes need to attend to multiple spatially distributed sound sources or streams concurrently. How they achieve this is not clear—some studies suggest that listeners cannot truly simultaneously attend to separate streams, but instead combine attention switching with short-term memory to achieve something resembling divided attention. This paper presents two oddball detection experiments designed to investigate whether directing attention to phonetic versus semantic properties of the attended speech impacts listeners' ability to divide their auditory attention across spatial locations. Each experiment uses four spatially distinct streams of monosyllabic words, variation in cue type (providing phonetic or semantic information), and requiring attention to one or two locations. A rapid button-press response paradigm is employed to minimize the role of short-term memory in performing the task. Results show that differences in the spatial configuration of attended and unattended streams interact with linguistic properties of the speech streams to impact performance. Additionally, listeners may leverage phonetic information to make oddball detection judgments even when oddballs are semantically defined. Both of these effects appear to be mediated by the overall complexity of the acoustic scene. PMID:26233011

  11. Identifying auditory attention with ear-EEG: cEEGrid versus high-density cap-EEG comparison

    NASA Astrophysics Data System (ADS)

    Bleichner, Martin G.; Mirkovic, Bojana; Debener, Stefan

    2016-12-01

    Objective. This study presents a direct comparison of a classical EEG cap setup with a new around-the-ear electrode array (cEEGrid) to gain a better understanding of the potential of ear-centered EEG. Approach. Concurrent EEG was recorded from a classical scalp EEG cap and two cEEGrids that were placed around the left and the right ear. Twenty participants performed a spatial auditory attention task in which three sound streams were presented simultaneously. The sound streams were three seconds long and differed in the direction of origin (front, left, right) and the number of beats (3, 4, 5 respectively), as well as the timbre and pitch. The participants had to attend to either the left or the right sound stream. Main results. We found clear attention-modulated ERP effects reflecting the attended sound stream for both electrode setups, which agreed in morphology and effect size. A single-trial template matching classification showed that the direction of attention could be decoded significantly above chance (50%) for at least 16 out of 20 participants for both systems. The comparably high classification results of the single-trial analysis underline the quality of the signal recorded with the cEEGrids. Significance. These findings are further evidence for the feasibility of around-the-ear EEG recordings and demonstrate that well-described ERPs can be measured. We conclude that concealed behind-the-ear EEG recordings can be an alternative to classical cap EEG acquisition for auditory attention monitoring.

  12. Identifying auditory attention with ear-EEG: cEEGrid versus high-density cap-EEG comparison.

    PubMed

    Bleichner, Martin G; Mirkovic, Bojana; Debener, Stefan

    2016-12-01

    This study presents a direct comparison of a classical EEG cap setup with a new around-the-ear electrode array (cEEGrid) to gain a better understanding of the potential of ear-centered EEG. Concurrent EEG was recorded from a classical scalp EEG cap and two cEEGrids that were placed around the left and the right ear. Twenty participants performed a spatial auditory attention task in which three sound streams were presented simultaneously. The sound streams were three seconds long and differed in the direction of origin (front, left, right) and the number of beats (3, 4, 5 respectively), as well as the timbre and pitch. The participants had to attend to either the left or the right sound stream. We found clear attention-modulated ERP effects reflecting the attended sound stream for both electrode setups, which agreed in morphology and effect size. A single-trial template matching classification showed that the direction of attention could be decoded significantly above chance (50%) for at least 16 out of 20 participants for both systems. The comparably high classification results of the single-trial analysis underline the quality of the signal recorded with the cEEGrids. These findings are further evidence for the feasibility of around-the-ear EEG recordings and demonstrate that well-described ERPs can be measured. We conclude that concealed behind-the-ear EEG recordings can be an alternative to classical cap EEG acquisition for auditory attention monitoring.
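
    Single-trial template matching of the kind described above can be sketched as correlating each trial with class-average ERP templates and choosing the better match (a simplified illustration, not the authors' pipeline):

```python
import numpy as np

def decode_attention(trial, template_left, template_right):
    # Correlate the flattened single-trial response (channels x time)
    # with each attended-condition template; pick the stronger match.
    r_left = np.corrcoef(trial.ravel(), template_left.ravel())[0, 1]
    r_right = np.corrcoef(trial.ravel(), template_right.ravel())[0, 1]
    return "left" if r_left > r_right else "right"
```

    In practice the templates would be built from held-out trials (e.g., by cross-validation) so that the trial being classified never contributes to its own template.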

  13. Binding and unbinding the auditory and visual streams in the McGurk effect.

    PubMed

    Nahorna, Olha; Berthommier, Frédéric; Schwartz, Jean-Luc

    2012-08-01

    Subjects presented with coherent auditory and visual streams generally fuse them into a single percept. This results in enhanced intelligibility in noise, or in visual modification of the auditory percept in the McGurk effect. It is classically considered that processing is done independently in the auditory and visual systems before interaction occurs at a certain representational stage, resulting in an integrated percept. However, some behavioral and neurophysiological data suggest the existence of a two-stage process. A first stage would involve binding together the appropriate pieces of audio and video information before fusion per se in a second stage. Then it should be possible to design experiments leading to unbinding. It is shown here that if a given McGurk stimulus is preceded by an incoherent audiovisual context, the amount of McGurk effect is largely reduced. Various kinds of incoherent contexts (acoustic syllables dubbed on video sentences or phonetic or temporal modifications of the acoustic content of a regular sequence of audiovisual syllables) can significantly reduce the McGurk effect even when they are short (less than 4 s). The data are interpreted in the framework of a two-stage "binding and fusion" model for audiovisual speech perception.

  14. The auditory representation of speech sounds in human motor cortex

    PubMed Central

    Cheung, Connie; Hamilton, Liberty S; Johnson, Keith; Chang, Edward F

    2016-01-01

    In humans, listening to speech evokes neural responses in the motor cortex. This has been controversially interpreted as evidence that speech sounds are processed as articulatory gestures. However, it is unclear what information is actually encoded by such neural activity. We used high-density direct human cortical recordings while participants spoke and listened to speech sounds. Motor cortex neural patterns during listening were substantially different than during articulation of the same sounds. During listening, we observed neural activity in the superior and inferior regions of ventral motor cortex. During speaking, responses were distributed throughout somatotopic representations of speech articulators in motor cortex. The structure of responses in motor cortex during listening was organized along acoustic features similar to auditory cortex, rather than along articulatory features as during speaking. Motor cortex does not contain articulatory representations of perceived actions in speech, but rather, represents auditory vocal information. DOI: http://dx.doi.org/10.7554/eLife.12577.001 PMID:26943778

  15. What the success of brain imaging implies about the neural code.

    PubMed

    Guest, Olivia; Love, Bradley C

    2017-01-19

    The success of fMRI places constraints on the nature of the neural code. The fact that researchers can infer similarities between neural representations, despite fMRI's limitations, implies that certain neural coding schemes are more likely than others. For fMRI to succeed given its low temporal and spatial resolution, the neural code must be smooth at the voxel and functional level such that similar stimuli engender similar internal representations. Through proof and simulation, we determine which coding schemes are plausible given both fMRI's successes and its limitations in measuring neural activity. Deep neural network approaches, which have been forwarded as computational accounts of the ventral stream, are consistent with the success of fMRI, though functional smoothness breaks down in the later network layers. These results have implications for the nature of the neural code and ventral stream, as well as what can be successfully investigated with fMRI.

  16. Disruption of medial septum and diagonal bands of Broca cholinergic projections to the ventral hippocampus disrupt auditory fear memory.

    PubMed

    Staib, Jennifer M; Della Valle, Rebecca; Knox, Dayan K

    2018-07-01

    In classical fear conditioning, a neutral conditioned stimulus (CS) is paired with an aversive unconditioned stimulus (US), which leads to a fear memory. If the CS is repeatedly presented without the US after fear conditioning, the formation of an extinction memory occurs, which inhibits fear memory expression. A previous study has demonstrated that selective cholinergic lesions in the medial septum and vertical limb of the diagonal bands of Broca (MS/vDBB) prior to fear and extinction learning disrupt contextual fear memory discrimination and acquisition of extinction memory. MS/vDBB cholinergic neurons project to a number of substrates that are critical for fear and extinction memory. However, it is currently unknown which of these efferent projections are critical for contextual fear memory discrimination and extinction memory. To address this, we induced cholinergic lesions in efferent targets of MS/vDBB cholinergic neurons. These included the dorsal hippocampus (dHipp), ventral hippocampus (vHipp), medial prefrontal cortex (mPFC), and in the mPFC and dHipp combined. None of these lesion groups exhibited deficits in contextual fear memory discrimination or extinction memory. However, vHipp cholinergic lesions disrupted auditory fear memory. Because MS/vDBB cholinergic neurons are the sole source of acetylcholine in the vHipp, these results suggest that MS/vDBB cholinergic input to the vHipp is critical for auditory fear memory. Taken together with previous findings, the results of this study suggest that MS/vDBB cholinergic neurons are critical for fear and extinction memory, though further research is needed to elucidate the role of MS/vDBB cholinergic neurons in these types of emotional memory. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Projections from the dorsal and ventral cochlear nuclei to the medial geniculate body.

    PubMed

    Schofield, Brett R; Motts, Susan D; Mellott, Jeffrey G; Foster, Nichole L

    2014-01-01

    Direct projections from the cochlear nucleus (CN) to the medial geniculate body (MG) mediate a high-speed transfer of acoustic information to the auditory thalamus. Anderson et al. (2006) used anterograde tracers to label the projection from the dorsal CN (DCN) to the MG in guinea pigs. We examined this pathway with retrograde tracers. The results confirm a pathway from the DCN, originating primarily from the deep layers. Labeled cells included a few giant cells and a larger number of small cells of unknown type. Many more labeled cells were present in the ventral CN (VCN). These cells, identifiable as multipolar (stellate) or small cells, were found throughout much of the VCN. Most of the labeled cells were located contralateral to the injection site. The CN to MG pathway bypasses the inferior colliculus (IC), where most ascending auditory information is processed. Anderson et al. (2006) hypothesized that CN-MG axons are collaterals of axons that reach the IC. We tested this hypothesis by injecting different fluorescent tracers into the MG and IC and examining the CN for double-labeled cells. After injections on the same side of the brain, double-labeled cells were found in the contralateral VCN and DCN. Most double-labeled cells were in the VCN, where they accounted for up to 37% of the cells labeled by the MG injection. We conclude that projections from the CN to the MG originate from the VCN and, less so, from the DCN. A significant proportion of the cells send a collateral projection to the IC. Presumably, the collateral projections send the same information to both the MG and the IC. The results suggest that T-stellate cells of the VCN are a major source of direct projections to the auditory thalamus.

  18. Neural sensitivity to statistical regularities as a fundamental biological process that underlies auditory learning: the role of musical practice.

    PubMed

    François, Clément; Schön, Daniele

    2014-02-01

    There is increasing evidence that humans and other nonhuman mammals are sensitive to the statistical structure of auditory input. Indeed, neural sensitivity to statistical regularities seems to be a fundamental biological property underlying auditory learning. In the case of speech, statistical regularities play a crucial role in the acquisition of several linguistic features, from phonotactic rules to more complex morphosyntactic rules. Interestingly, a similar sensitivity has been shown with non-speech streams: sequences of sounds changing in frequency or timbre can be segmented on the sole basis of conditional probabilities between adjacent sounds. We recently ran a set of cross-sectional and longitudinal experiments showing that merging music and speech information in song facilitates stream segmentation and, further, that musical practice enhances sensitivity to statistical regularities in speech at both neural and behavioral levels. Based on recent findings showing the involvement of a fronto-temporal network in speech segmentation, we defend the idea that the enhanced auditory learning observed in musicians originates via at least three distinct pathways: enhanced low-level auditory processing, enhanced phono-articulatory mapping via the left inferior frontal gyrus and premotor cortex, and increased functional connectivity within the audio-motor network. Finally, we discuss how these data predict a beneficial use of music for optimizing speech acquisition in both normal and impaired populations. Copyright © 2013 Elsevier B.V. All rights reserved.
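
    Segmentation "on the sole basis of conditional probabilities between adjacent sounds" can be illustrated by computing transitional probabilities over a sequence and placing boundaries where they dip (a toy sketch of the statistical-learning principle, not the authors' method):

```python
from collections import Counter

def transitional_probs(stream):
    """P(next | current) for each adjacent pair in the sequence."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

def segment(stream, tp, threshold):
    """Insert a boundary wherever the transitional probability dips
    below the threshold (low TP suggests a unit boundary)."""
    units, unit = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if tp[(a, b)] < threshold:
            units.append(unit)
            unit = []
        unit.append(b)
    units.append(unit)
    return units
```

    Within a recurring "word" the transitional probabilities stay high, while across word boundaries they drop, so thresholding them recovers the units.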

  19. Visual motion disambiguation by a subliminal sound.

    PubMed

    Dufour, Andre; Touzalin, Pascale; Moessinger, Michèle; Brochard, Renaud; Després, Olivier

    2008-09-01

    There is growing interest in the effect of sound on visual motion perception. One model involves the illusion created when two identical objects moving towards each other on a two-dimensional visual display can be seen to either bounce off or stream through each other. Previous studies show that the large bias normally seen toward the streaming percept can be modulated by the presentation of an auditory event at the moment of coincidence. However, no reports to date provide sufficient evidence to indicate whether the sound bounce-inducing effect is due to a perceptual binding process or merely to an explicit inference resulting from the transient auditory stimulus resembling a physical collision of two objects. In the present study, we used a novel experimental design in which a subliminal sound was presented either 150 ms before, at, or 150 ms after the moment of coincidence of two disks moving towards each other. The results showed that there was an increased perception of bouncing (rather than streaming) when the subliminal sound was presented at or 150 ms after the moment of coincidence compared to when no sound was presented. These findings provide the first empirical demonstration that activation of the human auditory system without reaching consciousness affects the perception of an ambiguous visual motion display.

  20. Still holding after all these years: An action-perception dissociation in patient DF.

    PubMed

    Ganel, Tzvi; Goodale, Melvyn A

    2017-09-23

    Patient DF, who has bilateral damage in the ventral visual stream, is perhaps the best known individual with visual form agnosia in the world, and has been the focus of scores of research papers over the past twenty-five years. The remarkable dissociation she exhibits between a profound deficit in perceptual report and a preserved ability to generate relatively normal visuomotor behaviour was an early cornerstone of Goodale and Milner's (1992) two visual systems hypothesis. In recent years, however, there has been a greater emphasis on the damage that is evident in the posterior regions of her parietal cortex in both hemispheres. Deficits in several aspects of visuomotor control in the visual periphery have been demonstrated, leading some researchers to conclude that the double dissociation between vision-for-perception and vision-for-action in DF and patients with classic optic ataxia can no longer be assumed to be strong evidence for the division of labour between the dorsal and ventral streams of visual processing. In this short review, we argue that this is not the case. Indeed, after evaluating DF's performance and the location of her brain lesions, a clear picture of a double dissociation between DF and patients with optic ataxia is revealed. More than a quarter of a century after the initial presentation of DF's unique case, she continues to provide compelling evidence for the idea that the ventral stream is critical for the perception of the shape and orientation of objects but not the visual control of skilled actions directed at those objects. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Enhanced Fine-Form Perception Does Not Contribute to Gestalt Face Perception in Autism Spectrum Disorder

    PubMed Central

    Maekawa, Toshihiko; Miyanaga, Yuka; Takahashi, Kenji; Takamiya, Naomi; Ogata, Katsuya; Tobimatsu, Shozo

    2017-01-01

    Individuals with autism spectrum disorder (ASD) show superior performance in processing fine detail, but often exhibit impaired gestalt face perception. The ventral visual stream from the primary visual cortex (V1) to the fusiform gyrus (V4) plays an important role in form (including faces) and color perception. The aim of this study was to investigate how the ventral stream is functionally altered in ASD. Visual evoked potentials were recorded in high-functioning ASD adults (n = 14) and typically developing (TD) adults (n = 14). We used three types of visual stimuli: isoluminant chromatic (red/green, RG) gratings, high-contrast achromatic (black/white, BW) gratings with high spatial frequency (HSF, 5.3 cycles/degree), and face (neutral, happy, and angry faces) stimuli. Compared with TD controls, ASD adults exhibited longer N1 latency for RG, shorter N1 latency for BW, and shorter P1 latency, but prolonged N170 latency, for face stimuli. Moreover, a greater difference in latency between P1 and N170, or between N1 for BW and N170 (i.e., a prolongation of cortico-cortical conduction time between V1 and V4), was observed in ASD adults. These findings indicate that ASD adults have enhanced fine-form (local HSF) processing, but impaired color processing at V1. In addition, they exhibit impaired gestalt face processing due to deficits in integrating local HSF facial information at V4. Thus, altered ventral stream function may contribute to abnormal social processing in ASD. PMID:28146575

  2. Fast and Famous: Looking for the Fastest Speed at Which a Face Can be Recognized

    PubMed Central

    Barragan-Jason, Gladys; Besson, Gabriel; Ceccaldi, Mathieu; Barbeau, Emmanuel J.

    2012-01-01

    Face recognition is supposed to be fast. However, the actual speed at which faces can be recognized remains unknown. To address this issue, we report two experiments run with speed constraints. In both experiments, famous faces had to be recognized among unknown ones using a large set of stimuli to prevent pre-activation of features which would speed up recognition. In the first experiment (31 participants), recognition of famous faces was investigated using a rapid go/no-go task. In the second experiment, 101 participants performed a highly time constrained recognition task using the Speed and Accuracy Boosting procedure. Results indicate that the fastest speed at which a face can be recognized is around 360–390 ms. Such latencies are about 100 ms longer than the latencies recorded in similar tasks in which subjects have to detect faces among other stimuli. We discuss which model of activation of the visual ventral stream could account for such latencies. These latencies are not consistent with a purely feed-forward pass of activity throughout the visual ventral stream. An alternative is that face recognition relies on the core network underlying face processing identified in fMRI studies (OFA, FFA, and pSTS) and reentrant loops to refine face representation. However, the model of activation favored is that of an activation of the whole visual ventral stream up to anterior areas, such as the perirhinal cortex, combined with parallel and feed-back processes. Further studies are needed to assess which of these three models of activation can best account for face recognition. PMID:23460051

  3. Coupling between Theta Oscillations and Cognitive Control Network during Cross-Modal Visual and Auditory Attention: Supramodal vs Modality-Specific Mechanisms.

    PubMed

    Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T

    2016-01-01

    Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of cognitive control operations in visual attention paradigms. However, the synchronization source of these signals, as defined by fMRI BOLD activity, and the extent to which theta oscillations play a role in multimodal attention remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Negative correlations were localized to cortical regions associated with the default mode network (DMN), and positive correlations to ventral premotor areas. Modality-associated attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in a fronto-parietal area that was not observed in the auditory condition. A positive correlation of theta and BOLD activity was observed in auditory cortex, while a negative correlation of theta and BOLD activity was observed in visual cortex during auditory attention. The data support a supramodal interaction of theta activity with DMN function, and modality-associated processes within fronto-parietal networks related to top-down, theta-related cognitive control in cross-modal visual attention. On the other hand, in sensory cortices there are opposing effects of theta activity during cross-modal auditory attention.

  4. Expression of Glutamate and Inhibitory Amino Acid Vesicular Transporters in the Rodent Auditory Brainstem

    PubMed Central

    Ito, Tetsufumi; Bishop, Deborah C.; Oliver, Douglas L.

    2011-01-01

    Glutamate is the main excitatory neurotransmitter in the auditory system, but associations between glutamatergic neuronal populations and the distribution of their synaptic terminations have been difficult to establish. Different subsets of glutamatergic terminals employ one of three vesicular glutamate transporters (VGLUT) to load synaptic vesicles. Recently, VGLUT1 and VGLUT2 terminals were found to have different patterns of organization in the inferior colliculus, suggesting that there are different types of glutamatergic neurons in the brainstem auditory system with projections to the colliculus. To positively identify VGLUT-expressing neurons as well as inhibitory neurons in the auditory brainstem, we used in situ hybridization to identify the mRNA for VGLUT1, VGLUT2, and VIAAT (the vesicular inhibitory amino acid transporter used by GABAergic and glycinergic terminals). Similar expression patterns were found in subsets of glutamatergic and inhibitory neurons in the auditory brainstem and thalamus of adult rats and mice. Four patterns of gene expression were seen in individual neurons. 1) VGLUT2 expressed alone was the prevalent pattern. 2) VGLUT1 co-expressed with VGLUT2 was seen in scattered neurons in most nuclei but was common in the medial geniculate body and ventral cochlear nucleus. 3) VGLUT1 expressed alone was found only in granule cells. 4) VIAAT expression was common in most nuclei but dominated in some. These data show that the expression of the VGLUT1/2 and VIAAT genes can identify different subsets of auditory neurons. This may facilitate the identification of different components in auditory circuits. PMID:21165977

  5. Effects of Nicotine and Nicotinic Antagonists on the Acoustic Startle Response and on Pre-Pulse Inhibition in Rats

    DTIC Science & Technology

    1996-06-07

    the auditory nerve, the ventral cochlear nucleus, nuclei of the lateral lemniscus, nucleus reticularis pontis caudalis, spinal neuron, and lower... nucleus, nuclei of the lateral lemniscus, nucleus reticularis pontis caudalis, hippocampus, and striatum (Davis, et al., 1982; Swerdlow, et al., 1992...Davis, M. (1985) Cocaine effects on acoustic startle and startle elicited electrically from cochlear nucleus. Psychopharmacology, 87, 396-399 James

  6. Prevention and Treatment of Noise-Induced Tinnitus

    DTIC Science & Technology

    2012-07-01

    process of completing the normative data base(s) of VGLUT1, VAT and VGAT immunostaining in the rat AVCN and DCN that will allow assessment of changes under...our experimental conditions. Initial results indicate some loss of VGLUT1 immunolabeled auditory nerve terminals in the ventral cochlear nucleus...Research Accomplishments for TASK 3: Test the hypothesis that the loss of AN terminals (marked by VGLUT1 immunolabel) on neurons in the AVCN and

  7. What You See Isn’t Always What You Get: Auditory Word Signals Trump Consciously Perceived Words in Lexical Access

    PubMed Central

    Ostrand, Rachel; Blumstein, Sheila E.; Ferreira, Victor S.; Morgan, James L.

    2016-01-01

    Human speech perception often includes both an auditory and visual component. A conflict in these signals can result in the McGurk illusion, in which the listener perceives a fusion of the two streams, implying that information from both has been integrated. We report two experiments investigating whether auditory-visual integration of speech occurs before or after lexical access, and whether the visual signal influences lexical access at all. Subjects were presented with McGurk or Congruent primes and performed a lexical decision task on related or unrelated targets. Although subjects perceived the McGurk illusion, McGurk and Congruent primes with matching real-word auditory signals equivalently primed targets that were semantically related to the auditory signal, but not targets related to the McGurk percept. We conclude that the time course of auditory-visual integration is dependent on the lexicality of the auditory and visual input signals, and that listeners can lexically access one word and yet consciously perceive another. PMID:27011021

  8. Cortico‐cortical connectivity within ferret auditory cortex

    PubMed Central

    Bajo, Victoria M.; Nodal, Fernando R.; King, Andrew J.

    2015-01-01

    Despite numerous studies of auditory cortical processing in the ferret (Mustela putorius), very little is known about the connections between the different regions of the auditory cortex that have been characterized cytoarchitectonically and physiologically. We examined the distribution of retrograde and anterograde labeling after injecting tracers into one or more regions of ferret auditory cortex. Injections of different tracers at frequency‐matched locations in the core areas, the primary auditory cortex (A1) and anterior auditory field (AAF), of the same animal revealed the presence of reciprocal connections with overlapping projections to and from discrete regions within the posterior pseudosylvian and suprasylvian fields (PPF and PSF), suggesting that these connections are frequency specific. In contrast, projections from the primary areas to the anterior dorsal field (ADF) on the anterior ectosylvian gyrus were scattered and non‐overlapping, consistent with the non‐tonotopic organization of this field. The relative strength of the projections originating in each of the primary fields differed, with A1 predominantly targeting the posterior bank fields PPF and PSF, which in turn project to the ventral posterior field, whereas AAF projects more heavily to the ADF, which then projects to the anteroventral field and the pseudosylvian sulcal cortex. These findings suggest that parallel anterior and posterior processing networks may exist, although the connections between different areas often overlap and interactions were present at all levels. J. Comp. Neurol. 523:2187–2210, 2015. © 2015 Wiley Periodicals, Inc. PMID:25845831

  9. Local and Global Auditory Processing: Behavioral and ERP Evidence

    PubMed Central

    Sanders, Lisa D.; Poeppel, David

    2007-01-01

    Differential processing of local and global visual features is well established. Global precedence effects, differences in event-related potentials (ERPs) elicited when attention is focused on local versus global levels, and hemispheric specialization for local and global features all indicate that relative scale of detail is an important distinction in visual processing. Observing analogous differential processing of local and global auditory information would suggest that scale of detail is a general organizational principle of the brain. However, to date the research on auditory local and global processing has primarily focused on music perception or on the perceptual analysis of relatively higher and lower frequencies. The study described here suggests that temporal aspects of auditory stimuli better capture the local-global distinction. By combining short (40 ms) frequency modulated tones in series to create global auditory patterns (500 ms), we independently varied whether pitch increased or decreased over short time spans (local) and longer time spans (global). Accuracy and reaction time measures revealed better performance for global judgments and asymmetric interference effects, both modulated by the amount of pitch change. ERPs recorded while participants listened to identical sounds and indicated the direction of pitch change at the local or global levels provided evidence for differential processing similar to that found in ERP studies employing hierarchical visual stimuli. ERP measures failed to provide evidence for lateralization of local and global auditory perception, but differences in scalp distributions suggest preferential processing in more ventral and dorsal areas, respectively. PMID:17113115
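    The hierarchical stimulus design described above can be sketched in code: short FM segments whose within-segment glide sets the local pitch direction, concatenated so that successive segment centre frequencies set the global direction. All parameter values below (sampling rate, frequencies, step sizes, segment count) are illustrative assumptions, not the study's actual settings.

    ```python
    import numpy as np

    fs = 44100                       # sampling rate (Hz); illustrative
    seg_dur, n_segs = 0.040, 12      # 40-ms local segments forming a ~500-ms pattern
    t = np.arange(0, seg_dur, 1/fs)

    def fm_segment(f0, f1):
        """Short tone whose pitch glides linearly from f0 to f1 (the local direction)."""
        inst_freq = np.linspace(f0, f1, t.size)
        phase = 2 * np.pi * np.cumsum(inst_freq) / fs
        return np.sin(phase)

    def pattern(local_up, global_up, f_start=1000.0, step=30.0, sweep=20.0):
        """Concatenate segments whose centre frequencies rise or fall (the global direction)."""
        centres = f_start + (step if global_up else -step) * np.arange(n_segs)
        d = sweep if local_up else -sweep
        return np.concatenate([fm_segment(c - d/2, c + d/2) for c in centres])

    # Local and global pitch direction varied independently, e.g. a conflicting trial:
    x = pattern(local_up=True, global_up=False)
    ```

    Crossing `local_up` and `global_up` yields the congruent and conflicting conditions that allow local and global judgments to be compared on identical sounds.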

  10. The Non-Lemniscal Auditory Cortex in Ferrets: Convergence of Corticotectal Inputs in the Superior Colliculus

    PubMed Central

    Bajo, Victoria M.; Nodal, Fernando R.; Bizley, Jennifer K.; King, Andrew J.

    2010-01-01

    Descending cortical inputs to the superior colliculus (SC) contribute to the unisensory response properties of the neurons found there and are critical for multisensory integration. However, little is known about the relative contribution of different auditory cortical areas to this projection or the distribution of their terminals in the SC. We characterized this projection in the ferret by injecting tracers in the SC and auditory cortex. Large pyramidal neurons were labeled in layer V of different parts of the ectosylvian gyrus after tracer injections in the SC. Those cells were most numerous in the anterior ectosylvian gyrus (AEG), and particularly in the anterior ventral field, which receives both auditory and visual inputs. Labeling was also found in the posterior ectosylvian gyrus (PEG), predominantly in the tonotopically organized posterior suprasylvian field. Profuse anterograde labeling was present in the SC following tracer injections at the site of acoustically responsive neurons in the AEG or PEG, with terminal fields being both more prominent and clustered for inputs originating from the AEG. Terminals from both cortical areas were located throughout the intermediate and deep layers, but were most concentrated in the posterior half of the SC, where peripheral stimulus locations are represented. No inputs were identified from primary auditory cortical areas, although some labeling was found in the surrounding sulci. Our findings suggest that higher level auditory cortical areas, including those involved in multisensory processing, may modulate SC function via their projections into its deeper layers. PMID:20640247

  11. The Functional Neuroanatomy of Human Face Perception.

    PubMed

    Grill-Spector, Kalanit; Weiner, Kevin S; Kay, Kendrick; Gomez, Jesse

    2017-09-15

    Face perception is critical for normal social functioning and is mediated by a network of regions in the ventral visual stream. In this review, we describe recent neuroimaging findings regarding the macro- and microscopic anatomical features of the ventral face network, the characteristics of white matter connections, and basic computations performed by population receptive fields within face-selective regions composing this network. We emphasize the importance of the neural tissue properties and white matter connections of each region, as these anatomical properties may be tightly linked to the functional characteristics of the ventral face network. We end by considering how empirical investigations of the neural architecture of the face network may inform the development of computational models and shed light on how computations in the face network enable efficient face perception.

  12. The role of human ventral visual cortex in motion perception

    PubMed Central

    Saygin, Ayse P.; Lorenzi, Lauren J.; Egan, Ryan; Rees, Geraint; Behrmann, Marlene

    2013-01-01

    Visual motion perception is fundamental to many aspects of visual perception. Visual motion perception has long been associated with the dorsal (parietal) pathway and the involvement of the ventral ‘form’ (temporal) visual pathway has not been considered critical for normal motion perception. Here, we evaluated this view by examining whether circumscribed damage to ventral visual cortex impaired motion perception. The perception of motion in basic, non-form tasks (motion coherence and motion detection) and complex structure-from-motion, for a wide range of motion speeds, all centrally displayed, was assessed in five patients with a circumscribed lesion to either the right or left ventral visual pathway. Patients with a right, but not with a left, ventral visual lesion displayed widespread impairments in central motion perception even for non-form motion, for both slow and for fast speeds, and this held true independent of the integrity of areas MT/V5, V3A or parietal regions. In contrast with the traditional view in which only the dorsal visual stream is critical for motion perception, these novel findings implicate a more distributed circuit in which the integrity of the right ventral visual pathway is also necessary even for the perception of non-form motion. PMID:23983030

  13. The auditory cross-section (AXS) test battery: A new way to study afferent/efferent relations linking body periphery (ear, voice, heart) with brainstem and cortex

    NASA Astrophysics Data System (ADS)

    Lauter, Judith

    2002-05-01

    Several noninvasive methods are available for studying the neural bases of human sensory-motor function, but their cost is prohibitive for many researchers and clinicians. The auditory cross-section (AXS) test battery utilizes relatively inexpensive methods, yet yields data that are at least equivalent, if not superior in some applications, to those generated by more expensive technologies. The acronym emphasizes access to axes; the battery makes it possible to assess dynamic physiological relations along all three body-brain axes: rostro-caudal (afferent/efferent), dorso-ventral, and right-left, on an individually-specific basis, extending from cortex to the periphery. For auditory studies, a three-level physiological ear-to-cortex profile is generated, utilizing (1) quantitative electroencephalography (qEEG); (2) the repeated evoked potentials version of the auditory brainstem response (REPs/ABR); and (3) otoacoustic emissions (OAEs). Battery procedures will be explained, and sample data presented illustrating correlated multilevel changes in ear, voice, heart, brainstem, and cortex in response to circadian rhythms, and challenges with substances such as antihistamines and Ritalin. Potential applications for the battery include studies of central auditory processing, reading problems, hyperactivity, neural bases of voice and speech motor control, neurocardiology, individually-specific responses to medications, and the physiological bases of tinnitus, hyperacusis, and related treatments.

  14. Dual Coding of Frequency Modulation in the Ventral Cochlear Nucleus.

    PubMed

    Paraouty, Nihaad; Stasiak, Arkadiusz; Lorenzi, Christian; Varnet, Léo; Winter, Ian M

    2018-04-25

    Frequency modulation (FM) is a common acoustic feature of natural sounds and is known to play a role in robust sound source recognition. Auditory neurons show precise stimulus-synchronized discharge patterns that may be used for the representation of low-rate FM. However, it remains unclear whether this representation is based on synchronization to slow temporal envelope (ENV) cues resulting from cochlear filtering or phase locking to faster temporal fine structure (TFS) cues. To investigate the plausibility of those encoding schemes, single units of the ventral cochlear nucleus of guinea pigs of either sex were recorded in response to sine FM tones centered at the unit's best frequency (BF). The results show that, in contrast to high-BF units, for modulation depths within the receptive field, low-BF units (<4 kHz) demonstrate good phase locking to TFS. For modulation depths extending beyond the receptive field, the discharge patterns follow the ENV and fluctuate at the modulation rate. The receptive field proved to be a good predictor of the ENV responses for most primary-like and chopper units. The current in vivo data also reveal a high level of diversity in responses across unit types. TFS cues are mainly conveyed by low-frequency and primary-like units and ENV cues by chopper and onset units. The diversity of responses exhibited by cochlear nucleus neurons provides a neural basis for a dual-coding scheme of FM in the brainstem based on both ENV and TFS cues. SIGNIFICANCE STATEMENT Natural sounds, including speech, convey informative temporal modulations in frequency. Understanding how the auditory system represents those frequency modulations (FM) has important implications as robust sound source recognition depends crucially on the reception of low-rate FM cues. 
Here, we recorded 115 single-unit responses from the ventral cochlear nucleus in response to FM and provide the first physiological evidence of a dual-coding mechanism of FM via synchronization to temporal envelope cues and phase locking to temporal fine structure cues. We also demonstrate a diversity of neural responses with different coding specializations. These results support the dual-coding scheme proposed by psychophysicists to account for FM sensitivity in humans and provide new insights on how this might be implemented in the early stages of the auditory pathway. Copyright © 2018 the authors 0270-6474/18/384123-15$15.00/0.
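    The ENV versus TFS distinction probed in this study can be illustrated offline with a Hilbert decomposition: a minimal sketch, assuming a sine-FM tone and a single band-pass filter standing in for one cochlear channel tuned off the carrier so that FM converts to amplitude fluctuations. All parameters are hypothetical.

    ```python
    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    fs = 16000                            # sampling rate (Hz); illustrative
    t = np.arange(0, 1.0, 1/fs)
    fc, fm, beta = 1000.0, 8.0, 100.0     # carrier, FM rate, frequency deviation (Hz)

    # Sine FM tone with instantaneous frequency fc + beta*sin(2*pi*fm*t)
    x = np.sin(2*np.pi*fc*t - (beta/fm)*np.cos(2*np.pi*fm*t))

    # One off-centre band-pass "cochlear" filter: the FM sweeping through the
    # passband produces amplitude fluctuations (FM-to-AM conversion)
    b, a = butter(2, [1050/(fs/2), 1250/(fs/2)], btype='band')
    y = filtfilt(b, a, x)

    analytic = hilbert(y)
    env = np.abs(analytic)                # temporal envelope (ENV) cue
    tfs = np.cos(np.angle(analytic))      # temporal fine structure (TFS) cue
    ```

    In this toy picture, a unit synchronizing to `env` fluctuates at the modulation rate, whereas a unit phase locking to `tfs` follows the carrier cycles, mirroring the two coding schemes contrasted in the abstract.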

  15. The Contribution of Brainstem and Cerebellar Pathways to Auditory Recognition

    PubMed Central

    McLachlan, Neil M.; Wilson, Sarah J.

    2017-01-01

    The cerebellum has been known to play an important role in motor functions for many years. More recently its role has been expanded to include a range of cognitive and sensory-motor processes, and substantial neuroimaging and clinical evidence now points to cerebellar involvement in most auditory processing tasks. In particular, an increase in the size of the cerebellum over recent human evolution has been attributed in part to the development of speech. Despite this, the auditory cognition literature has largely overlooked afferent auditory connections to the cerebellum that have been implicated in acoustically conditioned reflexes in animals, and could subserve speech and other auditory processing in humans. This review expands our understanding of auditory processing by incorporating cerebellar pathways into the anatomy and functions of the human auditory system. We reason that plasticity in the cerebellar pathways underpins implicit learning of spectrotemporal information necessary for sound and speech recognition. Once learnt, this information automatically recognizes incoming auditory signals and predicts likely subsequent information based on previous experience. Since sound recognition processes involving the brainstem and cerebellum initiate early in auditory processing, learnt information stored in cerebellar memory templates could then support a range of auditory processing functions such as streaming, habituation, the integration of auditory feature information such as pitch, and the recognition of vocal communications. PMID:28373850

  16. Object representation in the human auditory system

    PubMed Central

    Winkler, István; van Zuijen, Titia L.; Sussman, Elyse; Horváth, János; Näätänen, Risto

    2010-01-01

    One important principle of object processing is exclusive allocation. Any part of the sensory input, including the border between two objects, can only belong to one object at a time. We tested whether tones forming a spectro-temporal border between two sound patterns can belong to both patterns at the same time. Sequences were composed of low-, intermediate- and high-pitched tones. Tones were delivered with short onset-to-onset intervals causing the high and low tones to automatically form separate low and high sound streams. The intermediate-pitch tones could be perceived as part of either one or the other stream, but not both streams at the same time. Thus these tones formed a pitch 'border' between the two streams. The tones were presented in a fixed, cyclically repeating order. Linking the intermediate-pitch tones with the high or the low tones resulted in the perception of two different repeating tonal patterns. Participants were instructed to maintain perception of one of the two tone patterns throughout the stimulus sequences. Occasional changes violated either the selected or the alternative tone pattern, but not both at the same time. We found that only violations of the selected pattern elicited the mismatch negativity event-related potential, indicating that only this pattern was represented in the auditory system. This result suggests that individual sounds are processed as part of only one auditory pattern at a time. Thus tones forming a spectro-temporal border are exclusively assigned to one sound object at any given time, as are spatio-temporal borders in vision. PMID:16836636

  17. Auditory Magnetoencephalographic Frequency-Tagged Responses Mirror the Ongoing Segmentation Processes Underlying Statistical Learning.

    PubMed

    Farthouat, Juliane; Franco, Ana; Mary, Alison; Delpouve, Julie; Wens, Vincent; Op de Beeck, Marc; De Tiège, Xavier; Peigneux, Philippe

    2017-03-01

    Humans are highly sensitive to statistical regularities in their environment. This phenomenon, usually referred to as statistical learning, is most often assessed using post-learning behavioural measures that are limited by a lack of sensitivity and do not monitor the temporal dynamics of learning. In the present study, we used magnetoencephalographic frequency-tagged responses to investigate the neural sources and temporal development of the ongoing brain activity that supports the detection of regularities embedded in auditory streams. Participants passively listened to statistical streams in which tones were grouped as triplets, and to random streams in which tones were randomly presented. Results show that during exposure to statistical (vs. random) streams, tritone frequency-related responses reflecting the learning of regularities embedded in the stream increased in the left supplementary motor area and left posterior superior temporal sulcus (pSTS), whereas tone frequency-related responses decreased in the right angular gyrus and right pSTS. Tritone frequency-related responses rapidly developed to reach significance after 3 min of exposure. These results suggest that the incidental extraction of novel regularities is subtended by a gradual shift from rhythmic activity reflecting individual tone succession toward rhythmic activity synchronised with triplet presentation, and that these rhythmic processes are subtended by distinct neural sources.
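    The frequency-tagging logic can be illustrated with synthetic data: tones presented at a fixed rate and grouped into triplets should, once the triplet structure is learned, produce spectral energy at one third of the tone rate. A toy sketch follows; the sampling rate, presentation rates, and amplitudes are all hypothetical, not the study's values.

    ```python
    import numpy as np

    fs = 250.0                        # "EEG" sampling rate (Hz); hypothetical
    tone_rate = 3.3                   # tones per second; hypothetical
    triplet_rate = tone_rate / 3.0    # triplets per second
    t = np.arange(0, 60, 1/fs)        # one minute of synthetic recording

    # Synthetic response: activity phase-locked to both tone and triplet rates + noise
    rng = np.random.default_rng(0)
    sig = (0.5 * np.sin(2*np.pi*tone_rate*t)
           + 1.0 * np.sin(2*np.pi*triplet_rate*t)
           + rng.normal(0.0, 1.0, t.size))

    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(t.size, 1/fs)

    def amp_at(f):
        """Spectral amplitude at the bin closest to frequency f."""
        return spec[np.argmin(np.abs(freqs - f))]

    # Learning-related activity appears as a peak at the triplet rate that
    # stands out against neighbouring (noise-only) bins
    tag, neighbour = amp_at(triplet_rate), amp_at(triplet_rate + 0.35)
    ```

    Tracking how the triplet-rate peak grows over successive analysis windows is what allows the method to monitor the temporal dynamics of learning, rather than relying on a single post-learning test.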

  18. Auditory Multi-Stability: Idiosyncratic Perceptual Switching Patterns, Executive Functions and Personality Traits

    PubMed Central

    Farkas, Dávid; Denham, Susan L.; Bendixen, Alexandra; Tóth, Dénes; Kondo, Hirohito M.; Winkler, István

    2016-01-01

    Multi-stability refers to the phenomenon of perception stochastically switching between possible interpretations of an unchanging stimulus. Despite considerable variability, individuals show stable idiosyncratic patterns of switching between alternative perceptions in the auditory streaming paradigm. We explored correlates of the individual switching patterns with executive functions, personality traits, and creativity. The main dimensions on which individual switching patterns differed from each other were identified using multidimensional scaling. Individuals with high scores on the dimension explaining the largest portion of the inter-individual variance switched more often between the alternative perceptions than those with low scores. They also perceived the most unusual interpretation more often, and experienced all perceptual alternatives with a shorter delay from stimulus onset. The ego-resiliency personality trait, which reflects a tendency for adaptive flexibility and experience seeking, was significantly positively related to this dimension. Taking these results together, we suggest that this dimension may reflect the individual’s tendency for exploring the auditory environment. Executive functions were significantly related to some of the variables describing global properties of the switching patterns, such as the average number of switches. Thus, individual patterns of perceptual switching in the auditory streaming paradigm are related to some personality traits and executive functions. PMID:27135945

  19. Comparison of auditory stream segregation in sighted and early blind individuals.

    PubMed

    Boroujeni, Fatemeh Moghadasi; Heidari, Fatemeh; Rouzbahani, Masoumeh; Kamali, Mohammad

    2017-01-18

    An important characteristic of the auditory system is the capacity to analyze complex sounds and make decisions on the source of the constituent parts of these sounds. Blind individuals compensate for the lack of visual information by increased input from other sensory modalities, including increased auditory information. The purpose of the current study was to compare the fission boundary (FB) threshold of sighted and early blind individuals across spectral conditions using a psychoacoustic auditory stream segregation (ASS) test. This study was conducted on 16 sighted and 16 early blind adult individuals. The stimuli were pure tones A and B presented sequentially in a repeating ABA-ABA triplet pattern at an intensity of 40 dB SL. The frequency of tone A served as the base, with values of 500, 1000, and 2000 Hz. Tone B was presented at frequencies 4-100% above the base frequency. Blind individuals had significantly lower FB thresholds than sighted people. The FB was independent of the frequency of tone A when expressed as a difference in the number of equivalent rectangular bandwidths (ERBs). Early blindness may increase perceptual separation of acoustic stimuli, helping to form accurate representations of the world. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
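    The ERB-based comparison mentioned above can be reproduced with the standard Glasberg & Moore (1990) ERB-rate (Cam) scale; a short sketch, where the 10% frequency separation is an illustrative value rather than a threshold reported in this study:

    ```python
    import math

    def erb_number(f_hz):
        """Glasberg & Moore (1990) ERB-rate scale: frequency (Hz) -> ERB-number (Cam)."""
        return 21.4 * math.log10(4.37 * f_hz / 1000.0 + 1.0)

    def fb_in_erbs(f_a, percent_above):
        """Express an A/B frequency separation as an ERB-number difference."""
        f_b = f_a * (1.0 + percent_above / 100.0)
        return erb_number(f_b) - erb_number(f_a)

    # The same percentage separation maps onto broadly similar ERB-number
    # differences across the 500-2000 Hz base frequencies used in the study
    for f_a in (500.0, 1000.0, 2000.0):
        print(f"{f_a:.0f} Hz: {fb_in_erbs(f_a, 10.0):.2f} ERBs")
    ```

    Expressing the fission boundary this way normalizes for cochlear frequency resolution, which is why thresholds that grow with base frequency in Hz can come out roughly constant in ERB units.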

  20. Metagonimoides oregonensis (Heterophyidae: Digenea) infection in Pleurocerid snails and Desmognathus quadramaculatus salamander larvae in Southern Appalachian streams.

    PubMed

    Belden, Lisa K; Peterman, William E; Smith, Stephen A; Brooks, Lauren R; Benfield, E F; Black, Wesley P; Yang, Zhaomin; Wojdak, Jeremy M

    2012-08-01

    Metagonimoides oregonensis (Heterophyidae) is a little-known digenetic trematode that uses raccoons and possibly mink as definitive hosts, and stream snails and amphibians as intermediate hosts. Some variation in the life cycle and adult morphology in western and eastern populations has been previously noted. In the southern Appalachians, Pleurocera snails and stream salamanders, e.g., Desmognathus spp., are used as intermediate hosts in the life cycle. We completed a series of studies in this system examining some aspects of larval trematode morphology and first and second intermediate host use. Molecular sequencing of the 28S rDNA of cercariae in our survey placed them clearly within the heterophyid family. However, light and scanning electron microscopy revealed both lateral and dorso-ventral finfolds on the cercariae in our region, whereas original descriptions of M. oregonensis cercariae from the west coast indicate only a dorso-ventral finfold, so further work on the systematics of this group may be warranted. A survey of the first intermediate host, Pleurocera proxima, from 7 streams in the region identified only M. oregonensis, virgulate-type cercariae, and cotylomicrocercous-type cercariae in the streams, with M. oregonensis having the highest prevalence and being the only type present that uses amphibians as second intermediate hosts. Based on clearing and staining of 6 Desmognathus quadramaculatus salamander larvae, we found that individual salamanders could have over 600 metacercariae, which form between muscle fibers throughout the body. Histological observations suggest that the metacercariae do not cause excessive tissue damage or inflammation, and likely persist through metamorphosis, thereby transmitting potentially large numbers of worms to definitive host raccoons foraging along streams.

  1. Structural and functional neural correlates of music perception.

    PubMed

    Limb, Charles J

    2006-04-01

    This review article highlights state-of-the-art functional neuroimaging studies and demonstrates the novel use of music as a tool for the study of human auditory brain structure and function. Music is a unique auditory stimulus with properties that make it a compelling tool with which to study both human behavior and, more specifically, the neural elements involved in the processing of sound. Functional neuroimaging techniques represent a modern and powerful method of investigation into neural structure and functional correlates in the living organism. These methods have demonstrated a close relationship between the neural processing of music and language, both syntactically and semantically. Greater neural activity and increased volume of gray matter in Heschl's gyrus have been associated with musical aptitude. Activation of Broca's area, a region traditionally considered to subserve language, is important in interpreting whether a note is on or off key. The planum temporale shows asymmetries that are associated with the phenomenon of perfect pitch. Functional imaging studies have also demonstrated activation of primitive emotional centers such as ventral striatum, midbrain, amygdala, orbitofrontal cortex, and ventral medial prefrontal cortex in people listening to moving musical passages. In addition, studies of melody and rhythm perception have elucidated mechanisms of hemispheric specialization. These studies show the power of music and functional neuroimaging to provide singularly useful tools for the study of brain structure and function.

  2. Asymmetric right/left encoding of emotions in the human subthalamic nucleus

    PubMed Central

    Eitan, Renana; Shamir, Reuben R.; Linetsky, Eduard; Rosenbluh, Ovadya; Moshel, Shay; Ben-Hur, Tamir; Bergman, Hagai; Israel, Zvi

    2013-01-01

    Emotional processing is lateralized to the non-dominant brain hemisphere. However, there is no clear spatial model for lateralization of emotional domains in the basal ganglia. The subthalamic nucleus (STN), an input structure in the basal ganglia network, plays a major role in the pathophysiology of Parkinson's disease (PD). This role is probably not limited to the motor deficits of PD, but may also span the emotional and cognitive deficits commonly observed in PD patients. Beta oscillations (12–30 Hz), the electrophysiological signature of PD, are restricted to the dorsolateral part of the STN that corresponds to the anatomically defined sensorimotor STN. The more medial, more anterior and more ventral parts of the STN are thought to correspond to the anatomically defined limbic and associative territories of the STN. Surprisingly, little is known about the electrophysiological properties of the non-motor domains of the STN, nor about electrophysiological differences between right and left STNs. In this study, microelectrodes were utilized to record the STN spontaneous spiking activity and responses to vocal non-verbal emotional stimuli during deep brain stimulation (DBS) surgeries in human PD patients. The oscillation properties of the STN neurons were used to map the dorsal oscillatory and the ventral non-oscillatory regions of the STN. Emotive auditory stimulation evoked activity in the ventral non-oscillatory region of the right STN. These responses were not observed in the left ventral STN or in the dorsal regions of either the right or left STN. Therefore, our results suggest that the ventral non-oscillatory regions are asymmetrically associated with non-motor functions, with the right ventral STN associated with emotional processing. These findings further suggest that DBS of the right ventral STN may be associated with beneficial or adverse emotional effects observed in PD patients and may relieve mental symptoms in other neurological and psychiatric diseases. PMID:24194703

  3. The chicken immediate-early gene ZENK is expressed in the medio-rostral neostriatum/hyperstriatum ventrale, a brain region involved in acoustic imprinting, and is up-regulated after exposure to an auditory stimulus.

    PubMed

    Thode, C; Bock, J; Braun, K; Darlison, M G

    2005-01-01

    The immediate-early gene zenk (an acronym for the avian orthologue of the mammalian genes zif-268, egr-1, ngfi-a and krox-24) has been extensively employed, in studies on oscine birds, as a marker of neuronal activity to reveal forebrain structures that are involved in the memory processes associated with the acquisition, perception and production of song. Audition-induced expression of this gene, in brain, has also recently been reported for the domestic chicken (Gallus gallus domesticus) and the Japanese quail (Coturnix coturnix japonica). Whilst the anatomical distribution of zenk expression was described for the quail, corresponding data for the chicken were not reported. We have, therefore, used in situ hybridisation to localise the mRNA that encodes the product of the zenk gene (which we call ZENK) within the brain of the 1-day-old chick. We demonstrate that this transcript is present in a number of forebrain structures including the medio-rostral neostriatum/hyperstriatum ventrale (MNH), a region that has been strongly implicated in auditory imprinting (which is a form of recognition memory), and Field L, the avian analogue of the mammalian auditory cortex. Because of this pattern of gene expression, we have compared the level of the ZENK mRNA in chicks that have been subjected to a 30-min acoustic imprinting paradigm and in untrained controls. Our results reveal a significant increase (P ≤ 0.05) in the level of the ZENK mRNA in MNH and Field L, and in the two forebrain hemispheres; no increase was seen in the ectostriatum, which is a visual projection area. The data obtained implicate the immediate-early gene, zenk, in auditory imprinting, which is an established model of juvenile learning. In addition, our results indicate that the ZENK mRNA may be used as a molecular marker for MNH, a region that is difficult to anatomically and histochemically delineate.

  4. Symbol processing in the left angular gyrus: evidence from passive perception of digits.

    PubMed

    Price, Gavin R; Ansari, Daniel

    2011-08-01

    Arabic digits are one of the most ubiquitous symbol sets in the world. While there have been many investigations into the neural processing of the semantic information digits represent (e.g. through numerical comparison tasks), little is known about the neural mechanisms which support the processing of digits as visual symbols. To characterise the component neurocognitive mechanisms which underlie numerical cognition, it is essential to understand the processing of digits as a visual category, independent of numerical magnitude processing. The 'Triple Code Model' (Dehaene, 1992; Dehaene and Cohen, 1995) posits an asemantic visual code for processing Arabic digits in the ventral visual stream, yet there is currently little empirical evidence in support of this code. This outstanding question was addressed in the current functional magnetic resonance imaging (fMRI) study by contrasting brain responses during the passive viewing of digits versus letters and novel symbols at short (50 ms) and long (500 ms) presentation times. The results of this study reveal increased activation for familiar symbols (digits and letters) relative to unfamiliar symbols (scrambled digits and letters) at long presentation durations in the left dorsal angular gyrus (dAG). Furthermore, increased activation for Arabic digits was observed in the left ventral angular gyrus (vAG) in comparison to letters, scrambled digits and scrambled letters at long presentation durations, but no digit-specific activation was observed in any region at short presentation durations. These results suggest an absence of a digit-specific 'Visual Number Form Area' (VNFA) in the ventral visual cortex, and provide evidence for the role of the left ventral AG during the processing of digits in the absence of any explicit processing demands. We conclude that Arabic digit processing depends specifically on the left AG rather than a ventral visual stream VNFA. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation.

    PubMed

    Berding, Georg; Wilke, Florian; Rode, Thilo; Haense, Cathleen; Joseph, Gert; Meyer, Geerd J; Mamach, Martin; Lenarz, Minoo; Geworski, Lilli; Bengel, Frank M; Lenarz, Thomas; Lim, Hubert H

    2015-01-01

    Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus.

  7. Neural correlates of auditory recognition memory in the primate dorsal temporal pole

    PubMed Central

    Ng, Chi-Wing; Plakke, Bethany

    2013-01-01

    Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects. PMID:24198324

  8. From attentional gating in macaque primary visual cortex to dyslexia in humans.

    PubMed

    Vidyasagar, T R

    2001-01-01

    Selective attention is an important aspect of brain function that we need in coping with the immense and constant barrage of sensory information. One model of attention (Feature Integration Theory) suggests that early selection of the spatial locations of objects via an attentional spotlight would also solve the 'binding problem' (that is, how the different attributes of each object get correctly bound together). Our experiments have demonstrated modulation of specific locations of interest at the level of the primary visual cortex both in visual discrimination and memory tasks, where the actual locations of the targets were also important for performing the task. It is suggested that the feedback mediating the modulation arises from the posterior parietal cortex, which would also be consistent with its known role in attentional control. In primates, the magnocellular (M) and parvocellular (P) pathways are the two major streams of inputs from the retina, carrying distinctly different types of information, and they remain fairly segregated in their projections to the primary visual cortex and further into the extra-striate regions. The P inputs go mainly into the ventral (temporal) stream, while the dorsal (parietal) stream is dominated by M inputs. A theory of attentional gating is proposed here where the M dominated dorsal stream gates the P inputs into the ventral stream. This framework is used to provide a neural explanation of the processes involved in reading and in learning to read. This scheme also explains how a magnocellular deficit could cause the common reading impairment, dyslexia.

  9. Automatic Activation of Phonological Templates for Native but Not Nonnative Phonemes: An Investigation of the Temporal Dynamics of Mu Activation

    ERIC Educational Resources Information Center

    Santos-Oliveira, Daniela Cristina

    2017-01-01

    Models of speech perception suggest a dorsal stream connecting the temporal and inferior parietal lobe with the inferior frontal gyrus. This stream is thought to involve an auditory motor loop that translates acoustic information into motor/articulatory commands and is further influenced by decision making processes that involve maintenance of…

  10. The dorsal stream contribution to phonological retrieval in object naming

    PubMed Central

    Faseyitan, Olufunsho; Kim, Junghoon; Coslett, H. Branch

    2012-01-01

    Meaningful speech, as exemplified in object naming, calls on knowledge of the mappings between word meanings and phonological forms. Phonological errors in naming (e.g. GHOST named as ‘goath’) are commonly seen in persisting post-stroke aphasia and are thought to signal impairment in retrieval of phonological form information. We performed a voxel-based lesion-symptom mapping analysis of 1718 phonological naming errors collected from 106 individuals with diverse profiles of aphasia. Voxels in which lesion status correlated with phonological error rates localized to dorsal stream areas, in keeping with classical and contemporary brain-language models. Within the dorsal stream, the critical voxels were concentrated in premotor cortex, pre- and postcentral gyri and supramarginal gyrus with minimal extension into auditory-related posterior temporal and temporo-parietal cortices. This challenges the popular notion that error-free phonological retrieval requires guidance from sensory traces stored in posterior auditory regions and points instead to sensory-motor processes located further anterior in the dorsal stream. In a separate analysis, we compared the lesion maps for phonological and semantic errors and determined that there was no spatial overlap, demonstrating that the brain segregates phonological and semantic retrieval operations in word production. PMID:23171662

  11. Non-accidental properties, metric invariance, and encoding by neurons in a model of ventral stream visual object recognition, VisNet.

    PubMed

    Rolls, Edmund T; Mills, W Patrick C

    2018-05-01

    When objects transform into different views, some properties are maintained, such as whether the edges are convex or concave, and these non-accidental properties are likely to be important in view-invariant object recognition. The metric properties, such as the degree of curvature, may change with different views, and are less likely to be useful in object recognition. It is shown that in VisNet, a model of invariant visual object recognition in the ventral visual stream, non-accidental properties are encoded by neurons much more strongly than metric properties. Moreover, it is shown how, with temporal trace rule training in VisNet, non-accidental properties of objects become encoded by neurons, and how metric properties are treated invariantly. We also show how VisNet can generalize between different objects if they have the same non-accidental property, because the metric properties are likely to overlap. VisNet is a 4-layer unsupervised model of visual object recognition trained by competitive learning that utilizes a temporal trace learning rule to implement the learning of invariance using views that occur close together in time. A second crucial property of this model is whether, when neurons in the level corresponding to the inferior temporal visual cortex respond selectively to objects, neurons in the intermediate layers can respond to combinations of features that may be parts of two or more objects. In an investigation using the four sides of a square presented in every possible combination, it was shown that even though different layer 4 neurons are tuned to encode each feature or feature combination orthogonally, neurons in the intermediate layers can respond to features or feature combinations present in several objects. This property is an important part of the way in which high capacity can be achieved in the four-layer ventral visual cortical pathway. These findings concerning non-accidental properties and the use of neurons in intermediate layers of the hierarchy help to emphasise fundamental underlying principles of the computations that may be implemented in the ventral cortical visual stream used in object recognition. Copyright © 2018 Elsevier Inc. All rights reserved.
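    The temporal trace learning rule named above can be sketched for a single neuron; the trace parameter eta and learning rate alpha below are illustrative values rather than the paper's, and this single-neuron sketch omits the full four-layer competitive architecture:

```python
import numpy as np

def trace_rule_update(w, x_seq, eta=0.8, alpha=0.01):
    """One pass of a temporal trace learning rule of the kind used in
    VisNet-style models: the post-synaptic activity trace mixes the
    current response with its recent history, so views occurring close
    together in time drive the same weights (eta, alpha illustrative)."""
    w = w.copy()
    trace = 0.0
    for x in x_seq:  # x: input vector for one view at one time step
        y = float(w @ x)                       # post-synaptic response
        trace = (1.0 - eta) * y + eta * trace  # running activity trace
        w += alpha * trace * x                 # Hebbian update with trace
        w /= np.linalg.norm(w)                 # normalization, as in competitive nets
    return w
```

    Because the trace carries over between successive inputs, different views of one object presented in temporal proximity become associated with the same output neuron, which is the mechanism for invariance learning described in the abstract.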

  12. Auditory Spatial Layout

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  13. Recent advances in exploring the neural underpinnings of auditory scene perception

    PubMed Central

    Snyder, Joel S.; Elhilali, Mounya

    2017-01-01

    Studies of auditory scene analysis have traditionally relied on paradigms using artificial sounds—and conventional behavioral techniques—to elucidate how we perceptually segregate auditory objects or streams from each other. In the past few decades, however, there has been growing interest in uncovering the neural underpinnings of auditory segregation using human and animal neuroscience techniques, as well as computational modeling. This largely reflects the growth in the fields of cognitive neuroscience and computational neuroscience and has led to new theories of how the auditory system segregates sounds in complex arrays. The current review focuses on neural and computational studies of auditory scene perception published in the past few years. Following the progress that has been made in these studies, we describe (1) theoretical advances in our understanding of the most well-studied aspects of auditory scene perception, namely segregation of sequential patterns of sounds and concurrently presented sounds; (2) the diversification of topics and paradigms that have been investigated; and (3) how new neuroscience techniques (including invasive neurophysiology in awake humans, genotyping, and brain stimulation) have been used in this field. PMID:28199022

  14. Finding the missing stimulus mismatch negativity (MMN): Emitted MMN to violations of an auditory gestalt

    PubMed Central

    Salisbury, Dean F

    2011-01-01

    Deviations from repetitive auditory stimuli evoke a mismatch negativity (MMN). Counter-intuitively, omissions of repetitive stimuli do not. Violations of patterns reflecting complex rules also evoke MMN. To detect a MMN to missing stimuli, we developed an auditory gestalt task using one stimulus. Groups of 6 pips (50 msec duration, 330 msec stimulus onset asynchrony (SOA), 400 trials) were presented with an inter-trial interval (ITI) of 750 msec while subjects (n=16) watched a silent video. Occasional deviant groups had missing 4th or 6th tones (50 trials each). Missing stimuli evoked a MMN (p<.05). The missing 4th (−0.8 uV, p<.01) and the missing 6th stimuli (−1.1 uV, p<.05) were more negative than standard 6th stimuli (0.3 uV). MMN can be elicited by a missing stimulus at long SOAs by violation of a gestalt grouping rule. Homogeneous stimulus streams appear to differ from strongly patterned streams in the relative weighting of omissions. PMID:22221004
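    The stimulus schedule above (groups of six 50-ms pips at a 330-ms SOA, separated by a 750-ms ITI, with occasional omission of the 4th or 6th pip) can be sketched as an onset-time generator; whether the ITI is measured from the final pip's offset is an assumption of this sketch:

```python
def pip_onsets(n_groups, soa=0.330, iti=0.750, pip_dur=0.050, omit=None):
    """Onset times (s) for groups of 6 pips, following the timing in the
    abstract above. `omit` maps group index -> position (1-6) of the pip
    to silence, e.g. {3: 4} omits the 4th pip of the 4th group; the
    parameter values come from the abstract, the scheduling itself is an
    illustrative sketch."""
    omit = omit or {}
    onsets, t = [], 0.0
    for g in range(n_groups):
        for p in range(6):
            if omit.get(g) != p + 1:  # skip the omitted pip, if any
                onsets.append(round(t + p * soa, 3))
        # next group starts after the last pip's offset plus the ITI
        t += 5 * soa + pip_dur + iti
    return onsets
```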

  15. Neural Correlates of Auditory Perceptual Awareness and Release from Informational Masking Recorded Directly from Human Cortex: A Case Study.

    PubMed

    Dykstra, Andrew R; Halgren, Eric; Gutschalk, Alexander; Eskandar, Emad N; Cash, Sydney S

    2016-01-01

    In complex acoustic environments, even salient supra-threshold sounds sometimes go unperceived, a phenomenon known as informational masking. The neural basis of informational masking (and its release) has not been well-characterized, particularly outside auditory cortex. We combined electrocorticography in a neurosurgical patient undergoing invasive epilepsy monitoring with trial-by-trial perceptual reports of isochronous target-tone streams embedded in random multi-tone maskers. Awareness of such masker-embedded target streams was associated with a focal negativity between 100 and 200 ms and high-gamma activity (HGA) between 50 and 250 ms (both in auditory cortex on the posterolateral superior temporal gyrus) as well as a broad P3b-like potential (between ~300 and 600 ms) with generators in ventrolateral frontal and lateral temporal cortex. Unperceived target tones elicited drastically reduced versions of such responses, if at all. While it remains unclear whether these responses reflect conscious perception, itself, as opposed to pre- or post-perceptual processing, the results suggest that conscious perception of target sounds in complex listening environments may engage diverse neural mechanisms in distributed brain areas.

  16. Distribution of glutamatergic, GABAergic, and glycinergic neurons in the auditory pathways of macaque monkeys.

    PubMed

    Ito, T; Inoue, K; Takada, M

    2015-12-03

    Macaque monkeys use complex communication calls and are regarded as a model for studying the coding and decoding of complex sound in the auditory system. However, little is known about the distribution of excitatory and inhibitory neurons in the auditory system of macaque monkeys. In this study, we examined the overall distribution of cell bodies that expressed mRNAs for VGLUT1 and VGLUT2 (markers for glutamatergic neurons), GAD67 (a marker for GABAergic neurons), and GLYT2 (a marker for glycinergic neurons) in the auditory system of the Japanese macaque. In addition, we performed immunohistochemistry for VGLUT1, VGLUT2, and GAD67 in order to compare the distribution of proteins and mRNAs. We found that most of the excitatory neurons in the auditory brainstem expressed VGLUT2. In contrast, the expression of VGLUT1 mRNA was restricted to the auditory cortex (AC), periolivary nuclei, and cochlear nuclei (CN). The co-expression of GAD67 and GLYT2 mRNAs was common in the ventral nucleus of the lateral lemniscus (VNLL), CN, and superior olivary complex except for the medial nucleus of the trapezoid body, which expressed GLYT2 alone. In contrast, the dorsal nucleus of the lateral lemniscus, inferior colliculus, thalamus, and AC expressed GAD67 alone. The absence of co-expression of VGLUT1 and VGLUT2 in the medial geniculate, medial superior olive, and VNLL suggests that synaptic responses in the target neurons of these nuclei may be different between rodents and macaque monkeys. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  17. EGR-1 Expression in Catecholamine-synthesizing Neurons Reflects Auditory Learning and Correlates with Responses in Auditory Processing Areas.

    PubMed

    Dai, Jennifer B; Chen, Yining; Sakata, Jon T

    2018-05-21

    Distinguishing between familiar and unfamiliar individuals is an important task that shapes the expression of social behavior. As such, identifying the neural populations involved in processing and learning the sensory attributes of individuals is important for understanding mechanisms of behavior. Catecholamine-synthesizing neurons have been implicated in sensory processing, but relatively little is known about their contribution to auditory learning and processing across various vertebrate taxa. Here we investigated the extent to which immediate early gene expression in catecholaminergic circuitry reflects information about the familiarity of social signals and predicts immediate early gene expression in sensory processing areas in songbirds. We found that male zebra finches readily learned to differentiate between familiar and unfamiliar acoustic signals ('songs') and that playback of familiar songs led to fewer catecholaminergic neurons in the locus coeruleus (but not in the ventral tegmental area, substantia nigra, or periaqueductal gray) expressing the immediate early gene, EGR-1, than playback of unfamiliar songs. The pattern of EGR-1 expression in the locus coeruleus was similar to that observed in two auditory processing areas implicated in auditory learning and memory, namely the caudomedial nidopallium (NCM) and the caudal medial mesopallium (CMM), suggesting a contribution of catecholamines to sensory processing. Consistent with this, the pattern of catecholaminergic innervation onto auditory neurons co-varied with the degree to which song playback affected the relative intensity of EGR-1 expression. Together, our data support the contention that catecholamines like norepinephrine contribute to social recognition and the processing of social information. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  18. How silent is silent reading? Intracerebral evidence for top-down activation of temporal voice areas during reading.

    PubMed

    Perrone-Bertolotti, Marcela; Kujala, Jan; Vidal, Juan R; Hamame, Carlos M; Ossandon, Tomas; Bertrand, Olivier; Minotti, Lorella; Kahane, Philippe; Jerbi, Karim; Lachaux, Jean-Philippe

    2012-12-05

    As you might experience it while reading this sentence, silent reading often involves an imagery speech component: we can hear our own "inner voice" pronouncing words mentally. Recent functional magnetic resonance imaging studies have associated that component with increased metabolic activity in the auditory cortex, including voice-selective areas. It remains to be determined, however, whether this activation arises automatically from early bottom-up visual inputs or whether it depends on late top-down control processes modulated by task demands. To answer this question, we collaborated with four epileptic patients implanted with intracranial electrodes in the auditory cortex for therapeutic purposes, and measured high-frequency (50-150 Hz) "gamma" activity as a proxy of population-level spiking activity. Temporal voice-selective areas (TVAs) were identified with an auditory localizer task and monitored as participants viewed words flashed on screen. We compared neural responses depending on whether words were attended or ignored and found a significant increase of neural activity in response to words, strongly enhanced by attention. In one of the patients, we could record that response at 800 ms in TVAs, but also at 700 ms in the primary auditory cortex and at 300 ms in the ventral occipital temporal cortex. Furthermore, single-trial analysis revealed a considerable jitter between activation peaks in visual and auditory cortices. Altogether, our results demonstrate that the multimodal mental experience of reading is in fact a heterogeneous complex of asynchronous neural responses, and that auditory and visual modalities often process distinct temporal frames of our environment at the same time.

  19. Deletion of Fmr1 Alters Function and Synaptic Inputs in the Auditory Brainstem

    PubMed Central

    Rotschafer, Sarah E.; Marshak, Sonya; Cramer, Karina S.

    2015-01-01

    Fragile X Syndrome (FXS), a neurodevelopmental disorder, is the most prevalent single-gene cause of autism spectrum disorder. Autism has been associated with impaired auditory processing, abnormalities in the auditory brainstem response (ABR), and reduced cell number and size in the auditory brainstem nuclei. FXS is characterized by elevated cortical responses to sound stimuli, with some evidence for aberrant ABRs. Here, we assessed ABRs and auditory brainstem anatomy in Fmr1 -/- mice, an animal model of FXS. We found that Fmr1 -/- mice showed elevated response thresholds to both click and tone stimuli. Amplitudes of ABR responses were reduced in Fmr1 -/- mice for early peaks of the ABR. The growth of the peak I response with sound intensity was less steep in mutants than in wild type mice. In contrast, amplitudes and response growth in peaks IV and V did not differ between these groups. We did not observe differences in peak latencies or in interpeak latencies. Cell size was reduced in Fmr1 -/- mice in the ventral cochlear nucleus (VCN) and in the medial nucleus of the trapezoid body (MNTB). We quantified levels of inhibitory and excitatory synaptic inputs in these nuclei using markers for presynaptic proteins. We measured VGAT and VGLUT immunolabeling in VCN, MNTB, and the lateral superior olive (LSO). VGAT expression in MNTB was significantly greater in the Fmr1 -/- mouse than in wild type mice. Together, these observations demonstrate that FXS affects peripheral and central aspects of hearing and alters the balance of excitation and inhibition in the auditory brainstem. PMID:25679778

  20. What the success of brain imaging implies about the neural code

    PubMed Central

    Guest, Olivia; Love, Bradley C

    2017-01-01

    The success of fMRI places constraints on the nature of the neural code. The fact that researchers can infer similarities between neural representations, despite fMRI’s limitations, implies that certain neural coding schemes are more likely than others. For fMRI to succeed given its low temporal and spatial resolution, the neural code must be smooth at the voxel and functional level such that similar stimuli engender similar internal representations. Through proof and simulation, we determine which coding schemes are plausible given both fMRI’s successes and its limitations in measuring neural activity. Deep neural network approaches, which have been forwarded as computational accounts of the ventral stream, are consistent with the success of fMRI, though functional smoothness breaks down in the later network layers. These results have implications for the nature of the neural code and ventral stream, as well as what can be successfully investigated with fMRI. DOI: http://dx.doi.org/10.7554/eLife.21397.001 PMID:28103186
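
    The functional-smoothness argument in this record lends itself to a small simulation. The sketch below is illustrative only (not the authors' code; all sizes and names are invented): it compares a smooth code, in which responses are a noisy linear function of stimulus features, against an arbitrary "hashed" code, and checks whether voxel-level averaging preserves the stimulus similarity structure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_neurons, per_voxel = 40, 1000, 20
stim = rng.normal(size=(n_stim, 8))              # stimulus feature vectors

# Smooth code: responses are a noisy linear function of stimulus features,
# so similar stimuli evoke similar population patterns.
W = rng.normal(size=(8, n_neurons))
smooth = stim @ W + 0.1 * rng.normal(size=(n_stim, n_neurons))

# Non-smooth ("hashed") code: each stimulus gets an arbitrary pattern.
hashed = rng.normal(size=(n_stim, n_neurons))

def voxelize(neural):
    """Average disjoint groups of neurons, mimicking coarse fMRI voxels."""
    return neural.reshape(n_stim, -1, per_voxel).mean(axis=2)

def similarity_match(neural):
    """Correlation between stimulus-level and voxel-level similarity structure."""
    iu = np.triu_indices(n_stim, k=1)
    return np.corrcoef(np.corrcoef(stim)[iu],
                       np.corrcoef(voxelize(neural))[iu])[0, 1]

print(f"smooth code: {similarity_match(smooth):.2f}")   # high: structure survives averaging
print(f"hashed code: {similarity_match(hashed):.2f}")   # near 0: structure is lost
```

    Only the smooth code lets the coarse voxel measurements recover which stimuli are alike, which is the record's core claim about why fMRI similarity analyses succeed.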

  1. Cross-modal attention influences auditory contrast sensitivity: Decreasing visual load improves auditory thresholds for amplitude- and frequency-modulated sounds.

    PubMed

    Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G

    2017-03-01

    We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding fewer attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower (that is, auditory sensitivity was improved) for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.
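
    The two-interval detection procedure described in this record is commonly paired with an adaptive staircase to estimate thresholds. Below is a minimal sketch, assuming a standard 2-down/1-up rule and a hypothetical Weibull-shaped simulated observer; the rule, parameters, and names are illustrative, not the authors' actual procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

def observer(depth, threshold=0.1, slope=8.0):
    """Hypothetical observer: P(correct) in a two-interval task, Weibull-shaped."""
    p_correct = 1.0 - 0.5 * np.exp(-((depth / threshold) ** slope))
    return rng.random() < p_correct

def staircase(n_reversals=12, start=0.5, factor=1.25):
    """2-down/1-up staircase: converges near the 70.7%-correct modulation depth."""
    depth, run, direction, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        if observer(depth):
            run += 1
            if run == 2:                      # two correct in a row -> harder
                run = 0
                if direction == +1:
                    reversals.append(depth)   # log depth at each direction change
                direction, depth = -1, depth / factor
        else:                                 # one error -> easier
            run = 0
            if direction == -1:
                reversals.append(depth)
            direction, depth = +1, depth * factor
    # Geometric mean of the last reversal depths estimates the threshold.
    return float(np.exp(np.mean(np.log(reversals[-8:]))))

print(f"estimated threshold: {staircase():.3f}")  # near the observer's 0.1
```

    The 2-down/1-up rule targets the 70.7%-correct point of the psychometric function, so the estimate tracks the simulated observer's built-in threshold.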

  2. Face-Likeness and Image Variability Drive Responses in Human Face-Selective Ventral Regions

    PubMed Central

    Davidenko, Nicolas; Remus, David A.; Grill-Spector, Kalanit

    2012-01-01

    The human ventral visual stream contains regions that respond selectively to faces over objects. However, it is unknown whether responses in these regions correlate with how face-like stimuli appear. Here, we use parameterized face silhouettes to manipulate the perceived face-likeness of stimuli and measure responses in face- and object-selective ventral regions with high-resolution fMRI. We first use “concentric hyper-sphere” (CH) sampling to define face silhouettes at different distances from the prototype face. Observers rate the stimuli as progressively more face-like the closer they are to the prototype face. Paradoxically, responses in both face- and object-selective regions decrease as face-likeness ratings increase. Because CH sampling produces blocks of stimuli whose variability is negatively correlated with face-likeness, this effect may be driven by more adaptation during high face-likeness (low-variability) blocks than during low face-likeness (high-variability) blocks. We tested this hypothesis by measuring responses to matched-variability (MV) blocks of stimuli with similar face-likeness ratings as with CH sampling. Critically, under MV sampling, we find a face-specific effect: responses in face-selective regions gradually increase with perceived face-likeness, but responses in object-selective regions are unchanged. Our studies provide novel evidence that face-selective responses correlate with the perceived face-likeness of stimuli, but this effect is revealed only when image variability is controlled across conditions. Finally, our data show that variability is a powerful factor that drives responses across the ventral stream. This indicates that controlling variability across conditions should be a critical tool in future neuroimaging studies of face and object representation. PMID:21823208

  3. Memory-guided reaching in a patient with visual hemiagnosia.

    PubMed

    Cornelsen, Sonja; Rennig, Johannes; Himmelbach, Marc

    2016-06-01

    The two-visual-systems hypothesis (TVSH) postulates that memory-guided movements rely on intact functions of the ventral stream. Its particular importance for memory-guided actions was initially inferred from behavioral dissociations in the well-known patient DF. Despite rather accurate reaching and grasping movements to visible targets, she demonstrated grossly impaired memory-guided grasping as well as impaired memory-guided reaching. These dissociations were later complemented by apparently reversed dissociations in patients with dorsal damage and optic ataxia. However, grasping studies in DF and optic ataxia patients differed with respect to the retinotopic position of target objects, questioning the interpretation of the respective findings as a double dissociation. In contrast, the findings for reaching errors in both types of patients came from similar peripheral target presentations. However, new data on brain structural changes and visuomotor deficits in DF also questioned the validity of a double dissociation in reaching. A severe visuospatial short-term memory deficit in DF further questioned the specificity of her memory-guided reaching deficit. Therefore, we compared movement accuracy in visually-guided and memory-guided reaching in a new patient who suffered confined unilateral damage to the ventral visual system due to stroke. Our results indeed support previous descriptions of memory-guided movement inaccuracies in DF. Furthermore, our data suggest that the recently discovered optic-ataxia-like misreaching in DF is most likely caused by her parieto-occipital and not by her ventral stream damage. Finally, multiple visuospatial memory measurements in HWS suggest that inaccuracies in memory-guided reaching tasks in patients with ventral damage cannot be explained by visuospatial short-term memory or perceptual deficits, but by a specific deficit in visuomotor processing. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Direct evidence for the contributive role of the right inferior fronto-occipital fasciculus in non-verbal semantic cognition.

    PubMed

    Herbet, Guillaume; Moritz-Gasser, Sylvie; Duffau, Hugues

    2017-05-01

    The neural foundations underlying semantic processing have been extensively investigated, highlighting a pivotal role of the ventral stream. However, although studies concerning the involvement of the left ventral route in verbal semantics are plentiful, the potential implication of the right ventral pathway in non-verbal semantics has to date been unexplored. To gain insights on this matter, we used intraoperative direct electrostimulation to map the structures mediating the non-verbal semantic system in the right hemisphere. Thirteen patients presenting with a right low-grade glioma located within or close to the ventral stream were included. During the 'awake' procedure, patients performed both a visual non-verbal semantic task and a verbal (control) task. At the cortical level, in the right hemisphere, we found non-verbal semantic-related sites (n = 7 in 6 patients) in structures commonly associated with verbal semantic processes in the left hemisphere, including the superior temporal gyrus, the pars triangularis, and the dorsolateral prefrontal cortex. At the subcortical level, we found non-verbal semantic-related sites in all but one patient (n = 15 sites in 12 patients). Importantly, all these responsive stimulation points were located on the spatial course of the right inferior fronto-occipital fasciculus (IFOF). These findings provide direct support for a critical role of the right IFOF in non-verbal semantic processing. Based upon these original data, and in connection with previous findings showing the involvement of the left IFOF in non-verbal semantic processing, we hypothesize the existence of a bilateral network underpinning the non-verbal semantic system, with a homotopic connectional architecture.

  5. Sound stream segregation: a neuromorphic approach to solve the “cocktail party problem” in real-time

    PubMed Central

    Thakur, Chetan Singh; Wang, Runchun M.; Afshar, Saeed; Hamilton, Tara J.; Tapson, Jonathan C.; Shamma, Shihab A.; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the “cocktail party effect.” It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as sound segregation and speech recognition. PMID:26388721
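
    The temporal-coherence principle (channels whose envelopes are strongly positively correlated belong to the same stream) can be sketched in a few lines. This toy example is not the FPGA implementation; the channel count, modulation rates, and correlation threshold are made up. It builds a binary mask from envelope correlations with an attended anchor channel:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000                                 # envelope sample rate (Hz)
t = np.arange(2 * fs) / fs                # 2 seconds

# Toy channel envelopes: two sources with incoherent temporal modulations
# (4 Hz vs 7 Hz), each driving four "cochlear" channels, plus noise.
mod_a = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))
mod_b = 0.5 * (1 + np.sin(2 * np.pi * 7 * t))
channels = np.stack([mod_a] * 4 + [mod_b] * 4)
channels = channels + 0.05 * rng.normal(size=channels.shape)

def coherence_mask(channels, attend_idx, rho=0.5):
    """Keep channels whose envelopes correlate positively with the attended one."""
    return np.corrcoef(channels)[attend_idx] > rho

mask = coherence_mask(channels, attend_idx=0)
print(mask)  # the four channels sharing source A's modulation are selected
```

    Applying the mask to the channel outputs before resynthesis would keep only the attended source, which is the essence of the mask-generation stage the abstract describes.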

  6. Sound stream segregation: a neuromorphic approach to solve the "cocktail party problem" in real-time.

    PubMed

    Thakur, Chetan Singh; Wang, Runchun M; Afshar, Saeed; Hamilton, Tara J; Tapson, Jonathan C; Shamma, Shihab A; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the "cocktail party effect." It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as sound segregation and speech recognition.

  7. Beyond visualization of big data: a multi-stage data exploration approach using visualization, sonification, and storification

    NASA Astrophysics Data System (ADS)

    Rimland, Jeffrey; Ballora, Mark; Shumaker, Wade

    2013-05-01

    As the sheer volume of data grows exponentially, it becomes increasingly difficult for existing visualization techniques to keep pace. The sonification field attempts to address this issue by enlisting our auditory senses to detect anomalies or complex events that are difficult to detect via visualization alone. Storification attempts to improve analyst understanding by converting data streams into organized narratives describing the data at a higher level of abstraction than the input stream that they are derived from. While these techniques hold a great deal of promise, they also each have a unique set of challenges that must be overcome. Sonification techniques must represent a broad variety of distributed heterogeneous data and present it to the analyst/listener in a manner that doesn't require extended listening, as visual "snapshots" are useful but auditory sounds only exist over time. Storification still faces many human-computer interface (HCI) challenges as well as technical hurdles related to automatically generating a logical narrative from lower-level data streams. This paper proposes a novel approach that utilizes a service-oriented architecture (SOA)-based hybrid visualization/sonification/storification framework to enable distributed human-in-the-loop processing of data in a manner that makes optimized usage of both visual and auditory processing pathways while also leveraging the value of narrative explication of data streams. It addresses the benefits and shortcomings of each processing modality and discusses information infrastructure and data representation concerns required with their utilization in a distributed environment. We present a generalizable approach with a broad range of applications including cyber security, medical informatics, facilitation of energy savings in "smart" buildings, and detection of natural and man-made disasters.

  8. Deep brain stimulation of the ventral striatum enhances extinction of conditioned fear

    PubMed Central

    Rodriguez-Romaguera, Jose; Do Monte, Fabricio H. M.; Quirk, Gregory J.

    2012-01-01

    Deep brain stimulation (DBS) of the ventral capsule/ventral striatum (VC/VS) reduces symptoms of intractable obsessive-compulsive disorder (OCD), but the mechanism of action is unknown. OCD is characterized by avoidance behaviors that fail to extinguish, and DBS could act, in part, by facilitating extinction of fear. We investigated this possibility by using auditory fear conditioning in rats, for which the circuits of fear extinction are well characterized. We found that DBS of the VS (the VC/VS homolog in rats) during extinction training reduced fear expression and strengthened extinction memory. Facilitation of extinction was observed for a specific zone of dorsomedial VS, just above the anterior commissure; stimulation of more ventrolateral sites in VS impaired extinction. DBS effects could not be obtained with pharmacological inactivation of either dorsomedial VS or ventrolateral VS, suggesting an extrastriatal mechanism. Accordingly, DBS of dorsomedial VS (but not ventrolateral VS) increased expression of a plasticity marker in the prelimbic and infralimbic prefrontal cortices, the orbitofrontal cortex, the amygdala central nucleus (lateral division), and intercalated cells, areas known to learn and express extinction. Facilitation of fear extinction suggests that, in accord with clinical observations, DBS could augment the effectiveness of cognitive behavioral therapies for OCD. PMID:22586125

  9. Assessing Top-Down and Bottom-Up Contributions to Auditory Stream Segregation and Integration With Polyphonic Music

    PubMed Central

    Disbergen, Niels R.; Valente, Giancarlo; Formisano, Elia; Zatorre, Robert J.

    2018-01-01

    Polyphonic music listening well exemplifies processes typically involved in daily auditory scene analysis situations, relying on an interactive interplay between bottom-up and top-down processes. Most studies investigating scene analysis have used elementary auditory scenes, however real-world scene analysis is far more complex. In particular, music, contrary to most other natural auditory scenes, can be perceived by either integrating or, under attentive control, segregating sound streams, often carried by different instruments. One of the prominent bottom-up cues contributing to multi-instrument music perception is their timbre difference. In this work, we introduce and validate a novel paradigm designed to investigate, within naturalistic musical auditory scenes, attentive modulation as well as its interaction with bottom-up processes. Two psychophysical experiments are described, employing custom-composed two-voice polyphonic music pieces within a framework implementing a behavioral performance metric to validate listener instructions requiring either integration or segregation of scene elements. In Experiment 1, the listeners' locus of attention was switched between individual instruments or the aggregate (i.e., both instruments together), via a task requiring the detection of temporal modulations (i.e., triplets) incorporated within or across instruments. Subjects responded post-stimulus whether triplets were present in the to-be-attended instrument(s). Experiment 2 introduced the bottom-up manipulation by adding a three-level morphing of instrument timbre distance to the attentional framework. The task was designed to be used within neuroimaging paradigms; Experiment 2 was additionally validated behaviorally in the functional Magnetic Resonance Imaging (fMRI) environment. Experiment 1 subjects (N = 29, non-musicians) completed the task at high levels of accuracy, showing no group differences between any experimental conditions. Nineteen listeners also participated in Experiment 2, showing a main effect of instrument timbre distance, even though within attention-condition timbre-distance contrasts did not demonstrate any timbre effect. Correlation of overall scores with morph-distance effects, computed by subtracting the largest from the smallest timbre distance scores, showed an influence of general task difficulty on the timbre distance effect. Comparison of laboratory and fMRI data showed scanner noise had no adverse effect on task performance. These experimental paradigms enable the study of both bottom-up and top-down contributions to auditory stream segregation and integration within psychophysical and neuroimaging experiments. PMID:29563861

  10. Gestures, vocalizations, and memory in language origins.

    PubMed

    Aboitiz, Francisco

    2012-01-01

    This article discusses the possible homologies between the human language networks and comparable auditory projection systems in the macaque brain, in an attempt to reconcile two existing views on language evolution: one that emphasizes hand control and gestures, and the other that emphasizes auditory-vocal mechanisms. The capacity for language is based on relatively well defined neural substrates whose rudiments have been traced in the non-human primate brain. At its core, this circuit constitutes an auditory-vocal sensorimotor circuit with two main components, a "ventral pathway" connecting anterior auditory regions with anterior ventrolateral prefrontal areas, and a "dorsal pathway" connecting auditory areas with parietal areas and with posterior ventrolateral prefrontal areas via the arcuate fasciculus and the superior longitudinal fasciculus. In humans, the dorsal circuit is especially important for phonological processing and phonological working memory, capacities that are critical for language acquisition and for complex syntax processing. In the macaque, the homolog of the dorsal circuit overlaps with an inferior parietal-premotor network for hand and gesture selection that is under voluntary control, while vocalizations are largely fixed and involuntary. The recruitment of the dorsal component for vocalization behavior in the human lineage, together with a direct cortical control of the subcortical vocalizing system, are proposed to represent a fundamental innovation in human evolution, generating an inflection point that permitted the explosion of vocal language and human communication. In this context, vocal communication and gesturing have a common history in primate communication.

  11. Developmental Emergence of Phenotypes in the Auditory Brainstem Nuclei of Fmr1 Knockout Mice

    PubMed Central

    Rotschafer, Sarah E.

    2017-01-01

    Fragile X syndrome (FXS), the most common monogenic cause of autism, is often associated with hypersensitivity to sound. Several studies have shown abnormalities in the auditory brainstem in FXS; however, the emergence of these auditory phenotypes during development has not been described. Here, we investigated the development of phenotypes in FXS model [Fmr1 knockout (KO)] mice in the ventral cochlear nucleus (VCN), medial nucleus of the trapezoid body (MNTB), and lateral superior olive (LSO). We studied features of the brainstem known to be altered in FXS or Fmr1 KO mice, including cell size and expression of markers for excitatory (VGLUT) and inhibitory (VGAT) synapses. We found that cell size was reduced in the nuclei with different time courses. VCN cell size is normal until after hearing onset, while MNTB and LSO show decreases earlier. VGAT expression was elevated relative to VGLUT in the Fmr1 KO mouse MNTB by P6, before hearing onset. Because glial cells influence development and are altered in FXS, we investigated their emergence in the developing Fmr1 KO brainstem. The number of microglia developed normally in all three nuclei in Fmr1 KO mice, but we found elevated numbers of astrocytes in Fmr1 KO in VCN and LSO at P14. The results indicate that some phenotypes are evident before spontaneous or auditory activity, while others emerge later, and suggest that Fmr1 acts at multiple sites and time points in auditory system development. PMID:29291238

  12. Encoding of Natural Sounds at Multiple Spectral and Temporal Resolutions in the Human Auditory Cortex

    PubMed Central

    Santoro, Roberta; Moerel, Michelle; De Martino, Federico; Goebel, Rainer; Ugurbil, Kamil; Yacoub, Essa; Formisano, Elia

    2014-01-01

    Functional neuroimaging research provides detailed observations of the response patterns that natural sounds (e.g. human voices and speech, animal cries, environmental sounds) evoke in the human brain. The computational and representational mechanisms underlying these observations, however, remain largely unknown. Here we combine high spatial resolution (3 and 7 Tesla) functional magnetic resonance imaging (fMRI) with computational modeling to reveal how natural sounds are represented in the human brain. We compare competing models of sound representations and select the model that most accurately predicts fMRI response patterns to natural sounds. Our results show that the cortical encoding of natural sounds entails the formation of multiple representations of sound spectrograms with different degrees of spectral and temporal resolution. The cortex derives these multi-resolution representations through frequency-specific neural processing channels and through the combined analysis of the spectral and temporal modulations in the spectrogram. Furthermore, our findings suggest that a spectral-temporal resolution trade-off may govern the modulation tuning of neuronal populations throughout the auditory cortex. Specifically, our fMRI results suggest that neuronal populations in posterior/dorsal auditory regions preferably encode coarse spectral information with high temporal precision. Vice-versa, neuronal populations in anterior/ventral auditory regions preferably encode fine-grained spectral information with low temporal precision. We propose that such a multi-resolution analysis may be crucially relevant for flexible and behaviorally-relevant sound processing and may constitute one of the computational underpinnings of functional specialization in auditory cortex. PMID:24391486
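
    The spectral and temporal modulation content central to the multi-resolution account in this record is conventionally measured with a 2-D Fourier transform of the (log-)spectrogram. A minimal numpy sketch (illustrative only, not the authors' encoding model; sizes and rates are invented) recovers the modulation rates of a synthetic ripple:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy log-spectrogram: 64 frequency bins spanning 4 octaves x 200 frames at
# 100 frames/s, containing a ripple at 2 cyc/oct and 8 Hz, plus noise.
n_freq, n_time, frame_rate, span_oct = 64, 200, 100, 4
octaves = np.arange(n_freq)[:, None] * (span_oct / n_freq)
frames = np.arange(n_time)[None, :] / frame_rate
spec = np.cos(2 * np.pi * (2 * octaves + 8 * frames))
spec = spec + 0.1 * rng.normal(size=spec.shape)

# The 2-D FFT of the spectrogram is its joint modulation spectrum:
# axis 0 = spectral modulation (cyc/oct), axis 1 = temporal modulation (Hz).
mod = np.abs(np.fft.fftshift(np.fft.fft2(spec)))
spec_axis = np.fft.fftshift(np.fft.fftfreq(n_freq, d=span_oct / n_freq))
temp_axis = np.fft.fftshift(np.fft.fftfreq(n_time, d=1 / frame_rate))

i, j = np.unravel_index(np.argmax(mod), mod.shape)
print(f"peak: {abs(spec_axis[i]):.1f} cyc/oct, {abs(temp_axis[j]):.1f} Hz")  # 2.0, 8.0
```

    A model neuron tuned to a particular region of this modulation plane (e.g. coarse spectral, fast temporal) corresponds to the posterior/dorsal populations the abstract describes, and vice versa for fine spectral, slow temporal tuning.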

  13. Hearing, feeling or seeing a beat recruits a supramodal network in the auditory dorsal stream.

    PubMed

    Araneda, Rodrigo; Renier, Laurent; Ebner-Karestinos, Daniela; Dricot, Laurence; De Volder, Anne G

    2017-06-01

    Hearing a beat recruits a wide neural network that involves the auditory cortex and motor planning regions. Perceiving a beat can potentially be achieved via vision or even touch, but it is currently not clear whether a common neural network underlies beat processing. Here, we used functional magnetic resonance imaging (fMRI) to test to what extent the neural network involved in beat processing is supramodal, that is, is the same in the different sensory modalities. Brain activity changes in 27 healthy volunteers were monitored while they were attending to the same rhythmic sequences (with and without a beat) in audition, vision and the vibrotactile modality. We found a common neural network for beat detection in the three modalities that involved parts of the auditory dorsal pathway. Within this network, only the putamen and the supplementary motor area (SMA) showed specificity to the beat, while the brain activity in the putamen covaried with the beat detection speed. These results highlighted the implication of the auditory dorsal stream in beat detection, confirmed the important role played by the putamen in beat detection and indicated that the neural network for beat detection is mostly supramodal. This constitutes a new example of convergence of the same functional attributes into one centralized representation in the brain. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  14. Neural Dynamics of Audiovisual Synchrony and Asynchrony Perception in 6-Month-Old Infants

    PubMed Central

    Kopp, Franziska; Dietrich, Claudia

    2013-01-01

    Young infants are sensitive to multisensory temporal synchrony relations, but the neural dynamics of temporal interactions between vision and audition in infancy are not well understood. We investigated audiovisual synchrony and asynchrony perception in 6-month-old infants using event-related brain potentials (ERP). In a prior behavioral experiment (n = 45), infants were habituated to an audiovisual synchronous stimulus and tested for recovery of interest by presenting an asynchronous test stimulus in which the visual stream was delayed with respect to the auditory stream by 400 ms. Infants who behaviorally discriminated the change in temporal alignment were included in further analyses. In the EEG experiment (final sample: n = 15), synchronous and asynchronous stimuli (visual delay of 400 ms) were presented in random order. Results show latency shifts in the auditory ERP components N1 and P2 as well as the infant ERP component Nc. Latencies in the asynchronous condition were significantly longer than in the synchronous condition. After video onset but preceding the auditory onset, amplitude modulations propagating from posterior to anterior sites and related to the Pb component of infants’ ERP were observed. Results suggest temporal interactions between the two modalities. Specifically, they point to the significance of anticipatory visual motion for auditory processing, and indicate young infants’ predictive capacities for audiovisual temporal synchrony relations. PMID:23346071

  15. The selective processing of emotional visual stimuli while detecting auditory targets: an ERP analysis.

    PubMed

    Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2008-09-16

    Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.

  16. Different forms of effective connectivity in primate frontotemporal pathways.

    PubMed

    Petkov, Christopher I; Kikuchi, Yukiko; Milne, Alice E; Mishkin, Mortimer; Rauschecker, Josef P; Logothetis, Nikos K

    2015-01-23

    It is generally held that non-primary sensory regions of the brain have a strong impact on frontal cortex. However, the effective connectivity of pathways to frontal cortex is poorly understood. Here we microstimulate sites in the superior temporal and ventral frontal cortex of monkeys and use functional magnetic resonance imaging to evaluate the functional activity resulting from the stimulation of interconnected regions. Surprisingly, we find that, although certain earlier stages of auditory cortical processing can strongly activate frontal cortex, downstream auditory regions, such as voice-sensitive cortex, appear to functionally engage primarily an ipsilateral temporal lobe network. Stimulating other sites within this activated temporal lobe network shows strong activation of frontal cortex. The results indicate that the relative stage of sensory processing does not predict the level of functional access to the frontal lobes. Rather, certain brain regions engage local networks, only parts of which have a strong functional impact on frontal cortex.

  17. Different forms of effective connectivity in primate frontotemporal pathways

    PubMed Central

    Petkov, Christopher I.; Kikuchi, Yukiko; Milne, Alice E.; Mishkin, Mortimer; Rauschecker, Josef P.; Logothetis, Nikos K.

    2015-01-01

    It is generally held that non-primary sensory regions of the brain have a strong impact on frontal cortex. However, the effective connectivity of pathways to frontal cortex is poorly understood. Here we microstimulate sites in the superior temporal and ventral frontal cortex of monkeys and use functional magnetic resonance imaging to evaluate the functional activity resulting from the stimulation of interconnected regions. Surprisingly, we find that, although certain earlier stages of auditory cortical processing can strongly activate frontal cortex, downstream auditory regions, such as voice-sensitive cortex, appear to functionally engage primarily an ipsilateral temporal lobe network. Stimulating other sites within this activated temporal lobe network shows strong activation of frontal cortex. The results indicate that the relative stage of sensory processing does not predict the level of functional access to the frontal lobes. Rather, certain brain regions engage local networks, only parts of which have a strong functional impact on frontal cortex. PMID:25613079

  18. Articulatory movements modulate auditory responses to speech

    PubMed Central

    Agnew, Z.K.; McGettigan, C.; Banks, B.; Scott, S.K.

    2013-01-01

    Production of actions is highly dependent on concurrent sensory information. In speech production, for example, movement of the articulators is guided by both auditory and somatosensory input. It has been demonstrated in non-human primates that self-produced vocalizations and those of others are differentially processed in the temporal cortex. The aim of the current study was to investigate how auditory and motor responses differ for self-produced and externally produced speech. Using functional neuroimaging, subjects were asked to produce sentences aloud, to silently mouth while listening to a different speaker producing the same sentence, to passively listen to sentences being read aloud, or to read sentences silently. We show that separate regions of the superior temporal cortex display distinct response profiles to speaking aloud, mouthing while listening, and passive listening. Responses in anterior superior temporal cortices in both hemispheres are greater for passive listening compared with both mouthing while listening, and speaking aloud. This is the first demonstration that articulation, whether or not it has auditory consequences, modulates responses of the dorsolateral temporal cortex. In contrast, posterior regions of the superior temporal cortex are recruited during both articulation conditions. In dorsal regions of the posterior superior temporal gyrus, responses to mouthing and reading aloud were equivalent, and in more ventral posterior superior temporal sulcus, responses were greater for reading aloud compared with mouthing while listening. These data demonstrate an anterior–posterior division of superior temporal regions where anterior fields are suppressed during motor output, potentially for the purpose of enhanced detection of the speech of others. We suggest posterior fields are engaged in auditory processing for the guidance of articulation by auditory information. PMID:22982103

  19. Slow Cholinergic Modulation of Spike Probability in Ultra-Fast Time-Coding Sensory Neurons

    PubMed Central

    Goyer, David; Kurth, Stefanie; Rübsamen, Rudolf

    2016-01-01

    Sensory processing in the lower auditory pathway is generally considered to be rigid and thus less subject to modulation than central processing. However, in addition to the powerful bottom-up excitation by auditory nerve fibers, the ventral cochlear nucleus also receives efferent cholinergic innervation from both auditory and nonauditory top–down sources. We thus tested the influence of cholinergic modulation on highly precise time-coding neurons in the cochlear nucleus of the Mongolian gerbil. By combining electrophysiological recordings with pharmacological application in vitro and in vivo, we found 55–72% of spherical bushy cells (SBCs) to be depolarized by carbachol on two time scales, ranging from hundreds of milliseconds to minutes. These effects were mediated by nicotinic and muscarinic acetylcholine receptors, respectively. Pharmacological block of muscarinic receptors hyperpolarized the resting membrane potential, suggesting a novel mechanism of setting the resting membrane potential for SBCs. The cholinergic depolarization led to an increase of spike probability in SBCs without compromising the temporal precision of the SBC output in vitro. In vivo, iontophoretic application of carbachol resulted in an increase in spontaneous SBC activity. The inclusion of cholinergic modulation in an SBC model predicted an expansion of the dynamic range of sound responses and increased temporal acuity. Our results thus suggest a top–down modulatory system mediated by acetylcholine which influences temporally precise information processing in the lower auditory pathway. PMID:27699207

  20. Plasticity in neuromagnetic cortical responses suggests enhanced auditory object representation

    PubMed Central

    2013-01-01

    Background Auditory perceptual learning persistently modifies neural networks in the central nervous system. Central auditory processing comprises a hierarchy of sound analysis and integration, which transforms an acoustical signal into a meaningful object for perception. Based on latencies and source locations of auditory evoked responses, we investigated which stage of central processing undergoes neuroplastic changes when gaining auditory experience during passive listening and active perceptual training. Young healthy volunteers participated in a five-day training program to identify two pre-voiced versions of the stop-consonant syllable ‘ba’, which is an unusual speech sound to English listeners. Magnetoencephalographic (MEG) brain responses were recorded during two pre-training and one post-training sessions. Underlying cortical sources were localized, and the temporal dynamics of auditory evoked responses were analyzed. Results After both passive listening and active training, the amplitude of the P2m wave with latency of 200 ms increased considerably. By this latency, the integration of stimulus features into an auditory object for further conscious perception is considered to be complete. Therefore the P2m changes were discussed in the light of auditory object representation. Moreover, P2m sources were localized in anterior auditory association cortex, which is part of the antero-ventral pathway for object identification. The amplitude of the earlier N1m wave, which is related to processing of sensory information, did not change over the time course of the study. Conclusion The P2m amplitude increase and its persistence over time constitute a neuroplastic change. The P2m gain likely reflects enhanced object representation after stimulus experience and training, which enables listeners to improve their ability for scrutinizing fine differences in pre-voicing time. 
Different trajectories of brain and behaviour changes suggest that the preceding P2m increase reflects brain processes that are necessary precursors of perceptual learning. Accordingly, caution is required when interpreting a P2 amplitude increase between recordings made before and after training as evidence of learning itself. PMID:24314010

  1. Reframing the action and perception dissociation in DF: haptics matters, but how?

    PubMed

    Whitwell, Robert L; Buckingham, Gavin

    2013-02-01

    Goodale and Milner's (1992) "vision-for-action" and "vision-for-perception" account of the division of labor between the dorsal and ventral "streams" has come to dominate contemporary views of the functional roles of these two pathways. Nevertheless, some lines of evidence for the model remain controversial. Recently, Thomas Schenk reexamined visual form agnosic patient DF's spared anticipatory grip scaling to object size, one of the principal empirical pillars of the model. Based on this new evidence, Schenk rejects the original interpretation of DF's spared ability that was based on segregated processing of object size and argues that DF's spared grip scaling relies on haptic feedback to calibrate visual egocentric cues that relate the posture of the hand to the visible edges of the goal-object. However, a careful consideration of the tasks that Schenk employed reveals some problems with his claim. We suspect that the core issues of this controversy will require a closer examination of the role that cognition plays in the operation of the dorsal and ventral streams in healthy controls and in patient DF.

  2. Enhanced and bilateralized visual sensory processing in the ventral stream may be a feature of normal aging.

    PubMed

    De Sanctis, Pierfilippo; Katz, Richard; Wylie, Glenn R; Sehatpour, Pejman; Alexopoulos, George S; Foxe, John J

    2008-10-01

    Evidence has emerged for age-related amplification of basic sensory processing indexed by early components of the visual evoked potential (VEP). However, since these age-related effects have been incidental to the main focus of these studies, it is unclear whether they are performance dependent or alternately, represent intrinsic sensory processing changes. High-density VEPs were acquired from 19 healthy elderly and 15 young control participants who viewed alphanumeric stimuli in the absence of any active task. The data show both enhanced and delayed neural responses within structures of the ventral visual stream, with reduced hemispheric asymmetry in the elderly that may be indicative of a decline in hemispheric specialization. Additionally, considerably enhanced early frontal cortical activation was observed in the elderly, suggesting frontal hyper-activation. These age-related differences in early sensory processing are discussed in terms of recent proposals that normal aging involves large-scale compensatory reorganization. Our results suggest that such compensatory mechanisms are not restricted to later higher-order cognitive processes but may also be a feature of early sensory-perceptual processes.

  3. The Dual-Loop Model and the Human Mirror Neuron System: an Exploratory Combined fMRI and DTI Study of the Inferior Frontal Gyrus.

    PubMed

    Hamzei, Farsin; Vry, Magnus-Sebastian; Saur, Dorothee; Glauche, Volkmar; Hoeren, Markus; Mader, Irina; Weiller, Cornelius; Rijntjes, Michel

    2016-05-01

    The inferior frontal gyrus (IFG) is active during both goal-directed action and while observing the same motor act, leading to the idea that the meaning of a motor act (action understanding) is also represented in this "mirror neuron system" (MNS). However, in the dual-loop model, based on dorsal and ventral visual streams, the MNS is thought to be a function of the dorsal stream, projecting to pars opercularis (BA44) of IFG, while recent studies suggest that conceptual meaning and semantic analysis are a function of ventral connections, projecting mainly to pars triangularis (BA45) of IFG. To resolve this discrepancy, we investigated action observation (AO) and imitation (IMI) using fMRI in a large group of subjects. A grasping task (GR) assessed the contribution from movement without AO. We analyzed connections of the MNS-related areas within IFG with postrolandic areas with the use of activation-based DTI. We found that action observation and imitation are mainly a function of the dorsal stream centered on the dorsal part of BA44, but also involve BA45, which is dorsally and ventrally connected to the same postrolandic regions. The current finding suggests that BA45 is the crucial part where the MNS and the dual-loop system interact. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. Organization of monosynaptic inputs to the serotonin and dopamine neuromodulatory systems

    PubMed Central

    Ogawa, Sachie K.; Cohen, Jeremiah Y.; Hwang, Dabin; Uchida, Naoshige; Watabe-Uchida, Mitsuko

    2014-01-01

    Serotonin and dopamine are major neuromodulators. Here we used a modified rabies virus to identify monosynaptic inputs to serotonin neurons in the dorsal and median raphe (DR and MR). We found that inputs to DR and MR serotonin neurons are spatially shifted in the forebrain, with MR serotonin neurons receiving inputs from more medial structures. We then compared these data with inputs to dopamine neurons in the ventral tegmental area (VTA) and substantia nigra pars compacta (SNc). We found that DR serotonin neurons receive inputs from a remarkably similar set of areas as VTA dopamine neurons, apart from the striatum, which preferentially targets dopamine neurons. Our results suggest three major input streams: a medial stream regulates MR serotonin neurons, an intermediate stream regulates DR serotonin and VTA dopamine neurons, and a lateral stream regulates SNc dopamine neurons. These results provide fundamental organizational principles of afferent control for serotonin and dopamine. PMID:25108805

  5. The Auditory System of the Dipteran Parasitoid Emblemasoma auditrix (Sarcophagidae).

    PubMed

    Tron, Nanina; Stölting, Heiko; Kampschulte, Marian; Martels, Gunhild; Stumpner, Andreas; Lakes-Harlan, Reinhard

    2016-01-01

    Several taxa of insects evolved a tympanate ear at different body positions, whereby the ear is composed of common parts: a scolopidial sense organ, a tracheal air space, and a tympanal membrane. Here, we analyzed the anatomy and physiology of the ear at the ventral prothorax of the sarcophagid fly, Emblemasoma auditrix (Soper). We used micro-computed tomography to analyze the ear and its tracheal air space in relation to the body morphology. Both tympana are separated by a small cuticular bridge, face in the same frontal direction, and are backed by a single tracheal enlargement. This enlargement is connected to the anterior spiracles at the dorsofrontal thorax and is continuous with the tracheal network in the thorax and in the abdomen. Analyses of responses of auditory afferents and interneurons show that the ear is broadly tuned, with a sensitivity peak at 5 kHz. Single-cell recordings of auditory interneurons indicate a frequency- and intensity-dependent tuning, whereby some neurons react best to 9 kHz, the peak frequency of the host's calling song. The results are compared to the convergently evolved ear in Tachinidae (Diptera). © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America.

  6. The Auditory System of the Dipteran Parasitoid Emblemasoma auditrix (Sarcophagidae)

    PubMed Central

    Tron, Nanina; Stölting, Heiko; Kampschulte, Marian; Martels, Gunhild; Stumpner, Andreas; Lakes-Harlan, Reinhard

    2016-01-01

    Several taxa of insects evolved a tympanate ear at different body positions, whereby the ear is composed of common parts: a scolopidial sense organ, a tracheal air space, and a tympanal membrane. Here, we analyzed the anatomy and physiology of the ear at the ventral prothorax of the sarcophagid fly, Emblemasoma auditrix (Soper). We used micro-computed tomography to analyze the ear and its tracheal air space in relation to the body morphology. Both tympana are separated by a small cuticular bridge, face in the same frontal direction, and are backed by a single tracheal enlargement. This enlargement is connected to the anterior spiracles at the dorsofrontal thorax and is continuous with the tracheal network in the thorax and in the abdomen. Analyses of responses of auditory afferents and interneurons show that the ear is broadly tuned, with a sensitivity peak at 5 kHz. Single-cell recordings of auditory interneurons indicate a frequency- and intensity-dependent tuning, whereby some neurons react best to 9 kHz, the peak frequency of the host’s calling song. The results are compared to the convergently evolved ear in Tachinidae (Diptera). PMID:27538415

  7. Developmental changes in the inferior frontal cortex for selecting semantic representations

    PubMed Central

    Lee, Shu-Hui; Booth, James R.; Chen, Shiou-Yuan; Chou, Tai-Li

    2012-01-01

    Functional magnetic resonance imaging (fMRI) was used to examine the neural correlates of semantic judgments to Chinese words in a group of 10–15 year old Chinese children. Two semantic tasks were used: visual–visual versus visual–auditory presentation. The first word was visually presented (i.e. character) and the second word was either visually or auditorily presented, and the participant had to determine if these two words were related in meaning. Different from English, Chinese has many homophones in which each spoken word corresponds to many characters. The visual–auditory task, therefore, required greater engagement of cognitive control for the participants to select a semantically appropriate answer for the second homophonic word. Weaker association pairs produced greater activation in the mid-ventral region of left inferior frontal gyrus (BA 45) for both tasks. However, this effect was stronger for the visual–auditory task than for the visual–visual task and this difference was stronger for older compared to younger children. The findings suggest greater involvement of semantic selection mechanisms in the cross-modal task requiring the access of the appropriate meaning of homophonic spoken words, especially for older children. PMID:22337757

  8. Effects of Nicotine and Ethanol on Indices of Reward and Sensory-Motor Function in Rats: Implications for the Positive Epidemiologic Relationship Between the Use of Cigarettes and the Use of Alcohol

    DTIC Science & Technology

    1997-10-07

    include the auditory nerve, the ventral cochlear nucleus, nuclei of the lateral lemniscus, nucleus reticularis pontis caudalis, spinal neuron, and lower...chambers. In addition, there was a significant effect of nicotine and ethanol to reduce the ratio of dopamine/DOPAC in nucleus accumbens. Because...dopaminergic activity in nucleus accumbens is known to mediate nicotine reinforcement, reductions in the ratio of dopamine/DOPAC (perhaps indicating an

  9. Frequency-following and connectivity of different visual areas in response to contrast-reversal stimulation.

    PubMed

    Stephen, Julia M; Ranken, Doug F; Aine, Cheryl J

    2006-01-01

    The sensitivity of visual areas to different temporal frequencies, as well as the functional connections between these areas, was examined using magnetoencephalography (MEG). Alternating circular sinusoids (0, 3.1, 8.7 and 14 Hz) were presented to foveal and peripheral locations in the visual field to target ventral and dorsal stream structures, respectively. It was hypothesized that higher temporal frequencies would preferentially activate dorsal stream structures. To determine the effect of frequency on the cortical response we analyzed the late time interval (220-770 ms) using a multi-dipole spatio-temporal analysis approach to provide source locations and timecourses for each condition. As an exploratory aspect, we performed cross-correlation analysis on the source timecourses to determine which sources responded similarly within conditions. Contrary to predictions, dorsal stream areas were not activated more frequently during high temporal frequency stimulation. However, across cortical sources the frequency-following response showed a difference, with significantly higher power at the second harmonic for the 3.1 and 8.7 Hz stimulation and at the first and second harmonics for the 14 Hz stimulation with this pattern seen robustly in area V1. Cross-correlations of the source timecourses showed that both low- and high-order visual areas, including dorsal and ventral stream areas, were significantly correlated in the late time interval. The results imply that frequency information is transferred to higher-order visual areas without translation. Despite the less complex waveforms seen in the late interval of time, the cross-correlation results show that visual, temporal and parietal cortical areas are intricately involved in late-interval visual processing.
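    The cross-correlation step described above can be illustrated with a minimal sketch (hypothetical data and function names, not the authors' analysis code): standardize two source timecourses, compute their normalized cross-correlation within a plausible lag window, and report the peak correlation and its lag.

    ```python
    import numpy as np

    def peak_crosscorr(x, y, fs, max_lag_s=0.1):
        """Normalized cross-correlation of two source timecourses.
        Returns the peak correlation within +/- max_lag_s and its lag (s)."""
        x = (x - x.mean()) / x.std()
        y = (y - y.mean()) / y.std()
        n = len(x)
        lags = np.arange(-n + 1, n)
        cc = np.correlate(x, y, mode="full") / n
        keep = np.abs(lags) <= max_lag_s * fs      # plausible lags only
        cc, lags = cc[keep], lags[keep]
        i = np.argmax(np.abs(cc))
        return cc[i], lags[i] / fs

    # synthetic example: the second timecourse is a 20 ms delayed copy
    fs = 1000.0                                    # 1 kHz sampling rate
    rng = np.random.default_rng(0)
    a = rng.standard_normal(550)                   # 550 ms "timecourse"
    b = np.roll(a, 20)                             # delayed by 20 samples
    r, lag = peak_crosscorr(a, b, fs)              # peak r near 1, |lag| = 20 ms
    ```

    Applied pairwise across all localized sources within a time window, the peak values form the correlation matrix from which significantly coupled source pairs can be identified.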

  10. Motor contributions to the temporal precision of auditory attention

    PubMed Central

    Morillon, Benjamin; Schroeder, Charles E.; Wyart, Valentin

    2014-01-01

    In temporal—or dynamic—attending theory, it is proposed that motor activity helps to synchronize temporal fluctuations of attention with the timing of events in a task-relevant stream, thus facilitating sensory selection. Here we develop a mechanistic behavioural account for this theory by asking human participants to track a slow reference beat, by noiseless finger pressing, while extracting auditory target tones delivered on-beat and interleaved with distractors. We find that overt rhythmic motor activity improves the segmentation of auditory information by enhancing sensitivity to target tones while actively suppressing distractor tones. This effect is triggered by cyclic fluctuations in sensory gain locked to individual motor acts, scales parametrically with the temporal predictability of sensory events and depends on the temporal alignment between motor and attention fluctuations. Together, these findings reveal how top-down influences associated with a rhythmic motor routine sharpen sensory representations, enacting auditory ‘active sensing’. PMID:25314898

  11. Motor contributions to the temporal precision of auditory attention.

    PubMed

    Morillon, Benjamin; Schroeder, Charles E; Wyart, Valentin

    2014-10-15

    In temporal-or dynamic-attending theory, it is proposed that motor activity helps to synchronize temporal fluctuations of attention with the timing of events in a task-relevant stream, thus facilitating sensory selection. Here we develop a mechanistic behavioural account for this theory by asking human participants to track a slow reference beat, by noiseless finger pressing, while extracting auditory target tones delivered on-beat and interleaved with distractors. We find that overt rhythmic motor activity improves the segmentation of auditory information by enhancing sensitivity to target tones while actively suppressing distractor tones. This effect is triggered by cyclic fluctuations in sensory gain locked to individual motor acts, scales parametrically with the temporal predictability of sensory events and depends on the temporal alignment between motor and attention fluctuations. Together, these findings reveal how top-down influences associated with a rhythmic motor routine sharpen sensory representations, enacting auditory 'active sensing'.

  12. Development and modulation of intrinsic membrane properties control the temporal precision of auditory brain stem neurons.

    PubMed

    Franzen, Delwen L; Gleiss, Sarah A; Berger, Christina; Kümpfbeck, Franziska S; Ammer, Julian J; Felmy, Felix

    2015-01-15

    Passive and active membrane properties determine the voltage responses of neurons. Within the auditory brain stem, refinements in these intrinsic properties during late postnatal development usually generate short integration times and precise action-potential generation. This developmentally acquired temporal precision is crucial for auditory signal processing. How the interactions of these intrinsic properties develop in concert to enable auditory neurons to transfer information with high temporal precision has not yet been elucidated in detail. Here, we show how the developmental interaction of intrinsic membrane parameters generates high firing precision. We performed in vitro recordings from neurons of postnatal days 9-28 in the ventral nucleus of the lateral lemniscus of Mongolian gerbils, an auditory brain stem structure that converts excitatory to inhibitory information with high temporal precision. During this developmental period, the input resistance and capacitance decrease, and action potentials acquire faster kinetics and enhanced precision. Depending on the stimulation time course, the input resistance and capacitance contribute differentially to action-potential thresholds. The decrease in input resistance, however, is sufficient to explain the enhanced action-potential precision. Alterations in passive membrane properties also interact with a developmental change in potassium currents to generate the emergence of the mature firing pattern, characteristic of coincidence-detector neurons. Cholinergic receptor-mediated depolarizations further modulate this intrinsic excitability profile by eliciting changes in the threshold and firing pattern, irrespective of the developmental stage. Thus our findings reveal how intrinsic membrane properties interact developmentally to promote temporally precise information processing. Copyright © 2015 the American Physiological Society.

  13. Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing

    PubMed Central

    Rauschecker, Josef P; Scott, Sophie K

    2010-01-01

    Speech and language are considered uniquely human abilities: animals have communication systems, but they do not match human linguistic skills in terms of recursive structure and combinatorial power. Yet, in evolution, spoken language must have emerged from neural mechanisms at least partially available in animals. In this paper, we will demonstrate how our understanding of speech perception, one important facet of language, has profited from findings and theory in nonhuman primate studies. Chief among these are physiological and anatomical studies showing that primate auditory cortex, across species, shows patterns of hierarchical structure, topographic mapping and streams of functional processing. We will identify roles for different cortical areas in the perceptual processing of speech and review functional imaging work in humans that bears on our understanding of how the brain decodes and monitors speech. A new model connects structures in the temporal, frontal and parietal lobes linking speech perception and production. PMID:19471271

  14. Auditory cortex of bats and primates: managing species-specific calls for social communication

    PubMed Central

    Kanwal, Jagmeet S.; Rauschecker, Josef P.

    2014-01-01

    Individuals of many animal species communicate with each other using sounds or “calls” that are made up of basic acoustic patterns and their combinations. We are interested in questions about the processing of communication calls and their representation within the mammalian auditory cortex. Our studies compare in particular two species for which a large body of data has accumulated: the mustached bat and the rhesus monkey. We conclude that the brains of both species share a number of functional and organizational principles, which differ only in the extent to which and how they are implemented. For instance, neurons in both species use “combination-sensitivity” (nonlinear spectral and temporal integration of stimulus components) as a basic mechanism to enable exquisite sensitivity to and selectivity for particular call types. Whereas combination-sensitivity is already found abundantly at the primary auditory cortical and also at subcortical levels in bats, it becomes prevalent only at the level of the lateral belt in the secondary auditory cortex of monkeys. A parallel-hierarchical framework for processing complex sounds up to the level of the auditory cortex in bats and an organization into parallel-hierarchical, cortico-cortical auditory processing streams in monkeys is another common principle. Response specialization of neurons seems to be more pronounced in bats than in monkeys, whereas a functional specialization into “what” and “where” streams in the cerebral cortex is more pronounced in monkeys than in bats. These differences, in part, are due to the increased number and larger size of auditory areas in the parietal and frontal cortex in primates. Accordingly, the computational prowess of neural networks and the functional hierarchy resulting in specializations are established early and accelerated across brain regions in bats. 
The principles proposed here for the neural “management” of species-specific calls in bats and primates can be tested by studying the details of call processing in additional species. Also, computational modeling in conjunction with coordinated studies in bats and monkeys can help to clarify the fundamental question of perceptual invariance (or “constancy”) in call recognition, which has obvious relevance for understanding speech perception and its disorders in humans. PMID:17485400

  15. Auralization of CFD Vorticity Using an Auditory Illusion

    NASA Astrophysics Data System (ADS)

    Volpe, C. R.

    2005-12-01

    One way in which scientists and engineers interpret large quantities of data is through a process called visualization, i.e. generating graphical images that capture essential characteristics and highlight interesting relationships. Another approach, which has received far less attention, is to present complex information with sound. This approach, called "auralization" or "sonification", is the auditory analog of visualization. Early work in data auralization frequently involved directly mapping some variable in the data to a sound parameter, such as pitch or volume. Multi-variate data could be auralized by mapping several variables to several sound parameters simultaneously. A clear drawback of this approach is the limited practical range of sound parameters that can be presented to human listeners without exceeding their range of perception or comfort. A software auralization system built upon an existing visualization system is briefly described. This system incorporates an aural presentation synchronously and interactively with an animated scientific visualization, so that alternate auralization techniques can be investigated. One such alternate technique involves auditory illusions: sounds which trick the listener into perceiving something other than what is actually being presented. This software system will be used to present an auditory illusion, known for decades among cognitive psychologists, which produces a sound that seems to ascend or descend endlessly in pitch. The applicability of this illusion for presenting Computational Fluid Dynamics data will be demonstrated. CFD data is frequently visualized with thin stream-lines, but thicker stream-ribbons and stream-tubes can also be used, which rotate to convey fluid vorticity. But a purely graphical presentation can yield drawbacks of its own. Thicker stream-tubes can be self-obscuring, and can obscure other scene elements as well, thus motivating a different approach, such as using sound. 
    Naturally, the simple approach of mapping clockwise and counterclockwise rotations to actual pitch increases and decreases eventually results in sounds that the listener cannot hear. In this alternate presentation using an auditory illusion, repeated rotations of a stream-tube are replaced with continual increases or decreases in apparent pitch. These apparent pitch changes can continue without bound, yet never exceed the range of frequencies that the listener can hear. The effectiveness of this presentation technique has been studied, and empirical results, obtained through formal user testing and statistical analysis, are presented. These results demonstrate that an aural data presentation using an auditory illusion can improve performance in locating key data characteristics, a task that demonstrates a certain level of understanding of the data. The experiments show that this holds true even when the user expresses a subjective preference for, and greater confidence in, a visual presentation. The CFD data used in the research comes from a number of different industrial domains, but the advantages of this technique could be equally applicable to the study of earth sciences involving fluid mechanics, such as atmospheric or ocean sciences. Furthermore, the approach is applicable not only to CFD data, but to any type of data in which a quantity that is cyclic in nature, such as orientation, needs to be presented. Although the techniques and tools were originally developed with scientists and engineers in mind, they can also be used to aid students, particularly those who are visually impaired or who have difficulty interpreting certain spatial relationships visually.
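    The endlessly ascending (or descending) tone described in this abstract is the classic Shepard illusion: partials spaced one octave apart glide upward under a fixed spectral envelope, so each partial fades out at the top as another fades in at the bottom. A minimal synthesis sketch follows; all parameter values and names are illustrative assumptions, not taken from the system described above.

```python
import numpy as np

def shepard_tone(duration=4.0, sr=22050, base=27.5, n_octaves=8, sweep_rate=0.25):
    """Synthesize a Shepard-style endlessly rising tone (illustrative sketch).

    Each of n_octaves partials drifts upward by `sweep_rate` octaves per
    second and wraps around, while a fixed Gaussian envelope over
    log-frequency keeps partials near the wrap point nearly inaudible.
    """
    t = np.arange(int(duration * sr)) / sr
    pos = (sweep_rate * t) % 1.0          # fractional octave position, wraps 0..1
    out = np.zeros_like(t)
    center = n_octaves / 2.0              # envelope peak, in octaves above base
    width = n_octaves / 4.0               # envelope width, in octaves
    for k in range(n_octaves):
        octv = (k + pos) % n_octaves      # this partial's octave position over time
        freq = base * 2.0 ** octv
        amp = np.exp(-0.5 * ((octv - center) / width) ** 2)
        # integrate instantaneous frequency to obtain a smooth phase
        phase = 2 * np.pi * np.cumsum(freq) / sr
        out += amp * np.sin(phase)
    return out / np.max(np.abs(out))      # normalize to [-1, 1]

tone = shepard_tone()
```

    Because the illusion is periodic in `1 / sweep_rate` seconds, the same idea maps naturally onto the cyclic quantity discussed above (stream-tube rotation): each full rotation advances the apparent pitch without ever leaving the audible band.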

  16. Tuning in to the Voices: A Multisite fMRI Study of Auditory Hallucinations

    PubMed Central

    Ford, Judith M.; Roach, Brian J.; Jorgensen, Kasper W.; Turner, Jessica A.; Brown, Gregory G.; Notestine, Randy; Bischoff-Grethe, Amanda; Greve, Douglas; Wible, Cynthia; Lauriello, John; Belger, Aysenil; Mueller, Bryon A.; Calhoun, Vincent; Preda, Adrian; Keator, David; O'Leary, Daniel S.; Lim, Kelvin O.; Glover, Gary; Potkin, Steven G.; Mathalon, Daniel H.

    2009-01-01

    Introduction: Auditory hallucinations or voices are experienced by 75% of people diagnosed with schizophrenia. We presumed that auditory cortex of schizophrenia patients who experience hallucinations is tonically “tuned” to internal auditory channels, at the cost of processing external sounds, both speech and nonspeech. Accordingly, we predicted that patients who hallucinate would show less auditory cortical activation to external acoustic stimuli than patients who did not. Methods: At 9 Functional Imaging Biomedical Informatics Research Network (FBIRN) sites, whole-brain images from 106 patients and 111 healthy comparison subjects were collected while subjects performed an auditory target detection task. Data were processed with the FBIRN processing stream. A region of interest analysis extracted activation values from primary (BA41) and secondary auditory cortex (BA42), auditory association cortex (BA22), and middle temporal gyrus (BA21). Patients were sorted into hallucinators (n = 66) and nonhallucinators (n = 40) based on symptom ratings done during the previous week. Results: Hallucinators had less activation to probe tones in left primary auditory cortex (BA41) than nonhallucinators. This effect was not seen on the right. Discussion: Although “voices” are the anticipated sensory experience, it appears that even primary auditory cortex is “turned on” and “tuned in” to process internal acoustic information at the cost of processing external sounds. Although this study was not designed to probe cortical competition for auditory resources, we were able to take advantage of the data and find significant effects, perhaps because of the power afforded by such a large sample. PMID:18987102

  17. Neural Encoding and Decoding with Deep Learning for Dynamic Natural Vision.

    PubMed

    Wen, Haiguang; Shi, Junxing; Zhang, Yizhen; Lu, Kun-Han; Cao, Jiayue; Liu, Zhongming

    2017-10-20

    A convolutional neural network (CNN) trained for image recognition has been shown to explain cortical responses to static pictures in ventral-stream areas. Here, we further showed that such a CNN could reliably predict and decode functional magnetic resonance imaging data from humans watching natural movies, despite its lack of any mechanism to account for temporal dynamics or feedback processing. Using separate data, encoding and decoding models were developed and evaluated for describing the bi-directional relationships between the CNN and the brain. Through the encoding models, the CNN-predicted areas covered not only the ventral stream but also the dorsal stream, albeit to a lesser degree; single-voxel responses were visualized as the specific pixel patterns that drove them, revealing the distinct representation of individual cortical locations; and cortical activation was synthesized from natural images at high throughput to map category representation, contrast, and selectivity. Through the decoding models, fMRI signals were directly decoded to estimate feature representations in both visual and semantic spaces, for direct visual reconstruction and semantic categorization, respectively. These results corroborate, generalize, and extend previous findings, and highlight the value of using deep learning, as an all-in-one model of the visual cortex, to understand and decode natural vision. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
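    The encoding-model idea in studies like this one is typically a regularized linear map from CNN-layer activations to each voxel's response. The sketch below uses synthetic stand-in data and a generic closed-form ridge solver; it illustrates the modeling pattern only and is not the paper's actual pipeline (dimensions, `alpha`, and variable names are assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: CNN-layer features per movie time point (T x F) and fMRI
# responses per voxel (T x V). In a real study these would be unit
# activations from a pretrained CNN and measured BOLD time series.
T, F, V = 200, 50, 10
X = rng.standard_normal((T, F))
W_true = rng.standard_normal((F, V))
Y = X @ W_true + 0.1 * rng.standard_normal((T, V))

def fit_ridge(X, Y, alpha=1.0):
    """Closed-form ridge regression: one linear encoding model per voxel."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)

W = fit_ridge(X, Y)
Y_hat = X @ W

# Encoding accuracy: per-voxel correlation between predicted and measured response.
r = np.array([np.corrcoef(Y[:, v], Y_hat[:, v])[0, 1] for v in range(V)])
```

    In practice the correlations `r` would be computed on held-out data and thresholded for significance, voxel by voxel.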

  18. Kinesthetic working memory and action control within the dorsal stream.

    PubMed

    Fiehler, Katja; Burke, Michael; Engel, Annerose; Bien, Siegfried; Rösler, Frank

    2008-02-01

    There is wide agreement that the "dorsal (action) stream" processes visual information for movement control. However, movements depend not only on vision but also on tactile and kinesthetic information (=haptics). Using functional magnetic resonance imaging, the present study investigates to what extent networks within the dorsal stream are also utilized for kinesthetic action control and whether they are also involved in kinesthetic working memory. Fourteen blindfolded participants performed a delayed-recognition task in which right-handed movements had to be encoded, maintained, and later recognized without any visual feedback. Encoding of hand movements activated somatosensory areas, superior parietal lobe (dorsodorsal stream), anterior intraparietal sulcus (aIPS) and adjoining areas (ventrodorsal stream), premotor cortex, and occipitotemporal cortex (ventral stream). Short-term maintenance of kinesthetic information elicited load-dependent activity in the aIPS and adjacent anterior portion of the superior parietal lobe (ventrodorsal stream) of the left hemisphere. We propose that the action representation system of the dorsodorsal and ventrodorsal stream is utilized not only for visual but also for kinesthetic action control. Moreover, the present findings demonstrate that networks within the ventrodorsal stream, in particular the left aIPS and closely adjacent areas, are also engaged in working memory maintenance of kinesthetic information.

  19. On the usefulness of 'what' and 'where' pathways in vision.

    PubMed

    de Haan, Edward H F; Cowey, Alan

    2011-10-01

    The primate visual brain is classically portrayed as a large number of separate 'maps', each dedicated to the processing of specific visual cues, such as colour, motion or faces and their many features. In order to understand this fractionated architecture, the concept of cortical 'pathways' or 'streams' was introduced. In the currently prevailing view, the different maps are organised hierarchically into two major pathways, one involved in recognition and memory (the ventral stream or 'what' pathway) and the other in the programming of action (the dorsal stream or 'where' pathway). In this review, we question this heuristically influential but potentially misleading linear hierarchical pathway model and argue instead for a 'patchwork' or network model. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Effect of attentional load on audiovisual speech perception: evidence from ERPs.

    PubMed

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.

  1. Integration of faces and vocalizations in ventral prefrontal cortex: Implications for the evolution of audiovisual speech

    PubMed Central

    Romanski, Lizabeth M.

    2012-01-01

    The integration of facial gestures and vocal signals is an essential process in human communication and relies on an interconnected circuit of brain regions, including language regions in the inferior frontal gyrus (IFG). Studies have determined that ventral prefrontal cortical regions in macaques [e.g., the ventrolateral prefrontal cortex (VLPFC)] share cytoarchitectonic features with cortical areas in the human IFG, suggesting structural homology. Anterograde and retrograde tracing studies show that macaque VLPFC receives afferents from the superior and inferior temporal gyrus, which provide complex auditory and visual information, respectively. Moreover, physiological studies have shown that single neurons in VLPFC integrate species-specific face and vocal stimuli. Although bimodal responses may be found across a wide region of prefrontal cortex, vocalization-responsive cells that also respond to faces are found mainly in anterior VLPFC. This suggests that VLPFC may be specialized to process and integrate social communication information, just as the IFG is specialized to process and integrate speech and gestures in the human brain. PMID:22723356

  2. En1 is necessary for survival of neurons in the ventral nuclei of the lateral lemniscus.

    PubMed

    Altieri, Stefanie C; Zhao, Tianna; Jalabi, Walid; Romito-DiGiacomo, Rita R; Maricich, Stephen M

    2016-11-01

    The ventral nuclei of the lateral lemniscus (VNLL) are part of the central auditory system thought to participate in temporal sound processing. While the timing and location of VNLL neurogenesis have been determined, the genetic factors that regulate VNLL neuron development are unknown. Here, we use genetic fate-mapping techniques to demonstrate that all glycinergic and glycinergic/GABAergic VNLL neurons derive from a cellular lineage that expresses the homeobox transcription factor Engrailed 1 (En1). We also show that En1 deletion does not affect migration or adoption of a neuronal cell fate but does lead to VNLL neuron death during development. Furthermore, En1 deletion blocks expression of the transcription factor FoxP1 in a subset of VNLL neurons. Together, these data identify En1 as a gene important for VNLL neuron development and survival. © 2016 Wiley Periodicals, Inc. Develop Neurobiol 76: 1266-1274, 2016. © 2016 Wiley Periodicals, Inc.

  3. Do Visual Illusions Probe the Visual Brain?: Illusions in Action without a Dorsal Visual Stream

    ERIC Educational Resources Information Center

    Coello, Yann; Danckert, James; Blangero, Annabelle; Rossetti, Yves

    2007-01-01

    Visual illusions have been shown to affect perceptual judgements more so than motor behaviour, which was interpreted as evidence for a functional division of labour within the visual system. The dominant perception-action theory argues that perception involves a holistic processing of visual objects or scenes, performed within the ventral,…

  4. A Task-Dependent Causal Role for Low-Level Visual Processes in Spoken Word Comprehension

    ERIC Educational Resources Information Center

    Ostarek, Markus; Huettig, Falk

    2017-01-01

    It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual…

  5. Early Cerebral Constraints on Reading Skills in School-Age Children: An MRI Study

    ERIC Educational Resources Information Center

    Borst, G.; Cachia, A.; Tissier, C.; Ahr, E.; Simon, G.; Houdé, O.

    2016-01-01

    Reading relies on a left-lateralized network of brain areas that include the pre-lexical processing regions of the ventral stream. Specifically, a region in the left lateral occipitotemporal sulcus (OTS) is consistently more activated for visual presentations of words than for other categories of stimuli. This region undergoes dramatic changes at…

  6. Developmental Differences for Word Processing in the Ventral Stream

    ERIC Educational Resources Information Center

    Olulade, Olumide A.; Flowers, D. Lynn; Napoliello, Eileen M.; Eden, Guinevere F.

    2013-01-01

    The visual word form system (VWFS), located in the occipito-temporal cortex, is involved in orthographic processing of visually presented words (Cohen et al., 2002). Recent fMRI studies in children and adults have demonstrated a gradient of increasing word-selectivity along the posterior-to-anterior axis of this system (Vinckier et al., 2007), yet…

  7. Computerized tomography of the otic capsule and otoliths in the oyster toadfish, Opsanus tau.

    PubMed

    Edds-Walton, Peggy L; Arruda, Julie; Fay, Richard R; Ketten, Darlene R

    2015-02-01

    The neurocranium of the toadfish (Opsanus tau) exhibits a distinct translucent region in the otic capsule (OC) that may have functional significance for the auditory pathway. This study used ultrahigh resolution computerized tomography (100 µm voxels) to compare the relative density of three sites along the OC (dorsolateral, midlateral, and ventromedial) and two reference sites (dorsal: supraoccipital crest; ventral: parasphenoid bone) in the neurocranium. Higher attenuation occurs where structural density is greater; thus, we compared the X-ray attenuations measured, which provided a measure of relative density. The maximum attenuation value was recorded for each of the five sites (x and y) on consecutive sections throughout the OC and for each of the three calcareous otoliths associated with the sensory maculae (lagena, saccule, and utricle) in the OC. All three otoliths had higher attenuations than any sites in the neurocranium. Both dorsal and ventral reference sites (supraoccipital crest and parasphenoid bone, respectively) had attenuation levels consistent with calcified bone and had relatively small, irregular variations along the length of the OC in all individuals. The lowest relative attenuations (lowest densities) occurred consistently at the three sites along the OC. In addition, the lowest attenuations measured along the OC occurred at the ventromedial site around the saccular otolith for all seven fish. The decrease in bone density along the OC is consistent with the hypothesis that there is a low-density channel in the skull to facilitate transmission of acoustic stimuli to the auditory endorgans of the ear. © 2014 Wiley Periodicals, Inc.

  8. Differential hemispheric and visual stream contributions to ensemble coding of crowd emotion

    PubMed Central

    Im, Hee Yeon; Albohn, Daniel N.; Steiner, Troy G.; Cushing, Cody A.; Adams, Reginald B.; Kveraga, Kestutis

    2017-01-01

    In crowds, where scrutinizing individual facial expressions is inefficient, humans can make snap judgments about the prevailing mood by reading “crowd emotion”. We investigated how the brain accomplishes this feat in a set of behavioral and fMRI studies. Participants were asked to either avoid or approach one of two crowds of faces presented in the left and right visual hemifields. Perception of crowd emotion was improved when crowd stimuli contained goal-congruent cues and was highly lateralized to the right hemisphere. The dorsal visual stream was preferentially activated in crowd emotion processing, with activity in the intraparietal sulcus and superior frontal gyrus predicting perceptual accuracy for crowd emotion perception, whereas activity in the fusiform cortex in the ventral stream predicted better perception of individual facial expressions. Our findings thus reveal significant behavioral differences and differential involvement of the hemispheres and the major visual streams in reading crowd versus individual face expressions. PMID:29226255

  9. What puts the how in where? Tool use and the divided visual streams hypothesis.

    PubMed

    Frey, Scott H

    2007-04-01

    An influential theory suggests that the dorsal (occipito-parietal) visual stream computes representations of objects for purposes of guiding actions (determining 'how') independently of ventral (occipito-temporal) stream processes supporting object recognition and semantic processing (determining 'what'). Yet, the ability of the dorsal stream alone to account for one of the most common forms of human action, tool use, is limited. While experience-dependent modifications to existing dorsal stream representations may explain simple tool use behaviors (e.g., using sticks to extend reach) found among a variety of species, skillful use of manipulable artifacts (e.g., cups, hammers, pencils) requires in addition access to semantic representations of objects' functions and uses. Functional neuroimaging suggests that this latter information is represented in a left-lateralized network of temporal, frontal and parietal areas. I submit that the well-established dominance of the human left hemisphere in the representation of familiar skills stems from the ability for this acquired knowledge to influence the organization of actions within the dorsal pathway.

  10. Subliminal Speech Perception and Auditory Streaming

    ERIC Educational Resources Information Center

    Dupoux, Emmanuel; de Gardelle, Vincent; Kouider, Sid

    2008-01-01

    Current theories of consciousness assume a qualitative dissociation between conscious and unconscious processing: while subliminal stimuli only elicit a transient activity, supraliminal stimuli have long-lasting influences. Nevertheless, the existence of this qualitative distinction remains controversial, as past studies confounded awareness and…

  11. Robust decoding of selective auditory attention from MEG in a competing-speaker environment via state-space modeling

    PubMed Central

    Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z.; Shamma, Shihab A.; Babadi, Behtash

    2015-01-01

    The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy. PMID:26436490
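    A bare-bones stand-in for the state-space approach described above: model the attentional state as a two-state Markov chain with sticky transitions and Gaussian emissions, and track it with the forward algorithm over simulated per-window evidence. Every number here is a toy assumption; the paper's actual decoder fits its parameters by EM and reports MAP estimates with confidence intervals.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy evidence: each 1-s window yields a noisy score favoring speaker 1 (+)
# or speaker 2 (-). The true attentional state switches halfway through.
true_state = np.array([0] * 30 + [1] * 30)   # 0 = speaker 1, 1 = speaker 2
evidence = np.where(true_state == 0, 1.0, -1.0) + rng.standard_normal(60)

p_stay = 0.95                                # attention rarely switches
trans = np.array([[p_stay, 1 - p_stay],
                  [1 - p_stay, p_stay]])
means = np.array([1.0, -1.0])                # emission means per state

def gauss(x, mu, sigma=1.0):
    """Unnormalized Gaussian likelihood (normalization cancels below)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Forward filtering: running posterior over the attended speaker.
post = np.array([0.5, 0.5])
decoded = []
for e in evidence:
    post = trans.T @ post                    # predict through the transition model
    post = post * gauss(e, means)            # update with the window's evidence
    post /= post.sum()
    decoded.append(post.argmax())

accuracy = np.mean(np.array(decoded) == true_state)
```

    The sticky transition prior is what buys temporal smoothing: isolated noisy windows are overridden, while sustained evidence flips the posterior within a few windows of the true switch.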

  12. The role of spatiotemporal and spectral cues in segregating short sound events: evidence from auditory Ternus display.

    PubMed

    Wang, Qingcui; Bao, Ming; Chen, Lihan

    2014-01-01

    Previous studies using auditory sequences with rapid repetition of tones revealed that spatiotemporal cues and spectral cues are important for fusing or segregating sound streams. However, the perceptual grouping was partially driven by the cognitive processing of the periodicity cues of the long sequence. Here, we investigate whether perceptual groupings (spatiotemporal grouping vs. frequency grouping) could also be applicable to short auditory sequences, where auditory perceptual organization is mainly subserved by lower levels of perceptual processing. To answer that question, we conducted two experiments using an auditory Ternus display. The display was composed of three speakers (A, B and C), with each speaker consecutively emitting one sound consisting of two frames (AB and BC). Experiment 1 manipulated both spatial and temporal factors. We implemented three 'within-frame intervals' (WFIs, or intervals between A and B, and between B and C), seven 'inter-frame intervals' (IFIs, or intervals between AB and BC) and two different speaker layouts (inter-distance of speakers: near or far). Experiment 2 manipulated the frequency difference between the two auditory frames, in addition to the spatiotemporal cues as in Experiment 1. Listeners were required to make two-alternative forced choices (2AFC) to report the perception of a given Ternus display: element motion (auditory apparent motion from sound A to B to C) or group motion (auditory apparent motion from sound 'AB' to 'BC'). The results indicate that the perceptual grouping of short auditory sequences (materialized by the perceptual decisions of the auditory Ternus display) was modulated by temporal and spectral cues, with the latter contributing more to segregating auditory events. Spatial layout plays a lesser role in perceptual organization. These results could be accounted for by the 'peripheral channeling' theory.

  13. Areas activated during naturalistic reading comprehension overlap topological visual, auditory, and somatomotor maps

    PubMed Central

    2016-01-01

    Abstract Cortical mapping techniques using fMRI have been instrumental in identifying the boundaries of topological (neighbor‐preserving) maps in early sensory areas. The presence of topological maps beyond early sensory areas raises the possibility that they might play a significant role in other cognitive systems, and that topological mapping might help to delineate areas involved in higher cognitive processes. In this study, we combine surface‐based visual, auditory, and somatomotor mapping methods with a naturalistic reading comprehension task in the same group of subjects to provide a qualitative and quantitative assessment of the cortical overlap between sensory‐motor maps in all major sensory modalities, and reading processing regions. Our results suggest that cortical activation during naturalistic reading comprehension overlaps more extensively with topological sensory‐motor maps than has been heretofore appreciated. Reading activation in regions adjacent to occipital lobe and inferior parietal lobe almost completely overlaps visual maps, whereas a significant portion of frontal activation for reading in dorsolateral and ventral prefrontal cortex overlaps both visual and auditory maps. Even classical language regions in superior temporal cortex are partially overlapped by topological visual and auditory maps. By contrast, the main overlap with somatomotor maps is restricted to a small region on the anterior bank of the central sulcus near the border between the face and hand representations of M‐I. Hum Brain Mapp 37:2784–2810, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc. PMID:27061771

  14. Functional correlates of the anterolateral processing hierarchy in human auditory cortex.

    PubMed

    Chevillet, Mark; Riesenhuber, Maximilian; Rauschecker, Josef P

    2011-06-22

    Converging evidence supports the hypothesis that an anterolateral processing pathway mediates sound identification in auditory cortex, analogous to the role of the ventral cortical pathway in visual object recognition. Studies in nonhuman primates have characterized the anterolateral auditory pathway as a processing hierarchy, composed of three anatomically and physiologically distinct initial stages: core, belt, and parabelt. In humans, potential homologs of these regions have been identified anatomically, but reliable and complete functional distinctions between them have yet to be established. Because the anatomical locations of these fields vary across subjects, investigations of potential homologs between monkeys and humans require these fields to be defined in single subjects. Using functional MRI, we presented three classes of sounds (tones, band-passed noise bursts, and conspecific vocalizations), equivalent to those used in previous monkey studies. In each individual subject, three regions showing functional similarities to macaque core, belt, and parabelt were readily identified. Furthermore, the relative sizes and locations of these regions were consistent with those reported in human anatomical studies. Our results demonstrate that the functional organization of the anterolateral processing pathway in humans is largely consistent with that of nonhuman primates. Because our scanning sessions last only 15 min/subject, they can be run in conjunction with other scans. This will enable future studies to characterize functional modules in human auditory cortex at a level of detail previously possible only in visual cortex. Furthermore, the approach of using identical schemes in both humans and monkeys will aid with establishing potential homologies between them.

  15. SDF1 regulates leading process branching and speed of migrating interneurons

    PubMed Central

    Lysko, Daniel E.; Putt, Mary; Golden, Jeffrey A.

    2011-01-01

    Cell migration is required for normal embryonic development, yet how cells navigate complex paths while integrating multiple guidance cues remains poorly understood. During brain development, interneurons migrate from the ventral ganglionic eminence to the cerebral cortex within several migratory streams. They must exit these streams to invade the cortical plate. While SDF1-signaling is necessary for normal interneuron stream migration, how they switch from tangential stream migration to invade the cortical plate is unknown. Here we demonstrate that SDF1-signaling reduces interneuron branching frequency by reducing cAMP levels via a Gi-signaling pathway using an in vitro mouse explant system, resulting in the maintenance of stream migration. Blocking SDF1-signaling, or increasing branching frequency, results in stream exit and cortical plate invasion in mouse brain slices. These data support a novel model to understand how migrating interneurons switch from tangential migration to invade the cortical plate in which reducing SDF1-signaling increases leading process branching and slows the migration rate, permitting migrating interneurons to sense cortically directed guidance cues. PMID:21289183

  16. Effect of emotional valence on retrieval-related recapitulation of encoding activity in the ventral visual stream

    PubMed Central

    Kark, Sarah M.; Kensinger, Elizabeth A.

    2015-01-01

    While prior work has shown greater retrieval-related reactivation in the ventral visual stream for emotional stimuli compared to neutral stimuli, the effects of valence on retrieval-related recapitulation of successful encoding processes (Dm effects) have yet to be investigated. Here, seventeen participants (aged 19–35) studied line drawings of negative, positive, or neutral images followed immediately by the complete photo. After a 20-minute delay, participants performed a challenging recognition memory test, distinguishing the studied line drawing outlines from novel ones. First, results replicated earlier work by demonstrating that negative and positive hits elicited greater ventral occipito-temporal cortex (VOTC) activity than neutral hits during both encoding and retrieval. Moreover, the amount of activation in portions of the VOTC correlated with the magnitude of participants’ emotional memory enhancement. Second, results revealed significant retrieval-related recapitulation of Dm effects (Hits > Misses) in VOTC (anterior inferior temporal gyri) only for negative stimuli. Third, connectivity between the amygdala and fusiform gyrus during the encoding of negative stimuli increased the likelihood of fusiform activation during successful retrieval. Together, these results suggest that recapitulation in posterior VOTC reflects memory for the affective dimension of the stimuli (Emotional Hits > Neutral Hits) and the magnitude of activation in some of these regions is related to superior emotional memory. Moreover, for negative stimuli, recapitulation in more anterior portions of the VOTC is greater for remembered than forgotten items. The current study offers new evidence for effects of emotion on recapitulation of activity and functional connectivity in support of memory. PMID:26459096

  17. Effect of emotional valence on retrieval-related recapitulation of encoding activity in the ventral visual stream.

    PubMed

    Kark, Sarah M; Kensinger, Elizabeth A

    2015-11-01

    While prior work has shown greater retrieval-related reactivation in the ventral visual stream for emotional stimuli compared to neutral stimuli, the effects of valence on retrieval-related recapitulation of successful encoding processes (Dm effects) have yet to be investigated. Here, seventeen participants (aged 19-35) studied line drawings of negative, positive, or neutral images followed immediately by the complete photo. After a 20-min delay, participants performed a challenging recognition memory test, distinguishing the studied line drawing outlines from novel ones. First, results replicated earlier work by demonstrating that negative and positive hits elicited greater ventral occipito-temporal cortex (VOTC) activity than neutral hits during both encoding and retrieval. Moreover, the amount of activation in portions of the VOTC correlated with the magnitude of participants' emotional memory enhancement. Second, results revealed significant retrieval-related recapitulation of Dm effects (Hits>Misses) in VOTC (anterior inferior temporal gyri) only for negative stimuli. Third, connectivity between the amygdala and fusiform gyrus during the encoding of negative stimuli increased the likelihood of fusiform activation during successful retrieval. Together, these results suggest that recapitulation in posterior VOTC reflects memory for the affective dimension of the stimuli (Emotional Hits>Neutral Hits) and the magnitude of activation in some of these regions is related to superior emotional memory. Moreover, for negative stimuli, recapitulation in more anterior portions of the VOTC is greater for remembered than forgotten items. The current study offers new evidence for effects of emotion on recapitulation of activity and functional connectivity in support of memory. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Attentional influences on functional mapping of speech sounds in human auditory cortex.

    PubMed

    Obleser, Jonas; Elbert, Thomas; Eulitz, Carsten

    2004-07-21

    The speech signal contains both information about phonological features such as place of articulation and non-phonological features such as speaker identity. These are different aspects of the 'what'-processing stream (speaker vs. speech content), and here we show that they can be further segregated as they may occur in parallel but within different neural substrates. Subjects listened to two different vowels, each spoken by two different speakers. During one block, they were asked to identify a given vowel irrespectively of the speaker (phonological categorization), while during the other block the speaker had to be identified irrespectively of the vowel (speaker categorization). Auditory evoked fields were recorded using 148-channel magnetoencephalography (MEG), and magnetic source imaging was obtained for 17 subjects. During phonological categorization, a vowel-dependent difference of N100m source location perpendicular to the main tonotopic gradient replicated previous findings. In speaker categorization, the relative mapping of vowels remained unchanged but sources were shifted towards more posterior and more superior locations. These results imply that the N100m reflects the extraction of abstract invariants from the speech signal. This part of the processing is accomplished in auditory areas anterior to AI, which are part of the auditory 'what' system. This network seems to include spatially separable modules for identifying the phonological information and for associating it with a particular speaker that are activated in synchrony but within different regions, suggesting that the 'what' processing can be more adequately modeled by a stream of parallel stages. The relative activation of the parallel processing stages can be modulated by attentional or task demands.

  19. Differential parietal and temporal contributions to music perception in improvising and score-dependent musicians, an fMRI study.

    PubMed

    Harris, Robert; de Jong, Bauke M

    2015-10-22

    Using fMRI, cerebral activations were studied in 24 classically-trained keyboard performers and 12 musically unskilled control subjects. Two groups of musicians were recruited: improvising (n=12) and score-dependent (non-improvising) musicians (n=12). While listening to both familiar and unfamiliar music, subjects either (covertly) appraised the presented music performance or imagined they were playing the music themselves. We hypothesized that improvising musicians would exhibit enhanced efficiency of audiomotor transformation reflected by stronger ventral premotor activation. Statistical Parametric Mapping revealed that, while virtually 'playing along׳ with the music, improvising musicians exhibited activation of a right-hemisphere distribution of cerebral areas including posterior-superior parietal and dorsal premotor cortex. Involvement of these right-hemisphere dorsal stream areas suggests that improvising musicians recruited an amodal spatial processing system subserving pitch-to-space transformations to facilitate their virtual motor performance. Score-dependent musicians recruited a primarily left-hemisphere pattern of motor areas together with the posterior part of the right superior temporal sulcus, suggesting a relationship between aural discrimination and symbolic representation. Activations in bilateral auditory cortex were significantly larger for improvising musicians than for score-dependent musicians, suggesting enhanced top-down effects on aural perception. Our results suggest that learning to play a music instrument primarily from notation predisposes musicians toward aural identification and discrimination, while learning by improvisation involves audio-spatial-motor transformations, not only during performance, but also perception. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  20. Differential occipital responses in early- and late-blind individuals during a sound-source discrimination task.

    PubMed

    Voss, Patrice; Gougoux, Frederic; Zatorre, Robert J; Lassonde, Maryse; Lepore, Franco

    2008-04-01

    Blind individuals do not necessarily receive more auditory stimulation than sighted individuals. However, to interact effectively with their environment, they have to rely on non-visual cues (in particular auditory) to a greater extent. Often benefiting from cerebral reorganization, they not only learn to rely more on such cues but also may process them better and, as a result, demonstrate exceptional abilities in auditory spatial tasks. Here we examine the effects of blindness on brain activity, using positron emission tomography (PET), during a sound-source discrimination task (SSDT) in both early- and late-onset blind individuals. This should not only provide an answer to the question of whether the blind manifest changes in brain activity but also allow a direct comparison of the two subgroups performing an auditory spatial task. The task was presented under two listening conditions: one binaural and one monaural. The binaural task did not show any significant behavioural differences between groups, but it demonstrated striate and extrastriate activation in the early-blind groups. A subgroup of early-blind individuals, on the other hand, performed significantly better than all the other groups during the monaural task, and these enhanced skills were correlated with elevated activity within the left dorsal extrastriate cortex. Surprisingly, activation of the right ventral visual pathway, which was significantly activated in the late-blind individuals during the monaural task, was negatively correlated with performance. This suggests the possibility that not all cross-modal plasticity is beneficial. Overall, our results not only support previous findings showing that occipital cortex of early-blind individuals is functionally engaged in spatial auditory processing but also shed light on the impact the age of onset of blindness can have on the ensuing cross-modal plasticity.

  1. Salient sounds activate human visual cortex automatically.

    PubMed

    McDonald, John J; Störmer, Viola S; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A

    2013-05-22

    Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, this study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2-4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of colocalized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.

  2. Salient sounds activate human visual cortex automatically

    PubMed Central

    McDonald, John J.; Störmer, Viola S.; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A.

    2013-01-01

    Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, the present study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2, 3, and 4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of co-localized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task. PMID:23699530

  3. Neuromechanistic Model of Auditory Bistability

    PubMed Central

    Rankin, James; Sussman, Elyse; Rinzel, John

    2015-01-01

    Sequences of higher frequency A and lower frequency B tones repeating in an ABA- triplet pattern are widely used to study auditory streaming. One may experience either an integrated percept, a single ABA-ABA- stream, or a segregated percept, separate but simultaneous streams A-A-A-A- and -B---B--. During minutes-long presentations, subjects may report irregular alternations between these interpretations. We combine neuromechanistic modeling and psychoacoustic experiments to study these persistent alternations and to characterize the effects of manipulating stimulus parameters. Unlike many phenomenological models with abstract, percept-specific competition and fixed inputs, our network model comprises neuronal units with sensory feature dependent inputs that mimic the pulsatile-like A1 responses to tones in the ABA- triplets. It embodies a neuronal computation for percept competition thought to occur beyond primary auditory cortex (A1). Mutual inhibition, adaptation and noise are implemented. We include slow NMDA recurrent excitation for local temporal memory that enables linkage across sound gaps from one triplet to the next. Percepts in our model are identified in the firing patterns of the neuronal units. We predict with the model that manipulations of the frequency difference between tones A and B should affect the dominance durations of the stronger percept, the one dominant a larger fraction of time, more than those of the weaker percept—a property that has been previously established and generalized across several visual bistable paradigms. We confirm the qualitative prediction with our psychoacoustic experiments and use the behavioral data to further constrain and improve the model, achieving quantitative agreement between experimental and modeling results. Our work and model provide a platform that can be extended to consider other stimulus conditions, including the effects of context and volition. PMID:26562507
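    The competition motif this record names (mutual inhibition between percept units, slow adaptation, noise) can be illustrated with a toy firing-rate model. The sketch below is purely illustrative — the parameters and the `simulate` function are invented for this example and are not the authors' published model:

```python
import math
import random

def f(x):
    """Sigmoid firing-rate (gain) function."""
    return 1.0 / (1.0 + math.exp(-(x - 0.2) / 0.1))

def simulate(T=60.0, dt=0.005, I=1.0, beta=1.1, g=0.9,
             tau_r=0.1, tau_a=2.0, sigma=0.05, seed=0):
    """Two rate units with mutual inhibition (beta), slow adaptation
    (g, tau_a) and input noise (sigma). Returns the list of dominance
    durations, i.e. how long each 'percept' stayed on top."""
    random.seed(seed)
    r1, r2, a1, a2 = 0.2, 0.0, 0.0, 0.0  # small asymmetry breaks the tie
    dominant, t_switch, durations = 0, 0.0, []
    for k in range(int(T / dt)):
        # Drive = input minus cross-inhibition minus own adaptation, plus noise.
        u1 = f(I - beta * r2 - g * a1 + sigma * random.gauss(0, 1))
        u2 = f(I - beta * r1 - g * a2 + sigma * random.gauss(0, 1))
        r1 += dt / tau_r * (-r1 + u1)
        r2 += dt / tau_r * (-r2 + u2)
        a1 += dt / tau_a * (-a1 + r1)  # adaptation tracks activity slowly
        a2 += dt / tau_a * (-a2 + r2)
        # Hysteresis: declare a switch only on a clear activity gap.
        d = 1 if r1 > r2 + 0.2 else (2 if r2 > r1 + 0.2 else dominant)
        if d != dominant and dominant != 0:
            durations.append(k * dt - t_switch)
            t_switch = k * dt
        dominant = d
    return durations

durations = simulate()
```

    Because adaptation slowly weakens the winner until the suppressed unit escapes, the model alternates between the two "percepts" every few seconds, mimicking the irregular reversals described in the abstract.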

  4. The role of the salience network in processing lexical and nonlexical stimuli in cochlear implant users: an ALE meta-analysis of PET studies.

    PubMed

    Song, Jae-Jin; Vanneste, Sven; Lazard, Diane S; Van de Heyning, Paul; Park, Joo Hyun; Oh, Seung Ha; De Ridder, Dirk

    2015-05-01

    Previous positron emission tomography (PET) studies have shown that various cortical areas are activated to process the speech signal in cochlear implant (CI) users. Nonetheless, differences in task dimension among studies and low statistical power have precluded a clear understanding of the sound-processing mechanism in CI users. Hence, we performed an activation likelihood estimation meta-analysis of PET studies in CI users and normal hearing (NH) controls to compare the two groups. Eight studies (58 CI subjects/92 peak coordinates; 45 NH subjects/40 peak coordinates) were included and analyzed, retrieving areas significantly activated by lexical and nonlexical stimuli. For lexical and nonlexical stimuli, both groups showed activations in the components of the dual-stream model such as bilateral superior temporal gyrus/sulcus, middle temporal gyrus, left posterior inferior frontal gyrus, and left insula. However, CI users displayed additional unique activation patterns for lexical and nonlexical stimuli. That is, for the lexical stimuli, significant activations were observed in areas comprising the salience network (SN), also known as the intrinsic alertness network, such as the left dorsal anterior cingulate cortex (dACC), left insula, and right supplementary motor area in the CI user group. Also, for the nonlexical stimuli, CI users activated areas comprising the SN such as the right insula and left dACC. Previous episodic observations on lexical-stimulus processing via the dual auditory stream in CI users were reconfirmed in this study. However, this study also suggests that dual-stream auditory processing in CI users may need support from the SN. In other words, CI users need to pay extra attention to cope with the degraded auditory signal provided by the implant. © 2015 Wiley Periodicals, Inc.

  5. Electrophysiological evidence for altered visual, but not auditory, selective attention in adolescent cochlear implant users.

    PubMed

    Harris, Jill; Kamke, Marc R

    2014-11-01

    Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normally-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks attention was directed to either the visual or auditory modality. For the visual stimuli attention boosted the early N1 potential, but this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in the later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  6. Effect of attentional load on audiovisual speech perception: evidence from ERPs

    PubMed Central

    Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E.; Soto-Faraco, Salvador; Tiippana, Kaisa

    2014-01-01

    Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech. PMID:25076922

  7. The Rhythm of Perception: Entrainment to Acoustic Rhythms Induces Subsequent Perceptual Oscillation.

    PubMed

    Hickok, Gregory; Farahbod, Haleh; Saberi, Kourosh

    2015-07-01

    Acoustic rhythms are pervasive in speech, music, and environmental sounds. Recent evidence for neural codes representing periodic information suggests that they may be a neural basis for the ability to detect rhythm. Further, rhythmic information has been found to modulate auditory-system excitability, which provides a potential mechanism for parsing the acoustic stream. Here, we explored the effects of a rhythmic stimulus on subsequent auditory perception. We found that a low-frequency (3 Hz), amplitude-modulated signal induces a subsequent oscillation of the perceptual detectability of a brief nonperiodic acoustic stimulus (1-kHz tone); the frequency but not the phase of the perceptual oscillation matches the entrained stimulus-driven rhythmic oscillation. This provides evidence that rhythmic contexts have a direct influence on subsequent auditory perception of discrete acoustic events. Rhythm coding is likely a fundamental feature of auditory-system design that predates the development of explicit human enjoyment of rhythm in music or poetry. © The Author(s) 2015.

  8. Demodulation processes in auditory perception

    NASA Astrophysics Data System (ADS)

    Feth, Lawrence L.

    1994-08-01

    The long range goal of this project is the understanding of human auditory processing of information conveyed by complex, time-varying signals such as speech, music or important environmental sounds. Our work is guided by the assumption that human auditory communication is a 'modulation - demodulation' process. That is, we assume that sound sources produce a complex stream of sound pressure waves with information encoded as variations (modulations) of the signal amplitude and frequency. The listener's task, then, is one of demodulation. Much of past psychoacoustics work has been based on what we characterize as 'spectrum picture processing.' Complex sounds are Fourier analyzed to produce an amplitude-by-frequency 'picture' and the perception process is modeled as if the listener were analyzing the spectral picture. This approach leads to studies such as 'profile analysis' and the power-spectrum model of masking. Our approach leads us to investigate time-varying, complex sounds. We refer to them as dynamic signals and we have developed auditory signal processing models to help guide our experimental work.
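    The modulation–demodulation framing above can be illustrated with the simplest amplitude demodulator: full-wave rectification followed by a one-pole low-pass filter. This is a generic DSP sketch (the `envelope` function and all values are invented for illustration), not the project's actual models:

```python
import math

def envelope(signal, fs, fc=50.0):
    """Recover the amplitude envelope: full-wave rectify, then smooth
    with a one-pole low-pass filter with cutoff fc (Hz)."""
    alpha = math.exp(-2 * math.pi * fc / fs)  # one-pole smoothing coefficient
    y, out = 0.0, []
    for s in signal:
        y = alpha * y + (1 - alpha) * abs(s)
        out.append(y)
    return out

fs = 8000
t = [i / fs for i in range(fs)]                                  # 1 s of samples
mod = [0.5 * (1 + math.sin(2 * math.pi * 5 * ti)) for ti in t]   # 5 Hz envelope
x = [m * math.sin(2 * math.pi * 1000 * ti) for m, ti in zip(mod, t)]  # 1 kHz carrier
env = envelope(x, fs)
```

    The 50 Hz cutoff passes the slow 5 Hz modulation while suppressing the rectified carrier's ripple, so `env` tracks `mod` (scaled by the rectified-sine mean, 2/π) — the "demodulation" the abstract attributes to the listener.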

  9. The central role of recognition in auditory perception: a neurobiological model.

    PubMed

    McLachlan, Neil; Wilson, Sarah

    2010-01-01

    The model presents neurobiologically plausible accounts of sound recognition (including absolute pitch), neural plasticity involved in pitch, loudness and location information integration, and streaming and auditory recall. It is proposed that a cortical mechanism for sound identification modulates the spectrotemporal response fields of inferior colliculus neurons and regulates the encoding of the echoic trace in the thalamus. Identification involves correlation of sequential spectral slices of the stimulus-driven neural activity with stored representations in association with multimodal memories, verbal lexicons, and contextual information. Identities are then consolidated in auditory short-term memory and bound with attribute information (usually pitch, loudness, and direction) that has been integrated according to the identities' spectral properties. Attention to, or recall of, a particular identity will excite a particular sequence in the identification hierarchies and so lead to modulation of thalamus and inferior colliculus neural spectrotemporal response fields. This operates as an adaptive filter for identities, or their attributes, and explains many puzzling human auditory behaviors, such as the cocktail party effect, selective attention, and continuity illusions.

  10. A Spiking Neural Network Based Cortex-Like Mechanism and Application to Facial Expression Recognition

    PubMed Central

    Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai

    2012-01-01

    In this paper, we present a quantitative, highly structured cortex-simulated model, which can be simply described as feedforward, hierarchical simulation of ventral stream of visual cortex using biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering works on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of visual cortex and developments on artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy and computing power of the spiking neuron model, a practical framework has been presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchy framework has the merit of capability of dealing with complicated pattern recognition problems, suggesting that, by combining the cognitive models with modern neurocomputational approaches, the neurosystematic approach to the study of cortex-like mechanism has the potential to extend our knowledge of brain mechanisms underlying the cognitive analysis and to advance theoretical models of how we recognize face or, more specifically, perceive other people's facial expression in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual cortex-like mechanism. PMID:23193391
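    The basic unit of a spiking neural network like the one this record describes is the leaky integrate-and-fire (LIF) neuron. The sketch below is a generic illustration of that building block with invented parameters, not the paper's hierarchy:

```python
def lif_spikes(current, dt=1e-4, tau=0.02, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: dv/dt = (-(v - v_rest) + I) / tau.
    Integrates an input-current sample sequence with forward Euler and
    returns the spike times (v crossing v_thresh, then reset)."""
    v, spikes = v_rest, []
    for i, I in enumerate(current):
        v += dt / tau * (-(v - v_rest) + I)
        if v >= v_thresh:
            spikes.append(i * dt)
            v = v_reset
    return spikes

# Constant suprathreshold drive gives regular spiking; a stronger
# input produces a higher firing rate (rate coding).
n = int(0.5 / 1e-4)            # 500 ms of samples
low = lif_spikes([1.2] * n)
high = lif_spikes([2.0] * n)
```

    Feedforward layers of such units, connected by weighted synapses, form the cortex-like hierarchy the abstract describes.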

  11. Impaired recognition of faces and objects in dyslexia: Evidence for ventral stream dysfunction?

    PubMed

    Sigurdardottir, Heida Maria; Ívarsson, Eysteinn; Kristinsdóttir, Kristjana; Kristjánsson, Árni

    2015-09-01

    The objective of this study was to establish whether or not dyslexics are impaired at the recognition of faces and other complex nonword visual objects. This would be expected based on a meta-analysis revealing that children and adult dyslexics show functional abnormalities within the left fusiform gyrus, a brain region high up in the ventral visual stream, which is thought to support the recognition of words, faces, and other objects. Twenty adult dyslexics (M = 29 years) and 20 matched typical readers (M = 29 years) participated in the study. One dyslexic-typical reader pair was excluded based on Adult Reading History Questionnaire scores and IS-FORM reading scores. Performance was measured on 3 high-level visual processing tasks: the Cambridge Face Memory Test, the Vanderbilt Holistic Face Processing Test, and the Vanderbilt Expertise Test. People with dyslexia are impaired in their recognition of faces and other visually complex objects. Their holistic processing of faces appears to be intact, suggesting that dyslexics may instead be specifically impaired at part-based processing of visual objects. The difficulty that people with dyslexia experience with reading might be the most salient manifestation of a more general high-level visual deficit. (c) 2015 APA, all rights reserved.

  12. Neural Signatures of Stimulus Features in Visual Working Memory—A Spatiotemporal Approach

    PubMed Central

    Jackson, Margaret C.; Klein, Christoph; Mohr, Harald; Shapiro, Kimron L.; Linden, David E. J.

    2010-01-01

    We examined the neural signatures of stimulus features in visual working memory (WM) by integrating functional magnetic resonance imaging (fMRI) and event-related potential data recorded during mental manipulation of colors, rotation angles, and color–angle conjunctions. The N200, negative slow wave, and P3b were modulated by the information content of WM, and an fMRI-constrained source model revealed a progression in neural activity from posterior visual areas to higher order areas in the ventral and dorsal processing streams. Color processing was associated with activity in inferior frontal gyrus during encoding and retrieval, whereas angle processing involved right parietal regions during the delay interval. WM for color–angle conjunctions did not involve any additional neural processes. The finding that different patterns of brain activity underlie WM for color and spatial information is consistent with ideas that the ventral/dorsal “what/where” segregation of perceptual processing influences WM organization. The absence of characteristic signatures of conjunction-related brain activity, which was generally intermediate between the 2 single conditions, suggests that conjunction judgments are based on the coordinated activity of these 2 streams. PMID:19429863

  13. A spiking neural network based cortex-like mechanism and application to facial expression recognition.

    PubMed

    Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai

    2012-01-01

    In this paper, we present a quantitative, highly structured cortex-simulated model, which can be simply described as feedforward, hierarchical simulation of ventral stream of visual cortex using biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering works on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of visual cortex and developments on artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy and computing power of the spiking neuron model, a practical framework has been presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchy framework has the merit of capability of dealing with complicated pattern recognition problems, suggesting that, by combining the cognitive models with modern neurocomputational approaches, the neurosystematic approach to the study of cortex-like mechanism has the potential to extend our knowledge of brain mechanisms underlying the cognitive analysis and to advance theoretical models of how we recognize face or, more specifically, perceive other people's facial expression in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual cortex-like mechanism.

  14. Neuronal effects of nicotine during auditory selective attention.

    PubMed

    Smucny, Jason; Olincy, Ann; Eichman, Lindsay S; Tregellas, Jason R

    2015-06-01

    Although the attention-enhancing effects of nicotine have been behaviorally and neurophysiologically well-documented, its localized functional effects during selective attention are poorly understood. In this study, we examined the neuronal effects of nicotine during auditory selective attention in healthy human nonsmokers. We hypothesized to observe significant effects of nicotine in attention-associated brain areas, driven by nicotine-induced increases in activity as a function of increasing task demands. A single-blind, prospective, randomized crossover design was used to examine neuronal response associated with a go/no-go task after 7 mg nicotine or placebo patch administration in 20 individuals who underwent functional magnetic resonance imaging at 3T. The task design included two levels of difficulty (ordered vs. random stimuli) and two levels of auditory distraction (silence vs. noise). Significant treatment × difficulty × distraction interaction effects on neuronal response were observed in the hippocampus, ventral parietal cortex, and anterior cingulate. In contrast to our hypothesis, U and inverted U-shaped dependencies were observed between the effects of nicotine on response and task demands, depending on the brain area. These results suggest that nicotine may differentially affect neuronal response depending on task conditions. These results have important theoretical implications for understanding how cholinergic tone may influence the neurobiology of selective attention.

  15. Different neural activities support auditory working memory in musicians and bilinguals.

    PubMed

    Alain, Claude; Khatamian, Yasha; He, Yu; Lee, Yunjo; Moreno, Sylvain; Leung, Ada W S; Bialystok, Ellen

    2018-05-17

    Musical training and bilingualism benefit executive functioning and working memory (WM)-however, the brain networks supporting this advantage are not well specified. Here, we used functional magnetic resonance imaging and the n-back task to assess WM for spatial (sound location) and nonspatial (sound category) auditory information in musician monolingual (musicians), nonmusician bilinguals (bilinguals), and nonmusician monolinguals (controls). Musicians outperformed bilinguals and controls on the nonspatial WM task. Overall, spatial and nonspatial WM were associated with greater activity in dorsal and ventral brain regions, respectively. Increasing WM load yielded similar recruitment of the anterior-posterior attention network in all three groups. In both tasks and both levels of difficulty, musicians showed lower brain activity than controls in superior prefrontal frontal gyrus and dorsolateral prefrontal cortex (DLPFC) bilaterally, a finding that may reflect improved and more efficient use of neural resources. Bilinguals showed enhanced activity in language-related areas (i.e., left DLPFC and left supramarginal gyrus) relative to musicians and controls, which could be associated with the need to suppress interference associated with competing semantic activations from multiple languages. These findings indicate that the auditory WM advantage in musicians and bilinguals is mediated by different neural networks specific to each life experience. © 2018 New York Academy of Sciences.

  16. Role of the right inferior parietal cortex in auditory selective attention: An rTMS study.

    PubMed

    Bareham, Corinne A; Georgieva, Stanimira D; Kamke, Marc R; Lloyd, David; Bekinschtein, Tristan A; Mattingley, Jason B

    2018-02-01

    Selective attention is the process of directing limited capacity resources to behaviourally relevant stimuli while ignoring competing stimuli that are currently irrelevant. Studies in healthy human participants and in individuals with focal brain lesions have suggested that the right parietal cortex is crucial for resolving competition for attention. Following right-hemisphere damage, for example, patients may have difficulty reporting a brief, left-sided stimulus if it occurs with a competitor on the right, even though the same left stimulus is reported normally when it occurs alone. Such "extinction" of contralesional stimuli has been documented for all the major sense modalities, but it remains unclear whether its occurrence reflects involvement of one or more specific subregions of the temporo-parietal cortex. Here we employed repetitive transcranial magnetic stimulation (rTMS) over the right hemisphere to examine the effect of disruption of two candidate regions - the supramarginal gyrus (SMG) and the superior temporal gyrus (STG) - on auditory selective attention. Eighteen neurologically normal, right-handed participants performed an auditory task, in which they had to detect target digits presented within simultaneous dichotic streams of spoken distractor letters in the left and right channels, both before and after 20 min of 1 Hz rTMS over the SMG, STG or a somatosensory control site (S1). Across blocks, participants were asked to report on auditory streams in the left, right, or both channels, which yielded focused and divided attention conditions. Performance was unchanged for the two focused attention conditions, regardless of stimulation site, but was selectively impaired for contralateral left-sided targets in the divided attention condition following stimulation of the right SMG, but not the STG or S1. Our findings suggest a causal role for the right inferior parietal cortex in auditory selective attention. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Input-output relationships of the dorsal nucleus of the lateral lemniscus: possible substrate for the processing of dynamic spatial cues.

    PubMed

    Shneiderman, A; Stanforth, D A; Henkel, C K; Saint Marie, R L

    1999-07-26

    One organizing principle of the auditory system is the progressive representation of best tuning frequency. Superimposed on this tonotopy are nucleotopic organizations, some of which are related to the processing of different spatial cues. In the present study, we correlated asymmetries in the outputs of the dorsal nucleus of the lateral lemniscus (DNLL) to the two inferior colliculi (ICs), with asymmetries in the inputs to DNLL from the two lateral superior olives (LSOs). The positions of DNLL neurons with crossed and uncrossed projections were plotted from cases with unilateral injections of retrograde tracers in the IC. We found an orderly dorsal-to-ventral progression to the output that recapitulated the tonotopy of DNLL. In addition, we found a nucleotopic organization in the ventral (high-frequency) part of DNLL. Neurons with projections to the ventromedial (high-frequency) part of the contralateral IC were preferentially located ventrolaterally in DNLL; those with projections to the ventromedial part of the ipsilateral IC were preferentially located ventromedially in DNLL. This partial segregation of outputs corresponded with a partial segregation of inputs from the two LSOs in cases which received closely matched bilateral injections of anterograde tracers in LSO. The ventral part of DNLL received a heavy projection medially from the opposite LSO and a heavy projection laterally from the ipsilateral LSO. The findings suggest a direct relationship in the ventral part of the DNLL between inputs from the two LSOs and outputs to the two ICs. Possible roles for this segregation of pathways in DNLL are discussed in relation to the processing of static and dynamic spatial cues.

  18. Functional connectivity-based parcellation and connectome of cortical midline structures in the mouse: a perfusion autoradiography study

    PubMed Central

    Holschneider, Daniel P.; Wang, Zhuo; Pang, Raina D.

    2014-01-01

    Rodent cortical midline structures (CMS) are involved in emotional, cognitive and attentional processes. Tract tracing has revealed complex patterns of structural connectivity demonstrating connectivity-based integration and segregation for the prelimbic, cingulate area 1, retrosplenial dysgranular cortices dorsally, and infralimbic, cingulate area 2, and retrosplenial granular cortices ventrally. Understanding of CMS functional connectivity (FC) remains more limited. Here we present the first subregion-level FC analysis of the mouse CMS, and assess whether fear results in state-dependent FC changes analogous to what has been reported in humans. Brain mapping using [14C]-iodoantipyrine was performed in mice during auditory-cued fear conditioned recall and in controls. Regional cerebral blood flow (CBF) was analyzed in 3-D images reconstructed from brain autoradiographs. Regions-of-interest were selected along the CMS anterior-posterior and dorsal-ventral axes. In controls, pairwise correlation and graph theoretical analyses showed strong FC within each CMS structure, strong FC along the dorsal-ventral axis, with segregation of anterior from posterior structures. Seed correlation showed FC of anterior regions to limbic/paralimbic areas, and FC of posterior regions to sensory areas–findings consistent with functional segregation noted in humans. Fear recall increased FC between the cingulate and retrosplenial cortices, but decreased FC between dorsal and ventral structures. In agreement with reports in humans, fear recall broadened FC of anterior structures to the amygdala and to somatosensory areas, suggesting integration and processing of both limbic and sensory information. 
Organizational principles learned from animal models at the mesoscopic level (brain regions and pathways) will not only critically inform future work at the microscopic (single neurons and synapses) level, but also have translational value to advance our understanding of human brain architecture. PMID:24966831
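
    The pairwise-correlation step of the FC analysis described above can be sketched in a few lines: regional CBF values correlated across subjects yield an FC matrix, and thresholded correlations define graph edges. The ROI names and values below are invented for illustration, not taken from the study.

```python
# Sketch of correlation-based functional connectivity: Pearson correlations
# between regional CBF values across subjects form an FC matrix; edges above
# a threshold define the graph used for graph-theoretical measures.
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Rows: ROIs; columns: normalized CBF per subject (made-up values).
cbf = {
    "cingulate_a1":    [1.02, 0.98, 1.10, 0.95, 1.05],
    "retrosplenial_d": [1.00, 0.97, 1.08, 0.96, 1.04],
    "infralimbic":     [0.90, 1.10, 0.85, 1.12, 0.88],
}

rois = list(cbf)
fc = {(a, b): pearson(cbf[a], cbf[b]) for a in rois for b in rois if a < b}
# Strongly correlated ROI pairs become edges of the FC graph.
edges = [pair for pair, r in fc.items() if r > 0.8]
```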

  19. Functional connectivity-based parcellation and connectome of cortical midline structures in the mouse: a perfusion autoradiography study.

    PubMed

    Holschneider, Daniel P; Wang, Zhuo; Pang, Raina D

    2014-01-01

    Rodent cortical midline structures (CMS) are involved in emotional, cognitive and attentional processes. Tract tracing has revealed complex patterns of structural connectivity demonstrating connectivity-based integration and segregation for the prelimbic, cingulate area 1, retrosplenial dysgranular cortices dorsally, and infralimbic, cingulate area 2, and retrosplenial granular cortices ventrally. Understanding of CMS functional connectivity (FC) remains more limited. Here we present the first subregion-level FC analysis of the mouse CMS, and assess whether fear results in state-dependent FC changes analogous to what has been reported in humans. Brain mapping using [(14)C]-iodoantipyrine was performed in mice during auditory-cued fear conditioned recall and in controls. Regional cerebral blood flow (CBF) was analyzed in 3-D images reconstructed from brain autoradiographs. Regions-of-interest were selected along the CMS anterior-posterior and dorsal-ventral axes. In controls, pairwise correlation and graph theoretical analyses showed strong FC within each CMS structure, strong FC along the dorsal-ventral axis, with segregation of anterior from posterior structures. Seed correlation showed FC of anterior regions to limbic/paralimbic areas, and FC of posterior regions to sensory areas-findings consistent with functional segregation noted in humans. Fear recall increased FC between the cingulate and retrosplenial cortices, but decreased FC between dorsal and ventral structures. In agreement with reports in humans, fear recall broadened FC of anterior structures to the amygdala and to somatosensory areas, suggesting integration and processing of both limbic and sensory information. 
Organizational principles learned from animal models at the mesoscopic level (brain regions and pathways) will not only critically inform future work at the microscopic (single neurons and synapses) level, but also have translational value to advance our understanding of human brain architecture.

  20. The dynamic imprint of word learning on the dorsal language pathway.

    PubMed

    Palomar-García, María-Ángeles; Sanjuán, Ana; Bueichekú, Elisenda; Ventura-Campos, Noelia; Ávila, César

    2017-10-01

    According to Hickok and Poeppel (2007), the acquisition of new vocabulary rests on the dorsal language pathway connecting auditory and motor areas. The present study tested this hypothesis longitudinally by measuring BOLD signal changes during a verbal repetition task and modulation of resting state functional connectivity (rs-FC) in the dorsal stream. Thirty-five healthy participants, divided into trained and control groups, completed fMRI sessions on days 1, 10, and 24. Between days 1 and 10, the trained group learned 84 new pseudowords associated with 84 native words. Task-related fMRI results showed a reduced activity in the IFG and STG while processing the learned vocabulary after training, returning to initial values two weeks later. Moreover, rs-fMRI analysis showed stronger rs-FC between the IFG and STG in the trained group than in the control group after learning, especially on day 24. These neural changes were more evident in participants with a larger vocabulary. Discussion focuses on the prominent role of the dorsal stream in vocabulary acquisition. Even when their meaning was known, newly learned words were again processed through the dorsal stream two weeks after learning, with the increase in rs-FC between auditory and motor areas being a relevant long-term imprint of vocabulary learning. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Moving in time: Bayesian causal inference explains movement coordination to auditory beats

    PubMed Central

    Elliott, Mark T.; Wing, Alan M.; Welchman, Andrew E.

    2014-01-01

    Many everyday skilled actions depend on moving in time with signals that are embedded in complex auditory streams (e.g. musical performance, dancing or simply holding a conversation). Such behaviour is apparently effortless; however, it is not known how humans combine auditory signals to support movement production and coordination. Here, we test how participants synchronize their movements when there are potentially conflicting auditory targets to guide their actions. Participants tapped their fingers in time with two simultaneously presented metronomes of equal tempo, but differing in phase and temporal regularity. Synchronization therefore depended on integrating the two timing cues into a single-event estimate or treating the cues as independent and thereby selecting one signal over the other. We show that a Bayesian inference process explains the situations in which participants choose to integrate or separate signals, and predicts motor timing errors. Simulations of this causal inference process demonstrate that this model provides a better description of the data than other plausible models. Our findings suggest that humans exploit a Bayesian inference process to control movement timing in situations where the origin of auditory signals needs to be resolved. PMID:24850915
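
    The integrate-versus-segregate decision described above can be illustrated with a toy model. This is not the authors' actual implementation: the fixed discrepancy threshold and all parameter values are simplifying assumptions standing in for a full Bayesian model comparison.

```python
# Toy causal-inference sketch: fuse two timing cues by reliability weighting
# when their discrepancy is small enough to be attributed to a common source;
# otherwise select the more reliable cue.

def integrate(t1, var1, t2, var2):
    """Reliability-weighted fusion of two timing estimates (seconds)."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)
    return w1 * t1 + (1 - w1) * t2

def causal_inference(t1, var1, t2, var2, threshold=0.05):
    """Integrate when the cues plausibly share a cause, else pick the
    more reliable cue (a simplified stand-in for model selection)."""
    if abs(t1 - t2) <= threshold:
        return integrate(t1, var1, t2, var2)
    return t1 if var1 <= var2 else t2
```

    With equal variances and a small offset the cues fuse to their midpoint; with a large offset the model segregates them and follows the lower-variance metronome.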

  2. Perception of temporally modified speech in auditory neuropathy.

    PubMed

    Hassan, Dalia Mohamed

    2011-01-01

    Disrupted auditory nerve activity in auditory neuropathy (AN) significantly impairs the sequential processing of auditory information, resulting in poor speech perception. This study investigated the ability of AN subjects to perceive temporally modified consonant-vowel (CV) pairs and shed light on their phonological awareness skills. Four Arabic CV pairs were selected: /ki/-/gi/, /to/-/do/, /si/-/sti/ and /so/-/zo/. The formant transitions in consonants and the pauses between CV pairs were prolonged. Rhyming, segmentation and blending skills were tested using words at a natural rate of speech and with prolongation of the speech stream. Fourteen adult AN subjects were compared to a matched group of cochlear-impaired patients in their perception of acoustically processed speech. The AN group distinguished the CV pairs at a low speech rate, in particular with modification of the consonant duration. Phonological awareness skills deteriorated in adult AN subjects but improved with prolongation of the speech inter-syllabic time interval. A rehabilitation program for AN should consider temporal modification of speech, training for auditory temporal processing and the use of devices with innovative signal processing schemes. Verbal modifications as well as visual imaging appear to be promising compensatory strategies for remediating the affected phonological processing skills.

  3. Monaural Speech Segregation by Integrating Primitive and Schema-Based Analysis

    DTIC Science & Technology

    2008-02-03

    [Abstract not recovered; only report fragments survive.] Wang D.L. and Chang P.S. (2008): An oscillatory correlation model of auditory streaming. Cognitive Neurodynamics, vol. 2, pp... DeLiang Wang (Principal Investigator), March 2008, Department of Computer Science & Engineering and Center for Cognitive Science, The

  4. Subcollicular projections to the auditory thalamus and collateral projections to the inferior colliculus.

    PubMed

    Schofield, Brett R; Mellott, Jeffrey G; Motts, Susan D

    2014-01-01

    Experiments in several species have identified direct projections to the medial geniculate nucleus (MG) from cells in subcollicular auditory nuclei. Moreover, many cochlear nucleus cells that project to the MG send collateral projections to the inferior colliculus (IC) (Schofield et al., 2014). We conducted three experiments to characterize projections to the MG from the superior olivary and the lateral lemniscal regions in guinea pigs. For experiment 1, we made large injections of retrograde tracer into the MG. Labeled cells were most numerous in the superior paraolivary nucleus, ventral nucleus of the trapezoid body, lateral superior olivary nucleus, ventral nucleus of the lateral lemniscus, ventrolateral tegmental nucleus, paralemniscal region and sagulum. Additional sources include other periolivary nuclei and the medial superior olivary nucleus. The projections are bilateral with an ipsilateral dominance (66%). For experiment 2, we injected tracer into individual MG subdivisions. The results show that the subcollicular projections terminate primarily in the medial MG, with the dorsal MG a secondary target. The variety of projecting nuclei suggest a range of functions, including monaural and binaural aspects of hearing. These direct projections could provide the thalamus with some of the earliest (i.e., fastest) information regarding acoustic stimuli. For experiment 3, we made large injections of different retrograde tracers into one MG and the homolateral IC to identify cells that project to both targets. Such cells were numerous and distributed across many of the nuclei listed above, mostly ipsilateral to the injections. The prominence of the collateral projections suggests that the same information is delivered to both the IC and the MG, or perhaps that a common signal is being delivered as a preparatory indicator or temporal reference point. The results are discussed from functional and evolutionary perspectives.

  5. Physiological correlates of comodulation masking release in the mammalian ventral cochlear nucleus.

    PubMed

    Pressnitzer, D; Meddis, R; Delahaye, R; Winter, I M

    2001-08-15

    Comodulation masking release (CMR) enhances the detection of signals embedded in wideband, amplitude-modulated maskers. At least part of the CMR is attributable to across-frequency processing, however, the relative contribution of different stages in the auditory system to across-frequency processing is unknown. We have measured the responses of single units from one of the earliest stages in the ascending auditory pathway, the ventral cochlear nucleus, where across frequency processing may take place. A sinusoidally amplitude-modulated tone at the best frequency of each unit was used as a masker. A pure tone signal was added in the dips of the masker modulation (reference condition). Flanking components (FCs) were then added at frequencies remote from the unit best frequency. The FCs were pure tones amplitude modulated either in phase (comodulated) or out of phase (codeviant) with the on-frequency component. Psychophysically, this CMR paradigm reduces within-channel cues while producing an advantage of approximately 10 dB for the comodulated condition in comparison with the reference condition. Some of the recorded units showed responses consistent with perceptual CMR. The addition of the comodulated FCs produced a strong reduction in the response to the masker modulation, making the signal more salient in the poststimulus time histograms. A decision statistic based on d' showed that threshold was reached at lower signal levels for the comodulated condition than for reference or codeviant conditions. The neurons that exhibited such a behavior were mainly transient chopper or primary-like units. The results obtained from a subpopulation of transient chopper units are consistent with a possible circuit in the cochlear nucleus consisting of a wideband inhibitor contacting a narrowband cell. A computational model was used to confirm the feasibility of such a circuit.
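
    A d'-style decision statistic like the one mentioned above can be sketched as follows; the pooled-variance formula is a standard choice and the spike counts are invented for illustration, not the recorded data.

```python
# d' compares spike-count distributions with and without the added signal;
# neural "threshold" is the lowest signal level at which d' exceeds a
# criterion.
from math import sqrt

def d_prime(signal_counts, noise_counts):
    """Separation of two count distributions in pooled-SD units."""
    n_s, n_n = len(signal_counts), len(noise_counts)
    m_s = sum(signal_counts) / n_s
    m_n = sum(noise_counts) / n_n
    v_s = sum((c - m_s) ** 2 for c in signal_counts) / n_s
    v_n = sum((c - m_n) ** 2 for c in noise_counts) / n_n
    return (m_s - m_n) / sqrt((v_s + v_n) / 2)

# Spike counts per trial at one signal level (masker alone vs masker+signal).
noise = [10, 12, 9, 11, 10]
signal = [15, 17, 14, 16, 15]
dprime_at_level = d_prime(signal, noise)  # well-separated distributions
```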

  6. Subcollicular projections to the auditory thalamus and collateral projections to the inferior colliculus

    PubMed Central

    Schofield, Brett R.; Mellott, Jeffrey G.; Motts, Susan D.

    2014-01-01

    Experiments in several species have identified direct projections to the medial geniculate nucleus (MG) from cells in subcollicular auditory nuclei. Moreover, many cochlear nucleus cells that project to the MG send collateral projections to the inferior colliculus (IC) (Schofield et al., 2014). We conducted three experiments to characterize projections to the MG from the superior olivary and the lateral lemniscal regions in guinea pigs. For experiment 1, we made large injections of retrograde tracer into the MG. Labeled cells were most numerous in the superior paraolivary nucleus, ventral nucleus of the trapezoid body, lateral superior olivary nucleus, ventral nucleus of the lateral lemniscus, ventrolateral tegmental nucleus, paralemniscal region and sagulum. Additional sources include other periolivary nuclei and the medial superior olivary nucleus. The projections are bilateral with an ipsilateral dominance (66%). For experiment 2, we injected tracer into individual MG subdivisions. The results show that the subcollicular projections terminate primarily in the medial MG, with the dorsal MG a secondary target. The variety of projecting nuclei suggest a range of functions, including monaural and binaural aspects of hearing. These direct projections could provide the thalamus with some of the earliest (i.e., fastest) information regarding acoustic stimuli. For experiment 3, we made large injections of different retrograde tracers into one MG and the homolateral IC to identify cells that project to both targets. Such cells were numerous and distributed across many of the nuclei listed above, mostly ipsilateral to the injections. The prominence of the collateral projections suggests that the same information is delivered to both the IC and the MG, or perhaps that a common signal is being delivered as a preparatory indicator or temporal reference point. The results are discussed from functional and evolutionary perspectives. PMID:25100950

  7. Content-based TV sports video retrieval using multimodal analysis

    NASA Astrophysics Data System (ADS)

    Yu, Yiqing; Liu, Huayong; Wang, Hongbin; Zhou, Dongru

    2003-09-01

    In this paper, we propose content-based video retrieval, i.e., retrieval based on the semantic content of the video. Because video data comprises multimodal information streams (visual, auditory and textual), we describe a strategy that uses multimodal analysis to parse sports video automatically. The paper first defines the basic structure of a sports video database system, and then introduces a new approach that integrates visual stream analysis, speech recognition, speech signal processing and text extraction to realize video retrieval. Experimental results on TV sports video of football games indicate that multimodal analysis enables effective retrieval, by quickly browsing tree-like video clips or by entering keywords within a predefined domain.

  8. Dorsal and ventral working memory-related brain areas support distinct processes in contextual cueing.

    PubMed

    Manginelli, Angela A; Baumgartner, Florian; Pollmann, Stefan

    2013-02-15

    Behavioral evidence suggests that the use of implicitly learned spatial contexts for improved visual search may depend on visual working memory resources. Working memory may be involved in contextual cueing in different ways: (1) for keeping implicitly learned working memory contents available during search or (2) for the capture of attention by contexts retrieved from memory. We mapped brain areas that were modulated by working memory capacity. Within these areas, activation was modulated by contextual cueing along the descending segment of the intraparietal sulcus, an area that has previously been related to maintenance of explicit memories. Increased activation for learned displays, but not modulated by the size of contextual cueing, was observed in the temporo-parietal junction area, previously associated with the capture of attention by explicitly retrieved memory items, and in the ventral visual cortex. This pattern of activation extends previous research on dorsal versus ventral stream functions in memory guidance of attention to the realm of attentional guidance by implicit memory. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. The neural representation of objects formed through the spatiotemporal integration of visual transients

    PubMed Central

    Erlikhman, Gennady; Gurariy, Gennadiy; Mruczek, Ryan E.B.; Caplovitz, Gideon P.

    2016-01-01

    Oftentimes, objects are only partially and transiently visible as parts of them become occluded during observer or object motion. The visual system can integrate such object fragments across space and time into perceptual wholes or spatiotemporal objects. This integrative and dynamic process may involve both ventral and dorsal visual processing pathways, along which shape and spatial representations are thought to arise. We measured fMRI BOLD response to spatiotemporal objects and used multi-voxel pattern analysis (MVPA) to decode shape information across 20 topographic regions of visual cortex. Object identity could be decoded throughout visual cortex, including intermediate (V3A, V3B, hV4, LO1-2,) and dorsal (TO1-2, and IPS0-1) visual areas. Shape-specific information, therefore, may not be limited to early and ventral visual areas, particularly when it is dynamic and must be integrated. Contrary to the classic view that the representation of objects is the purview of the ventral stream, intermediate and dorsal areas may play a distinct and critical role in the construction of object representations across space and time. PMID:27033688
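
    The decoding logic behind MVPA can be sketched with a minimal nearest-centroid classifier: a held-out voxel pattern is assigned to whichever object's mean training pattern it lies closest to. This is a simplified stand-in for the classifiers typically used, and the patterns and labels are fabricated for illustration.

```python
# Minimal nearest-centroid decoder over multi-voxel activity patterns.
from math import dist  # Euclidean distance, Python 3.8+

# Training patterns: lists of voxel responses per stimulus presentation.
train = {
    "object_A": [[1.0, 0.2, 0.1], [0.9, 0.3, 0.0]],
    "object_B": [[0.1, 0.8, 1.0], [0.2, 0.9, 0.9]],
}

def centroid(patterns):
    """Element-wise mean of a list of equal-length patterns."""
    n = len(patterns)
    return [sum(p[i] for p in patterns) / n for i in range(len(patterns[0]))]

def decode(pattern):
    """Label of the class whose centroid is nearest to the test pattern."""
    cents = {label: centroid(p) for label, p in train.items()}
    return min(cents, key=lambda label: dist(pattern, cents[label]))
```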

  10. From Acoustic Segmentation to Language Processing: Evidence from Optical Imaging

    PubMed Central

    Obrig, Hellmuth; Rossi, Sonja; Telkemeyer, Silke; Wartenburger, Isabell

    2010-01-01

    During language acquisition in infancy and when learning a foreign language, the segmentation of the auditory stream into words and phrases is a complex process. Intuitively, learners use “anchors” to segment the acoustic speech stream into meaningful units like words and phrases. Regularities on a segmental (e.g., phonological) or suprasegmental (e.g., prosodic) level can provide such anchors. Regarding the neuronal processing of these two kinds of linguistic cues a left-hemispheric dominance for segmental and a right-hemispheric bias for suprasegmental information has been reported in adults. Though lateralization is common in a number of higher cognitive functions, its prominence in language may also be a key to understanding the rapid emergence of the language network in infants and the ease at which we master our language in adulthood. One question here is whether the hemispheric lateralization is driven by linguistic input per se or whether non-linguistic, especially acoustic factors, “guide” the lateralization process. Methodologically, functional magnetic resonance imaging provides unsurpassed anatomical detail for such an enquiry. However, instrumental noise, experimental constraints and interference with EEG assessment limit its applicability, pointedly in infants and also when investigating the link between auditory and linguistic processing. Optical methods have the potential to fill this gap. Here we review a number of recent studies using optical imaging to investigate hemispheric differences during segmentation and basic auditory feature analysis in language development. PMID:20725516

  11. End-Stopping Predicts Curvature Tuning along the Ventral Stream

    PubMed Central

    Hartmann, Till S.; Livingstone, Margaret S.

    2017-01-01

    Neurons in primate inferotemporal cortex (IT) are clustered into patches of shared image preferences. Functional imaging has shown that these patches are activated by natural categories (e.g., faces, body parts, and places), artificial categories (numerals, words) and geometric features (curvature and real-world size). These domains develop in the same cortical locations across monkeys and humans, which raises the possibility of common innate mechanisms. Although these commonalities could be high-level template-based categories, it is alternatively possible that the domain locations are constrained by low-level properties such as end-stopping, eccentricity, and the shape of the preferred images. To explore this, we looked for correlations among curvature preference, receptive field (RF) end-stopping, and RF eccentricity in the ventral stream. We recorded from sites in V1, V4, and posterior IT (PIT) from six monkeys using microelectrode arrays. Across all visual areas, we found a tendency for end-stopped sites to prefer curved over straight contours. Further, we found a progression in population curvature preferences along the visual hierarchy, where, on average, V1 sites preferred straight Gabors, V4 sites preferred curved stimuli, and many PIT sites showed a preference for curvature that was concave relative to fixation. Our results provide evidence that high-level functional domains may be mapped according to early rudimentary properties of the visual system. SIGNIFICANCE STATEMENT The macaque occipitotemporal cortex contains clusters of neurons with preferences for categories such as faces, body parts, and places. One common question is how these clusters (or “domains”) acquire their cortical position along the ventral stream. 
We and other investigators previously established an fMRI-level correlation among these category domains, retinotopy, and curvature preferences: for example, in inferotemporal cortex, face- and curvature-preferring domains show a central visual field bias whereas place- and rectilinear-preferring domains show a more peripheral visual field bias. Here, we have found an electrophysiological-level explanation for the correlation among domain preference, curvature, and retinotopy based on neuronal preference for short over long contours, also called end-stopping. PMID:28100746

  12. End-Stopping Predicts Curvature Tuning along the Ventral Stream.

    PubMed

    Ponce, Carlos R; Hartmann, Till S; Livingstone, Margaret S

    2017-01-18

    Neurons in primate inferotemporal cortex (IT) are clustered into patches of shared image preferences. Functional imaging has shown that these patches are activated by natural categories (e.g., faces, body parts, and places), artificial categories (numerals, words) and geometric features (curvature and real-world size). These domains develop in the same cortical locations across monkeys and humans, which raises the possibility of common innate mechanisms. Although these commonalities could be high-level template-based categories, it is alternatively possible that the domain locations are constrained by low-level properties such as end-stopping, eccentricity, and the shape of the preferred images. To explore this, we looked for correlations among curvature preference, receptive field (RF) end-stopping, and RF eccentricity in the ventral stream. We recorded from sites in V1, V4, and posterior IT (PIT) from six monkeys using microelectrode arrays. Across all visual areas, we found a tendency for end-stopped sites to prefer curved over straight contours. Further, we found a progression in population curvature preferences along the visual hierarchy, where, on average, V1 sites preferred straight Gabors, V4 sites preferred curved stimuli, and many PIT sites showed a preference for curvature that was concave relative to fixation. Our results provide evidence that high-level functional domains may be mapped according to early rudimentary properties of the visual system. The macaque occipitotemporal cortex contains clusters of neurons with preferences for categories such as faces, body parts, and places. One common question is how these clusters (or "domains") acquire their cortical position along the ventral stream. 
We and other investigators previously established an fMRI-level correlation among these category domains, retinotopy, and curvature preferences: for example, in inferotemporal cortex, face- and curvature-preferring domains show a central visual field bias whereas place- and rectilinear-preferring domains show a more peripheral visual field bias. Here, we have found an electrophysiological-level explanation for the correlation among domain preference, curvature, and retinotopy based on neuronal preference for short over long contours, also called end-stopping. Copyright © 2017 the authors 0270-6474/17/370648-12$15.00/0.

  13. Attentional influences on functional mapping of speech sounds in human auditory cortex

    PubMed Central

    Obleser, Jonas; Elbert, Thomas; Eulitz, Carsten

    2004-01-01

    Background The speech signal contains both information about phonological features such as place of articulation and non-phonological features such as speaker identity. These are different aspects of the 'what'-processing stream (speaker vs. speech content), and here we show that they can be further segregated as they may occur in parallel but within different neural substrates. Subjects listened to two different vowels, each spoken by two different speakers. During one block, they were asked to identify a given vowel irrespectively of the speaker (phonological categorization), while during the other block the speaker had to be identified irrespectively of the vowel (speaker categorization). Auditory evoked fields were recorded using 148-channel magnetoencephalography (MEG), and magnetic source imaging was obtained for 17 subjects. Results During phonological categorization, a vowel-dependent difference of N100m source location perpendicular to the main tonotopic gradient replicated previous findings. In speaker categorization, the relative mapping of vowels remained unchanged but sources were shifted towards more posterior and more superior locations. Conclusions These results imply that the N100m reflects the extraction of abstract invariants from the speech signal. This part of the processing is accomplished in auditory areas anterior to AI, which are part of the auditory 'what' system. This network seems to include spatially separable modules for identifying the phonological information and for associating it with a particular speaker that are activated in synchrony but within different regions, suggesting that the 'what' processing can be more adequately modeled by a stream of parallel stages. The relative activation of the parallel processing stages can be modulated by attentional or task demands. PMID:15268765

  14. Dynamic Object Representations in Infants with and without Fragile X Syndrome

    PubMed Central

    Farzin, Faraz; Rivera, Susan M.

    2009-01-01

    Our visual world is dynamic in nature. The ability to encode, mentally represent, and track an object's identity as it moves across time and space is critical for integrating and maintaining a complete and coherent view of the world. Here we investigated dynamic object processing in typically developing (TD) infants and infants with fragile X syndrome (FXS), a single-gene disorder associated with deficits in dorsal stream functioning. We used the violation of expectation method to assess infants’ visual response to expected versus unexpected outcomes following a brief dynamic (dorsal stream) or static (ventral stream) occlusion event. Consistent with previous reports of deficits in dorsal stream-mediated functioning in individuals with this disorder, these results reveal that, compared to mental age-matched TD infants, infants with FXS could maintain the identity of static, but not dynamic, object information during occlusion. These findings are the first to experimentally evaluate visual object processing skills in infants with FXS, and further support the hypothesis of dorsal stream difficulties in infants with this developmental disorder. PMID:20224809

  15. Diminished auditory sensory gating during active auditory verbal hallucinations.

    PubMed

    Thoma, Robert J; Meier, Andrew; Houck, Jon; Clark, Vincent P; Lewine, Jeffrey D; Turner, Jessica; Calhoun, Vince; Stephen, Julia

    2017-10-01

    Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the peak amplitude of the event-related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream, resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired-click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptom Rating Scales (PSYRATS) AVH Total score. The results of a mixed model ANOVA revealed an overall effect of AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. PSYRATS score was significantly and negatively correlated with N100 gating ratio only in the AVH-off state. These findings link onset of AVH with a failure of an empirically defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. Copyright © 2017 Elsevier B.V. All rights reserved.
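
The S2/S1 gating ratio described in this abstract is straightforward to compute from peak amplitudes. A minimal sketch follows; the latency windows and toy ERP waveforms are illustrative assumptions, not the study's actual parameters or data:

```python
import numpy as np

def peak_amplitude(erp, times, window):
    """Maximum absolute amplitude of an ERP within a latency window (seconds)."""
    mask = (times >= window[0]) & (times <= window[1])
    return np.max(np.abs(erp[mask]))

def gating_ratio(erp_s1, erp_s2, times, window):
    """S2/S1 peak-amplitude ratio; higher values indicate weaker suppression of S2."""
    return peak_amplitude(erp_s2, times, window) / peak_amplitude(erp_s1, times, window)

# Illustrative (assumed) latency windows for the three components, in seconds
windows = {"P50": (0.04, 0.08), "N100": (0.08, 0.15), "P200": (0.15, 0.25)}

times = np.linspace(0, 0.3, 301)
# Toy ERPs: the S2 response is modeled as a damped copy of the S1 response
erp_s1 = np.sin(2 * np.pi * 10 * times) * np.exp(-times / 0.1)
erp_s2 = 0.4 * erp_s1

ratios = {c: gating_ratio(erp_s1, erp_s2, times, w) for c, w in windows.items()}
```

With the toy S2 scaled to 40% of S1, every component's ratio comes out at 0.4; real data would of course yield component-specific ratios.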

  16. Stimulation of subgenual cingulate area decreases limbic top-down effect on ventral visual stream: A DBS-EEG pilot study.

    PubMed

    Kibleur, Astrid; Polosan, Mircea; Favre, Pauline; Rudrauf, David; Bougerol, Thierry; Chabardès, Stéphan; David, Olivier

    2017-02-01

    Deep brain stimulation (DBS) of the subgenual cingulate gyrus (area Cg25) is beneficial in treatment-resistant depression. Though the mechanisms of action of Cg25 DBS remain largely unknown, it is commonly believed that Cg25 DBS modulates limbic activity of large networks to achieve thymic regulation of patients. To investigate how emotional attention is influenced by Cg25 DBS, we assessed behavioral and electroencephalographic (EEG) responses to an emotional Stroop task in 5 patients during ON and OFF stimulation conditions. Using EEG source localization, we found that the main effect of DBS was a reduction of neuronal responses in limbic regions (temporal pole, medial prefrontal and posterior cingulate cortices) and in ventral visual areas involved in face processing. In the dynamic causal modeling (DCM) approach, the changes of the evoked response amplitudes are assumed to be due to changes of long-range connectivity induced by Cg25 DBS. Here, using a simplified neural mass model that did not take explicitly into account the cytoarchitecture of the considered brain regions, we showed that the remote action of Cg25 DBS could be explained by a reduced top-down effective connectivity of the amygdalo-temporo-polar complex. Overall, our results thus indicate that Cg25 DBS during the emotional Stroop task causes a decrease of top-down limbic influence on the ventral visual stream itself, rather than a modulation of prefrontal cognitive processes only. Tuning down limbic excitability in relation to sensory processing might be one of the biological mechanisms through which Cg25 DBS produces positive clinical outcome in the treatment of resistant depression. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Encoding model of temporal processing in human visual cortex.

    PubMed

    Stigliani, Anthony; Jeska, Brianna; Grill-Spector, Kalanit

    2017-12-19

    How is temporal information processed in human visual cortex? Visual input is relayed to V1 through segregated transient and sustained channels in the retina and lateral geniculate nucleus (LGN). However, there is intense debate as to how sustained and transient temporal channels contribute to visual processing beyond V1. The prevailing view associates transient processing predominantly with motion-sensitive regions and sustained processing with ventral stream regions, while the opposing view suggests that both temporal channels contribute to neural processing beyond V1. Using fMRI, we measured cortical responses to time-varying stimuli and then implemented a two-temporal-channel encoding model to evaluate the contributions of each channel. Unlike the general linear model of fMRI, which predicts responses directly from the stimulus, the encoding approach first models neural responses to the stimulus, from which fMRI responses are derived. This encoding approach not only predicts cortical responses to time-varying stimuli from milliseconds to seconds but also reveals differential contributions of the temporal channels across visual cortex. Consistent with the prevailing view, motion-sensitive regions and adjacent lateral occipitotemporal regions are dominated by transient responses. However, ventral occipitotemporal regions are driven by both sustained and transient channels, with transient responses exceeding the sustained. These findings prompt a rethinking of temporal processing in the ventral stream and suggest that transient processing may contribute to rapid extraction of the content of the visual input. Importantly, our encoding approach has broad implications, because it can be applied with fMRI to decipher neural computations at millisecond resolution in any part of the brain. Copyright © 2017 the Author(s). Published by PNAS.
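
The two-temporal-channel idea can be illustrated with a toy sketch (this is not the authors' actual model, and the channel definitions and weights below are simplifying assumptions): a sustained channel that follows the stimulus time course, and a transient channel that responds to onsets and offsets via a rectified temporal derivative, with the predicted neural response a weighted sum of the two:

```python
import numpy as np

def two_channel_prediction(stimulus, dt, w_sustained, w_transient):
    """Toy two-temporal-channel prediction: the sustained channel tracks the
    stimulus time course; the transient channel responds at onsets/offsets
    (rectified temporal derivative). Channel outputs are weighted and summed."""
    sustained = stimulus.copy()
    transient = np.abs(np.diff(stimulus, prepend=stimulus[0])) / dt
    return w_sustained * sustained + w_transient * transient

dt = 0.01                      # 10 ms time step
t = np.arange(0, 2, dt)
stimulus = ((t > 0.5) & (t < 1.5)).astype(float)  # 1-s boxcar stimulus

# A transient-dominated region responds mainly at stimulus onset and offset,
# while the sustained term contributes a small plateau during the stimulus
pred = two_channel_prediction(stimulus, dt, w_sustained=0.2, w_transient=1.0)
```

In a full fMRI encoding model, a prediction like `pred` would then be convolved with a hemodynamic response function and fit to voxel time courses to estimate the channel weights.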

  18. Catecholaminergic connectivity to the inner ear, central auditory and vocal motor circuitry in the plainfin midshipman fish, Porichthys notatus

    PubMed Central

    Forlano, Paul M.; Kim, Spencer D.; Krzyminska, Zuzanna M.; Sisneros, Joseph A.

    2014-01-01

    Although the neuroanatomical distribution of catecholaminergic (CA) neurons has been well documented across all vertebrate classes, few studies have examined CA connectivity to physiologically and anatomically identified neural circuitry that controls behavior. The goal of this study was to characterize CA distribution in the brain and inner ear of the plainfin midshipman fish (Porichthys notatus) with particular emphasis on their relationship with anatomically labeled circuitry that both produces and encodes social acoustic signals in this species. Neurobiotin labeling of the main auditory endorgan, the saccule, combined with tyrosine hydroxylase immunofluorescence (TH-ir) revealed a strong CA innervation of both the peripheral and central auditory system. Diencephalic TH-ir neurons in the periventricular posterior tuberculum, known to be dopaminergic, send ascending projections to the ventral telencephalon and prominent descending projections to vocal-acoustic integration sites, notably the hindbrain octavolateralis efferent nucleus, as well as onto the base of hair cells in the saccule via nerve VIII. Neurobiotin backfills of the vocal nerve in combination with TH-ir revealed CA terminals on all components of the vocal pattern generator which appears to largely originate from local TH-ir neurons but may include diencephalic projections as well. This study provides strong evidence for catecholamines as important neuromodulators of both auditory and vocal circuitry and acoustic-driven social behavior in midshipman fish. This first demonstration of TH-ir terminals in the main endorgan of hearing in a non-mammalian vertebrate suggests a conserved and important anatomical and functional role for dopamine in normal audition. PMID:24715479

  19. Areas activated during naturalistic reading comprehension overlap topological visual, auditory, and somatomotor maps.

    PubMed

    Sood, Mariam R; Sereno, Martin I

    2016-08-01

    Cortical mapping techniques using fMRI have been instrumental in identifying the boundaries of topological (neighbor-preserving) maps in early sensory areas. The presence of topological maps beyond early sensory areas raises the possibility that they might play a significant role in other cognitive systems, and that topological mapping might help to delineate areas involved in higher cognitive processes. In this study, we combine surface-based visual, auditory, and somatomotor mapping methods with a naturalistic reading comprehension task in the same group of subjects to provide a qualitative and quantitative assessment of the cortical overlap between sensory-motor maps in all major sensory modalities, and reading processing regions. Our results suggest that cortical activation during naturalistic reading comprehension overlaps more extensively with topological sensory-motor maps than has been heretofore appreciated. Reading activation in regions adjacent to occipital lobe and inferior parietal lobe almost completely overlaps visual maps, whereas a significant portion of frontal activation for reading in dorsolateral and ventral prefrontal cortex overlaps both visual and auditory maps. Even classical language regions in superior temporal cortex are partially overlapped by topological visual and auditory maps. By contrast, the main overlap with somatomotor maps is restricted to a small region on the anterior bank of the central sulcus near the border between the face and hand representations of M-I. Hum Brain Mapp 37:2784-2810, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  20. Near-Term Fetuses Process Temporal Features of Speech

    ERIC Educational Resources Information Center

    Granier-Deferre, Carolyn; Ribeiro, Aurelie; Jacquet, Anne-Yvonne; Bassereau, Sophie

    2011-01-01

    The perception of speech and music requires processing of variations in spectra and amplitude over different time intervals. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, but whether they can process complex auditory streams, such as speech sequences and more specifically their temporal variations, fast or…

  1. Altered structure-function relations of semantic processing in youths with high-functioning autism: a combined diffusion and functional MRI study.

    PubMed

    Lo, Yu-Chun; Chou, Tai-Li; Fan, Li-Ying; Gau, Susan Shur-Fen; Chiu, Yen-Nan; Tseng, Wen-Yih Isaac

    2013-12-01

    Deficits in language and communication are among the core symptoms of autism, a common neurodevelopmental disorder with long-term impairment. Despite the striking nature of the autistic language impairment, knowledge about its corresponding alterations in the brain is still evolving. We hypothesized that the dual-stream language network is altered in autism, and that this alteration could be revealed by changes in the relationships between microstructural integrity and functional activation. The study recruited 20 right-handed male youths with autism and 20 individually matched, typically developing (TD) youths. Microstructural integrity of the left dorsal and left ventral pathways responsible for language processing and the functional activation of the connected brain regions were investigated using diffusion spectrum imaging and functional magnetic resonance imaging of a semantic task, respectively. Youths with autism had significantly poorer language function, and lower functional activation in left dorsal and left ventral regions of the language network, compared with TD youths. The TD group showed a significant correlation of the functional activation of the left dorsal region with microstructural integrity of the left ventral pathway, whereas the autism group showed a significant correlation of the functional activation of the left ventral region with microstructural integrity of the left dorsal pathway; moreover, the verbal comprehension index was correlated with microstructural integrity of the left ventral pathway. These altered structure-function relationships in autism suggest possible involvement of the dual pathways in supporting deficient semantic processing. © 2013 International Society for Autism Research, Wiley Periodicals, Inc.

  2. Music training relates to the development of neural mechanisms of selective auditory attention.

    PubMed

    Strait, Dana L; Slater, Jessica; O'Connell, Samantha; Kraus, Nina

    2015-04-01

    Selective attention decreases trial-to-trial variability in cortical auditory-evoked activity. This effect increases over the course of maturation, potentially reflecting the gradual development of selective attention and inhibitory control. Work in adults indicates that music training may alter the development of this neural response characteristic, especially over brain regions associated with executive control: in adult musicians, attention decreases variability in auditory-evoked responses recorded over prefrontal cortex to a greater extent than in nonmusicians. We aimed to determine whether this musician-associated effect emerges during childhood, when selective attention and inhibitory control are under development. We compared cortical auditory-evoked variability to attended and ignored speech streams in musicians and nonmusicians across three age groups: preschoolers, school-aged children and young adults. Results reveal that childhood music training is associated with reduced auditory-evoked response variability recorded over prefrontal cortex during selective auditory attention in school-aged child and adult musicians. Preschoolers, on the other hand, demonstrate no impact of selective attention on cortical response variability and no musician distinctions. This finding is consistent with the gradual emergence of attention during this period and may suggest no pre-existing differences in this attention-related cortical metric between children who undergo music training and those who do not. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. Impact of peripheral hearing loss on top-down auditory processing.

    PubMed

    Lesicko, Alexandria M H; Llano, Daniel A

    2017-01-01

    The auditory system consists of an intricate set of connections interposed between hierarchically arranged nuclei. The ascending pathways carrying sound information from the cochlea to the auditory cortex are, predictably, altered in instances of hearing loss resulting from blockage or damage to peripheral auditory structures. However, hearing loss-induced changes in descending connections that emanate from higher auditory centers and project back toward the periphery are still poorly understood. These pathways, which are the hypothesized substrate of high-level contextual and plasticity cues, are intimately linked to the ascending stream, and are thereby also likely to be influenced by auditory deprivation. In the current report, we review both the human and animal literature regarding changes in top-down modulation after peripheral hearing loss. Both aged humans and cochlear implant users are able to harness the power of top-down cues to disambiguate corrupted sounds and, in the case of aged listeners, may rely more heavily on these cues than non-aged listeners. The animal literature also reveals a plethora of structural and functional changes occurring in multiple descending projection systems after peripheral deafferentation. These data suggest that peripheral deafferentation induces a rebalancing of bottom-up and top-down controls, and that it will be necessary to understand the mechanisms underlying this rebalancing to develop better rehabilitation strategies for individuals with peripheral hearing loss. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Visual input enhances selective speech envelope tracking in auditory cortex at a "cocktail party".

    PubMed

    Zion Golumbic, Elana; Cogan, Gregory B; Schroeder, Charles E; Poeppel, David

    2013-01-23

    Our ability to selectively attend to one auditory signal amid competing input streams, epitomized by the "Cocktail Party" problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared with responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker's face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a Cocktail Party setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive.

  5. Brain correlates of the orientation of auditory spatial attention onto speaker location in a "cocktail-party" situation.

    PubMed

    Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan

    2016-10-01

    Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.
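
The N2ac computation described above, a difference waveform for left-hemifield targets minus right-hemifield targets, can be sketched with synthetic data; the waveforms, amplitudes, and latencies below are illustrative assumptions, not the study's recordings:

```python
import numpy as np

def n2ac(erp_left_targets, erp_right_targets):
    """N2ac as a contralateral difference waveform: the ERP to left-hemifield
    targets minus the ERP to right-hemifield targets at a given anterior
    electrode site; a negative peak is expected around 500 ms post-stimulus."""
    return erp_left_targets - erp_right_targets

times = np.linspace(0, 1.0, 1000)  # seconds after stimulus onset

# Toy ERPs: a stronger contralateral negativity around 500 ms for left targets
erp_left = -2.0 * np.exp(-((times - 0.5) ** 2) / (2 * 0.05 ** 2))
erp_right = -0.5 * np.exp(-((times - 0.5) ** 2) / (2 * 0.05 ** 2))

diff = n2ac(erp_left, erp_right)
peak_latency = times[np.argmin(diff)]  # most negative point, here near 0.5 s
```

In practice the subcomponent would be quantified from grand-average ERPs at matched left/right electrode pairs; this sketch only shows the subtraction and peak-latency step.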

  6. Disentangling visual imagery and perception of real-world objects

    PubMed Central

    Lee, Sue-Hyun; Kravitz, Dwight J.; Baker, Chris I.

    2011-01-01

    During mental imagery, visual representations can be evoked in the absence of “bottom-up” sensory input. Prior studies have reported similar neural substrates for imagery and perception, but studies of brain-damaged patients have revealed a double dissociation with some patients showing preserved imagery in spite of impaired perception and others vice versa. Here, we used fMRI and multi-voxel pattern analysis to investigate the specificity, distribution, and similarity of information for individual seen and imagined objects to try and resolve this apparent contradiction. In an event-related design, participants either viewed or imagined individual named object images on which they had been trained prior to the scan. We found that the identity of both seen and imagined objects could be decoded from the pattern of activity throughout the ventral visual processing stream. Further, there was enough correspondence between imagery and perception to allow discrimination of individual imagined objects based on the response during perception. However, the distribution of object information across visual areas was strikingly different during imagery and perception. While there was an obvious posterior-anterior gradient along the ventral visual stream for seen objects, there was an opposite gradient for imagined objects. Moreover, the structure of representations (i.e. the pattern of similarity between responses to all objects) was more similar during imagery than perception in all regions along the visual stream. These results suggest that while imagery and perception have similar neural substrates, they involve different network dynamics, resolving the tension between previous imaging and neuropsychological studies. PMID:22040738

  7. A Review of Auditory Prediction and Its Potential Role in Tinnitus Perception.

    PubMed

    Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D

    2018-06-01

    The precise mechanisms underlying tinnitus perception and distress are still not fully understood. A recent proposition is that auditory prediction errors and related memory representations may play a role in driving tinnitus perception. It is of interest to further explore this. To obtain a comprehensive narrative synthesis of current research in relation to auditory prediction and its potential role in tinnitus perception and severity. A narrative review methodological framework was followed. The key words Prediction Auditory, Memory Prediction Auditory, Tinnitus AND Memory, Tinnitus AND Prediction in Article Title, Abstract, and Keywords were extensively searched on four databases: PubMed, Scopus, SpringerLink, and PsycINFO. All study types were selected from 2000-2016 (end of 2016) and had the following exclusion criteria applied: minimum age of participants <18, nonhuman participants, and article not available in English. Reference lists of articles were reviewed to identify any further relevant studies. Articles were short-listed based on title relevance. After reading the abstracts and with consensus made between coauthors, a total of 114 studies were selected for charting data. The hierarchical predictive coding model based on the Bayesian brain hypothesis, attentional modulation and top-down feedback serves as the fundamental framework in current literature for how auditory prediction may occur. Predictions are integral to speech and music processing, as well as in sequential processing and identification of auditory objects during auditory streaming. Although deviant responses are observable from middle-latency time ranges, the mismatch negativity (MMN) waveform is the most commonly studied electrophysiological index of auditory irregularity detection. However, limitations may apply when interpreting findings because of the debatable origin of the MMN and its restricted ability to model real-life, more complex auditory phenomena. Cortical oscillatory band activity may act as a neurophysiological substrate for auditory prediction. Tinnitus has been modeled as an auditory object that may demonstrate incomplete processing during auditory scene analysis, resulting in tinnitus salience and therefore difficulty in habituation. Within the electrophysiological domain, there is currently mixed evidence regarding oscillatory band changes in tinnitus. There are theoretical proposals for a relationship between prediction error and tinnitus but few published empirical studies. American Academy of Audiology.

  8. Predictive motor control of sensory dynamics in Auditory Active Sensing

    PubMed Central

    Morillon, Benjamin; Hackett, Troy A.; Kajikawa, Yoshinao; Schroeder, Charles E.

    2016-01-01

    Neuronal oscillations present potential physiological substrates for brain operations that require temporal prediction. We review this idea in the context of auditory perception. Using speech as an exemplar, we illustrate how hierarchically organized oscillations can be used to parse and encode complex input streams. We then consider the motor system as a major source of rhythms (temporal priors) in auditory processing, that act in concert with attention to sharpen sensory representations and link them across areas. We discuss the anatomo-functional pathways that could mediate this audio-motor interaction, and notably the potential role of the somatosensory cortex. Finally, we reposition temporal predictions in the context of internal models, discussing how they interact with feature-based or spatial predictions. We argue that complementary predictions interact synergistically according to the organizational principles of each sensory system, forming multidimensional filters crucial to perception. PMID:25594376

  9. What difference does a year of schooling make?: Maturation of brain response and connectivity between 2nd and 3rd grades during arithmetic problem solving

    PubMed Central

    Rosenberg-Lee, Miriam; Barth, Maria; Menon, Vinod

    2011-01-01

    Early elementary schooling in 2nd and 3rd grades (ages 7-9) is an important period for the acquisition and mastery of basic mathematical skills. Yet, we know very little about neurodevelopmental changes that might occur over a year of schooling. Here we examine behavioral and neurodevelopmental changes underlying arithmetic problem solving in a well-matched group of 2nd (n = 45) and 3rd (n = 45) grade children. Although 2nd and 3rd graders did not differ on IQ or grade- and age-normed measures of math, reading and working memory, 3rd graders had higher raw math scores (effect sizes = 1.46-1.49) and were more accurate than 2nd graders in an fMRI task involving verification of simple and complex two-operand addition problems (effect size = 0.43). In both 2nd and 3rd graders, arithmetic complexity was associated with increased responses in right inferior frontal sulcus and anterior insula, regions implicated in domain-general cognitive control, and in left intraparietal sulcus (IPS) and superior parietal lobule (SPL) regions important for numerical and arithmetic processing. Compared to 2nd graders, 3rd graders showed greater activity in dorsal stream parietal areas right SPL, IPS and angular gyrus (AG) as well as ventral visual stream areas bilateral lingual gyrus (LG), right lateral occipital cortex (LOC) and right parahippocampal gyrus (PHG). Significant differences were also observed in the prefrontal cortex (PFC), with 3rd graders showing greater activation in left dorsolateral PFC (dlPFC) and greater deactivation in the ventromedial PFC (vmPFC). Third graders also showed greater functional connectivity between the left dlPFC and multiple posterior brain areas, with larger differences in dorsal stream parietal areas SPL and AG, compared to ventral stream visual areas LG, LOC and PHG. No such between-grade differences were observed in functional connectivity between the vmPFC and posterior brain regions. These results suggest that even the narrow one-year interval spanning grades 2 and 3 is characterized by significant arithmetic task-related changes in brain response and connectivity, and argue that pooling data across wide age ranges and grades can miss important neurodevelopmental changes. Our findings have important implications for understanding brain mechanisms mediating early maturation of mathematical skills and, more generally, for educational neuroscience. PMID:21620984

  10. Person perception involves functional integration between the extrastriate body area and temporal pole.

    PubMed

    Greven, Inez M; Ramsey, Richard

    2017-02-01

    The majority of human neuroscience research has focussed on understanding functional organisation within segregated patches of cortex. The ventral visual stream has been associated with the detection of physical features such as faces and body parts, whereas the theory-of-mind network has been associated with making inferences about mental states and underlying character, such as whether someone is friendly, selfish, or generous. To date, however, it is largely unknown how such distinct processing components integrate neural signals. Using functional magnetic resonance imaging and connectivity analyses, we investigated the contribution of functional integration to social perception. During scanning, participants observed bodies that had previously been associated with trait-based or neutral information. Additionally, we independently localised the body perception and theory-of-mind networks. We demonstrate that when observing someone who cues the recall of stored social knowledge compared to non-social knowledge, a node in the ventral visual stream (extrastriate body area) shows greater coupling with part of the theory-of-mind network (temporal pole). These results show that functional connections provide an interface between perceptual and inferential processing components, thus providing neurobiological evidence that supports the view that understanding the visual environment involves interplay between conceptual knowledge and perceptual processing. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  11. Biologically Inspired Model for Inference of 3D Shape from Texture

    PubMed Central

    Gomez, Olman; Neumann, Heiko

    2016-01-01

    A biologically inspired model architecture for inferring 3D shape from texture is proposed. The model is hierarchically organized into modules roughly corresponding to visual cortical areas in the ventral stream. Initial orientation selective filtering decomposes the input into low-level orientation and spatial frequency representations. Grouping of spatially anisotropic orientation responses builds sketch-like representations of surface shape. Gradients in orientation fields and subsequent integration infers local surface geometry and globally consistent 3D depth. From the distributions in orientation responses summed in frequency, an estimate of the tilt and slant of the local surface can be obtained. The model suggests how 3D shape can be inferred from texture patterns and their image appearance in a hierarchically organized processing cascade along the cortical ventral stream. The proposed model integrates oriented texture gradient information that is encoded in distributed maps of orientation-frequency representations. The texture energy gradient information is defined by changes in the grouped summed normalized orientation-frequency response activity extracted from the textured object image. This activity is integrated by directed fields to generate a 3D shape representation of a complex object with depth ordering proportional to the fields' output, with higher activity denoting larger distance in relative depth away from the viewer. PMID:27649387

  12. Neural correlates of specific musical anhedonia

    PubMed Central

    Martínez-Molina, Noelia; Mas-Herrero, Ernest; Rodríguez-Fornells, Antoni; Zatorre, Robert J.

    2016-01-01

    Although music is ubiquitous in human societies, there are some people for whom music holds no reward value despite normal perceptual ability and preserved reward-related responses in other domains. The study of these individuals with specific musical anhedonia may be crucial to understand better the neural correlates underlying musical reward. Previous neuroimaging studies have shown that musically induced pleasure may arise from the interaction between auditory cortical networks and mesolimbic reward networks. If such interaction is critical for music-induced pleasure to emerge, then those individuals who do not experience it should show alterations in the cortical-mesolimbic response. In the current study, we addressed this question using fMRI in three groups of 15 participants, each with different sensitivity to music reward. We demonstrate that the music anhedonic participants showed selective reduction of activity for music in the nucleus accumbens (NAcc), but normal activation levels for a monetary gambling task. Furthermore, this group also exhibited decreased functional connectivity between the right auditory cortex and ventral striatum (including the NAcc). In contrast, individuals with greater than average response to music showed enhanced connectivity between these structures. Thus, our results suggest that specific musical anhedonia may be associated with a reduction in the interplay between the auditory cortex and the subcortical reward network, indicating a pivotal role of this interaction for the enjoyment of music. PMID:27799544

  13. Auditory Brainstem Circuits That Mediate the Middle Ear Muscle Reflex

    PubMed Central

    Mukerji, Sudeep; Windsor, Alanna Marie; Lee, Daniel J.

    2010-01-01

    The middle ear muscle (MEM) reflex is one of two major descending systems to the auditory periphery. There are two middle ear muscles (MEMs): the stapedius and the tensor tympani. In man, the stapedius contracts in response to intense low frequency acoustic stimuli, exerting forces perpendicular to the stapes superstructure, increasing middle ear impedance and attenuating the intensity of sound energy reaching the inner ear (cochlea). The tensor tympani is believed to contract in response to self-generated noise (chewing, swallowing) and nonauditory stimuli. The MEM reflex pathways begin with sound presented to the ear. Transduction of sound occurs in the cochlea, resulting in an action potential that is transmitted along the auditory nerve to the cochlear nucleus in the brainstem (the first relay station for all ascending sound information originating in the ear). Unknown interneurons in the ventral cochlear nucleus project either directly or indirectly to MEM motoneurons located elsewhere in the brainstem. Motoneurons provide efferent innervation to the MEMs. Although the ascending and descending limbs of these reflex pathways have been well characterized, neither the identity of the reflex interneurons nor the sources of modulatory inputs to these pathways are known. The aim of this article is to (a) provide an overview of MEM reflex anatomy and physiology, (b) present new data on MEM reflex anatomy and physiology from our laboratory and others, and (c) describe the clinical implications of our research. PMID:20870664

  14. Construction and Updating of Event Models in Auditory Event Processing

    ERIC Educational Resources Information Center

    Huff, Markus; Maurer, Annika E.; Brich, Irina; Pagenkopf, Anne; Wickelmaier, Florian; Papenmeier, Frank

    2018-01-01

    Humans segment the continuous stream of sensory information into distinct events at points of change. Between 2 events, humans perceive an event boundary. Present theories propose changes in the sensory information to trigger updating processes of the present event model. Increased encoding effort finally leads to a memory benefit at event…

  15. Implicit Processing of Phonotactic Cues: Evidence from Electrophysiological and Vascular Responses

    ERIC Educational Resources Information Center

    Rossi, Sonja; Jurgenson, Ina B.; Hanulikova, Adriana; Telkemeyer, Silke; Wartenburger, Isabell; Obrig, Hellmuth

    2011-01-01

    Spoken word recognition is achieved via competition between activated lexical candidates that match the incoming speech input. The competition is modulated by prelexical cues that are important for segmenting the auditory speech stream into linguistic units. One such prelexical cue that listeners rely on in spoken word recognition is phonotactics.…

  16. DETECTION AND IDENTIFICATION OF SPEECH SOUNDS USING CORTICAL ACTIVITY PATTERNS

    PubMed Central

    Centanni, T.M.; Sloan, A.M.; Reed, A.C.; Engineer, C.T.; Rennaker, R.; Kilgard, M.P.

    2014-01-01

    We have developed a classifier capable of locating and identifying speech sounds using activity from rat auditory cortex with an accuracy equivalent to behavioral performance without the need to specify the onset time of the speech sounds. This classifier can identify speech sounds from a large speech set within 40 ms of stimulus presentation. To compare the temporal limits of the classifier to behavior, we developed a novel task that requires rats to identify individual consonant sounds from a stream of distracter consonants. The classifier successfully predicted the ability of rats to accurately identify speech sounds for syllable presentation rates up to 10 syllables per second (up to 17.9 ± 1.5 bits/sec), which is comparable to human performance. Our results demonstrate that the spatiotemporal patterns generated in primary auditory cortex can be used to quickly and accurately identify consonant sounds from a continuous speech stream without prior knowledge of the stimulus onset times. Improved understanding of the neural mechanisms that support robust speech processing in difficult listening conditions could improve the identification and treatment of a variety of speech processing disorders. PMID:24286757
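    The general idea of such a classifier — per-stimulus templates averaged from training trials, single trials assigned to the nearest template — can be sketched as follows. The array shapes and the Euclidean distance metric are illustrative assumptions, not the authors' exact procedure:

    ```python
    import numpy as np

    def fit_templates(responses, labels):
        """Average single-trial spatiotemporal responses into one template per
        speech sound. responses: (n_trials, n_sites, n_timebins) activity;
        labels: (n_trials,) stimulus identities."""
        return {lab: responses[labels == lab].mean(axis=0)
                for lab in np.unique(labels)}

    def classify(trial, templates):
        """Assign the label whose template is closest (Euclidean distance)
        to the trial's spatiotemporal activity pattern."""
        dists = {lab: np.linalg.norm(trial - tmpl)
                 for lab, tmpl in templates.items()}
        return min(dists, key=dists.get)
    ```

    Because the distance is computed over the full spatiotemporal pattern, no stimulus onset time needs to be supplied; in practice the template window would slide over the continuous recording.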

  17. From genes to brain development to phenotypic behavior: "dorsal-stream vulnerability" in relation to spatial cognition, attention, and planning of actions in Williams syndrome (WS) and other developmental disorders.

    PubMed

    Atkinson, Janette; Braddick, Oliver

    2011-01-01

    Visual information is believed to be processed through two distinct, yet interacting cortical streams. The ventral stream performs the computations needed for recognition of objects and faces ("what" and "who"?) and the dorsal stream the computations for registering spatial relationships and for controlling visually guided actions ("where" and "how"?). We initially proposed a model of spatial deficits in Williams syndrome (WS) in which visual abilities subserved by the ventral stream, such as face recognition, are relatively well developed (although not necessarily in exactly the same way as in typical development), whereas dorsal-stream functions, such as visuospatial actions, are markedly impaired. Since these initial findings in WS, deficits of motion coherence sensitivity, a dorsal-stream function, have been found in other genetic disorders such as Fragile X and autism, and as a consequence of perinatal events (in hemiplegia, perinatal brain anomalies following very premature birth), leading to the proposal of a general "dorsal-stream vulnerability" in many different conditions of abnormal human development. In addition, dorsal-stream systems provide information used in tasks of visuospatial memory and locomotor planning, and these systems are closely coupled to networks for attentional control. We and several other research groups have previously shown deficits of frontal and parietal lobe function in WS individuals for specific attention tasks [e.g., Atkinson, J., Braddick, O., Anker, S., Curran, W., & Andrew, R. (2003). Neurobiological models of visuospatial cognition in children with Williams Syndrome: Measures of dorsal-stream and frontal function. Developmental Neuropsychology, 23(1/2), 141-174.]. 
We have used the Test of Everyday Attention for Children (TEA-Ch), which aims to separate components of attention with distinct brain networks (selective attention, sustained attention, and attentional control/executive function), to test a group of older children with WS, but this test battery is too demanding for many children and adults with WS. Consequently, we have devised a new set of tests of attention, the Early Childhood Attention Battery (ECAB). This uses similar principles to the TEA-Ch, but adapted for mental ages younger than 6 years. The ECAB shows a distinctive attention profile for WS individuals relative to their overall cognitive development, with relative strength in tasks of sustained attention and poorer performance on tasks of selective attention and executive control. These profiles, and the characteristic developmental courses, also show differences between children with Down's syndrome and WS. This chapter briefly reviews new research findings on WS in these areas, relating the development of brain systems in WS to evidence from neuroimaging in typically developing infants, children born very preterm, and normal adults. The hypothesis of "dorsal-stream(s) vulnerability" discussed here includes a number of interlinked brain networks, subserving not only global visual processing and formulation of visuomotor actions but also interlinked networks of attention. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Prediction and constraint in audiovisual speech perception

    PubMed Central

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners by increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. 
Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. PMID:25890390

  19. Audio-visual speech processing in age-related hearing loss: Stronger integration and increased frontal lobe recruitment.

    PubMed

    Rosemann, Stephanie; Thiel, Christiane M

    2018-07-15

    Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input affects audio-visual speech perception in hearing-impaired subjects is largely unknown. We here investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition. 
Our results indicate that even mild to moderate hearing loss affects audio-visual speech processing, accompanied by changes in brain activation particularly involving frontal areas. These changes are modulated by the extent of hearing loss. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. Reduced audiovisual recalibration in the elderly.

    PubMed

    Chan, Yu Man; Pianta, Michael J; McKendrick, Allison M

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22-32 years old) and 15 older (64-74 years old) healthy adults using a method-of-constant-stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age.
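    The adaptation effect here is defined as the shift of the fitted synchrony-judgement curve after asynchrony adaptation. A simplified sketch of that computation, using moment-based estimates of the curve's centre and width in place of the individually fitted psychometric functions the authors used (SOA values and response proportions below are made up):

    ```python
    import numpy as np

    def synchrony_window(soa_ms, p_sync):
        """Moment-based estimate of the centre (point of subjective simultaneity)
        and width of a synchrony-judgement curve.
        soa_ms: stimulus-onset asynchronies in ms (negative = sound-lead).
        p_sync: proportion of 'synchronous' responses at each SOA."""
        soa = np.asarray(soa_ms, float)
        w = np.asarray(p_sync, float)
        w = w / w.sum()                        # treat the curve as a distribution
        centre = np.sum(w * soa)
        width = np.sqrt(np.sum(w * (soa - centre) ** 2))
        return centre, width

    def adaptation_effect(soa_ms, p_sync_baseline, p_sync_adapted):
        """Shift of the window centre after asynchrony adaptation; positive
        values mean the curve moved toward sound-lag SOAs."""
        c0, _ = synchrony_window(soa_ms, p_sync_baseline)
        c1, _ = synchrony_window(soa_ms, p_sync_adapted)
        return c1 - c0
    ```

    A smaller shift in the older group, as reported, would show up directly as a smaller `adaptation_effect` value.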

  1. Reduced audiovisual recalibration in the elderly

    PubMed Central

    Chan, Yu Man; Pianta, Michael J.; McKendrick, Allison M.

    2014-01-01

    Perceived synchrony of visual and auditory signals can be altered by exposure to a stream of temporally offset stimulus pairs. Previous literature suggests that adapting to audiovisual temporal offsets is an important recalibration to correctly combine audiovisual stimuli into a single percept across a range of source distances. Healthy aging results in synchrony perception over a wider range of temporally offset visual and auditory signals, independent of age-related unisensory declines in vision and hearing sensitivities. However, the impact of aging on audiovisual recalibration is unknown. Audiovisual synchrony perception for sound-lead and sound-lag stimuli was measured for 15 younger (22–32 years old) and 15 older (64–74 years old) healthy adults using a method-of-constant-stimuli, after adapting to a stream of visual and auditory pairs. The adaptation pairs were either synchronous or asynchronous (sound-lag of 230 ms). The adaptation effect for each observer was computed as the shift in the mean of the individually fitted psychometric functions after adapting to asynchrony. Post-adaptation to synchrony, the younger and older observers had average window widths (±standard deviation) of 326 (±80) and 448 (±105) ms, respectively. There was no adaptation effect for sound-lead pairs. Both the younger and older observers, however, perceived more sound-lag pairs as synchronous. The magnitude of the adaptation effect in the older observers was not correlated with how often they saw the adapting sound-lag stimuli as asynchronous. Our finding demonstrates that audiovisual synchrony perception adapts less with advancing age. PMID:25221508

  2. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    PubMed Central

    Hill, N J; Schölkopf, B

    2012-01-01

    We report on the development and online testing of an EEG-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects’ modulation of N1 and P3 ERP components measured during single 5-second stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare “oddball” stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly-known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention-modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject’s attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology. PMID:22333135
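    The classification principle — contrasting responses to attended-stream versus unattended-stream stimuli rather than frequent-standard versus rare-oddball stimuli — can be sketched as a simple decision rule. The P3-like time window and the sign convention (larger late positivity for attended stimuli) are assumptions for illustration, not the study's calibrated single-trial classifier:

    ```python
    import numpy as np

    def late_positivity(epochs, times, window=(0.25, 0.45)):
        """Mean amplitude in a late positive (P3-like) window, averaged over
        all epochs of one stream. epochs: (n_epochs, n_samples) voltages;
        times: (n_samples,) in seconds relative to stimulus onset."""
        mask = (times >= window[0]) & (times <= window[1])
        return epochs[:, mask].mean()

    def decide_attended(left_epochs, right_epochs, times):
        """Binary left/right decision for one trial: the stream whose stimuli
        evoke the larger late positivity is taken to be the attended one."""
        left = late_positivity(left_epochs, times)
        right = late_positivity(right_epochs, times)
        return 'left' if left > right else 'right'
    ```

    Because every stimulus in both streams contributes an epoch, the contrast accumulates evidence across the whole 5-second stimulation interval rather than waiting for rare targets.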

  3. A practical, intuitive brain-computer interface for communicating ‘yes’ or ‘no’ by listening

    NASA Astrophysics Data System (ADS)

    Hill, N. Jeremy; Ricci, Erin; Haider, Sameah; McCane, Lynn M.; Heckman, Susan; Wolpaw, Jonathan R.; Vaughan, Theresa M.

    2014-06-01

    Objective. Previous work has shown that it is possible to build an EEG-based binary brain-computer interface system (BCI) driven purely by shifts of attention to auditory stimuli. However, previous studies used abrupt, abstract stimuli that are often perceived as harsh and unpleasant, and whose lack of inherent meaning may make the interface unintuitive and difficult for beginners. We aimed to establish whether we could transition to a system based on more natural, intuitive stimuli (spoken words ‘yes’ and ‘no’) without loss of performance, and whether the system could be used by people in the locked-in state. Approach. We performed a counterbalanced, interleaved within-subject comparison between an auditory streaming BCI that used beep stimuli, and one that used word stimuli. Fourteen healthy volunteers performed two sessions each, on separate days. We also collected preliminary data from two subjects with advanced amyotrophic lateral sclerosis (ALS), who used the word-based system to answer a set of simple yes-no questions. Main results. The N1, N2 and P3 event-related potentials elicited by words varied more between subjects than those elicited by beeps. However, the difference between responses to attended and unattended stimuli was more consistent with words than beeps. Healthy subjects’ performance with word stimuli (mean 77% ± 3.3 s.e.) was slightly but not significantly better than their performance with beep stimuli (mean 73% ± 2.8 s.e.). The two subjects with ALS used the word-based BCI to answer questions with a level of accuracy similar to that of the healthy subjects. Significance. Since performance using word stimuli was at least as good as performance using beeps, we recommend that auditory streaming BCI systems be built with word stimuli to make the system more pleasant and intuitive. Our preliminary data show that word-based streaming BCI is a promising tool for communication by people who are locked in.

  4. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    NASA Astrophysics Data System (ADS)

    Hill, N. J.; Schölkopf, B.

    2012-04-01

    We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare ‘oddball’ stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.

  5. Cortical substrates and functional correlates of auditory deviance processing deficits in schizophrenia

    PubMed Central

    Rissling, Anthony J.; Miyakoshi, Makoto; Sugar, Catherine A.; Braff, David L.; Makeig, Scott; Light, Gregory A.

    2014-01-01

    Although sensory processing abnormalities contribute to widespread cognitive and psychosocial impairments in schizophrenia (SZ) patients, scalp-channel measures of averaged event-related potentials (ERPs) mix contributions from distinct cortical source-area generators, diluting the functional relevance of channel-based ERP measures. SZ patients (n = 42) and non-psychiatric comparison subjects (n = 47) participated in a passive auditory duration oddball paradigm, eliciting a triphasic (Deviant−Standard) tone ERP difference complex, here termed the auditory deviance response (ADR), comprised of a mid-frontal mismatch negativity (MMN), P3a positivity, and re-orienting negativity (RON) peak sequence. To identify its cortical sources and to assess possible relationships between their response contributions and clinical SZ measures, we applied independent component analysis to the continuous 68-channel EEG data and clustered the resulting independent components (ICs) across subjects on spectral, ERP, and topographic similarities. Six IC clusters centered in right superior temporal, right inferior frontal, ventral mid-cingulate, anterior cingulate, medial orbitofrontal, and dorsal mid-cingulate cortex each made triphasic response contributions. Although correlations between measures of SZ clinical, cognitive, and psychosocial functioning and standard (Fz) scalp-channel ADR peak measures were weak or absent, for at least four IC clusters one or more significant correlations emerged. In particular, differences in MMN peak amplitude in the right superior temporal IC cluster accounted for 48% of the variance in SZ-subject performance on tasks necessary for real-world functioning and medial orbitofrontal cluster P3a amplitude accounted for 40%/54% of SZ-subject variance in positive/negative symptoms. Thus, source-resolved auditory deviance response measures including MMN may be highly sensitive to SZ clinical, cognitive, and functional characteristics. PMID:25379456
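    The first analysis step here — unmixing continuous EEG into independent components before clustering them — is standard ICA. A compact, numpy-only FastICA sketch (tanh contrast, symmetric decorrelation); real EEG pipelines use mature implementations, and the toy two-source demo below is illustrative only:

    ```python
    import numpy as np

    def fastica(X, n_components, n_iter=200, seed=0):
        """Minimal symmetric FastICA. X: (n_channels, n_samples) mixed signals.
        Returns estimated sources of shape (n_components, n_samples)."""
        # Whiten: project onto top principal components and equalize variance.
        Xc = X - X.mean(axis=1, keepdims=True)
        d, E = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
        d, E = d[::-1][:n_components], E[:, ::-1][:, :n_components]
        Z = (E / np.sqrt(d)).T @ Xc
        # Fixed-point iteration with the tanh ("logcosh") contrast function.
        W = np.random.default_rng(seed).standard_normal((n_components, n_components))
        for _ in range(n_iter):
            G = np.tanh(W @ Z)
            W = G @ Z.T / Z.shape[1] - np.diag((1.0 - G**2).mean(axis=1)) @ W
            u, _, vt = np.linalg.svd(W)    # symmetric decorrelation:
            W = u @ vt                     # W <- (W W^T)^(-1/2) W
        return W @ Z
    ```

    In the study's pipeline, the resulting components (with their scalp maps and spectra) are what get clustered across subjects; sources are recovered only up to sign, scale, and order, which is why cluster matching uses similarity measures rather than raw component indices.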

  6. A Theory of Object Recognition: Computations and Circuits in the Feedforward Path of the Ventral Stream in Primate Visual Cortex

    DTIC Science & Technology

    2005-12-01

    Computational Learning in the Department of Brain & Cognitive Sciences and in the Computer Science and Artificial Intelligence Laboratory at the Massachusetts... The original model [Riesenhuber and Poggio, 1999b] also made a few predictions ranging from biophysics to psychophysics.

  7. Contributions of the Ventral Striatum to Conscious Perception: An Intracranial EEG Study of the Attentional Blink.

    PubMed

    Slagter, Heleen A; Mazaheri, Ali; Reteig, Leon C; Smolders, Ruud; Figee, Martijn; Mantione, Mariska; Schuurman, P Richard; Denys, Damiaan

    2017-02-01

    The brain is limited in its capacity to consciously process information, necessitating gating of information. While conscious perception is robustly associated with sustained, recurrent interactions between widespread cortical regions, subcortical regions, including the striatum, influence cortical activity. Here, we examined whether the ventral striatum, given its ability to modulate cortical information flow, contributes to conscious perception. Using intracranial EEG, we recorded ventral striatum activity while 7 patients performed an attentional blink task in which they had to detect two targets (T1 and T2) in a stream of distractors. Typically, when T2 follows T1 within 100-500 ms, it is often not perceived (i.e., the attentional blink). We found that conscious T2 perception was influenced and signaled by ventral striatal activity. Specifically, the failure to perceive T2 was foreshadowed by a T1-induced increase in α and low β oscillatory activity as early as 80 ms after T1, indicating that the attentional blink to T2 may be due to very early T1-driven attentional capture. Moreover, only consciously perceived targets were associated with an increase in θ activity between 200 and 400 ms. These unique findings shed new light on the mechanisms that give rise to the attentional blink by revealing that conscious target perception may be determined by T1 processing at a much earlier processing stage than traditionally believed. More generally, they indicate that ventral striatum activity may contribute to conscious perception, presumably by gating cortical information flow. What determines whether we become aware of a piece of information or not? Conscious access has been robustly associated with activity within a distributed network of cortical regions. 
Using intracranial electrophysiological recordings during an attentional blink task, we tested the idea that the ventral striatum, because of its ability to modulate cortical information flow, may contribute to conscious perception. We find that conscious perception is influenced and signaled by ventral striatal activity. Short-latency (80-140 ms) striatal responses to a first target determined conscious perception of a second target. Moreover, conscious perception of the second target was signaled by longer-latency (200-400 ms) striatal activity. These results suggest that the ventral striatum may be part of a subcortical network that influences conscious experience. Copyright © 2017 the authors.

  8. Increasingly complex representations of natural movies across the dorsal stream are shared between subjects.

    PubMed

    Güçlü, Umut; van Gerven, Marcel A J

    2017-01-15

    Recently, deep neural networks (DNNs) have been shown to provide accurate predictions of neural responses across the ventral visual pathway. We here explore whether they also provide accurate predictions of neural responses across the dorsal visual pathway, which is thought to be devoted to motion processing and action recognition. This is achieved by training deep neural networks to recognize actions in videos and subsequently using them to predict neural responses while subjects are watching natural movies. Moreover, we explore whether dorsal stream representations are shared between subjects. In order to address this question, we examine if individual subject predictions can be made in a common representational space estimated via hyperalignment. Results show that a DNN trained for action recognition can be used to accurately predict how the dorsal stream responds to natural movies, revealing a correspondence in representations of DNN layers and dorsal stream areas. It is also demonstrated that models operating in a common representational space can generalize to responses of multiple or even unseen individual subjects to novel spatio-temporal stimuli in both encoding and decoding settings, suggesting that a common representational space underlies dorsal stream responses across multiple subjects. Copyright © 2015 Elsevier Inc. All rights reserved.
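    Two building blocks of this kind of analysis can be sketched with made-up shapes: a ridge-regression encoding model mapping DNN-layer features to neural responses, and an orthogonal Procrustes map as a stand-in for the hyperalignment step (actual hyperalignment iterates such maps across subjects and stimuli):

    ```python
    import numpy as np

    def ridge_fit(F, Y, alpha=1.0):
        """L2-regularized linear encoding model from stimulus features
        F (n_samples, n_features) to responses Y (n_samples, n_voxels).
        Returns weights of shape (n_features, n_voxels)."""
        n_feat = F.shape[1]
        return np.linalg.solve(F.T @ F + alpha * np.eye(n_feat), F.T @ Y)

    def procrustes_map(A, B):
        """Orthogonal matrix R minimizing ||A @ R - B||_F: maps one subject's
        response space onto another's (the core of hyperalignment-style
        common-space methods). A, B: (n_samples, n_dims)."""
        u, _, vt = np.linalg.svd(A.T @ B)
        return u @ vt
    ```

    Encoding accuracy would then be measured as the per-voxel correlation between `F_test @ W` and held-out responses, either in each subject's native space or after mapping into the shared space.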

  9. Brain Activation Patterns in Response to Conspecific and Heterospecific Social Acoustic Signals in Female Plainfin Midshipman Fish, Porichthys notatus.

    PubMed

    Mohr, Robert A; Chang, Yiran; Bhandiwad, Ashwin A; Forlano, Paul M; Sisneros, Joseph A

    2018-01-01

    While the peripheral auditory system of fish has been well studied, less is known about how the fish's brain and central auditory system process complex social acoustic signals. The plainfin midshipman fish, Porichthys notatus, has become a good species for investigating the neural basis of acoustic communication because the production and reception of acoustic signals is paramount for this species' reproductive success. Nesting males produce long-duration advertisement calls that females detect and localize among the noise in the intertidal zone to successfully find mates and spawn. How female midshipman are able to discriminate male advertisement calls from environmental noise and other acoustic stimuli is unknown. Using the immediate early gene product cFos as a marker for neural activity, we quantified neural activation of the ascending auditory pathway in female midshipman exposed to conspecific advertisement calls, heterospecific white seabass calls, or ambient environment noise. We hypothesized that auditory hindbrain nuclei would be activated by general acoustic stimuli (ambient noise and other biotic acoustic stimuli) whereas auditory neurons in the midbrain and forebrain would be selectively activated by conspecific advertisement calls. We show that neural activation in two regions of the auditory hindbrain, i.e., the rostral intermediate division of the descending octaval nucleus and the ventral division of the secondary octaval nucleus, did not differ via cFos immunoreactive (cFos-ir) activity when exposed to different acoustic stimuli. In contrast, female midshipman exposed to conspecific advertisement calls showed greater cFos-ir in the nucleus centralis of the midbrain torus semicircularis compared to fish exposed only to ambient noise. No difference in cFos-ir was observed in the torus semicircularis of animals exposed to conspecific versus heterospecific calls. 
However, cFos-ir was greater in two forebrain structures that receive auditory input, i.e., the central posterior nucleus of the thalamus and the anterior tuberal nucleus of the hypothalamus, when fish were exposed to conspecific calls versus either ambient noise or heterospecific calls. Our results suggest that higher-order neurons in the female midshipman midbrain torus semicircularis, thalamic central posterior nucleus, and hypothalamic anterior tuberal nucleus may be necessary for the discrimination of complex social acoustic signals. Furthermore, neurons in the central posterior and anterior tuberal nuclei are differentially activated by exposure to conspecific versus other acoustic stimuli. © 2018 S. Karger AG, Basel.

  10. Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a ‘Cocktail Party’

    PubMed Central

    Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David

    2013-01-01

    Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218

  11. A centralized audio presentation manager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papp, A.L. III; Blattner, M.M.

    1994-05-16

The centralized audio presentation manager addresses the problems which occur when multiple programs running simultaneously attempt to use the audio output of a computer system. Time dependence of sound means that certain auditory messages must be scheduled simultaneously, which can lead to perceptual problems due to psychoacoustic phenomena. Furthermore, the combination of speech and nonspeech audio is examined; each presents its own problems of perceptibility in an acoustic environment composed of multiple auditory streams. The centralized audio presentation manager receives abstract parameterized message requests from the currently running programs, and attempts to create and present a sonic representation in the most perceptible manner through the use of a theoretically and empirically designed rule set.

  12. Attentional Shifts between Audition and Vision in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Occelli, Valeria; Esposito, Gianluca; Venuti, Paola; Arduino, Giuseppe Maurizio; Zampini, Massimiliano

    2013-01-01

    Previous evidence on neurotypical adults shows that the presentation of a stimulus allocates the attention to its modality, resulting in faster responses to a subsequent target presented in the same (vs. different) modality. People with Autism Spectrum Disorders (ASDs) often fail to detect a (visual or auditory) target in a stream of stimuli after…

  13. Implicit Segmentation of a Stream of Syllables Based on Transitional Probabilities: An MEG Study

    ERIC Educational Resources Information Center

    Teinonen, Tuomas; Huotilainen, Minna

    2012-01-01

    Statistical segmentation of continuous speech, i.e., the ability to utilise transitional probabilities between syllables in order to detect word boundaries, is reflected in the brain's auditory event-related potentials (ERPs). The N1 and N400 ERP components are typically enhanced for word onsets compared to random syllables during active…

  14. Hand shape selection in pantomimed grasping: Interaction between the dorsal and the ventral visual streams and convergence on the ventral premotor area

    PubMed Central

    Makuuchi, Michiru; Someya, Yoshiaki; Ogawa, Seiji; Takayama, Yoshihiro

    2011-01-01

In visually guided grasping, possible hand shapes are computed from the geometrical features of the object, while prior knowledge about the object and the goal of the action influence both the computation and the selection of the hand shape. We investigated the system dynamics of the human brain for the pantomiming of grasping with two aspects accentuated. One is object recognition, with the use of objects for daily use. The subjects mimed grasping movements appropriate for an object presented in a photograph by either a precision or a power grip. The other is the selection of grip hand shape. We manipulated the selection demands for the grip hand shape by having the subjects use the same or different grip type in the second presentation of the identical object. Effective connectivity analysis revealed that the increased selection demands enhance the interaction between the anterior intraparietal sulcus (AIP) and posterior inferior temporal gyrus (pITG), and drive the converging causal influences from the AIP, pITG, and dorsolateral prefrontal cortex to the ventral premotor area (PMv). These results suggest that the dorsal and ventral visual areas interact in the pantomiming of grasping, while the PMv integrates the neural information of different regions to select the hand posture. The present study proposes system dynamics in visually guided movement toward meaningful objects, but further research is needed to examine whether the same dynamics also hold in real grasping. PMID:21739528

  15. Resilience to the contralateral visual field bias as a window into object representations

    PubMed Central

    Garcea, Frank E.; Kristensen, Stephanie; Almeida, Jorge; Mahon, Bradford Z.

    2016-01-01

    Viewing images of manipulable objects elicits differential blood oxygen level-dependent (BOLD) contrast across parietal and dorsal occipital areas of the human brain that support object-directed reaching, grasping, and complex object manipulation. However, it is unknown which object-selective regions of parietal cortex receive their principal inputs from the ventral object-processing pathway and which receive their inputs from the dorsal object-processing pathway. Parietal areas that receive their inputs from the ventral visual pathway, rather than from the dorsal stream, will have inputs that are already filtered through object categorization and identification processes. This predicts that parietal regions that receive inputs from the ventral visual pathway should exhibit object-selective responses that are resilient to contralateral visual field biases. To test this hypothesis, adult participants viewed images of tools and animals that were presented to the left or right visual fields during functional magnetic resonance imaging (fMRI). We found that the left inferior parietal lobule showed robust tool preferences independently of the visual field in which tool stimuli were presented. In contrast, a region in posterior parietal/dorsal occipital cortex in the right hemisphere exhibited an interaction between visual field and category: tool-preferences were strongest contralateral to the stimulus. These findings suggest that action knowledge accessed in the left inferior parietal lobule operates over inputs that are abstracted from the visual input and contingent on analysis by the ventral visual pathway, consistent with its putative role in supporting object manipulation knowledge. PMID:27160998

  16. Identification of distinct telencephalic progenitor pools for neuronal diversity in the amygdala

    PubMed Central

    Hirata, Tsutomu; Li, Peijun; Lanuza, Guillermo M.; Cocas, Laura A.; Huntsman, Molly M.; Corbin, Joshua G.

    2009-01-01

    Development of the amygdala, a central structure of the limbic system, remains poorly understood. Using mouse as a model, our studies reveal that two spatially distinct and early specified telencephalic progenitor pools marked by the homeodomain transcription factor Dbx1 are major sources of neuronal cell diversity in the mature amygdala. We find that Dbx1+ cells of the ventral pallium (VP) generate excitatory neurons of the basolateral complex and cortical amygdala nuclei. Moreover, Dbx1-derived cells comprise a novel migratory stream that emanates from the preoptic area (POA), a ventral telencephalic domain adjacent to the diencephalic border. The Dbx1+ POA-derived population migrates specifically to the amygdala, and as defined by both immunochemical and electrophysiological criteria, generates a unique subclass of inhibitory neurons in the medial amygdala nucleus. Thus, this POA-derived population represents a novel progenitor pool dedicated to the limbic system. PMID:19136974

  17. Identification of distinct telencephalic progenitor pools for neuronal diversity in the amygdala.

    PubMed

    Hirata, Tsutomu; Li, Peijun; Lanuza, Guillermo M; Cocas, Laura A; Huntsman, Molly M; Corbin, Joshua G

    2009-02-01

    The development of the amygdala, a central structure of the limbic system, remains poorly understood. We found that two spatially distinct and early-specified telencephalic progenitor pools marked by the homeodomain transcription factor Dbx1 are major sources of neuronal cell diversity in the mature mouse amygdala. We found that Dbx1-positive cells of the ventral pallium generate the excitatory neurons of the basolateral complex and cortical amygdala nuclei. Moreover, Dbx1-derived cells comprise a previously unknown migratory stream that emanates from the preoptic area (POA), a ventral telencephalic domain adjacent to the diencephalic border. The Dbx1-positive, POA-derived population migrated specifically to the amygdala and, as defined by both immunochemical and electrophysiological criteria, generated a unique subclass of inhibitory neurons in the medial amygdala nucleus. Thus, this POA-derived population represents a previously unknown progenitor pool dedicated to the limbic system.

  18. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. 
To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887

  19. Objects and categories: feature statistics and object processing in the ventral stream.

    PubMed

    Tyler, Lorraine K; Chiu, Shannon; Zhuang, Jie; Randall, Billi; Devereux, Barry J; Wright, Paul; Clarke, Alex; Taylor, Kirsten I

    2013-10-01

Recognizing an object involves more than just visual analyses; its meaning must also be decoded. Extensive research has shown that processing the visual properties of objects relies on a hierarchically organized stream in ventral occipitotemporal cortex, with increasingly more complex visual features being coded from posterior to anterior sites culminating in the perirhinal cortex (PRC) in the anteromedial temporal lobe (aMTL). The neurobiological principles of the conceptual analysis of objects remain more controversial. Much research has focused on two neural regions: the fusiform gyrus and the aMTL, both of which show semantic category differences, but of different types. fMRI studies show category differentiation in the fusiform gyrus, based on clusters of semantically similar objects, whereas category-specific deficits, specifically for living things, are associated with damage to the aMTL. These category-specific deficits for living things have been attributed to problems in differentiating between highly similar objects, a process that involves the PRC. To determine whether the PRC and the fusiform gyri contribute to different aspects of an object's meaning, with differentiation between confusable objects in the PRC and categorization based on object similarity in the fusiform, we carried out an fMRI study of object processing based on a feature-based model that characterizes the degree of semantic similarity and difference between objects and object categories. Participants saw 388 objects for which feature statistic information was available and named the objects at the basic level while undergoing fMRI scanning. 
After controlling for the effects of visual information, we found that feature statistics that capture similarity between objects formed category clusters in fusiform gyri, such that objects with many shared features (typical of living things) were associated with activity in the lateral fusiform gyri whereas objects with fewer shared features (typical of nonliving things) were associated with activity in the medial fusiform gyri. Significantly, a feature statistic reflecting differentiation between highly similar objects, enabling object-specific representations, was associated with bilateral PRC activity. These results confirm that the statistical characteristics of conceptual object features are coded in the ventral stream, supporting a conceptual feature-based hierarchy, and integrating disparate findings of category responses in fusiform gyri and category deficits in aMTL into a unifying neurocognitive framework.

  20. Improving Dorsal Stream Function in Dyslexics by Training Figure/Ground Motion Discrimination Improves Attention, Reading Fluency, and Working Memory.

    PubMed

    Lawton, Teri

    2016-01-01

There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue three interventions were compared in 58 dyslexics in second grade (7 years on average), two targeting the temporal dynamics (timing) of either the auditory or visual pathways, with a third reading intervention (control group) targeting linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency (both speed and comprehension), phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used not only to detect dyslexia early, but also to treat it successfully, so that reading problems do not prevent children from readily learning.

  1. Opposite patterns of hemisphere dominance for early auditory processing of lexical tones and consonants

    PubMed Central

    Luo, Hao; Ni, Jing-Tian; Li, Zhi-Hao; Li, Xiao-Ou; Zhang, Da-Ren; Zeng, Fan-Gang; Chen, Lin

    2006-01-01

In tonal languages such as Mandarin Chinese, a lexical tone carries semantic information and is preferentially processed in the left brain hemisphere of native speakers, as revealed by functional MRI and positron emission tomography studies, which likely measure temporally aggregated neural events including those at an attentive stage of auditory processing. Here, we demonstrate that early auditory processing of a lexical tone at a preattentive stage is actually lateralized to the right hemisphere. We frequently presented to native Mandarin Chinese speakers a meaningful auditory word with a consonant-vowel structure and infrequently varied either its lexical tone or initial consonant using an odd-ball paradigm to create a contrast resulting in a change in word meaning. The lexical tone contrast evoked a stronger preattentive response, as revealed by whole-head electric recordings of the mismatch negativity, in the right hemisphere than in the left hemisphere, whereas the consonant contrast produced an opposite pattern. Given the distinct acoustic features between a lexical tone and a consonant, this opposite lateralization pattern suggests the dependence of hemisphere dominance mainly on acoustic cues before speech input is mapped into a semantic representation in the processing stream. PMID:17159136

  2. Decoding auditory spatial and emotional information encoding using multivariate versus univariate techniques.

    PubMed

    Kryklywy, James H; Macpherson, Ewan A; Mitchell, Derek G V

    2018-04-01

Emotion can have diverse effects on behaviour and perception, modulating function in some circumstances, and sometimes having little effect. Recently, it was identified that part of the heterogeneity of emotional effects could be due to a dissociable representation of emotion in dual pathway models of sensory processing. Our previous fMRI experiment using traditional univariate analyses showed that emotion modulated processing in the auditory 'what' but not 'where' processing pathway. The current study aims to further investigate this dissociation using a more recently emerging multi-voxel pattern analysis searchlight approach. While undergoing fMRI, participants localized sounds of varying emotional content. A searchlight multi-voxel pattern analysis was conducted to identify activity patterns predictive of sound location and/or emotion. Relative to the prior univariate analysis, MVPA indicated larger overlapping spatial and emotional representations of sound within early secondary regions associated with auditory localization. However, consistent with the univariate analysis, these two dimensions were increasingly segregated in late secondary and tertiary regions of the auditory processing streams. These results, while complementary to our original univariate analyses, highlight the utility of multiple analytic approaches for neuroimaging, particularly for neural processes with known representations dependent on population coding.

  3. Connectional Modularity of Top-Down and Bottom-Up Multimodal Inputs to the Lateral Cortex of the Mouse Inferior Colliculus

    PubMed Central

    Lesicko, Alexandria M.H.; Hristova, Teodora S.; Maigler, Kathleen C.

    2016-01-01

    The lateral cortex of the inferior colliculus receives information from both auditory and somatosensory structures and is thought to play a role in multisensory integration. Previous studies in the rat have shown that this nucleus contains a series of distinct anatomical modules that stain for GAD-67 as well as other neurochemical markers. In the present study, we sought to better characterize these modules in the mouse inferior colliculus and determine whether the connectivity of other neural structures with the lateral cortex is spatially related to the distribution of these neurochemical modules. Staining for GAD-67 and other markers revealed a single modular network throughout the rostrocaudal extent of the mouse lateral cortex. Somatosensory inputs from the somatosensory cortex and dorsal column nuclei were found to terminate almost exclusively within these modular zones. However, projections from the auditory cortex and central nucleus of the inferior colliculus formed patches that interdigitate with the GAD-67-positive modules. These results suggest that the lateral cortex of the mouse inferior colliculus exhibits connectional as well as neurochemical modularity and may contain multiple segregated processing streams. This finding is discussed in the context of other brain structures in which neuroanatomical and connectional modularity have functional consequences. SIGNIFICANCE STATEMENT Many brain regions contain subnuclear microarchitectures, such as the matrix-striosome organization of the basal ganglia or the patch-interpatch organization of the visual cortex, that shed light on circuit complexities. In the present study, we demonstrate the presence of one such micro-organization in the rodent inferior colliculus. While this structure is typically viewed as an auditory integration center, its lateral cortex appears to be involved in multisensory operations and receives input from somatosensory brain regions. 
We show here that the lateral cortex can be further subdivided into multiple processing streams: modular regions, which are targeted by somatosensory inputs, and extramodular zones that receive auditory information. PMID:27798184

  4. Cross-modal interactions during perception of audiovisual speech and nonspeech signals: an fMRI study.

    PubMed

    Hertrich, Ingo; Dietrich, Susanne; Ackermann, Hermann

    2011-01-01

During speech communication, visual information may interact with the auditory system at various processing stages. Most noteworthy, recent magnetoencephalography (MEG) data provided first evidence for early and preattentive phonetic/phonological encoding of the visual data stream--prior to its fusion with auditory phonological features [Hertrich, I., Mathiak, K., Lutzenberger, W., & Ackermann, H. Time course of early audiovisual interactions during speech and non-speech central-auditory processing: An MEG study. Journal of Cognitive Neuroscience, 21, 259-274, 2009]. Using functional magnetic resonance imaging, the present follow-up study aims to further elucidate the topographic distribution of visual-phonological operations and audiovisual (AV) interactions during speech perception. Ambiguous acoustic syllables--disambiguated to /pa/ or /ta/ by the visual channel (speaking face)--served as test materials, concomitant with various control conditions (nonspeech AV signals, visual-only and acoustic-only speech, and nonspeech stimuli). (i) Visual speech yielded an AV-subadditive activation of primary auditory cortex and the anterior superior temporal gyrus (STG), whereas the posterior STG responded both to speech and nonspeech motion. (ii) The inferior frontal and the fusiform gyrus of the right hemisphere showed a strong phonetic/phonological impact (differential effects of visual /pa/ vs. /ta/) upon hemodynamic activation during presentation of speaking faces. Taken together with the previous MEG data, these results point at a dual-pathway model of visual speech information processing: On the one hand, access to the auditory system via the anterior supratemporal "what" path may give rise to direct activation of "auditory objects." On the other hand, visual speech information seems to be represented in a right-hemisphere visual working memory, providing a potential basis for later interactions with auditory information such as the McGurk effect.

  5. Neural correlates of audiovisual integration in music reading.

    PubMed

    Nichols, Emily S; Grahn, Jessica A

    2016-10-01

Integration of auditory and visual information is important to both language and music. In the linguistic domain, audiovisual integration alters event-related potentials (ERPs) at early stages of processing (the mismatch negativity, MMN) as well as later stages (P300; Andres et al., 2011). However, the role of experience in audiovisual integration is unclear, as reading experience is generally confounded with developmental stage. Here we tested whether audiovisual integration of music appears similar to reading, and how musical experience altered integration. We compared brain responses in musicians and non-musicians on an auditory pitch-interval oddball task that evoked the MMN and P300, while manipulating whether visual pitch-interval information was congruent or incongruent with the auditory information. We predicted that the MMN and P300 would be largest when both auditory and visual stimuli deviated, because audiovisual integration would increase the neural response when the deviants were congruent. The results indicated that scalp topography differed between musicians and non-musicians for both the MMN and P300 response to deviants. Interestingly, musicians' musical training modulated integration of congruent deviants at both early and late stages of processing. We propose that early in the processing stream, visual information may guide interpretation of auditory information, leading to a larger MMN when auditory and visual information mismatch. At later attentional stages, integration of the auditory and visual stimuli leads to a larger P300 amplitude. Thus, experience with musical visual notation shapes the way the brain integrates abstract sound-symbol pairings, suggesting that musicians can indeed inform us about the role of experience in audiovisual integration. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. Memory and learning with rapid audiovisual sequences

    PubMed Central

    Keller, Arielle S.; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193

  7. Memory and learning with rapid audiovisual sequences.

    PubMed

    Keller, Arielle S; Sekuler, Robert

    2015-01-01

    We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed.

  8. Diminished N1 auditory evoked potentials to oddball stimuli in misophonia patients.

    PubMed

    Schröder, Arjan; van Diepen, Rosanne; Mazaheri, Ali; Petropoulos-Petalas, Diamantis; Soto de Amesti, Vicente; Vulink, Nienke; Denys, Damiaan

    2014-01-01

Misophonia (hatred of sound) is a newly defined psychiatric condition in which ordinary human sounds, such as breathing and eating, trigger impulsive aggression. In the current study, we investigated whether a dysfunction in the brain's early auditory processing system could be present in misophonia. We screened 20 patients with misophonia using the diagnostic criteria for misophonia, along with 14 matched healthy controls without misophonia, and investigated potential deficits in auditory processing of misophonia patients using auditory event-related potentials (ERPs) during an oddball task. Subjects watched a neutral silent movie while being presented with a stream of beep sounds in which oddball tones of 250 and 4000 Hz were randomly embedded among repeated 1000 Hz standard tones. We examined the P1, N1, and P2 components time-locked to the onset of the tones. For misophonia patients, the N1 peak evoked by the oddball tones had a smaller mean peak amplitude than for the control group. However, no significant differences were found in the P1 and P2 components evoked by the oddball tones. There were no significant differences between the misophonia patients and their controls in any of the ERP components to the standard tones. The diminished N1 component to oddball tones in misophonia patients suggests an underlying neurobiological deficit in these patients. This reduction might reflect a basic impairment in auditory processing in misophonia.
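The oddball stream used in this paradigm (rare 250 Hz and 4000 Hz deviants embedded among repeated 1000 Hz standards) can be sketched as below. The tone count and oddball probability are illustrative assumptions; the abstract does not report them.

```python
import random

def oddball_stream(n_tones=400, p_oddball=0.2, seed=0):
    """Sketch of an oddball tone sequence as described above: repeated
    1000 Hz standards with 250 Hz and 4000 Hz oddballs randomly embedded.
    n_tones and p_oddball are hypothetical values for illustration."""
    rng = random.Random(seed)
    stream = []
    for _ in range(n_tones):
        if rng.random() < p_oddball:
            stream.append(rng.choice([250, 4000]))  # rare oddball tone
        else:
            stream.append(1000)  # frequent standard tone
    return stream
```

ERP components such as the N1 would then be averaged separately over epochs time-locked to standard versus oddball entries of this stream.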

  9. The Representation of Color across the Human Visual Cortex: Distinguishing Chromatic Signals Contributing to Object Form Versus Surface Color.

    PubMed

    Seymour, K J; Williams, M A; Rich, A N

    2016-05-01

    Many theories of visual object perception assume the visual system initially extracts borders between objects and their background and then "fills in" color to the resulting object surfaces. We investigated the transformation of chromatic signals across the human ventral visual stream, with particular interest in distinguishing representations of object surface color from representations of chromatic signals reflecting the retinal input. We used fMRI to measure brain activity while participants viewed figure-ground stimuli that differed either in the position or in the color contrast polarity of the foreground object (the figure). Multivariate pattern analysis revealed that classifiers were able to decode information about which color was presented at a particular retinal location from early visual areas, whereas regions further along the ventral stream exhibited biases for representing color as part of an object's surface, irrespective of its position on the retina. Additional analyses showed that although activity in V2 contained strong chromatic contrast information to support the early parsing of objects within a visual scene, activity in this area also signaled information about object surface color. These findings are consistent with the view that mechanisms underlying scene segmentation and the binding of color to object surfaces converge in V2. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
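The logic of the multivariate pattern analysis mentioned above can be illustrated with a minimal decoding sketch: train a classifier to tell two stimulus conditions apart from multi-voxel activity patterns, and treat above-chance test accuracy as evidence that the region carries condition information. The nearest-class-mean classifier and simulated voxel patterns below are stand-ins for illustration, not the authors' pipeline.

```python
import numpy as np

def nearest_mean_accuracy(train_X, train_y, test_X, test_y):
    """Classify each test pattern by its nearest class-mean pattern;
    a minimal stand-in for the classifiers used in MVPA."""
    means = {c: train_X[train_y == c].mean(axis=0) for c in np.unique(train_y)}
    preds = [min(means, key=lambda c: np.linalg.norm(x - means[c])) for x in test_X]
    return float(np.mean(np.array(preds) == test_y))

# Simulated voxel patterns for two stimulus conditions (purely illustrative).
rng = np.random.default_rng(0)
n_per, n_vox = 30, 50
X0 = rng.normal(0.5, 1.0, (n_per, n_vox))   # condition A patterns
X1 = rng.normal(-0.5, 1.0, (n_per, n_vox))  # condition B patterns
X = np.vstack([X0, X1])
y = np.array([0] * n_per + [1] * n_per)

# Split trials into train/test halves and measure decoding accuracy.
train = np.r_[0:15, 30:45]
test = np.r_[15:30, 45:60]
acc = nearest_mean_accuracy(X[train], y[train], X[test], y[test])
```

In the study, the interesting comparison is how such accuracy changes across visual areas and across stimulus manipulations (retinal position versus surface color), not the absolute accuracy of any one classifier.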

  10. A single dual-stream framework for syntactic computations in music and language.

    PubMed

    Musso, Mariacristina; Weiller, Cornelius; Horn, Andreas; Glauche, Volkmer; Umarova, Roza; Hennig, Jürgen; Schneider, Albrecht; Rijntjes, Michel

    2015-08-15

    This study is the first to compare in the same subjects the specific spatial distribution and the functional and anatomical connectivity of the neuronal resources that activate and integrate syntactic representations during music and language processing. Combining functional magnetic resonance imaging with functional connectivity and diffusion tensor imaging-based probabilistic tractography, we examined the brain network involved in the recognition and integration of words and chords that were not hierarchically related to the preceding syntax; that is, those deviating from the universal principles of grammar and tonal relatedness. This kind of syntactic processing in both domains was found to rely on a shared network in the left hemisphere centered on the inferior part of the inferior frontal gyrus (IFG), including pars opercularis and pars triangularis, and on dorsal and ventral long association tracts connecting this brain area with temporo-parietal regions. Language processing utilized some adjacent left hemispheric IFG and middle temporal regions more than music processing, and music processing also involved right hemisphere regions not activated in language processing. Our data indicate that a dual-stream system with dorsal and ventral long association tracts centered on a functionally and structurally highly differentiated left IFG is pivotal for domain-general syntactic competence over a broad range of elements including words and chords. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Neural Basis of Action Understanding: Evidence from Sign Language Aphasia.

    PubMed

    Rogalsky, Corianne; Raphel, Kristin; Tomkovicz, Vivian; O'Grady, Lucinda; Damasio, Hanna; Bellugi, Ursula; Hickok, Gregory

    2013-01-01

The neural basis of action understanding is a hotly debated issue. The mirror neuron account holds that motor simulation in fronto-parietal circuits is critical to action understanding, including speech comprehension, while others emphasize the ventral stream in the temporal lobe. Evidence from speech strongly supports the ventral stream account, whereas evidence from manual gesture comprehension (e.g., in limb apraxia) has yielded contradictory findings. Here we present a lesion analysis of sign language comprehension. Sign language is an excellent model for studying mirror system function in that it bridges the gap between the visual-manual system, in which mirror neurons are best characterized, and language systems, which have been a theoretical target of mirror neuron research. Twenty-one lifelong deaf signers with focal cortical lesions performed two tasks: one involving the comprehension of individual signs and the other involving comprehension of signed sentences (commands). Participants' lesions, as indicated on MRI or CT scans, were mapped onto a template brain to explore the relationship between lesion location and sign comprehension measures. Single sign comprehension was not significantly affected by left hemisphere damage. Sentence sign comprehension impairments were associated with left temporal-parietal damage. Damage to mirror-system-related regions in the left frontal lobe was not associated with deficits on either comprehension task. We conclude that the mirror system is not critically involved in action understanding.

  12. Dissociation between melodic and rhythmic processing during piano performance from musical scores.

    PubMed

    Bengtsson, Sara L; Ullén, Fredrik

    2006-03-01

When performing or perceiving music, we experience the melodic (spatial) and rhythmic aspects as a unified whole. Moreover, motor program theory stipulates that the relative timing and the serial order of a movement are invariant features of a motor program. Still, clinical and psychophysical observations suggest independent processing of these two aspects, in both production and perception. Here, we used functional magnetic resonance imaging to dissociate the brain areas processing the melodic and the rhythmic aspects during piano playing from musical scores. This behavior requires that the pianist decode two types of information from the score in order to produce the desired piece of music. The spatial location of a note head determines which piano key to strike, and the various features of the note, such as the stem and flags, determine the timing of each key stroke. We found that the medial occipital lobe, the superior temporal lobe, the rostral cingulate cortex, the putamen and the cerebellum process the melodic information, whereas the lateral occipital and the inferior temporal cortex, the left supramarginal gyrus, the left inferior and ventral frontal gyri, the caudate nucleus, and the cerebellum process the rhythmic information. Thus, we suggest a dissociable involvement of the dorsal visual stream in spatial pitch processing and of the ventral visual stream in temporal movement preparation. We propose that this dissociable organization may be important for fast learning and flexibility in motor control.

  13. The Temporal Pole Top-Down Modulates the Ventral Visual Stream During Social Cognition.

    PubMed

    Pehrs, Corinna; Zaki, Jamil; Schlochtermeier, Lorna H; Jacobs, Arthur M; Kuchinke, Lars; Koelsch, Stefan

    2017-01-01

    The temporal pole (TP) has been associated with diverse functions of social cognition and emotion processing. Although the underlying mechanism remains elusive, one possibility is that TP acts as domain-general hub integrating socioemotional information. To test this, 26 participants were presented with 60 empathy-evoking film clips during fMRI scanning. The film clips were preceded by a linguistic sad or neutral context and half of the clips were accompanied by sad music. In line with its hypothesized role, TP was involved in the processing of sad context and furthermore tracked participants' empathic concern. To examine the neuromodulatory impact of TP, we applied nonlinear dynamic causal modeling to a multisensory integration network from previous work consisting of superior temporal gyrus (STG), fusiform gyrus (FG), and amygdala, which was extended by an additional node in the TP. Bayesian model comparison revealed a gating of STG and TP on fusiform-amygdalar coupling and an increase of TP to FG connectivity during the integration of contextual information. Moreover, these backward projections were strengthened by emotional music. The findings indicate that during social cognition, TP integrates information from different modalities and top-down modulates lower-level perceptual areas in the ventral visual stream as a function of integration demands. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  14. Haloperidol impairs auditory filial imprinting and modulates monoaminergic neurotransmission in an imprinting-relevant forebrain area of the domestic chick.

    PubMed

    Gruss, M; Bock, J; Braun, K

    2003-11-01

In vivo microdialysis and behavioural studies in the domestic chick have shown that glutamatergic as well as monoaminergic neurotransmission in the medio-rostral neostriatum/hyperstriatum ventrale (MNH) is altered after auditory filial imprinting. In the present study, using pharmaco-behavioural and in vivo microdialysis approaches, the role of dopaminergic neurotransmission in this juvenile learning event was further evaluated. The results revealed that: (i) systemic application of the potent dopamine receptor antagonist haloperidol (7.5 mg/kg) strongly impairs auditory filial imprinting; (ii) systemic haloperidol induces a tetrodotoxin-sensitive increase of extracellular levels of the dopamine metabolite homovanillic acid in the MNH, whereas the levels of glutamate, taurine and the serotonin metabolite 5-hydroxyindole-3-acetic acid remain unchanged; (iii) haloperidol (0.01, 0.1, 1 mM) infused locally into the MNH increases glutamate, taurine and 5-hydroxyindole-3-acetic acid levels in a dose-dependent manner, whereas homovanillic acid levels remain unchanged; (iv) systemic haloperidol infusion reinforces the N-methyl-D-aspartate receptor-mediated inhibitory modulation of dopaminergic neurotransmission within the MNH. These results indicate that the modulation of dopaminergic function and its interaction with other neurotransmitter systems in a higher associative forebrain region of the juvenile avian brain displays neurochemical characteristics similar to those of the adult mammalian prefrontal cortex. Furthermore, we were able to show that pharmacological manipulation of monoaminergic regulatory mechanisms interferes with learning and memory formation, events which might occur in a similar fashion in young or adult mammals.

  15. Cross-Modal Recruitment of Auditory and Orofacial Areas During Sign Language in a Deaf Subject.

    PubMed

    Martino, Juan; Velasquez, Carlos; Vázquez-Bourgon, Javier; de Lucas, Enrique Marco; Gomez, Elsa

    2017-09-01

    Modern sign languages used by deaf people are fully expressive, natural human languages that are perceived visually and produced manually. The literature contains little data concerning human brain organization in conditions of deficient sensory information such as deafness. A deaf-mute patient underwent surgery of a left temporoinsular low-grade glioma. The patient underwent awake surgery with intraoperative electrical stimulation mapping, allowing direct study of the cortical and subcortical organization of sign language. We found a similar distribution of language sites to what has been reported in mapping studies of patients with oral language, including 1) speech perception areas inducing anomias and alexias close to the auditory cortex (at the posterior portion of the superior temporal gyrus and supramarginal gyrus); 2) speech production areas inducing speech arrest (anarthria) at the ventral premotor cortex, close to the lip motor area and away from the hand motor area; and 3) subcortical stimulation-induced semantic paraphasias at the inferior fronto-occipital fasciculus at the temporal isthmus. The intraoperative setup for sign language mapping with intraoperative electrical stimulation in deaf-mute patients is similar to the setup described in patients with oral language. To elucidate the type of language errors, a sign language interpreter in close interaction with the neuropsychologist is necessary. Sign language is perceived visually and produced manually; however, this case revealed a cross-modal recruitment of auditory and orofacial motor areas. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Structural and functional abnormalities of the motor system in developmental stuttering

    PubMed Central

    Watkins, Kate E.; Smith, Stephen M.; Davis, Steve; Howell, Peter

    2007-01-01

    Summary: Though stuttering is manifest in its motor characteristics, the cause of stuttering may not relate purely to impairments in the motor system as stuttering frequency is increased by linguistic factors, such as syntactic complexity and length of utterance, and decreased by changes in perception, such as masking or altering auditory feedback. Using functional and diffusion imaging, we examined brain structure and function in the motor and language areas in a group of young people who stutter. During speech production, irrespective of fluency or auditory feedback, the people who stuttered showed overactivity relative to controls in the anterior insula, cerebellum and midbrain bilaterally and underactivity in the ventral premotor, Rolandic opercular and sensorimotor cortex bilaterally and Heschl's gyrus on the left. These results are consistent with a recent meta-analysis of functional imaging studies in developmental stuttering. Two additional findings emerged from our study. First, we found overactivity in the midbrain, which was at the level of the substantia nigra and extended to the pedunculopontine nucleus, red nucleus and subthalamic nucleus. This overactivity is consistent with suggestions in previous studies of abnormal function of the basal ganglia or excessive dopamine in people who stutter. Second, we found underactivity of the cortical motor and premotor areas associated with articulation and speech production. Analysis of the diffusion data revealed that the integrity of the white matter underlying the underactive areas in ventral premotor cortex was reduced in people who stutter. The white matter tracts in this area via connections with posterior superior temporal and inferior parietal cortex provide a substrate for the integration of articulatory planning and sensory feedback, and via connections with primary motor cortex, a substrate for execution of articulatory movements.
Our data support the conclusion that stuttering is a disorder related primarily to disruption in the cortical and subcortical neural systems supporting the selection, initiation and execution of motor sequences necessary for fluent speech production. PMID:17928317

  17. Structural and functional abnormalities of the motor system in developmental stuttering.

    PubMed

    Watkins, Kate E; Smith, Stephen M; Davis, Steve; Howell, Peter

    2008-01-01

    Though stuttering is manifest in its motor characteristics, the cause of stuttering may not relate purely to impairments in the motor system as stuttering frequency is increased by linguistic factors, such as syntactic complexity and length of utterance, and decreased by changes in perception, such as masking or altering auditory feedback. Using functional and diffusion imaging, we examined brain structure and function in the motor and language areas in a group of young people who stutter. During speech production, irrespective of fluency or auditory feedback, the people who stuttered showed overactivity relative to controls in the anterior insula, cerebellum and midbrain bilaterally and underactivity in the ventral premotor, Rolandic opercular and sensorimotor cortex bilaterally and Heschl's gyrus on the left. These results are consistent with a recent meta-analysis of functional imaging studies in developmental stuttering. Two additional findings emerged from our study. First, we found overactivity in the midbrain, which was at the level of the substantia nigra and extended to the pedunculopontine nucleus, red nucleus and subthalamic nucleus. This overactivity is consistent with suggestions in previous studies of abnormal function of the basal ganglia or excessive dopamine in people who stutter. Second, we found underactivity of the cortical motor and premotor areas associated with articulation and speech production. Analysis of the diffusion data revealed that the integrity of the white matter underlying the underactive areas in ventral premotor cortex was reduced in people who stutter. The white matter tracts in this area via connections with posterior superior temporal and inferior parietal cortex provide a substrate for the integration of articulatory planning and sensory feedback, and via connections with primary motor cortex, a substrate for execution of articulatory movements. 
Our data support the conclusion that stuttering is a disorder related primarily to disruption in the cortical and subcortical neural systems supporting the selection, initiation and execution of motor sequences necessary for fluent speech production.

  18. Some components of the "cocktail-party effect," as revealed when it fails

    NASA Astrophysics Data System (ADS)

    Divenyi, Pierre L.; Gygi, Brian

    2003-04-01

The precise way listeners cope with cocktail-party situations, i.e., understand speech in the midst of other, simultaneously ongoing conversations, has by and large remained a puzzle, despite research committed to studying the problem over the past half century. In contrast, it is widely acknowledged that the cocktail-party effect (CPE) deteriorates in aging. Our investigations during the last decade have assessed the deterioration of the CPE in elderly listeners and attempted to uncover specific auditory tasks on which the performance of the same listeners also exhibits a deficit. Correlated performance on the CPE and such auditory tasks arguably signifies that the tasks in question are necessary for perceptual segregation of the target speech and the background babble. We will present results on three tasks correlated with CPE performance. All three tasks require temporal-processing-based perceptual segregation of specific non-speech stimuli (amplitude- and/or frequency-modulated sinusoidal complexes): discrimination of formant transition patterns, segregation of streams with different syllabic rhythms, and selective attention to AM or FM features in the designated stream. [Work supported by a grant from the National Institute on Aging and by the V.A. Medical Research.]

  19. Discrepant visual speech facilitates covert selective listening in "cocktail party" conditions.

    PubMed

    Williams, Jason A

    2012-06-01

    The presence of congruent visual speech information facilitates the identification of auditory speech, while the addition of incongruent visual speech information often impairs accuracy. This latter arrangement occurs naturally when one is being directly addressed in conversation but listens to a different speaker. Under these conditions, performance may diminish since: (a) one is bereft of the facilitative effects of the corresponding lip motion and (b) one becomes subject to visual distortion by incongruent visual speech; by contrast, speech intelligibility may be improved due to (c) bimodal localization of the central unattended stimulus. Participants were exposed to centrally presented visual and auditory speech while attending to a peripheral speech stream. In some trials, the lip movements of the central visual stimulus matched the unattended speech stream; in others, the lip movements matched the attended peripheral speech. Accuracy for the peripheral stimulus was nearly one standard deviation greater with incongruent visual information, compared to the congruent condition which provided bimodal pattern recognition cues. Likely, the bimodal localization of the central stimulus further differentiated the stimuli and thus facilitated intelligibility. Results are discussed with regard to similar findings in an investigation of the ventriloquist effect, and the relative strength of localization and speech cues in covert listening.

  20. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    PubMed

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common to all three tasks within each modality, or to interaction between the processing of task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of the processing of irrelevant speech, which would presumably distract from the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Electrostimulation mapping of comprehension of auditory and visual words.

    PubMed

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine graded, sub-centimetre, cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. The simplest chronoscope II: reaction time measured by meterstick versus machine.

    PubMed

    Montare, Alberto

    2010-12-01

Visual simple reaction time (SRT) scores measured in 31 college students of both sexes by use of the simplest chronoscope methodology (meterstick SRT) were compared to scores obtained by use of an electromechanical multi-choice reaction timer (machine SRT). Four hypotheses were tested. Results indicated that the previous mean value of meterstick SRT was replicated; meterstick SRT was significantly faster than long-standing population estimates of mean SRT; and machine SRT was significantly slower than the same long-standing mean SRT estimates for the population. Also, the mean meterstick SRT of 181 msec. was significantly faster than the mean machine SRT of 294 msec. It was theorized that differential visual information processing occurred such that the dorsal visual stream subserved meterstick SRT, whereas the ventral visual stream subserved machine SRT.
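The meterstick chronoscope rests on constant-acceleration free fall: the distance d the released stick drops before being caught gives the reaction time via d = (1/2) g t², i.e., t = sqrt(2d/g). A minimal sketch of the conversion follows; the catch distances are not given in the abstract but are back-computed here from the reported mean SRTs.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def drop_distance_to_rt(distance_m):
    """Convert the distance a released meterstick falls before being caught
    into a simple reaction time, via d = (1/2) g t^2  =>  t = sqrt(2 d / g)."""
    return math.sqrt(2.0 * distance_m / G)

def rt_to_drop_distance(rt_s):
    """Inverse relation: how far the stick falls during a given reaction time."""
    return 0.5 * G * rt_s ** 2
```

By these relations, the reported mean meterstick SRT of 181 msec corresponds to a catch distance of about 16 cm, while the mean machine SRT of 294 msec would correspond to a fall of roughly 42 cm.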

  3. Vision for perception and vision for action: normal and unusual development.

    PubMed

    Dilks, Daniel D; Hoffman, James E; Landau, Barbara

    2008-07-01

    Evidence suggests that visual processing is divided into the dorsal ('how') and ventral ('what') streams. We examined the normal development of these streams and their breakdown under neurological deficit by comparing performance of normally developing children and Williams syndrome individuals on two tasks: a visually guided action ('how') task, in which participants posted a card into an oriented slot, and a perception ('what') task, in which they matched a card to the slot's orientation. Results showed that all groups performed worse on the action task than the perception task, but the disparity was more pronounced in WS individuals and in normal 3-4-year-olds than in older children. These findings suggest that the 'how' system may be relatively slow to develop and more vulnerable to breakdown than the 'what' system.

  4. Inner Ear Morphology in the Atlantic Molly Poecilia mexicana—First Detailed Microanatomical Study of the Inner Ear of a Cyprinodontiform Species

    PubMed Central

    Schulz-Mirbach, Tanja; Heß, Martin; Plath, Martin

    2011-01-01

Background: Fishes show an amazing diversity in hearing abilities, inner ear structures, and otolith morphology. Inner ear morphology, however, has not yet been investigated in detail in any member of the diverse order Cyprinodontiformes. We, therefore, studied the inner ear of the cyprinodontiform freshwater fish Poecilia mexicana by analyzing the position of otoliths in situ, investigating the 3D structure of sensory epithelia, and examining the orientation patterns of ciliary bundles of the sensory hair cells, while combining μ-CT analyses, scanning electron microscopy, and immunocytochemical methods. P. mexicana occurs in different ecotypes, enabling us to study the intra-specific variability (on a qualitative basis) of fish from regular surface streams, and the Cueva del Azufre, a sulfidic cave in southern Mexico. Results: The inner ear of Poecilia mexicana displays a combination of several remarkable features. The utricle is connected rostrally instead of dorso-rostrally to the saccule, and the macula sacculi, therefore, is very close to the utricle. Moreover, the macula sacculi possesses dorsal and ventral bulges. The two studied ecotypes of P. mexicana showed variation mainly in the shape and curvature of the macula lagenae, in the curvature of the macula sacculi, and in the thickness of the otolithic membrane. Conclusions: Our study for the first time provides detailed insights into the auditory periphery of a cyprinodontiform inner ear and thus serves as a basis (especially with regard to the application of 3D techniques) for further research on structure-function relationships of inner ears within the species-rich order Cyprinodontiformes. We suggest that other poeciliid taxa, or even other non-poeciliid cyprinodontiforms, may display similar inner ear morphologies as described here. PMID:22110746

  5. Inner ear morphology in the Atlantic molly Poecilia mexicana--first detailed microanatomical study of the inner ear of a cyprinodontiform species.

    PubMed

    Schulz-Mirbach, Tanja; Hess, Martin; Plath, Martin

    2011-01-01

Fishes show an amazing diversity in hearing abilities, inner ear structures, and otolith morphology. Inner ear morphology, however, has not yet been investigated in detail in any member of the diverse order Cyprinodontiformes. We, therefore, studied the inner ear of the cyprinodontiform freshwater fish Poecilia mexicana by analyzing the position of otoliths in situ, investigating the 3D structure of sensory epithelia, and examining the orientation patterns of ciliary bundles of the sensory hair cells, while combining μ-CT analyses, scanning electron microscopy, and immunocytochemical methods. P. mexicana occurs in different ecotypes, enabling us to study the intra-specific variability (on a qualitative basis) of fish from regular surface streams, and the Cueva del Azufre, a sulfidic cave in southern Mexico. The inner ear of Poecilia mexicana displays a combination of several remarkable features. The utricle is connected rostrally instead of dorso-rostrally to the saccule, and the macula sacculi, therefore, is very close to the utricle. Moreover, the macula sacculi possesses dorsal and ventral bulges. The two studied ecotypes of P. mexicana showed variation mainly in the shape and curvature of the macula lagenae, in the curvature of the macula sacculi, and in the thickness of the otolithic membrane. Our study for the first time provides detailed insights into the auditory periphery of a cyprinodontiform inner ear and thus serves as a basis--especially with regard to the application of 3D techniques--for further research on structure-function relationships of inner ears within the species-rich order Cyprinodontiformes. We suggest that other poeciliid taxa, or even other non-poeciliid cyprinodontiforms, may display similar inner ear morphologies as described here.

  6. Audio-visual integration through the parallel visual pathways.

    PubMed

    Kaposvári, Péter; Csete, Gergő; Bognár, Anna; Csibri, Péter; Tóth, Eszter; Szabó, Nikoletta; Vécsei, László; Sáry, Gyula; Tamás Kincses, Zsigmond

    2015-10-22

    Audio-visual integration has been shown to be present in a wide range of different conditions, some of which are processed through the dorsal, and others through the ventral visual pathway. Whereas neuroimaging studies have revealed integration-related activity in the brain, there has been no imaging study of the possible role of segregated visual streams in audio-visual integration. We set out to determine how the different visual pathways participate in this communication. We investigated how audio-visual integration can be supported through the dorsal and ventral visual pathways during the double flash illusion. Low-contrast and chromatic isoluminant stimuli were used to preferentially drive the dorsal and ventral pathways, respectively. In order to identify the anatomical substrates of the audio-visual interaction in the two conditions, the psychophysical results were correlated with white matter integrity as measured by diffusion tensor imaging. The psychophysical data revealed a robust double flash illusion in both conditions. A correlation between the psychophysical results and local fractional anisotropy was found in the occipito-parietal white matter in the low-contrast condition, while a similar correlation was found in the infero-temporal white matter in the chromatic isoluminant condition. Our results indicate that both of the parallel visual pathways may play a role in the audio-visual interaction. Copyright © 2015. Published by Elsevier B.V.

  7. Task-dependent enhancement of facial expression and identity representations in human cortex.

    PubMed

    Dobs, Katharina; Schultz, Johannes; Bülthoff, Isabelle; Gardner, Justin L

    2018-05-15

    What cortical mechanisms allow humans to easily discern the expression or identity of a face? Subjects detected changes in expression or identity of a stream of dynamic faces while we measured BOLD responses from topographically and functionally defined areas throughout the visual hierarchy. Responses in dorsal areas increased during the expression task, whereas responses in ventral areas increased during the identity task, consistent with previous studies. Similar to ventral areas, early visual areas showed increased activity during the identity task. If visual responses are weighted by perceptual mechanisms according to their magnitude, these increased responses would lead to improved attentional selection of the task-appropriate facial aspect. Alternatively, increased responses could be a signature of a sensitivity enhancement mechanism that improves representations of the attended facial aspect. Consistent with the latter sensitivity enhancement mechanism, attending to expression led to enhanced decoding of exemplars of expression both in early visual and dorsal areas relative to attending identity. Similarly, decoding identity exemplars when attending to identity was improved in dorsal and ventral areas. We conclude that attending to expression or identity of dynamic faces is associated with increased selectivity in representations consistent with sensitivity enhancement. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  8. Changes in brain morphology in albinism reflect reduced visual acuity.

    PubMed

    Bridge, Holly; von dem Hagen, Elisabeth A H; Davies, George; Chambers, Claire; Gouws, Andre; Hoffmann, Michael; Morland, Antony B

    2014-07-01

    Albinism, in humans and many animal species, has a major impact on the visual system, leading to reduced acuity, lack of binocular function and nystagmus. In addition to the lack of a foveal pit, there is a disruption to the routing of the nerve fibers crossing at the optic chiasm, resulting in excessive crossing of fibers to the contralateral hemisphere. However, very little is known about the effect of this misrouting on the structure of the post-chiasmatic visual pathway, and the occipital lobes in particular. Whole-brain analyses of cortical thickness in a large cohort of subjects with albinism showed an increase in cortical thickness, relative to control subjects, particularly in posterior V1, corresponding to the foveal representation. Furthermore, mean cortical thickness across entire V1 was significantly greater in these subjects compared to controls and negatively correlated with visual acuity in albinism. Additionally, the group with albinism showed decreased gyrification in the left ventral occipital lobe. While the increase in cortical thickness in V1, also found in congenitally blind subjects, has been interpreted to reflect a lack of pruning, the decreased gyrification in the ventral extrastriate cortex may reflect the reduced input to the foveal regions of the ventral visual stream. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Neuronal effects of nicotine during auditory selective attention in schizophrenia.

    PubMed

    Smucny, Jason; Olincy, Ann; Rojas, Donald C; Tregellas, Jason R

    2016-01-01

    Although nicotine has been shown to improve attention deficits in schizophrenia, the neurobiological mechanisms underlying this effect are poorly understood. We hypothesized that nicotine would modulate attention-associated neuronal response in schizophrenia patients in the ventral parietal cortex (VPC), hippocampus, and anterior cingulate based on previous findings in control subjects. To test this hypothesis, the present study examined response in these regions in a cohort of nonsmoking patients and healthy control subjects using an auditory selective attention task with environmental noise distractors during placebo and nicotine administration. In agreement with our hypothesis, significant diagnosis (Control vs. Patient) × drug (Placebo vs. Nicotine) interactions were observed in the VPC and hippocampus. The interaction was driven by task-associated hyperactivity in patients (relative to healthy controls) during placebo administration, and decreased hyperactivity in patients after nicotine administration (relative to placebo). No significant interaction was observed in the anterior cingulate. Task-associated hyperactivity of the VPC predicted poor task performance in patients during placebo. Poor task performance also predicted symptoms in patients as measured by the Brief Psychiatric Rating Scale. These results are the first to suggest that nicotine may modulate brain activity in a selective attention-dependent manner in schizophrenia. © 2015 Wiley Periodicals, Inc.

  10. The Thalamocortical Projection Systems in Primate: An Anatomical Support for Multisensory and Sensorimotor Interplay

    PubMed Central

    Cappe, Céline; Morel, Anne; Barone, Pascal

    2009-01-01

    Multisensory and sensorimotor integrations are usually considered to occur in superior colliculus and cerebral cortex, but few studies proposed the thalamus as being involved in these integrative processes. We investigated whether the organization of the thalamocortical (TC) systems for different modalities partly overlap, representing an anatomical support for multisensory and sensorimotor interplay in thalamus. In 2 macaque monkeys, 6 neuroanatomical tracers were injected in the rostral and caudal auditory cortex, posterior parietal cortex (PE/PEa in area 5), and dorsal and ventral premotor cortical areas (PMd, PMv), demonstrating the existence of overlapping territories of thalamic projections to areas of different modalities (sensory and motor). TC projections, distinct from the ones arising from specific unimodal sensory nuclei, were observed from motor thalamus to PE/PEa or auditory cortex and from sensory thalamus to PMd/PMv. The central lateral nucleus and the mediodorsal nucleus project to all injected areas, but the most significant overlap across modalities was found in the medial pulvinar nucleus. The present results demonstrate the presence of thalamic territories integrating different sensory modalities with motor attributes. Based on the divergent/convergent pattern of TC and corticothalamic projections, 4 distinct mechanisms of multisensory and sensorimotor interplay are proposed. PMID:19150924

  11. Substitution urethroplasty using oral mucosa graft for male anterior urethral stricture disease: Current topics and reviews.

    PubMed

    Horiguchi, Akio

    2017-07-01

    Male anterior urethral stricture is scarring of the subepithelial tissue of the corpus spongiosum that constricts the urethral lumen, decreasing the urinary stream. Its surgical management is a challenging problem, and has changed dramatically in the past several decades. Open surgical repair using grafts or flaps, called substitution urethroplasty, has become the gold standard procedure for anterior urethral strictures that are not amenable to excision and primary anastomosis. Oral mucosa harvested from the inner cheek (buccal mucosa) is an ideal material, and is most commonly used for substitution urethroplasty, and lingual mucosa harvested from the underside of the tongue has recently emerged as an alternative material with equivalent outcome. Onlay augmentation of oral mucosa graft on the ventral side (ventral onlay) or dorsal side (dorsal onlay, Barbagli procedure) has been widely used for bulbar urethral stricture with comparable success rates. In bulbar urethral strictures containing obliterative or nearly obliterative segments, either a two-sided dorsal plus ventral onlay (Palminteri technique) or a combination of excision and primary anastomosis and onlay augmentation (augmented anastomotic urethroplasty) are the procedures of choice. Most penile urethral strictures can be repaired in a one-stage procedure either by dorsal inlay with ventral sagittal urethrotomy (Asopa technique) or dorsolateral onlay with one-sided urethral dissection (Kulkarni technique); however, staged urethroplasty remains the procedure of choice for complex strictures, including strictures associated with genital lichen sclerosus or failed hypospadias. This article presents an overview of substitution urethroplasty using oral mucosa graft, and reviews current topics. © 2017 The Japanese Urological Association.

  12. [Tinnitus and psychiatric comorbidities].

    PubMed

    Goebel, G

    2015-04-01

    Tinnitus is an auditory phantom phenomenon characterized by the sensation of sounds without objectively identifiable sound sources. To date, its causes are not well understood. The perceived severity of tinnitus correlates more closely to psychological and general health factors than to audiometric parameters. Together with limbic structures in the ventral striatum, the prefrontal cortex forms an internal "noise cancelling system", which normally helps to block out unpleasant sounds, including the tinnitus signal. If this pathway is compromised, chronic tinnitus results. Patients with chronic tinnitus show increased functional connectivity in corticolimbic pathways. Psychiatric comorbidities are common in patients who seek help for tinnitus or hyperacusis. Clinicians need valid screening tools in order to identify patients with psychiatric disorders and to tailor treatment in a multidisciplinary setting.

  13. Effective Connectivity Hierarchically Links Temporoparietal and Frontal Areas of the Auditory Dorsal Stream with the Motor Cortex Lip Area during Speech Perception

    ERIC Educational Resources Information Center

    Murakami, Takenobu; Restle, Julia; Ziemann, Ulf

    2012-01-01

    A left-hemispheric cortico-cortical network involving areas of the temporoparietal junction (Tpj) and the posterior inferior frontal gyrus (pIFG) is thought to support sensorimotor integration of speech perception into articulatory motor activation, but how this network links with the lip area of the primary motor cortex (M1) during speech…

  14. Exploring the extent and function of higher-order auditory cortex in rhesus monkeys.

    PubMed

    Poremba, Amy; Mishkin, Mortimer

    2007-07-01

    Just as cortical visual processing continues far beyond the boundaries of early visual areas, so too does cortical auditory processing continue far beyond the limits of early auditory areas. In passively listening rhesus monkeys examined with metabolic mapping techniques, cortical areas reactive to auditory stimulation were found to include the entire length of the superior temporal gyrus (STG) as well as several other regions within the temporal, parietal, and frontal lobes. Comparison of these widespread activations with those from an analogous study in vision supports the notion that audition, like vision, is served by several cortical processing streams, each specialized for analyzing a different aspect of sensory input, such as stimulus quality, location, or motion. Exploration with different classes of acoustic stimuli demonstrated that most portions of STG show greater activation on the right than on the left regardless of stimulus class. However, there is a striking shift to left-hemisphere "dominance" during passive listening to species-specific vocalizations, though this reverse asymmetry is observed only in the region of temporal pole. The mechanism for this left temporal pole "dominance" appears to be suppression of the right temporal pole by the left hemisphere, as demonstrated by a comparison of the results in normal monkeys with those in split-brain monkeys.

  15. Exploring the extent and function of higher-order auditory cortex in rhesus monkeys

    PubMed Central

    Mishkin, Mortimer

    2009-01-01

    Just as cortical visual processing continues far beyond the boundaries of early visual areas, so too does cortical auditory processing continue far beyond the limits of early auditory areas. In passively listening rhesus monkeys examined with metabolic mapping techniques, cortical areas reactive to auditory stimulation were found to include the entire length of the superior temporal gyrus (STG) as well as several other regions within the temporal, parietal, and frontal lobes. Comparison of these widespread activations with those from an analogous study in vision supports the notion that audition, like vision, is served by several cortical processing streams, each specialized for analyzing a different aspect of sensory input, such as stimulus quality, location, or motion. Exploration with different classes of acoustic stimuli demonstrated that most portions of STG show greater activation on the right than on the left regardless of stimulus class. However, there is a striking shift to left hemisphere “dominance” during passive listening to species-specific vocalizations, though this reverse asymmetry is observed only in the region of temporal pole. The mechanism for this left temporal pole “dominance” appears to be suppression of the right temporal pole by the left hemisphere, as demonstrated by a comparison of the results in normal monkeys with those in split-brain monkeys. PMID:17321703

  16. Improving Dorsal Stream Function in Dyslexics by Training Figure/Ground Motion Discrimination Improves Attention, Reading Fluency, and Working Memory

    PubMed Central

    Lawton, Teri

    2016-01-01

    There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue, three interventions were compared in 58 dyslexics in second grade (7 years old on average): two targeting the temporal dynamics (timing) of either the auditory or visual pathways, and a third reading intervention (control group) targeting linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency (both speed and comprehension), phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used not only to detect dyslexia early but also to treat it successfully, so that reading problems do not prevent children from readily learning. PMID:27551263

  17. Processing of frequency-modulated sounds in the lateral auditory belt cortex of the rhesus monkey.

    PubMed

    Tian, Biao; Rauschecker, Josef P

    2004-11-01

    Single neurons were recorded from the lateral belt areas, anterolateral (AL), mediolateral (ML), and caudolateral (CL), of nonprimary auditory cortex in 4 adult rhesus monkeys under gas anesthesia, while the animals were stimulated with frequency-modulated (FM) sweeps. Responses to FM sweeps, measured as the firing rate of the neurons, were invariably greater than those to tone bursts. In our stimuli, frequency changed linearly from low to high frequencies (FM direction "up") or high to low frequencies ("down") at varying speeds (FM rates). Neurons were highly selective to the rate and direction of the FM sweep. Significant differences were found between the 3 lateral belt areas with regard to their FM rate preferences: whereas neurons in ML responded to the whole range of FM rates, AL neurons responded better to slower FM rates in the range of naturally occurring communication sounds. CL neurons generally responded best to fast FM rates at a speed of several hundred Hz/ms, which have the broadest frequency spectrum. These selectivities are consistent with a role of AL in the decoding of communication sounds and of CL in the localization of sounds, which works best with broader bandwidths. Together, the results support the hypothesis of parallel streams for the processing of different aspects of sounds, including auditory objects and auditory space.
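
The linear FM sweeps described in this abstract can be sketched in a few lines of code. The following is a minimal illustration, not the authors' stimulus code; the 44.1 kHz sample rate and the specific frequencies in the example are assumptions chosen for illustration. The duration of each sweep follows directly from its FM rate: duration = |f_end − f_start| / rate.

```python
import math

def fm_sweep(f_start_hz, f_end_hz, rate_hz_per_ms, sr=44100):
    """Generate a linear FM sweep from f_start_hz to f_end_hz.

    rate_hz_per_ms is the FM rate (Hz of frequency change per ms),
    as in the stimuli described above; the sweep direction ("up" or
    "down") is given by the sign of f_end_hz - f_start_hz.
    Returns (samples, duration_in_seconds).
    """
    duration_s = abs(f_end_hz - f_start_hz) / (rate_hz_per_ms * 1000.0)
    n = int(duration_s * sr)
    k = (f_end_hz - f_start_hz) / duration_s  # sweep slope in Hz/s
    samples = []
    for i in range(n):
        t = i / sr
        # instantaneous frequency f(t) = f_start + k*t;
        # the phase is 2*pi times its integral over [0, t]
        phase = 2.0 * math.pi * (f_start_hz * t + 0.5 * k * t * t)
        samples.append(math.sin(phase))
    return samples, duration_s

# An "up" sweep from 1 to 2 kHz at 10 Hz/ms lasts 100 ms (4410 samples):
samples, dur = fm_sweep(1000, 2000, 10)
```

Note how "fast" rates of several hundred Hz/ms, which CL neurons preferred, compress the same frequency excursion into a few milliseconds, which is what gives those sweeps their broad instantaneous bandwidth.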

  18. Neural Correlates of Sound Localization in Complex Acoustic Environments

    PubMed Central

    Zündorf, Ida C.; Lewald, Jörg; Karnath, Hans-Otto

    2013-01-01

    Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment with healthy subjects. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in posterior superior temporal gyrus bilaterally, anterior insula, supplementary motor area, and a frontoparietal network. Moreover, the results indicated critical roles of the left planum temporale in extracting the sound of interest among acoustical distracters and of the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for analysis of spectrotemporal sound features. Furthermore, the precuneus, a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects, also seems to be a crucial area for accurately determining locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality. PMID:23691185

  19. The shadow of a doubt? Evidence for perceptuo-motor linkage during auditory and audiovisual close-shadowing

    PubMed Central

    Scarbel, Lucie; Beautemps, Denis; Schwartz, Jean-Luc; Sato, Marc

    2014-01-01

    One classical argument in favor of a functional role of the motor system in speech perception comes from the close-shadowing task, in which a subject has to identify and repeat an auditory speech stimulus as quickly as possible. The fact that close-shadowing can occur very rapidly, and much faster than manual identification of the speech target, is taken to suggest that perceptually induced speech representations are already shaped in a motor-compatible format. Another argument is provided by audiovisual interactions, often interpreted as referring to a multisensory-motor framework. In this study, we attempted to combine these two paradigms by testing whether the visual modality could speed motor response in a close-shadowing task. To this aim, both oral and manual responses were evaluated during the perception of auditory and audiovisual speech stimuli, clear or embedded in white noise. Overall, oral responses were faster than manual ones, but it also appeared that they were less accurate in noise, which suggests that motor representations evoked by the speech input could be rough at a first processing stage. In the presence of acoustic noise, the audiovisual modality led to both faster and more accurate responses than the auditory modality. No interaction was, however, observed between modality and response. Altogether, these results are interpreted within a two-stage sensory-motor framework, in which the auditory and visual streams are integrated together and with internally generated motor representations before a final decision may be available. PMID:25009512

  20. Diminished N1 Auditory Evoked Potentials to Oddball Stimuli in Misophonia Patients

    PubMed Central

    Schröder, Arjan; van Diepen, Rosanne; Mazaheri, Ali; Petropoulos-Petalas, Diamantis; Soto de Amesti, Vicente; Vulink, Nienke; Denys, Damiaan

    2014-01-01

    Misophonia (hatred of sound) is a newly defined psychiatric condition in which ordinary human sounds, such as breathing and eating, trigger impulsive aggression. In the current study, we investigated whether a dysfunction in the brain’s early auditory processing system could be present in misophonia. We screened 20 patients meeting the diagnostic criteria for misophonia and 14 matched healthy controls without misophonia, and investigated potential deficits in auditory processing of misophonia patients using auditory event-related potentials (ERPs) during an oddball task. Subjects watched a neutral silent movie while being presented with beep sounds in which oddball tones of 250 and 4000 Hz were randomly embedded in a stream of repeated 1000 Hz standard tones. We examined the P1, N1, and P2 components locked to the onset of the tones. For misophonia patients, the N1 peak evoked by the oddball tones had a smaller mean peak amplitude than in the control group. However, no significant differences were found in P1 and P2 components evoked by the oddball tones. There were no significant differences between the misophonia patients and their controls in any of the ERP components to the standard tones. The diminished N1 component to oddball tones in misophonia patients suggests an underlying neurobiological deficit in misophonia patients. This reduction might reflect a basic impairment in auditory processing in misophonia patients. PMID:24782731
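
The oddball paradigm described here boils down to a simple stimulus schedule: frequent 1000 Hz standards with rare 250 or 4000 Hz deviants embedded at random positions. A minimal sketch of such a sequence generator follows; the oddball probability used in the example is an assumption for illustration, not a value taken from the study.

```python
import random

def oddball_sequence(n_tones, p_oddball=0.1, standard_hz=1000,
                     oddball_hz=(250, 4000), seed=None):
    """Build a tone-frequency sequence for an auditory oddball task.

    Each position is independently an oddball with probability
    p_oddball (an assumed parameter, chosen here for illustration);
    oddball positions draw one of the two deviant frequencies at
    random, all other positions carry the standard tone.
    """
    rng = random.Random(seed)
    seq = []
    for _ in range(n_tones):
        if rng.random() < p_oddball:
            seq.append(rng.choice(oddball_hz))
        else:
            seq.append(standard_hz)
    return seq

# A 200-tone run with ~15% deviants, seeded for reproducibility:
seq = oddball_sequence(200, p_oddball=0.15, seed=1)
```

ERPs such as the P1, N1, and P2 discussed above are then obtained by epoching the EEG around each tone onset and averaging separately over standard and oddball positions of the sequence.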
