Shepard, Kathryn N; Chong, Kelly K; Liu, Robert C
2016-01-01
Tonotopic map plasticity in the adult auditory cortex (AC) is a well-established and oft-cited measure of auditory associative learning in classical conditioning paradigms. However, its necessity as an enduring memory trace has been debated, especially given a recent finding that the areal expansion of core AC tuned to a newly relevant frequency range may arise only transiently to support auditory learning. This has been reinforced by an ethological paradigm showing that map expansion is not observed for ultrasonic vocalizations (USVs) or for ultrasound frequencies in postweaning dams for whom USVs emitted by pups acquire behavioral relevance. However, whether transient expansion occurs during maternal experience is not known, and could help to reveal the generality of cortical map expansion as a correlate for auditory learning. We thus mapped the auditory cortices of maternal mice at postnatal time points surrounding the peak in pup USV emission, but found no evidence of frequency map expansion for the behaviorally relevant high ultrasound range in AC. Instead, regions tuned to low frequencies outside of the ultrasound range show progressively greater suppression of activity in response to the playback of ultrasounds or pup USVs for maternally experienced animals assessed at their pups' postnatal day 9 (P9) to P10, or postweaning. This provides new evidence for a lateral-band suppression mechanism elicited by behaviorally meaningful USVs, likely enhancing their population-level signal-to-noise ratio. These results demonstrate that tonotopic map enlargement has limits as a construct for conceptualizing how experience leaves neural memory traces within sensory cortex in the context of ethological auditory learning. PMID:27957529
Effect of sound intensity on tonotopic fMRI maps in the unanesthetized monkey.
Tanji, Kazuyo; Leopold, David A; Ye, Frank Q; Zhu, Charles; Malloy, Megan; Saunders, Richard C; Mishkin, Mortimer
2010-01-01
The monkey's auditory cortex includes a core region on the supratemporal plane (STP) made up of the tonotopically organized areas A1, R, and RT, together with a surrounding belt and a lateral parabelt region. The functional studies that yielded the tonotopic maps and corroborated the anatomical division into core, belt, and parabelt typically used low-amplitude pure tones that were often restricted to threshold-level intensities. Here we used functional magnetic resonance imaging in awake rhesus monkeys to determine whether, and if so how, the tonotopic maps and the pattern of activation in core, belt, and parabelt are affected by systematic changes in sound intensity. Blood oxygenation level-dependent (BOLD) responses to groups of low- and high-frequency pure tones 3-4 octaves apart were measured at multiple sound intensity levels. The results revealed tonotopic maps in the auditory core that reversed at the putative areal boundaries between A1 and R and between R and RT. Although these reversals of the tonotopic representations were present at all intensity levels, the lateral spread of activation depended on sound amplitude, with increasing recruitment of the adjacent belt areas as the intensities increased. Tonotopic organization along the STP was also evident in frequency-specific deactivation (i.e. "negative BOLD"), an effect that was intensity-specific as well. Regions of positive and negative BOLD were spatially interleaved, possibly reflecting lateral inhibition of high-frequency areas during activation of adjacent low-frequency areas, and vice versa. These results, which demonstrate the strong influence of tonal amplitude on activation levels, identify sound intensity as an important adjunct parameter for mapping the functional architecture of auditory cortex.
Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L
2017-12-13
Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation—acoustic frequency—might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. PMID:29109238
Phonological Processing in Human Auditory Cortical Fields
Woods, David L.; Herron, Timothy J.; Cate, Anthony D.; Kang, Xiaojian; Yund, E. W.
2011-01-01
We used population-based cortical-surface analysis of functional magnetic resonance imaging data to characterize the processing of consonant–vowel–consonant syllables (CVCs) and spectrally matched amplitude-modulated noise bursts (AMNBs) in human auditory cortex as subjects attended to auditory or visual stimuli in an intermodal selective attention paradigm. Average auditory cortical field (ACF) locations were defined using tonotopic mapping in a previous study. Activations in auditory cortex were defined by two stimulus-preference gradients: (1) medial belt ACFs preferred AMNBs, whereas lateral belt and parabelt fields preferred CVCs. This preference extended into core ACFs, with medial regions of primary auditory cortex (A1) and the rostral field preferring AMNBs and lateral regions preferring CVCs. (2) Anterior ACFs showed smaller activations but more clearly defined stimulus preferences than did posterior ACFs. Stimulus-preference gradients were unaffected by auditory attention, suggesting that ACF preferences reflect the automatic processing of different spectrotemporal sound features. PMID:21541252
Tani, Toshiki; Abe, Hiroshi; Hayami, Taku; Banno, Taku; Kitamura, Naohito; Mashiko, Hiromi
2018-01-01
Natural sound is composed of various frequencies. Although the core region of the primate auditory cortex has functionally defined sound frequency preference maps, how the map is organized in the auditory areas of the belt and parabelt regions is not well known. In this study, we investigated the functional organizations of the core, belt, and parabelt regions encompassed by the lateral sulcus and the superior temporal sulcus in the common marmoset (Callithrix jacchus). Using optical intrinsic signal imaging, we obtained evoked responses to band-pass noise stimuli in a range of sound frequencies (0.5–16 kHz) in anesthetized adult animals and visualized the preferred sound frequency map on the cortical surface. We characterized the functionally defined organization using histologically defined brain areas in the same animals. We found tonotopic representation of a set of sound frequencies (low to high) within the primary (A1), rostral (R), and rostrotemporal (RT) areas of the core region. In the belt region, the tonotopic representation existed only in the mediolateral (ML) area. This representation was symmetric with that found in A1 along the border between areas A1 and ML. The functional structure was not very clear in the anterolateral (AL) area. Low frequencies were mainly preferred in the rostrotemporal lateral (RTL) area, while high frequencies were preferred in the caudolateral (CL) area. There was a portion of the parabelt region that strongly responded to higher sound frequencies (>5.8 kHz) along the border between the rostral parabelt (RPB) and caudal parabelt (CPB) regions. PMID:29736410
Auditory Spatial Attention Representations in the Human Cerebral Cortex
Kong, Lingqiang; Michalka, Samantha W.; Rosen, Maya L.; Sheremata, Summer L.; Swisher, Jascha D.; Shinn-Cunningham, Barbara G.; Somers, David C.
2014-01-01
Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes. PMID:23180753
Single-unit analysis of somatosensory processing in the core auditory cortex of hearing ferrets.
Meredith, M Alex; Allman, Brian L
2015-03-01
The recent findings in several species that the primary auditory cortex processes non-auditory information have largely overlooked the possibility of somatosensory effects. Therefore, the present investigation examined the core auditory cortices (anterior auditory field and primary auditory cortex) for tactile responsivity. Multiple single-unit recordings from anesthetised ferret cortex yielded histologically verified neurons (n = 311) tested with electronically controlled auditory, visual and tactile stimuli, and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger number (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect that was observed, which occurred in all neuron types, was that of suppression of the response to a concurrent auditory cue. The presence of tactile effects in the core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in the auditory cortex are not exclusively visual and that somatosensation plays a significant role in modulation of acoustic processing, and indicate that crossmodal plasticity following deafness may unmask these existing non-auditory functions.
Scott, Brian H.; Leccese, Paul A.; Saleem, Kadharbatcha S.; Kikuchi, Yukiko; Mullarkey, Matthew P.; Fukushima, Makoto; Mishkin, Mortimer; Saunders, Richard C.
2017-01-01
In the ventral stream of the primate auditory cortex, cortico-cortical projections emanate from the primary auditory cortex (AI) along 2 principal axes: one mediolateral, the other caudorostral. Connections in the mediolateral direction from core, to belt, to parabelt, have been well described, but less is known about the flow of information along the supratemporal plane (STP) in the caudorostral dimension. Neuroanatomical tracers were injected throughout the caudorostral extent of the auditory core and rostral STP by direct visualization of the cortical surface. Auditory cortical areas were distinguished by SMI-32 immunostaining for neurofilament, in addition to established cytoarchitectonic criteria. The results describe a pathway comprising step-wise projections from AI through the rostral and rostrotemporal fields of the core (R and RT), continuing to the recently identified rostrotemporal polar field (RTp) and the dorsal temporal pole. Each area was strongly and reciprocally connected with the areas immediately caudal and rostral to it, though deviations from strictly serial connectivity were observed. In RTp, inputs converged from core, belt, parabelt, and the auditory thalamus, as well as higher order cortical regions. The results support a rostrally directed flow of auditory information with complex and recurrent connections, similar to the ventral stream of macaque visual cortex. PMID:26620266
Scott, Brian H; Saleem, Kadharbatcha S; Kikuchi, Yukiko; Fukushima, Makoto; Mishkin, Mortimer; Saunders, Richard C
2017-11-01
In the primate auditory cortex, information flows serially in the mediolateral dimension from core, to belt, to parabelt. In the caudorostral dimension, stepwise serial projections convey information through the primary, rostral, and rostrotemporal (AI, R, and RT) core areas on the supratemporal plane, continuing to the rostrotemporal polar area (RTp) and adjacent auditory-related areas of the rostral superior temporal gyrus (STGr) and temporal pole. In addition to this cascade of corticocortical connections, the auditory cortex receives parallel thalamocortical projections from the medial geniculate nucleus (MGN). Previous studies have examined the projections from MGN to auditory cortex, but most have focused on the caudal core areas AI and R. In this study, we investigated the full extent of connections between MGN and AI, R, RT, RTp, and STGr using retrograde and anterograde anatomical tracers. Both AI and R received nearly 90% of their thalamic inputs from the ventral subdivision of the MGN (MGv; the primary/lemniscal auditory pathway). By contrast, RT received only ∼45% from MGv, and an equal share from the dorsal subdivision (MGd). Area RTp received ∼25% of its inputs from MGv, but received additional inputs from multisensory areas outside the MGN (30% in RTp vs. 1-5% in core areas). The MGN input to RTp distinguished this rostral extension of auditory cortex from the adjacent auditory-related cortex of the STGr, which received 80% of its thalamic input from multisensory nuclei (primarily medial pulvinar). Anterograde tracers identified complementary descending connections by which highly processed auditory information may modulate thalamocortical inputs.
Jing, Rixing; Huang, Jiangjie; Jiang, Deguo; Lin, Xiaodong; Ma, Xiaolei; Tian, Hongjun; Li, Jie; Zhuo, Chuanjun
2018-01-23
Schizophrenia is associated with widespread and complex cerebral blood flow (CBF) disturbances. Auditory verbal hallucinations (AVH) and impaired insight are core symptoms of schizophrenia. However, to the best of our knowledge, very few studies have assessed the CBF characteristics of AVH in schizophrenia patients with and without insight. Building on our previous findings, we used a 3D pseudo-continuous ASL (pcASL) technique to investigate differences in AVH-related CBF alterations between schizophrenia patients with and without insight. We used statistical parametric mapping (SPM8) and statistical non-parametric mapping (SnPM13) to perform the fMRI analysis. We found that AVH-schizophrenia patients without insight showed increased CBF in the left temporal pole and decreased CBF in the right middle frontal gyrus when compared to AVH-schizophrenia patients with insight. These novel findings suggest that AVH-schizophrenia patients without insight have a more complex CBF disturbance. They also tend to support the idea that aberrant CBF in specific brain regions may be a common neural basis of insight and AVH, and are broadly consistent with current hypotheses regarding AVH. Although these findings come from a small sample, they provide evidence motivating a larger study to explore the mechanisms of schizophrenia more thoroughly, especially the core symptoms of AVH and insight.
Mapping perception to action in piano practice: a longitudinal DC-EEG study
Bangert, Marc; Altenmüller, Eckart O
2003-01-01
Background: Performing music requires fast auditory and motor processing. Regarding professional musicians, recent brain imaging studies have demonstrated that auditory stimulation produces a co-activation of motor areas, whereas silent tapping of musical phrases evokes a co-activation in auditory regions. Whether this is obtained via a specific cerebral relay station is unclear. Furthermore, the time course of plasticity has not yet been addressed. Results: Changes in cortical activation patterns (DC-EEG potentials) induced by short-term (20-minute) and long-term (5-week) piano learning were investigated during auditory and motor tasks. Two beginner groups were trained. The 'map' group was allowed to learn the standard piano key-to-pitch map. For the 'no-map' group, random assignment of keys to tones prevented such a map. Auditory-sensorimotor EEG co-activity occurred within only 20 minutes. The effect was enhanced after 5-week training, contributing elements of both perception and action to the mental representation of the instrument. The 'map' group demonstrated significant additional activity of right anterior regions. Conclusion: We conclude that musical training triggers instant plasticity in the cortex, and that right-hemispheric anterior areas provide an audio-motor interface for the mental representation of the keyboard. PMID:14575529
Hierarchical auditory processing directed rostrally along the monkey's supratemporal plane.
Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer
2010-09-29
Connectional anatomical evidence suggests that the auditory core, containing the tonotopic areas A1, R, and RT, constitutes the first stage of auditory cortical processing, with feedforward projections from core outward, first to the surrounding auditory belt and then to the parabelt. Connectional evidence also raises the possibility that the core itself is serially organized, with feedforward projections from A1 to R and with additional projections, although of unknown feed direction, from R to RT. We hypothesized that area RT together with more rostral parts of the supratemporal plane (rSTP) form the anterior extension of a rostrally directed stimulus quality processing stream originating in the auditory core area A1. Here, we analyzed auditory responses of single neurons in three different sectors distributed caudorostrally along the supratemporal plane (STP): sector I, mainly area A1; sector II, mainly area RT; and sector III, principally RTp (the rostrotemporal polar area), including cortex located 3 mm from the temporal tip. Mean onset latency of excitation responses and stimulus selectivity to monkey calls and other sounds, both simple and complex, increased progressively from sector I to III. Also, whereas cells in sector I responded with significantly higher firing rates to the "other" sounds than to monkey calls, those in sectors II and III responded at the same rate to both stimulus types. The pattern of results supports the proposal that the STP contains a rostrally directed, hierarchically organized auditory processing stream, with gradually increasing stimulus selectivity, and that this stream extends from the primary auditory area to the temporal pole.
Eitan, Zohar; Timmers, Renee
2010-01-01
Though auditory pitch is customarily mapped in Western cultures onto spatial verticality (high-low), both anthropological reports and cognitive studies suggest that pitch may be mapped onto a wide variety of other domains. We collected a total of 35 pitch mappings and investigated in four experiments how these mappings are used and…
Kantrowitz, J T; Hoptman, M J; Leitman, D I; Silipo, G; Javitt, D C
2014-01-01
Intact sarcasm perception is a crucial component of social cognition and mentalizing (the ability to understand the mental state of oneself and others). In sarcasm, tone of voice is used to negate the literal meaning of an utterance. In particular, changes in pitch are used to distinguish between sincere and sarcastic utterances. Schizophrenia patients show well-replicated deficits in auditory function and functional connectivity (FC) within and between auditory cortical regions. In this study we investigated the contributions of auditory deficits to sarcasm perception in schizophrenia. Auditory measures including pitch processing, auditory emotion recognition (AER) and sarcasm detection were obtained from 76 patients with schizophrenia/schizo-affective disorder and 72 controls. Resting-state FC (rsFC) was obtained from a subsample and was analyzed using seeds placed in both auditory cortex and meta-analysis-defined core-mentalizing regions relative to auditory performance. Patients showed large effect-size deficits across auditory measures. Sarcasm deficits correlated significantly with general functioning and impaired pitch processing both across groups and within the patient group alone. Patients also showed reduced sensitivity to alterations in mean pitch and variability. For patients, sarcasm discrimination correlated exclusively with the level of rsFC within primary auditory regions whereas for controls, correlations were observed exclusively within core-mentalizing regions (the right posterior superior temporal gyrus, anterior superior temporal sulcus and insula, and left posterior medial temporal gyrus). These findings confirm the contribution of auditory deficits to theory of mind (ToM) impairments in schizophrenia, and demonstrate that FC within auditory, but not core-mentalizing, regions is rate limiting with respect to sarcasm detection in schizophrenia.
Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B
2012-06-07
In the absence of sensory stimuli, spontaneous activity in the brain has been shown to exhibit organization at multiple spatiotemporal scales. In the macaque auditory cortex, responses to acoustic stimuli are tonotopically organized within multiple, adjacent frequency maps aligned in a caudorostral direction on the supratemporal plane (STP) of the lateral sulcus. Here, we used chronic microelectrocorticography to investigate the correspondence between sensory maps and spontaneous neural fluctuations in the auditory cortex. We first mapped tonotopic organization across 96 electrodes spanning approximately two centimeters along the primary and higher auditory cortex. In separate sessions, we then observed that spontaneous activity at the same sites exhibited spatial covariation that reflected the tonotopic map of the STP. This observation demonstrates a close relationship between functional organization and spontaneous neural activity in the sensory cortex of the awake monkey. PMID:22681693
Schönweiler, R; Wübbelt, P; Tolloczko, R; Rose, C; Ptok, M
2000-01-01
Discriminant analysis (DA) and self-organizing feature maps (SOFM) were used to classify passively evoked auditory event-related potentials (ERP) P(1), N(1), P(2) and N(2). Responses from 16 children with severe behavioral auditory perception deficits, 16 children with marked behavioral auditory perception deficits, and 14 controls were examined. Eighteen ERP amplitude parameters were selected for examination of statistical differences between the groups. Different DA methods and SOFM configurations were trained on the values. SOFM yielded better classification results than the DA methods. Subsequently, measures from another 37 subjects that were unknown to the trained SOFM were used to test the reliability of the system. With 10-dimensional vectors, reliable classifications were obtained that matched behavioral auditory perception deficits in 96% of cases, implying central auditory processing disorder (CAPD). The results also support the assumption that CAPD includes a 'non-peripheral' auditory processing deficit. Copyright 2000 S. Karger AG, Basel.
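The SOFM classifier described above can be illustrated with a minimal sketch. Everything below is synthetic and hypothetical: the study's 18 ERP amplitude parameters and patient groups are replaced by made-up 10-dimensional vectors in two artificial clusters, and the map size, learning schedule, and labels are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for 10-dimensional ERP feature vectors (values invented):
# group 0 clusters near 0.2, group 1 clusters near 0.8.
X0 = rng.normal(0.2, 0.05, size=(40, 10))
X1 = rng.normal(0.8, 0.05, size=(40, 10))
X = np.vstack([X0, X1])
y = np.array([0] * 40 + [1] * 40)

# Minimal Kohonen self-organizing feature map on a 4x4 grid.
grid = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)
W = rng.random((16, 10))                        # one weight vector per node

n_epochs = 50
for epoch in range(n_epochs):
    lr = 0.5 * (1 - epoch / n_epochs)           # decaying learning rate
    sigma = 2.0 * (1 - epoch / n_epochs) + 0.3  # decaying neighborhood radius
    for x in X[rng.permutation(len(X))]:
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))   # best-matching unit
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)    # grid distance to BMU
        W += lr * np.exp(-d2 / (2 * sigma ** 2))[:, None] * (x - W)

# Label each node by the majority class of the training vectors it wins.
bmus = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
node_label = np.full(16, -1)
for n in range(16):
    hits = y[bmus == n]
    if len(hits):
        node_label[n] = round(hits.mean())

def classify(x):
    # Assign the label of the nearest labeled map node.
    d = ((W - x) ** 2).sum(axis=1)
    d[node_label < 0] = np.inf
    return node_label[np.argmin(d)]

# Held-out vectors drawn from the same two clusters.
acc0 = np.mean([classify(v) == 0 for v in rng.normal(0.2, 0.05, size=(10, 10))])
acc1 = np.mean([classify(v) == 1 for v in rng.normal(0.8, 0.05, size=(10, 10))])
```

With well-separated clusters the trained map classifies held-out vectors reliably, mirroring the reliability test on unseen subjects reported above.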
Sood, Mariam R; Sereno, Martin I
2016-08-01
Cortical mapping techniques using fMRI have been instrumental in identifying the boundaries of topological (neighbor-preserving) maps in early sensory areas. The presence of topological maps beyond early sensory areas raises the possibility that they might play a significant role in other cognitive systems, and that topological mapping might help to delineate areas involved in higher cognitive processes. In this study, we combine surface-based visual, auditory, and somatomotor mapping methods with a naturalistic reading comprehension task in the same group of subjects to provide a qualitative and quantitative assessment of the cortical overlap between sensory-motor maps in all major sensory modalities, and reading processing regions. Our results suggest that cortical activation during naturalistic reading comprehension overlaps more extensively with topological sensory-motor maps than has been heretofore appreciated. Reading activation in regions adjacent to occipital lobe and inferior parietal lobe almost completely overlaps visual maps, whereas a significant portion of frontal activation for reading in dorsolateral and ventral prefrontal cortex overlaps both visual and auditory maps. Even classical language regions in superior temporal cortex are partially overlapped by topological visual and auditory maps. By contrast, the main overlap with somatomotor maps is restricted to a small region on the anterior bank of the central sulcus near the border between the face and hand representations of M-I. Hum Brain Mapp 37:2784-2810, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc. PMID:27061771
The Role of Auditory Cues in the Spatial Knowledge of Blind Individuals
ERIC Educational Resources Information Center
Papadopoulos, Konstantinos; Papadimitriou, Kimon; Koutsoklenis, Athanasios
2012-01-01
The study presented here sought to explore the role of auditory cues in the spatial knowledge of blind individuals by examining the relation between the perceived auditory cues and the landscape of a given area and by investigating how blind individuals use auditory cues to create cognitive maps. The findings reveal that several auditory cues…
2006-01-01
information of the robot (Figure 1) acquired via laser-based localization techniques. The results are maps of the global soundscape. The algorithmic...environments than noise maps. Furthermore, provided the acoustic localization algorithm can detect the sources, the soundscape can be mapped with many...gathering information about the auditory soundscape in which it is working. In addition to robustness in the presence of noise, it has also been
An anatomical and functional topography of human auditory cortical areas
Moerel, Michelle; De Martino, Federico; Formisano, Elia
2014-01-01
While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights into their anatomical and functional properties as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that—whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis—the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as of functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post-mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions. PMID:25120426
Responses in Rat Core Auditory Cortex are Preserved during Sleep Spindle Oscillations
Sela, Yaniv; Vyazovskiy, Vladyslav V.; Cirelli, Chiara; Tononi, Giulio; Nir, Yuval
2016-01-01
Study Objectives: Sleep is defined as a reversible state of reduction in sensory responsiveness and immobility. A long-standing hypothesis suggests that a high arousal threshold during non-rapid eye movement (NREM) sleep is mediated by sleep spindle oscillations, impairing thalamocortical transmission of incoming sensory stimuli. Here we set out to test this idea directly by examining sensory-evoked neuronal spiking activity during natural sleep. Methods: We compared neuronal (n = 269) and multiunit activity (MUA), as well as local field potentials (LFP) in rat core auditory cortex (A1) during NREM sleep, comparing responses to sounds depending on the presence or absence of sleep spindles. Results: We found that sleep spindles robustly modulated the timing of neuronal discharges in A1. However, responses to sounds were nearly identical for all measured signals including isolated neurons, MUA, and LFPs (all differences < 10%). Furthermore, in 10% of trials, auditory stimulation led to an early termination of the sleep spindle oscillation around 150–250 msec following stimulus onset. Finally, active ON states and inactive OFF periods during slow waves in NREM sleep affected the auditory response in opposite ways, depending on stimulus intensity. Conclusions: Responses in core auditory cortex are well preserved regardless of sleep spindles recorded in that area, suggesting that thalamocortical sensory relay remains functional during sleep spindles, and that sensory disconnection in sleep is mediated by other mechanisms. Citation: Sela Y, Vyazovskiy VV, Cirelli C, Tononi G, Nir Y. Responses in rat core auditory cortex are preserved during sleep spindle oscillations. SLEEP 2016;39(5):1069–1082. PMID:26856904
Deshpande, Aniruddha K; Tan, Lirong; Lu, Long J; Altaye, Mekibib; Holland, Scott K
2018-05-01
The trends in cochlear implantation candidacy and benefit have changed rapidly in the last two decades. It is now widely accepted that early implantation leads to better postimplant outcomes. Although some generalizations can be made about postimplant auditory and language performance, neural mechanisms need to be studied to predict individual prognosis. The aim of this study was to use functional magnetic resonance imaging (fMRI) to identify preimplant neuroimaging biomarkers that predict children's postimplant auditory and language outcomes as measured by parental observation/reports. This is a pre-post correlational measures study. Twelve possible cochlear implant candidates with bilateral severe to profound hearing loss were recruited via referrals for a clinical magnetic resonance imaging to ensure structural integrity of the auditory nerve for implantation. Participants underwent cochlear implantation at a mean age of 19.4 mo. All children used the advanced combination encoder strategy (ACE, Cochlear Corporation™, Nucleus ® Freedom cochlear implants). Three participants received an implant in the right ear; one in the left ear whereas eight participants received bilateral implants. Participants' preimplant neuronal activation in response to two auditory stimuli was studied using an event-related fMRI method. Blood oxygen level dependent contrast maps were calculated for speech and noise stimuli. The general linear model was used to create z-maps. The Auditory Skills Checklist (ASC) and the SKI-HI Language Development Scale (SKI-HI LDS) were administered to the parents 2 yr after implantation. A nonparametric correlation analysis was implemented between preimplant fMRI activation and postimplant auditory and language outcomes based on ASC and SKI-HI LDS. Statistical Parametric Mapping software was used to create regression maps between fMRI activation and scores on the aforementioned tests. 
Regression maps were overlaid on the Imaging Research Center infant template and visualized in MRIcro. Regression maps revealed two clusters of brain activation for the speech versus silence contrast and five clusters for the noise versus silence contrast that were significantly correlated with the parental reports. These clusters included auditory and extra-auditory regions such as the middle temporal gyrus, supramarginal gyrus, precuneus, cingulate gyrus, middle frontal gyrus, subgyral, and middle occipital gyrus. Both positive and negative correlations were observed. Correlation values for the different clusters ranged from -0.90 to 0.95 and were significant at a corrected p value of <0.05. Correlations suggest that postimplant performance may be predicted by activation in specific brain regions. The results of the present study suggest that (1) fMRI can be used to identify neuroimaging biomarkers of auditory and language performance before implantation and (2) activation in certain brain regions may be predictive of postimplant auditory and language performance as measured by parental observation/reports. American Academy of Audiology.
Cecere, Roberto; Gross, Joachim; Willis, Ashleigh; Thut, Gregor
2017-05-24
In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory-visual (AV) or visual-auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV-VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV-VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV-VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AV maps = VA maps versus AV maps ≠ VA maps. The tRSA results favored the AV maps ≠ VA maps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems.
SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA). Copyright © 2017 Cecere et al.
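As a rough, hypothetical illustration of the comparison logic above (not the authors' actual pipeline), the sketch below builds a spatial cross-correlation matrix between two sets of synthetic topographies and scores it against the two model matrices. The channel count, time window, and the use of mean absolute deviation as the fit measure are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n_time, n_chan = 50, 64            # time points and EEG channels (synthetic)

# Synthetic multisensory ERP topographies for the two stimulus orders; drawn
# independently, i.e. the "AV maps != VA maps" scenario holds by construction.
av = rng.normal(size=(n_time, n_chan))
va = rng.normal(size=(n_time, n_chan))

def corr(a, b):
    # Pearson correlation between two scalp maps.
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

# Spatial cross-correlation matrix: similarity of the AV map at time i
# with the VA map at time j.
R = np.array([[corr(av[i], va[j]) for j in range(n_time)]
              for i in range(n_time)])

# Two model similarity matrices: "same maps" predicts perfect similarity at
# matching time points; "different maps" predicts no similarity anywhere.
model_same = np.eye(n_time)
model_diff = np.zeros((n_time, n_time))

# Score each model by mean absolute deviation from the data (lower = better).
fit_same = np.abs(R - model_same).mean()
fit_diff = np.abs(R - model_diff).mean()
```

Because the synthetic AV and VA maps share no spatial structure, the "different maps" model fits better, which is the direction of the result reported in the abstract.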
Brain Mapping of Language and Auditory Perception in High-Functioning Autistic Adults: A PET Study.
ERIC Educational Resources Information Center
Muller, R-A.; Behen, M. E.; Rothermel, R. D.; Chugani, D. C.; Muzik, O.; Mangner, T. J.; Chugani, H. T.
1999-01-01
A study used positron emission tomography (PET) to study patterns of brain activation during auditory processing in five high-functioning adults with autism. Results found that participants showed reversed hemispheric dominance during the verbal auditory stimulation and reduced activation of the auditory cortex and cerebellum. (CR)
Incorporating Auditory Models in Speech/Audio Applications
NASA Astrophysics Data System (ADS)
Krishnamoorthi, Harish
2011-12-01
Following the success in incorporating perceptual models in audio coding algorithms, their application in other speech/audio processing systems is expanding. In general, all perceptual speech/audio processing algorithms involve minimization of an objective function that directly/indirectly incorporates properties of human perception. This dissertation primarily investigates the problems associated with directly embedding an auditory model in the objective function formulation and proposes possible solutions to overcome high complexity issues for use in real-time speech/audio algorithms. Specific problems addressed in this dissertation include: 1) the development of approximate but computationally efficient auditory model implementations that are consistent with the principles of psychoacoustics, 2) the development of a mapping scheme that allows synthesizing a time/frequency domain representation from its equivalent auditory model output. The first problem is aimed at addressing the high computational complexity involved in solving perceptual objective functions that require repeated application of the auditory model for evaluation of different candidate solutions. In this dissertation, frequency pruning and detector pruning algorithms are developed that efficiently implement the various auditory model stages. The performance of the pruned model is compared to that of the original auditory model for different types of test signals in the SQAM database. Experimental results indicate only a 4-7% relative error in loudness while attaining up to 80-90% reduction in computational complexity. Similarly, a hybrid algorithm is developed specifically for use with sinusoidal signals and employs the proposed auditory pattern combining technique together with a look-up table to store representative auditory patterns.
The second problem obtains an estimate of the auditory representation that minimizes a perceptual objective function and transforms the auditory pattern back to its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors in minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages that ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
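The frequency-pruning idea described above (skip auditory bands that contribute negligibly before running the expensive loudness stages) can be sketched as follows. The excitation values, the pruning threshold, and the power-law loudness stand-in are invented for illustration and are not the dissertation's actual model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic excitation pattern over 240 auditory filter bands: three dominant
# components plus a near-silent noise floor (all values illustrative).
excitation = rng.uniform(0.0, 1e-12, size=240)
excitation[[30, 75, 120]] = [1.0, 0.8, 0.6]

def specific_loudness(e):
    # Compressive power law standing in for the model's specific-loudness stage.
    return e ** 0.3

full = specific_loudness(excitation).sum()       # evaluate every band

# Frequency pruning: skip bands whose excitation is far below the strongest
# component; only the surviving bands reach the loudness stage.
keep = excitation > 0.05 * excitation.max()
pruned = specific_loudness(excitation[keep]).sum()

rel_error = abs(full - pruned) / full            # loudness approximation error
savings = 1.0 - keep.mean()                      # fraction of bands skipped
```

In this toy setting the pruned evaluation skips well over 90% of the bands while changing the summed loudness by only a few percent, the same trade-off the abstract reports (4-7% error for 80-90% complexity reduction).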
Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D
2015-09-01
To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.
ERIC Educational Resources Information Center
Boets, Bart; Wouters, Jan; van Wieringen, Astrid; Ghesquiere, Pol
2007-01-01
This study investigates whether the core bottleneck of literacy impairment should be situated at the phonological level or at a more basic sensory level, as postulated by supporters of the auditory temporal processing theory. Phonological ability, speech perception and low-level auditory processing were assessed in a group of 5-year-old pre-school…
Is auditory perceptual timing a core deficit of developmental coordination disorder?
Trainor, Laurel J; Chang, Andrew; Cairney, John; Li, Yao-Chuen
2018-05-09
Time is an essential dimension for perceiving and processing auditory events, and for planning and producing motor behaviors. Developmental coordination disorder (DCD) is a neurodevelopmental disorder affecting 5-6% of children that is characterized by deficits in motor skills. Studies show that children with DCD have motor timing and sensorimotor timing deficits. We suggest that auditory perceptual timing deficits may also be core characteristics of DCD. This idea is consistent with evidence from several domains: (1) motor-related brain regions are often involved in auditory timing processes; (2) DCD has high comorbidity with dyslexia and attention deficit hyperactivity disorder, which are known to be associated with auditory timing deficits; (3) a few studies report deficits in auditory-motor timing among children with DCD; and (4) our preliminary behavioral and neuroimaging results show that children with DCD at age 6 and 7 have deficits in auditory time discrimination compared to typically developing children. We propose directions for investigating auditory perceptual timing processing in DCD that use various behavioral and neuroimaging approaches. From a clinical perspective, research findings can potentially benefit our understanding of the etiology of DCD, identify early biomarkers of DCD, and can be used to develop evidence-based interventions for DCD involving auditory-motor training. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of The New York Academy of Sciences.
Concentric scheme of monkey auditory cortex
NASA Astrophysics Data System (ADS)
Kosaki, Hiroko; Saunders, Richard C.; Mishkin, Mortimer
2003-04-01
The cytoarchitecture of the rhesus monkey's auditory cortex was examined using immunocytochemical staining with parvalbumin, calbindin-D28K, and SMI32, as well as staining for cytochrome oxidase (CO). The results suggest that Kaas and Hackett's scheme of the auditory cortices can be extended to include five concentric rings surrounding an inner core. The inner core, containing areas A1 and R, is the most densely stained with parvalbumin and CO and can be separated on the basis of laminar patterns of SMI32 staining into lateral and medial subdivisions. From the inner core to the fifth (outermost) ring, parvalbumin staining gradually decreases and calbindin staining gradually increases. The first ring corresponds to Kaas and Hackett's auditory belt, and the second, to their parabelt. SMI32 staining revealed a clear border between these two. Rings 2 through 5 extend laterally into the dorsal bank of the superior temporal sulcus. The results also suggest that the rostral tip of the outermost ring adjoins the rostroventral part of the insula (area Pro) and the temporal pole, while the caudal tip adjoins the ventral part of area 7a.
Neural network retuning and neural predictors of learning success associated with cello training.
Wollman, Indiana; Penhune, Virginia; Segado, Melanie; Carpentier, Thibaut; Zatorre, Robert J
2018-06-26
The auditory and motor neural systems are closely intertwined, enabling people to carry out tasks such as playing a musical instrument whose mapping between action and sound is extremely sophisticated. While the dorsal auditory stream has been shown to mediate these audio-motor transformations, little is known about how such mapping emerges with training. Here, we use longitudinal training on a cello as a model for brain plasticity during the acquisition of specific complex skills, including continuous and many-to-one audio-motor mapping, and we investigate individual differences in learning. We trained participants with no musical background to play on a specially designed MRI-compatible cello and scanned them before and after 1 and 4 wk of training. Activation of the auditory-to-motor dorsal cortical stream emerged rapidly during the training and was similarly activated during passive listening and cello performance of trained melodies. This network activation was independent of performance accuracy and therefore appears to be a prerequisite of music playing. In contrast, greater recruitment of regions involved in auditory encoding and motor control over the training was related to better musical proficiency. Additionally, pre-supplementary motor area activity and its connectivity with the auditory cortex during passive listening before training was predictive of final training success, revealing the integrative function of this network in auditory-motor information processing. Together, these results clarify the critical role of the dorsal stream and its interaction with auditory areas in complex audio-motor learning.
Evaluation of Techniques Used to Estimate Cortical Feature Maps
Katta, Nalin; Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.
2011-01-01
Functional properties of neurons are often distributed nonrandomly within a cortical area and form topographic maps that reveal insights into neuronal organization and interconnection. Some functional maps, such as in visual cortex, are fairly straightforward to discern with a variety of techniques, while other maps, such as in auditory cortex, have resisted easy characterization. In order to determine appropriate protocols for establishing accurate functional maps in auditory cortex, artificial topographic maps were probed under various conditions, and the accuracy of estimates formed from the actual maps was quantified. Under these conditions, low-complexity maps such as sound frequency can be estimated accurately with as few as 25 total samples (e.g., electrode penetrations or imaging pixels) if neural responses are averaged together. More samples are required to achieve the highest estimation accuracy for higher complexity maps, and averaging improves map estimate accuracy even more than increasing sampling density. Undersampling without averaging can result in misleading map estimates, while undersampling with averaging can lead to the false conclusion of no map when one actually exists. Uniform sample spacing only slightly improves map estimation over nonuniform sample spacing typical of serial electrode penetrations. Tessellation plots commonly used to visualize maps estimated using nonuniform sampling are always inferior to linearly interpolated estimates, although differences are slight at higher sampling densities. Within primary auditory cortex, then, multiunit sampling with at least 100 samples would likely result in reasonable feature map estimates for all but the highest complexity maps and the highest variability that might be expected. PMID:21889537
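The abstract's central comparison (averaged versus single-trial sampling, with linear interpolation between sampled sites) can be sketched on a synthetic 1-D map. The map shape, noise level, and trial counts below are arbitrary stand-ins, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Artificial 1-D tonotopic map: best frequency rises log-linearly across
# 1000 cortical positions (a low-complexity map, in the study's terms).
pos = np.linspace(0, 1, 1000)
true_map = 2 ** (pos * 5)          # best frequency, 1 to 32 (arbitrary units)

# Sample 25 sites, as in the abstract's low-complexity case; each "recording"
# is the true value corrupted by response variability.
sites = np.sort(rng.choice(1000, size=25, replace=False))
noise_sd = 2.0

single = true_map[sites] + rng.normal(0, noise_sd, size=25)            # 1 trial
avg = true_map[sites] + rng.normal(0, noise_sd, size=(20, 25)).mean(axis=0)

# Linear interpolation between sampled sites (preferred over tessellation).
est_single = np.interp(pos, pos[sites], single)
est_avg = np.interp(pos, pos[sites], avg)

err_single = np.abs(est_single - true_map).mean()   # map estimation error
err_avg = np.abs(est_avg - true_map).mean()
```

Averaging 20 repeats shrinks the noise contribution by roughly the square root of the trial count, so the averaged estimate tracks the true map more closely at the same sampling density, which is the abstract's point that averaging improves estimates even more than denser sampling.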
Representation of Sound Categories in Auditory Cortical Maps
ERIC Educational Resources Information Center
Guenther, Frank H.; Nieto-Castanon, Alfonso; Ghosh, Satrajit S.; Tourville, Jason A.
2004-01-01
Functional magnetic resonance imaging (fMRI) was used to investigate the representation of sound categories in human auditory cortex. Experiment 1 investigated the representation of prototypical (good) and nonprototypical (bad) examples of a vowel sound. Listening to prototypical examples of a vowel resulted in less auditory cortical activation…
Debruyne, Joke A; Francart, Tom; Janssen, A Miranda L; Douma, Kim; Brokx, Jan P L
2017-03-01
This study investigated the hypotheses that (1) prelingually deafened CI users do not have perfect electrode discrimination ability and (2) the deactivation of non-discriminable electrodes can improve auditory performance. Electrode discrimination difference limens were determined for all electrodes of the array. The subjects' basic map was subsequently compared to an experimental map, which contained only discriminable electrodes, with respect to speech understanding in quiet and in noise, listening effort, spectral ripple discrimination and subjective appreciation. Subjects were six prelingually deafened, late implanted adults using the Nucleus cochlear implant. Electrode discrimination difference limens across all subjects and electrodes ranged from 0.5 to 7.125, with significantly larger limens for basal electrodes. No significant differences were found between the basic map and the experimental map on auditory tests. Subjective appreciation was found to be significantly poorer for the experimental map. Prelingually deafened CI users were unable to discriminate between all adjacent electrodes. There was no difference in auditory performance between the basic and experimental map. Potential factors contributing to the absence of improvement with the experimental map include the reduced number of maxima, incomplete adaptation to the new frequency allocation, and the mainly basal location of deactivated electrodes.
Short-term plasticity in auditory cognition.
Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko
2007-12-01
Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.
Ingham, N J; Thornton, S K; McCrossan, D; Withington, D J
1998-12-01
Neurotransmitter involvement in development and maintenance of the auditory space map in the guinea pig superior colliculus. J. Neurophysiol. 80: 2941-2953, 1998. The mammalian superior colliculus (SC) is a complex area of the midbrain in terms of anatomy, physiology, and neurochemistry. The SC bears representations of the major sensory modalities integrated with a motor output system. It is implicated in saccade generation and in behavioral responses to novel sensory stimuli, and receives innervation from diverse regions of the brain using many neurotransmitter classes. Ethylene-vinyl acetate copolymer (Elvax-40W polymer) was used here to chronically deliver neurotransmitter receptor antagonists to the SC of the guinea pig to investigate the potential role played by the major neurotransmitter systems in the collicular representation of auditory space. Slices of polymer containing different drugs were implanted onto the SC of guinea pigs before the development of the SC azimuthal auditory space map, at approximately 20 days after birth (DAB). A further group of animals was exposed to aminophosphonopentanoic acid (AP5) at approximately 250 DAB. Azimuthal spatial tuning properties of deep layer multiunits of anesthetized guinea pigs were examined approximately 20 days after implantation of the Elvax polymer. Broadband noise bursts were presented to the animals under anechoic, free-field conditions. Neuronal responses were used to construct polar plots representative of the auditory spatial multiunit receptive fields (MURFs). Animals exposed to control polymer could develop a map of auditory space in the SC comparable with that seen in unimplanted normal animals. Exposure of the SC of young animals to AP5, 6-cyano-7-nitroquinoxaline-2,3-dione, or atropine, resulted in a reduction in the proportion of spatially tuned responses with an increase in the proportion of broadly tuned responses and a degradation in topographic order.
Thus N-methyl--aspartate (NMDA) and non-NMDA glutamate receptors and muscarinic acetylcholine receptors appear to play vital roles in the development of the SC auditory space map. A group of animals exposed to AP5 beginning at approximately 250 DAB produced results very similar to those obtained in the young group exposed to AP5. Thus NMDA glutamate receptors also seem to be involved in the maintenance of the SC representation of auditory space in the adult guinea pig. Exposure of the SC of young guinea pigs to gamma-aminobutyric acid (GABA) receptor blocking agents produced some but not total disruption of the spatial tuning of auditory MURFs. Receptive fields were large compared with controls, but a significant degree of topographical organization was maintained. GABA receptors may play a role in the development of fine tuning and sharpening of auditory spatial responses in the SC but not necessarily in the generation of topographical order of the these responses.
Memory as embodiment: The case of modality and serial short-term memory.
Macken, Bill; Taylor, John C; Kozlov, Michail D; Hughes, Robert W; Jones, Dylan M
2016-10-01
Classical explanations for the modality effect-superior short-term serial recall of auditory compared to visual sequences-typically recur to privileged processing of information derived from auditory sources. Here we critically appraise such accounts, and re-evaluate the nature of the canonical empirical phenomena that have motivated them. Three experiments show that the standard account of modality in memory is untenable, since auditory superiority in recency is often accompanied by visual superiority in mid-list serial positions. We explain this simultaneous auditory and visual superiority by reference to the way in which perceptual objects are formed in the two modalities and how those objects are mapped to speech motor forms to support sequence maintenance and reproduction. Specifically, stronger obligatory object formation operating in the standard auditory form of sequence presentation compared to that for visual sequences leads both to enhanced addressability of information at the object boundaries and reduced addressability for that in the interior. Because standard visual presentation does not lead to such object formation, such sequences do not show the boundary advantage observed for auditory presentation, but neither do they suffer loss of addressability associated with object information, thereby affording more ready mapping of that information into a rehearsal cohort to support recall. We show that a range of factors that impede this perceptual-motor mapping eliminate visual superiority while leaving auditory superiority unaffected. We make a general case for viewing short-term memory as an embodied, perceptual-motor process. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Sinai, A; Crone, N E; Wied, H M; Franaszczuk, P J; Miglioretti, D; Boatman-Reich, D
2009-01-01
We compared intracranial recordings of auditory event-related responses with electrocortical stimulation mapping (ESM) to determine their functional relationship. Intracranial recordings and ESM were performed, using speech and tones, in adult epilepsy patients with subdural electrodes implanted over lateral left cortex. Evoked N1 responses and induced spectral power changes were obtained by trial averaging and time-frequency analysis. ESM impaired perception and comprehension of speech, not tones, at electrode sites in the posterior temporal lobe. There was high spatial concordance between ESM sites critical for speech perception and the largest spectral power (100% concordance) and N1 (83%) responses to speech. N1 responses showed good sensitivity (0.75) and specificity (0.82), but poor positive predictive value (0.32). Conversely, increased high-frequency power (>60Hz) showed high specificity (0.98), but poorer sensitivity (0.67) and positive predictive value (0.67). Stimulus-related differences were observed in the spatial-temporal patterns of event-related responses. Intracranial auditory event-related responses to speech were associated with cortical sites critical for auditory perception and comprehension of speech. These results suggest that the distribution and magnitude of intracranial auditory event-related responses to speech reflect the functional significance of the underlying cortical regions and may be useful for pre-surgical functional mapping.
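The sensitivity, specificity, and positive-predictive-value figures in the Sinai et al. entry follow the standard confusion-matrix definitions. As a quick illustration (a sketch only; the counts below are hypothetical, chosen merely to land near the reported N1 values, and are not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Confusion-matrix metrics used when validating event-related
    responses against ESM sites (definitions only; any counts passed
    in here are illustrative)."""
    sensitivity = tp / (tp + fn)   # fraction of critical sites detected
    specificity = tn / (tn + fp)   # fraction of non-critical sites rejected
    ppv = tp / (tp + fp)           # chance a detected site is truly critical
    return sensitivity, specificity, ppv

# Hypothetical counts that reproduce roughly the reported N1 figures
# (0.75 sensitivity, ~0.82 specificity, ~0.32 PPV)
sens, spec, ppv = diagnostic_metrics(tp=6, fp=13, fn=2, tn=59)
```

The low PPV despite good sensitivity and specificity reflects how few sites are truly critical: even a modest false-positive rate over many non-critical electrodes swamps the true positives.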
A comprehensive three-dimensional cortical map of vowel space.
Scharinger, Mathias; Idsardi, William J; Poe, Samantha
2011-12-01
Mammalian cortex is known to contain various kinds of spatial encoding schemes for sensory information including retinotopic, somatosensory, and tonotopic maps. Tonotopic maps are especially interesting for human speech sound processing because they encode linguistically salient acoustic properties. In this study, we mapped the entire vowel space of a language (Turkish) onto cortical locations by using the magnetic N1 (M100), an auditory-evoked component that peaks approximately 100 msec after auditory stimulus onset. We found that dipole locations could be structured into two distinct maps, one for vowels produced with the tongue positioned toward the front of the mouth (front vowels) and one for vowels produced in the back of the mouth (back vowels). Furthermore, we found spatial gradients in lateral-medial, anterior-posterior, and inferior-superior dimensions that encoded the phonetic, categorical distinctions between all the vowels of Turkish. Statistical model comparisons of the dipole locations suggest that the spatial encoding scheme is not entirely based on acoustic bottom-up information but crucially involves featural-phonetic top-down modulation. Thus, multiple areas of excitation along the unidimensional basilar membrane are mapped into higher dimensional representations in auditory cortex.
Pre-attentive, context-specific representation of fear memory in the auditory cortex of rat.
Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu
2013-01-01
Neural representation in the auditory cortex is rapidly modulated by both top-down attention and bottom-up stimulus properties, in order to improve perception in a given context. Learning-induced, pre-attentive map plasticity has also been studied in the anesthetized cortex; however, little attention has been paid to rapid, context-dependent modulation. We hypothesize that context-specific learning leads to pre-attentively modulated, multiplex representation in the auditory cortex. Here, we investigate map plasticity in the auditory cortices of anesthetized rats conditioned in a context-dependent manner, such that a conditioned stimulus (CS) of a 20-kHz tone and an unconditioned stimulus (US) of a mild electrical shock were associated only under a noisy auditory context, but not in silence. After the conditioning, although no distinct plasticity was found in the tonotopic map, tone-evoked responses were more noise-resistant than before conditioning. Yet, the conditioned group showed a reduced spread of activation to each tone in noise, but not in silence, associated with a sharpening of frequency tuning. The encoding accuracy index of neurons showed that conditioning deteriorated the accuracy of tone-frequency representations in the noisy condition at off-CS regions, but not at CS regions, suggesting that arbitrary tones around the frequency of the CS were more likely perceived as the CS in the specific context in which the CS was associated with the US. Together, these results demonstrate that learning-induced plasticity in the auditory cortex occurs in a context-dependent manner.
A Systematic Review of Mapping Strategies for the Sonification of Physical Quantities
Dubus, Gaël; Bresin, Roberto
2013-01-01
The field of sonification has progressed greatly over the past twenty years and currently constitutes an established area of research. This article aims at exploiting and organizing the knowledge accumulated in previous experimental studies to build a foundation for future sonification works. A systematic review of these studies may reveal trends in sonification design, and therefore support the development of design guidelines. To this end, we have reviewed and analyzed 179 scientific publications related to sonification of physical quantities. Using a bottom-up approach, we set up a list of conceptual dimensions belonging to both physical and auditory domains. Mappings used in the reviewed works were identified, forming a database of 495 entries. Frequency of use was analyzed among these conceptual dimensions as well as higher-level categories. Results confirm two hypotheses formulated in a preliminary study: pitch is by far the most used auditory dimension in sonification applications, and spatial auditory dimensions are almost exclusively used to sonify kinematic quantities. To detect successful as well as unsuccessful sonification strategies, assessment of mapping efficiency conducted in the reviewed works was considered. Results show that a proper evaluation of sonification mappings is performed only in a marginal proportion of publications. Additional aspects of the publication database were investigated: historical distribution of sonification works is presented, projects are classified according to their primary function, and the sonic material used in the auditory display is discussed. Finally, a mapping-based approach for characterizing sonification is proposed. PMID:24358192
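The Dubus and Bresin review finds pitch to be by far the most used auditory dimension in sonification. A minimal sketch of that canonical mapping (the frequency range and function name are arbitrary choices for illustration, not from the review): log-frequency interpolation, so equal steps in the physical quantity sound like equal musical intervals.

```python
def value_to_pitch(value, vmin, vmax, f_lo=220.0, f_hi=880.0):
    """Sonify a physical quantity as pitch, the mapping the review
    finds most common. Geometric (log-frequency) interpolation keeps
    equal value steps perceptually uniform. The A3-A5 range here is
    an arbitrary illustrative choice."""
    t = (value - vmin) / (vmax - vmin)    # normalize to [0, 1]
    return f_lo * (f_hi / f_lo) ** t      # geometric interpolation

# A mid-range value maps to the geometric mean of the endpoints (440 Hz)
freq = value_to_pitch(0.5, 0.0, 1.0)
```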
Fritz, U; Rohrberg, M; Lange, C; Weyland, W; Bräuer, A; Braun, U
1996-11-01
Temperature of the tympanic membrane is recommended as a "gold standard" of core-temperature recording. However, use of temperature probes in the auditory canal may lead to damage to the tympanic membrane. Temperature measurement in the auditory canal with infrared thermometry does not pose this risk; furthermore, it is easy to perform and not very time-consuming. For this reason, infrared thermometry of the auditory canal is becoming increasingly popular in clinical practice. We evaluated two infrared thermometers, the Diatek 9000 Thermoguide and the Diatek 9000 Instatemp, regarding factors influencing agreement with conventional tympanic temperature measurement and other core-temperature recording sites. In addition, we systematically evaluated user-dependent factors that influence agreement with the tympanic temperature. In 20 volunteers we evaluated the influence of three factors: duration of the device in the auditory canal before taking the temperature (0 or 5 s), the interval between two consecutive recordings (30, 60, 90, 120, 180 s), and the positioning of the grip relative to the auditory-canal axis (0, 60, 180, and 270 degrees). Agreement with tympanic contact probes (Mon-a-therm tympanic) in the contralateral ear was investigated in 100 postoperative patients. Comparative readings with rectal (YSI series 400) and esophageal (Mon-a-therm esophageal stethoscope with temperature sensor) probes were done in 100 patients in the ICU. The method of Bland and Altman was used for comparison. Shortening the interval between two consecutive readings led to increasing differences between the two measurements, with the second reading decreasing. A similar effect was seen when positioning the infrared thermometers in the auditory canal before taking temperatures: after 5 s the recorded temperatures were significantly lower than temperature recordings taken immediately. Rotation of the devices out of the telephone-handle position led to increasing lack of agreement between infrared thermometry and contact probes. Mean differences between infrared thermometry (Instatemp and Thermoguide, CAL mode) and tympanic probes were -0.41 +/- 0.67 degrees C (2 SD) and -0.43 +/- 0.70 degrees C, respectively. Mean differences between the Thermoguide (Rectal mode) and the rectal probe were -0.19 +/- 0.72 degrees C, and between the Thermoguide (Core mode) and the esophageal probe -0.13 +/- 0.74 degrees C. Although easy to use, infrared thermometry requires careful handling. To obtain optimal recordings, the time between two consecutive readings should not be less than 2 min. Recordings should be taken immediately after positioning the device in the auditory canal. Best results are obtained in the 60-degree position, with the grip of the device following the ramus mandibulae (telephone-handle position). The lower readings of infrared thermometry compared with tympanic contact probes indicate that the readings obtained represent the temperature of the auditory canal rather than of the tympanic membrane itself. To compensate for this underestimation of core temperature, infrared readings are corrected and transformed into core-equivalent temperatures. This data correction reduces mean differences between infrared recordings and traditional core-temperature monitoring, but leaves the limits of agreement between the two methods unaffected.
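The Bland-Altman comparison used in the Fritz et al. study reports exactly two quantities per method pair: the bias (mean of the paired differences) and the limits of agreement (bias +/- 2 SD of the differences). A minimal sketch of that computation (assuming readings are already paired; data below are made up for illustration):

```python
import statistics

def bland_altman(readings_a, readings_b):
    """Bland-Altman agreement analysis: bias (mean difference) and
    limits of agreement (bias +/- 2 SD), as reported above for
    infrared vs. contact thermometry. Illustrative sketch; inputs
    are assumed to be paired readings from the two methods."""
    diffs = [a - b for a, b in zip(readings_a, readings_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample SD of the differences
    return bias, (bias - 2 * sd, bias + 2 * sd)

# Made-up example: infrared readings sitting ~0.4 degC below a
# contact probe give a negative bias with narrow limits of agreement
bias, limits = bland_altman([36.0, 36.5, 37.1], [36.4, 36.8, 37.6])
```

A systematic offset like the -0.41 degC reported above shows up in the bias; reading-to-reading scatter widens the limits of agreement, which is why the study's core-equivalent correction shifts the bias but leaves those limits unchanged.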
Mapping cortical hubs in tinnitus
2009-01-01
Background Subjective tinnitus is the perception of a sound in the absence of any physical source. It has been shown that tinnitus is associated with hyperactivity of the auditory cortices. Accompanying this hyperactivity, changes in non-auditory brain structures have also been reported. However, there have been no studies on the long-range information flow between these regions. Results Using Magnetoencephalography, we investigated the long-range cortical networks of chronic tinnitus sufferers (n = 23) and healthy controls (n = 24) in the resting state. A beamforming technique was applied to reconstruct the brain activity at source level and the directed functional coupling between all voxels was analyzed by means of Partial Directed Coherence. Within a cortical network, hubs are brain structures that either influence a great number of other brain regions or that are influenced by a great number of other brain regions. By mapping the cortical hubs in tinnitus and controls we report fundamental group differences in the global networks, mainly in the gamma frequency range. The prefrontal cortex, the orbitofrontal cortex and the parieto-occipital region were core structures in this network. The information flow from the global network to the temporal cortex correlated positively with the strength of tinnitus distress. Conclusion With the present study we suggest that the hyperactivity of the temporal cortices in tinnitus is integrated in a global network of long-range cortical connectivity. Top-down influence from the global network on the temporal areas relates to the subjective strength of the tinnitus distress. PMID:19930625
Restoring auditory cortex plasticity in adult mice by restricting thalamic adenosine signaling
Blundon, Jay A.; Roy, Noah C.; Teubner, Brett J. W.; ...
2017-06-30
Circuits in the auditory cortex are highly susceptible to acoustic influences during an early postnatal critical period. The auditory cortex selectively expands neural representations of enriched acoustic stimuli, a process important for human language acquisition. Adults lack this plasticity. We show in the murine auditory cortex that juvenile plasticity can be reestablished in adulthood if acoustic stimuli are paired with disruption of ecto-5'-nucleotidase–dependent adenosine production or A1–adenosine receptor signaling in the auditory thalamus. This plasticity occurs at the level of cortical maps and individual neurons in the auditory cortex of awake adult mice and is associated with long-term improvement of tone-discrimination abilities. We determined that, in adult mice, disrupting adenosine signaling in the thalamus rejuvenates plasticity in the auditory cortex and improves auditory perception.
Musical learning in children and adults with Williams syndrome.
Lense, M; Dykens, E
2013-09-01
There is recent interest in using music making as an empirically supported intervention for various neurodevelopmental disorders due to music's engagement of perceptual-motor mapping processes. However, little is known about music learning in populations with developmental disabilities. Williams syndrome (WS) is a neurodevelopmental genetic disorder whose characteristic auditory strengths and visual-spatial weaknesses map onto the processes used to learn to play a musical instrument. We identified correlates of novel musical instrument learning in WS by teaching 46 children and adults (7-49 years) with WS to play the Appalachian dulcimer. Obtained dulcimer skill was associated with prior musical abilities (r = 0.634, P < 0.001) and visual-motor integration abilities (r = 0.487, P = 0.001), but not age, gender, IQ, handedness, auditory sensitivities or musical interest/emotionality. Use of auditory learning strategies, but not visual or instructional strategies, predicted greater dulcimer skill beyond individual musical and visual-motor integration abilities (β = 0.285, sr(2) = 0.06, P = 0.019). These findings map onto behavioural and emerging neural evidence for greater auditory-motor mapping processes in WS. Results suggest that explicit awareness of task-specific learning approaches is important when learning a new skill. Implications for using music with populations with syndrome-specific strengths and weaknesses will be discussed. © 2012 The Authors. Journal of Intellectual Disability Research © 2012 John Wiley & Sons Ltd, MENCAP & IASSID.
Pfordresher, Peter Q; Mantell, James T
2012-01-01
We report an experiment that tested whether effects of altered auditory feedback (AAF) during piano performance differ from its effects during singing. These effector systems differ with respect to the mapping between motor gestures and pitch content of auditory feedback. Whereas this action-effect mapping is highly reliable during phonation in any vocal motor task (singing or speaking), mapping between finger movements and pitch occurs only in limited situations, such as piano playing. Effects of AAF in both tasks replicated results previously found for keyboard performance (Pfordresher, 2003), in that asynchronous (delayed) feedback slowed timing whereas alterations to feedback pitch increased error rates, and the effect of asynchronous feedback was similar in magnitude across tasks. However, manipulations of feedback pitch had larger effects on singing than on keyboard production, suggesting effector-specific differences in sensitivity to action-effect mapping with respect to feedback content. These results support the view that disruption from AAF is based on abstract, effector independent, response-effect associations but that the strength of associations differs across effector systems. Copyright © 2011. Published by Elsevier B.V.
Functional mapping of the primate auditory system.
Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer
2003-01-24
Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.
Boets, Bart; Wouters, Jan; van Wieringen, Astrid; Ghesquière, Pol
2007-04-09
This study investigates whether the core bottleneck of literacy-impairment should be situated at the phonological level or at a more basic sensory level, as postulated by supporters of the auditory temporal processing theory. Phonological ability, speech perception and low-level auditory processing were assessed in a group of 5-year-old pre-school children at high-family risk for dyslexia, compared to a group of well-matched low-risk control children. Based on family risk status and first grade literacy achievement children were categorized in groups and pre-school data were retrospectively reanalyzed. On average, children showing both increased family risk and literacy-impairment at the end of first grade, presented significant pre-school deficits in phonological awareness, rapid automatized naming, speech-in-noise perception and frequency modulation detection. The concurrent presence of these deficits before receiving any formal reading instruction, might suggest a causal relation with problematic literacy development. However, a closer inspection of the individual data indicates that the core of the literacy problem is situated at the level of higher-order phonological processing. Although auditory and speech perception problems are relatively over-represented in literacy-impaired subjects and might possibly aggravate the phonological and literacy problem, it is unlikely that they would be at the basis of these problems. At a neurobiological level, results are interpreted as evidence for dysfunctional processing along the auditory-to-articulation stream that is implied in phonological processing, in combination with a relatively intact or inconsistently impaired functioning of the auditory-to-meaning stream that subserves auditory processing and speech perception.
Terband, H.; Maassen, B.; Guenther, F.H.; Brumberg, J.
2014-01-01
Background/Purpose Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. Method In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Results Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. Conclusions These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. PMID:24491630
Nir, Yuval; Vyazovskiy, Vladyslav V.; Cirelli, Chiara; Banks, Matthew I.; Tononi, Giulio
2015-01-01
Sleep entails a disconnection from the external environment. By and large, sensory stimuli do not trigger behavioral responses and are not consciously perceived as they usually are in wakefulness. Traditionally, sleep disconnection was ascribed to a thalamic “gate,” which would prevent signal propagation along ascending sensory pathways to primary cortical areas. Here, we compared single-unit and LFP responses in core auditory cortex as freely moving rats spontaneously switched between wakefulness and sleep states. Despite robust differences in baseline neuronal activity, both the selectivity and the magnitude of auditory-evoked responses were comparable across wakefulness, Nonrapid eye movement (NREM) and rapid eye movement (REM) sleep (pairwise differences <8% between states). The processing of deviant tones was also compared in sleep and wakefulness using an oddball paradigm. Robust stimulus-specific adaptation (SSA) was observed following the onset of repetitive tones, and the strength of SSA effects (13–20%) was comparable across vigilance states. Thus, responses in core auditory cortex are preserved across sleep states, suggesting that evoked activity in primary sensory cortices is driven by external physical stimuli with little modulation by vigilance state. We suggest that sensory disconnection during sleep occurs at a stage later than primary sensory areas. PMID:24323498
Interconnected growing self-organizing maps for auditory and semantic acquisition modeling.
Cao, Mengxue; Li, Aijun; Fang, Qiang; Kaufmann, Emily; Kröger, Bernd J
2014-01-01
Based on the incremental nature of knowledge acquisition, in this study we propose a growing self-organizing neural network approach for modeling the acquisition of auditory and semantic categories. We introduce an Interconnected Growing Self-Organizing Maps (I-GSOM) algorithm, which takes associations between auditory information and semantic information into consideration. Direct phonetic-semantic association is simulated in order to model language acquisition in early phases, such as the babbling and imitation stages, in which no phonological representations exist. Based on the I-GSOM algorithm, we conducted experiments using paired acoustic and semantic training data. We use a cyclical reinforcing and reviewing training procedure to model the teaching and learning process between children and their communication partners. A reinforcing-by-link training procedure and a link-forgetting procedure are introduced to model the acquisition of associative relations between auditory and semantic information. Experimental results indicate that (1) I-GSOM has good ability to learn auditory and semantic categories presented within the training data; (2) clear auditory and semantic boundaries can be found in the network representation; (3) cyclical reinforcing and reviewing training leads to a detailed categorization as well as to a detailed clustering, while keeping the clusters that have already been learned and the network structure that has already been developed stable; and (4) reinforcing-by-link training leads to well-perceived auditory-semantic associations. Our I-GSOM model suggests that it is important to associate auditory information with semantic information during language acquisition. Despite its high level of abstraction, our I-GSOM approach can be interpreted as a biologically-inspired neurocomputational model.
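The I-GSOM model above builds on the standard self-organizing map. As orientation, here is a minimal 1-D SOM training loop, the competitive-learning core that growing-map variants extend; the growing mechanism and the auditory-semantic linking of I-GSOM are deliberately not shown, and all parameter values are illustrative choices:

```python
import random

def train_som(data, n_units=8, epochs=60, lr0=0.4, radius0=2.0):
    """Minimal 1-D self-organizing map (SOM): repeatedly find the
    best-matching unit (BMU) for each input and pull it, plus its
    map neighbours, toward that input, annealing learning rate and
    neighbourhood radius over epochs. Toy sketch only."""
    dim = len(data[0])
    rng = random.Random(0)  # fixed seed for reproducibility
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        frac = 1.0 - epoch / epochs        # anneal toward fine-tuning
        lr = lr0 * frac
        radius = radius0 * frac
        for x in data:
            # BMU = unit whose weight vector is nearest the input
            bmu = min(range(n_units),
                      key=lambda i: sum((w - v) ** 2
                                        for w, v in zip(weights[i], x)))
            # pull the BMU and its map neighbours toward the input
            for i in range(n_units):
                if abs(i - bmu) <= radius:
                    h = lr * (1.0 - abs(i - bmu) / (radius + 1.0))
                    weights[i] = [w + h * (v - w)
                                  for w, v in zip(weights[i], x)]
    return weights
```

After training on clustered data, different map units come to represent different clusters, which is the sense in which such maps form auditory or semantic "category boundaries."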
Sun, Hongyu; Takesian, Anne E; Wang, Ting Ting; Lippman-Bell, Jocelyn J; Hensch, Takao K; Jensen, Frances E
2018-05-29
Heightened neural excitability in infancy and childhood results in increased susceptibility to seizures. Such early-life seizures are associated with language deficits and autism that can result from aberrant development of the auditory cortex. Here, we show that early-life seizures disrupt a critical period (CP) for tonotopic map plasticity in primary auditory cortex (A1). We show that this CP is characterized by a prevalence of "silent," NMDA-receptor (NMDAR)-only, glutamate receptor synapses in auditory cortex that become "unsilenced" due to activity-dependent AMPA receptor (AMPAR) insertion. Induction of seizures prior to this CP occludes tonotopic map plasticity by prematurely unsilencing NMDAR-only synapses. Further, brief treatment with the AMPAR antagonist NBQX following seizures, prior to the CP, prevents synapse unsilencing and permits subsequent A1 plasticity. These findings reveal that early-life seizures modify CP regulators and suggest that therapeutic targets for early post-seizure treatment can rescue CP plasticity. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Lamas, Verónica; Estévez, Sheila; Pernía, Marianni; Plaza, Ignacio; Merchán, Miguel A
2017-10-11
The rat auditory cortex (AC) is becoming popular among auditory neuroscience investigators who are interested in experience-dependent plasticity, auditory perceptual processes, and cortical control of sound processing in the subcortical auditory nuclei. To address new challenges, a procedure to accurately locate and surgically expose the auditory cortex would expedite this research effort. Stereotactic neurosurgery is routinely used in pre-clinical research in animal models to engraft a needle or electrode at a pre-defined location within the auditory cortex. In the following protocol, we use stereotactic methods in a novel way. We identify four coordinate points over the surface of the temporal bone of the rat to define a window that, once opened, accurately exposes both the primary (A1) and secondary (Dorsal and Ventral) cortices of the AC. Using this method, we then perform a surgical ablation of the AC. After such a manipulation is performed, it is necessary to assess the localization, size, and extension of the lesions made in the cortex. Thus, we also describe a method to easily locate the AC ablation postmortem using a coordinate map constructed by transferring the cytoarchitectural limits of the AC to the surface of the brain. The combination of the stereotactically-guided location and ablation of the AC with the localization of the injured area in a coordinate map postmortem facilitates the validation of information obtained from the animal, and leads to a better analysis and comprehension of the data.
Brain-wide maps of Fos expression during fear learning and recall.
Cho, Jin-Hyung; Rendall, Sam D; Gray, Jesse M
2017-04-01
Fos induction during learning labels neuronal ensembles in the hippocampus that encode a specific physical environment, revealing a memory trace. In the cortex and other regions, the extent to which Fos induction during learning reveals specific sensory representations is unknown. Here we generate high-quality brain-wide maps of Fos mRNA expression during auditory fear conditioning and recall in the setting of the home cage. These maps reveal a brain-wide pattern of Fos induction that is remarkably similar among fear conditioning, shock-only, tone-only, and fear recall conditions, casting doubt on the idea that Fos reveals auditory-specific sensory representations. Indeed, novel auditory tones lead to as much gene induction in visual as in auditory cortex, while familiar (nonconditioned) tones do not appreciably induce Fos anywhere in the brain. Fos expression levels do not correlate with physical activity, suggesting that they are not determined by behavioral activity-driven alterations in sensory experience. In the thalamus, Fos is induced more prominently in limbic than in sensory relay nuclei, suggesting that Fos may be most sensitive to emotional state. Thus, our data suggest that Fos expression during simple associative learning labels ensembles activated generally by arousal rather than specifically by a particular sensory cue. © 2017 Cho et al.; Published by Cold Spring Harbor Laboratory Press.
Functional specialization of medial auditory belt cortex in the alert rhesus monkey.
Kusmierek, Pawel; Rauschecker, Josef P
2009-09-01
Responses of neural units in two areas of the medial auditory belt (middle medial area [MM] and rostral medial area [RM]) were tested with tones, noise bursts, monkey calls (MC), and environmental sounds (ES) in microelectrode recordings from two alert rhesus monkeys. For comparison, recordings were also performed from two core areas (primary auditory area [A1] and rostral area [R]) of the auditory cortex. All four fields showed cochleotopic organization, with best (center) frequency [BF(c)] gradients in A1 and MM running opposite to those in R and RM. The medial belt, located medially to the core areas, was characterized by a stronger preference for band-pass noise than for pure tones. Response latencies were shorter for the two more posterior (middle) areas MM and A1 than for the two rostral areas R and RM, reaching values as low as 6 ms for high BF(c) in MM and A1, and strongly depended on BF(c). The medial belt areas exhibited a higher selectivity to all stimuli, in particular to noise bursts, than the core areas. An increased selectivity to tones and noise bursts was also found in the anterior fields; the opposite was true for highly temporally modulated ES. Analysis of the structure of neural responses revealed that neurons were driven by low-level acoustic features in all fields. Thus medial belt areas RM and MM have to be considered early stages of auditory cortical processing. The anteroposterior difference in temporal processing indices suggests that R and RM may belong to a different hierarchical level or a different computational network than A1 and MM.
Auditory spatial processing in the human cortex.
Salminen, Nelli H; Tiitinen, Hannu; May, Patrick J C
2012-12-01
The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.
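The opponent-population ("hemifield code") scheme summarized in the review above lends itself to a compact numerical illustration. The sketch below is a minimal model, not drawn from the paper: it represents the left- and right-tuned populations as broadly tuned sigmoids over azimuth (the slope parameter is arbitrary) and recovers source location from their relative rates.

```python
import math

def hemifield_rate(azimuth_deg, preferred_side, slope=0.05):
    """Broadly tuned sigmoid firing rate (0..1) for one opponent population.

    preferred_side: +1 for the right-tuned channel, -1 for the left-tuned one.
    Illustrative parameters only; not fitted to physiological data.
    """
    return 1.0 / (1.0 + math.exp(-slope * preferred_side * azimuth_deg))

def decode_azimuth(rate_right, slope=0.05):
    """Invert the right-tuned channel's sigmoid to recover azimuth (degrees).

    With symmetric tuning the two channels are complementary
    (rate_left = 1 - rate_right), so one channel determines the estimate;
    in practice comparing both would average out noise.
    """
    r = max(min(rate_right, 1.0 - 1e-9), 1e-9)  # clamp away from 0 and 1
    return math.log(r / (1.0 - r)) / slope

source = 30.0  # degrees to the right of the midline
r_right = hemifield_rate(source, +1)
r_left = hemifield_rate(source, -1)
estimate = decode_azimuth(r_right)
```

Note that location is carried by the balance of two broadly tuned channels rather than by a labeled-line map, which is the key contrast with retinotopic coding drawn in the review.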
Auditory temporal processing skills in musicians with dyslexia.
Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha
2014-08-01
The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. Copyright © 2014 John Wiley & Sons, Ltd.
Temporal lobe stimulation reveals anatomic distinction between auditory naming processes.
Hamberger, M J; Seidel, W T; Goodman, R R; Perrine, K; McKhann, G M
2003-05-13
Language errors induced by cortical stimulation can provide insight into function(s) supported by the area stimulated. The authors observed that some stimulation-induced errors during auditory description naming were characterized by tip-of-the-tongue responses or paraphasic errors, suggesting expressive difficulty, whereas others were qualitatively different, suggesting receptive difficulty. They hypothesized that these two response types reflected disruption at different stages of auditory verbal processing and that these "subprocesses" might be supported by anatomically distinct cortical areas. Their objective was to explore the topographic distribution of error types in auditory verbal processing. Twenty-one patients requiring left temporal lobe surgery underwent preresection language mapping using direct cortical stimulation. Auditory naming was tested at temporal sites extending from 1 cm from the anterior tip to the parietal operculum. Errors were dichotomized as either "expressive" or "receptive," and the topographic distribution of error types was explored. Sites associated with the two error types were topographically distinct from one another. Most receptive sites were located in the middle portion of the superior temporal gyrus (STG), whereas most expressive sites fell outside this region, scattered along lateral temporal and temporoparietal cortex. Results raise clinical questions regarding the inclusion of the STG in temporal lobe epilepsy surgery and suggest that more detailed cortical mapping might enable better prediction of postoperative language decline. From a theoretical perspective, results carry implications regarding the understanding of structure-function relations underlying temporal lobe mediation of auditory language processing.
Multisensory guidance of orienting behavior.
Maier, Joost X; Groh, Jennifer M
2009-12-01
We use both vision and audition when localizing objects and events in our environment. However, these sensory systems receive spatial information in different coordinate systems: sounds are localized using inter-aural and spectral cues, yielding a head-centered representation of space, whereas the visual system uses an eye-centered representation of space, based on the site of activation on the retina. In addition, the visual system employs a place-coded, retinotopic map of space, whereas the auditory system's representational format is characterized by broad spatial tuning and a lack of topographical organization. A common view is that the brain needs to reconcile these differences in order to control behavior, such as orienting gaze to the location of a sound source. To accomplish this, it seems that either auditory spatial information must be transformed from a head-centered rate code to an eye-centered map to match the frame of reference used by the visual system, or vice versa. Here, we review a number of studies that have focused on the neural basis underlying such transformations in the primate auditory system. Although these studies have found some evidence for such transformations, many differences in the way the auditory and visual systems encode space exist throughout the auditory pathway. We will review these differences at the neural level, and will discuss them in relation to differences in the way auditory and visual information is used in guiding orienting movements.
Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.
Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M
2013-11-01
Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.
Skouras, Stavros; Lohmann, Gabriele
2018-01-01
Sound is a potent elicitor of emotions. Auditory core, belt and parabelt regions have anatomical connections to a large array of limbic and paralimbic structures which are involved in the generation of affective activity. However, little is known about the functional role of auditory cortical regions in emotion processing. Using functional magnetic resonance imaging and music stimuli that evoke joy or fear, our study reveals that anterior and posterior regions of auditory association cortex have emotion-characteristic functional connectivity with limbic/paralimbic (insula, cingulate cortex, and striatum), somatosensory, visual, motor-related, and attentional structures. We found that these regions have remarkably high emotion-characteristic eigenvector centrality, revealing that they have influential positions within emotion-processing brain networks with “small-world” properties. By contrast, primary auditory fields showed surprisingly strong emotion-characteristic functional connectivity with intra-auditory regions. Our findings demonstrate that the auditory cortex hosts regions that are influential within networks underlying the affective processing of auditory information. We anticipate our results to incite research specifying the role of the auditory cortex—and sensory systems in general—in emotion processing, beyond the traditional view that sensory cortices have merely perceptual functions. PMID:29385142
2014-07-01
Molecular evidence of stress-induced acute heart injury in a mouse model simulating posttraumatic stress disorder. Proc Natl Acad Sci U S A. 2014 Feb... obtaining measures aligned with the core neurocognitive domains: IQ, working memory (auditory/visual), processing speed, verbal memory (immediate... in the test sample and combined sample with a similar pattern for the validation sample. Similarly, performance on tests of auditory and visual
Auditory-motor Mapping for Pitch Control in Singers and Nonsingers
Jones, Jeffery A.; Keough, Dwayne
2009-01-01
Little is known about the basic processes underlying the behavior of singing. This experiment was designed to examine differences in the representation of the mapping between fundamental frequency (F0) feedback and the vocal production system in singers and nonsingers. Auditory feedback regarding F0 was shifted down in frequency while participants sang the consonant-vowel /ta/. During the initial frequency-altered trials, singers compensated to a lesser degree than nonsingers, but this difference was reduced with continued exposure to frequency-altered feedback. After brief exposure to frequency altered auditory feedback, both singers and nonsingers suddenly heard their F0 unaltered. When participants received this unaltered feedback, only singers' F0 values were found to be significantly higher than their F0 values produced during baseline and control trials. These aftereffects in singers were replicated when participants sang a different note than the note they produced while hearing altered feedback. Together, these results suggest that singers rely more on internal models than nonsingers to regulate vocal productions rather than real time auditory feedback. PMID:18592224
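The compensation behavior described in the abstract above is often modeled as trial-by-trial negative feedback: produced F0 is adjusted against the perceived deviation of heard pitch from the intended pitch, with a gain that reflects how strongly the speaker weights auditory feedback versus an internal model. The sketch below is a hypothetical toy model with made-up gain values, not parameter estimates from the study.

```python
def simulate_compensation(shift_cents, gain, trials):
    """Trial-by-trial partial compensation for a constant feedback shift.

    shift_cents: perturbation applied to heard pitch (negative = shifted down)
    gain: fraction of the perceived pitch error corrected per trial (0..1)
    Returns the produced F0 trajectory in cents relative to the intended pitch.
    """
    produced = 0.0  # cents re: target pitch
    trajectory = []
    for _ in range(trials):
        heard = produced + shift_cents  # feedback the participant hears
        produced -= gain * heard        # oppose the perceived deviation
        trajectory.append(produced)
    return trajectory

# A lower gain (more reliance on an internal model, as suggested for
# singers) compensates less per trial than a higher gain ("nonsinger").
singer = simulate_compensation(-100.0, 0.2, 20)
nonsinger = simulate_compensation(-100.0, 0.6, 20)
```

Both trajectories converge toward full compensation (+100 cents against a -100 cent shift), but the low-gain trajectory rises more slowly, mirroring the smaller initial compensation the study reports for singers.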
Izquierdo, M.A.; Oliver, D.L.; Malmierca, M.S.
2010-01-01
Summary. Introduction and development: Sensory systems show a topographic representation of the sensory epithelium in the central nervous system. In the auditory system this representation gives rise to tonotopic maps. For the last four decades, changes in these tonotopic maps have been widely studied, either after peripheral mechanical lesions or by exposing animals to an augmented acoustic environment. Such sensory manipulations induce plastic reorganization of the tonotopic map of the auditory cortex. By contrast, acoustic trauma does not seem to induce functional plasticity in the subcortical nuclei. The mechanisms that generate these changes differ in their molecular basis and time course, and two can be distinguished: those involving an active reorganization process, and those that simply reflect the loss of peripheral afferents. Only the former constitute a genuine process of plastic reorganization. Neuronal plasticity is critical for the normal development and function of the adult auditory system, as well as for the rehabilitation needed after the implantation of auditory prostheses. However, plasticity can also generate abnormal sensations such as tinnitus. Recently, a new concept in neurobiology, so-called 'neuronal stability', has emerged, and its implications and conceptual basis could help to improve the treatment of hearing loss. Conclusion: A combination of neuronal plasticity and stability is suggested as a powerful and promising future strategy in the design of new treatments for hearing loss. PMID:19340783
Earl, Brian R.; Chertoff, Mark E.
2012-01-01
Future implementation of regenerative treatments for sensorineural hearing loss may be hindered by the lack of diagnostic tools that specify the target(s) within the cochlea and auditory nerve for delivery of therapeutic agents. Recent research has indicated that the amplitude of high-level compound action potentials (CAPs) is a good predictor of overall auditory nerve survival, but does not pinpoint the location of neural damage. A location-specific estimate of nerve pathology may be possible by using a masking paradigm and high-level CAPs to map auditory nerve firing density throughout the cochlea. This initial study in gerbil utilized a high-pass masking paradigm to determine normative ranges for CAP-derived neural firing density functions using broadband chirp stimuli and low-frequency tonebursts, and to determine if cochlear outer hair cell (OHC) pathology alters the distribution of neural firing in the cochlea. Neural firing distributions for moderate-intensity (60 dB pSPL) chirps were affected by OHC pathology whereas those derived with high-level (90 dB pSPL) chirps were not. These results suggest that CAP-derived neural firing distributions for high-level chirps may provide an estimate of auditory nerve survival that is independent of OHC pathology. PMID:22280596
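The high-pass masking logic in the abstract above can be sketched numerically: as the masker's cutoff is lowered toward the apex, progressively more of the cochlea is unmasked, so the CAP increment between adjacent cutoffs estimates the neural firing contributed by each newly unmasked place. The amplitudes below are invented for illustration, not gerbil data, and the band edges are arbitrary.

```python
# Hypothetical CAP amplitudes (microvolts) as the high-pass masker cutoff
# is lowered from base toward apex, unmasking more of the cochlea each step.
cutoffs_khz = [32, 16, 8, 4, 2, 1]          # masker cutoffs, basal -> apical
cap_uv = [2.0, 5.0, 9.0, 12.0, 13.5, 14.0]  # made-up CAP amplitudes

# Firing density per octave band = CAP increment between adjacent cutoffs.
density = [hi - lo for lo, hi in zip(cap_uv, cap_uv[1:])]
bands = [f"{lo}-{hi} kHz" for lo, hi in zip(cutoffs_khz[1:], cutoffs_khz)]

# Map each octave band to its estimated contribution to the whole-nerve CAP.
profile = dict(zip(bands, density))
```

By construction the band contributions sum to the total unmasked CAP growth, which is what lets such a profile localize a region of reduced neural firing along the cochlea.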
Deshpande, Aniruddha K; Tan, Lirong; Lu, Long J; Altaye, Mekibib; Holland, Scott K
2016-01-01
Despite the positive effects of cochlear implantation, postimplant variability in speech perception and oral language outcomes is still difficult to predict. The aim of this study was to identify neuroimaging biomarkers of postimplant speech perception and oral language performance in children with hearing loss who receive a cochlear implant. The authors hypothesized positive correlations between blood oxygen level-dependent functional magnetic resonance imaging (fMRI) activation in brain regions related to auditory language processing and attention and scores on the Clinical Evaluation of Language Fundamentals-Preschool, Second Edition (CELF-P2) and the Early Speech Perception Test for Profoundly Hearing-Impaired Children (ESP), in children with congenital hearing loss. Eleven children with congenital hearing loss were recruited for the present study based on referral for clinical MRI and other inclusion criteria. All participants were <24 months at fMRI scanning and <36 months at first implantation. A silent background fMRI acquisition method was performed to acquire fMRI during auditory stimulation. A voxel-based analysis technique was utilized to generate z maps showing significant contrast in brain activation between auditory stimulation conditions (spoken narratives and narrow band noise). CELF-P2 and ESP were administered 2 years after implantation. Because most participants reached a ceiling on ESP, a voxel-wise regression analysis was performed between preimplant fMRI activation and postimplant CELF-P2 scores alone. Age at implantation and preimplant hearing thresholds were controlled in this regression analysis. Four brain regions were found to be significantly correlated with CELF-P2 scores. These clusters of positive correlation encompassed the temporo-parieto-occipital junction, areas in the prefrontal cortex and the cingulate gyrus. 
For the story versus silence contrast, the CELF-P2 core language score demonstrated significant positive correlation with activation in the right angular gyrus (r = 0.95), left medial frontal gyrus (r = 0.94), and left cingulate gyrus (r = 0.96). For the narrow band noise versus silence contrast, the CELF-P2 core language score exhibited significant positive correlation with activation in the left angular gyrus (r = 0.89; for all clusters, corrected p < 0.05). Four brain regions related to language function and attention were identified that correlated with CELF-P2 scores. Children with better oral language performance postimplant displayed greater activation in these regions preimplant. The results suggest that despite auditory deprivation, these regions remain receptive to gains in oral language development in children with hearing loss who receive early intervention via cochlear implantation. The present study suggests that oral language outcome following cochlear implantation may be predicted by preimplant fMRI with auditory stimulation using natural speech.
Addis, L; Friederici, A D; Kotz, S A; Sabisch, B; Barry, J; Richter, N; Ludwig, A A; Rübsamen, R; Albert, F W; Pääbo, S; Newbury, D F; Monaco, A P
2010-01-01
Despite the apparent robustness of language learning in humans, a large number of children still fail to develop appropriate language skills despite adequate means and opportunity. Most cases of language impairment have a complex etiology, with genetic and environmental influences. In contrast, we describe a three-generation German family who present with an apparently simple segregation of language impairment. Investigations of the family indicate auditory processing difficulties as a core deficit. Affected members performed poorly on a nonword repetition task and present with communication impairments. The brain activation pattern for syllable duration as measured by event-related brain potentials showed clear differences between affected family members and controls, with only affected members displaying a late discrimination negativity. In conjunction with psychoacoustic data showing deficiencies in auditory duration discrimination, the present results indicate increased processing demands in discriminating syllables of different duration. This, we argue, forms the cognitive basis of the observed language impairment in this family. Genome-wide linkage analysis showed a haplotype in the central region of chromosome 12 which reaches the maximum possible logarithm of odds ratio (LOD) score and fully co-segregates with the language impairment, consistent with an autosomal dominant, fully penetrant mode of inheritance. Whole genome analysis yielded no novel inherited copy number variants strengthening the case for a simple inheritance pattern. Several genes in this region of chromosome 12 which are potentially implicated in language impairment did not contain polymorphisms likely to be the causative mutation, which is as yet unknown. PMID:20345892
Hashimoto, Ryu-Ichiro; Itahashi, Takashi; Okada, Rieko; Hasegawa, Sayaka; Tani, Masayuki; Kato, Nobumasa; Mimura, Masaru
2018-01-01
Abnormalities in functional brain networks in schizophrenia have been studied by examining intrinsic and extrinsic brain activity under various experimental paradigms. However, the identified patterns of abnormal functional connectivity (FC) vary depending on the adopted paradigms. Thus, it is unclear whether and how these patterns are inter-related. In order to assess relationships between abnormal patterns of FC during intrinsic activity and those during extrinsic activity, we adopted a data-fusion approach and applied partial least square (PLS) analyses to FC datasets from 25 patients with chronic schizophrenia and 25 age- and sex-matched normal controls. For the input to the PLS analyses, we generated a pair of FC maps during the resting state (REST) and the auditory deviance response (ADR) from each participant using the common seed region in the left middle temporal gyrus, which is a focus of activity associated with auditory verbal hallucinations (AVHs). PLS correlation (PLS-C) analysis revealed that patients with schizophrenia have significantly lower loadings of a component containing positive FCs in default-mode network regions during REST and a component containing positive FCs in the auditory and attention-related networks during ADR. Specifically, loadings of the REST component were significantly correlated with the severities of positive symptoms and AVH in patients with schizophrenia. The co-occurrence of such altered FC patterns during REST and ADR was replicated using PLS regression, wherein FC patterns during REST are modeled to predict patterns during ADR. These findings provide an integrative understanding of altered FCs during intrinsic and extrinsic activity underlying core schizophrenia symptoms.
Auditory mismatch impairments are characterized by core neural dysfunctions in schizophrenia
Gaebler, Arnim Johannes; Mathiak, Klaus; Koten, Jan Willem; König, Andrea Anna; Koush, Yury; Weyer, David; Depner, Conny; Matentzoglu, Simeon; Edgar, James Christopher; Willmes, Klaus; Zvyagintsev, Mikhail
2015-01-01
Major theories on the neural basis of schizophrenic core symptoms highlight aberrant salience network activity (insula and anterior cingulate cortex), prefrontal hypoactivation, sensory processing deficits as well as an impaired connectivity between temporal and prefrontal cortices. The mismatch negativity is a potential biomarker of schizophrenia and its reduction might be a consequence of each of these mechanisms. In contrast to the previous electroencephalographic studies, functional magnetic resonance imaging may disentangle the involved brain networks at high spatial resolution and determine contributions from localized brain responses and functional connectivity to the schizophrenic impairments. Twenty-four patients and 24 matched control subjects underwent functional magnetic resonance imaging during an optimized auditory mismatch task. Haemodynamic responses and functional connectivity were compared between groups. These data sets further entered a diagnostic classification analysis to assess impairments on the individual patient level. In the control group, mismatch responses were detected in the auditory cortex, prefrontal cortex and the salience network (insula and anterior cingulate cortex). Furthermore, mismatch processing was associated with a deactivation of the visual system and the dorsal attention network indicating a shift of resources from the visual to the auditory domain. The patients exhibited reduced activation in all of the respective systems (right auditory cortex, prefrontal cortex, and the salience network) as well as reduced deactivation of the visual system and the dorsal attention network. Group differences were most prominent in the anterior cingulate cortex and adjacent prefrontal areas. The latter regions also exhibited a reduced functional connectivity with the auditory cortex in the patients. 
In the classification analysis, haemodynamic responses yielded a maximal accuracy of 83% based on four features; functional connectivity data performed similarly or worse for up to about 10 features but outperformed them when more than 10 features were included, yielding up to 90% accuracy. Among others, the most discriminating features represented functional connections between the auditory cortex and the anterior cingulate cortex as well as adjacent prefrontal areas. Auditory mismatch impairments incorporate major neural dysfunctions in schizophrenia. Our data suggest synergistic effects of sensory processing deficits, aberrant salience attribution, prefrontal hypoactivation, and disrupted connectivity between temporal and prefrontal cortices. These deficits are associated with subsequent disturbances in modality-specific resource allocation. Capturing different schizophrenic core dysfunctions, functional magnetic resonance imaging during this optimized mismatch paradigm reveals processing impairments at the individual patient level, rendering it a potential biomarker of schizophrenia. PMID:25743635
The topography of frequency and time representation in primate auditory cortices
Baumann, Simon; Joly, Olivier; Rees, Adrian; Petkov, Christopher I; Sun, Li; Thiele, Alexander; Griffiths, Timothy D
2015-01-01
Natural sounds can be characterised by their spectral content and temporal modulation, but how the brain is organized to analyse these two critical sound dimensions remains uncertain. Using functional magnetic resonance imaging, we demonstrate a topographical representation of amplitude modulation rate in the auditory cortex of awake macaques. The representation of this temporal dimension is organized in approximately concentric bands of equal rates across the superior temporal plane in both hemispheres, progressing from high rates in the posterior core to low rates in the anterior core and lateral belt cortex. In A1 the resulting gradient of modulation rate runs approximately perpendicular to the axis of the tonotopic gradient, suggesting an orthogonal organisation of spectral and temporal sound dimensions. In auditory belt areas this relationship is more complex. The data suggest a continuous representation of modulation rate across several physiological areas, in contradistinction to a separate representation of frequency within each area. DOI: http://dx.doi.org/10.7554/eLife.03256.001 PMID:25590651
Martí-Bonmatí, Luis; Lull, Juan José; García-Martí, Gracián; Aguilar, Eduardo J; Moratal-Pérez, David; Poyatos, Cecilio; Robles, Montserrat; Sanjuán, Julio
2007-08-01
To prospectively evaluate if functional magnetic resonance (MR) imaging abnormalities associated with auditory emotional stimuli coexist with focal brain reductions in schizophrenic patients with chronic auditory hallucinations. Institutional review board approval was obtained and all participants gave written informed consent. Twenty-one right-handed male patients with schizophrenia and persistent hallucinations (onset of hallucinations at a mean age of 23 years +/- 10, with 15 years +/- 8 of mean illness duration) and 10 healthy paired participants (same ethnic group [white], age, and education level [secondary school]) were studied. Functional echo-planar T2*-weighted (after both emotional and neutral auditory stimulation) and morphometric three-dimensional gradient-recalled echo T1-weighted MR images were analyzed using Statistical Parametric Mapping (SPM2) software. Brain activation images were extracted by contrasting responses to emotional and nonemotional words. Anatomic differences were explored by optimized voxel-based morphometry. The functional and morphometric MR images were overlaid to depict voxels statistically reported by both techniques. A coincidence map was generated by multiplying the emotional subtracted functional MR and volume decrement morphometric maps. Statistical analysis used the general linear model, Student t tests, random effects analyses, and analysis of covariance with a correction for multiple comparisons following the false discovery rate method. Large coinciding brain clusters (P < .005) were found in the left and right middle temporal and superior temporal gyri. Smaller coinciding clusters were found in the left posterior and right anterior cingular gyri, left inferior frontal gyrus, and middle occipital gyrus. The middle and superior temporal and the cingular gyri are closely related to the abnormal neural network involved in the auditory emotional dysfunction seen in schizophrenic patients.
Kajikawa, Yoshinao; Frey, Stephen; Ross, Deborah; Falchier, Arnaud; Hackett, Troy A; Schroeder, Charles E
2015-03-11
The superior temporal gyrus (STG) is on the inferior-lateral brain surface near the external ear. In macaques, 2/3 of the STG is occupied by an auditory cortical region, the "parabelt," which is part of a network of inferior temporal areas subserving communication and social cognition as well as object recognition and other functions. However, due to its location beneath the squamous temporal bone and temporalis muscle, the STG, like other inferior temporal regions, has been a challenging target for physiological studies in awake-behaving macaques. We designed a new procedure for implanting recording chambers to provide direct access to the STG, allowing us to evaluate neuronal properties and their topography across the full extent of the STG in awake-behaving macaques. Initial surveys of the STG have yielded several new findings. Unexpectedly, STG sites in monkeys that were listening passively responded to tones with magnitudes comparable to those of responses to 1/3 octave band-pass noise. Mapping results showed longer response latencies in more rostral sites and possible tonotopic patterns parallel to core and belt areas, suggesting the reversal of gradients between caudal and rostral parabelt areas. These results will help further exploration of parabelt areas. Copyright © 2015 the authors 0270-6474/15/354140-11$15.00/0.
Bioacoustic Signal Classification in Cat Auditory Cortex
1994-01-01
[Abstract garbled in the scanned source. Legible fragments refer to responses for fast FM sweeps, a second response maximum, and the orientation of the mapped area in one case (87-001), followed by reference-list fragments, e.g., Brashear, H.R., and Heilman, K.M. Pure word deafness after bilateral primary auditory cortex infarcts. Neurology 34: 347-352, 1984; Cranford, J.L., Stream...]
Visual map and instruction-based bicycle navigation: a comparison of effects on behaviour.
de Waard, Dick; Westerhuis, Frank; Joling, Danielle; Weiland, Stella; Stadtbäumer, Ronja; Kaltofen, Leonie
2017-09-01
Cycling with a classic paper map was compared with navigating with a moving map displayed on a smartphone, and with auditory and visual turn-by-turn route guidance. Spatial skills were found to be related to navigation performance, but only when navigating from a paper or electronic map, not with turn-by-turn (instruction-based) navigation. While navigating, cyclists fixated on the devices presenting visual information 25% of the time. Navigating from a paper map required the most mental effort, and both young and older cyclists preferred electronic over paper map navigation. In particular, a dedicated turn-by-turn guidance device was favoured. Visual maps are particularly useful for cyclists with higher spatial skills. Turn-by-turn information is used by all cyclists, and it is useful to make these directions available in all devices. Practitioner Summary: Electronic navigation devices are preferred over a paper map. People with lower spatial skills benefit most from turn-by-turn guidance information, presented either auditorily or on a dedicated device. People with higher spatial skills perform well with all devices. It is advised to keep in mind that all users benefit from turn-by-turn information when developing a navigation device for cyclists.
Küssner, Mats B.; Tidhar, Dan; Prior, Helen M.; Leech-Wilkinson, Daniel
2014-01-01
Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded classification and forced-choice matching tasks, investigations of representations of concurrently varied sound features (e.g., pitch, loudness and tempo) with overt gestures—accounting for the intrinsic link between movement and sound—are scant. To explore the role of bodily gestures in cross-modal mappings of auditory stimuli we asked 64 musically trained and untrained participants to represent pure tones—continually sounding and concurrently varied in pitch, loudness and tempo—with gestures while the sound stimuli were played. We hypothesized musical training to lead to more consistent mappings between pitch and height, loudness and distance/height, and tempo and speed of hand movement and muscular energy. Our results corroborate previously reported pitch vs. height (higher pitch leading to higher elevation in space) and tempo vs. speed (increasing tempo leading to increasing speed of hand movement) associations, but also reveal novel findings pertaining to musical training which influenced consistency of pitch mappings, annulling a commonly observed bias for convex (i.e., rising–falling) pitch contours. Moreover, we reveal effects of interactions between musical parameters on cross-modal mappings (e.g., pitch and loudness on speed of hand movement), highlighting the importance of studying auditory stimuli concurrently varied in different musical parameters. Results are discussed in light of cross-modal cognition, with particular emphasis on studies within (embodied) music cognition. Implications for theoretical refinements and potential clinical applications are provided. PMID:25120506
Interconnected growing self-organizing maps for auditory and semantic acquisition modeling
Cao, Mengxue; Li, Aijun; Fang, Qiang; Kaufmann, Emily; Kröger, Bernd J.
2014-01-01
Based on the incremental nature of knowledge acquisition, in this study we propose a growing self-organizing neural network approach for modeling the acquisition of auditory and semantic categories. We introduce an Interconnected Growing Self-Organizing Maps (I-GSOM) algorithm, which takes associations between auditory and semantic information into consideration. Direct phonetic–semantic association is simulated to model language acquisition in its early phases, such as the babbling and imitation stages, in which no phonological representations yet exist. Based on the I-GSOM algorithm, we conducted experiments using paired acoustic and semantic training data. We use a cyclical reinforcing and reviewing training procedure to model the teaching and learning process between children and their communication partners. A reinforcing-by-link training procedure and a link-forgetting procedure are introduced to model the acquisition of associative relations between auditory and semantic information. Experimental results indicate that (1) I-GSOM learns the auditory and semantic categories presented in the training data well; (2) clear auditory and semantic boundaries can be found in the network representation; (3) cyclical reinforcing and reviewing training leads to detailed categorization and clustering while keeping previously learned clusters and the already-developed network structure stable; and (4) reinforcing-by-link training leads to well-perceived auditory–semantic associations. Our I-GSOM model suggests that it is important to associate auditory information with semantic information during language acquisition. Despite its high level of abstraction, our I-GSOM approach can be interpreted as a biologically-inspired neurocomputational model. PMID:24688478
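The growing self-organizing map mechanism underlying I-GSOM can be illustrated with a minimal sketch. This is not the authors' I-GSOM implementation: the 1-D chain topology, growth rule, and all parameter values below are simplified assumptions chosen for illustration.

```python
import numpy as np

def train_gsom(data, max_nodes=20, epochs=200, growth_threshold=1.0,
               lr=0.5, seed=0):
    """Toy growing self-organizing map: a 1-D chain of code vectors that
    inserts a new node next to the best-matching unit whenever that unit's
    accumulated quantization error exceeds growth_threshold."""
    rng = np.random.default_rng(seed)
    nodes = [rng.standard_normal(data.shape[1]) for _ in range(2)]
    errors = [0.0, 0.0]
    for _ in range(epochs):
        for x in data:
            d = [np.linalg.norm(x - w) for w in nodes]
            b = int(np.argmin(d))            # best-matching unit (BMU)
            errors[b] += d[b]                # accumulate quantization error
            # move the winner and its chain neighbours toward the input
            for j in (b - 1, b, b + 1):
                if 0 <= j < len(nodes):
                    h = 1.0 if j == b else 0.5
                    nodes[j] += lr * h * (x - nodes[j])
            # grow: insert a midpoint node beside an overloaded unit
            if len(nodes) < max_nodes and errors[b] > growth_threshold:
                neighbour = nodes[min(b + 1, len(nodes) - 1)]
                nodes.insert(b + 1, (nodes[b] + neighbour) / 2)
                errors.insert(b + 1, 0.0)
                errors[b] = 0.0
        lr *= 0.99                           # decay the learning rate
    return np.array(nodes)
```

Trained on two well-separated clusters, the chain grows beyond its two initial nodes and its code vectors settle near the data, mirroring the incremental category formation the abstract describes.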
Saint-Amour, Dave; De Sanctis, Pierfilippo; Molholm, Sophie; Ritter, Walter; Foxe, John J
2007-02-01
Seeing a speaker's facial articulatory gestures powerfully affects speech perception, helping us overcome noisy acoustical environments. One particularly dramatic illustration of visual influences on speech perception is the "McGurk illusion", where dubbing an auditory phoneme onto video of an incongruent articulatory movement can often lead to illusory auditory percepts. This illusion is so strong that even in the absence of any real change in auditory stimulation, it activates the automatic auditory change-detection system, as indexed by the mismatch negativity (MMN) component of the auditory event-related potential (ERP). We investigated the putative left hemispheric dominance of McGurk-MMN using high-density ERPs in an oddball paradigm. Topographic mapping of the initial McGurk-MMN response showed a highly lateralized left hemisphere distribution, beginning at 175 ms. Subsequently, scalp activity was also observed over bilateral fronto-central scalp with a maximal amplitude at approximately 290 ms, suggesting later recruitment of right temporal cortices. Strong left hemisphere dominance was again observed during the last phase of the McGurk-MMN waveform (350-400 ms). Source analysis indicated bilateral sources in the temporal lobe just posterior to primary auditory cortex. While a single source in the right superior temporal gyrus (STG) accounted for the right hemisphere activity, two separate sources were required, one in the left transverse gyrus and the other in STG, to account for left hemisphere activity. These findings support the notion that visually driven multisensory illusory phonetic percepts produce an auditory-MMN cortical response and that left hemisphere temporal cortex plays a crucial role in this process.
Saint-Amour, Dave; De Sanctis, Pierfilippo; Molholm, Sophie; Ritter, Walter; Foxe, John J.
2006-01-01
Seeing a speaker’s facial articulatory gestures powerfully affects speech perception, helping us overcome noisy acoustical environments. One particularly dramatic illustration of visual influences on speech perception is the “McGurk illusion”, where dubbing an auditory phoneme onto video of an incongruent articulatory movement can often lead to illusory auditory percepts. This illusion is so strong that even in the absence of any real change in auditory stimulation, it activates the automatic auditory change-detection system, as indexed by the mismatch negativity (MMN) component of the auditory event-related potential (ERP). We investigated the putative left hemispheric dominance of McGurk-MMN using high-density ERPs in an oddball paradigm. Topographic mapping of the initial McGurk-MMN response showed a highly lateralized left hemisphere distribution, beginning at 175 ms. Subsequently, scalp activity was also observed over bilateral fronto-central scalp with a maximal amplitude at ~290 ms, suggesting later recruitment of right temporal cortices. Strong left hemisphere dominance was again observed during the last phase of the McGurk-MMN waveform (350–400 ms). Source analysis indicated bilateral sources in the temporal lobe just posterior to primary auditory cortex. While a single source in the right superior temporal gyrus (STG) accounted for the right hemisphere activity, two separate sources were required, one in the left transverse gyrus and the other in STG, to account for left hemisphere activity. These findings support the notion that visually driven multisensory illusory phonetic percepts produce an auditory-MMN cortical response and that left hemisphere temporal cortex plays a crucial role in this process. PMID:16757004
Terband, H; Maassen, B; Guenther, F H; Brumberg, J
2014-01-01
Differentiating the symptom complex due to phonological-level disorders, speech delay and pediatric motor speech disorders is a controversial issue in the field of pediatric speech and language pathology. The present study investigated the developmental interaction between neurological deficits in auditory and motor processes using computational modeling with the DIVA model. In a series of computer simulations, we investigated the effect of a motor processing deficit alone (MPD), and the effect of a motor processing deficit in combination with an auditory processing deficit (MPD+APD) on the trajectory and endpoint of speech motor development in the DIVA model. Simulation results showed that a motor programming deficit predominantly leads to deterioration on the phonological level (phonemic mappings) when auditory self-monitoring is intact, and on the systemic level (systemic mapping) if auditory self-monitoring is impaired. These findings suggest a close relation between quality of auditory self-monitoring and the involvement of phonological vs. motor processes in children with pediatric motor speech disorders. It is suggested that MPD+APD might be involved in typically apraxic speech output disorders and MPD in pediatric motor speech disorders that also have a phonological component. Possibilities to verify these hypotheses using empirical data collected from human subjects are discussed. The reader will be able to: (1) identify the difficulties in studying disordered speech motor development; (2) describe the differences in speech motor characteristics between SSD and subtype CAS; (3) describe the different types of learning that occur in the sensory-motor system during babbling and early speech acquisition; (4) identify the neural control subsystems involved in speech production; (5) describe the potential role of auditory self-monitoring in developmental speech disorders. Copyright © 2014 Elsevier Inc. All rights reserved.
Auditory Implant Research at the House Ear Institute 1989–2013
Shannon, Robert V.
2014-01-01
The House Ear Institute (HEI) had a long and distinguished history of auditory implant innovation and development. Early clinical innovations include being one of the first cochlear implant (CI) centers, being the first center to implant a child with a cochlear implant in the US, developing the auditory brainstem implant, and developing multiple surgical approaches and tools for Otology. This paper reviews the second stage of auditory implant research at House – in-depth basic research on perceptual capabilities and signal processing for both cochlear implants and auditory brainstem implants. Psychophysical studies characterized the loudness and temporal perceptual properties of electrical stimulation as a function of electrical parameters. Speech studies with the noise-band vocoder showed that only four bands of tonotopically arrayed information were sufficient for speech recognition, and that most implant users were receiving the equivalent of 8–10 bands of information. The noise-band vocoder allowed us to evaluate the effects of the manipulation of the number of bands, the alignment of the bands with the original tonotopic map, and distortions in the tonotopic mapping, including holes in the neural representation. Stimulation pulse rate was shown to have only a small effect on speech recognition. Electric fields were manipulated in position and sharpness, showing the potential benefit of improved tonotopic selectivity. Auditory training shows great promise for improving speech recognition for all patients. And the Auditory Brainstem Implant was developed and improved and its application expanded to new populations. Overall, the last 25 years of research at HEI helped increase the basic scientific understanding of electrical stimulation of hearing and contributed to the improved outcomes for patients with the CI and ABI devices. PMID:25449009
Sensing Super-position: Visual Instrument Sensor Replacement
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2006-01-01
The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This project addresses the technical feasibility of augmenting human vision through Sensing Super-position using a Visual Instrument Sensory Organ Replacement (VISOR). The current implementation of the VISOR device translates data from visual and other passive or active sensory instruments into sounds, which becomes valuable when visual resolution alone is insufficient for particularly difficult sensing tasks. A successful Sensing Super-position system meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, low-power device that takes into account the limited capabilities of the human user as well as the typical characteristics of the user's dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution by providing an auditory representation in addition to the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g. histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns.
The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
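The image-to-sound mapping described above, an audio signal laid out as a function of frequency and time, can be sketched minimally. The specific mapping below (left-to-right column scan as time, image row as sine frequency, pixel brightness as amplitude) is an illustrative assumption in the spirit of the abstract, not the actual VISOR transform.

```python
import numpy as np

def image_to_sound(image, duration=1.0, fs=8000, fmin=200.0, fmax=2000.0):
    """Scan a grayscale image column by column (left to right = time),
    mapping each row to a sine frequency (top row = highest pitch) and
    pixel brightness to that sine's amplitude; returns a mono waveform."""
    n_rows, n_cols = image.shape
    freqs = np.linspace(fmax, fmin, n_rows)       # top row -> highest pitch
    samples_per_col = int(duration * fs / n_cols)
    t = np.arange(samples_per_col) / fs
    audio = []
    for col in range(n_cols):
        tones = np.sin(2 * np.pi * np.outer(freqs, t))  # one sine per row
        frame = image[:, col] @ tones                   # brightness-weighted mix
        audio.append(frame)
    audio = np.concatenate(audio)
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio
```

A single bright pixel thus becomes a brief pure tone whose pitch encodes its height and whose onset time encodes its horizontal position; dark columns stay silent.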
Spine Formation and Maturation in the Developing Rat Auditory Cortex
Schachtele, Scott J.; Losh, Joe; Dailey, Michael E.; Green, Steven H.
2013-01-01
The rat auditory cortex is organized as a tonotopic map of sound frequency. This map is broadly tuned at birth and is refined during the first 3 weeks postnatal. The structural correlates underlying tonotopic map maturation and reorganization during development are poorly understood. We employed fluorescent dye ballistic labeling (“DiOlistics”) alone, or in conjunction with immunohistochemistry, to quantify synaptogenesis in the auditory cortex of normal hearing rats. We show that the developmental appearance of dendritic protrusions, which include both immature filopodia and mature spines, on layers 2/3, 4, and 5 pyramidal and layer 4 spiny nonpyramidal neurons occurs in three phases: slow addition of dendritic protrusions from postnatal day 4 (P4) to P9, rapid addition of dendritic protrusions from P9 to P19, and a final phase where mature protrusion density is achieved (>P21). Next, we combined DiOlistics with immunohistochemical labeling of bassoon, a presynaptic scaffolding protein, as a novel method to categorize dendritic protrusions as either filopodia or mature spines in cortex fixed in vivo. Using this method we observed an increase in the spine-to-filopodium ratio from P9–P16, indicating a period of rapid spine maturation. Previous studies report mature spines as being shorter in length compared to filopodia. We similarly observed a reduction in protrusion length between P9 and P16, corroborating our immunohistochemical spine maturation data. These studies show that dendritic protrusion formation and spine maturation occur rapidly at a time previously shown to correspond to auditory cortical tonotopic map refinement (P11–P14), providing a structural correlate of physiological maturation. PMID:21800311
Exploring Modality Compatibility in the Response-Effect Compatibility Paradigm.
Földes, Noémi; Philipp, Andrea M; Badets, Arnaud; Koch, Iring
2017-01-01
According to ideomotor theory, action planning is based on anticipatory perceptual representations of action-effects. This aspect of action control has been investigated in studies using the response-effect compatibility (REC) paradigm, in which responses have been shown to be facilitated if ensuing perceptual effects share codes with the response based on dimensional overlap (i.e., REC). Additionally, according to the notion of ideomotor compatibility, certain response-effect (R-E) mappings will be stronger than others because some response features resemble the anticipated sensory response effects more strongly than others (e.g., since vocal responses usually produce auditory effects, an auditory stimulus should be anticipated in a stronger manner following vocal responses rather than following manual responses). Yet, systematic research on this matter is lacking. In the present study, two REC experiments aimed to explore the influence of R-E modality mappings. In Experiment 1, vocal number word responses produced visual effects on the screen (digits vs. number words; i.e., visual-symbolic vs. visual-verbal effect codes). The REC effect was only marginally larger for visual-verbal than for visual-symbolic effects. Using verbal effect codes in Experiment 2, we found that the REC effect was larger with auditory-verbal R-E mapping than with visual-verbal R-E mapping. Overall, the findings support the hypothesis of a role of R-E modality mappings in REC effects, suggesting both further evidence for ideomotor accounts as well as code-specific and modality-specific contributions to effect anticipation.
Neuronal activity in primate auditory cortex during the performance of audiovisual tasks.
Brosch, Michael; Selezneva, Elena; Scheich, Henning
2015-03-01
This study aimed at a deeper understanding of which cognitive and motivational aspects of tasks affect auditory cortical activity. To this end we trained two macaque monkeys to perform two different tasks on the same audiovisual stimulus and to do this with two different sizes of water rewards. The monkeys had to touch a bar after a tone had been turned on together with an LED, and to hold the bar until either the tone (auditory task) or the LED (visual task) was turned off. In 399 multiunits recorded from core fields of auditory cortex we confirmed that during task engagement neurons responded to auditory and non-auditory stimuli that were task-relevant, such as light and water. We also confirmed that firing rates slowly increased or decreased for several seconds during various phases of the tasks. Responses to non-auditory stimuli and slow firing changes were observed during both the auditory and the visual task, with some differences between them. There was also a weak task-dependent modulation of the responses to auditory stimuli. In contrast to these cognitive aspects, motivational aspects of the tasks were not reflected in the firing, except during delivery of the water reward. In conclusion, the present study supports our previous proposal that there are two response types in the auditory cortex that represent the timing and the type of auditory and non-auditory elements of auditory tasks, as well as the associations between elements. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Scheperle, Rachel A; Abbas, Paul J
2015-01-01
The ability to perceive speech is related to the listener's ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli. Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. 
Speech-perception measures included vowel discrimination and the Bamford-Kowal-Bench Speech-in-Noise test. Spatial and spectral selectivity and speech perception were expected to be poorest with MAP 1 (closest electrode spacing) and best with MAP 3 (widest electrode spacing). Relationships among the electrophysiological and speech-perception measures were evaluated using mixed-model and simple linear regression analyses. All electrophysiological measures were significantly correlated with each other and with speech scores for the mixed-model analysis, which takes into account multiple measures per person (i.e., experimental MAPs). The ECAP measures were the best predictor. In the simple linear regression analysis on MAP 3 data, only the cortical measures were significantly correlated with speech scores; spectral auditory change complex amplitude was the strongest predictor. The results suggest that both peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about speech perception. Clinically, it is often desirable to optimize performance for individual CI users. These results suggest that ECAP measures may be most useful for within-subject applications when multiple measures are performed to make decisions about processor options. They also suggest that if the goal is to compare performance across individuals based on a single measure, then processing central to the auditory nerve (specifically, cortical measures of discriminability) should be considered.
Lin, Yung-Song
2009-03-01
Cochlear implantation via the scala vestibuli is a viable approach in those with ossification in the scala tympani. With extended cochlear implant experience, there is no significant difference in the mapping parameters and auditory performance between those implanted via scala vestibuli and via scala tympani. To assess the clinical outcomes of cochlear implantation via scala vestibuli. In a cohort follow-up study, 11 prelingually deafened children who received cochlear implantation between age 3 and 10 years through the scala vestibuli served as participants. The mapping parameters (i.e. comfortable level (C), threshold level (T), dynamic range) and auditory performance of each participant were evaluated following initial cochlear implant stimulation, then at 3-month intervals for 2 years, then semi-annually. The follow-up period lasted for 9 years 9 months on average, with a minimum of 8 years 3 months. The clinical results of the mapping parameters and auditory performance of children implanted via the scala vestibuli were comparable to those of children implanted via the scala tympani. No balance problem was reported by any of these patients. One child exhibited residual low-frequency hearing after implantation.
Auditory spatial processing in Alzheimer’s disease
Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.
2015-01-01
The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. 
Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer’s disease syndromic spectrum. PMID:25468732
Suga, Nobuo
2018-04-01
For echolocation, mustached bats emit velocity-sensitive orientation sounds (pulses) containing a constant-frequency component consisting of four harmonics (CF1-4). They show unique behavior called Doppler-shift compensation for Doppler-shifted echoes and hunting behavior for frequency- and amplitude-modulated echoes from fluttering insects. Their peripheral auditory system is highly specialized for fine frequency analysis of CF2 (∼61.0 kHz) and detecting echo CF2 from fluttering insects. In their central auditory system, lateral inhibition occurring at multiple levels sharpens V-shaped frequency-tuning curves at the periphery and creates sharp spindle-shaped tuning curves and amplitude tuning. The large CF2-tuned area of the auditory cortex systematically represents the frequency and amplitude of CF2 in a frequency-versus-amplitude map. "CF/CF" neurons are tuned to a specific combination of pulse CF1 and Doppler-shifted echo CF2 or CF3. They are tuned to specific velocities. CF/CF neurons cluster in the CC ("C" stands for CF) and DIF (dorsal intrafossa) areas of the auditory cortex. The CC area has the velocity map for Doppler imaging. The DIF area serves particularly for Doppler imaging of other bats approaching in cruising flight. To optimize the processing of behaviorally relevant sounds, cortico-cortical interactions and corticofugal feedback modulate the frequency tuning of cortical and sub-cortical auditory neurons and cochlear hair cells through a neural net consisting of positive feedback associated with lateral inhibition. Copyright © 2018 Elsevier B.V. All rights reserved.
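The velocity sensitivity described above rests on simple Doppler arithmetic: for a bat closing on a stationary reflector at speed v much smaller than the speed of sound c, the returning echo is shifted by approximately Δf ≈ (2v/c)·f. A small sketch of that relation follows; the 61.0 kHz CF2 value comes from the abstract, while the flight speed and speed of sound used in the example are illustrative assumptions, not measured values.

```python
def doppler_echo_frequency(f_emitted_hz, v_ms, c_ms=343.0):
    """Approximate two-way Doppler shift for a moving emitter/receiver and a
    stationary reflector (first-order approximation, valid for v << c)."""
    return f_emitted_hz * (1.0 + 2.0 * v_ms / c_ms)

def velocity_from_shift(f_emitted_hz, f_echo_hz, c_ms=343.0):
    """Invert the approximation: recover relative velocity from the shift."""
    return c_ms * (f_echo_hz - f_emitted_hz) / (2.0 * f_emitted_hz)
```

For example, a bat emitting CF2 at 61.0 kHz while closing at an assumed 5 m/s would hear an echo near 62.78 kHz, a shift of roughly 1.78 kHz that Doppler-shift compensation would offset by lowering the emitted pulse frequency.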
Electrostimulation mapping of comprehension of auditory and visual words.
Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François
2015-10-01
In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine graded, sub-centimetre, cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing. Copyright © 2015 Elsevier Ltd. All rights reserved.
Demopoulos, Carly; Hopkins, Joyce; Kopald, Brandon E; Paulson, Kim; Doyle, Lauren; Andrews, Whitney E; Lewine, Jeffrey David
2015-11-01
The primary aim of this study was to examine whether there is an association between magnetoencephalography-based (MEG) indices of basic cortical auditory processing and vocal affect recognition (VAR) ability in individuals with autism spectrum disorder (ASD). MEG data were collected from 25 children/adolescents with ASD and 12 control participants using a paired-tone paradigm to measure quality of auditory physiology, sensory gating, and rapid auditory processing. Group differences were examined in auditory processing and vocal affect recognition ability. The relationship between differences in auditory processing and vocal affect recognition deficits was examined in the ASD group. Replicating prior studies, participants with ASD showed longer M1n latencies and impaired rapid processing compared with control participants. These variables were significantly related to VAR, with the linear combination of auditory processing variables accounting for approximately 30% of the variability after controlling for age and language skills in participants with ASD. VAR deficits in ASD are typically interpreted as part of a core, higher order dysfunction of the "social brain"; however, these results suggest they also may reflect basic deficits in auditory processing that compromise the extraction of socially relevant cues from the auditory environment. As such, they also suggest that therapeutic targeting of sensory dysfunction in ASD may have additional positive implications for other functional deficits.
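The variance-partitioning step described here (auditory measures accounting for roughly 30% of VAR variability after controlling for age and language) amounts to a hierarchical regression comparing nested models. A minimal sketch on simulated data follows; all values, effect sizes, and variable names are illustrative assumptions, not the study's data:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit of y on X (intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
n = 25  # matches the ASD group size; data below are simulated
age = rng.normal(11.0, 3.0, n)
language = rng.normal(0.0, 1.0, n)
m1n_latency = rng.normal(0.0, 1.0, n)  # hypothetical auditory predictors
rapid_proc = rng.normal(0.0, 1.0, n)
var_score = (0.15 * age + 0.2 * language
             - 0.6 * m1n_latency + 0.5 * rapid_proc
             + rng.normal(0.0, 1.0, n))

# Step 1: covariates only; Step 2: add the auditory predictors
base = r_squared(np.column_stack([age, language]), var_score)
full = r_squared(np.column_stack([age, language, m1n_latency, rapid_proc]),
                 var_score)
delta_r2 = full - base  # variance uniquely tied to the auditory measures
```

The quantity of interest is `delta_r2`, the increment in explained variance when the auditory predictors enter after the covariates; for nested OLS models this increment is never negative in-sample.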
Auditory-motor entrainment and phonological skills: precise auditory timing hypothesis (PATH).
Tierney, Adam; Kraus, Nina
2014-01-01
Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel, 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The PATH predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.
Functional MRI of the vocalization-processing network in the macaque brain
Ortiz-Rios, Michael; Kuśmierek, Paweł; DeWitt, Iain; Archakov, Denis; Azevedo, Frederico A. C.; Sams, Mikko; Jääskeläinen, Iiro P.; Keliris, Georgios A.; Rauschecker, Josef P.
2015-01-01
Using functional magnetic resonance imaging in awake behaving monkeys we investigated how species-specific vocalizations are represented in auditory and auditory-related regions of the macaque brain. We found clusters of active voxels along the ascending auditory pathway that responded to various types of complex sounds: inferior colliculus (IC), medial geniculate nucleus (MGN), auditory core, belt, and parabelt cortex, and other parts of the superior temporal gyrus (STG) and sulcus (STS). Regions sensitive to monkey calls were most prevalent in the anterior STG, but some clusters were also found in frontal and parietal cortex on the basis of comparisons between responses to calls and environmental sounds. Surprisingly, we found that spectrotemporal control sounds derived from the monkey calls (“scrambled calls”) also activated the parietal and frontal regions. Taken together, our results demonstrate that species-specific vocalizations in rhesus monkeys activate preferentially the auditory ventral stream, and in particular areas of the antero-lateral belt and parabelt. PMID:25883546
Mayo's Older Americans Normative Studies (MOANS): Factor Structure of a Core Battery.
ERIC Educational Resources Information Center
Smith, Glenn E.; And Others
1992-01-01
Using the Mayo Older Americans Normative Studies (MOANS) group (526 55-to 97-year-old adults), factor models were examined for the Wechsler Adult Intelligence Scale-Revised (WAIS-R); the Wechsler Memory Scale (WMS); and a core battery of the WAIS-R, the WMS, and the Rey Auditory-Verbal Learning Test. (SLD)
Background sounds contribute to spectrotemporal plasticity in primary auditory cortex.
Moucha, Raluca; Pandya, Pritesh K; Engineer, Navzer D; Rathbun, Daniel L; Kilgard, Michael P
2005-05-01
The mammalian auditory system evolved to extract meaningful information from complex acoustic environments. Spectrotemporal selectivity of auditory neurons provides a potential mechanism to represent natural sounds. Experience-dependent plasticity mechanisms can remodel the spectrotemporal selectivity of neurons in primary auditory cortex (A1). Electrical stimulation of the cholinergic nucleus basalis (NB) enables plasticity in A1 that parallels natural learning and is specific to acoustic features associated with NB activity. In this study, we used NB stimulation to explore how cortical networks reorganize after experience with frequency-modulated (FM) sweeps, and how background stimuli contribute to spectrotemporal plasticity in rat auditory cortex. Pairing an 8-4 kHz FM sweep with NB stimulation 300 times per day for 20 days decreased tone thresholds, frequency selectivity, and response latency of A1 neurons in the region of the tonotopic map activated by the sound. In an attempt to modify neuronal response properties across all of A1 the same NB activation was paired in a second group of rats with five downward FM sweeps, each spanning a different octave. No changes in FM selectivity or receptive field (RF) structure were observed when the neural activation was distributed across the cortical surface. However, the addition of unpaired background sweeps of different rates or direction was sufficient to alter RF characteristics across the tonotopic map in a third group of rats. These results extend earlier observations that cortical neurons can develop stimulus specific plasticity and indicate that background conditions can strongly influence cortical plasticity.
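The paired stimulus above is a downward FM sweep spanning one octave (8 to 4 kHz). Such a sweep can be synthesized with a constant octaves-per-second frequency trajectory; the 250 ms duration and 44.1 kHz sampling rate below are illustrative assumptions, not parameters from the study:

```python
import numpy as np

def log_fm_sweep(f_start, f_end, dur, fs):
    """Logarithmic FM sweep: frequency moves at a constant octave rate.
    Instantaneous frequency f(t) = f_start * exp(k*t), with
    k = ln(f_end/f_start)/dur; the phase is the integral of f(t)."""
    t = np.arange(int(dur * fs)) / fs
    k = np.log(f_end / f_start) / dur
    phase = 2.0 * np.pi * f_start * (np.exp(k * t) - 1.0) / k
    return np.sin(phase)

fs = 44100
sweep = log_fm_sweep(8000.0, 4000.0, 0.25, fs)  # 8 -> 4 kHz, one octave down
```

Because the phase is integrated analytically, the sweep starts exactly at 8 kHz and glides smoothly to 4 kHz with no discontinuities.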
Early auditory processing in area V5/MT+ of the congenitally blind brain.
Watkins, Kate E; Shakespeare, Timothy J; O'Donoghue, M Clare; Alexander, Iona; Ragge, Nicola; Cowey, Alan; Bridge, Holly
2013-11-13
Previous imaging studies of congenital blindness have studied individuals with heterogeneous causes of blindness, which may influence the nature and extent of cross-modal plasticity. Here, we scanned a homogeneous group of blind people with bilateral congenital anophthalmia, a condition in which both eyes fail to develop, and, as a result, the visual pathway is not stimulated by either light or retinal waves. This model of congenital blindness presents an opportunity to investigate the effects of very early visual deafferentation on the functional organization of the brain. In anophthalmic animals, the occipital cortex receives direct subcortical auditory input. We hypothesized that this pattern of subcortical reorganization ought to result in a topographic mapping of auditory frequency information in the occipital cortex of anophthalmic people. Using functional MRI, we examined auditory-evoked activity to pure tones of high, medium, and low frequencies. Activity in the superior temporal cortex was significantly reduced in anophthalmic compared with sighted participants. In the occipital cortex, a region corresponding to the cytoarchitectural area V5/MT+ was activated in the anophthalmic participants but not in sighted controls. Whereas previous studies in the blind indicate that this cortical area is activated to auditory motion, our data show it is also active for trains of pure tone stimuli and in some anophthalmic participants shows a topographic mapping (tonotopy). Therefore, this region appears to be performing early sensory processing, possibly served by direct subcortical input from the pulvinar to V5/MT+.
Godfrey, Donald A; Chen, Kejian; O'Toole, Thomas R; Mustapha, Abdurrahman I A A
2017-07-01
Older adults generally experience difficulties with hearing. Age-related changes in the chemistry of central auditory regions, especially the chemistry underlying synaptic transmission between neurons, may be of particular relevance for hearing changes. In this study, we used quantitative microchemical methods to map concentrations of amino acids, including the major neurotransmitters of the brain, in all the major central auditory structures of young (6 months), middle-aged (22 months), and old (33 months) Fischer 344 x Brown Norway rats. In addition, some amino acid measurements were made for vestibular nuclei, and activities of choline acetyltransferase, the enzyme for acetylcholine synthesis, were mapped in the superior olive and auditory cortex. In old, as compared to young, rats, glutamate concentrations were lower throughout central auditory regions. Aspartate and glycine concentrations were significantly lower in many and GABA and taurine concentrations in some cochlear nucleus and superior olive regions. Glutamine concentrations and choline acetyltransferase activities were higher in most auditory cortex layers of old rats as compared to young. Where there were differences between young and old rats, amino acid concentrations in middle-aged rats often lay between those in young and old rats, suggesting gradual changes during adult life. The results suggest that hearing deficits in older adults may relate to decreases in excitatory (glutamate) as well as inhibitory (glycine and GABA) neurotransmitter amino acid functions. Chemical changes measured in aged rats often differed from changes measured after manipulations that directly damage the cochlea, suggesting that chemical changes during aging may not all be secondary to cochlear damage.
The Influence of Tactile Cognitive Maps on Auditory Space Perception in Sighted Persons.
Tonelli, Alessia; Gori, Monica; Brayda, Luca
2016-01-01
We have recently shown that vision is important to improve spatial auditory cognition. In this study, we investigate whether touch is as effective as vision to create a cognitive map of a soundscape. In particular, we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people - one experimental and one control group - in an auditory space bisection task. In the first group, the bisection task was performed three times: specifically, the participants explored with their hands the 3D tactile model of the room and were led along the perimeter of the room between the first and the second execution of the space bisection. Then, they were allowed to remove the blindfold for a few minutes and look at the room between the second and third execution of the space bisection. Instead, the control group repeated the space bisection task twice in a row without performing any environmental exploration in between. Considering the first execution as a baseline, we found an improvement in precision after the tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that visual cues provided no further benefit once spatial tactile cues were internalized. No improvement was found between the first and the second execution of the space bisection without environmental exploration in the control group, suggesting that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing auditory space task as effectively as visual information does. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound propagation.
Do informal musical activities shape auditory skill development in preschool-age children?
Putkinen, Vesa; Saarikivi, Katri; Tervaniemi, Mari
2013-08-29
The influence of formal musical training on auditory cognition has been well established. For the majority of children, however, musical experience does not primarily consist of adult-guided training on a musical instrument. Instead, young children mostly engage in everyday musical activities such as singing and musical play. Here, we review recent electrophysiological and behavioral studies carried out in our laboratory and elsewhere which have begun to map how developing auditory skills are shaped by such informal musical activities both at home and in playschool-type settings. Although more research is still needed, the evidence emerging from these studies suggests that, in addition to formal musical training, informal musical activities can also influence the maturation of auditory discrimination and attention in preschool-aged children.
Benson, John; Payabvash, Seyedmehdi; Salazar, Pascal; Jagadeesan, Bharathi; Palmer, Christopher S; Truwit, Charles L; McKinney, Alexander M
2015-04-01
To assess the accuracy and reliability of one vendor's (Vital Images, Toshiba Medical, Minnetonka, MN) automated CT perfusion (CTP) summary maps in identification and volume estimation of infarcted tissue in patients with acute middle cerebral artery (MCA) distribution infarcts. From 1085 CTP examinations over 5.5 years, 43 diffusion-weighted imaging (DWI)-positive patients were included who underwent both CTP and DWI <12 h after symptom onset, with another 43 age-matched patients as controls (DWI-negative). Automated delay-corrected postprocessing software (DC-SVD) generated both infarct "core only" and "core+penumbra" CTP summary maps. Three reviewers independently tabulated Alberta Stroke Program Early CT scores (ASPECTS) of both CTP summary maps and coregistered DWI. Of 86 included patients, 36 had DWI infarct volumes ≤70 ml, 7 had volumes >70 ml, and 43 were negative; the automated CTP "core only" map correctly classified each as >70 ml or ≤70 ml, while the "core+penumbra" map misclassified 4 as >70 ml. There were strong correlations between DWI volume with both summary map-based volumes: "core only" (r=0.93), and "core+penumbra" (r=0.77) (both p<0.0001). Agreement between ASPECTS scores of infarct core on DWI with summary maps was 0.65-0.74 for "core only" map, and 0.61-0.65 for "core+penumbra" (both p<0.0001). Using DWI-based ASPECTS scores as the standard, the accuracy of the CTP-based maps was 79.1-86.0% for the "core only" map, and 83.7-88.4% for "core+penumbra." Automated CTP summary maps appear to be relatively accurate in both the detection of acute MCA distribution infarcts, and the discrimination of volumes using a 70 ml threshold.
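The two headline statistics here, a Pearson correlation between DWI and CTP volume estimates and agreement on the 70 ml triage threshold, can be sketched as below. The paired volumes are invented for illustration; they are not the study's data:

```python
import numpy as np

# Hypothetical paired infarct-volume estimates (ml): DWI reference vs. CTP map.
dwi = np.array([5.0, 12.0, 30.0, 45.0, 68.0, 80.0, 95.0, 0.0, 22.0, 55.0])
ctp = np.array([7.0, 10.0, 35.0, 40.0, 72.0, 85.0, 90.0, 3.0, 25.0, 50.0])

# Pearson correlation between the two volume estimates
r = np.corrcoef(dwi, ctp)[0, 1]

# Fraction of cases where both methods agree on the 70 ml threshold
agree = float(np.mean((dwi > 70.0) == (ctp > 70.0)))
```

Note that a high correlation and high threshold agreement are distinct criteria: the case at 68 vs. 72 ml barely affects `r` yet flips the >70 ml classification.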
Nieto-Diego, Javier; Malmierca, Manuel S.
2016-01-01
Stimulus-specific adaptation (SSA) in single neurons of the auditory cortex was suggested to be a potential neural correlate of the mismatch negativity (MMN), a widely studied component of the auditory event-related potentials (ERP) that is elicited by changes in the auditory environment. However, several aspects on this SSA/MMN relation remain unresolved. SSA occurs in the primary auditory cortex (A1), but detailed studies on SSA beyond A1 are lacking. To study the topographic organization of SSA, we mapped the whole rat auditory cortex with multiunit activity recordings, using an oddball paradigm. We demonstrate that SSA occurs outside A1 and differs between primary and nonprimary cortical fields. In particular, SSA is much stronger and develops faster in the nonprimary than in the primary fields, paralleling the organization of subcortical SSA. Importantly, strong SSA is present in the nonprimary auditory cortex within the latency range of the MMN in the rat and correlates with an MMN-like difference wave in the simultaneously recorded local field potentials (LFP). We present new and strong evidence linking SSA at the cellular level to the MMN, a central tool in cognitive and clinical neuroscience. PMID:26950883
Mawson, Kerry
2014-04-01
The aim of this study was to determine if simulation aided by media technology contributes towards an increase in knowledge, empathy, and a change in attitudes in regards to auditory hallucinations for nursing students. A convenience sample of 60 second-year undergraduate nursing students from an Australian university was invited to be part of the study. A pre-post-test design was used, with data analysed using a paired samples t-test to identify pre- and post-changes on nursing students' scores on knowledge of auditory hallucinations. Nine of the 11 questions reported statistically-significant results. The remaining two questions highlighted knowledge embedded within the curriculum, with therapeutic communication being the core work of mental health nursing. The implications for practice are that simulation aided by media technology increases the knowledge of students in regards to auditory hallucinations.
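The pre-post comparison here rests on a paired-samples t-test computed on difference scores. A minimal sketch follows, using invented pre/post knowledge scores rather than the study's data:

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic on difference scores (post - pre)."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    # sample standard deviation of the differences (ddof = 1)
    sd_d = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1  # t statistic and degrees of freedom

# Hypothetical knowledge scores before and after the simulation session
pre = [4, 5, 6, 5, 4, 6, 5, 7, 5, 6]
post = [7, 8, 8, 7, 6, 9, 7, 9, 8, 8]
t, dof = paired_t(pre, post)
```

With n - 1 = 9 degrees of freedom, a two-tailed result is significant at alpha = .05 when |t| exceeds roughly 2.262.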
A psychophysiological evaluation of the perceived urgency of auditory warning signals
NASA Technical Reports Server (NTRS)
Burt, J. L.; Bartolome, D. S.; Burdette, D. W.; Comstock, J. R. Jr
1995-01-01
One significant concern that pilots have about cockpit auditory warnings is that the signals presently used lack a sense of priority. The relationship between auditory warning sound parameters and perceived urgency is, therefore, an important topic of enquiry in aviation psychology. The present investigation examined the relationship among subjective assessments of urgency, reaction time, and brainwave activity with three auditory warning signals. Subjects performed a tracking task involving automated and manual conditions, and were presented with auditory warnings having various levels of perceived and situational urgency. Subjective assessments revealed that subjects were able to rank warnings on an urgency scale, but rankings were altered after warnings were mapped to a situational urgency scale. Reaction times differed between automated and manual tracking task conditions, and physiological data showed attentional differences in response to perceived and situational warning urgency levels. This study shows that the use of physiological measures sensitive to attention and arousal, in conjunction with behavioural and subjective measures, may lead to the design of auditory warnings that produce a sense of urgency in an operator that matches the urgency of the situation.
Forebrain pathway for auditory space processing in the barn owl.
Cohen, Y E; Miller, G L; Knudsen, E I
1998-02-01
The forebrain plays an important role in many aspects of sound localization behavior. Yet, the forebrain pathway that processes auditory spatial information is not known for any species. Using standard anatomic labeling techniques, we used a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L eliminated completely auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway.
Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf
Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao
2016-01-01
Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461
Dual streams of auditory afferents target multiple domains in the primate prefrontal cortex
Romanski, L. M.; Tian, B.; Fritz, J.; Mishkin, M.; Goldman-Rakic, P. S.; Rauschecker, J. P.
2009-01-01
‘What’ and ‘where’ visual streams define ventrolateral object and dorsolateral spatial processing domains in the prefrontal cortex of nonhuman primates. We looked for similar streams for auditory–prefrontal connections in rhesus macaques by combining microelectrode recording with anatomical tract-tracing. Injection of multiple tracers into physiologically mapped regions AL, ML and CL of the auditory belt cortex revealed that anterior belt cortex was reciprocally connected with the frontal pole (area 10), rostral principal sulcus (area 46) and ventral prefrontal regions (areas 12 and 45), whereas the caudal belt was mainly connected with the caudal principal sulcus (area 46) and frontal eye fields (area 8a). Thus separate auditory streams originate in caudal and rostral auditory cortex and target spatial and non-spatial domains of the frontal lobe, respectively. PMID:10570492
Brown, Erik C.; Rothermel, Robert; Nishida, Masaaki; Juhász, Csaba; Muzik, Otto; Hoechstetter, Karsten; Sood, Sandeep; Chugani, Harry T.; Asano, Eishi
2008-01-01
We determined if high-frequency gamma-oscillations (50- to 150-Hz) were induced by simple auditory communication over the language network areas in children with focal epilepsy. Four children (ages: 7, 9, 10 and 16 years) with intractable left-hemispheric focal epilepsy underwent extraoperative electrocorticography (ECoG) as well as language mapping using neurostimulation and auditory-language-induced gamma-oscillations on ECoG. The audible communication was recorded concurrently and integrated with ECoG recording to allow for accurate time-lock upon ECoG analysis. In three children, who successfully completed the auditory-language task, high-frequency gamma-augmentation sequentially involved: i) the posterior superior temporal gyrus when listening to the question, ii) the posterior lateral temporal region and the posterior frontal region in the time interval between question completion and the patient’s vocalization, and iii) the pre- and post-central gyri immediately preceding and during the patient’s vocalization. The youngest child, with attention deficits, failed to cooperate during the auditory-language task, and high-frequency gamma-augmentation was noted only in the posterior superior temporal gyrus when audible questions were given. The size of language areas suggested by statistically-significant high-frequency gamma-augmentation was larger than that defined by neurostimulation. The present method can provide in-vivo imaging of electrophysiological activities over the language network areas during language processes. Further studies are warranted to determine whether recording of language-induced gamma-oscillations can supplement language mapping using neurostimulation in presurgical evaluation of children with focal epilepsy. PMID:18455440
Harmonic template neurons in primate auditory cortex underlying complex sound processing
Feng, Lei
2017-01-01
Harmonicity is a fundamental element of music, speech, and animal vocalizations. How the auditory system extracts harmonic structures embedded in complex sounds and uses them to form a coherent unitary entity is not fully understood. Despite the prevalence of sounds rich in harmonic structures in our everyday hearing environment, it has remained largely unknown what neural mechanisms are used by the primate auditory cortex to extract these biologically important acoustic structures. In this study, we discovered a unique class of harmonic template neurons in the core region of auditory cortex of a highly vocal New World primate, the common marmoset (Callithrix jacchus), across the entire hearing frequency range. Marmosets have a rich vocal repertoire and a similar hearing range to that of humans. Responses of these neurons show nonlinear facilitation to harmonic complex sounds over inharmonic sounds, selectivity for particular harmonic structures beyond two-tone combinations, and sensitivity to harmonic number and spectral regularity. Our findings suggest that the harmonic template neurons in auditory cortex may play an important role in processing sounds with harmonic structures, such as animal vocalizations, human speech, and music. PMID:28096341
Crossmodal association of auditory and visual material properties in infants.
Ujiie, Yuta; Yamashita, Wakayo; Fujisaki, Waka; Kanazawa, So; Yamaguchi, Masami K
2018-06-18
The human perceptual system enables us to extract visual properties of an object's material from auditory information. In monkeys, the neural basis underlying such multisensory association develops through experience of exposure to a material; material information could be processed in the posterior inferior temporal cortex, progressively from the high-order visual areas. In humans, however, the development of this neural representation remains poorly understood. Here, we demonstrated for the first time the presence of a mapping of the auditory material property with visual material ("Metal" and "Wood") in the right temporal region in preverbal 4- to 8-month-old infants, using near-infrared spectroscopy (NIRS). Furthermore, we found that infants acquired the audio-visual mapping for a property of the "Metal" material later than for the "Wood" material, since infants form the visual property of "Metal" material after approximately 6 months of age. These findings indicate that multisensory processing of material information induces the activation of brain areas related to sound symbolism. Our findings also indicate that the material's familiarity might facilitate the development of multisensory processing during the first year of life.
Spectral context affects temporal processing in awake auditory cortex
Beitel, Ralph E.; Vollmer, Maike; Heiser, Marc A; Schreiner, Christoph E.
2013-01-01
Amplitude modulation encoding is critical for human speech perception and complex sound processing in general. The modulation transfer function (MTF) is a staple of auditory psychophysics, and has been shown to predict speech intelligibility performance in a range of adverse listening conditions and hearing impairments, including cochlear implant-supported hearing. Although both tonal and broadband carriers have been employed in psychophysical studies of modulation detection and discrimination, relatively little is known about differences in the cortical representation of such signals. We obtained MTFs in response to sinusoidal amplitude modulation (SAM) for both narrowband tonal carriers and 2-octave bandwidth noise carriers in the auditory core of awake squirrel monkeys. MTFs spanning modulation frequencies from 4 to 512 Hz were obtained using 16 channel linear recording arrays sampling across all cortical laminae. Carrier frequency for tonal SAM and center frequency for noise SAM was set at the estimated best frequency for each penetration. Changes in carrier type affected both rate and temporal MTFs in many neurons. Using spike discrimination techniques, we found that discrimination of modulation frequency was significantly better for tonal SAM than for noise SAM, though the differences were modest at the population level. Moreover, spike trains elicited by tonal and noise SAM could be readily discriminated in most cases. Collectively, our results reveal remarkable sensitivity to the spectral content of modulated signals, and indicate substantial interdependence between temporal and spectral processing in neurons of the core auditory cortex. PMID:23719811
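A SAM stimulus of the kind used here is a carrier multiplied by a raised sinusoidal envelope, s(t) = (1 + m sin(2π f_mod t)) sin(2π f_c t). A sketch follows; the carrier frequency, modulation frequency, depth, duration, and sampling rate are illustrative choices, not the study's parameters:

```python
import numpy as np

def sam_tone(f_carrier, f_mod, depth, dur, fs):
    """Sinusoidally amplitude-modulated (SAM) tone:
    (1 + depth*sin(2*pi*f_mod*t)) * sin(2*pi*f_carrier*t)."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2.0 * np.pi * f_mod * t)
    return envelope * np.sin(2.0 * np.pi * f_carrier * t)

fs = 48000
# 1 kHz carrier, 16 Hz modulation, 100% depth, 500 ms
tone_sam = sam_tone(1000.0, 16.0, 1.0, 0.5, fs)
```

A noise-SAM stimulus differs only in the carrier: the sinusoidal carrier is replaced by band-limited noise centered on the site's best frequency, while the envelope term is unchanged.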
Functional correlates of the anterolateral processing hierarchy in human auditory cortex.
Chevillet, Mark; Riesenhuber, Maximilian; Rauschecker, Josef P
2011-06-22
Converging evidence supports the hypothesis that an anterolateral processing pathway mediates sound identification in auditory cortex, analogous to the role of the ventral cortical pathway in visual object recognition. Studies in nonhuman primates have characterized the anterolateral auditory pathway as a processing hierarchy, composed of three anatomically and physiologically distinct initial stages: core, belt, and parabelt. In humans, potential homologs of these regions have been identified anatomically, but reliable and complete functional distinctions between them have yet to be established. Because the anatomical locations of these fields vary across subjects, investigations of potential homologs between monkeys and humans require these fields to be defined in single subjects. Using functional MRI, we presented three classes of sounds (tones, band-passed noise bursts, and conspecific vocalizations), equivalent to those used in previous monkey studies. In each individual subject, three regions showing functional similarities to macaque core, belt, and parabelt were readily identified. Furthermore, the relative sizes and locations of these regions were consistent with those reported in human anatomical studies. Our results demonstrate that the functional organization of the anterolateral processing pathway in humans is largely consistent with that of nonhuman primates. Because our scanning sessions last only 15 min/subject, they can be run in conjunction with other scans. This will enable future studies to characterize functional modules in human auditory cortex at a level of detail previously possible only in visual cortex. Furthermore, the approach of using identical schemes in both humans and monkeys will aid with establishing potential homologies between them.
Single electrode micro-stimulation of rat auditory cortex: an evaluation of behavioral performance.
Rousche, Patrick J; Otto, Kevin J; Reilly, Mark P; Kipke, Daryl R
2003-05-01
A combination of electrophysiological mapping, behavioral analysis and cortical micro-stimulation was used to explore the interrelation between the auditory cortex and behavior in the adult rat. Auditory discriminations were evaluated in eight rats trained to discriminate the presence or absence of a 75 dB pure tone stimulus. A probe trial technique was used to obtain intensity generalization gradients that described response probabilities to mid-level tones between 0 and 75 dB. The same rats were then chronically implanted in the auditory cortex with a 16- or 32-channel tungsten microwire electrode array. Implanted animals were then trained to discriminate the presence of single electrode micro-stimulation of magnitude 90 microA (22.5 nC/phase). Intensity generalization gradients were created to obtain the response probabilities to mid-level current magnitudes ranging from 0 to 90 microA on 36 different electrodes in six of the eight rats. The 50% point (the current level resulting in 50% detections) varied from 16.7 to 69.2 microA, with an overall mean of 42.4 (+/-8.1) microA across all single electrodes. Cortical micro-stimulation induced sensory-evoked behavior with characteristics similar to those of normal auditory stimuli. The results highlight the importance of the auditory cortex in a discrimination task and suggest that micro-stimulation of the auditory cortex might be an effective means for a graded transfer of auditory information directly to the brain as part of a cortical auditory prosthesis.
Auditory display as feedback for a novel eye-tracking system for sterile operating room interaction.
Black, David; Unger, Michael; Fischer, Nele; Kikinis, Ron; Hahn, Horst; Neumuth, Thomas; Glaser, Bernhard
2018-01-01
The growing number of technical systems in the operating room has increased attention on developing touchless interaction methods for sterile conditions. However, touchless interaction paradigms lack the tactile feedback found in common input devices such as mice and keyboards. We propose a novel touchless eye-tracking interaction system with auditory display as a feedback method for completing typical operating room tasks. Auditory display provides feedback concerning the selected input into the eye-tracking system as well as a confirmation of the system response. An eye-tracking system with a novel auditory display using both earcons and parameter-mapping sonification was developed to allow touchless interaction for six typical scrub nurse tasks. An evaluation with novice participants compared auditory display with visual display with respect to reaction time and a series of subjective measures. When using auditory display to substitute for the lost tactile feedback during eye-tracking interaction, participants exhibit reduced reaction time compared to using visual-only display. In addition, the auditory feedback led to lower subjective workload and higher usefulness and system acceptance ratings. Due to the absence of tactile feedback for eye-tracking and other touchless interaction methods, auditory display is shown to be a useful and necessary addition to new interaction concepts for the sterile operating room, reducing reaction times while improving subjective measures, including usefulness, user satisfaction, and cognitive workload.
Sadovsky, Alexander J.
2013-01-01
Mapping the flow of activity through neocortical microcircuits provides key insights into the underlying circuit architecture. Using a comparative analysis we determined the extent to which the dynamics of microcircuits in mouse primary somatosensory barrel field (S1BF) and auditory (A1) neocortex generalize. We imaged the simultaneous dynamics of up to 1126 neurons spanning multiple columns and layers using high-speed multiphoton imaging. The temporal progression and reliability of reactivation of circuit events in both regions suggested common underlying cortical design features. We used circuit activity flow to generate functional connectivity maps, or graphs, to test the microcircuit hypothesis within a functional framework. S1BF and A1 present a useful test of the postulate as both regions map sensory input anatomically, but each area appears organized according to different design principles. We projected the functional topologies into anatomical space and found benchmarks of organization that had been previously described using physiology and anatomical methods, consistent with a close mapping between anatomy and functional dynamics. By comparing graphs representing activity flow we found that each region is similarly organized as highlighted by hallmarks of small world, scale free, and hierarchical modular topologies. Models of prototypical functional circuits from each area of cortex were sufficient to recapitulate experimentally observed circuit activity. Convergence to common behavior by these models was accomplished using preferential attachment to scale from an auditory up to a somatosensory circuit. These functional data imply that the microcircuit hypothesis be framed as scalable principles of neocortical circuit design. PMID:23986241
Cross-modal metaphorical mapping of spoken emotion words onto vertical space.
Montoro, Pedro R; Contreras, María José; Elosúa, María Rosa; Marmolejo-Ramos, Fernando
2015-01-01
From the field of embodied cognition, previous studies have reported evidence of metaphorical mapping of emotion concepts onto a vertical spatial axis. Most of the work on this topic has used visual words as the typical experimental stimuli. However, to our knowledge, no previous study has examined the association between affect and vertical space using a cross-modal procedure. The current research is a first step toward the study of the metaphorical mapping of emotions onto vertical space by means of an auditory to visual cross-modal paradigm. In the present study, we examined whether auditory words with an emotional valence can interact with the vertical visual space according to a 'positive-up/negative-down' embodied metaphor. The general method consisted in the presentation of a spoken word denoting a positive/negative emotion prior to the spatial localization of a visual target in an upper or lower position. In Experiment 1, the spoken words were passively heard by the participants and no reliable interaction between emotion concepts and bodily simulated space was found. In contrast, Experiment 2 required more active listening of the auditory stimuli. A metaphorical mapping of affect and space was evident but limited to the participants engaged in an emotion-focused task. Our results suggest that the association of affective valence and vertical space is not activated automatically during speech processing since an explicit semantic and/or emotional evaluation of the emotionally valenced stimuli was necessary to obtain an embodied effect. The results are discussed within the framework of the embodiment hypothesis. PMID:26322007
Ghai, Shashank; Schmitz, Gerd; Hwang, Tong-Hun; Effenberg, Alfred O.
2018-01-01
The purpose of the study was to assess the influence of real-time auditory feedback on knee proprioception. Thirty healthy participants were randomly allocated to a control group (n = 15) and experimental group I (n = 15). The participants performed an active knee-repositioning task using their dominant leg, with/without additional real-time auditory feedback where the frequency was mapped in a convergent manner to two different target angles (40 and 75°). Statistical analysis revealed significant enhancement in knee re-positioning accuracy for the constant and absolute error with real-time auditory feedback, within and across the groups. Besides this convergent condition, we established a second, divergent condition. Here, a step-wise transposition of frequency was performed to explore whether a systematic tuning between auditory feedback and proprioceptive repositioning exists. No significant effects were identified in this divergent auditory feedback condition. An additional experimental group II (n = 20) was further included. Here, we investigated the influence of a larger magnitude and directional change of step-wise transposition of the frequency. In a first step, the results confirm the findings of experiment I. Moreover, significant effects on knee auditory-proprioceptive repositioning were evident when divergent auditory feedback was applied. During the step-wise transposition, participants showed systematic modulation of knee movements in the opposite direction of the transposition. We confirm that knee re-positioning accuracy can be enhanced with concurrent application of real-time auditory feedback and that knee re-positioning can be modulated in a goal-directed manner with step-wise transposition of frequency. Clinical implications are discussed with respect to joint position sense in rehabilitation settings. PMID:29568259
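The "convergent" mapping described above can be pictured as a feedback pitch that settles onto a fixed target tone as the knee approaches the target angle. A hypothetical sketch follows; the reference tone and cents-per-degree scaling are illustrative assumptions, not the study's parameters:

```python
def feedback_freq(angle_deg, target_deg, f_target=440.0, cents_per_deg=25.0):
    """Map knee-angle error onto a tone frequency that converges on
    f_target (Hz) as the joint reaches the target angle.
    NOTE: f_target and cents_per_deg are hypothetical constants."""
    error = angle_deg - target_deg
    # exponential (musical-interval) scaling: each degree of error
    # shifts the pitch by a fixed number of cents
    return f_target * 2.0 ** (cents_per_deg * error / 1200.0)

# at the 40-degree target the feedback sits exactly on the target tone;
# overshoot raises the pitch, undershoot lowers it
f_on_target = feedback_freq(40.0, 40.0)
```

A "divergent" condition as in the study's second experiment would correspond to transposing `f_target` step-wise during the task, decoupling the heard pitch from the true joint angle.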
Lammert-Siepmann, Nils; Bestgen, Anne-Kathrin; Edler, Dennis; Kuchinke, Lars; Dickmann, Frank
2017-01-01
Knowing the correct location of a specific object learned from a (topographic) map is fundamental for orientation and navigation tasks. Spatial reference systems, such as coordinates or cardinal directions, are helpful tools for any geometric localization of positions that aims to be as exact as possible. Considering modern visualization techniques of multimedia cartography, map elements transferred through the auditory channel can be added easily. Audiovisual approaches have been discussed in the cartographic community for many years. However, the effectiveness of audiovisual map elements for map use has hardly been explored so far. Within an interdisciplinary (cartography-cognitive psychology) research project, it is examined whether map users remember object-locations better if they do not just read the corresponding place names, but also listen to them as voice recordings. This approach is based on the idea that learning object-identities influences learning object-locations, which is crucial for map-reading tasks. The results of an empirical study show that the additional auditory communication of object names not only improves memory for the names (object-identities), but also for the spatial accuracy of their corresponding object-locations. The audiovisual communication of semantic attribute information of a spatial object seems to improve the binding of object-identity and object-location, which enhances the spatial accuracy of object-location memory. PMID:29059237
Binaural fusion and the representation of virtual pitch in the human auditory cortex.
Pantev, C; Elbert, T; Ross, B; Eulitz, C; Terhardt, E
1996-10-01
The auditory system derives the pitch of complex tones from the tone's harmonics. Research in psychoacoustics predicted that binaural fusion was an important feature of pitch processing. Based on neuromagnetic human data, the first neurophysiological confirmation of binaural fusion in hearing is presented. The centre of activation within the cortical tonotopic map corresponds to the location of the perceived pitch and not to the locations that are activated when the single frequency constituents are presented. This is also true when the different harmonics of a complex tone are presented dichotically. We conclude that the pitch processor includes binaural fusion to determine the particular pitch location which is activated in the auditory cortex.
van den Hurk, Job; Van Baelen, Marc; Op de Beeck, Hans P.
2017-01-01
To what extent does functional brain organization rely on sensory input? Here, we show that for the penultimate visual-processing region, ventral-temporal cortex (VTC), visual experience is not the origin of its fundamental organizational property, category selectivity. In the fMRI study reported here, we presented 14 congenitally blind participants with face-, body-, scene-, and object-related natural sounds and presented 20 healthy controls with both auditory and visual stimuli from these categories. Using macroanatomical alignment, response mapping, and surface-based multivoxel pattern analysis, we demonstrated that VTC in blind individuals shows robust discriminatory responses elicited by the four categories and that these patterns of activity in blind subjects could successfully predict the visual categories in sighted controls. These findings were confirmed in a subset of blind participants born without eyes and thus deprived from all light perception since conception. The sounds also could be decoded in primary visual and primary auditory cortex, but these regions did not sustain generalization across modalities. Surprisingly, although not as strong as visual responses, selectivity for auditory stimulation in visual cortex was stronger in blind individuals than in controls. The opposite was observed in primary auditory cortex. Overall, we demonstrated a striking similarity in the cortical response layout of VTC in blind individuals and sighted controls, demonstrating that the overall category-selective map in extrastriate cortex develops independently from visual experience. PMID:28507127
Stone, David B.; Urrea, Laura J.; Aine, Cheryl J.; Bustillo, Juan R.; Clark, Vincent P.; Stephen, Julia M.
2011-01-01
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. PMID:21807011
Sensing Super-Position: Human Sensing Beyond the Visual Spectrum
NASA Technical Reports Server (NTRS)
Maluf, David A.; Schipper, John F.
2007-01-01
The coming decade of fast, cheap and miniaturized electronics and sensory devices opens new pathways for the development of sophisticated equipment to overcome limitations of the human senses. This paper addresses the technical feasibility of augmenting human vision through Sensing Super-position by mixing it with natural human sensing. The current implementation of the device translates visual and other passive or active sensory instruments into sounds, which become relevant when the visual resolution is insufficient for very difficult and particular sensing tasks. A successful Sensing Super-position system meets many human and pilot-vehicle system requirements. The system can be further developed into a cheap, portable, and low-power device, taking into account the limited capabilities of the human user as well as the typical characteristics of the user's dynamic environment. The system operates in real time, giving the desired information for the particular augmented sensing tasks. The Sensing Super-position device increases perceived image resolution via an auditory representation in addition to the visual representation. Auditory mapping is performed to distribute an image in time. The three-dimensional spatial brightness and multi-spectral maps of a sensed image are processed using real-time image processing techniques (e.g., histogram normalization) and transformed into a two-dimensional map of an audio signal as a function of frequency and time. This paper details the approach of developing Sensing Super-position systems as a way to augment the human vision system by exploiting the capabilities of the human hearing system as an additional neural input. The human hearing system is capable of learning to process and interpret extremely complicated and rapidly changing auditory patterns. The known capabilities of the human hearing system to learn and understand complicated auditory patterns provided the basic motivation for developing an image-to-sound mapping system.
The human brain is superior to most existing computer systems in rapidly extracting relevant information from blurred, noisy, and redundant images. From a theoretical viewpoint, this means that the available bandwidth is not exploited in an optimal way. While image-processing techniques can manipulate, condense and focus the information (e.g., Fourier transforms), keeping the mapping as direct and simple as possible might also reduce the risk of accidentally filtering out important clues. After all, a perfectly non-redundant sound representation is especially prone to loss of relevant information in the imperfect human hearing system. Also, a complicated non-redundant image-to-sound mapping may well be far more difficult to learn and comprehend than a straightforward mapping, while the mapping system would increase in complexity and cost. This work will demonstrate some basic information processing for optimal information capture for head-mounted systems.
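The image-to-sound mapping described above, distributing an image in time as a function of frequency, can be sketched with a simple column scan: each image row drives a sine oscillator at a fixed frequency, and each column's brightness sets the oscillator amplitudes during its time slot. All parameters here (sample rate, frequency range, scan duration) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def image_to_sound(img, dur=1.0, fs=16000, f_lo=200.0, f_hi=4000.0):
    """Left-to-right column scan of a 2-D brightness map:
    row index -> sine frequency (log-spaced), column index -> time slot,
    pixel brightness -> oscillator amplitude."""
    n_rows, n_cols = img.shape
    freqs = f_lo * (f_hi / f_lo) ** (np.arange(n_rows) / (n_rows - 1))
    t = np.arange(int(dur * fs)) / fs
    col_idx = np.minimum((t / dur * n_cols).astype(int), n_cols - 1)  # time -> column
    amps = img[:, col_idx]                    # (rows, samples) brightness envelope
    phases = 2 * np.pi * freqs[:, None] * t   # one oscillator per row
    audio = (amps * np.sin(phases)).sum(axis=0)
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio

# a single bright pixel yields a tone confined to its column's time slot
img = np.zeros((8, 4)); img[2, 1] = 1.0
snd = image_to_sound(img)
```

Keeping the mapping this direct preserves redundancy, which, as the abstract argues, guards against accidentally filtering out cues the listener might exploit.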
Expertise-dependent motor somatotopy of music perception.
Furukawa, Yuta; Uehara, Kazumasa; Furuya, Shinichi
2017-05-22
Precise mapping between sound and motion underlies successful communication and information transmission in speech and musical performance. Formation of the map typically undergoes plastic changes in the neuronal network between auditory and motor regions through training. However, to what extent the map is somatotopically tuned, so that auditory information can specifically modulate the corticospinal system responsible for the relevant motor action, has not been elucidated. Here we addressed this issue by assessing the excitability of the corticospinal system, including the primary motor cortex (M1) innervating the hand intrinsic muscles, by means of transcranial magnetic stimulation while trained pianists and musically untrained individuals (non-musicians) listened to either piano tones or noise. M1 excitability was evaluated at two anatomically independent muscles of the hand. The results demonstrated elevation of M1 excitability in only one specific muscle, not both, while the pianists listened to piano tones, but no excitability change in either muscle in the non-musicians. Listening to noise elicited no change in M1 excitability at either muscle in either the pianists or the non-musicians. These findings indicate that auditory information representing the trained motor action tunes M1 excitability in a non-uniform, somatotopically specific manner, which is likely associated with multimodal experiences in musical training.
Evidence for pitch chroma mapping in human auditory cortex.
Briley, Paul M; Breakey, Charlotte; Krumbholz, Katrin
2013-11-01
Some areas in auditory cortex respond preferentially to sounds that elicit pitch, such as musical sounds or voiced speech. This study used human electroencephalography (EEG) with an adaptation paradigm to investigate how pitch is represented within these areas and, in particular, whether the representation reflects the physical or perceptual dimensions of pitch. Physically, pitch corresponds to a single monotonic dimension: the repetition rate of the stimulus waveform. Perceptually, however, pitch has to be described with 2 dimensions, a monotonic, "pitch height," and a cyclical, "pitch chroma," dimension, to account for the similarity of the cycle of notes (c, d, e, etc.) across different octaves. The EEG adaptation effect mirrored the cyclicality of the pitch chroma dimension, suggesting that auditory cortex contains a representation of pitch chroma. Source analysis indicated that the centroid of this pitch chroma representation lies somewhat anterior and lateral to primary auditory cortex. PMID:22918980
Separating pitch chroma and pitch height in the human brain.
Warren, J. D.; Uppenkamp, S.; Patterson, R. D.; Griffiths, T. D.
2003-08-19
Musicians recognize pitch as having two dimensions. On the keyboard, these are illustrated by the octave and the cycle of notes within the octave. In perception, these dimensions are referred to as pitch height and pitch chroma, respectively. Pitch chroma provides a basis for presenting acoustic patterns (melodies) that do not depend on the particular sound source. In contrast, pitch height provides a basis for segregation of notes into streams to separate sound sources. This paper reports a functional magnetic resonance experiment designed to search for distinct mappings of these two types of pitch change in the human brain. The results show that chroma change is specifically represented anterior to primary auditory cortex, whereas height change is specifically represented posterior to primary auditory cortex. We propose that tracking of acoustic information streams occurs in anterior auditory areas, whereas the segregation of sound objects (a crucial aspect of auditory scene analysis) depends on posterior areas. PMID:12909719
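The height/chroma decomposition discussed in the entries above has a simple arithmetic form: height tracks which octave a frequency falls in, while chroma is its cyclical position within the octave. A minimal illustration (the middle-C reference frequency is an assumption for the example, not taken from either study):

```python
import math

def pitch_height_chroma(f_hz, f_ref=261.63):
    """Decompose a frequency into pitch height (whole octaves above f_ref)
    and pitch chroma (cyclical position within the octave, in [0, 1))."""
    octaves = math.log2(f_hz / f_ref)
    height = math.floor(octaves)
    chroma = octaves - height   # wraps around every octave
    return height, chroma

# notes an octave apart (A4 = 440 Hz, A5 = 880 Hz) share the same
# chroma but differ in height by exactly one octave
h1, c1 = pitch_height_chroma(440.0)
h2, c2 = pitch_height_chroma(880.0)
```

The cyclicality of the chroma term is what the EEG adaptation effect mirrored, and what lets a melody be recognized regardless of the octave in which it is played.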
Ultrasound Produces Extensive Brain Activation via a Cochlear Pathway.
Guo, Hongsun; Hamilton, Mark; Offutt, Sarah J; Gloeckner, Cory D; Li, Tianqi; Kim, Yohan; Legon, Wynn; Alford, Jamu K; Lim, Hubert H
2018-06-06
Ultrasound (US) can noninvasively activate intact brain circuits, making it a promising neuromodulation technique. However, little is known about the underlying mechanism. Here, we apply transcranial US and perform brain mapping studies in guinea pigs using extracellular electrophysiology. We find that US elicits extensive activation across cortical and subcortical brain regions. However, transection of the auditory nerves or removal of cochlear fluids eliminates the US-induced activity, revealing an indirect auditory mechanism for US neural activation. Our findings indicate that US activates the ascending auditory system through a cochlear pathway, which can activate other non-auditory regions through cross-modal projections. This cochlear pathway mechanism challenges the idea that US can directly activate neurons in the intact brain, suggesting that future US stimulation studies will need to control for this effect to reach reliable conclusions.
Transformation of temporal sequences in the zebra finch auditory system
Lim, Yoonseob; Lagoy, Ryan; Shinn-Cunningham, Barbara G; Gardner, Timothy J
2016-01-01
This study examines how temporally patterned stimuli are transformed as they propagate from primary to secondary zones in the thalamorecipient auditory pallium in zebra finches. Using a new class of synthetic click stimuli, we find a robust mapping from temporal sequences in the primary zone to distinct population vectors in secondary auditory areas. We tested whether songbirds could discriminate synthetic click sequences in an operant setup and found that a robust behavioral discrimination is present for click sequences composed of intervals ranging from 11 ms to 40 ms, but breaks down for stimuli composed of longer inter-click intervals. This work suggests that the analog of the songbird auditory cortex transforms temporal patterns to sequence-selective population responses or ‘spatial codes', and that these distinct population responses contribute to behavioral discrimination of temporally complex sounds. DOI: http://dx.doi.org/10.7554/eLife.18205.001 PMID:27897971
Demonstrations of simple and complex auditory psychophysics for multiple platforms and environments
NASA Astrophysics Data System (ADS)
Horowitz, Seth S.; Simmons, Andrea M.; Blue, China
2005-09-01
Sound is arguably the most widely perceived and pervasive form of energy in our world, and among the least understood, in part due to the complexity of its underlying principles. A series of interactive displays has been developed which demonstrates that the nature of sound involves the propagation of energy through space, and illustrates the definition of psychoacoustics, which is how listeners map the physical aspects of sound and vibration onto their brains. These displays use auditory illusions and commonly experienced music and sound in novel presentations (using interactive computer algorithms) to show that what you hear is not always what you get. The areas covered in these demonstrations range from simple and complex auditory localization, which illustrate why humans are bad at echolocation but excellent at determining the contents of auditory space, to auditory illusions that manipulate fine phase information and make the listener think their head is changing size. Another demonstration shows how auditory and visual localization coincide and sound can be used to change visual tracking. These demonstrations are designed to run on a wide variety of student accessible platforms including web pages, stand-alone presentations, or even hardware-based systems for museum displays.
Mohebbi, Mehrnaz; Mahmoudian, Saeid; Alborzi, Marzieh Sharifian; Najafi-Koopaie, Mojtaba; Farahani, Ehsan Darestani; Farhadi, Mohammad
2014-09-01
To investigate the association of handedness with auditory middle latency responses (AMLRs) using topographic brain mapping by comparing amplitudes and latencies in frontocentral and hemispheric regions of interest (ROIs). The study included 44 healthy subjects with normal hearing (22 left handed and 22 right handed). AMLRs were recorded from 29 scalp electrodes in response to binaural 4-kHz tone bursts. Frontocentral ROI comparisons revealed that Pa and Pb amplitudes were significantly larger in the left-handed than the right-handed group. Topographic brain maps showed different distributions in AMLR components between the two groups. In hemispheric comparisons, Pa amplitude differed significantly across groups. A left-hemisphere emphasis of Pa was found in the right-handed group but not in the left-handed group. This study provides evidence that handedness is associated with AMLR components in frontocentral and hemispheric ROI. Handedness should be considered an essential factor in the clinical or experimental use of AMLRs.
Referential Coding Contributes to the Horizontal SMARC Effect
ERIC Educational Resources Information Center
Cho, Yang Seok; Bae, Gi Yeul; Proctor, Robert W.
2012-01-01
The present study tested whether coding of tone pitch relative to a referent contributes to the correspondence effect between the pitch height of an auditory stimulus and the location of a lateralized response. When left-right responses are mapped to high or low pitch tones, performance is better with the high-right/low-left mapping than with the…
Eitan, Zohar; Timmers, Renee
2010-03-01
Though auditory pitch is customarily mapped in Western cultures onto spatial verticality (high-low), both anthropological reports and cognitive studies suggest that pitch may be mapped onto a wide variety of other domains. We collected a total number of 35 pitch mappings and investigated in four experiments how these mappings are used and structured. In particular, we inquired (1) how Western subjects apply Western and non-Western metaphors to "high" and "low" pitches, (2) whether mappings applied in an abstract conceptual task are similarly applied by listeners to actual music, (3) how mappings of spatial height relate to these pitch mappings, and (4) how mappings of "high" and "low" pitch associate with other dimensions, in particular quantity, size, intensity and valence. The results show strong agreement among Western participants in applying familiar and unfamiliar metaphors for pitch, in both an abstract, conceptual task (Exp. 1) and in a music listening task (Exp. 2), indicating that diverse cross-domain mappings for pitch exist latently besides the common verticality metaphor. Furthermore, limited overlap between mappings of spatial height and pitch height was found, suggesting that, the ubiquity of the verticality metaphor in Western usage notwithstanding, cross-domain pitch mappings are largely independent of that metaphor, and seem to be based upon other underlying dimensions. Part of the discrepancy between spatial height and pitch height is that, for pitch, "up" is not necessarily "more," nor is it necessarily "good." High pitch is only "more" for height, intensity and brightness. It is "less" for mass, size and quantity. We discuss implications of these findings for music and speech prosody, and their relevance to notions of embodied cognition and of cross-domain magnitude representation. Copyright 2009 Elsevier B.V. All rights reserved.
Kell, Alexander J E; Yamins, Daniel L K; Shook, Erica N; Norman-Haignere, Sam V; McDermott, Josh H
2018-05-02
A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy-primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems. Copyright © 2018 Elsevier Inc. All rights reserved.
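The branched organization described above (early shared processing that then splits into task-specific pathways) can be illustrated with a minimal NumPy forward pass. This is only a structural sketch under invented assumptions: all layer sizes, weights, and names are hypothetical, and the published model is a deep convolutional network trained on speech and music tasks, not this toy.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical layer sizes; purely illustrative of the branched topology.
W_shared = rng.normal(scale=0.1, size=(64, 128))   # shared early stage
W_speech = rng.normal(scale=0.1, size=(10, 64))    # speech-branch readout
W_music = rng.normal(scale=0.1, size=(10, 64))     # music-branch readout

def forward(cochleagram):
    h = relu(W_shared @ cochleagram)    # early processing shared by both tasks
    return W_speech @ h, W_music @ h    # pathways diverge after the shared stage

speech_logits, music_logits = forward(rng.normal(size=128))
```

The point of the sketch is only the topology: both readouts consume the same shared representation, which is the organizational feature the abstract says emerged from task optimization.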
Rising tones and rustling noises: Metaphors in gestural depictions of sounds
Scurto, Hugo; Françoise, Jules; Bevilacqua, Frédéric; Houix, Olivier; Susini, Patrick
2017-01-01
Communicating an auditory experience with words is a difficult task and, in consequence, people often rely on imitative non-verbal vocalizations and gestures. This work explored the combination of such vocalizations and gestures to communicate auditory sensations and representations elicited by non-vocal everyday sounds. Whereas our previous studies have analyzed vocal imitations, the present research focused on gestural depictions of sounds. To this end, two studies investigated the combination of gestures and non-verbal vocalizations. A first, observational study examined a set of vocal and gestural imitations of recordings of sounds representative of a typical everyday environment (ecological sounds) with manual annotations. A second, experimental study used non-ecological sounds whose parameters had been specifically designed to elicit the behaviors highlighted in the observational study, and used quantitative measures and inferential statistics. The results showed that these depicting gestures are based on systematic analogies between a referent sound, as interpreted by a receiver, and the visual aspects of the gestures: auditory-visual metaphors. The results also suggested a different role for vocalizations and gestures. Whereas the vocalizations reproduce all features of the referent sounds as faithfully as vocally possible, the gestures focus on one salient feature with metaphors based on auditory-visual correspondences. Both studies highlighted two metaphors consistently shared across participants: the spatial metaphor of pitch (mapping different pitches to different positions on the vertical dimension), and the rustling metaphor of random fluctuations (rapidly shaking of hands and fingers). We interpret these metaphors as the result of two kinds of representations elicited by sounds: auditory sensations (pitch and loudness) mapped to spatial position, and causal representations of the sound sources (e.g. 
rain drops, rustling leaves) pantomimed and embodied by the participants’ gestures. PMID:28750071
Tuning In to Sound: Frequency-Selective Attentional Filter in Human Primary Auditory Cortex
Da Costa, Sandra; van der Zwaag, Wietske; Miller, Lee M.; Clarke, Stephanie
2013-01-01
Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency-content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency-tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low (250 Hz)- and high (4000 Hz)-frequency streams of tones presented at the same time (dual-stream) and were instructed to focus attention onto one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high, and when attention switched the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand. PMID:23365225
Relative size of auditory pathways in symmetrically and asymmetrically eared owls.
Gutiérrez-Ibáñez, Cristián; Iwaniuk, Andrew N; Wylie, Douglas R
2011-01-01
Owls are highly efficient predators with a specialized auditory system designed to aid in the localization of prey. One of the most unique anatomical features of the owl auditory system is the evolution of vertically asymmetrical ears in some species, which improves their ability to localize the elevational component of a sound stimulus. In the asymmetrically eared barn owl, interaural time differences (ITD) are used to localize sounds in azimuth, whereas interaural level differences (ILD) are used to localize sounds in elevation. These two features are processed independently in two separate neural pathways that converge in the external nucleus of the inferior colliculus to form an auditory map of space. Here, we present a comparison of the relative volume of 11 auditory nuclei in both the ITD and the ILD pathways of 8 species of symmetrically and asymmetrically eared owls in order to investigate evolutionary changes in the auditory pathways in relation to ear asymmetry. Overall, our results indicate that asymmetrically eared owls have much larger auditory nuclei than owls with symmetrical ears. In asymmetrically eared owls we found that both the ITD and ILD pathways are equally enlarged, and other auditory nuclei, not directly involved in binaural comparisons, are also enlarged. We suggest that the hypertrophy of auditory nuclei in asymmetrically eared owls likely reflects both an improved ability to precisely locate sounds in space and an expansion of the hearing range. Additionally, our results suggest that the hypertrophy of nuclei that compute space may have preceded that of the expansion of the hearing range and evolutionary changes in the size of the auditory system occurred independently of phylogeny. Copyright © 2011 S. Karger AG, Basel.
Penhune, V B; Zatorre, R J; Feindel, W H
1999-03-01
This experiment examined the participation of the auditory cortex of the temporal lobe in the perception and retention of rhythmic patterns. Four patient groups were tested on a paradigm contrasting reproduction of auditory and visual rhythms: those with right or left anterior temporal lobe removals which included Heschl's gyrus (HG), the region of primary auditory cortex (RT-A and LT-A); and patients with right or left anterior temporal lobe removals which did not include HG (RT-a and LT-a). Estimation of lesion extent in HG using an MRI-based probabilistic map indicated that, in the majority of subjects, the lesion was confined to the anterior secondary auditory cortex located on the anterior-lateral extent of HG. On the rhythm reproduction task, RT-A patients were impaired in retention of auditory but not visual rhythms, particularly when accurate reproduction of stimulus durations was required. In contrast, LT-A patients as well as both RT-a and LT-a patients were relatively unimpaired on this task. None of the patient groups was impaired in the ability to make an adequate motor response. Further, they were unimpaired when using a dichotomous response mode, indicating that they were able to adequately differentiate the stimulus durations and, when given an alternative method of encoding, to retain them. Taken together, these results point to a specific role for the right anterior secondary auditory cortex in the retention of a precise analogue representation of auditory tonal patterns.
He, Qionger; Arroyo, Erica D; Smukowski, Samuel N; Xu, Jian; Piochon, Claire; Savas, Jeffrey N; Portera-Cailliau, Carlos; Contractor, Anis
2018-04-27
Sensory perturbations in visual, auditory and tactile perception are core problems in fragile X syndrome (FXS). In the Fmr1 knockout mouse model of FXS, the maturation of synapses and circuits during critical period (CP) development in the somatosensory cortex is delayed, but it is unclear how this contributes to altered tactile sensory processing in the mature CNS. Here we demonstrate that inhibiting the juvenile chloride co-transporter NKCC1, which contributes to altered chloride homeostasis in developing cortical neurons of FXS mice, rectifies the chloride imbalance in layer IV somatosensory cortex neurons and corrects the development of thalamocortical excitatory synapses during the CP. Comparison of protein abundances demonstrated that NKCC1 inhibition during early development caused a broad remodeling of the proteome in the barrel cortex. In addition, the abnormally large size of whisker-evoked cortical maps in adult Fmr1 knockout mice was corrected by rectifying the chloride imbalance during the early CP. These data demonstrate that correcting the disrupted driving force through GABAA receptors during the CP in cortical neurons restores their synaptic development, has an unexpectedly large effect on differentially expressed proteins, and produces a long-lasting correction of somatosensory circuit function in FXS mice.
Auditory processing and morphological anomalies in medial geniculate nucleus of Cntnap2 mutant mice.
Truong, Dongnhu T; Rendall, Amanda R; Castelluccio, Brian C; Eigsti, Inge-Marie; Fitch, R Holly
2015-12-01
Genetic epidemiological studies support a role for CNTNAP2 in developmental language disorders such as autism spectrum disorder, specific language impairment, and dyslexia. Atypical language development and function represent a core symptom of autism spectrum disorder (ASD), with evidence suggesting that aberrant auditory processing, including impaired spectrotemporal processing and enhanced pitch perception, may contribute to an anomalous language phenotype. Investigation of gene-brain-behavior relationships in social and repetitive ASD symptomatology has benefited from experimentation on the Cntnap2 knockout (KO) mouse. However, auditory-processing behavior and effects on neural structures within the central auditory pathway have not been assessed in this model. Thus, this study examined whether auditory-processing abnormalities were associated with mutation of the Cntnap2 gene in mice. Cntnap2 KO mice were assessed on auditory-processing tasks including silent gap detection, embedded tone detection, and pitch discrimination. Cntnap2 knockout mice showed deficits in silent gap detection but a surprising superiority in pitch-related discrimination as compared with controls. Stereological analysis revealed a reduction in the number and density of neurons, as well as a shift in neuronal size distribution toward smaller neurons, in the medial geniculate nucleus of mutant mice. These findings are consistent with a central role for CNTNAP2 in the ontogeny and function of neural systems subserving auditory processing and suggest that developmental disruption of these neural systems could contribute to the atypical language phenotype seen in autism spectrum disorder. (c) 2015 APA, all rights reserved.
The representation of sound localization cues in the barn owl's inferior colliculus
Singheiser, Martin; Gutfreund, Yoram; Wagner, Hermann
2012-01-01
The barn owl is a well-known model system for studying auditory processing and sound localization. This article reviews the morphological and functional organization, as well as the role of the underlying microcircuits, of the barn owl's inferior colliculus (IC). We focus on the processing of frequency and interaural time (ITD) and level differences (ILD). We first summarize the morphology of the sub-nuclei belonging to the IC and their differentiation by antero- and retrograde labeling and by staining with various antibodies. We then focus on the response properties of neurons in the three major sub-nuclei of IC [core of the central nucleus of the IC (ICCc), lateral shell of the central nucleus of the IC (ICCls), and the external nucleus of the IC (ICX)]. ICCc projects to ICCls, which in turn sends its information to ICX. The responses of neurons in ICCc are sensitive to changes in ITD but not to changes in ILD. The distribution of ITD sensitivity with frequency in ICCc can only partly be explained by optimal coding. We continue with the tuning properties of ICCls neurons, the first station in the midbrain where the ITD and ILD pathways merge after they have split at the level of the cochlear nucleus. The ICCc and ICCls share similar ITD and frequency tuning. By contrast, ICCls shows sigmoidal ILD tuning which is absent in ICCc. Both ICCc and ICCls project to the forebrain, and ICCls also projects to ICX, where space-specific neurons are found. Space-specific neurons exhibit side peak suppression in ITD tuning, bell-shaped ILD tuning, and are broadly tuned to frequency. These neurons respond only to restricted positions of auditory space and form a map of two-dimensional auditory space. Finally, we briefly review major IC features, including multiplication-like computations, correlates of echo suppression, plasticity, and adaptation. PMID:22798945
Ivanova, Tamara N; Gross, Christina; Mappus, Rudolph C; Kwon, Yong Jun; Bassell, Gary J; Liu, Robert C
2017-12-01
Learning to recognize a stimulus category requires experience with its many natural variations. However, the mechanisms that allow a category's sensorineural representation to be updated after experiencing new exemplars are not well understood, particularly at the molecular level. Here we investigate how a natural vocal category induces expression in the auditory system of a key synaptic plasticity effector immediate early gene, Arc/Arg3.1, which is required for memory consolidation. We use the ultrasonic communication system between mouse pups and adult females to study whether prior familiarity with pup vocalizations alters how Arc is engaged in the core auditory cortex after playback of novel exemplars from the pup vocal category. A computerized, 3D surface-assisted cellular compartmental analysis, validated against manual cell counts, demonstrates significant changes in the recruitment of neurons expressing Arc in pup-experienced animals (mothers and virgin females "cocaring" for pups) compared with pup-inexperienced animals (pup-naïve virgins), especially when listening to more familiar, natural calls compared to less familiar but similarly recognized tonal model calls. Our data support the hypothesis that the kinetics of Arc induction to refine cortical representations of sensory categories is sensitive to the familiarity of the sensory experience. © 2017 Ivanova et al.; Published by Cold Spring Harbor Laboratory Press.
Dynamic plasticity in coupled avian midbrain maps
NASA Astrophysics Data System (ADS)
Atwal, Gurinder Singh
2004-12-01
Internal mapping of the external environment is carried out using the receptive fields of topographic neurons in the brain, and in a normal barn owl the aural and visual subcortical maps are aligned from early experiences. However, instantaneous misalignment of the aural and visual stimuli has been observed to result in adaptive behavior, manifested by functional and anatomical changes of the auditory processing system. Using methods of information theory and statistical mechanics, a model of the adaptive dynamics of the aural receptive field is presented and analyzed. The dynamics is determined by maximizing the mutual information between the neural output and the weighted sensory neural inputs, admixed with noise, subject to biophysical constraints. The reduced costs of neural rewiring, as in the case of young barn owls, reveal two qualitatively different types of receptive field adaptation depending on the magnitude of the audiovisual misalignment. By letting the misalignment increase with time, it is shown that the ability to adapt can be increased even when neural rewiring costs are high, in agreement with recent experimental reports of the increased plasticity of the auditory space map in adult barn owls due to incremental learning. Finally, a critical speed of misalignment is identified, demarcating the crossover from adaptive to nonadaptive behavior.
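The general scheme described in this abstract, maximizing an information objective subject to a rewiring cost, can be caricatured numerically. The sketch below is not the paper's model: the Gaussian-channel objective, the quadratic cost, the two-dimensional "receptive field" vector, and all step sizes are illustrative assumptions. It only shows the qualitative effect that a low rewiring cost allows the field to realign with a misaligned target while a high cost suppresses adaptation.

```python
import numpy as np

def objective(w, target, w0, lam, noise=0.1):
    # Information-like gain for aligning receptive field w with the shifted
    # target direction, minus a quadratic cost for rewiring away from w0.
    snr = (w @ target) ** 2 / noise
    return 0.5 * np.log(1.0 + snr) - lam * np.sum((w - w0) ** 2)

def adapt(w0, target, lam, steps=500, lr=0.05, eps=1e-5):
    w = w0.astype(float).copy()
    for _ in range(steps):
        grad = np.zeros_like(w)
        for i in range(w.size):           # central-difference gradient
            d = np.zeros_like(w)
            d[i] = eps
            grad[i] = (objective(w + d, target, w0, lam)
                       - objective(w - d, target, w0, lam)) / (2 * eps)
        w += lr * grad                    # gradient ascent on the objective
    return w

def alignment(w, t):
    return (w @ t) / (np.linalg.norm(w) * np.linalg.norm(t))

old = np.array([1.0, 0.0])          # field aligned with the original map
new = np.array([0.8, 0.6])          # misaligned audiovisual target direction
cheap = adapt(old, new, lam=0.01)   # low rewiring cost: strong adaptation
costly = adapt(old, new, lam=2.0)   # high rewiring cost: weak adaptation
```

Under these assumptions, `cheap` rotates nearly all the way onto the new target while `costly` barely moves, echoing the young-owl versus adult-owl contrast the abstract draws.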
Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram
2011-01-01
Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, that is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, the local role of which for memory consolidation in auditory cortex is well-established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed with a perceptual item, ranging from simple detection through discrimination to categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds.
Different cognitive classifications appear to be a consequence of learning task and lead to a recruitment of different excitatory and inhibitory mechanisms and to distinct spatiotemporal metrics of map activation to represent a sound. The described non-auditory firing and modulations of auditory responses suggest that auditory cortex, by collecting all necessary information, functions as a "semantic processor" deducing the task-specific meaning of sounds by learning. © 2010. Published by Elsevier B.V.
Stone, David B; Urrea, Laura J; Aine, Cheryl J; Bustillo, Juan R; Clark, Vincent P; Stephen, Julia M
2011-10-01
In real-world settings, information from multiple sensory modalities is combined to form a complete, behaviorally salient percept - a process known as multisensory integration. While deficits in auditory and visual processing are often observed in schizophrenia, little is known about how multisensory integration is affected by the disorder. The present study examined auditory, visual, and combined audio-visual processing in schizophrenia patients using high-density electrical mapping. An ecologically relevant task was used to compare unisensory and multisensory evoked potentials from schizophrenia patients to potentials from healthy normal volunteers. Analysis of unisensory responses revealed a large decrease in the N100 component of the auditory-evoked potential, as well as early differences in the visual-evoked components in the schizophrenia group. Differences in early evoked responses to multisensory stimuli were also detected. Multisensory facilitation was assessed by comparing the sum of auditory and visual evoked responses to the audio-visual evoked response. Schizophrenia patients showed a significantly greater absolute magnitude response to audio-visual stimuli than to summed unisensory stimuli when compared to healthy volunteers, indicating significantly greater multisensory facilitation in the patient group. Behavioral responses also indicated increased facilitation from multisensory stimuli. The results represent the first report of increased multisensory facilitation in schizophrenia and suggest that, although unisensory deficits are present, compensatory mechanisms may exist under certain conditions that permit improved multisensory integration in individuals afflicted with the disorder. Copyright © 2011 Elsevier Ltd. All rights reserved.
Cross-Modal Recruitment of Auditory and Orofacial Areas During Sign Language in a Deaf Subject.
Martino, Juan; Velasquez, Carlos; Vázquez-Bourgon, Javier; de Lucas, Enrique Marco; Gomez, Elsa
2017-09-01
Modern sign languages used by deaf people are fully expressive, natural human languages that are perceived visually and produced manually. The literature contains little data concerning human brain organization in conditions of deficient sensory information such as deafness. A deaf-mute patient underwent surgery of a left temporoinsular low-grade glioma. The patient underwent awake surgery with intraoperative electrical stimulation mapping, allowing direct study of the cortical and subcortical organization of sign language. We found a similar distribution of language sites to what has been reported in mapping studies of patients with oral language, including 1) speech perception areas inducing anomias and alexias close to the auditory cortex (at the posterior portion of the superior temporal gyrus and supramarginal gyrus); 2) speech production areas inducing speech arrest (anarthria) at the ventral premotor cortex, close to the lip motor area and away from the hand motor area; and 3) subcortical stimulation-induced semantic paraphasias at the inferior fronto-occipital fasciculus at the temporal isthmus. The intraoperative setup for sign language mapping with intraoperative electrical stimulation in deaf-mute patients is similar to the setup described in patients with oral language. To elucidate the type of language errors, a sign language interpreter in close interaction with the neuropsychologist is necessary. Sign language is perceived visually and produced manually; however, this case revealed a cross-modal recruitment of auditory and orofacial motor areas. Copyright © 2017 Elsevier Inc. All rights reserved.
Cerebral processing of auditory stimuli in patients with irritable bowel syndrome
Andresen, Viola; Poellinger, Alexander; Tsrouya, Chedwa; Bach, Dominik; Stroh, Albrecht; Foerschler, Annette; Georgiewa, Petra; Schmidtmann, Marco; van der Voort, Ivo R; Kobelt, Peter; Zimmer, Claus; Wiedenmann, Bertram; Klapp, Burghard F; Monnikes, Hubert
2006-01-01
AIM: To determine by brain functional magnetic resonance imaging (fMRI) whether cerebral processing of non-visceral stimuli is altered in irritable bowel syndrome (IBS) patients compared with healthy subjects. To circumvent spinal viscerosomatic convergence mechanisms, we used auditory stimulation, and to identify a possible influence of psychological factors the stimuli differed in their emotional quality. METHODS: In 8 IBS patients and 8 controls, fMRI measurements were performed using a block design of 4 auditory stimuli of different emotional quality (pleasant sounds of chimes, unpleasant peep (2000 Hz), neutral words, and emotional words). A gradient echo T2*-weighted sequence was used for the functional scans. Statistical maps were constructed using the general linear model. RESULTS: To emotional auditory stimuli, IBS patients relative to controls responded with stronger deactivations in a greater variety of emotional processing regions, while the response patterns, unlike in controls, did not differentiate between distressing or pleasant sounds. To neutral auditory stimuli, by contrast, only IBS patients responded with large significant activations. CONCLUSION: Altered cerebral response patterns to auditory stimuli in emotional stimulus-processing regions suggest that altered sensory processing in IBS may not be specific for visceral sensation, but might reflect generalized changes in emotional sensitivity and affective reactivity, possibly associated with the psychological comorbidity often found in IBS patients. PMID:16586541
Keough, Dwayne; Jones, Jeffery A.
2009-01-01
Singing requires accurate control of the fundamental frequency (F0) of the voice. This study examined trained singers’ and untrained singers’ (nonsingers’) sensitivity to subtle manipulations in auditory feedback and the subsequent effect on the mapping between F0 feedback and vocal control. Participants produced the consonant-vowel /ta/ while receiving auditory feedback that was shifted up and down in frequency. Results showed that singers and nonsingers compensated to a similar degree when presented with frequency-altered feedback (FAF); however, singers’ F0 values were consistently closer to the intended pitch target. Moreover, singers initiated their compensatory responses when auditory feedback was shifted up or down 6 cents or more, compared to nonsingers who began compensating when feedback was shifted up 26 cents and down 22 cents. Additionally, examination of the first 50 ms of vocalization indicated that participants commenced subsequent vocal utterances, during FAF, near the F0 value on previous shift trials. Interestingly, nonsingers commenced F0 productions below the pitch target and increased their F0 until they matched the note. Thus, singers and nonsingers rely on an internal model to regulate voice F0, but singers’ models appear to be more sensitive in response to subtle discrepancies in auditory feedback. PMID:19640048
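The closed-loop behavior this abstract describes, compensation that opposes the heard shift and engages only beyond a detection threshold of a few cents, can be caricatured with a simple feedback controller. The gain, trial count, and update rule below are invented for illustration; only the roughly 6-cent threshold is taken from the abstract.

```python
def simulate_faf(shift_cents, trials=50, gain=0.3, threshold_cents=6.0):
    """Toy internal-model controller for voice F0 under frequency-altered
    feedback (FAF). All values are deviations from the pitch target, in cents."""
    correction = 0.0
    produced = []
    for _ in range(trials):
        output = correction                # F0 produced on this trial
        heard = output + shift_cents       # feedback is shifted before playback
        if abs(heard) > threshold_cents:   # respond only above the threshold
            correction -= gain * heard     # oppose the perceived error
        produced.append(output)
    return produced

up = simulate_faf(+100.0)     # feedback shifted up: voice is driven down
down = simulate_faf(-100.0)   # feedback shifted down: voice is driven up
```

In this toy, lowering the threshold (as for the trained singers) lets the produced F0 settle closer to an exact mirror of the shift, matching the abstract's observation that singers ended nearer the intended pitch target.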
Modelling the Emergence and Dynamics of Perceptual Organisation in Auditory Streaming
Mill, Robert W.; Bőhm, Tamás M.; Bendixen, Alexandra; Winkler, István; Denham, Susan L.
2013-01-01
Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives—a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success, and conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. 
The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the dynamics of human perception in auditory streaming. PMID:23516340
Cortico-Cortical Connectivity Within Ferret Auditory Cortex.
Bizley, Jennifer K; Bajo, Victoria M; Nodal, Fernando R; King, Andrew J
2015-10-15
Despite numerous studies of auditory cortical processing in the ferret (Mustela putorius), very little is known about the connections between the different regions of the auditory cortex that have been characterized cytoarchitectonically and physiologically. We examined the distribution of retrograde and anterograde labeling after injecting tracers into one or more regions of ferret auditory cortex. Injections of different tracers at frequency-matched locations in the core areas, the primary auditory cortex (A1) and anterior auditory field (AAF), of the same animal revealed the presence of reciprocal connections with overlapping projections to and from discrete regions within the posterior pseudosylvian and suprasylvian fields (PPF and PSF), suggesting that these connections are frequency specific. In contrast, projections from the primary areas to the anterior dorsal field (ADF) on the anterior ectosylvian gyrus were scattered and non-overlapping, consistent with the non-tonotopic organization of this field. The relative strength of the projections originating in each of the primary fields differed, with A1 predominantly targeting the posterior bank fields PPF and PSF, which in turn project to the ventral posterior field, whereas AAF projects more heavily to the ADF, which then projects to the anteroventral field and the pseudosylvian sulcal cortex. These findings suggest that parallel anterior and posterior processing networks may exist, although the connections between different areas often overlap and interactions were present at all levels.
NASA Astrophysics Data System (ADS)
Xuan, Hejun; Wang, Yuping; Xu, Zhanqi; Hao, Shanshan; Wang, Xiaoli
2017-11-01
Virtualization technology can greatly improve the efficiency of networks by allowing virtual optical networks to share the resources of the physical networks. However, it faces several challenges, such as finding efficient strategies for virtual node mapping, virtual link mapping and spectrum assignment. The problem is even more complex and challenging when the physical elastic optical networks use multi-core fibers. To tackle these challenges, we establish a constrained optimization model to determine the optimal schemes of optical network mapping, core allocation and spectrum assignment. To solve the model efficiently, a tailor-made encoding scheme and crossover and mutation operators are designed. Based on these, an efficient genetic algorithm is proposed to obtain the optimal schemes of virtual node mapping, virtual link mapping, core allocation and spectrum assignment. The simulation experiments are conducted on three widely used networks, and the experimental results show the effectiveness of the proposed model and algorithm.
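The abstract above names a tailor-made encoding scheme with crossover and mutation operators but does not spell them out. A minimal, generic genetic-algorithm skeleton of that general shape, with every operator and parameter purely illustrative (the toy usage below maximizes 1-bits in a bit string, not network mapping):

```python
import random

def genetic_algorithm(fitness, random_individual, crossover, mutate,
                      pop_size=20, generations=50, mutation_rate=0.1):
    """Minimal GA loop: truncation selection, crossover, mutation."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half of the population as parents.
        parents = sorted(population, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = crossover(a, b)
            if random.random() < mutation_rate:
                child = mutate(child)
            children.append(child)
        population = children
    return max(population, key=fitness)

# Toy usage: maximize the number of 1-bits in a length-16 bit string.
best = genetic_algorithm(
    fitness=sum,
    random_individual=lambda: [random.randint(0, 1) for _ in range(16)],
    crossover=lambda a, b: a[:8] + b[8:],          # one-point crossover
    mutate=lambda c: [1 - g if random.random() < 0.1 else g for g in c],
)
```

In the paper's setting the "individual" would instead encode a node/link mapping plus core and spectrum choices, and the fitness would score resource usage under the model's constraints.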
Gavrilescu, M; Rossell, S; Stuart, G W; Shea, T L; Innes-Brown, H; Henshall, K; McKay, C; Sergejew, A A; Copolov, D; Egan, G F
2010-07-01
Previous research has reported auditory processing deficits that are specific to schizophrenia patients with a history of auditory hallucinations (AH). One explanation for these findings is that there are abnormalities in the interhemispheric connectivity of auditory cortex pathways in AH patients; as yet this explanation has not been experimentally investigated. We assessed the interhemispheric connectivity of both primary (A1) and secondary (A2) auditory cortices in n=13 AH patients, n=13 schizophrenia patients without auditory hallucinations (non-AH) and n=16 healthy controls using functional connectivity measures from functional magnetic resonance imaging (fMRI) data. Functional connectivity was estimated from resting state fMRI data using regions of interest defined for each participant based on functional activation maps in response to passive listening to words. Additionally, stimulus-induced responses were regressed out of the stimulus data and the functional connectivity was estimated for the same regions to investigate the reliability of the estimates. AH patients had significantly reduced interhemispheric connectivity in both A1 and A2 when compared with non-AH patients and healthy controls. The latter two groups did not show any differences in functional connectivity. Further, this pattern of findings was similar across the two datasets, indicating the reliability of our estimates. These data have identified a trait deficit specific to AH patients. Since this deficit was characterized within both A1 and A2 it is expected to result in the disruption of multiple auditory functions, for example, the integration of basic auditory information between hemispheres (via A1) and higher-order language processing abilities (via A2).
To, Wing Ting; Ost, Jan; Hart, John; De Ridder, Dirk; Vanneste, Sven
2017-01-01
Tinnitus is the perception of a sound in the absence of a corresponding external sound source. Research has suggested that functional abnormalities in tinnitus patients involve auditory as well as non-auditory brain areas. Transcranial electrical stimulation (tES), such as transcranial direct current stimulation (tDCS) to the dorsolateral prefrontal cortex and transcranial random noise stimulation (tRNS) to the auditory cortex, has demonstrated modulation of brain activity to transiently suppress tinnitus symptoms. Targeting two core regions of the tinnitus network by tES might establish a promising strategy to enhance treatment effects. This proof-of-concept study aims to investigate the effect of a multisite tES treatment protocol on tinnitus intensity and distress. A total of 40 tinnitus patients were enrolled in this study and received either bifrontal tDCS or the multisite treatment of bifrontal tDCS before bilateral auditory cortex tRNS. Both groups were treated in eight sessions (twice a week for 4 weeks). Our results show that the multisite treatment protocol resulted in more pronounced effects than the bifrontal tDCS protocol or the waiting list group, suggesting an added value of auditory cortex tRNS over bifrontal tDCS alone for tinnitus patients. These findings support the involvement of auditory as well as non-auditory brain areas in the pathophysiology of tinnitus and support the efficacy of network stimulation in the treatment of neurological disorders. This multisite tES treatment protocol proved to be safe and feasible for clinical routine in tinnitus patients.
Geissler, Diana B; Ehret, Günter
2004-02-01
Details of brain areas for acoustical Gestalt perception and the recognition of species-specific vocalizations are not known. Here we show how spectral properties and the recognition of the acoustical Gestalt of wriggling calls of mouse pups based on a temporal property are represented in auditory cortical fields and an association area (dorsal field) of the pups' mothers. We stimulated either with a call model releasing maternal behaviour at a high rate (call recognition) or with two models of low behavioural significance (perception without recognition). Brain activation was quantified using c-Fos immunocytochemistry, counting Fos-positive cells in electrophysiologically mapped auditory cortical fields and the dorsal field. A frequency-specific labelling in two primary auditory fields is related to call perception but not to the discrimination of the biological significance of the call models used. Labelling related to call recognition is present in the second auditory field (AII). A left hemisphere advantage of labelling in the dorsoposterior field seems to reflect an integration of call recognition with maternal responsiveness. The dorsal field is activated only in the left hemisphere. The spatial extent of Fos-positive cells within the auditory cortex and its fields is larger in the left than in the right hemisphere. Our data show that a left hemisphere advantage in processing of a species-specific vocalization up to recognition is present in mice. The differential representation of vocalizations of high vs. low biological significance, as seen only in higher-order and not in primary fields of the auditory cortex, is discussed in the context of perceptual strategies.
Scheperle, Rachel A.; Abbas, Paul J.
2014-01-01
Objectives The ability to perceive speech is related to the listener’s ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli. Design Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every-other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex (ACC) with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. 
Speech-perception measures included vowel-discrimination and the Bamford-Kowal-Bench Sentence-in-Noise (BKB-SIN) test. Spatial and spectral selectivity and speech perception were expected to be poorest with MAP 1 (closest electrode spacing) and best with MAP 3 (widest electrode spacing). Relationships among the electrophysiological and speech-perception measures were evaluated using mixed-model and simple linear regression analyses. Results All electrophysiological measures were significantly correlated with each other and with speech perception for the mixed-model analysis, which takes into account multiple measures per person (i.e. experimental MAPs). The ECAP measures were the best predictor of speech perception. In the simple linear regression analysis on MAP 3 data, only the cortical measures were significantly correlated with speech; spectral ACC amplitude was the strongest predictor. Conclusions The results suggest that both peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about speech perception. Clinically, it is often desirable to optimize performance for individual CI users. These results suggest that ECAP measures may be the most useful for within-subject applications, when multiple measures are performed to make decisions about processor options. They also suggest that if the goal is to compare performance across individuals based on a single measure, then processing central to the auditory nerve (specifically, cortical measures of discriminability) should be considered. PMID:25658746
Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing
Wilf, Meytal; Ramot, Michal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael
2016-01-01
Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations. PMID:27310812
Theoretical Limitations on Functional Imaging Resolution in Auditory Cortex
Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.
2010-01-01
Functional imaging can reveal detailed organizational structure in cerebral cortical areas, but neuronal response features and local neural interconnectivity can influence the resulting images, possibly limiting the inferences that can be drawn about neural function. Discerning the fundamental principles of organizational structure in the auditory cortex of multiple species has been somewhat challenging historically both with functional imaging and with electrophysiology. A possible limitation affecting any methodology using pooled neuronal measures may be the relative distribution of response selectivity throughout the population of auditory cortex neurons. One neuronal response type inherited from the cochlea, for example, exhibits a receptive field that increases in size (i.e., decreases in selectivity) at higher stimulus intensities. Even though these neurons appear to represent a minority of auditory cortex neurons, they are likely to contribute disproportionately to the activity detected in functional images, especially if intense sounds are used for stimulation. To evaluate the potential influence of neuronal subpopulations upon functional images of primary auditory cortex, a model array representing cortical neurons was probed with virtual imaging experiments under various assumptions about the local circuit organization. As expected, different neuronal subpopulations were activated preferentially under different stimulus conditions. In fact, stimulus protocols that can preferentially excite selective neurons, resulting in a relatively sparse activation map, have the potential to improve the effective resolution of functional auditory cortical images. These experimental results also make predictions about auditory cortex organization that can be tested with refined functional imaging experiments. PMID:20079343
François, Clément; Cunillera, Toni; Garcia, Enara; Laine, Matti; Rodriguez-Fornells, Antoni
2017-04-01
Learning a new language requires the identification of word units from continuous speech (the speech segmentation problem) and mapping them onto conceptual representations (the word-to-world mapping problem). Recent behavioral studies have revealed that the statistical properties found within and across modalities can serve as cues for both processes. However, segmentation and mapping have been largely studied separately, and thus it remains unclear whether both processes can be accomplished at the same time and if they share common neurophysiological features. To address this question, we recorded EEG of 20 adult participants during both an audio-alone speech segmentation task and an audiovisual word-to-picture association task. The participants were tested for both the implicit detection of online mismatches (structural auditory and visual semantic violations) as well as for the explicit recognition of words and word-to-picture associations. The ERP results from the learning phase revealed a delayed learning-related fronto-central negativity (FN400) in the audiovisual condition compared to the audio-alone condition. Interestingly, while online structural auditory violations elicited clear MMN/N200 components in the audio-alone condition, visual-semantic violations induced meaning-related N400 modulations in the audiovisual condition. The present results support the idea that speech segmentation and meaning mapping can take place in parallel and act in synergy to enhance novel word learning.
Jenison, Rick L.; Reale, Richard A.; Armstrong, Amanda L.; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A.
2015-01-01
Spectro-Temporal Receptive Fields (STRFs) were estimated from both multi-unit sorted clusters and high-gamma power responses in human auditory cortex. Intracranial electrophysiological recordings were used to measure responses to a random chord sequence of Gammatone stimuli. Traditional methods for estimating STRFs from single-unit recordings, such as spike-triggered-averages, tend to be noisy and are less robust to other response signals such as local field potentials. We present an extension to recently advanced methods for estimating STRFs from generalized linear models (GLM). A new variant of regression using regularization that penalizes non-zero coefficients is described, which results in a sparse solution. The frequency-time structure of the STRF tends toward grouping in different areas of frequency-time and we demonstrate that group sparsity-inducing penalties applied to GLM estimates of STRFs reduces the background noise while preserving the complex internal structure. The contribution of local spiking activity to the high-gamma power signal was factored out of the STRF using the GLM method, and this contribution was significant in 85 percent of the cases. Although the GLM methods have been used to estimate STRFs in animals, this study examines the detailed structure directly from auditory cortex in the awake human brain. We used this approach to identify an abrupt change in the best frequency of estimated STRFs along posteromedial-to-anterolateral recording locations along the long axis of Heschl’s gyrus. This change correlates well with a proposed transition from core to non-core auditory fields previously identified using the temporal response properties of Heschl’s gyrus recordings elicited by click-train stimuli. PMID:26367010
Sensory maps in the claustrum of the cat.
Olson, C R; Graybiel, A M
1980-12-04
The claustrum is a telencephalic cell group (Fig. 1A, B) possessing widespread reciprocal connections with the neocortex. In this regard, it bears a unique and striking resemblance to the thalamus. We have now examined the anatomical ordering of pathways linking the claustrum with sensory areas of the cat neocortex and, in parallel electrophysiological experiments, have studied the functional organization of claustral sensory zones so identified. Our findings indicate that there are discrete visual and somatosensory subdivisions in the claustrum interconnected with the corresponding primary sensory areas of the neocortex and that the respective zones contain orderly retinotopic and somatotopic maps. A third claustral region receiving fibre projections from the auditory cortex in or near area Ep was found to contain neurones responsive to auditory stimulation. We conclude that loops connecting sensory areas of the neocortex with satellite zones in the claustrum contribute to the early processing of exteroceptive information by the forebrain.
Maps and streams in the auditory cortex: nonhuman primates illuminate human speech processing
Rauschecker, Josef P; Scott, Sophie K
2010-01-01
Speech and language are considered uniquely human abilities: animals have communication systems, but they do not match human linguistic skills in terms of recursive structure and combinatorial power. Yet, in evolution, spoken language must have emerged from neural mechanisms at least partially available in animals. In this paper, we will demonstrate how our understanding of speech perception, one important facet of language, has profited from findings and theory in nonhuman primate studies. Chief among these are physiological and anatomical studies showing that primate auditory cortex, across species, shows patterns of hierarchical structure, topographic mapping and streams of functional processing. We will identify roles for different cortical areas in the perceptual processing of speech and review functional imaging work in humans that bears on our understanding of how the brain decodes and monitors speech. A new model connects structures in the temporal, frontal and parietal lobes linking speech perception and production. PMID:19471271
De Martino, Federico; Moerel, Michelle; Ugurbil, Kamil; Goebel, Rainer; Yacoub, Essa; Formisano, Elia
2015-12-29
Columnar arrangements of neurons with similar preference have been suggested as the fundamental processing units of the cerebral cortex. Within these columnar arrangements, feed-forward information enters at middle cortical layers whereas feedback information arrives at superficial and deep layers. This interplay of feed-forward and feedback processing is at the core of perception and behavior. Here we provide in vivo evidence consistent with a columnar organization of the processing of sound frequency in the human auditory cortex. We measure submillimeter functional responses to sound frequency sweeps at high magnetic fields (7 tesla) and show that frequency preference is stable through cortical depth in primary auditory cortex. Furthermore, we demonstrate that, in this highly columnar cortex, task demands sharpen the frequency tuning in superficial cortical layers more than in middle or deep layers. These findings are pivotal to understanding mechanisms of neural information processing and flow during the active perception of sounds.
Ackermann; Mathiak
1999-11-01
Pure word deafness (auditory verbal agnosia) is characterized by an impairment of auditory comprehension, repetition of verbal material and writing to dictation, whereas spontaneous speech production and reading largely remain unaffected. Sometimes, this syndrome is preceded by complete deafness (cortical deafness) of varying duration. Perception of vowels and suprasegmental features of verbal utterances (e.g., intonation contours) seems to be less disrupted than the processing of consonants and, therefore, might mediate residual auditory functions. Often, lip reading and/or slowing of the speaking rate allows speech comprehension deficits to be compensated for within some limits. Apart from a few exceptions, the available reports of pure word deafness documented a bilateral temporal lesion. In these instances, as a rule, identification of nonverbal (environmental) sounds, perception of music, temporal resolution of sequential auditory cues and/or spatial localization of acoustic events were compromised as well. The observed variable constellation of auditory signs and symptoms in central hearing disorders following bilateral temporal lesions most probably reflects the multitude of functional maps at the level of the auditory cortices, each subserving, as documented in a variety of non-human species, the encoding of specific stimulus parameters. Thus, verbal/nonverbal auditory agnosia may be considered a paradigm of distorted "auditory scene analysis" (Bregman 1990) affecting both primitive and schema-based perceptual processes. It cannot be excluded, however, that disconnection of the Wernicke area from auditory input (Geschwind 1965) and/or an impairment of the suggested "phonetic module" (Liberman 1996) contribute to the observed deficits as well. Conceivably, these latter mechanisms underlie the rare cases of pure word deafness following a lesion restricted to the dominant hemisphere.
Only a few instances of a rather isolated disruption of the discrimination/identification of nonverbal sound sources, in the presence of uncompromised speech comprehension, have been reported so far (nonverbal auditory agnosia). As a rule, unilateral right-sided damage has been found to be the relevant lesion.
Timing in audiovisual speech perception: A mini review and new psychophysical data.
Venezia, Jonathan H; Thurman, Steven M; Matchin, William; George, Sahara E; Hickok, Gregory
2016-02-01
Recent influential models of audiovisual speech perception suggest that visual speech aids perception by generating predictions about the identity of upcoming speech sounds. These models place stock in the assumption that visual speech leads auditory speech in time. However, it is unclear whether and to what extent temporally-leading visual speech information contributes to perception. Previous studies exploring audiovisual-speech timing have relied upon psychophysical procedures that require artificial manipulation of cross-modal alignment or stimulus duration. We introduce a classification procedure that tracks perceptually relevant visual speech information in time without requiring such manipulations. Participants were shown videos of a McGurk syllable (auditory /apa/ + visual /aka/ = perceptual /ata/) and asked to perform phoneme identification (/apa/ yes-no). The mouth region of the visual stimulus was overlaid with a dynamic transparency mask that obscured visual speech in some frames but not others randomly across trials. Variability in participants' responses (~35% identification of /apa/ compared to ~5% in the absence of the masker) served as the basis for classification analysis. The outcome was a high resolution spatiotemporal map of perceptually relevant visual features. We produced these maps for McGurk stimuli at different audiovisual temporal offsets (natural timing, 50-ms visual lead, and 100-ms visual lead). Briefly, temporally-leading (~130 ms) visual information did influence auditory perception. Moreover, several visual features influenced perception of a single speech sound, with the relative influence of each feature depending on both its temporal relation to the auditory signal and its informational content. PMID:26669309
Attentional influences on functional mapping of speech sounds in human auditory cortex.
Obleser, Jonas; Elbert, Thomas; Eulitz, Carsten
2004-07-21
The speech signal contains both information about phonological features such as place of articulation and non-phonological features such as speaker identity. These are different aspects of the 'what'-processing stream (speaker vs. speech content), and here we show that they can be further segregated as they may occur in parallel but within different neural substrates. Subjects listened to two different vowels, each spoken by two different speakers. During one block, they were asked to identify a given vowel irrespectively of the speaker (phonological categorization), while during the other block the speaker had to be identified irrespectively of the vowel (speaker categorization). Auditory evoked fields were recorded using 148-channel magnetoencephalography (MEG), and magnetic source imaging was obtained for 17 subjects. During phonological categorization, a vowel-dependent difference of N100m source location perpendicular to the main tonotopic gradient replicated previous findings. In speaker categorization, the relative mapping of vowels remained unchanged but sources were shifted towards more posterior and more superior locations. These results imply that the N100m reflects the extraction of abstract invariants from the speech signal. This part of the processing is accomplished in auditory areas anterior to AI, which are part of the auditory 'what' system. This network seems to include spatially separable modules for identifying the phonological information and for associating it with a particular speaker that are activated in synchrony but within different regions, suggesting that the 'what' processing can be more adequately modeled by a stream of parallel stages. The relative activation of the parallel processing stages can be modulated by attentional or task demands.
The function of BDNF in the adult auditory system.
Singer, Wibke; Panford-Walsh, Rama; Knipper, Marlies
2014-01-01
The inner ear of vertebrates is specialized to perceive sound, gravity and movements. Each of the specialized sensory organs within the cochlea (sound) and vestibular system (gravity, head movements) transmits information to specific areas of the brain. During development, brain-derived neurotrophic factor (BDNF) orchestrates the survival and outgrowth of afferent fibers connecting the vestibular organ and those regions in the cochlea that map information for low frequency sound to central auditory nuclei and higher-auditory centers. The role of BDNF in the mature inner ear is less understood. This is mainly due to the fact that constitutive BDNF mutant mice are postnatally lethal. Only in the last few years has the improved technology of performing conditional cell specific deletion of BDNF in vivo allowed the study of the function of BDNF in the mature developed organ. This review provides an overview of the current knowledge of the expression pattern and function of BDNF in the peripheral and central auditory system from just prior to the first auditory experience onwards. A special focus will be put on the differential mechanisms in which BDNF drives refinement of auditory circuitries during the onset of sensory experience and in the adult brain. This article is part of the Special Issue entitled 'BDNF Regulation of Synaptic Structure, Function, and Plasticity'. Copyright © 2013 Elsevier Ltd. All rights reserved.
Cortico-cortical connectivity within ferret auditory cortex
Bajo, Victoria M.; Nodal, Fernando R.; King, Andrew J.
2015-01-01
Despite numerous studies of auditory cortical processing in the ferret (Mustela putorius), very little is known about the connections between the different regions of the auditory cortex that have been characterized cytoarchitectonically and physiologically. We examined the distribution of retrograde and anterograde labeling after injecting tracers into one or more regions of ferret auditory cortex. Injections of different tracers at frequency-matched locations in the core areas, the primary auditory cortex (A1) and anterior auditory field (AAF), of the same animal revealed the presence of reciprocal connections with overlapping projections to and from discrete regions within the posterior pseudosylvian and suprasylvian fields (PPF and PSF), suggesting that these connections are frequency specific. In contrast, projections from the primary areas to the anterior dorsal field (ADF) on the anterior ectosylvian gyrus were scattered and non-overlapping, consistent with the non-tonotopic organization of this field. The relative strength of the projections originating in each of the primary fields differed, with A1 predominantly targeting the posterior bank fields PPF and PSF, which in turn project to the ventral posterior field, whereas AAF projects more heavily to the ADF, which then projects to the anteroventral field and the pseudosylvian sulcal cortex. These findings suggest that parallel anterior and posterior processing networks may exist, although the connections between different areas often overlap and interactions were present at all levels. J. Comp. Neurol. 523:2187–2210, 2015. © 2015 Wiley Periodicals, Inc. PMID:25845831
A framework for testing and comparing binaural models.
Dietz, Mathias; Lestang, Jean-Hugues; Majdak, Piotr; Stern, Richard M; Marquardt, Torsten; Ewert, Stephan D; Hartmann, William M; Goodman, Dan F M
2018-03-01
Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results which has led to controversies. This can be best resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but there remain several unresolved questions for which competing model approaches exist. This article discusses a number of current unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with the experimental data. We introduce an auditory model framework, which we believe can become a useful infrastructure for resolving some of the current controversies. It operates models over the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: The experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as the test subject. Copyright © 2017 Elsevier B.V. All rights reserved.
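The proposed framework's core idea, an interface connecting experiment software, an auditory-pathway model, and an artificial observer that answers in the same format as a test subject, can be illustrated schematically. This sketch is not the published framework's actual API; the names and the toy cross-correlation model are invented for illustration:

```python
import math
from dataclasses import dataclass
from typing import Sequence

@dataclass
class Trial:
    """Stimulus produced by the experiment software: one waveform per ear."""
    left: Sequence[float]
    right: Sequence[float]

def pathway_model(trial: Trial) -> float:
    """Toy auditory-pathway model: the best interaural cross-correlation lag
    (in samples) serves as the internal decision variable."""
    best_lag, best_corr = 0, -math.inf
    for lag in range(-5, 6):
        corr = sum(trial.left[i] * trial.right[i + lag]
                   for i in range(5, len(trial.left) - 5))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return float(best_lag)

def artificial_observer(decision_variable: float) -> int:
    """Task-dependent decision stage: returns a response in the same output
    format as a subject's button press in a lateralization task."""
    return 1 if decision_variable > 0 else 0
```

Because the three components communicate only through these narrow interfaces, a different pathway model or decision stage could be swapped in and run over the same experimental paradigm, which is the comparison the framework is meant to enable.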
Newborn infants perceive abstract numbers
Izard, Véronique; Sann, Coralie; Spelke, Elizabeth S.; Streri, Arlette
2009-01-01
Although infants and animals respond to the approximate number of elements in visual, auditory, and tactile arrays, only human children and adults have been shown to possess abstract numerical representations that apply to entities of all kinds (e.g., 7 samurai, seas, or sins). Do abstract numerical concepts depend on language or culture, or do they form a part of humans' innate, core knowledge? Here we show that newborn infants spontaneously associate stationary, visual-spatial arrays of 4–18 objects with auditory sequences of events on the basis of number. Their performance provides evidence for abstract numerical representations at the start of postnatal experience. PMID:19520833
Neurobehavioral Mechanisms of Temporal Processing Deficits In Parkinson’s Disease
2011-01-01
Takahashi, Kuniyuki; Hishida, Ryuichi; Kubota, Yamato; Kudoh, Masaharu; Takahashi, Sugata; Shibuki, Katsuei
2006-03-01
Functional brain imaging using endogenous fluorescence of mitochondrial flavoprotein is useful for investigating mouse cortical activities via the intact skull, which is thin and sufficiently transparent in mice. We applied this method to investigate auditory cortical plasticity regulated by acoustic environments. Normal mice of the C57BL/6 strain, reared in various acoustic environments for at least 4 weeks after birth, were anaesthetized with urethane (1.7 g/kg, i.p.). Auditory cortical images of endogenous green fluorescence in blue light were recorded by a cooled CCD camera via the intact skull. Cortical responses elicited by tonal stimuli (5, 10 and 20 kHz) exhibited mirror-symmetrical tonotopic maps in the primary auditory cortex (AI) and anterior auditory field (AAF). Depression of auditory cortical responses regarding response duration was observed in sound-deprived mice compared with naïve mice reared in a normal acoustic environment. When mice were exposed to an environmental tonal stimulus at 10 kHz for more than 4 weeks after birth, the cortical responses were potentiated in a frequency-specific manner in respect to peak amplitude of the responses in AI, but not for the size of the responsive areas. Changes in AAF were less clear than those in AI. To determine the modified synapses by acoustic environments, neural responses in cortical slices were investigated with endogenous fluorescence imaging. The vertical thickness of responsive areas after supragranular electrical stimulation was significantly reduced in the slices obtained from sound-deprived mice. These results suggest that acoustic environments regulate the development of vertical intracortical circuits in the mouse auditory cortex.
2017-01-01
While a topographic map of auditory space exists in the vertebrate midbrain, it is absent in the forebrain. Yet, both brain regions are implicated in sound localization. The heterogeneous spatial tuning of adjacent sites in the forebrain compared to the midbrain reflects different underlying circuitries, which is expected to affect the correlation structure, i.e., signal (similarity of tuning) and noise (trial-by-trial variability) correlations. Recent studies have drawn attention to the impact of response correlations on the information readout from a neural population. We thus analyzed the correlation structure in midbrain and forebrain regions of the barn owl’s auditory system. Tetrodes were used to record in the midbrain and two forebrain regions, Field L and the downstream auditory arcopallium (AAr), in anesthetized owls. Nearby neurons in the midbrain showed high signal and noise correlations (RNCs), consistent with shared inputs. As previously reported, Field L was arranged in random clusters of similarly tuned neurons. Interestingly, AAr neurons displayed homogeneous monotonic azimuth tuning, while response variability of nearby neurons was significantly less correlated than the midbrain. Using a decoding approach, we demonstrate that low RNC in AAr restricts the potentially detrimental effect it can have on information, assuming a rate code proposed for mammalian sound localization. This study harnesses the power of correlation structure analysis to investigate the coding of auditory space. Our findings demonstrate distinct correlation structures in the auditory midbrain and forebrain, which would be beneficial for a rate-code framework for sound localization in the nontopographic forebrain representation of auditory space. PMID:28674698
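The two correlation measures analyzed above have standard definitions: signal correlation is the correlation of the two neurons' trial-averaged tuning curves, and noise correlation is the correlation of their trial-by-trial residuals. A minimal sketch (the function name and data layout are illustrative, not from the paper):

```python
import numpy as np

def signal_and_noise_correlation(rates_a, rates_b):
    """rates_a, rates_b: (n_stimuli, n_trials) spike counts for two neurons.
    Signal correlation: similarity of the two trial-averaged tuning curves.
    Noise correlation: correlation of trial-by-trial residuals, pooled
    across stimuli after subtracting each stimulus's mean response."""
    tuning_a = rates_a.mean(axis=1)
    tuning_b = rates_b.mean(axis=1)
    r_signal = np.corrcoef(tuning_a, tuning_b)[0, 1]
    resid_a = (rates_a - tuning_a[:, None]).ravel()
    resid_b = (rates_b - tuning_b[:, None]).ravel()
    r_noise = np.corrcoef(resid_a, resid_b)[0, 1]
    return r_signal, r_noise
```

Nearby midbrain neurons in the study would yield high values on both measures, while AAr pairs share tuning (high signal correlation) but show weakly correlated variability (low noise correlation).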
DOT National Transportation Integrated Search
2011-06-01
People with vision impairment have different perception and spatial cognition as compared to the sighted people. Blind pedestrians primarily rely on auditory, olfactory, or tactile feedback to determine spatial location and find their way. They gener...
Brief Report: Arrested Development of Audiovisual Speech Perception in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Stevenson, Ryan A.; Siemann, Justin K.; Woynaroski, Tiffany G.; Schneider, Brittany C.; Eberly, Haley E.; Camarata, Stephen M.; Wallace, Mark T.
2014-01-01
Atypical communicative abilities are a core marker of Autism Spectrum Disorders (ASD). A number of studies have shown that, in addition to auditory comprehension differences, individuals with autism frequently show atypical responses to audiovisual speech, suggesting a multisensory contribution to these communicative differences from their…
Encoding, Memory, and Transcoding Deficits in Childhood Apraxia of Speech
ERIC Educational Resources Information Center
Shriberg, Lawrence D.; Lohmeier, Heather L.; Strand, Edythe A.; Jakielski, Kathy J.
2012-01-01
A central question in Childhood Apraxia of Speech (CAS) is whether the core phenotype is limited to transcoding (planning/programming) deficits or if speakers with CAS also have deficits in auditory-perceptual "encoding" (representational) and/or "memory" (storage and retrieval of representations) processes. We addressed this and other questions…
Visual-Auditory Integration during Speech Imitation in Autism
ERIC Educational Resources Information Center
Williams, Justin H. G.; Massaro, Dominic W.; Peel, Natalie J.; Bosseler, Alexis; Suddendorf, Thomas
2004-01-01
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional "mirror neuron" systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a "virtual" head (Baldi), delivered speech stimuli for…
Neural Processing of Target Distance by Echolocating Bats: Functional Roles of the Auditory Midbrain
Wenstrup, Jeffrey J.; Portfors, Christine V.
2011-01-01
Using their biological sonar, bats estimate distance to avoid obstacles and capture moving prey. The primary distance cue is the delay between the bat's emitted echolocation pulse and the return of an echo. The mustached bat's auditory midbrain (inferior colliculus, IC) is crucial to the analysis of pulse-echo delay. IC neurons are selective for certain delays between frequency modulated (FM) elements of the pulse and echo. One role of the IC is to create these “delay-tuned”, “FM-FM” response properties through a series of spectro-temporal integrative interactions. A second major role of the midbrain is to project target distance information to many parts of the brain. Pathways through auditory thalamus undergo radical reorganization to create highly ordered maps of pulse-echo delay in auditory cortex, likely contributing to perceptual features of target distance analysis. FM-FM neurons in IC also project strongly to pre-motor centers including the pretectum and the pontine nuclei. These pathways may contribute to rapid adjustments in flight, body position, and sonar vocalizations that occur as a bat closes in on a target. PMID:21238485
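The primary distance cue reduces to a simple relation: the emitted pulse travels to the target and the echo travels back, so range = speed_of_sound × delay / 2. A worked example (343 m/s assumes air at roughly 20 °C):

```python
def target_range_m(delay_s: float, speed_of_sound: float = 343.0) -> float:
    """Target distance from pulse-echo delay; the factor of 2 accounts for
    the round trip of the emitted sonar pulse."""
    return speed_of_sound * delay_s / 2.0

# A 10-ms pulse-echo delay corresponds to a target roughly 1.7 m away.
range_m = target_range_m(0.010)
```

Delay-tuned FM-FM neurons in the IC thus implicitly encode particular target distances through the pulse-echo delays they prefer.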
Michalka, Samantha W; Kong, Lingqiang; Rosen, Maya L; Shinn-Cunningham, Barbara G; Somers, David C
2015-08-19
The frontal lobes control wide-ranging cognitive functions; however, functional subdivisions of human frontal cortex are only coarsely mapped. Here, functional magnetic resonance imaging reveals two distinct visual-biased attention regions in lateral frontal cortex, superior precentral sulcus (sPCS) and inferior precentral sulcus (iPCS), anatomically interdigitated with two auditory-biased attention regions, transverse gyrus intersecting precentral sulcus (tgPCS) and caudal inferior frontal sulcus (cIFS). Intrinsic functional connectivity analysis demonstrates that sPCS and iPCS fall within a broad visual-attention network, while tgPCS and cIFS fall within a broad auditory-attention network. Interestingly, we observe that spatial and temporal short-term memory (STM), respectively, recruit visual and auditory attention networks in the frontal lobe, independent of sensory modality. These findings not only demonstrate that both sensory modality and information domain influence frontal lobe functional organization, they also demonstrate that spatial processing co-localizes with visual processing and that temporal processing co-localizes with auditory processing in lateral frontal cortex. Copyright © 2015 Elsevier Inc. All rights reserved.
On pure word deafness, temporal processing, and the left hemisphere.
Stefanatos, Gerry A; Gershkoff, Arthur; Madigan, Sean
2005-07-01
Pure word deafness (PWD) is a rare neurological syndrome characterized by severe difficulties in understanding and reproducing spoken language, with sparing of written language comprehension and speech production. The pathognomonic disturbance of auditory comprehension appears to be associated with a breakdown in processes involved in mapping auditory input to lexical representations of words, but the functional locus of this disturbance and the localization of the responsible lesion have long been disputed. We report here on a woman with PWD resulting from a circumscribed unilateral infarct involving the left superior temporal lobe who demonstrated significant problems processing transitional spectrotemporal cues in both speech and nonspeech sounds. On speech discrimination tasks, she exhibited poor differentiation of stop consonant-vowel syllables distinguished by voicing onset and brief formant frequency transitions. Isolated formant transitions could be reliably discriminated only at very long durations (> 200 ms). By contrast, click fusion threshold, which depends on millisecond-level resolution of brief auditory events, was normal. These results suggest that the problems with speech analysis in this case were not secondary to general constraints on auditory temporal resolution. Rather, they point to a disturbance of left hemisphere auditory mechanisms that preferentially analyze rapid spectrotemporal variations in frequency. The findings have important implications for our conceptualization of PWD and its subtypes.
Walker, Jennifer L; Monjaraz-Fuentes, Fernanda; Pedrow, Christi R; Rector, David M
2011-03-15
We developed a high speed voice coil based whisker stimulator that delivers precise deflections of a single whisker or group of whiskers in a repeatable manner. The device is miniature, quiet, and inexpensive to build. Multiple stimulators fit together for independent stimulation of four or more whiskers. The system can be used with animals under anesthesia as well as awake animals with head-restraint, and does not require trimming the whiskers. The system can deliver 1-2 mm deflections in 2 ms resulting in velocities up to 900 mm/s to attain a wide range of evoked responses. Since auditory artifacts can influence behavioral studies using whisker stimulation, we tested potential effects of auditory noise by recording somatosensory evoked potentials (SEP) with varying auditory click levels, and with/without 80 dBa background white noise. We found that auditory clicks as low as 40 dBa significantly influence the SEP. With background white noise, auditory clicks as low as 50 dBa were still detected in components of the SEP. For behavioral studies where animals must learn to respond to whisker stimulation, these sounds must be minimized. Together, the stimulator and data system can be used for psychometric vigilance tasks, mapping of the barrel cortex and other electrophysiological paradigms. Copyright © 2010 Elsevier B.V. All rights reserved.
The dorsal stream contribution to phonological retrieval in object naming
Faseyitan, Olufunsho; Kim, Junghoon; Coslett, H. Branch
2012-01-01
Meaningful speech, as exemplified in object naming, calls on knowledge of the mappings between word meanings and phonological forms. Phonological errors in naming (e.g. GHOST named as ‘goath’) are commonly seen in persisting post-stroke aphasia and are thought to signal impairment in retrieval of phonological form information. We performed a voxel-based lesion-symptom mapping analysis of 1718 phonological naming errors collected from 106 individuals with diverse profiles of aphasia. Voxels in which lesion status correlated with phonological error rates localized to dorsal stream areas, in keeping with classical and contemporary brain-language models. Within the dorsal stream, the critical voxels were concentrated in premotor cortex, pre- and postcentral gyri and supramarginal gyrus with minimal extension into auditory-related posterior temporal and temporo-parietal cortices. This challenges the popular notion that error-free phonological retrieval requires guidance from sensory traces stored in posterior auditory regions and points instead to sensory-motor processes located further anterior in the dorsal stream. In a separate analysis, we compared the lesion maps for phonological and semantic errors and determined that there was no spatial overlap, demonstrating that the brain segregates phonological and semantic retrieval operations in word production. PMID:23171662
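Voxel-based lesion-symptom mapping relates lesion status to a behavioral score at every voxel. A minimal sketch of the per-voxel statistic (here a point-biserial correlation; the data layout and function name are illustrative, and real VLSM analyses typically use voxelwise t-tests with multiple-comparison correction):

```python
import numpy as np

def vlsm_map(lesions, scores):
    """lesions: (n_patients, n_voxels) array, 1.0 = voxel lesioned.
    scores: (n_patients,) behavioral measure (e.g. phonological error rate).
    Returns the per-voxel correlation between lesion status and score."""
    L = lesions - lesions.mean(axis=0)   # center each voxel column
    s = scores - scores.mean()           # center the behavioral score
    denom = np.sqrt((L ** 2).sum(axis=0) * (s ** 2).sum())
    return (L * s[:, None]).sum(axis=0) / denom
```

Voxels where the map is large are those whose lesion status tracks the error rate, which is how the critical dorsal-stream regions above were identified.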
Auditory Sensory Substitution is Intuitive and Automatic with Texture Stimuli
Stiles, Noelle R. B.; Shimojo, Shinsuke
2015-01-01
Millions of people are blind worldwide. Sensory substitution (SS) devices (e.g., vOICe) can assist the blind by encoding a video stream into a sound pattern, recruiting visual brain areas for auditory analysis via crossmodal interactions and plasticity. SS devices often require extensive training to attain limited functionality. In contrast to conventional attention-intensive SS training that starts with visual primitives (e.g., geometrical shapes), we argue that sensory substitution can be engaged efficiently by using stimuli (such as textures) associated with intrinsic crossmodal mappings. Crossmodal mappings link images with sounds and tactile patterns. We show that intuitive SS sounds can be matched to the correct images by naive sighted participants just as well as by intensively-trained participants. This result indicates that existing crossmodal interactions and amodal sensory cortical processing may be as important in the interpretation of patterns by SS as crossmodal plasticity (e.g., the strengthening of existing connections or the formation of new ones), especially at the earlier stages of SS usage. An SS training procedure based on crossmodal mappings could both considerably improve participant performance and shorten training times, thereby enabling SS devices to significantly expand blind capabilities. PMID:26490260
Attentional influences on functional mapping of speech sounds in human auditory cortex
Obleser, Jonas; Elbert, Thomas; Eulitz, Carsten
2004-01-01
Background: The speech signal contains both information about phonological features such as place of articulation and non-phonological features such as speaker identity. These are different aspects of the 'what'-processing stream (speaker vs. speech content), and here we show that they can be further segregated as they may occur in parallel but within different neural substrates. Subjects listened to two different vowels, each spoken by two different speakers. During one block, they were asked to identify a given vowel irrespectively of the speaker (phonological categorization), while during the other block the speaker had to be identified irrespectively of the vowel (speaker categorization). Auditory evoked fields were recorded using 148-channel magnetoencephalography (MEG), and magnetic source imaging was obtained for 17 subjects. Results: During phonological categorization, a vowel-dependent difference of N100m source location perpendicular to the main tonotopic gradient replicated previous findings. In speaker categorization, the relative mapping of vowels remained unchanged but sources were shifted towards more posterior and more superior locations. Conclusions: These results imply that the N100m reflects the extraction of abstract invariants from the speech signal. This part of the processing is accomplished in auditory areas anterior to AI, which are part of the auditory 'what' system. This network seems to include spatially separable modules for identifying the phonological information and for associating it with a particular speaker that are activated in synchrony but within different regions, suggesting that the 'what' processing can be more adequately modeled by a stream of parallel stages. The relative activation of the parallel processing stages can be modulated by attentional or task demands. PMID:15268765
When music is salty: The crossmodal associations between sound and taste.
Guetta, Rachel; Loui, Psyche
2017-01-01
Here we investigate associations between complex auditory and complex taste stimuli. A novel piece of music was composed and recorded in four different styles of musical articulation to reflect the four basic tastes groups (sweet, sour, salty, bitter). In Experiment 1, participants performed above chance at pairing the music clips with corresponding taste words. Experiment 2 uses multidimensional scaling to interpret how participants categorize these musical stimuli, and to show that auditory categories can be organized in a similar manner as taste categories. Experiment 3 introduces four different flavors of custom-made chocolate ganache and shows that participants can match music clips with the corresponding taste stimuli with above-chance accuracy. Experiment 4 demonstrates the partial role of pleasantness in crossmodal mappings between sound and taste. The present findings confirm that individuals are able to make crossmodal associations between complex auditory and gustatory stimuli, and that valence may mediate multisensory integration in the general population.
Karmakar, Kajari; Narita, Yuichi; Fadok, Jonathan; Ducret, Sebastien; Loche, Alberto; Kitazawa, Taro; Genoud, Christel; Di Meglio, Thomas; Thierry, Raphael; Bacelo, Joao; Lüthi, Andreas; Rijli, Filippo M
2017-01-03
Tonotopy is a hallmark of auditory pathways and provides the basis for sound discrimination. Little is known about the involvement of transcription factors in brainstem cochlear neurons orchestrating the tonotopic precision of pre-synaptic input. We found that in the absence of Hoxa2 and Hoxb2 function in Atoh1-derived glutamatergic bushy cells of the anterior ventral cochlear nucleus, broad input topography and sound transmission were largely preserved. However, fine-scale synaptic refinement and sharpening of isofrequency bands of cochlear neuron activation upon pure tone stimulation were impaired in Hox2 mutants, resulting in defective sound-frequency discrimination in behavioral tests. These results establish a role for Hox factors in tonotopic refinement of connectivity and in ensuring the precision of sound transmission in the mammalian auditory circuit. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
Rapid tuning shifts in human auditory cortex enhance speech intelligibility
Holdgraf, Christopher R.; de Heer, Wendy; Pasley, Brian; Rieger, Jochem; Crone, Nathan; Lin, Jack J.; Knight, Robert T.; Theunissen, Frédéric E.
2016-01-01
Experience shapes our perception of the world on a moment-to-moment basis. This robust perceptual effect of experience parallels a change in the neural representation of stimulus features, though the nature of this representation and its plasticity are not well-understood. Spectrotemporal receptive field (STRF) mapping describes the neural response to acoustic features, and has been used to study contextual effects on auditory receptive fields in animal models. We performed a STRF plasticity analysis on electrophysiological data from recordings obtained directly from the human auditory cortex. Here, we report rapid, automatic plasticity of the spectrotemporal response of recorded neural ensembles, driven by previous experience with acoustic and linguistic information, and with a neurophysiological effect in the sub-second range. This plasticity reflects increased sensitivity to spectrotemporal features, enhancing the extraction of more speech-like features from a degraded stimulus and providing the physiological basis for the observed ‘perceptual enhancement' in understanding speech. PMID:27996965
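An STRF models the neural response at time t as a weighted sum of recent spectrogram energy across frequency and time lag. A minimal ridge-regression estimator, a common way to fit STRFs, though not necessarily the exact method used in this study:

```python
import numpy as np

def fit_strf(spectrogram, response, n_lags, ridge=1.0):
    """spectrogram: (n_freq, n_time); response: (n_time,).
    Models response[t] ~ sum over f, d of strf[f, d] * spectrogram[f, t - d]
    for lags d = 0 .. n_lags-1, solved as ridge-regularized least squares.
    Returns strf with shape (n_freq, n_lags)."""
    n_freq, n_time = spectrogram.shape
    X = np.zeros((n_time - n_lags + 1, n_freq * n_lags))
    for t in range(n_lags - 1, n_time):
        # Lagged slice, ordered lag 0 (current frame) to lag n_lags - 1.
        X[t - n_lags + 1] = spectrogram[:, t - n_lags + 1:t + 1][:, ::-1].ravel()
    y = response[n_lags - 1:]
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
    return w.reshape(n_freq, n_lags)
```

Refitting the STRF over successive time windows, as in the plasticity analysis above, would reveal how the ensemble's spectrotemporal sensitivity shifts with context.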
Atypical coordination of cortical oscillations in response to speech in autism
Jochaut, Delphine; Lehongre, Katia; Saitovitch, Ana; Devauchelle, Anne-Dominique; Olasagasti, Itsaso; Chabane, Nadia; Zilbovicius, Monica; Giraud, Anne-Lise
2015-01-01
Subjects with autism often show language difficulties, but it is unclear how they relate to neurophysiological anomalies of cortical speech processing. We used combined EEG and fMRI in 13 subjects with autism and 13 control participants and show that in autism, gamma and theta cortical activity do not engage synergistically in response to speech. Theta activity in left auditory cortex fails to track speech modulations, and to down-regulate gamma oscillations in the group with autism. This deficit predicts the severity of both verbal impairment and autism symptoms in the affected sample. Finally, we found that oscillation-based connectivity between auditory and other language cortices is altered in autism. These results suggest that the verbal disorder in autism could be associated with an altered balance of slow and fast auditory oscillations, and that this anomaly could compromise the mapping between sensory input and higher-level cognitive representations. PMID:25870556
Moerel, Michelle; De Martino, Federico; Kemper, Valentin G; Schmitter, Sebastian; Vu, An T; Uğurbil, Kâmil; Formisano, Elia; Yacoub, Essa
2018-01-01
Following rapid technological advances, ultra-high field functional MRI (fMRI) enables exploring correlates of neuronal population activity at an increasing spatial resolution. However, as the fMRI blood-oxygenation-level-dependent (BOLD) contrast is a vascular signal, the spatial specificity of fMRI data is ultimately determined by the characteristics of the underlying vasculature. At 7T, fMRI measurement parameters determine the relative contribution of the macro- and microvasculature to the acquired signal. Here we investigate how these parameters affect relevant high-end fMRI analyses such as encoding, decoding, and submillimeter mapping of voxel preferences in the human auditory cortex. Specifically, we compare a T 2 * weighted fMRI dataset, obtained with 2D gradient echo (GE) EPI, to a predominantly T 2 weighted dataset obtained with 3D GRASE. We first investigated the decoding accuracy based on two encoding models that represented different hypotheses about auditory cortical processing. This encoding/decoding analysis profited from the large spatial coverage and sensitivity of the T 2 * weighted acquisitions, as evidenced by a significantly higher prediction accuracy in the GE-EPI dataset compared to the 3D GRASE dataset for both encoding models. The main disadvantage of the T 2 * weighted GE-EPI dataset for encoding/decoding analyses was that the prediction accuracy exhibited cortical depth dependent vascular biases. However, we propose that the comparison of prediction accuracy across the different encoding models may be used as a post processing technique to salvage the spatial interpretability of the GE-EPI cortical depth-dependent prediction accuracy. Second, we explored the mapping of voxel preferences. Large-scale maps of frequency preference (i.e., tonotopy) were similar across datasets, yet the GE-EPI dataset was preferable due to its larger spatial coverage and sensitivity. 
However, submillimeter tonotopy maps revealed biases in assigned frequency preference and selectivity for the GE-EPI dataset, but not for the 3D GRASE dataset. Thus, a T2-weighted acquisition is recommended if high specificity in tonotopic maps is required. In conclusion, different fMRI acquisitions were better suited for different analyses. It is therefore critical that any sequence parameter optimization considers the eventual intended fMRI analyses and the nature of the neuroscience questions being asked. Copyright © 2017 Elsevier Inc. All rights reserved.
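The encoding/decoding comparison above reduces to asking which feature model best predicts measured responses. A minimal, purely illustrative sketch (synthetic data in plain Python; the variable names, noise levels, and "features" are invented and do not come from the study) of scoring two competing encoding models by prediction accuracy:

```python
import math
import random

def pearson_r(x, y):
    # Prediction accuracy: correlation between predicted and observed responses
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

rng = random.Random(0)
n_stim = 200
# Simulated voxel response driven by a frequency-tuning feature plus noise
feature = [rng.gauss(0, 1) for _ in range(n_stim)]
observed = [f + rng.gauss(0, 0.5) for f in feature]

# Model 1 encodes the relevant feature; model 2 encodes an unrelated one
pred_model1 = feature
pred_model2 = [rng.gauss(0, 1) for _ in range(n_stim)]

acc1 = pearson_r(pred_model1, observed)
acc2 = pearson_r(pred_model2, observed)
# The better-matched encoding model yields the higher prediction accuracy
```

In practice such accuracies are computed per voxel (or per cortical depth) and then compared across models, which is the comparison the abstract proposes as a bias-correcting post-processing step.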
Olivetti Belardinelli, Marta; Santangelo, Valerio
2005-07-08
This paper examines the characteristics of spatial attention orienting in situations of visual impairment. Two groups of subjects, respectively schizophrenic and blind, with different degrees of visual spatial information impairment, were tested. In Experiment 1, the schizophrenic subjects were instructed to detect an auditory target, which was preceded by a visual cue. The cue could appear in the same location as the target, or separated from it respectively by the vertical visual meridian (VM), the vertical head-centered meridian (HCM) or another meridian. Similarly to normal subjects tested with the same paradigm (Ferlazzo, Couyoumdjian, Padovani, and Olivetti Belardinelli, 2002), schizophrenic subjects showed slower reaction times (RTs) when cued, and when the target locations were on the opposite sides of the HCM. This HCM effect strengthens the assumption that different auditory and visual spatial maps underlie the representation of attention orienting mechanisms. In Experiment 2, blind subjects were asked to detect an auditory target, which had been preceded by an auditory cue, while staring at an imaginary point. The point was located either to the left or to the right, in order to control for ocular movements and maintain the dissociation between the HCM and the VM. Differences between crossing and no-crossing conditions of HCM were not found. Therefore, it is possible to consider the HCM effect as a consequence of the interaction between visual and auditory modalities. Related theoretical issues are also discussed.
Ylinen, Sari; Nora, Anni; Leminen, Alina; Hakala, Tero; Huotilainen, Minna; Shtyrov, Yury; Mäkelä, Jyrki P; Service, Elisabet
2015-06-01
Speech production, both overt and covert, down-regulates the activation of auditory cortex. This is thought to be due to forward prediction of the sensory consequences of speech, contributing to a feedback control mechanism for speech production. Critically, however, these regulatory effects should be specific to speech content to enable accurate speech monitoring. To determine the extent to which such forward prediction is content-specific, we recorded the brain's neuromagnetic responses to heard multisyllabic pseudowords during covert rehearsal in working memory, contrasted with a control task. The cortical auditory processing of target syllables was significantly suppressed during rehearsal compared with control, but only when they matched the rehearsed items. This critical specificity to speech content enables accurate speech monitoring by forward prediction, as proposed by current models of speech production. The one-to-one phonological motor-to-auditory mappings also appear to serve the maintenance of information in phonological working memory. Further findings of right-hemispheric suppression in the case of whole-item matches and left-hemispheric enhancement for last-syllable mismatches suggest that speech production is monitored by 2 auditory-motor circuits operating on different timescales: Finer grain in the left versus coarser grain in the right hemisphere. Taken together, our findings provide hemisphere-specific evidence of the interface between inner and heard speech. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Wang, Qiuju; Gu, Rui; Han, Dongyi; Yang, Weiyan
2003-09-01
Auditory neuropathy is a sensorineural hearing disorder characterized by absent or abnormal auditory brainstem responses and normal cochlear outer hair cell function as measured by otoacoustic emission recordings. Many risk factors are thought to be involved in its etiology and pathophysiology. Four Chinese pedigrees with familial auditory neuropathy were presented to demonstrate involvement of genetic factors in the etiology of auditory neuropathy. Probands of the above-mentioned pedigrees, who had been diagnosed with auditory neuropathy, were evaluated and followed in the Department of Otolaryngology-Head and Neck Surgery, Chinese People's Liberation Army General Hospital (Beijing, China). Their family members were studied, and the pedigree maps established. History of illness, physical examination, pure-tone audiometry, acoustic reflex, auditory brainstem responses, and transient evoked and distortion-product otoacoustic emissions were obtained from members of these families. Some subjects received vestibular caloric testing, computed tomography scan of the temporal bone, and electrocardiography to exclude other possible neuropathic disorders. In most affected patients, hearing loss of various degrees and speech discrimination difficulties started at 10 to 16 years of age. Their audiological evaluation showed absence of acoustic reflex and auditory brainstem responses. As expected in auditory neuropathy, these patients exhibited near-normal cochlear outer hair cell function as shown in distortion product otoacoustic emission recordings. Pure-tone audiometry revealed hearing loss ranging from mild to profound in these patients. Different inheritance patterns were observed in the four families. In Pedigree I, 7 male patients were identified among 43 family members, exhibiting an X-linked recessive pattern. Affected brothers were found in Pedigrees II and III, whereas in Pedigree IV, two sisters were affected.
All the patients were otherwise normal without evidence of peripheral neuropathy at the time of writing. Patients with characteristics of nonsyndromic hereditary auditory neuropathy were identified in one large and three smaller Chinese families. Pedigree analysis suggested an X-linked, recessive hereditary pattern in one pedigree and autosomal recessive inheritances in the other three pedigrees. The phenotypes in the study were typical of auditory neuropathy; they were transmitted in different inheritance patterns, indicating clinical and genetic heterogeneity of this disorder. The observed inheritance and clinical audiological findings are different from those previously described for nonsyndromic low-frequency sensorineural hearing loss. This information should facilitate future molecular linkage analyses and positional cloning for the relative genes contributing to auditory neuropathy.
Perceptual Plasticity for Auditory Object Recognition
Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.
2017-01-01
In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. 
To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition cannot be divorced from the short-term context in which an auditory object is presented. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed. PMID:28588524
Research and Studies Directory for Manpower, Personnel, and Training
1988-01-01
314-889-6505 PSYCHOPHYSIOLOGICAL MAPPING OF COGNITIVE PROCESSES SUGA N* WASHINGTON UNIV ST LOUIS MO 314-889-6805 CONTROL OF BIOSONAR BEHAVIOR BY THE...VISUAL PERCEPTION CONTROL OF BIOSONAR BEHAVIOR BY THE AUDITORY CORTEX DICHOTIC LISTENING TO COMPLEX SOUNDS: EFFECTS OF STIMULUS CHARACTERISTICS AND
Berding, Georg; Wilke, Florian; Rode, Thilo; Haense, Cathleen; Joseph, Gert; Meyer, Geerd J; Mamach, Martin; Lenarz, Minoo; Geworski, Lilli; Bengel, Frank M; Lenarz, Thomas; Lim, Hubert H
2015-01-01
Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus. PMID:26046763
Temporal lobe networks supporting the comprehension of spoken words.
Bonilha, Leonardo; Hillis, Argye E; Hickok, Gregory; den Ouden, Dirk B; Rorden, Chris; Fridriksson, Julius
2017-09-01
Auditory word comprehension is a cognitive process that involves the transformation of auditory signals into abstract concepts. Traditional lesion-based studies of stroke survivors with aphasia have suggested that neocortical regions adjacent to auditory cortex are primarily responsible for word comprehension. However, recent primary progressive aphasia and normal neurophysiological studies have challenged this concept, suggesting that the left temporal pole is crucial for word comprehension. Due to its vasculature, the temporal pole is not commonly completely lesioned in stroke survivors and this heterogeneity may have prevented its identification in lesion-based studies of auditory comprehension. We aimed to resolve this controversy using a combined voxel-based and structural connectome-lesion symptom mapping approach, since cortical dysfunction after stroke can arise from cortical damage or from white matter disconnection. Magnetic resonance imaging (T1-weighted and diffusion tensor imaging-based structural connectome), auditory word comprehension and object recognition tests were obtained from 67 chronic left hemisphere stroke survivors. We observed that damage to the inferior temporal gyrus, to the fusiform gyrus and to a white matter network including the left posterior temporal region and its connections to the middle temporal gyrus, inferior temporal gyrus, and cingulate cortex, was associated with word comprehension difficulties after factoring out object recognition. These results suggest that the posterior lateral and inferior temporal regions are crucial for word comprehension, serving as a hub to integrate auditory and conceptual processing. Early processing linking auditory words to concepts is situated in posterior lateral temporal regions, whereas additional and deeper levels of semantic processing likely require more anterior temporal regions. © The Author (2017).
Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
2017-01-01
In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory–visual (AV) or visual–auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV–VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV–VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV–VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AVmaps = VAmaps versus AVmaps ≠ VAmaps. The tRSA results favored the AVmaps ≠ VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems. 
SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA). PMID:28450537
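The tRSA logic above, correlating the AV and VA topographies at each time point and asking which similarity model the resulting time course favors, can be sketched with synthetic electrode maps (illustrative only; electrode counts, noise levels, and all variable names are invented, not taken from the study):

```python
import math
import random

def spatial_corr(a, b):
    # Pearson correlation across electrodes between two scalp maps
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

rng = random.Random(1)
n_elec, n_time = 32, 50
# Condition-specific topographies, stable over time apart from noise
topo_av = [rng.gauss(0, 1) for _ in range(n_elec)]
topo_va = [rng.gauss(0, 1) for _ in range(n_elec)]
av = [[v + rng.gauss(0, 0.3) for v in topo_av] for _ in range(n_time)]
va = [[v + rng.gauss(0, 0.3) for v in topo_va] for _ in range(n_time)]

# Across-condition similarity at each time point (the tRSA time course)
across = [spatial_corr(av[t], va[t]) for t in range(n_time)]
# Within-condition similarity between successive time points, as a baseline
within = [spatial_corr(av[t], av[t + 1]) for t in range(n_time - 1)]

mean_across = sum(across) / len(across)
mean_within = sum(within) / len(within)
# within >> across at every time point favors the AVmaps != VAmaps model
```

Consistently low cross-condition map similarity, relative to a within-condition baseline, is the pattern that the abstract reports as favoring distinct AV and VA neural pathways.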
Cognitive/emotional models for human behavior representation in 3D avatar simulations
NASA Astrophysics Data System (ADS)
Peterson, James K.
2004-08-01
Simplified models of human cognition and emotional response are presented which are based on models of auditory/visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex which is based on new isocortex models presented recently by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher level abstract (meta level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python, etc.) and efficient use of legacy code.
Associating mapping of stigma characteristics using the USDA rice core collection
USDA-ARS?s Scientific Manuscript database
A mini-core from the USDA rice core collection was phenotyped for nine traits of stigma and spikelet and genotyped with 109 DNA markers. Marker-trait association mapping was used to identify the regions associated with the nine traits. Resulting associations were adjusted using false discovery rate ...
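The false-discovery-rate adjustment mentioned above is typically the Benjamini-Hochberg step-up procedure. A self-contained sketch (the p-values are hypothetical, not the study's marker-trait results):

```python
def benjamini_hochberg(pvals):
    # Step-up FDR adjustment: adjusted p_(i) = min over j >= i of p_(j) * m / j
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for k, i in enumerate(reversed(order)):  # walk from the largest p down
        rank = m - k
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# Hypothetical marker-trait association p-values
pvals = [0.005, 0.009, 0.05, 0.5]
adjusted = benjamini_hochberg(pvals)
# adjusted == [0.018, 0.018, 0.0667, 0.5] (rounded); note the smallest raw
# p-value is pulled up to 0.018 by the monotonicity constraint
```

Markers whose adjusted p-value falls below the chosen FDR threshold are the ones reported as trait-associated regions.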
ERIC Educational Resources Information Center
Northwest Evaluation Association, 2013
2013-01-01
While many educators expect the Common Core State Standards (CCSS) to be more rigorous than previous state standards, some wonder if the transition to CCSS and to a Common Core aligned MAP test will have an impact on their students' RIT scores or the NWEA norms. MAP assessments use a proprietary scale known as the RIT (Rasch unit) scale to measure…
Background noise exerts diverse effects on the cortical encoding of foreground sounds.
Malone, B J; Heiser, Marc A; Beitel, Ralph E; Schreiner, Christoph E
2017-08-01
In natural listening conditions, many sounds must be detected and identified in the context of competing sound sources, which function as background noise. Traditionally, noise is thought to degrade the cortical representation of sounds by suppressing responses and increasing response variability. However, recent studies of neural network models and brain slices have shown that background synaptic noise can improve the detection of signals. Because acoustic noise affects the synaptic background activity of cortical networks, it may improve the cortical responses to signals. We used spike train decoding techniques to determine the functional effects of a continuous white noise background on the responses of clusters of neurons in auditory cortex to foreground signals, specifically frequency-modulated sweeps (FMs) of different velocities, directions, and amplitudes. Whereas the addition of noise progressively suppressed the FM responses of some cortical sites in the core fields with decreasing signal-to-noise ratios (SNRs), the stimulus representation remained robust or was even significantly enhanced at specific SNRs in many others. Even though the background noise level was typically not explicitly encoded in cortical responses, significant information about noise context could be decoded from cortical responses on the basis of how the neural representation of the foreground sweeps was affected. These findings demonstrate significant diversity in signal-in-noise processing even within the core auditory fields that could support noise-robust hearing across a wide range of listening conditions. NEW & NOTEWORTHY The ability to detect and discriminate sounds in background noise is critical for our ability to communicate. The neural basis of robust perceptual performance in noise is not well understood.
We identified neuronal populations in core auditory cortex of squirrel monkeys that differ in how they process foreground signals in background noise and that may contribute to robust signal representation and discrimination in acoustic environments with prominent background noise. Copyright © 2017 the American Physiological Society.
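The decoding logic, reading stimulus identity out of population spike counts and asking how accuracy changes with SNR, can be sketched with a toy nearest-centroid decoder (synthetic spike counts; the response templates, noise levels, and names are invented for illustration and are not the study's decoding method):

```python
import random

def decode_accuracy(noise_sd, n_trials=200, seed=2):
    rng = random.Random(seed)
    # Hypothetical mean spike counts of a 3-site cluster for two FM sweeps
    templates = {"up": [20.0, 5.0, 12.0], "down": [6.0, 18.0, 11.0]}
    correct = 0
    for _ in range(n_trials):
        true_label = rng.choice(["up", "down"])
        # Single-trial counts: template plus additive noise, floored at zero
        counts = [max(0.0, m + rng.gauss(0, noise_sd))
                  for m in templates[true_label]]
        # Nearest-centroid decoder: closest template in squared distance wins
        guess = min(templates, key=lambda k: sum((c - m) ** 2
                                                 for c, m in zip(counts, templates[k])))
        correct += guess == true_label
    return correct / n_trials

acc_quiet = decode_accuracy(noise_sd=3.0)    # high SNR: near-perfect decoding
acc_noisy = decode_accuracy(noise_sd=12.0)   # low SNR: degraded but above chance
```

Whether accuracy degrades, holds, or even improves as noise is added is exactly the site-by-site diversity the abstract describes.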
Neural Mechanisms Underlying Cross-Modal Phonetic Encoding.
Shahin, Antoine J; Backer, Kristina C; Rosenblum, Lawrence D; Kerlin, Jess R
2018-02-14
Audiovisual (AV) integration is essential for speech comprehension, especially in adverse listening situations. Divergent, but not mutually exclusive, theories have been proposed to explain the neural mechanisms underlying AV integration. One theory advocates that this process occurs via interactions between the auditory and visual cortices, as opposed to fusion of AV percepts in a multisensory integrator. Building upon this idea, we proposed that AV integration in spoken language reflects visually induced weighting of phonetic representations at the auditory cortex. EEG was recorded while male and female human subjects watched and listened to videos of a speaker uttering consonant vowel (CV) syllables /ba/ and /fa/, presented in Auditory-only, AV congruent or incongruent contexts. Subjects reported whether they heard /ba/ or /fa/. We hypothesized that vision alters phonetic encoding by dynamically weighting which phonetic representation in the auditory cortex is strengthened or weakened. That is, when subjects are presented with visual /fa/ and acoustic /ba/ and hear /fa/ (illusion-fa), the visual input strengthens the weighting of the phone /f/ representation. When subjects are presented with visual /ba/ and acoustic /fa/ and hear /ba/ (illusion-ba), the visual input weakens the weighting of the phone /f/ representation. Indeed, we found an enlarged N1 auditory evoked potential when subjects perceived illusion-ba, and a reduced N1 when they perceived illusion-fa, mirroring the N1 behavior for /ba/ and /fa/ in Auditory-only settings. These effects were especially pronounced in individuals with more robust illusory perception. These findings provide evidence that visual speech modifies phonetic encoding at the auditory cortex. SIGNIFICANCE STATEMENT The current study presents evidence that audiovisual integration in spoken language occurs when one modality (vision) acts on representations of a second modality (audition).
Using the McGurk illusion, we show that visual context primes phonetic representations at the auditory cortex, altering the auditory percept, evidenced by changes in the N1 auditory evoked potential. This finding reinforces the theory that audiovisual integration occurs via visual networks influencing phonetic representations in the auditory cortex. We believe that this will lead to the generation of new hypotheses regarding cross-modal mapping, particularly whether it occurs via direct or indirect routes (e.g., via a multisensory mediator). Copyright © 2018 the authors 0270-6474/18/381835-15$15.00/0.
Magnetic resonance imaging abnormalities in familial temporal lobe epilepsy with auditory auras.
Kobayashi, Eliane; Santos, Neide F; Torres, Fabio R; Secolin, Rodrigo; Sardinha, Luiz A C; Lopez-Cendes, Iscia; Cendes, Fernando
2003-11-01
Two forms of familial temporal lobe epilepsy (FTLE) have been described: mesial FTLE and FTLE with auditory auras. The gene responsible for mesial FTLE has not been mapped yet, whereas mutations in the LGI1 (leucine-rich, glioma-inactivated 1) gene, localized on chromosome 10q, have been found in FTLE with auditory auras. To describe magnetic resonance imaging (MRI) findings in patients with FTLE with auditory auras. We performed detailed clinical and molecular studies as well as MRI evaluation (including volumetry) in all available individuals from one family segregating FTLE with auditory auras. We evaluated 18 of 23 possibly affected individuals, and 13 patients reported auditory auras. In one patient, auditory auras were associated with déjà vu; in one patient, with ictal aphasia; and in 2 patients, with visual misperception. Most patients were not taking medication at the time, although all of them reported sporadic auras. Two-point lod scores were positive for 7 genotyped markers on chromosome 10q, and a Zmax of 6.35 was achieved with marker D10S185 at a recombination fraction of 0.0. Nucleotide sequence analysis of the LGI1 gene showed a point mutation, VIIIS7(-2)A-G, in all affected individuals. Magnetic resonance imaging was performed in 22 individuals (7 asymptomatic, 4 of them carriers of the affected haplotype on chromosome 10q and the VIIIS7[-2]A-G mutation). Lateral temporal lobe malformations were identified by visual analysis in 10 individuals, 2 of them with global enlargement demonstrated by volumetry. Mildly reduced hippocampi were observed in 4 individuals. In this family with FTLE with auditory auras, we found developmental abnormalities in the lateral cortex of the temporal lobes in 53% of the affected individuals. In contrast with mesial FTLE, none of the affected individuals had MRI evidence of hippocampal sclerosis.
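For context on the linkage statistic: a two-point lod score compares the likelihood of the marker data at a recombination fraction theta against free recombination (theta = 0.5). A phase-known sketch (the meiosis counts below are hypothetical, not the family's actual genotype data): zero recombinants among 21 informative meioses at theta = 0 gives Z = 21 * log10(2), about 6.32, the same ballpark as the reported Zmax of 6.35.

```python
import math

def lod_score(n_meioses, n_recomb, theta):
    # Phase-known two-point lod: Z = log10( L(theta) / L(0.5) ),
    # with L(theta) proportional to theta^R * (1 - theta)^(N - R)
    if theta == 0.0 and n_recomb > 0:
        return float("-inf")  # recombinants are impossible at theta = 0
    l_theta = (theta ** n_recomb) * ((1 - theta) ** (n_meioses - n_recomb))
    l_null = 0.5 ** n_meioses
    return math.log10(l_theta / l_null)

z = lod_score(21, 0, 0.0)  # ≈ 6.32; Z > 3 is the conventional linkage threshold
```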
Representations of Pitch and Timbre Variation in Human Auditory Cortex
2017-01-01
Pitch and timbre are two primary dimensions of auditory perception, but how they are represented in the human brain remains a matter of contention. Some animal studies of auditory cortical processing have suggested modular processing, with different brain regions preferentially coding for pitch or timbre, whereas other studies have suggested a distributed code for different attributes across the same population of neurons. This study tested whether variations in pitch and timbre elicit activity in distinct regions of the human temporal lobes. Listeners were presented with sequences of sounds that varied in either fundamental frequency (eliciting changes in pitch) or spectral centroid (eliciting changes in brightness, an important attribute of timbre), with the degree of pitch or timbre variation in each sequence parametrically manipulated. The BOLD responses from auditory cortex increased with increasing sequence variance along each perceptual dimension. The spatial extent, region, and laterality of the cortical regions most responsive to variations in pitch or timbre at the univariate level of analysis were largely overlapping. However, patterns of activation in response to pitch or timbre variations were discriminable in most subjects at an individual level using multivoxel pattern analysis, suggesting a distributed coding of the two dimensions bilaterally in human auditory cortex. SIGNIFICANCE STATEMENT Pitch and timbre are two crucial aspects of auditory perception. Pitch governs our perception of musical melodies and harmonies, and conveys both prosodic and (in tone languages) lexical information in speech. Brightness—an aspect of timbre or sound quality—allows us to distinguish different musical instruments and speech sounds. Frequency-mapping studies have revealed tonotopic organization in primary auditory cortex, but the use of pure tones or noise bands has precluded the possibility of dissociating pitch from brightness. 
Our results suggest a distributed code, with no clear anatomical distinctions between auditory cortical regions responsive to changes in either pitch or timbre, but also reveal a population code that can differentiate between changes in either dimension within the same cortical regions. PMID:28025255
Educational Testing of an Auditory Display of Mars Gamma Ray Spectrometer Data
NASA Astrophysics Data System (ADS)
Keller, J. M.; Pompea, S. M.; Prather, E. E.; Slater, T. F.; Boynton, W. V.; Enos, H. L.; Quinn, M.
2003-12-01
A unique, alternative educational and public outreach product was created to investigate the use and effectiveness of auditory displays in science education. The product, which allows students to both visualize and hear seasonal variations in data detected by the Gamma Ray Spectrometer (GRS) aboard the Mars Odyssey spacecraft, consists of an animation of false-color maps of hydrogen concentrations on Mars along with a musical presentation, or sonification, of the same data. Learners can access this data using the visual false-color animation, the auditory false-pitch sonification, or both. Central to the development of this product is the question of its educational effectiveness and implementation. During the spring 2003 semester, three sections of an introductory astronomy course, each with ˜100 non-science undergraduates, were presented with one of three different exposures to GRS hydrogen data: one auditory, one visual, and one both auditory and visual. Student achievement data was collected through use of multiple-choice and open-ended surveys administered before, immediately following, and three and six weeks following the experiment. It was found that the three student groups performed equally well in their ability to perceive and interpret the data presented. Additionally, student groups exposed to the auditory display reported a higher interest and engagement level than the student group exposed to the visual data alone. Based upon this preliminary testing, we have made improvements to both the educational product and our evaluation protocol. This fall, we will conduct further testing with ˜100 additional students, half receiving auditory data and half receiving visual data, and we will conduct interviews with individual students as they interface with the auditory display.
Through this process, we hope to further assess both learning and engagement gains associated with alternative and multi-modal representations of scientific data that extend beyond traditional visualization approaches. This work has been supported by the GRS Education and Public Outreach Program and the NASA Spacegrant Graduate Fellowship Program.
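The core idea of the "false-pitch" sonification described above is a mapping from data values to pitch. A minimal sketch of such a mapping is below; the value range, note range, and MIDI-based conversion are illustrative assumptions, not details of the actual GRS product.

```python
# Minimal data-sonification sketch: map scalar readings (e.g., hydrogen
# concentrations) onto pitch. The data values, range, and note span here
# are hypothetical, chosen only to illustrate the technique.

def value_to_midi(value, vmin, vmax, low_note=48, high_note=84):
    """Linearly map a data value onto a MIDI note range ("false pitch")."""
    frac = (value - vmin) / (vmax - vmin)
    return round(low_note + frac * (high_note - low_note))

def midi_to_hz(note):
    """Convert a MIDI note number to frequency in Hz (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

readings = [0.02, 0.10, 0.35, 0.60]            # illustrative seasonal values
notes = [value_to_midi(v, 0.0, 0.6) for v in readings]
freqs = [round(midi_to_hz(n), 1) for n in notes]
# notes -> [49, 54, 69, 84]; rising values become rising pitches
```

A real display would also synthesize and schedule these frequencies over time, but the mapping step is what makes the data audible.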
François, Clément; Schön, Daniele
2014-02-01
There is increasing evidence that humans and other nonhuman mammals are sensitive to the statistical structure of auditory input. Indeed, neural sensitivity to statistical regularities seems to be a fundamental biological property underlying auditory learning. In the case of speech, statistical regularities play a crucial role in the acquisition of several linguistic features, from phonotactic to more complex rules such as morphosyntactic rules. Interestingly, a similar sensitivity has been shown with non-speech streams: sequences of sounds changing in frequency or timbre can be segmented on the sole basis of conditional probabilities between adjacent sounds. We recently ran a set of cross-sectional and longitudinal experiments showing that merging music and speech information in song facilitates stream segmentation and, further, that musical practice enhances sensitivity to statistical regularities in speech at both neural and behavioral levels. Based on recent findings showing the involvement of a fronto-temporal network in speech segmentation, we defend the idea that enhanced auditory learning observed in musicians originates via at least three distinct pathways: enhanced low-level auditory processing, enhanced phono-articulatory mapping via the left Inferior Frontal Gyrus and Pre-Motor cortex and increased functional connectivity within the audio-motor network. Finally, we discuss how these data predict a beneficial use of music for optimizing speech acquisition in both normal and impaired populations. Copyright © 2013 Elsevier B.V. All rights reserved.
Neural networks supporting audiovisual integration for speech: A large-scale lesion study.
Hickok, Gregory; Rogalsky, Corianne; Matchin, William; Basilakos, Alexandra; Cai, Julia; Pillay, Sara; Ferrill, Michelle; Mickelsen, Soren; Anderson, Steven W; Love, Tracy; Binder, Jeffrey; Fridriksson, Julius
2018-06-01
Auditory and visual speech information are often strongly integrated resulting in perceptual enhancements for audiovisual (AV) speech over audio alone and sometimes yielding compelling illusory fusion percepts when AV cues are mismatched, the McGurk-MacDonald effect. Previous research has identified three candidate regions thought to be critical for AV speech integration: the posterior superior temporal sulcus (STS), early auditory cortex, and the posterior inferior frontal gyrus. We assess the causal involvement of these regions (and others) in the first large-scale (N = 100) lesion-based study of AV speech integration. Two primary findings emerged. First, behavioral performance and lesion maps for AV enhancement and illusory fusion measures indicate that classic metrics of AV speech integration are not necessarily measuring the same process. Second, lesions involving superior temporal auditory, lateral occipital visual, and multisensory zones in the STS are the most disruptive to AV speech integration. Further, when AV speech integration fails, the nature of the failure-auditory vs visual capture-can be predicted from the location of the lesions. These findings show that AV speech processing is supported by unimodal auditory and visual cortices as well as multimodal regions such as the STS at their boundary. Motor related frontal regions do not appear to play a role in AV speech integration. Copyright © 2018 Elsevier Ltd. All rights reserved.
The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl.
Baxter, Caitlin S; Nelson, Brian S; Takahashi, Terry T
2013-02-01
Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643-655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.
Response variance in functional maps: neural darwinism revisited.
Takahashi, Hirokazu; Yokota, Ryo; Kanzaki, Ryohei
2013-01-01
The mechanisms by which functional maps and map plasticity contribute to cortical computation remain controversial. Recent studies have revisited the theory of neural Darwinism to interpret the learning-induced map plasticity and neuronal heterogeneity observed in the cortex. Here, we hypothesize that the Darwinian principle provides a substrate to explain the relationship between neuron heterogeneity and cortical functional maps. We demonstrate in the rat auditory cortex that the degree of response variance is closely correlated with the size of its representational area. Further, we show that the response variance within a given population is altered through training. These results suggest that larger representational areas may help to accommodate heterogeneous populations of neurons. Thus, functional maps and map plasticity are likely to play essential roles in Darwinian computation, serving as effective, but not absolutely necessary, structures to generate diverse response properties within a neural population.
Impact of Spatial and Verbal Short-Term Memory Load on Auditory Spatial Attention Gradients.
Golob, Edward J; Winston, Jenna; Mock, Jeffrey R
2017-01-01
Short-term memory load can impair attentional control, but prior work shows that the extent of the effect ranges from being very general to very specific. One factor for the mixed results may be reliance on point estimates of memory load effects on attention. Here we used auditory attention gradients as an analog measure to map out the impact of short-term memory load over space. Verbal or spatial information was maintained during an auditory spatial attention task and compared to no-load. Stimuli were presented from five virtual locations in the frontal azimuth plane, and subjects focused on the midline. Reaction times progressively increased for lateral stimuli, indicating an attention gradient. Spatial load further slowed responses at lateral locations, particularly in the left hemispace, but had little effect at midline. Verbal memory load had no (Experiment 1) or a minimal (Experiment 2) influence on reaction times. Spatial and verbal load increased switch costs between memory encoding and attention tasks relative to the no-load condition. The findings show that short-term memory influences the distribution of auditory attention over space, and that the specific pattern depends on the type of information in short-term memory.
NASA Astrophysics Data System (ADS)
Fishman, Yonatan I.; Arezzo, Joseph C.; Steinschneider, Mitchell
2004-09-01
Auditory stream segregation refers to the organization of sequential sounds into "perceptual streams" reflecting individual environmental sound sources. In the present study, sequences of alternating high and low tones, "...ABAB...," similar to those used in psychoacoustic experiments on stream segregation, were presented to awake monkeys while neural activity was recorded in primary auditory cortex (A1). Tone frequency separation (ΔF), tone presentation rate (PR), and tone duration (TD) were systematically varied to examine whether neural responses correlate with effects of these variables on perceptual stream segregation. "A" tones were fixed at the best frequency of the recording site, while "B" tones were displaced in frequency from "A" tones by an amount equal to ΔF. As PR increased, "B" tone responses decreased in amplitude to a greater extent than "A" tone responses, yielding neural response patterns dominated by "A" tone responses occurring at half the alternation rate. Increasing TD facilitated the differential attenuation of "B" tone responses. These findings parallel psychoacoustic data and suggest a physiological model of stream segregation whereby increasing ΔF, PR, or TD enhances spatial differentiation of "A" tone and "B" tone responses along the tonotopic map in A1.
Spatial Cues Provided by Sound Improve Postural Stabilization: Evidence of a Spatial Auditory Map?
Gandemer, Lennie; Parseihian, Gaetan; Kronland-Martinet, Richard; Bourdin, Christophe
2017-01-01
It has long been suggested that sound plays a role in the postural control process. Few studies, however, have explored sound and posture interactions. The present paper focuses on the specific impact of audition on posture, seeking to determine the attributes of sound that may be useful for postural purposes. We investigated the postural sway of young, healthy blindfolded subjects in two experiments involving different static auditory environments. In the first experiment, we compared the effect on sway in a simple environment built from three static sound sources in two different rooms: a normal vs. an anechoic room. In the second experiment, the same auditory environment was enriched in various ways, including the ambisonics synthesis of an immersive environment, and subjects stood on two different surfaces: a foam vs. a normal surface. The results of both experiments suggest that the spatial cues provided by sound can be used to improve postural stability. The richer the auditory environment, the better this stabilization. We interpret these results by invoking the “spatial hearing map” theory: listeners build their own mental representation of their surrounding environment, which provides them with spatial landmarks that help them to better stabilize. PMID:28694770
Luo, Hao; Ni, Jing-Tian; Li, Zhi-Hao; Li, Xiao-Ou; Zhang, Da-Ren; Zeng, Fan-Gang; Chen, Lin
2006-01-01
In tonal languages such as Mandarin Chinese, a lexical tone carries semantic information and is preferentially processed in the left brain hemisphere of native speakers, as revealed by functional MRI and positron emission tomography studies, which likely measure temporally aggregated neural events including those at an attentive stage of auditory processing. Here, we demonstrate that early auditory processing of a lexical tone at a preattentive stage is actually lateralized to the right hemisphere. We frequently presented to native Mandarin Chinese speakers a meaningful auditory word with a consonant-vowel structure and infrequently varied either its lexical tone or initial consonant using an odd-ball paradigm to create a contrast resulting in a change in word meaning. The lexical tone contrast evoked a stronger preattentive response, as revealed by whole-head electric recordings of the mismatch negativity, in the right hemisphere than in the left hemisphere, whereas the consonant contrast produced an opposite pattern. Given the distinct acoustic features between a lexical tone and a consonant, this opposite lateralization pattern suggests the dependence of hemisphere dominance mainly on acoustic cues before speech input is mapped into a semantic representation in the processing stream. PMID:17159136
Chimaeric sounds reveal dichotomies in auditory perception
Smith, Zachary M.; Delgutte, Bertrand; Oxenham, Andrew J.
2008-01-01
By Fourier's theorem [1], signals can be decomposed into a sum of sinusoids of different frequencies. This is especially relevant for hearing, because the inner ear performs a form of mechanical Fourier transform by mapping frequencies along the length of the cochlear partition. An alternative signal decomposition, originated by Hilbert [2], is to factor a signal into the product of a slowly varying envelope and a rapidly varying fine time structure. Neurons in the auditory brainstem [3–6] sensitive to these features have been found in mammalian physiological studies. To investigate the relative perceptual importance of envelope and fine structure, we synthesized stimuli that we call ‘auditory chimaeras’, which have the envelope of one sound and the fine structure of another. Here we show that the envelope is most important for speech reception, and the fine structure is most important for pitch perception and sound localization. When the two features are in conflict, the sound of speech is heard at a location determined by the fine structure, but the words are identified according to the envelope. This finding reveals a possible acoustic basis for the hypothesized ‘what’ and ‘where’ pathways in the auditory cortex [7–10]. PMID:11882898
Graded and discontinuous EphA-ephrinB expression patterns in the developing auditory brainstem
Wallace, Matthew M.; Harris, J. Aaron; Brubaker, Donald Q.; Klotz, Caitlyn A.; Gabriele, Mark L.
2016-01-01
Eph-ephrin interactions guide topographic mapping and pattern formation in a variety of systems. In contrast to other sensory pathways, their precise role in the assembly of central auditory circuits remains poorly understood. The auditory midbrain, or inferior colliculus (IC) is an intriguing structure for exploring guidance of patterned projections as adjacent subdivisions exhibit distinct organizational features. The central nucleus of the IC (CNIC) and deep aspects of its neighboring lateral cortex (LCIC, Layer 3) are tonotopically-organized and receive layered inputs from primarily downstream auditory sources. While less is known about more superficial aspects of the LCIC, its inputs are multimodal, lack a clear tonotopic order, and appear discontinuous, terminating in modular, patch/matrix-like distributions. Here we utilize X-Gal staining approaches in lacZ mutant mice (ephrin-B2, -B3, and EphA4) to reveal EphA-ephrinB expression patterns in the nascent IC during the period of projection shaping that precedes hearing onset. We also report early postnatal protein expression in the cochlear nuclei, the superior olivary complex, the nuclei of the lateral lemniscus, and relevant midline structures. Continuous ephrin-B2 and EphA4 expression gradients exist along frequency axes of the CNIC and LCIC Layer 3. In contrast, more superficial LCIC localization is not graded, but confined to a series of discrete ephrin-B2 and EphA4-positive Layer 2 modules. While heavily expressed in the midline, much of the auditory brainstem is devoid of ephrin-B3, including the CNIC, LCIC Layer 2 modular fields, the dorsal nucleus of the lateral lemniscus (DNLL), as well as much of the superior olivary complex and cochlear nuclei. Ephrin-B3 LCIC expression appears complementary to that of ephrin-B2 and EphA4, with protein most concentrated in presumptive extramodular zones. 
Described tonotopic gradients and seemingly complementary modular/extramodular patterns suggest Eph-ephrin guidance in establishing juxtaposed continuous and discrete neural maps in the developing IC prior to experience. PMID:26906676
NASA Astrophysics Data System (ADS)
Nakada, Hirofumi; Horie, Seichi; Kawanami, Shoko; Inoue, Jinro; Iijima, Yoshinori; Sato, Kiyoharu; Abe, Takeshi
2017-09-01
We aimed to develop a practical method to estimate oesophageal temperature by measuring multi-locational auditory canal temperatures. This method can be applied to prevent heatstroke by simultaneously and continuously monitoring the core temperatures of people working under hot environments. We asked 11 healthy male volunteers to exercise, generating 80 W for 45 min in a climatic chamber set at 24, 32 and 40 °C, at 50% relative humidity. We also exposed the participants to radiation at 32 °C. We continuously measured temperatures at the oesophagus, rectum and three different locations along the external auditory canal. We developed equations for estimating oesophageal temperatures from auditory canal temperatures and compared their fitness and errors. The rectal temperature increased or decreased faster than oesophageal temperature at the start or end of exercise in all conditions. Estimated temperature showed good similarity with oesophageal temperature, and the square of the correlation coefficient of the best-fitting model reached 0.904. We observed intermediate values between rectal and oesophageal temperatures during the rest phase. Even under the condition with radiation, estimated oesophageal temperature demonstrated concordant movement with oesophageal temperature, with an overestimation of around 0.1 °C. Our method measured temperatures at three different locations along the external auditory canal. We confirmed that the approach can credibly estimate the oesophageal temperature from 24 to 40 °C for people performing exercise in the same place in a windless environment.
Gestures, vocalizations, and memory in language origins.
Aboitiz, Francisco
2012-01-01
This article discusses the possible homologies between the human language networks and comparable auditory projection systems in the macaque brain, in an attempt to reconcile two existing views on language evolution: one that emphasizes hand control and gestures, and the other that emphasizes auditory-vocal mechanisms. The capacity for language is based on relatively well defined neural substrates whose rudiments have been traced in the non-human primate brain. At its core, this circuit constitutes an auditory-vocal sensorimotor circuit with two main components, a "ventral pathway" connecting anterior auditory regions with anterior ventrolateral prefrontal areas, and a "dorsal pathway" connecting auditory areas with parietal areas and with posterior ventrolateral prefrontal areas via the arcuate fasciculus and the superior longitudinal fasciculus. In humans, the dorsal circuit is especially important for phonological processing and phonological working memory, capacities that are critical for language acquisition and for complex syntax processing. In the macaque, the homolog of the dorsal circuit overlaps with an inferior parietal-premotor network for hand and gesture selection that is under voluntary control, while vocalizations are largely fixed and involuntary. The recruitment of the dorsal component for vocalization behavior in the human lineage, together with a direct cortical control of the subcortical vocalizing system, are proposed to represent a fundamental innovation in human evolution, generating an inflection point that permitted the explosion of vocal language and human communication. In this context, vocal communication and gesturing have a common history in primate communication.
Utilising reinforcement learning to develop strategies for driving auditory neural implants.
Lee, Geoffrey W; Zambetta, Fabio; Li, Xiaodong; Paolini, Antonio G
2016-08-01
In this paper we propose a novel application of reinforcement learning to the area of auditory neural stimulation. We aim to develop a simulation environment which is based on real neurological responses to auditory and electrical stimulation in the cochlear nucleus (CN) and inferior colliculus (IC) of an animal model. Using this simulator we implement closed-loop reinforcement learning algorithms to determine which methods are most effective at learning effective acoustic neural stimulation strategies. By recording a comprehensive set of acoustic frequency presentations and neural responses from a set of animals we created a large database of neural responses to acoustic stimulation. Extensive electrical stimulation in the CN and the recording of neural responses in the IC provides a mapping of how the auditory system responds to electrical stimuli. The combined dataset is used as the foundation for the simulator, which is used to implement and test learning algorithms. Reinforcement learning, utilising a modified n-Armed Bandit solution, is implemented to demonstrate the model's function. We show the ability to effectively learn stimulation patterns which mimic the cochlea's ability to convert acoustic frequencies to neural activity. Learning effective replication using neural stimulation takes less than 20 min under continuous testing. These results show the utility of reinforcement learning in the field of neural stimulation. These results can be coupled with existing sound processing technologies to develop new auditory prosthetics that are adaptable to the recipient's current auditory pathway. The same process can theoretically be abstracted to other sensory and motor systems to develop similar electrical replication of neural signals.
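The n-armed-bandit framing the abstract mentions can be sketched as follows: each "arm" stands for a candidate stimulation pattern, and the reward is how closely the simulated neural response matches the response to a target acoustic stimulus. The reward values and epsilon-greedy strategy below are stand-ins for illustration; the paper's actual simulator and bandit modification are not reproduced here.

```python
import random

# Hedged sketch of an n-armed bandit with epsilon-greedy selection.
# Arms model hypothetical stimulation patterns; the fixed mean rewards
# stand in for a data-driven match score against a target neural response.

def run_bandit(mean_rewards, steps=2000, epsilon=0.1, seed=0):
    """Return the index of the arm judged best after `steps` noisy trials."""
    rng = random.Random(seed)
    n = len(mean_rewards)
    counts = [0] * n
    values = [0.0] * n                                   # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                       # explore
        else:
            arm = max(range(n), key=values.__getitem__)  # exploit current best
        reward = mean_rewards[arm] + rng.gauss(0, 0.05)  # noisy observation
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return max(range(n), key=values.__getitem__)

# Arm 2 mimics the target response best (highest mean reward), so the
# learner should converge on it.
best = run_bandit([0.2, 0.5, 0.9, 0.4])
```

The incremental-mean update keeps memory constant per arm, which matters when the action space (possible stimulation patterns) is large.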
Representing Knowledge: Assessment of Creativity in Humanities
ERIC Educational Resources Information Center
Zemits, Birut Irena
2017-01-01
Traditionally, assessment for university students in the humanities has been in an essay format, but this has changed extensively in the last decade. Assessments now may entail auditory and visual presentations, films, mind-maps, and other modes of communication. These formats are outside the established conventions of humanities and may be…
Cognitive Load Theory and the Effects of Transient Information on the Modality Effect
ERIC Educational Resources Information Center
Leahy, Wayne; Sweller, John
2016-01-01
Based on cognitive load theory and the "transient information effect," this paper investigated the "modality effect" while interpreting a contour map. The length and complexity of auditory and visual text instructions were manipulated. Experiment 1 indicated that longer audio text information within a presentation was inferior…
Cross-Situational Learning of Minimal Word Pairs
ERIC Educational Resources Information Center
Escudero, Paola; Mulak, Karen E.; Vlach, Haley A.
2016-01-01
"Cross-situational statistical learning" of words involves tracking co-occurrences of auditory words and objects across time to infer word-referent mappings. Previous research has demonstrated that learners can infer referents across sets of very phonologically distinct words (e.g., WUG, DAX), but it remains unknown whether learners can…
Orthographic Consistency Affects Spoken Word Recognition at Different Grain-Sizes
ERIC Educational Resources Information Center
Dich, Nadya
2014-01-01
A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating…
ERIC Educational Resources Information Center
Edgar, J. Christopher; Khan, Sarah Y.; Blaskey, Lisa; Chow, Vivian Y.; Rey, Michael; Gaetz, William; Cannon, Katelyn M.; Monroe, Justin F.; Cornew, Lauren; Qasmieh, Saba; Liu, Song; Welsh, John P.; Levy, Susan E.; Roberts, Timothy P. L.
2015-01-01
Previous studies have observed evoked response latency as well as gamma band superior temporal gyrus (STG) auditory abnormalities in individuals with autism spectrum disorders (ASD). A limitation of these studies is that associations between these two abnormalities, as well as the full extent of oscillatory phenomena in ASD in terms of frequency…
ERIC Educational Resources Information Center
Ivanova, Tamara N.; Gross, Christina; Mappus, Rudolph C.; Kwon, Yong Jun; Bassell, Gary J.; Liu, Robert C.
2017-01-01
Learning to recognize a stimulus category requires experience with its many natural variations. However, the mechanisms that allow a category's sensorineural representation to be updated after experiencing new exemplars are not well understood, particularly at the molecular level. Here we investigate how a natural vocal category induces expression…
[Children with specific language impairment: electrophysiological and pedaudiological findings].
Rinker, T; Hartmann, K; Smith, E; Reiter, R; Alku, P; Kiefer, M; Brosch, S
2014-08-01
Auditory deficits may be at the core of the language delay in children with Specific Language Impairment (SLI). It was therefore hypothesized that children with SLI perform poorly on 4 tests typically used to diagnose central auditory processing disorder (CAPD), as well as in the processing of phonetic and tone stimuli in an electrophysiological experiment. Fourteen children with SLI (mean age 61.7 months) and 16 children without SLI (mean age 64.9 months) were tested with 4 tasks: non-word repetition, language discrimination in noise, directional hearing, and dichotic listening. The electrophysiological recording of the Mismatch Negativity (MMN) employed sine tones (600 vs. 650 Hz) and phonetic stimuli (/ε/ versus /e/). Control children and children with SLI differed significantly in the non-word repetition as well as in the dichotic listening task, but not in the two other tasks. Only the control children recognized the frequency difference in the MMN experiment. The phonetic difference was discriminated by both groups; however, effects were longer lasting for the control children. Group differences were not significant. Children with SLI show limitations in auditory processing involving complex tasks, such as repeating unfamiliar or difficult material, and show subtle deficits in auditory processing at the neural level. © Georg Thieme Verlag KG Stuttgart · New York.
Neuronal correlates of visual and auditory alertness in the DMT and ketamine model of psychosis.
Daumann, J; Wagner, D; Heekeren, K; Neukirch, A; Thiel, C M; Gouzoulis-Mayfrank, E
2010-10-01
Deficits in attentional functions belong to the core cognitive symptoms in schizophrenic patients. Alertness is a nonselective attention component that refers to a state of general readiness that improves stimulus processing and response initiation. The main goal of the present study was to investigate cerebral correlates of alertness in the human 5HT(2A) agonist and N-methyl-D-aspartic acid (NMDA) antagonist model of psychosis. Fourteen healthy volunteers participated in a randomized double-blind, cross-over event-related functional magnetic resonance imaging (fMRI) study with dimethyltryptamine (DMT) and S-ketamine. A target detection task with cued and uncued trials in both the visual and the auditory modality was used. Administration of DMT led to decreased blood oxygenation level-dependent response during performance of an alertness task, particularly in extrastriate regions during visual alerting and in temporal regions during auditory alerting. In general, the effects for the visual modality were more pronounced. In contrast, administration of S-ketamine led to increased cortical activation in the left insula and precentral gyrus in the auditory modality. The results of the present study might deliver more insight into potential differences and overlapping pathomechanisms in schizophrenia. These conclusions must remain preliminary and should be explored by further fMRI studies with schizophrenic patients performing modality-specific alertness tasks.
Mismatch Negativity in Recent-Onset and Chronic Schizophrenia: A Current Source Density Analysis
Fulham, W. Ross; Michie, Patricia T.; Ward, Philip B.; Rasser, Paul E.; Todd, Juanita; Johnston, Patrick J.; Thompson, Paul M.; Schall, Ulrich
2014-01-01
Mismatch negativity (MMN) is a component of the event-related potential elicited by deviant auditory stimuli. It is presumed to index pre-attentive monitoring of changes in the auditory environment. MMN amplitude is smaller in groups of individuals with schizophrenia compared to healthy controls. We compared duration-deviant MMN in 16 recent-onset and 19 chronic schizophrenia patients versus age- and sex-matched controls. Reduced frontal MMN was found in both patient groups, involved reduced hemispheric asymmetry, and was correlated with Global Assessment of Functioning (GAF) and negative symptom ratings. A cortically-constrained LORETA analysis, incorporating anatomical data from each individual's MRI, was performed to generate a current source density model of the MMN response over time. This model suggested MMN generation within a temporal, parietal and frontal network, which was right hemisphere dominant only in controls. An exploratory analysis revealed reduced CSD in patients in superior and middle temporal cortex, inferior and superior parietal cortex, precuneus, anterior cingulate, and superior and middle frontal cortex. A region of interest (ROI) analysis was performed. For the early phase of the MMN, patients had reduced bilateral temporal and parietal response and no lateralisation in frontal ROIs. For late MMN, patients had reduced bilateral parietal response and no lateralisation in temporal ROIs. In patients, correlations revealed a link between GAF and the MMN response in parietal cortex. In controls, the frontal response onset was 17 ms later than the temporal and parietal response. In patients, onset latency of the MMN response was delayed in secondary, but not primary, auditory cortex. However, amplitude reductions were observed in both primary and secondary auditory cortex.
These latency delays may indicate relatively intact information processing upstream of the primary auditory cortex, but impaired primary auditory cortex or cortico-cortical or thalamo-cortical communication with higher auditory cortices as a core deficit in schizophrenia. PMID:24949859
Brown, M Christian
2016-03-01
Medial olivocochlear (MOC) neurons provide an efferent innervation to outer hair cells (OHCs) of the cochlea, but their tonotopic mapping is incompletely known. In the present study of anesthetized guinea pigs, the MOC mapping was investigated using in vivo, extracellular recording, and labeling at a site along the cochlear course of the axons. The MOC axons enter the cochlea at its base and spiral apically, successively turning out to innervate OHCs according to their characteristic frequencies (CFs). Recordings made at a site in the cochlear basal turn yielded a distribution of MOC CFs with an upper limit, or "edge," due to usually absent higher-CF axons that presumably innervate more basal locations. The CFs at the edge, normalized across preparations, were equal to the CFs of the auditory nerve fibers (ANFs) at the recording sites (near 16 kHz). Corresponding anatomical data from extracellular injections showed spiraling MOC axons giving rise to an edge of labeling at the position of a narrow band of labeled ANFs. Overall, the edges of the MOC CFs and labeling, with their correspondences to ANFs, suggest similar tonotopic mappings of these efferent and afferent fibers, at least in the cochlear basal turn. They also suggest that MOC axons miss much of the position of the more basally located cochlear amplifier appropriate for their CF; instead, the MOC innervation may be optimized for protection from damage by acoustic overstimulation. Copyright © 2016 the American Physiological Society.
Probing cochlear tuning and tonotopy in the tiger using otoacoustic emissions.
Bergevin, Christopher; Walsh, Edward J; McGee, JoAnn; Shera, Christopher A
2012-08-01
Otoacoustic emissions (sound emitted from the ear) allow cochlear function to be probed noninvasively. The emissions evoked by pure tones, known as stimulus-frequency emissions (SFOAEs), have been shown to provide reliable estimates of peripheral frequency tuning in a variety of mammalian and non-mammalian species. Here, we apply the same methodology to explore peripheral auditory function in the largest member of the cat family, the tiger (Panthera tigris). We measured SFOAEs in 9 unique ears of 5 anesthetized tigers. The tigers, housed at the Henry Doorly Zoo (Omaha, NE), were of both sexes and ranged in age from 3 to 10 years. SFOAE phase-gradient delays are significantly longer in tigers--by approximately a factor of two above 2 kHz and even more at lower frequencies--than in domestic cats (Felis catus), a species commonly used in auditory studies. Based on correlations between tuning and delay established in other species, our results imply that cochlear tuning in the tiger is significantly sharper than in domestic cat and appears comparable to that of humans. Furthermore, the SFOAE data indicate that tigers have a larger tonotopic mapping constant (mm/octave) than domestic cats. A larger mapping constant in tiger is consistent both with auditory brainstem response thresholds (that suggest a lower upper frequency limit of hearing for the tiger than domestic cat) and with measurements of basilar-membrane length (about 1.5 times longer in the tiger than domestic cat).
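The phase-gradient delay used in the tiger SFOAE study is, in essence, the negative slope of emission phase versus stimulus frequency. The following is a minimal numerical sketch on synthetic data (function name and values are ours, not the authors' analysis pipeline):

```python
import numpy as np

def phase_gradient_delay(freqs_hz, phase_cycles):
    """Estimate an SFOAE phase-gradient delay (in seconds) as the
    negative slope of emission phase (in cycles) versus frequency (Hz)."""
    slope = np.polyfit(freqs_hz, phase_cycles, 1)[0]  # cycles per Hz
    return -slope  # a pure delay tau gives phase = -tau * f, so -slope = tau

# Synthetic example: a pure 5-ms delay produces phase = -tau * f (in cycles)
f = np.linspace(2000.0, 4000.0, 50)
tau_true = 0.005
phase = -tau_true * f
tau_est = phase_gradient_delay(f, phase)
```

On real emission data the phase is measured per frequency and unwrapped before fitting; the longer delays reported for tiger would appear here as a steeper phase slope.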
When music is salty: The crossmodal associations between sound and taste
Guetta, Rachel; Loui, Psyche
2017-01-01
Here we investigate associations between complex auditory and complex taste stimuli. A novel piece of music was composed and recorded in four different styles of musical articulation to reflect the four basic tastes groups (sweet, sour, salty, bitter). In Experiment 1, participants performed above chance at pairing the music clips with corresponding taste words. Experiment 2 uses multidimensional scaling to interpret how participants categorize these musical stimuli, and to show that auditory categories can be organized in a similar manner as taste categories. Experiment 3 introduces four different flavors of custom-made chocolate ganache and shows that participants can match music clips with the corresponding taste stimuli with above-chance accuracy. Experiment 4 demonstrates the partial role of pleasantness in crossmodal mappings between sound and taste. The present findings confirm that individuals are able to make crossmodal associations between complex auditory and gustatory stimuli, and that valence may mediate multisensory integration in the general population. PMID:28355227
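Multidimensional scaling of the kind used in Experiment 2 recovers a spatial configuration from pairwise dissimilarities. A minimal classical (Torgerson) MDS sketch, written by us for illustration and much simpler than any published analysis code:

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n points in k dimensions
    from an n x n matrix of pairwise distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)        # eigh returns ascending order
    idx = np.argsort(vals)[::-1][:k]      # keep the k largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Toy example: distances among 3 collinear points at 0, 1 and 3 on a line
D = np.array([[0.0, 1.0, 3.0],
              [1.0, 0.0, 2.0],
              [3.0, 2.0, 0.0]])
X = classical_mds(D, k=1)  # pairwise distances of X reproduce D exactly
```

For perceptual data such as similarity ratings of music clips, non-metric MDS is more common, but the geometric idea of placing similar items close together is the same.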
Tinnitus: development of a neurophysiologic correlate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sasaki, C.T.; Babitz, L.; Kauer, J.S.
Although tinnitus severely afflicts 7.2 million Americans, the pathophysiology of this problem remains obscure because there presently exists no good animal model in which to study the phenomenon. We have examined changes in activity in the guinea pig auditory pathway using an autoradiographic method of functional brain mapping after short-term and long-term cochlear ablations which can, in humans, initiate the occurrence of tinnitus. With this method we have observed a reduction in activity in various nuclei in the auditory pathway between 4 hrs and 10 days after unilateral cochlear ablation. In contrast to these findings we have found a return of activity in these same nuclei if they are observed from 12 to 48 days following the lesion. These preliminary data suggest that this return of activity in the absence of sensory input may be a valid experimental analogue for tinnitus in humans. Such evidence for auditory plasticity may represent a significant first step toward understanding this common and profound otologic symptom.
Cortical systems associated with covert music rehearsal.
Langheim, Frederick J P; Callicott, Joseph H; Mattay, Venkata S; Duyn, Jeff H; Weinberger, Daniel R
2002-08-01
Musical representation and overt music production are necessarily complex cognitive phenomena. While overt musical performance may be observed and studied, the act of performance itself necessarily skews results toward the importance of primary sensorimotor and auditory cortices. However, imagined musical performance (IMP) represents a complex behavioral task involving components suited to exploring the physiological underpinnings of musical cognition in music performance without the sensorimotor and auditory confounds of overt performance. We mapped the blood oxygenation level-dependent fMRI activation response associated with IMP in experienced musicians independent of the piece imagined. IMP consistently activated supplementary motor and premotor areas, right superior parietal lobule, right inferior frontal gyrus, bilateral mid-frontal gyri, and bilateral lateral cerebellum in contrast with rest, in a manner distinct from fingertapping versus rest and passive listening to the same piece versus rest. These data implicate an associative network independent of primary sensorimotor and auditory activity, likely representing the cortical elements most intimately linked to music production.
VGLUT1 and VGLUT2 mRNA expression in the primate auditory pathway
Hackett, Troy A.; Takahata, Toru; Balaram, Pooja
2011-01-01
The vesicular glutamate transporters (VGLUTs) regulate storage and release of glutamate in the brain. In adult animals, the VGLUT1 and VGLUT2 isoforms are widely expressed and differentially distributed, suggesting that neural circuits exhibit distinct modes of glutamate regulation. Studies in rodents suggest that VGLUT1 and VGLUT2 mRNA expression patterns are partly complementary, with VGLUT1 expressed at higher levels in cortex and VGLUT2 prominent subcortically, but with overlapping distributions in some nuclei. In primates, VGLUT gene expression has not been previously studied in any part of the brain. The purposes of the present study were to document the regional expression of VGLUT1 and VGLUT2 mRNA in the auditory pathway through A1 in cortex, and to determine whether their distributions are comparable to rodents. In situ hybridization with antisense riboprobes revealed that VGLUT2 was strongly expressed by neurons in the cerebellum and most major auditory nuclei, including the dorsal and ventral cochlear nuclei, medial and lateral superior olivary nuclei, central nucleus of the inferior colliculus, sagulum, and all divisions of the medial geniculate. VGLUT1 was densely expressed in the hippocampus and ventral cochlear nuclei, and at reduced levels in other auditory nuclei. In auditory cortex, neurons expressing VGLUT1 were widely distributed in layers II – VI of the core, belt and parabelt regions. VGLUT2 was most strongly expressed by neurons in layers IIIb and IV, weakly by neurons in layers II – IIIa, and at very low levels in layers V – VI. The findings indicate that VGLUT2 is strongly expressed by neurons at all levels of the subcortical auditory pathway, and by neurons in the middle layers of cortex, whereas VGLUT1 is strongly expressed by most if not all glutamatergic neurons in auditory cortex and at variable levels among auditory subcortical nuclei. 
These patterns imply that VGLUT2 is the main vesicular glutamate transporter in subcortical and thalamocortical (TC) circuits, whereas VGLUT1 is dominant in cortico-cortical (CC) and cortico-thalamic (CT) systems of projections. The results also suggest that VGLUT mRNA expression patterns in primates are similar to rodents, and establishes a baseline for detailed studies of these transporters in selected circuits of the auditory system. PMID:21111036
Eggermont, Jos J
2017-09-01
It is known that hearing loss induces plastic changes in the brain, causing loudness recruitment and hyperacusis, increased spontaneous firing rates and neural synchrony, reorganizations of the cortical tonotopic maps, and tinnitus. Much less in known about the central effects of exposure to sounds that cause a temporary hearing loss, affect the ribbon synapses in the inner hair cells, and cause a loss of high-threshold auditory nerve fibers. In contrast there is a wealth of information about central effects of long-duration sound exposures at levels ≤80 dB SPL that do not even cause a temporary hearing loss. The central effects for these moderate level exposures described in this review include changes in central gain, increased spontaneous firing rates and neural synchrony, and reorganization of the cortical tonotopic map. A putative mechanism is outlined, and the effect of the acoustic environment during the recovery process is illustrated. Parallels are drawn with hearing problems in humans with long-duration exposures to occupational noise but with clinical normal hearing. Copyright © 2016 Elsevier B.V. All rights reserved.
Lesion localization of speech comprehension deficits in chronic aphasia
Binder, Jeffrey R.; Humphries, Colin; Gross, William L.; Book, Diane S.
2017-01-01
Objective: Voxel-based lesion-symptom mapping (VLSM) was used to localize impairments specific to multiword (phrase and sentence) spoken language comprehension. Methods: Participants were 51 right-handed patients with chronic left hemisphere stroke. They performed an auditory description naming (ADN) task requiring comprehension of a verbal description, an auditory sentence comprehension (ASC) task, and a picture naming (PN) task. Lesions were mapped using high-resolution MRI. VLSM analyses identified the lesion correlates of ADN and ASC impairment, first with no control measures, then adding PN impairment as a covariate to control for cognitive and language processes not specific to spoken language. Results: ADN and ASC deficits were associated with lesions in a distributed frontal-temporal parietal language network. When PN impairment was included as a covariate, both ADN and ASC deficits were specifically correlated with damage localized to the mid-to-posterior portion of the middle temporal gyrus (MTG). Conclusions: Damage to the mid-to-posterior MTG is associated with an inability to integrate multiword utterances during comprehension of spoken language. Impairment of this integration process likely underlies the speech comprehension deficits characteristic of Wernicke aphasia. PMID:28179469
ERIC Educational Resources Information Center
Howard, A. M.; Park, Chung Hyuk; Remy, S.
2012-01-01
The robotics field represents the integration of multiple facets of computer science and engineering. Robotics-based activities have been shown to encourage K-12 students to consider careers in computing and have even been adopted as part of core computer-science curriculum at a number of universities. Unfortunately, for students with visual…
Paltoglou, Aspasia E; Sumner, Christian J; Hall, Deborah A
2011-01-01
Feature-specific enhancement refers to the process by which selectively attending to a particular stimulus feature specifically increases the response in the same region of the brain that codes that stimulus property. Whereas there are many demonstrations of this mechanism in the visual system, the evidence is less clear in the auditory system. The present functional magnetic resonance imaging (fMRI) study examined this process for two complex sound features, namely frequency modulation (FM) and spatial motion. The experimental design enabled us to investigate whether selectively attending to FM and spatial motion enhanced activity in those auditory cortical areas that were sensitive to the two features. To control for attentional effort, the difficulty of the target-detection tasks was matched as closely as possible within listeners. Locations of FM-related and motion-related activation were broadly compatible with previous research. The results also confirmed a general enhancement across the auditory cortex when either feature was being attended to, as compared with passive listening. The feature-specific effects of selective attention revealed the novel finding of enhancement for the nonspatial (FM) feature, but not for the spatial (motion) feature. However, attention to spatial features also recruited several areas outside the auditory cortex. Further analyses led us to conclude that feature-specific effects of selective attention are not statistically robust, and appear to be sensitive to the choice of fMRI experimental design and localizer contrast. PMID:21447093
Development of kinesthetic-motor and auditory-motor representations in school-aged children.
Kagerer, Florian A; Clark, Jane E
2015-07-01
In two experiments using a center-out task, we investigated kinesthetic-motor and auditory-motor integrations in 5- to 12-year-old children and young adults. In experiment 1, participants moved a pen on a digitizing tablet from a starting position to one of three targets (visuo-motor condition), and then to one of four targets without visual feedback of the movement. In both conditions, we found that with increasing age, the children moved faster and straighter, and became less variable in their feedforward control. Higher control demands for movements toward the contralateral side were reflected in longer movement times and decreased spatial accuracy across all age groups. When feedforward control relies predominantly on kinesthesia, 7- to 10-year-old children were more variable, indicating difficulties in switching between feedforward and feedback control efficiently during that age. An inverse age progression was found for directional endpoint error; larger errors increasing with age likely reflect stronger functional lateralization for the dominant hand. In experiment 2, the same visuo-motor condition was followed by an auditory-motor condition in which participants had to move to acoustic targets (either white band or one-third octave noise). Since in the latter directional cues come exclusively from transcallosally mediated interaural time differences, we hypothesized that auditory-motor representations would show age effects. The results did not show a clear age effect, suggesting that corpus callosum functionality is sufficient in children to allow them to form accurate auditory-motor maps already at a young age.
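The transcallosally mediated directional cue referred to above, the interaural time difference (ITD), maps to source azimuth under a simple spherical-head model. The sketch below is purely illustrative; the head radius and the sine model are textbook assumptions, not values from this study:

```python
import numpy as np

def itd_to_azimuth(itd_s, head_radius_m=0.09, c=343.0):
    """Convert an interaural time difference (seconds) to source azimuth
    (degrees) using the simple sine model itd = (2r/c) * sin(azimuth).
    Head radius and speed of sound are illustrative textbook values."""
    itd_max = 2.0 * head_radius_m / c          # ITD for a source at +/-90 deg
    s = np.clip(itd_s / itd_max, -1.0, 1.0)    # guard against out-of-range ITDs
    return np.degrees(np.arcsin(s))
```

With these values the largest physiological ITD is roughly 0.5 ms; a zero ITD maps to straight ahead and the maximum ITD to 90 degrees to one side.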
Auditory and visual interactions between the superior and inferior colliculi in the ferret.
Stitt, Iain; Galindo-Leon, Edgar; Pieper, Florian; Hollensteiner, Karl J; Engler, Gerhard; Engel, Andreas K
2015-05-01
The integration of visual and auditory spatial information is important for building an accurate perception of the external world, but the fundamental mechanisms governing such audiovisual interaction have only partially been resolved. The earliest interface between auditory and visual processing pathways is in the midbrain, where the superior (SC) and inferior colliculi (IC) are reciprocally connected in an audiovisual loop. Here, we investigate the mechanisms of audiovisual interaction in the midbrain by recording neural signals from the SC and IC simultaneously in anesthetized ferrets. Visual stimuli reliably produced band-limited phase locking of IC local field potentials (LFPs) in two distinct frequency bands: 6-10 and 15-30 Hz. These visual LFP responses co-localized with robust auditory responses that were characteristic of the IC. Imaginary coherence analysis confirmed that visual responses in the IC were not volume-conducted signals from the neighboring SC. Visual responses in the IC occurred later than retinally driven superficial SC layers and earlier than deep SC layers that receive indirect visual inputs, suggesting that retinal inputs do not drive visually evoked responses in the IC. In addition, SC and IC recording sites with overlapping visual spatial receptive fields displayed stronger functional connectivity than sites with separate receptive fields, indicating that visual spatial maps are aligned across both midbrain structures. Reciprocal coupling between the IC and SC therefore probably serves the dynamic integration of visual and auditory representations of space. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
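Imaginary coherence, used above to rule out volume conduction, exploits the fact that volume-conducted signals arrive with zero phase lag and therefore contribute nothing to the imaginary part of coherency. A minimal sketch, with segment length and test frequencies chosen arbitrarily by us:

```python
import numpy as np

def imaginary_coherence(x, y, fs, nperseg=256):
    """Imaginary part of coherency between two signals, estimated by
    Welch-style averaging over non-overlapping Hanning-windowed segments."""
    nseg = len(x) // nperseg
    win = np.hanning(nperseg)
    Sxx = Syy = Sxy = 0.0
    for i in range(nseg):
        seg = slice(i * nperseg, (i + 1) * nperseg)
        X = np.fft.rfft(win * x[seg])
        Y = np.fft.rfft(win * y[seg])
        Sxx = Sxx + np.abs(X) ** 2
        Syy = Syy + np.abs(Y) ** 2
        Sxy = Sxy + X * np.conj(Y)
    coherency = Sxy / np.sqrt(Sxx * Syy)
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, np.imag(coherency)

# Toy check: a 10 Hz sine and a quarter-cycle-lagged copy show strong
# imaginary coherence at 10 Hz; a zero-lag copy shows none.
fs = 256
t = np.arange(fs * 8) / fs
x = np.sin(2 * np.pi * 10 * t)
y_lag = np.sin(2 * np.pi * 10 * t - np.pi / 2)
freqs, ic = imaginary_coherence(x, y_lag, fs)
```

This is why a nonzero imaginary coherence between IC and SC recordings argues for genuine, lagged interaction rather than passive spread of the same field.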
Van der Haegen, Lise; Acke, Frederic; Vingerhoets, Guy; Dhooge, Ingeborg; De Leenheer, Els; Cai, Qing; Brysbaert, Marc
2016-12-01
Auditory speech perception, speech production and reading lateralize to the left hemisphere in the majority of healthy right-handers. In this study, we investigated to what extent sensory input underlies the side of language dominance. We measured the lateralization of the three core subprocesses of language in patients who had profound hearing loss in the right ear from birth and in matched control subjects. They took part in a semantic decision listening task involving speech and sound stimuli (auditory perception), a word generation task (speech production) and a passive reading task (reading). The results show that a lack of sensory auditory input on the right side, which is strongly connected to the contralateral left hemisphere, does not lead to atypical lateralization of speech perception. Speech production and reading were also typically left lateralized in all but one patient, contradicting previous small scale studies. Other factors such as genetic constraints presumably overrule the role of sensory input in the development of (a)typical language lateralization. Copyright © 2015 Elsevier Ltd. All rights reserved.
Neuropsychological implications of selective attentional functioning in psychopathic offenders.
Mayer, Andrew R; Kosson, David S; Bedrick, Edward J
2006-09-01
Several core characteristics of the psychopathic personality disorder (i.e., impulsivity, failure to attend to interpersonal cues) suggest that psychopaths suffer from disordered attention. However, there is mixed evidence from the cognitive literature as to whether they exhibit superior or deficient selective attention, which has led to the formation of several distinct theories of attentional functioning in psychopathy. The present experiment investigated participants' abilities to purposely allocate attentional resources on the basis of auditory or visual linguistic information and directly tested both theories of deficient or superior selective attention in psychopathy. Specifically, 91 male inmates at a county jail were presented with either auditory or visual linguistic cues (with and without distractors) that correctly indicated the position of an upcoming visual target in 75% of the trials. The results indicated that psychopaths did not exhibit evidence of superior selective attention in any of the conditions but were generally less efficient in shifting attention on the basis of linguistic cues, especially in regard to auditory information. Implications for understanding psychopaths' cognitive functioning and possible neuropsychological deficits are addressed. (© 2006 APA, all rights reserved).
The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl
Baxter, Caitlin S.; Takahashi, Terry T.
2013-01-01
Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643–655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map. PMID:23175801
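The envelope-comparison idea at the heart of this account can be illustrated with a toy similarity measure: align the echo's envelope with the lead's by the echo delay and correlate. This sketch is ours and is far simpler than the Nelson and Takahashi space-map model it gestures at:

```python
import numpy as np

def envelope_similarity(env_lead, env_lag, delay_samples):
    """Correlation between a leading sound's envelope and an echo's
    envelope after compensating for the echo delay. High similarity
    means peaks in the lead mask the corresponding peaks in the echo
    (echo not localized); low similarity leaves unmasked peaks."""
    a = env_lead[:-delay_samples] if delay_samples else env_lead
    b = env_lag[delay_samples:]
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
env = np.abs(rng.standard_normal(1000))        # envelope of the leading noise
echo_same = np.roll(env, 3)                    # echo: same envelope, 3-sample delay
env_other = np.abs(rng.standard_normal(1000))  # envelope of an independent source
```

A true echo scores near 1 (strongly masked, consistent with the precedence effect at short delays), while an independent source scores near 0 and so retains localizable, unmasked envelope peaks.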
Publications - GMC 262 | Alaska Division of Geological & Geophysical Surveys
DGGS GMC 262 Publication Details. Reference: Cominco American Inc., 1996, Map location and geological logs of core for 7 1991 diamond drill
The Transition from Diffuse to Dense Gas in Herschel Dust Emission Maps
NASA Astrophysics Data System (ADS)
Goldsmith, Paul
Dense cores in dark clouds are the sites where young stars form. These regions manifest as relatively small (<0.1pc) pockets of cold and dense gas. If we wish to understand the star formation process, we have to understand the physical conditions in dense cores. This has been a main aim of star formation research in the past decade. Today, we do indeed possess a good knowledge of the density and velocity structure of cores, as well as their chemical evolution and physical lifetime. However, we do not understand well how dense cores form out of the diffuse gas clouds surrounding them. It is crucial that we constrain the relationship between dense cores and their environment: if we only understand dense cores, we may be able to understand how individual stars form --- but we would not know how the star forming dense cores themselves come into existence. We therefore propose to obtain data sets that reveal both dense cores and the clouds containing them in the same map. Based on these maps, we will study how dense cores form out of their natal clouds. Since cores form stars, this knowledge is crucial for the development of a complete theoretical and observational understanding of the formation of stars and their planets, as envisioned in NASA's Strategic Science Plan. Fortunately, existing archival data allow us to derive exactly the sort of maps we need for our analysis. Here, we describe a program that exclusively builds on PACS and SPIRE dust emission imaging data from the NASA-supported Herschel mission. The degree-sized wide-field Herschel maps of the nearby (<260pc) Polaris Flare and Aquila Rift clouds are ideal for our work. They permit us to resolve dense cores (<0.1pc), while the maps also reveal large-scale cloud structure (5pc and larger). We will generate column density maps from these dust emission maps and then run a tree-based hierarchical multi-scale structure analysis on them. 
Only this procedure permits us to exploit the full potential of the maps: we will characterize cloud structure over a vast range of spatial scales. This work has many advantages over previous studies, where information about dense cores and their environment was pieced together using a variety of methods and instruments. Now, the Herschel maps permit, for the first time, the characterization of both molecular clouds and their cores in one shot in a single data set. We use these data to answer a variety of simple yet very important questions. First, we study whether dense cores have sharp boundaries. If such boundaries exist, they would indicate that dense cores have an individual identity well separated from the near-fractal cloud structure on larger spatial scales. Second, we will --- in a very approximate sense --- derive global density gradients for molecular clouds from radii <0.1pc to 5pc and larger. These "synoptic" density gradients provide a useful quantitative description of the relation between cloud material at very different spatial scales. Also, these measurements can be compared to synoptic density gradients derived in the same fashion for theoretical cloud models. Third, we study how dense cores are nested into the "clumps" forming molecular clouds, i.e., we study whether the most massive dense cores in a cloud (<0.1pc) reside in the most massive regions identified on larger spatial scales (1pc and larger). This will show how the properties of dense cores are influenced by their environment. Our study will derive unique constraints on cloud structure. But our small sample forbids strong statements. This pilot study thus prepares the ground for future larger efforts. Our entire project builds on data reduction and analysis methods which our team has used in the past. This guarantees a swift completion of the project with predictable efficiency. We present pilot studies that demonstrate that the data and analysis methods are suited to tackle the science goals. 
This project is thus guaranteed to return significant results.
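The "synoptic" density gradient the proposal describes is essentially a single power-law slope fitted across all spatial scales of a cloud. A minimal version of that fit, on synthetic column densities (the power-law form and the numbers are assumptions for illustration, not Herschel data), looks like:

```python
import numpy as np

# Synthetic column densities N(r) from core scales (~0.1 pc) out to
# cloud scales (~5 pc); a single power law N ~ r**(-p) is assumed.
r = np.array([0.1, 0.3, 1.0, 3.0, 5.0])   # spatial scale in pc
N = 1e22 * r ** -0.7                       # column density in cm^-2

# The "synoptic" gradient p is the negated slope in log-log space.
slope, intercept = np.polyfit(np.log10(r), np.log10(N), 1)
p = -slope
```

The same slope computed for observed maps and for theoretical cloud models would then be directly comparable, which is the comparison the proposal envisions.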
Auditory-neurophysiological responses to speech during early childhood: Effects of background noise
White-Schwoch, Travis; Davies, Evan C.; Thompson, Elaine C.; Carr, Kali Woodruff; Nicol, Trent; Bradlow, Ann R.; Kraus, Nina
2015-01-01
Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But learning rarely occurs under ideal listening conditions—children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3–5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features—even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. 
Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response properties in this age group. These normative metrics may be useful clinically to evaluate auditory processing difficulties during early childhood. PMID:26113025
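One of the degradation measures reported here, response stability across trials, can be approximated as the correlation between the mean waveforms of two halves of the trials. The sketch below, on synthetic data, is an assumed form of that computation rather than the authors' pipeline; it reproduces the qualitative result that added noise lowers stability.

```python
import numpy as np

def response_stability(trials):
    """Correlate the mean waveform of odd-numbered trials with that of
    even-numbered trials; values near 1 mean a repeatable response."""
    a = trials[0::2].mean(axis=0)
    b = trials[1::2].mean(axis=0)
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0.0, 4 * np.pi, 200))
# 100 trials of the same underlying response, in low and in high noise.
quiet = signal + 0.3 * rng.standard_normal((100, 200))
noisy = signal + 3.0 * rng.standard_normal((100, 200))
stab_quiet = response_stability(quiet)
stab_noisy = response_stability(noisy)
```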
Neural Substrates of Auditory Emotion Recognition Deficits in Schizophrenia.
Kantrowitz, Joshua T; Hoptman, Matthew J; Leitman, David I; Moreno-Ortega, Marta; Lehrfeld, Jonathan M; Dias, Elisa; Sehatpour, Pejman; Laukka, Petri; Silipo, Gail; Javitt, Daniel C
2015-11-04
Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal ("prosodic") features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention. 
Schizophrenia patients show deficits in the ability to infer emotion based upon tone of voice [auditory emotion recognition (AER)] that drive impairments in social cognition and global functional outcome. This study evaluated neural substrates of impaired AER in schizophrenia using a combined event-related potential/resting-state fMRI approach. Patients showed impaired mismatch negativity response to emotionally relevant frequency modulated tones along with impaired functional connectivity between auditory and medial temporal (anterior insula) cortex. These deficits contributed in parallel to impaired AER and accounted for ∼50% of variance in AER performance. Overall, these findings demonstrate the importance of both auditory-level dysfunction and impaired auditory/insula connectivity in the pathophysiology of social cognitive dysfunction in schizophrenia. Copyright © 2015 the authors.
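Mismatch negativity itself is conventionally quantified as the most negative point of the deviant-minus-standard difference wave within a latency window. A minimal computation on synthetic ERPs follows; the window bounds and waveforms are illustrative assumptions, not this study's parameters.

```python
import numpy as np

def mmn_amplitude(standard_erp, deviant_erp, times, window=(0.1, 0.25)):
    """MMN amplitude: the most negative point of the deviant-minus-
    standard difference wave inside the analysis window (seconds)."""
    diff = deviant_erp - standard_erp
    mask = (times >= window[0]) & (times <= window[1])
    return diff[mask].min()

times = np.linspace(0.0, 0.4, 401)
standard = np.zeros_like(times)
# Synthetic deviant response: a negative deflection peaking near 150 ms.
deviant = -2.0 * np.exp(-((times - 0.15) ** 2) / (2 * 0.02 ** 2))
amp = mmn_amplitude(standard, deviant, times)
```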
Mapping the cortical representation of speech sounds in a syllable repetition task.
Markiewicz, Christopher J; Bohland, Jason W
2016-11-01
Speech repetition relies on a series of distributed cortical representations and functional pathways. A speaker must map auditory representations of incoming sounds onto learned speech items, maintain an accurate representation of those items in short-term memory, interface that representation with the motor output system, and fluently articulate the target sequence. A "dorsal stream" consisting of posterior temporal, inferior parietal and premotor regions is thought to mediate auditory-motor representations and transformations, but the nature and activation of these representations for different portions of speech repetition tasks remains unclear. Here we mapped the correlates of phonetic and/or phonological information related to the specific phonemes and syllables that were heard, remembered, and produced using a series of cortical searchlight multi-voxel pattern analyses trained on estimates of BOLD responses from individual trials. Based on responses linked to input events (auditory syllable presentation), predictive vowel-level information was found in the left inferior frontal sulcus, while syllable prediction revealed significant clusters in the left ventral premotor cortex and central sulcus and the left mid superior temporal sulcus. Responses linked to output events (the GO signal cueing overt production) revealed strong clusters of vowel-related information bilaterally in the mid to posterior superior temporal sulcus. For the prediction of onset and coda consonants, input-linked responses yielded distributed clusters in the superior temporal cortices, which were further informative for classifiers trained on output-linked responses. Output-linked responses in the Rolandic cortex made strong predictions for the syllables and consonants produced, but their predictive power was reduced for vowels. 
The results of this study provide a systematic survey of how cortical response patterns covary with the identity of speech sounds, which will help to constrain and guide theoretical models of speech perception, speech production, and phonological working memory. Copyright © 2016 Elsevier Inc. All rights reserved.
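The searchlight procedure at the heart of this study can be sketched in miniature: slide a small neighbourhood across features, cross-validate a classifier on each local pattern, and write the accuracy back to the centre "voxel". The 1-D layout, nearest-centroid classifier, and synthetic data below are all simplifying assumptions, not the authors' actual analysis.

```python
import numpy as np

def searchlight_accuracy(X, y, radius=2):
    """For each feature ("voxel"), classify trials from the local
    neighbourhood of features with leave-one-out nearest-centroid and
    record the accuracy at that voxel."""
    n_trials, n_vox = X.shape
    acc = np.zeros(n_vox)
    for v in range(n_vox):
        sl = X[:, max(0, v - radius): v + radius + 1]
        correct = 0
        for i in range(n_trials):
            train = np.ones(n_trials, dtype=bool)
            train[i] = False        # hold out trial i
            cents = {c: sl[train & (y == c)].mean(axis=0)
                     for c in np.unique(y)}
            pred = min(cents, key=lambda c: np.linalg.norm(sl[i] - cents[c]))
            correct += pred == y[i]
        acc[v] = correct / n_trials
    return acc

rng = np.random.default_rng(2)
y = np.repeat([0, 1], 20)              # two stimulus classes
X = rng.standard_normal((40, 30))      # 40 trials x 30 voxels of noise
X[y == 1, 10:13] += 2.0                # informative cluster at voxels 10-12
acc = searchlight_accuracy(X, y)
```

Voxels whose neighbourhood overlaps the informative cluster classify well above chance, and the accuracy map localizes the information, which is the essence of the searchlight approach.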
Exploring the extent and function of higher-order auditory cortex in rhesus monkeys.
Poremba, Amy; Mishkin, Mortimer
2007-07-01
Just as cortical visual processing continues far beyond the boundaries of early visual areas, so too does cortical auditory processing continue far beyond the limits of early auditory areas. In passively listening rhesus monkeys examined with metabolic mapping techniques, cortical areas reactive to auditory stimulation were found to include the entire length of the superior temporal gyrus (STG) as well as several other regions within the temporal, parietal, and frontal lobes. Comparison of these widespread activations with those from an analogous study in vision supports the notion that audition, like vision, is served by several cortical processing streams, each specialized for analyzing a different aspect of sensory input, such as stimulus quality, location, or motion. Exploration with different classes of acoustic stimuli demonstrated that most portions of STG show greater activation on the right than on the left regardless of stimulus class. However, there is a striking shift to left-hemisphere "dominance" during passive listening to species-specific vocalizations, though this reverse asymmetry is observed only in the region of temporal pole. The mechanism for this left temporal pole "dominance" appears to be suppression of the right temporal pole by the left hemisphere, as demonstrated by a comparison of the results in normal monkeys with those in split-brain monkeys.
Da Costa, Sandra; Bourquin, Nathalie M.-P.; Knebel, Jean-François; Saenz, Melissa; van der Zwaag, Wietske; Clarke, Stephanie
2015-01-01
Environmental sounds are highly complex stimuli whose recognition depends on the interaction of top-down and bottom-up processes in the brain. Their semantic representations were shown to yield repetition suppression effects, i.e., a decrease in activity during exposure to a sound that is perceived as belonging to the same source as a preceding sound. Making use of the high spatial resolution of 7T fMRI, we have investigated the representations of sound objects within early-stage auditory areas on the supratemporal plane. The primary auditory cortex was identified by means of tonotopic mapping and the non-primary areas by comparison with previous histological studies. Repeated presentations of different exemplars of the same sound source, as compared to the presentation of different sound sources, yielded significant repetition suppression effects within a subset of early-stage areas. This effect was found within the right hemisphere in primary areas A1 and R as well as two non-primary areas on the antero-medial part of the planum temporale, and within the left hemisphere in A1 and a non-primary area on the medial part of Heschl’s gyrus. Thus, several, but not all early-stage auditory areas encode the meaning of environmental sounds. PMID:25938430
Hino, Yasushi; Kusunose, Yuu; Miyamura, Shinobu; Lupker, Stephen J
2017-01-01
In most models of word processing, the degrees of consistency in the mappings between orthographic, phonological, and semantic representations are hypothesized to affect reading time. Following Hino, Miyamura, and Lupker's (2011) examination of the orthographic-phonological (O-P) and orthographic-semantic (O-S) consistency for 1,114 Japanese words (339 katakana and 775 kanji words), in the present research, we initially attempted to measure the phonological-orthographic (P-O) consistency for those same words. In contrast to the O-P and O-S consistencies, which were equivalent for kanji and katakana words, the P-O relationships were much more inconsistent for the kanji words than for the katakana words. The impact of kanji words' P-O consistency was then examined in both visual and auditory word recognition tasks. Although there was no effect of P-O consistency in the standard visual lexical-decision task, significant effects were detected in a lexical-decision task with auditory stimuli, in a perceptual identification task using masked visual stimuli, and in a lexical-decision task with degraded visual stimuli. The implications of these results are discussed in terms of the impact of P-O consistency in auditory and visual word recognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Open Touch/Sound Maps: A system to convey street data through haptic and auditory feedback
NASA Astrophysics Data System (ADS)
Kaklanis, Nikolaos; Votis, Konstantinos; Tzovaras, Dimitrios
2013-08-01
The use of spatial (geographic) information is becoming ever more central and pervasive in today's internet society, but most of it is currently inaccessible to visually impaired users. Access to visual maps is severely restricted for blind and visually impaired people, owing to their inability to interpret graphical information. Thus, alternative ways of presenting maps have to be explored in order to make maps accessible. Other types of sensory perception, such as touch and hearing, may serve as substitutes for vision in the exploration of maps, and multimodal virtual environments seem to be a promising alternative for people with visual impairments. The present paper introduces a tool for automatic multimodal map generation with haptic and audio feedback using OpenStreetMap data. For a desired map area, an elevation map is automatically generated and can be explored by touch using a haptic device. A sonification and a text-to-speech (TTS) mechanism also provide audio navigation information during haptic exploration of the map.
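The sonification component can be reduced to its simplest form: a linear mapping from a map value (e.g., elevation) onto pitch, so that higher terrain sounds higher-pitched. The frequency range below is an assumption for illustration, not the tool's actual parameters.

```python
def value_to_pitch(value, vmin, vmax, f_lo=220.0, f_hi=880.0):
    """Linearly map a value in [vmin, vmax] onto a frequency in
    [f_lo, f_hi] Hz, so that higher elevations sound higher-pitched."""
    frac = (value - vmin) / (vmax - vmin)
    return f_lo + frac * (f_hi - f_lo)

# Three sample elevations spanning the displayed range (0-100 m).
pitches = [value_to_pitch(e, 0.0, 100.0) for e in (0.0, 50.0, 100.0)]
```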
ERIC Educational Resources Information Center
Farmer, Thomas W.; Xie, Hongling
2013-01-01
In this commentary on the "Multiple Meanings of Peer Groups in Social Cognitive Mapping," Thomas W. Farmer and Hongling Xie discuss core issues in the identification of peer social groups in natural settings using the social cognitive mapping (SCM) procedures. Farmer and Xie applaud the authors for their efforts to advance the study of…
Representation of Dynamic Interaural Phase Difference in Auditory Cortex of Awake Rhesus Macaques
Scott, Brian H.; Malone, Brian J.; Semple, Malcolm N.
2009-01-01
Neurons in auditory cortex of awake primates are selective for the spatial location of a sound source, yet the neural representation of the binaural cues that underlie this tuning remains undefined. We examined this representation in 283 single neurons across the low-frequency auditory core in alert macaques, trained to discriminate binaural cues for sound azimuth. In response to binaural beat stimuli, which mimic acoustic motion by modulating the relative phase of a tone at the two ears, these neurons robustly modulate their discharge rate in response to this directional cue. In accordance with prior studies, the preferred interaural phase difference (IPD) of these neurons typically corresponds to azimuthal locations contralateral to the recorded hemisphere. Whereas binaural beats evoke only transient discharges in anesthetized cortex, neurons in awake cortex respond throughout the IPD cycle. In this regard, responses are consistent with observations at earlier stations of the auditory pathway. Discharge rate is a band-pass function of the frequency of IPD modulation in most neurons (73%), but both discharge rate and temporal synchrony are independent of the direction of phase modulation. When subjected to a receiver operating characteristic analysis, the responses of individual neurons are insufficient to account for the perceptual acuity of these macaques in an IPD discrimination task, suggesting the need for neural pooling at the cortical level. PMID:19164111
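The receiver operating characteristic analysis mentioned here asks how well an ideal observer could discriminate two stimulus conditions from a single neuron's spike counts; the ROC area equals the probability that a count drawn from one condition exceeds a count drawn from the other, with ties counted as half. A minimal version on made-up counts:

```python
import numpy as np

def roc_auc(counts_a, counts_b):
    """ROC area: probability that a random count from condition B
    exceeds one from condition A, with ties counted as half."""
    a = np.asarray(counts_a)[:, None]
    b = np.asarray(counts_b)[None, :]
    return (b > a).mean() + 0.5 * (b == a).mean()

# Synthetic spike counts at a neuron's preferred and null IPD.
pref = [8, 9, 10, 12, 11]
null = [2, 3, 4, 3, 5]
auc = roc_auc(null, pref)
```

An AUC of 1.0 means perfect single-trial discrimination; 0.5 means chance, which is why sub-ceiling single-neuron AUCs motivate the pooling argument in the abstract.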
Asynchronous inputs alter excitability, spike timing, and topography in primary auditory cortex
Pandya, Pritesh K.; Moucha, Raluca; Engineer, Navzer D.; Rathbun, Daniel L.; Vazquez, Jessica; Kilgard, Michael P.
2010-01-01
Correlation-based synaptic plasticity provides a potential cellular mechanism for learning and memory. Studies in the visual and somatosensory systems have shown that behavioral and surgical manipulation of sensory inputs leads to changes in cortical organization that are consistent with the operation of these learning rules. In this study, we examine how the organization of primary auditory cortex (A1) is altered by tones designed to decrease the average input correlation across the frequency map. After one month of separately pairing nucleus basalis stimulation with 2 and 14 kHz tones, a greater proportion of A1 neurons responded to frequencies below 2 kHz and above 14 kHz. Despite the expanded representation of these tones, cortical excitability was specifically reduced in the high and low frequency regions of A1, as evidenced by increased neural thresholds and decreased response strength. In contrast, in the frequency region between the two paired tones, driven rates were unaffected and spontaneous firing rate was increased. Neural response latencies were increased across the frequency map when nucleus basalis stimulation was associated with asynchronous activation of the high and low frequency regions of A1. This set of changes did not occur when pulsed noise bursts were paired with nucleus basalis stimulation. These results are consistent with earlier observations that sensory input statistics can shape cortical map organization and spike timing. PMID:15855025
ERIC Educational Resources Information Center
Chen, Chi-hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen
2017-01-01
Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories…
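Cross-situational word-object learning of the kind tested here is often modeled as accumulating co-occurrence counts across individually ambiguous scenes. The toy corpus below (hypothetical words and objects) shows the statistic such a learner could exploit:

```python
from collections import Counter

# Each scene pairs heard words with visible objects, without indicating
# which word names which object (hypothetical mini-corpus).
scenes = [
    ({"dax", "blick"}, {"ball", "cup"}),
    ({"dax", "wug"},   {"ball", "dog"}),
    ({"blick", "wug"}, {"cup", "dog"}),
]

cooc = Counter()
for words, objects in scenes:
    for w in words:
        for o in objects:
            cooc[(w, o)] += 1

# A word's referent is the object it co-occurs with most often.
best = {w: max(["ball", "cup", "dog"], key=lambda o: cooc[(w, o)])
        for w in ["dax", "blick", "wug"]}
```

No single scene disambiguates any word, yet aggregating counts across scenes recovers the full mapping, which is the core statistical-learning claim.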
ERIC Educational Resources Information Center
Mayer, Jennifer L.; Hannent, Ian; Heaton, Pamela F.
2016-01-01
Whilst enhanced perception has been widely reported in individuals with Autism Spectrum Disorders (ASDs), relatively little is known about the developmental trajectory and impact of atypical auditory processing on speech perception in intellectually high-functioning adults with ASD. This paper presents data on perception of complex tones and…
ERIC Educational Resources Information Center
Stephan, Denise Nadine; Koch, Iring
2010-01-01
Two experiments examined the role of compatibility of input and output (I-O) modality mappings in task switching. We define I-O modality compatibility in terms of similarity of stimulus modality and modality of response-related sensory consequences. Experiment 1 included switching between 2 compatible tasks (auditory-vocal vs. visual-manual) and…
Auditory integration training and other sound therapies for autism spectrum disorders (ASD).
Sinha, Yashwant; Silove, Natalie; Hayen, Andrew; Williams, Katrina
2011-12-07
Auditory integration therapy was developed as a technique for improving abnormal sound sensitivity in individuals with behavioural disorders including autism spectrum disorders. Other sound therapies bearing similarities to auditory integration therapy include the Tomatis Method and Samonas Sound Therapy. To determine the effectiveness of auditory integration therapy or other methods of sound therapy in individuals with autism spectrum disorders. For this update, we searched the following databases in September 2010: CENTRAL (2010, Issue 2), MEDLINE (1950 to September week 2, 2010), EMBASE (1980 to Week 38, 2010), CINAHL (1937 to current), PsycINFO (1887 to current), ERIC (1966 to current), LILACS (September 2010) and the reference lists of published papers. One new study was found for inclusion. Randomised controlled trials involving adults or children with autism spectrum disorders. Treatment was auditory integration therapy or other sound therapies involving listening to music modified by filtering and modulation. Control groups could involve no treatment, a waiting list, usual therapy or a placebo equivalent. The outcomes were changes in core and associated features of autism spectrum disorders, auditory processing, quality of life and adverse events. Two independent review authors performed data extraction. All outcome data in the included papers were continuous. We calculated point estimates and standard errors from t-test scores and post-intervention means. Meta-analysis was inappropriate for the available data. We identified six randomised controlled trials of auditory integration therapy and one of Tomatis therapy, involving a total of 182 individuals aged three to 39 years. Two were cross-over trials. Five trials had fewer than 20 participants. Allocation concealment was inadequate for all studies. Twenty different outcome measures were used and only two outcomes were used by three or more studies. 
Meta-analysis was not possible due to very high heterogeneity or the presentation of data in unusable forms. Three studies (Bettison 1996; Zollweg 1997; Mudford 2000) did not demonstrate any benefit of auditory integration therapy over control conditions. Three studies (Veale 1993; Rimland 1995; Edelson 1999) reported improvements at three months for the auditory integration therapy group based on the Aberrant Behaviour Checklist, but they used a total score rather than subgroup scores, which is of questionable validity, and Veale's results did not reach statistical significance. Rimland 1995 also reported improvements at three months in the auditory integration therapy group for the Aberrant Behaviour Checklist subgroup scores. The study addressing Tomatis therapy (Corbett 2008) described an improvement in language with no difference between treatment and control conditions and did not report on the behavioural outcomes that were used in the auditory integration therapy trials. There is no evidence that auditory integration therapy or other sound therapies are effective as treatments for autism spectrum disorders. As synthesis of existing data has been limited by the disparate outcome measures used between studies, there is not sufficient evidence to prove that this treatment is not effective. However, of the seven studies including 182 participants that have been reported to date, only two (with an author in common), involving a total of 35 participants, report statistically significant improvements in the auditory integration therapy group and for only two outcome measures (Aberrant Behaviour Checklist and Fisher's Auditory Problems Checklist). As such, there is no evidence to support the use of auditory integration therapy at this time.
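The review's conversion of t scores and group sizes into point estimates and standard errors corresponds to the textbook Cohen's d computation for two independent groups; the exact formula the review used is not stated, so the version below is an assumption.

```python
import math

def d_from_t(t, n1, n2):
    """Cohen's d and its approximate standard error, recovered from an
    independent-samples t statistic and the two group sizes."""
    d = t * math.sqrt(1 / n1 + 1 / n2)
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, se

d, se = d_from_t(2.0, 10, 10)   # t = 2.0 with 10 participants per arm
```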
Sestito, Mariateresa; Raballo, Andrea; Umiltà, Maria Alessandra; Leuci, Emanuela; Tonna, Matteo; Fortunati, Renata; De Paola, Giancarlo; Amore, Mario; Maggini, Carlo; Gallese, Vittorio
2015-01-01
Self-disorders (SDs) have been described as a core schizophrenia spectrum vulnerability phenotype, both in classic and contemporary psychopathological literature. However, such a core phenotype has not yet been investigated adopting a trans-domain approach that combines the phenomenological and the neurophysiological levels of analysis. The aim of this study is to investigate the relation between SDs and subtle, schizophrenia-specific impairments of emotional resonance that are supposed to reflect abnormalities in the mirror neurons mechanism. Specifically, we tested whether electromyographic response to emotional stimuli (i.e. a proxy for subtle changes in facial mimicry and related motor resonance mechanisms) would predict the occurrence of anomalous subjective experiences (i.e. SDs). Eighteen schizophrenia spectrum (SzSp) patients underwent a comprehensive psychopathological examination and were contextually tested with a multimodal paradigm, recording facial electromyographic activity of muscles in response to positive and negative emotional stimuli. Experiential anomalies were explored with the Bonn Scale for the Assessment of Basic Symptoms (BSABS) and then condensed into rational subscales mapping SzSp anomalous self-experiences. SzSp patients showed an imbalance in emotional motor resonance with a selective bias toward negative stimuli, as well as a multisensory integration impairment. Multiple regression analysis showed that electromyographic facial reactions in response to negative stimuli presented in auditory modality specifically and strongly correlated with SD subscore. The study confirms the potential of SDs as target phenotype for neurobiological research and encourages research into disturbed motor/emotional resonance as possible body-level correlate of disturbed subjective experiences in SzSp.
Forlano, Paul M; Licorish, Roshney R; Ghahramani, Zachary N; Timothy, Miky; Ferrari, Melissa; Palmer, William C; Sisneros, Joseph A
2017-10-01
Little is known regarding the coordination of audition with decision-making and subsequent motor responses that initiate social behavior, including mate localization during courtship. Using the midshipman fish model, we tested the hypothesis that the time spent by females attending and responding to the advertisement call is correlated with the activation of a specific subset of catecholaminergic (CA) and social decision-making network (SDM) nuclei underlying auditory-driven sexual motivation. In addition, we quantified the relationship of neural activation between CA and SDM nuclei in all responders with the goal of providing a map of functional connectivity of the circuitry underlying a motivated state responsive to acoustic cues during mate localization. In order to make a baseline qualitative comparison of this functional brain map to unmotivated females, we made a similar correlative comparison of brain activation in females who were unresponsive to the advertisement call playback. Our results support an important role for dopaminergic neurons in the periventricular posterior tuberculum and ventral thalamus, putative A11 and A13 tetrapod homologues, respectively, as well as the posterior parvocellular preoptic area and dorsomedial telencephalon (laterobasal amygdala homologue), in auditory attention and appetitive sexual behavior in fishes. These findings may also offer insights into the function of these highly conserved nuclei in the context of auditory-driven reproductive social behavior across vertebrates. © The Author 2017. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology.
NASA Astrophysics Data System (ADS)
Mu, Kai
2017-02-01
The established “Map World” on the National Geographic Information Public Service Platform offers free access to a wealth of geographic information in the Core Area of the Silk Road Economic Belt. Considering the special security situation and the severe splittism and anti-splittism struggles in the Core Area of the Silk Road Economic Belt, a moving-target positioning and alarming platform based on J2EE and a browser/server (B/S) architecture was designed and implemented by combining “Map World” data with the global navigation satellite system. The platform addresses several problems, such as the effective combination of Global Navigation Satellite System (GNSS) and “Map World” resources, moving-target alarm configuration, queries of historical routes, and system management.
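The alarm side of such a platform reduces to a geofence test on each incoming GNSS fix. As a minimal sketch under assumed conventions (the paper gives no implementation; the function names and the circular-fence model here are illustrative), the check can be built on the haversine great-circle distance:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres on a spherical Earth (R = 6371 km)."""
    r_earth = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r_earth * math.asin(math.sqrt(a))

def breach_alarm(target, fence_center, radius_m):
    """True when a GNSS-reported (lat, lon) target leaves a circular fence."""
    return haversine_m(*target, *fence_center) > radius_m
```

Each incoming position would be run through `breach_alarm` and appended to a history table to support the historical-route queries the abstract mentions.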
Garcia-Pino, Elisabet; Gessele, Nikodemus; Koch, Ursula
2017-08-02
Hypersensitivity to sounds is one of the prevalent symptoms in individuals with Fragile X syndrome (FXS). It manifests behaviorally early during development and is often used as a landmark for treatment efficacy. However, the physiological mechanisms and circuit-level alterations underlying this aberrant behavior remain poorly understood. Using the mouse model of FXS (Fmr1 KO), we demonstrate that functional maturation of auditory brainstem synapses is impaired in FXS. Fmr1 KO mice showed a greatly enhanced excitatory synaptic input strength in neurons of the lateral superior olive (LSO), a prominent auditory brainstem nucleus, which integrates ipsilateral excitation and contralateral inhibition to compute interaural level differences. Conversely, the glycinergic, inhibitory input properties remained unaffected. The enhanced excitation was the result of an increased number of cochlear nucleus fibers converging onto one LSO neuron, without changing individual synapse properties. Concomitantly, immunolabeling of excitatory ending markers revealed an increase in the immunolabeled area, supporting abnormally elevated excitatory input numbers. Intrinsic firing properties were only slightly enhanced. In line with the disturbed development of LSO circuitry, auditory processing was also affected in adult Fmr1 KO mice as shown with single-unit recordings of LSO neurons. These processing deficits manifested as an increase in firing rate, a broadening of the frequency response area, and a shift in the interaural level difference function of LSO neurons. Our results suggest that this aberrant synaptic development of auditory brainstem circuits might be a major underlying cause of the auditory processing deficits in FXS. SIGNIFICANCE STATEMENT Fragile X syndrome (FXS) is the most common inheritable form of intellectual impairment, including autism. A core symptom of FXS is extreme sensitivity to loud sounds. This is one reason why individuals with FXS tend to avoid social interactions, contributing to their isolation. Here, a mouse model of FXS was used to investigate the auditory brainstem, where basic sound information is first processed. Loss of the Fragile X mental retardation protein leads to excessive excitatory compared with inhibitory inputs in neurons extracting information about sound levels. Functionally, this elevated excitation results in increased firing rates and abnormal coding of frequency and binaural sound localization cues. Imbalanced early-stage sound level processing could partially explain the auditory processing deficits in FXS. Copyright © 2017 the authors.
Młynarski, Wiktor
2015-05-01
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
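The first layer's separation of amplitude and phase can be illustrated with the analytic signal, a conventional stand-in for the complex-valued encoding the model learns (this sketch is not the paper's learned basis; it simply shows how a binaural pair yields an amplitude envelope and an interaural phase difference):

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal: its magnitude is the amplitude envelope
    and its angle the instantaneous phase of the real input x."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2          # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2
    return np.fft.ifft(spectrum * h)

def interaural_phase_difference(left, right):
    """Instantaneous phase difference between the two ears' signals,
    wrapped to [-pi, pi]."""
    pl = np.angle(analytic_signal(left))
    pr = np.angle(analytic_signal(right))
    return np.angle(np.exp(1j * (pl - pr)))
```

For a pure tone delayed at one ear, the returned IPD is constant and equals the imposed phase offset, which is exactly the cue the model's second layer encodes jointly with amplitude.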
Graulty, Christian; Papaioannou, Orestis; Bauer, Phoebe; Pitts, Michael A; Canseco-Gonzalez, Enriqueta
2018-04-01
In auditory-visual sensory substitution, visual information (e.g., shape) can be extracted through strictly auditory input (e.g., soundscapes). Previous studies have shown that image-to-sound conversions that follow simple rules [such as the Meijer algorithm; Meijer, P. B. L. An experimental system for auditory image representation. Transactions on Biomedical Engineering, 39, 111-121, 1992] are highly intuitive and rapidly learned by both blind and sighted individuals. A number of recent fMRI studies have begun to explore the neuroplastic changes that result from sensory substitution training. However, the time course of cross-sensory information transfer in sensory substitution is largely unexplored and may offer insights into the underlying neural mechanisms. In this study, we recorded ERPs to soundscapes before and after sighted participants were trained with the Meijer algorithm. We compared these posttraining versus pretraining ERP differences with those of a control group who received the same set of 80 auditory/visual stimuli but with arbitrary pairings during training. Our behavioral results confirmed the rapid acquisition of cross-sensory mappings, and the group trained with the Meijer algorithm was able to generalize their learning to novel soundscapes at impressive levels of accuracy. The ERP results revealed an early cross-sensory learning effect (150-210 msec) that was significantly enhanced in the algorithm-trained group compared with the control group as well as a later difference (420-480 msec) that was unique to the algorithm-trained group. These ERP modulations are consistent with previous fMRI results and provide additional insight into the time course of cross-sensory information transfer in sensory substitution.
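The flavor of such a simple image-to-sound rule is easy to sketch: columns scan left to right in time, rows map to frequencies (top = high), and brightness sets amplitude. The defaults below are illustrative assumptions, not the published parameters of the Meijer system:

```python
import numpy as np

def image_to_soundscape(image, duration=1.05, sr=11025,
                        f_lo=500.0, f_hi=5000.0):
    """Minimal sketch of a Meijer-style image-to-sound conversion.

    `image` is a 2-D array of brightness values in [0, 1]. Columns are
    scanned over `duration`; each row drives one sinusoid (top row is
    the highest frequency), weighted by pixel brightness.
    """
    n_rows, n_cols = image.shape
    # Exponentially spaced row frequencies, highest at the top row
    freqs = f_hi * (f_lo / f_hi) ** (np.arange(n_rows) / (n_rows - 1))
    samples_per_col = int(duration * sr / n_cols)
    t = np.arange(samples_per_col) / sr
    chunks = []
    for c in range(n_cols):
        col = image[:, c, None]  # brightness weights, shape (rows, 1)
        chunk = (col * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        chunks.append(chunk)
    sound = np.concatenate(chunks)
    return sound / (np.abs(sound).max() + 1e-12)  # normalize to [-1, 1]
```

Played back, a bright diagonal stroke in the image is heard as a frequency sweep, which is why such mappings are so quickly learned.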
Liew, Kongmeng; Lindborg, PerMagnus; Rodrigues, Ruth; Styles, Suzy J.
2018-01-01
Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds, and collectively voted to create the shape of a visual graphic, presented as part of the audio–visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface. PMID:29515494
Soskey, Laura N; Allen, Paul D; Bennetto, Loisa
2017-08-01
One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
High-Field Functional Imaging of Pitch Processing in Auditory Cortex of the Cat
Butler, Blake E.; Hall, Amee J.; Lomber, Stephen G.
2015-01-01
The perception of pitch is a widely studied and hotly debated topic in human hearing. Many of these studies combine functional imaging techniques with stimuli designed to disambiguate the percept of pitch from frequency information present in the stimulus. While useful in identifying potential “pitch centres” in cortex, the existence of truly pitch-responsive neurons requires single neuron-level measures that can only be undertaken in animal models. While a number of animals have been shown to be sensitive to pitch, few studies have addressed the location of cortical generators of pitch percepts in non-human models. The current study uses high-field functional magnetic resonance imaging (fMRI) of the feline brain in an attempt to identify regions of cortex that show increased activity in response to pitch-evoking stimuli. Cats were presented with iterated rippled noise (IRN) stimuli, narrowband noise stimuli with the same spectral profile but no perceivable pitch, and a processed IRN stimulus in which phase components were randomized to preserve slowly changing modulations in the absence of pitch (IRNo). Pitch-related activity was not observed to occur in either primary auditory cortex (A1) or the anterior auditory field (AAF) which comprise the core auditory cortex in cats. Rather, cortical areas surrounding the posterior ectosylvian sulcus responded preferentially to the IRN stimulus when compared to narrowband noise, with group analyses revealing bilateral activity centred in the posterior auditory field (PAF). This study demonstrates that fMRI is useful for identifying pitch-related processing in cat cortex, and identifies cortical areas that warrant further investigation. Moreover, we have taken the first steps in identifying a useful animal model for the study of pitch perception. PMID:26225563
Mulert, C; Juckel, G; Augustin, H; Hegerl, U
2002-10-01
The loudness dependency of the auditory evoked potentials (LDAEP) is used as an indicator of the central serotonergic system and predicts clinical response to serotonin agonists. So far, LDAEP has typically been investigated with dipole source analysis, because with this method the primary and secondary auditory cortex (with high versus low serotonergic innervation) can be separated at least in part. We have developed a new analysis procedure that uses an MRI probabilistic map of the primary auditory cortex in Talairach space and analyzed the current density in this region of interest with low resolution electromagnetic tomography (LORETA). LORETA is a tomographic localization method that calculates the current density distribution in Talairach space. In a group of patients with major depression (n=15), this new method predicted the response to a selective serotonin reuptake inhibitor (citalopram) at least as well as the traditional dipole source analysis method (P=0.019 vs. P=0.028). The correlation of improvement on the Hamilton Scale is significant with the LORETA-LDAEP values (0.56; P=0.031) but not with the dipole source analysis LDAEP values (0.43; P=0.11). The new tomographic LDAEP analysis is a promising tool in the analysis of the central serotonergic system.
Hunter, Eric J; Svec, Jan G; Titze, Ingo R
2006-12-01
Frequency and intensity ranges (in true decibel sound pressure level, 20 microPa at 1 m) of voice production in trained and untrained vocalists were compared with the perceived dynamic range (phons) and units of loudness (sones) of the ear. Results were reported in terms of standard voice range profiles (VRPs), perceived VRPs (as predicted by accepted measures of auditory sensitivities), and a new metric labeled as an overall perceptual level construct. Trained classical singers made use of the most sensitive part of the hearing range (around 3-4 kHz) through the use of the singer's formant. When mapped onto the contours of equal loudness (depicting nonuniform spectral and dynamic sensitivities of the auditory system), the formant is perceived at an even higher sound level, as measured in phons, than a flat or A-weighted spectrum would indicate. The contributions of effects like the singer's formant and the sensitivities of the auditory system helped the trained singers produce 20% to 40% more units of loudness, as measured in sones, than the untrained singers. Trained male vocalists had a maximum overall perceptual level construct that was 40% higher than the untrained male vocalists. Although the A-weighted spectrum (commonly used in VRP measurement) is a reasonable first-order approximation of auditory sensitivities, it misrepresents the most salient part of the sensitivities (where the singer's formant is found) by nearly 10 dB.
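The sone scale used above follows a standard relation: 40 phon is defined as 1 sone, and loudness in sones doubles for each additional 10 phon. A small helper (standard acoustics, not code from the study) makes the abstract's "20% to 40% more units of loudness" concrete:

```python
def phon_to_sone(phon):
    """Loudness in sones from loudness level in phons (valid above ~40 phon,
    where loudness doubles per 10-phon increase and 40 phon = 1 sone)."""
    return 2.0 ** ((phon - 40.0) / 10.0)

# A 20%-40% gain in sones corresponds to roughly 2.6-4.9 extra phons:
# 10 * log2(1.2) ~= 2.63 and 10 * log2(1.4) ~= 4.85
```

So the trained singers' loudness advantage, expressed in sones, amounts to only a few phons of perceived level, which the singer's formant supplies by exploiting the ear's most sensitive region.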
Visual-auditory integration during speech imitation in autism.
Williams, Justin H G; Massaro, Dominic W; Peel, Natalie J; Bosseler, Alexis; Suddendorf, Thomas
2004-01-01
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.
Fear Processing in Dental Phobia during Crossmodal Symptom Provocation: An fMRI Study
Maslowski, Nina Isabel; Wittchen, Hans-Ulrich; Lueken, Ulrike
2014-01-01
While previous studies successfully identified the core neural substrates of the animal subtype of specific phobia, only sparse and inconsistent findings are available for dental phobia. This might partly relate to the fact that, typically, visual stimuli were employed. The current study aimed to investigate the influence of stimulus modality on neural fear processing in dental phobia. Thirteen dental phobics (DP) and thirteen healthy controls (HC) attended a block-design functional magnetic resonance imaging (fMRI) symptom provocation paradigm encompassing both visual and auditory stimuli. Drill sounds and matched neutral sinus tones served as auditory stimuli, and dentist scenes and matched neutral videos as visual stimuli. Group comparisons showed increased activation in the insula, anterior cingulate cortex, orbitofrontal cortex, and thalamus in DP compared to HC during auditory but not visual stimulation. In contrast, no differential autonomic reactions were observed in DP. Present results are largely comparable to brain areas identified in animal phobia, but also point towards a potential downregulation of autonomic outflow by neural fear circuits in this disorder. The findings enlarge our knowledge about neural correlates of dental phobia and may help to understand the neural underpinnings of the clinical and physiological characteristics of the disorder. PMID:24738049
Auralization of CFD Vorticity Using an Auditory Illusion
NASA Astrophysics Data System (ADS)
Volpe, C. R.
2005-12-01
One way in which scientists and engineers interpret large quantities of data is through a process called visualization, i.e. generating graphical images that capture essential characteristics and highlight interesting relationships. Another approach, which has received far less attention, is to present complex information with sound. This approach, called "auralization" or "sonification", is the auditory analog of visualization. Early work in data auralization frequently involved directly mapping some variable in the data to a sound parameter, such as pitch or volume. Multi-variate data could be auralized by mapping several variables to several sound parameters simultaneously. A clear drawback of this approach is the limited practical range of sound parameters that can be presented to human listeners without exceeding their range of perception or comfort. A software auralization system built upon an existing visualization system is briefly described. This system incorporates an aural presentation synchronously and interactively with an animated scientific visualization, so that alternate auralization techniques can be investigated. One such alternate technique involves auditory illusions: sounds which trick the listener into perceiving something other than what is actually being presented. This software system will be used to present an auditory illusion, known for decades among cognitive psychologists, which produces a sound that seems to ascend or descend endlessly in pitch. The applicability of this illusion for presenting Computational Fluid Dynamics data will be demonstrated. CFD data is frequently visualized with thin stream-lines, but thicker stream-ribbons and stream-tubes can also be used, which rotate to convey fluid vorticity. But a purely graphical presentation can yield drawbacks of its own. Thicker stream-tubes can be self-obscuring, and can obscure other scene elements as well, thus motivating a different approach, such as using sound.
Naturally, the simple approach of mapping clockwise and counterclockwise rotations to actual pitch increases and decreases, eventually results in sounds that the listener cannot hear. In this alternate presentation using an auditory illusion, repeated rotations of a stream-tube are replaced with continual increases or decreases in apparent pitch. These apparent pitch changes can continue without bound, yet never exceed the range of frequencies that the listener can hear. The effectiveness of this presentation technique has been studied, and empirical results, obtained through formal user testing and statistical analysis, are presented. These results demonstrate that an aural data presentation using an auditory illusion can improve performance in locating key data characteristics, a task that demonstrates a certain level of understanding of the data. The experiments show that this holds true even when the user expresses a subjective preference and greater confidence in a visual presentation. The CFD data used in the research comes from a number of different industrial domains, but the advantages of this technique could be equally applicable to the study of earth sciences involving fluid mechanics, such as atmospheric or ocean sciences. Furthermore, the approach is applicable not only to CFD data, but to any type of data in which a quantity that is cyclic in nature, such as orientation, needs to be presented. Although the techniques and tools were originally developed with scientists and engineers in mind, they can also be used to aid students, particularly those who are visually impaired or who have difficulty interpreting certain spatial relationships visually.
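The illusion described belongs to the family of Shepard/Risset glides: octave-spaced partials rise together under a fixed spectral envelope, so apparent pitch climbs forever while every frequency stays within a bounded band. A minimal sketch (all parameter values are assumptions, not those of the system described):

```python
import numpy as np

def shepard_glide(duration=5.0, sr=22050, base=27.5, n_octaves=8,
                  cycles=2.0):
    """Endlessly rising glide built from octave-spaced partials.

    Each partial drifts upward and wraps around every octave; a fixed
    Gaussian envelope over log-frequency fades partials in at the bottom
    of the band and out at the top, so apparent pitch climbs without
    bound while all frequencies stay audible.
    """
    t = np.arange(int(duration * sr)) / sr
    drift = cycles * t / duration           # upward drift, in octaves
    center = n_octaves / 2.0                # envelope peak (octaves above base)
    out = np.zeros_like(t)
    for k in range(n_octaves):
        octave = (k + drift) % n_octaves    # wrapped log-frequency position
        freq = base * 2.0 ** octave         # instantaneous frequency in Hz
        amp = np.exp(-0.5 * ((octave - center) / (n_octaves / 6.0)) ** 2)
        phase = 2 * np.pi * np.cumsum(freq) / sr  # integrate freq for phase
        out += amp * np.sin(phase)
    return out / np.abs(out).max()
```

Mapping a stream-tube's rotation rate onto `cycles` would give the vorticity presentation the abstract describes: faster rotation sounds like a faster, but never clipping, pitch ascent.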
Mapping alteration minerals at prospect, outcrop and drill core scales using imaging spectrometry
Kruse, Fred A.; L. Bedell, Richard; Taranik, James V.; Peppin, William A.; Weatherbee, Oliver; Calvin, Wendy M.
2011-01-01
Imaging spectrometer data (also known as ‘hyperspectral imagery’ or HSI) are well established for detailed mineral mapping from airborne and satellite systems. Overhead data, however, have substantial additional potential when used together with ground-based measurements. An imaging spectrometer system was used to acquire airborne measurements and to image in-place outcrops (mine walls) and boxed drill core and rock chips using modified sensor-mounting configurations. Data were acquired at 5 nm nominal spectral resolution in 360 channels from 0.4 to 2.45 μm. Analysis results using standardized hyperspectral methodologies demonstrate rapid extraction of representative mineral spectra and mapping of mineral distributions and abundances in map-plan, with core depth, and on the mine walls. The examples shown highlight the capabilities of these data for mineral mapping. Integration of these approaches promotes improved understanding of relations between geology, alteration and spectral signatures in three dimensions and should lead to improved efficiency of mine development, operations and ultimately effective mine closure. PMID:25937681
Jao Keehn, R Joanne; Sanchez, Sandra S; Stewart, Claire R; Zhao, Weiqi; Grenesko-Stevens, Emily L; Keehn, Brandon; Müller, Ralph-Axel
2017-01-01
Autism spectrum disorders (ASD) are pervasive developmental disorders characterized by impairments in language development and social interaction, along with restricted and stereotyped behaviors. These behaviors often include atypical responses to sensory stimuli; some children with ASD are easily overwhelmed by sensory stimuli, while others may seem unaware of their environment. Vision and audition are two sensory modalities important for social interactions and language, and are differentially affected in ASD. In the present study, 16 children and adolescents with ASD and 16 typically developing (TD) participants matched for age, gender, nonverbal IQ, and handedness were tested using a mixed event-related/blocked functional magnetic resonance imaging paradigm to examine basic perceptual processes that may form the foundation for later-developing cognitive abilities. Auditory (high or low pitch) and visual conditions (dot located high or low in the display) were presented, and participants indicated whether the stimuli were "high" or "low." Results for the auditory condition showed downregulated activity of the visual cortex in the TD group, but upregulation in the ASD group. This atypical activity in visual cortex was associated with autism symptomatology. These findings suggest atypical crossmodal (auditory-visual) modulation linked to sociocommunicative deficits in ASD, in agreement with the general hypothesis of low-level sensorimotor impairments affecting core symptomatology. Autism Res 2017, 10: 130-143. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.
Encoding frequency contrast in primate auditory cortex
Scott, Brian H.; Semple, Malcolm N.
2014-01-01
Changes in amplitude and frequency jointly determine much of the communicative significance of complex acoustic signals, including human speech. We have previously described responses of neurons in the core auditory cortex of awake rhesus macaques to sinusoidal amplitude modulation (SAM) signals. Here we report a complementary study of sinusoidal frequency modulation (SFM) in the same neurons. Responses to SFM were analogous to SAM responses in that changes in multiple parameters defining SFM stimuli (e.g., modulation frequency, modulation depth, carrier frequency) were robustly encoded in the temporal dynamics of the spike trains. For example, changes in the carrier frequency produced highly reproducible changes in shapes of the modulation period histogram, consistent with the notion that the instantaneous probability of discharge mirrors the moment-by-moment spectrum at low modulation rates. The upper limit for phase locking was similar across SAM and SFM within neurons, suggesting shared biophysical constraints on temporal processing. Using spike train classification methods, we found that neural thresholds for modulation depth discrimination are typically far lower than would be predicted from frequency tuning to static tones. This “dynamic hyperacuity” suggests a substantial central enhancement of the neural representation of frequency changes relative to the auditory periphery. Spike timing information was superior to average rate information when discriminating among SFM signals, and even when discriminating among static tones varying in frequency. This finding held even when differences in total spike count across stimuli were normalized, indicating both the primacy and generality of temporal response dynamics in cortical auditory processing. PMID:24598525
Interdependent encoding of pitch, timbre and spatial location in auditory cortex
Bizley, Jennifer K.; Walker, Kerry M. M.; Silverman, Bernard W.; King, Andrew J.; Schnupp, Jan W. H.
2009-01-01
Because we can perceive the pitch, timbre and spatial location of a sound source independently, it seems natural to suppose that cortical processing of sounds might separate out spatial from non-spatial attributes. Indeed, recent studies support the existence of anatomically segregated ‘what’ and ‘where’ cortical processing streams. However, few attempts have been made to measure the responses of individual neurons in different cortical fields to sounds that vary simultaneously across spatial and non-spatial dimensions. We recorded responses to artificial vowels presented in virtual acoustic space to investigate the representations of pitch, timbre and sound source azimuth in both core and belt areas of ferret auditory cortex. A variance decomposition technique was used to quantify the way in which altering each parameter changed neural responses. Most units were sensitive to two or more of these stimulus attributes. Whilst indicating that neural encoding of pitch, location and timbre cues is distributed across auditory cortex, significant differences in average neuronal sensitivity were observed across cortical areas and depths, which could form the basis for the segregation of spatial and non-spatial cues at higher cortical levels. Some units exhibited significant non-linear interactions between particular combinations of pitch, timbre and azimuth. These interactions were most pronounced for pitch and timbre and were less commonly observed between spatial and non-spatial attributes. Such non-linearities were most prevalent in primary auditory cortex, although they tended to be small compared with stimulus main effects. PMID:19228960
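The variance decomposition the authors describe can be caricatured in a few lines: for responses measured on a full pitch x timbre x azimuth grid, compare the sum of squares captured by each dimension's marginal means to the total variance (a toy illustration, not the paper's exact estimator, which also quantifies interaction and noise terms):

```python
import numpy as np

def main_effect_fractions(r):
    """Fraction of total response variance explained by each main effect.

    `r` holds mean responses on a full factorial grid, one axis per
    stimulus dimension (e.g. pitch x timbre x azimuth).
    """
    grand = r.mean()
    total = ((r - grand) ** 2).sum()
    fractions = []
    for axis in range(r.ndim):
        other = tuple(a for a in range(r.ndim) if a != axis)
        means = r.mean(axis=other)              # marginal means for this axis
        n_per_level = r.size / r.shape[axis]    # cells averaged per level
        ss = n_per_level * ((means - grand) ** 2).sum()
        fractions.append(ss / total)
    return fractions
```

A unit sensitive to "two or more stimulus attributes", as most were, is simply one for which two or more of these fractions are reliably above chance.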
NASA Astrophysics Data System (ADS)
Stanley, V.; Stewart, E.
2016-12-01
Rock cores collected during historic mineral exploration can provide invaluable data for modern analyses, but only if the samples are properly curated. The Cahoon Mine operated in Baraboo, WI in the 1910s and produced iron ore from the ca. 1.7 Ga Freedom Formation. The Freedom Formation is part of the well-known Baraboo-interval stratigraphy and is only present in the subsurface of Wisconsin (Weidman, 1904). Seventeen exploratory drill cores were rescued by the Wisconsin Geological and Natural History Survey (WGNHS) from the original drying house at the mine site. The condition of the containers endangered the stratigraphic context of the collection; identifiers and depth markings were often obscured or lost, and the individual core pieces were coated in residue and dust. Most of what is known about the Freedom Formation comes from core logs and master's theses from the early 1900s (Leith, 1935; Schmidt, 1951). Ongoing subsurface mapping of the Baraboo-interval sediments and underlying basement of southern Wisconsin integrates new and existing subsurface and regional geophysical datasets. Mapping involves calibrating unique signals in regional aeromagnetic data to known lithology from drill core and cuttings. The Freedom Formation is especially important in this process because its iron-rich composition and regional continuity give it a distinctive signal in regional aeromagnetic data. The Cahoon Mine cores in the WGNHS repository are the most extensive collection of physical samples from the Freedom Formation still in existence. We are in the process of curating the cores to facilitate their use in ongoing bedrock mapping. Today the cost and logistics of extensively resampling this unit make the existing core collection irreplaceable. We transferred the material to new containers, digitally recorded metadata, and created archival labels. As a result of this effort, the Cahoon Mine cores are now stored in a format that is physically and digitally accessible.
Mapping Frequency-Specific Tone Predictions in the Human Auditory Cortex at High Spatial Resolution.
Berlot, Eva; Formisano, Elia; De Martino, Federico
2018-05-23
Auditory inputs reaching our ears are often incomplete, but our brains nevertheless transform them into rich and complete perceptual phenomena such as meaningful conversations or pleasurable music. It has been hypothesized that our brains extract regularities in inputs, which enables us to predict the upcoming stimuli, leading to efficient sensory processing. However, it is unclear whether tone predictions are encoded with similar specificity as perceived signals. Here, we used high-field fMRI to investigate whether human auditory regions encode one of the most defining characteristics of auditory perception: the frequency of predicted tones. Two pairs of tone sequences were presented in ascending or descending directions, with the last tone omitted in half of the trials. Every pair of incomplete sequences contained identical sounds, but was associated with different expectations about the last tone (a high- or low-frequency target). This allowed us to disambiguate predictive signaling from sensory-driven processing. We recorded fMRI responses from eight female participants during passive listening to complete and incomplete sequences. Inspection of specificity and spatial patterns of responses revealed that target frequencies were encoded similarly during their presentations, as well as during omissions, suggesting frequency-specific encoding of predicted tones in the auditory cortex (AC). Importantly, frequency specificity of predictive signaling was observed already at the earliest levels of auditory cortical hierarchy: in the primary AC. Our findings provide evidence for content-specific predictive processing starting at the earliest cortical levels. SIGNIFICANCE STATEMENT Given the abundance of sensory information around us in any given moment, it has been proposed that our brain uses contextual information to prioritize and form predictions about incoming signals. 
However, there remains a surprising lack of understanding of the specificity and content of such prediction signaling; for example, whether a predicted tone is encoded with similar specificity as a perceived tone. Here, we show that early auditory regions encode the frequency of a tone that is predicted yet omitted. Our findings contribute to the understanding of how expectations shape sound processing in the human auditory cortex and provide further insights into how contextual information influences computations in neuronal circuits. Copyright © 2018 the authors.
Object discrimination using optimized multi-frequency auditory cross-modal haptic feedback.
Gibson, Alison; Artemiadis, Panagiotis
2014-01-01
As the field of brain-machine interfaces and neuro-prosthetics continues to grow, there is a high need for sensor and actuation mechanisms that can provide haptic feedback to the user. Current technologies employ expensive, invasive and often inefficient force feedback methods, resulting in an unrealistic solution for individuals who rely on these devices. This paper responds through the development, integration and analysis of a novel feedback architecture where haptic information during the neural control of a prosthetic hand is perceived through multi-frequency auditory signals. Through representing force magnitude with volume and force location with frequency, the feedback architecture can translate the haptic experiences of a robotic end effector into the alternative sensory modality of sound. Previous research with the proposed cross-modal feedback method confirmed its learnability, so the current work aimed to investigate which frequency map (i.e. frequency-specific locations on the hand) is optimal in helping users distinguish between hand-held objects and tasks associated with them. After short use with the cross-modal feedback during the electromyographic (EMG) control of a prosthetic hand, testing results show that users are able to use audial feedback alone to discriminate between everyday objects. While users showed adaptation to three different frequency maps, the simplest map containing only two frequencies was found to be the most useful in discriminating between objects. This outcome provides support for the feasibility and practicality of the cross-modal feedback method during the neural control of prosthetics.
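The volume-for-force, frequency-for-location mapping can be sketched as follows (the frequencies, force range, and two-region map are hypothetical placeholders, not the study's calibrated values):

```python
import numpy as np

# hypothetical two-frequency map (the study's simplest map performed best):
# thumb-side contacts -> low tone, finger-side contacts -> high tone
FREQ_MAP = {"thumb": 440.0, "fingers": 880.0}   # Hz, illustrative values
MAX_FORCE = 10.0                                 # N, assumed sensor range

def force_to_audio(location, force, sr=16000, dur=0.25):
    """Render a haptic event as a tone: location -> frequency, force -> volume."""
    amp = np.clip(force / MAX_FORCE, 0.0, 1.0)   # force magnitude sets amplitude
    t = np.arange(int(sr * dur)) / sr
    return amp * np.sin(2 * np.pi * FREQ_MAP[location] * t)

tone = force_to_audio("fingers", 5.0)            # half of maximum grip force
print(f"peak amplitude {tone.max():.2f} at {FREQ_MAP['fingers']:.0f} Hz")
```

The design choice in the paper is that fewer frequency regions are easier to learn; the sketch's two-entry map reflects that finding, with everything else invented.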
Constraining the Dust Opacity Law in Three Small and Isolated Molecular Clouds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webb, K. A.; Thanjavur, K.; Di Francesco, J.
Density profiles of isolated cores derived from thermal dust continuum emission rely on models of dust properties, such as mass opacity, that are poorly constrained. With complementary measures from near-infrared extinction maps, we can assess the reliability of commonly used dust models. In this work, we compare Herschel-derived maps of the optical depth with equivalent maps derived from CFHT WIRCAM near-infrared observations for three isolated cores: CB 68, L 429, and L 1552. We assess the dust opacities provided from four models: OH1a, OH5a, Orm1, and Orm4. Although the consistency of the models differs between the three sources, the results suggest that the optical properties of dust in the envelopes of the cores are best described by either silicate and bare graphite grains (e.g., Orm1) or carbonaceous grains with some coagulation and either thin or no ice mantles (e.g., OH5a). None of the models, however, individually produced the most consistent optical depth maps for every source. The results suggest that either the dust in the cores is not well described by any one dust property model, the application of the dust models cannot be extended beyond the very center of the cores, or more complex SED fitting functions are necessary.
Electrical tuning and transduction in short hair cells of the chicken auditory papilla.
Tan, Xiaodong; Beurg, Maryline; Hackney, Carole; Mahendrasingam, Shanthini; Fettiplace, Robert
2013-04-01
The avian auditory papilla contains two classes of sensory receptor, tall hair cells (THCs) and short hair cells (SHCs), the latter analogous to mammalian outer hair cells with large efferent but sparse afferent innervation. Little is known about the tuning, transduction, or electrical properties of SHCs. To address this problem, we made patch-clamp recordings from hair cells in an isolated chicken basilar papilla preparation at 33°C. We found that SHCs are electrically tuned by a Ca(2+)-activated K(+) current, their resonant frequency varying along the papilla in tandem with that of the THCs, which also exhibit electrical tuning. The tonotopic map for THCs was similar to maps previously described from auditory nerve fiber measurements. SHCs also possess an A-type K(+) current, but electrical tuning was observed only at resting potentials positive to -45 mV, where the A current is inactivated. We predict that the resting potential in vivo is approximately -40 mV, depolarized by a standing inward current through mechanotransducer (MT) channels having a resting open probability of ∼0.26. The resting open probability stems from a low endolymphatic Ca(2+) concentration (0.24 mM) and a high intracellular mobile Ca(2+) buffer concentration, estimated from perforated-patch recordings as equivalent to 0.5 mM BAPTA. The high buffer concentration was confirmed by quantifying parvalbumin-3 and calbindin D-28K with calibrated postembedding immunogold labeling, demonstrating >1 mM calcium-binding sites. Both proteins displayed an apex-to-base gradient matching that in the MT current amplitude, which increased exponentially along the papilla. Stereociliary bundles also labeled heavily with antibodies against the Ca(2+) pump isoform PMCA2a.
Mapping edge-based traffic measurements onto the internal links in MPLS network
NASA Astrophysics Data System (ADS)
Zhao, Guofeng; Tang, Hong; Zhang, Yi
2004-09-01
Applying multi-protocol label switching (MPLS) techniques to IP-based backbones has proven advantageous for traffic engineering. Obtaining the volume of load on each internal link of the network is crucial for applying traffic engineering. Although these data can be collected per link, for example with the traditional SNMP scheme, doing so may impose a heavy processing load and sharply degrade the throughput of the core routers. Monitoring only at the edge of the network and mapping the measurements onto the core therefore provides a good alternative. In this paper, we explore a scheme for traffic mapping from edge-based measurements in an MPLS network, in which the volume of traffic on each internal link of the domain is inferred from measurements available only at ingress nodes. We apply path-based measurements at ingress nodes without enabling measurements in the core of the network, and propose a method that infers the path from the ingress to the egress node using the label distribution protocol, without collecting routing data from core routers. Based on flow theory and queuing theory, we show that our approach is effective and present the traffic-mapping algorithm. Performance simulation results indicate the potential of our approach.
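The edge-to-core mapping rests on flow conservation: once each label-switched path is known, internal link loads follow from a routing matrix applied to the ingress measurements. A minimal sketch on an invented toy topology (links, paths, and rates are all hypothetical, not from the paper):

```python
import numpy as np

# toy MPLS domain: per-LSP traffic measured at ingress, paths known from LDP
links = ["A-B", "B-C", "B-D", "C-D"]
paths = {                       # hypothetical label-switched paths
    "lsp1": ["A-B", "B-C"],
    "lsp2": ["A-B", "B-D"],
    "lsp3": ["A-B", "B-C", "C-D"],
}
ingress_mbps = {"lsp1": 10.0, "lsp2": 25.0, "lsp3": 5.0}

# routing matrix R[l, p] = 1 if path p traverses link l
R = np.array([[link in path for link in links]
              for path in paths.values()], dtype=float).T
x = np.array([ingress_mbps[p] for p in paths])
link_load = R @ x               # flow conservation: edge measurements -> core
for l, v in zip(links, link_load):
    print(f"{l}: {v:.0f} Mb/s")
```

The paper's contribution is inferring the paths themselves from label distribution protocol state; once the routing matrix is known, the mapping is this single matrix-vector product.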
Measor, Kevin; Yarrow, Stuart; Razak, Khaleel A
2018-05-26
Sound level processing is a fundamental function of the auditory system. To determine how the cortex represents sound level, it is important to quantify how changes in level alter the spatiotemporal structure of cortical ensemble activity. This is particularly true for echolocating bats that have control over, and often rapidly adjust, call level to actively change echo level. To understand how cortical activity may change with sound level, here we mapped response rate and latency changes with sound level in the auditory cortex of the pallid bat. The pallid bat uses a 60-30 kHz downward frequency modulated (FM) sweep for echolocation. Neurons tuned to frequencies between 30 and 70 kHz in the auditory cortex are selective for the properties of FM sweeps used in echolocation forming the FM sweep selective region (FMSR). The FMSR is strongly selective for sound level between 30 and 50 dB SPL. Here we mapped the topography of level selectivity in the FMSR using downward FM sweeps and show that neurons with more monotonic rate level functions are located in caudomedial regions of the FMSR overlapping with high frequency (50-60 kHz) neurons. Non-monotonic neurons dominate the FMSR, and are distributed across the entire region, but there is no evidence for amplitopy. We also examined how first spike latency of FMSR neurons change with sound level. The majority of FMSR neurons exhibit paradoxical latency shift wherein the latency increases with sound level. Moreover, neurons with paradoxical latency shifts are more strongly level selective and are tuned to lower sound level than neurons in which latencies decrease with level. These data indicate a clustered arrangement of neurons according to monotonicity, with no strong evidence for finer scale topography, in the FMSR. The latency analysis suggests mechanisms for strong level selectivity that is based on relative timing of excitatory and inhibitory inputs. 
Taken together, these data suggest how the spatiotemporal spread of cortical activity may represent sound level. Copyright © 2018. Published by Elsevier B.V.
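A common way to quantify the monotonicity of a rate-level function, consistent with the clustering described above, is the ratio of the response at the highest sound level to the peak response. A small sketch with invented rate-level functions (this index definition is a standard convention, not necessarily the authors' exact metric):

```python
import numpy as np

levels = np.arange(10, 90, 10)          # dB SPL, illustrative level range

def monotonicity_index(rates):
    """Firing rate at the highest level as a fraction of the peak rate.
    ~1 for monotonic neurons; near 0 for strongly non-monotonic ones."""
    return rates[-1] / rates.max()

monotonic = np.array([2, 5, 12, 20, 30, 38, 44, 47])     # saturates at high levels
nonmonotonic = np.array([2, 10, 35, 48, 30, 12, 5, 3])   # peaks near 40 dB SPL

print(f"monotonic MI = {monotonicity_index(monotonic):.2f}, "
      f"non-monotonic MI = {monotonicity_index(nonmonotonic):.2f}")
```

Thresholding such an index is one way a mapping study can sort neurons into the monotonic and non-monotonic clusters whose spatial arrangement the abstract describes.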
Audio-visual speech perception: a developmental ERP investigation
Knowland, Victoria CP; Mercure, Evelyne; Karmiloff-Smith, Annette; Dick, Fred; Thomas, Michael SC
2014-01-01
Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development. PMID:24176002
Park, Jangho; Chung, Seockhoon; Lee, Jiho; Sung, Joo Hyun; Cho, Seung Woo; Sim, Chang Sun
2017-04-12
Excessive noise affects human health and interferes with daily activities. Although environmental noise may not directly cause mental illness, it may accelerate and intensify the development of latent mental disorders. Noise sensitivity (NS) is considered a moderator of non-auditory noise effects. In the present study, we aimed to assess whether NS is associated with non-auditory effects. We recruited a community sample of 1836 residents residing in Ulsan and Seoul, South Korea. From July to November 2015, participants were interviewed regarding their demographic characteristics, socioeconomic status, medical history, and NS. The non-auditory effects of noise were assessed using the Center for Epidemiologic Studies Depression scale, Insomnia Severity Index, State-Trait Anxiety Inventory state subscale, and Stress Response Inventory-Modified Form. Individual noise levels were recorded from noise maps. A three-model multivariate logistic regression analysis was performed to identify factors that might affect psychiatric illnesses. Participants ranged in age from 19 to 91 years (mean: 47.0 ± 16.1 years), and 37.9% (n = 696) were male. Participants with high NS were more likely to have been diagnosed with diabetes and hyperlipidemia and to use psychiatric medication. The multivariable analysis indicated that even after adjusting for noise-related variables, sociodemographic factors, medical illness, and duration of residence, subjects in the high-NS group were more than 2 times more likely to experience depression and insomnia and 1.9 times more likely to have anxiety, compared with those in the low-NS group. Noise exposure level was not identified as an explanatory variable. NS increases susceptibility and hence moderates the reactions of individuals to noise. NS, rather than noise itself, is associated with an elevated susceptibility to non-auditory effects.
Putative mechanisms mediating tolerance for audiovisual stimulus onset asynchrony.
Bhat, Jyoti; Miller, Lee M; Pitt, Mark A; Shahin, Antoine J
2015-03-01
Audiovisual (AV) speech perception is robust to temporal asynchronies between visual and auditory stimuli. We investigated the neural mechanisms that facilitate tolerance for audiovisual stimulus onset asynchrony (AVOA) with EEG. Individuals were presented with AV words that were asynchronous in onsets of voice and mouth movement and judged whether they were synchronous or not. Behaviorally, individuals tolerated (perceived as synchronous) longer AVOAs when mouth movement preceded the speech (V-A) stimuli than when the speech preceded mouth movement (A-V). Neurophysiologically, the P1-N1-P2 auditory evoked potentials (AEPs), time-locked to sound onsets and known to arise in and surrounding the primary auditory cortex (PAC), were smaller for the in-sync than the out-of-sync percepts. Spectral power of oscillatory activity in the beta band (14-30 Hz) following the AEPs was larger during the in-sync than out-of-sync perception for both A-V and V-A conditions. However, alpha power (8-14 Hz), also following AEPs, was larger for the in-sync than out-of-sync percepts only in the V-A condition. These results demonstrate that AVOA tolerance is enhanced by inhibiting low-level auditory activity (e.g., AEPs representing generators in and surrounding PAC) that code for acoustic onsets. By reducing sensitivity to acoustic onsets, visual-to-auditory onset mapping is weakened, allowing for greater AVOA tolerance. In contrast, beta and alpha results suggest the involvement of higher-level neural processes that may code for language cues (phonetic, lexical), selective attention, and binding of AV percepts, allowing for wider neural windows of temporal integration, i.e., greater AVOA tolerance. Copyright © 2015 the American Physiological Society.
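Band-limited spectral power of the kind analyzed here can be estimated with a plain FFT periodogram. A minimal sketch on a synthetic "EEG" trace (the sampling rate and signal are invented; the study's actual time-frequency method is not specified in the abstract):

```python
import numpy as np

def band_power(signal, sr, lo, hi):
    """Mean power spectral density within [lo, hi] Hz (plain FFT periodogram)."""
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (sr * len(signal))
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

sr = 250                                   # Hz, a typical EEG sampling rate
t = np.arange(0, 2, 1 / sr)
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 20 * t) + 0.2 * rng.normal(size=t.size)  # 20 Hz burst

beta = band_power(eeg, sr, 14, 30)         # beta band (14-30 Hz), as in the study
alpha = band_power(eeg, sr, 8, 14)         # alpha band (8-14 Hz)
print(f"beta/alpha power ratio: {beta / alpha:.1f}")
```

Comparing such band-power estimates between in-sync and out-of-sync percepts is the kind of contrast the abstract reports for beta and alpha oscillations.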
Młynarski, Wiktor
2015-01-01
In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. The peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match the specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude; both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match the tuning characteristics of neurons in the mammalian auditory cortex well. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373
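The core computational idea, inferring a sparse code for a stimulus under an overcomplete basis, can be sketched with a generic sparse-coding solver (ISTA). The random dictionary and all parameters below are illustrative stand-ins, unrelated to the paper's learned complex-valued model:

```python
import numpy as np

rng = np.random.default_rng(6)
# toy version of the principle: find a sparse code for a stimulus in an
# overcomplete dictionary (random here; the paper learns its basis from sounds)
D = rng.normal(size=(20, 50))
D /= np.linalg.norm(D, axis=0)                  # unit-norm basis functions
a_true = np.zeros(50)
a_true[[3, 17, 41]] = [1.5, -2.0, 1.0]          # sparse underlying code
x = D @ a_true + 0.01 * rng.normal(size=20)     # observed "stimulus"

def ista(x, D, lam=0.1, n_iter=500):
    """Iterative shrinkage-thresholding for min 0.5*||x - Da||^2 + lam*||a||_1."""
    a = np.zeros(D.shape[1])
    step = 1 / np.linalg.norm(D, 2) ** 2        # inverse Lipschitz constant
    for _ in range(n_iter):
        g = a + step * (D.T @ (x - D @ a))      # gradient step on the fit term
        a = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0)  # soft threshold
    return a

a_hat = ista(x, D)
print(f"{np.sum(np.abs(a_hat) > 0.1)} clearly active units out of {a_hat.size}")
```

The inferred code concentrates activity on a few units, the hallmark of the sparse, efficient representation the paper argues the auditory cortex implements.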
Nimmo, Lisa M; Lewandowsky, Stephan
2006-09-01
The notion of a link between time and memory is intuitively appealing and forms the core assumption of temporal distinctiveness models. Distinctiveness models predict that items that are temporally isolated from their neighbors at presentation should be recalled better than items that are temporally crowded. By contrast, event-based theories consider time to be incidental to the processes that govern memory, and such theories would not imply a temporal isolation advantage unless participants engaged in a consolidation process (e.g., rehearsal or selective encoding) that exploited the temporal structure of the list. In this report, we examine two studies that assessed the effect of temporal distinctiveness on memory, using auditory (Experiment 1) and auditory and visual (Experiment 2) presentation with unpredictably varying interitem intervals. The results show that with unpredictable intervals temporal isolation does not benefit memory, regardless of presentation modality.
ERIC Educational Resources Information Center
Singer, Bryan F.; Bryan, Myranda A.; Popov, Pavlo; Scarff, Raymond; Carter, Cody; Wright, Erin; Aragona, Brandon J.; Robinson, Terry E.
2016-01-01
The sensory properties of a reward-paired cue (a conditioned stimulus; CS) may impact the motivational value attributed to the cue, and in turn influence the form of the conditioned response (CR) that develops. A cue with multiple sensory qualities, such as a moving lever-CS, may activate numerous neural pathways that process auditory and visual…
ERIC Educational Resources Information Center
McCain, Katherine W.
1992-01-01
Demonstrates the interrelationship between two traditionally separate literatures, i.e., marine biology and physical oceanography, and develops a joint core journal list. The use of journal intercitation data from "Journal Citation Reports" for "Science Citation Index" and from SCISEARCH on DIALOG to create a cocitation map is…
Corvin, Jaime A; DeBate, Rita; Wolfe-Quintero, Kate; Petersen, Donna J
2017-01-01
In the twenty-first century, the dynamics of health and health care are changing, necessitating a commitment to revising traditional public health curricula to better meet present day challenges. This article describes how the College of Public Health at the University of South Florida utilized the Intervention Mapping framework to translate revised core competencies into an integrated, theory-driven core curriculum to meet the training needs of the twenty-first century public health scholar and practitioner. This process resulted in the development of four sequenced courses: History and Systems of Public Health and Population Assessment I, delivered in the first semester, and Population Assessment II and Translation to Practice, delivered in the second semester. While the transformation process, moving from traditional public health core content to an integrated and innovative curriculum, is a challenging and daunting task, Intervention Mapping provides the ideal framework for guiding this process. Intervention Mapping walks the curriculum developers from the broad goals and objectives to the finite details of a lesson plan. Throughout this process, critical lessons were learned, including the importance of being open to new ideologies and frameworks and the critical need to involve key stakeholders in every step of the decision-making process to ensure the sustainability of the resulting integrated and theory-based curriculum. Ultimately, as a stronger curriculum emerged, the developers and instructors themselves were changed, fostering a stronger public health workforce from within.
Shang, Nan; Styles, Suzy J.
2017-01-01
Studies investigating cross-modal correspondences between auditory pitch and visual shapes have shown children and adults consistently match high pitch to pointy shapes and low pitch to curvy shapes, yet no studies have investigated linguistic-uses of pitch. In the present study, we used a bouba/kiki style task to investigate the sound/shape mappings for Tones of Mandarin Chinese, for three groups of participants with different language backgrounds. We recorded the vowels [i] and [u] articulated in each of the four tones of Mandarin Chinese. In Study 1 a single auditory stimulus was presented with two images (one curvy, one spiky). In Study 2 a single image was presented with two auditory stimuli differing only in tone. Participants were asked to select the best match in an online ‘Quiz.’ Across both studies, we replicated the previously observed ‘u-curvy, i-pointy’ sound/shape cross-modal correspondence in all groups. However, Tones were mapped differently by people with different language backgrounds: speakers of Mandarin Chinese classified as Chinese-dominant systematically matched Tone 1 (high, steady) to the curvy shape and Tone 4 (falling) to the pointy shape, while English speakers with no knowledge of Chinese preferred to match Tone 1 (high, steady) to the pointy shape and Tone 3 (low, dipping) to the curvy shape. These effects were observed most clearly in Study 2 where tone-pairs were contrasted explicitly. These findings are in line with the dominant patterns of linguistic pitch perception for speakers of these languages (pitch-change, and pitch height, respectively). Chinese English balanced bilinguals showed a bivalent pattern, swapping between the Chinese pitch-change pattern and the English pitch-height pattern depending on the task. 
These findings show that the supposedly universal pattern of mapping linguistic sounds to shape is modulated by the sensory properties of a speaker’s language system, and that people with high functioning in more than one language can dynamically shift between patterns. PMID:29270147
NASA Astrophysics Data System (ADS)
Tabik, S.; Romero, L. F.; Mimica, P.; Plata, O.; Zapata, E. L.
2012-09-01
A broad area in astronomy focuses on simulating extragalactic objects based on Very Long Baseline Interferometry (VLBI) radio-maps. Several algorithms in this area simulate the radio-maps that would be observed if emitted from a predefined extragalactic object. This work analyzes the performance and scaling of such algorithms on multi-socket, multi-core architectures. In particular, we evaluate a sharing approach, a privatizing approach, and a hybrid approach on systems with a complex memory hierarchy that includes a shared Last Level Cache (LLC). In addition, we investigate which manual processes can be systematized and then automated in future work. The experiments show that the data-privatizing model scales efficiently on medium-scale multi-socket, multi-core systems (up to 48 cores), while, regardless of algorithmic and scheduling optimizations, the sharing approach is unable to reach acceptable scalability on more than one socket. However, the hybrid model with a specific level of data-sharing provides the best scalability across all of the multi-socket, multi-core systems used.
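The privatizing strategy can be sketched generically: each worker accumulates into its own private copy of the result and the copies are merged once at the end, avoiding contended writes to shared data. The workload below is invented for illustration (a Python thread pool stands in for the OpenMP-style parallelism such codes actually use):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# hypothetical workload: accumulate many source contributions into one map
N_WORKERS, MAP_SIZE = 4, 64
sources = np.random.default_rng(3).normal(size=(1000, MAP_SIZE))

def privatized_sum(chunks):
    """Privatizing approach: each worker fills its own partial map, merged once
    at the end -- avoiding the contention that limits the sharing approach."""
    with ThreadPoolExecutor(N_WORKERS) as pool:
        partials = list(pool.map(lambda c: c.sum(axis=0), chunks))
    return np.sum(partials, axis=0)          # single merge step

chunks = np.array_split(sources, N_WORKERS)
result = privatized_sum(chunks)
assert np.allclose(result, sources.sum(axis=0))
print("private partial maps merge to the same map as a serial sum")
```

The trade-off the paper measures is between this pattern's extra memory (one map per worker) and the sharing approach's synchronization cost; the hybrid model it favors shares data among a limited group of workers.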
Mapping a lateralization gradient within the ventral stream for auditory speech perception.
Specht, Karsten
2013-01-01
Recent models on speech perception propose a dual-stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend toward the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus. This article describes and reviews the results from a series of complementary functional magnetic resonance imaging studies that aimed to trace the hierarchical processing network for speech comprehension within the left and right hemisphere with a particular focus on the temporal lobe and the ventral stream. As hypothesized, the results demonstrate a bilateral involvement of the temporal lobes in the processing of speech signals. However, an increasing leftward asymmetry was detected from auditory-phonetic to lexico-semantic processing and along the posterior-anterior axis, thus forming a "lateralization" gradient. This increasing leftward lateralization was particularly evident for the left superior temporal sulcus and more anterior parts of the temporal lobe.
NASA Astrophysics Data System (ADS)
West, Ruth G.; Margolis, Todd; Prudhomme, Andrew; Schulze, Jürgen P.; Mostafavi, Iman; Lewis, J. P.; Gossmann, Joachim; Singh, Rajvikram
2014-02-01
Scalable Metadata Environments (MDEs) are an artistic approach for designing immersive environments for large-scale data exploration in which users interact with data by forming multiscale patterns that they alternately disrupt and reform. Developed and prototyped as part of an art-science research collaboration, we define an MDE as a 4D virtual environment structured by quantitative and qualitative metadata describing multidimensional data collections. Entire data sets (e.g., tens of millions of records) can be visualized and sonified at multiple scales and at different levels of detail so they can be explored interactively in real time within MDEs. They are designed to reflect similarities and differences in the underlying data or metadata such that patterns can be visually/aurally sorted in an exploratory fashion by an observer who is not familiar with the details of the mapping from data to visual, auditory or dynamic attributes. While many approaches for visual and auditory data mining exist, MDEs are distinct in that they utilize qualitative and quantitative data and metadata to construct multiple interrelated conceptual coordinate systems. These "regions" function as conceptual lattices for scalable auditory and visual representations within virtual environments computationally driven by multi-GPU CUDA-enabled fluid dynamics systems.
González-García, Nadia; Rendón, Pablo L
2017-05-23
The neural correlates of consonance and dissonance perception have been widely studied, but not the neural correlates of consonance and dissonance production. The most straightforward manner of musical production is singing, but, from an imaging perspective, it still presents more challenges than listening because it involves motor activity. The accurate singing of musical intervals requires integration between auditory feedback processing and vocal motor control in order to correctly produce each note. This protocol presents a method that permits the monitoring of neural activations associated with the vocal production of consonant and dissonant intervals. Four musical intervals, two consonant and two dissonant, are used as stimuli, both for an auditory discrimination test and a task that involves first listening to and then reproducing given intervals. Participants, all female vocal students at the conservatory level, were studied using functional Magnetic Resonance Imaging (fMRI) during the performance of the singing task, with the listening task serving as a control condition. In this manner, the activity of both the motor and auditory systems was observed, and a measure of vocal accuracy during the singing task was also obtained. Thus, the protocol can also be used to track activations associated with singing different types of intervals or with singing the required notes more accurately. The results indicate that singing dissonant intervals requires greater participation of the neural mechanisms responsible for the integration of external feedback from the auditory and sensorimotor systems than does singing consonant intervals.
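Vocal accuracy for interval singing is commonly scored as the deviation of the produced pitch from its target in cents. A minimal sketch (the abstract does not specify the protocol's exact scoring, and the frequencies below are invented):

```python
import math

def cents(f_sung, f_target):
    """Deviation of a sung pitch from its target in cents (1 semitone = 100)."""
    return 1200 * math.log2(f_sung / f_target)

# hypothetical trial: target is a perfect fifth above A4 (440 Hz -> 660 Hz)
print(f"{cents(668.0, 660.0):+.1f} cents")  # positive = sharp of the target
```

Averaging the absolute deviation over trials gives a per-interval accuracy measure, which can then be compared between consonant and dissonant intervals as in the protocol.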
Ard, Tyler; Carver, Frederick W; Holroyd, Tom; Horwitz, Barry; Coppola, Richard
2015-08-01
In typical magnetoencephalography and/or electroencephalography functional connectivity analysis, researchers select one of several methods that measure a relationship between regions to determine connectivity, such as coherence, power correlations, and others. However, it is largely unknown if some are more suited than others for various types of investigations. In this study, the authors investigate seven connectivity metrics to evaluate which, if any, are sensitive to audiovisual integration by contrasting connectivity when tracking an audiovisual object versus connectivity when tracking a visual object uncorrelated with the auditory stimulus. The authors are able to assess the metrics' performances at detecting audiovisual integration by investigating connectivity between auditory and visual areas. Critically, the authors perform their investigation on a whole-cortex all-to-all mapping, avoiding confounds introduced in seed selection. The authors find that amplitude-based connectivity measures in the beta band detect strong connections between visual and auditory areas during audiovisual integration, specifically between V4/V5 and auditory cortices in the right hemisphere. Conversely, phase-based connectivity measures in the beta band as well as phase and power measures in alpha, gamma, and theta do not show connectivity between audiovisual areas. The authors postulate that while beta power correlations detect audiovisual integration in the current experimental context, it may not always be the best measure to detect connectivity. Instead, it is likely that the brain utilizes a variety of mechanisms in neuronal communication that may produce differential types of temporal relationships.
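The amplitude-based connectivity that the authors contrast with phase-based measures can be illustrated with a minimal sketch: correlate the power envelopes of two band-limited signals, with each envelope obtained from an FFT-based analytic signal. Function names and parameters below are illustrative, not taken from the study's pipeline:

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (assumes an even-length real input)."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1      # keep DC and Nyquist
    h[1:n // 2] = 2           # double positive frequencies
    return np.fft.ifft(X * h)

def amplitude_correlation(x, y):
    """Amplitude-based connectivity: Pearson correlation of the
    band-limited power envelopes of two signals."""
    ex, ey = np.abs(analytic(x)), np.abs(analytic(y))
    return np.corrcoef(ex, ey)[0, 1]
```

Two signals sharing a slow amplitude envelope score high on this metric even when their carrier phases differ, which is the distinction the study exploits.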
Efficient transformation of an auditory population code in a small sensory system.
Clemens, Jan; Kutzki, Olaf; Ronacher, Bernhard; Schreiber, Susanne; Wohlgemuth, Sandra
2011-08-16
Optimal coding principles are implemented in many large sensory systems. They include the systematic transformation of external stimuli into a sparse and decorrelated neuronal representation, enabling a flexible readout of stimulus properties. Are these principles also applicable to size-constrained systems, which have to rely on a limited number of neurons and may only have to fulfill specific and restricted tasks? We studied this question in an insect system--the early auditory pathway of grasshoppers. Grasshoppers use genetically fixed songs to recognize mates. The first steps of neural processing of songs take place in a small three-layer feed-forward network comprising only a few dozen neurons. We analyzed the transformation of the neural code within this network. Indeed, grasshoppers create a decorrelated and sparse representation, in accordance with optimal coding theory. Whereas the neuronal input layer is best read out as a summed population, a labeled-line population code for temporal features of the song is established after only two processing steps. At this stage, information about song identity is maximal for a population decoder that preserves neuronal identity. We conclude that optimal coding principles do apply to the early auditory system of the grasshopper, despite its size constraints. The inputs, however, are not encoded in a systematic, map-like fashion as in many larger sensory systems. Already at its periphery, part of the grasshopper auditory system seems to focus on behaviorally relevant features, and is in this property more reminiscent of higher sensory areas in vertebrates.
Hunter, Eric J.; Švec, Jan G.; Titze, Ingo R.
2016-01-01
Frequency and intensity ranges (in true dB SPL re 20 μPa at 1 meter) of voice production in trained and untrained vocalists were compared to the perceived dynamic range (phons) and units of loudness (sones) of the ear. Results were reported in terms of standard Voice Range Profiles (VRPs), perceived VRPs (as predicted by accepted measures of auditory sensitivities), and a new metric labeled as an Overall Perceptual Level Construct. Trained classical singers made use of the most sensitive part of the hearing range (around 3–4 kHz) through the use of the singer's formant. When mapped onto the contours of equal loudness (depicting non-uniform spectral and dynamic sensitivities of the auditory system), the formant is perceived at an even higher sound level, as measured in phons, than a flat or A-weighted spectrum would indicate. The contributions of effects like the singer's formant and the sensitivities of the auditory system helped the trained singers produce 20–40 percent more units of loudness, as measured in sones, than the untrained singers. Trained male vocalists had a maximum Overall Perceptual Level Construct that was 40% higher than that of the untrained male vocalists. While the A-weighted spectrum (commonly used in VRP measurement) is a reasonable first-order approximation of auditory sensitivities, it misrepresents the most salient part of the sensitivities (where the singer's formant is found) by nearly 10 dB. PMID:16325373
Two-dimensional ice mapping of molecular cores
NASA Astrophysics Data System (ADS)
Noble, J. A.; Fraser, H. J.; Pontoppidan, K. M.; Craigon, A. M.
2017-06-01
We present maps of the column densities of H2O, CO2 and CO ices towards the molecular cores B 35A, DC 274.2-00.4, BHR 59 and DC 300.7-01.0. These ice maps, probing spatial distances in molecular cores as low as 2200 au, challenge the traditional hypothesis that the denser the region observed, the more ice is present, providing evidence that the relationships between solid molecular species are more varied than the generic picture we often adopt to model gas-grain chemical processes and explain feedback between solid phase processes and gas phase abundances. We present the first combined solid-gas maps of a single molecular species, based upon observations of both CO ice and gas phase C18O towards B 35A, a star-forming dense core in Orion. We conclude that molecular species in the solid phase are powerful tracers of 'small-scale' chemical diversity, prior to the onset of star formation. With a component analysis approach, we can probe the solid phase chemistry of a region at a level of detail greater than that provided by statistical analyses or generic conclusions drawn from single pointing line-of-sight observations alone.
Rodriguez, R A; Edmonds, H L; Auden, S M; Austin, E H
1999-09-01
To examine the effects of temperature on auditory brainstem responses (ABRs) in infants during hypothermic cardiopulmonary bypass for total circulatory arrest (TCA). The relationship between ABRs (as a surrogate measure of core-brain temperature) and body temperature as measured at several temperature monitoring sites was determined. In a prospective, observational study, ABRs were recorded non-invasively at normothermia and at every 1 or 2 °C change in ear-canal temperature during cooling and rewarming in 15 infants (ages: 2 days to 14 months) who required TCA. The ABR latencies and amplitudes and the lowest temperatures at which an ABR was identified (the threshold) were measured during both cooling and rewarming. Temperatures from four standard temperature monitoring sites were simultaneously recorded. The latencies of ABRs increased and amplitudes decreased with cooling (P < 0.01), but rewarming reversed these effects. The ABR threshold temperature as related to each monitoring site (ear canal, nasopharynx, esophagus and bladder) was respectively determined as 23 ± 2.2 °C, 20.8 ± 1.7 °C, 14.6 ± 3.4 °C, and 21.5 ± 3.8 °C during cooling and 21.8 ± 1.6 °C, 22.4 ± 2.0 °C, 27.6 ± 3.6 °C, and 23.0 ± 2.4 °C during rewarming. The rewarming latencies were shorter and Q10 latencies smaller than the corresponding cooling values (P < 0.01). Esophageal and bladder sites were more susceptible to temperature variations than the ear canal and nasopharynx. No temperature site reliably predicted an electrophysiological threshold. A faster latency recovery during rewarming suggests that body temperature monitoring underestimates the effects of rewarming on the core brain. ABRs may be helpful to monitor the effects of cooling and rewarming on the core brain during pediatric cardiopulmonary bypass.
Laser selective microablation of sensitized intracellular components within auditory receptor cells
NASA Astrophysics Data System (ADS)
Harris, David M.; Evans, Burt N.; Santos-Sacchi, Joseph
1995-05-01
A laser system can be coupled to a light microscope for laser microbeam ablation and trapping of single cells in vitro. We have extended this technology by sensitizing target structures with vital dyes to provide selective ablation of specific subcellular components. Isolated auditory receptor cells (outer hair cells, OHCs) are known to elongate and contract in response to electrical, chemical and mechanical stimulation. Various intracellular structures are candidate components mediating motility of OHCs, but the exact mechanism(s) is currently unknown. In ongoing studies of OHC motility, we have used the microbeam for selective ablation of lateral wall components and of an axial cytoskeletal core that extends from the nucleus to the cell apex. Both the area beneath the subsurface cisternae of the lateral wall and the core are rich in mitochondria. OHCs isolated from guinea pig cochlea are suspended in L-15 medium containing 2.0 μM Rhodamine 123, a porphyrin with an affinity for mitochondria. A spark-pumped nitrogen laser pumping a dye cell (Coumarin 500) was aligned on the optical axis of a Nikon Optiphot-2 to produce a 3 ns, 0.5–10 μm spot (diameter above ablation threshold with a 50X water immersion objective, N.A. 0.8), with energy at the target of ≈10 μJ/pulse. At short incubation times in Rh123, irradiation caused local blebbing or bulging of the cytoplasmic membrane and thus loss of the OHC's cylindrical shape. At longer Rh123 incubation times, when the central axis of the cell was targeted we observed cytoplasmic clearing, immediate cell elongation (≈5%) and clumping of core material at nuclear and apical attachments. Experiments are underway to examine the significance of these preliminary observations.
Testing and operating a multiprocessor chip with processor redundancy
Bellofatto, Ralph E; Douskey, Steven M; Haring, Rudolf A; McManus, Moyra K; Ohmacht, Martin; Schmunkamp, Dietmar; Sugavanam, Krishnan; Weatherford, Bryan J
2014-10-21
A system and method for improving the yield rate of a multiprocessor semiconductor chip that includes primary processor cores and one or more redundant processor cores. A first tester conducts a first test on one or more processor cores, and encodes results of the first test in an on-chip non-volatile memory. A second tester conducts a second test on the processor cores, and encodes results of the second test in an external non-volatile storage device. An override bit of a multiplexer is set if a processor core fails the second test. In response to the override bit, the multiplexer selects a physical-to-logical mapping of processor IDs according to one of: the encoded results in the memory device or the encoded results in the external storage device. On-chip logic configures the processor cores according to the selected physical-to-logical mapping.
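The override-driven selection of a physical-to-logical core mapping can be sketched in Python. All names here are hypothetical; the patent describes hardware multiplexers and fuse-encoded test results, not software:

```python
def select_mapping(onchip_results, external_results, second_test_pass):
    """Sketch of the described redundancy scheme: if any core fails the
    second test, an override bit selects the mapping derived from the
    externally stored results; otherwise the on-chip encoded results win."""
    override = not all(second_test_pass)
    source = external_results if override else onchip_results
    # Build the physical-to-logical map, skipping cores marked bad, so a
    # redundant core transparently takes over a failed core's logical ID.
    mapping, logical = {}, 0
    for phys, good in enumerate(source):
        if good:
            mapping[phys] = logical
            logical += 1
    return mapping
```

The point of the two result stores is that a later, more thorough test can retire a core without re-fusing the chip: the multiplexer simply prefers the external map when the override bit is set.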
Ewing, Samuel G; Grace, Anthony A
2013-02-01
Existing antipsychotic drugs are most effective at treating the positive symptoms of schizophrenia but their relative efficacy is low and they are associated with considerable side effects. In this study deep brain stimulation of the ventral hippocampus was performed in a rodent model of schizophrenia (MAM-E17) in an attempt to alleviate one set of neurophysiological alterations observed in this disorder. Bipolar stimulating electrodes were fabricated and implanted, bilaterally, into the ventral hippocampus of rats. High frequency stimulation was delivered bilaterally via a custom-made stimulation device and both spectral analysis (power and coherence) of resting state local field potentials and amplitude of auditory evoked potential components during a standard inhibitory gating paradigm were examined. MAM rats exhibited alterations in specific components of the auditory evoked potential in the infralimbic cortex, the core of the nucleus accumbens, mediodorsal thalamic nucleus, and ventral hippocampus in the left hemisphere only. DBS was effective in reversing these evoked deficits in the infralimbic cortex and the mediodorsal thalamic nucleus of MAM-treated rats to levels similar to those observed in control animals. In contrast stimulation did not alter evoked potentials in control rats. No deficits or stimulation-induced alterations were observed in the prelimbic and orbitofrontal cortices, the shell of the nucleus accumbens or ventral tegmental area. These data indicate a normalization of deficits in generating auditory evoked potentials induced by a developmental disruption by acute high frequency, electrical stimulation of the ventral hippocampus. Copyright © 2012 Elsevier B.V. All rights reserved.
Ewing, Samuel G.; Grace, Anthony A.
2012-01-01
Existing antipsychotic drugs are most effective at treating the positive symptoms of schizophrenia, but their relative efficacy is low and they are associated with considerable side effects. In this study deep brain stimulation of the ventral hippocampus was performed in a rodent model of schizophrenia (MAM-E17) in an attempt to alleviate one set of neurophysiological alterations observed in this disorder. Bipolar stimulating electrodes were fabricated and implanted, bilaterally, into the ventral hippocampus of rats. High frequency stimulation was delivered bilaterally via a custom-made stimulation device and both spectral analysis (power and coherence) of resting state local field potentials and amplitude of auditory evoked potential components during a standard inhibitory gating paradigm were examined. MAM rats exhibited alterations in specific components of the auditory evoked potential in the infralimbic cortex, the core of the nucleus accumbens, mediodorsal thalamic nucleus, and ventral hippocampus in the left hemisphere only. DBS was effective in reversing these evoked deficits in the infralimbic cortex and the mediodorsal thalamic nucleus of MAM-treated rats to levels similar to those observed in control animals. In contrast stimulation did not alter evoked potentials in control rats. No deficits or stimulation-induced alterations were observed in the prelimbic and orbitofrontal cortices, the shell of the nucleus accumbens or ventral tegmental area. These data indicate a normalization of deficits in generating auditory evoked potentials induced by a developmental disruption by acute high frequency, electrical stimulation of the ventral hippocampus. PMID:23269227
Hearing Nano-Structures: A Case Study in Timbral Sonification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schedel, M.; Yager, K.
2012-06-18
We explore the sonification of x-ray scattering data, which are two-dimensional arrays of intensity whose meaning is obscure and non-intuitive. Direct mapping of the experimental data into sound is found to produce timbral sonifications that, while sacrificing conventional aesthetic appeal, provide a rich auditory landscape for exploration. We discuss the optimization of sonification variables, and speculate on potential real-world applications. We have presented a case study of sonifying x-ray scattering data. Direct mapping of the two-dimensional intensity values of a scattering dataset into the two-dimensional matrix of a sonogram is a natural and information-preserving operation that creates rich sounds. Our work supports the notion that many problems in understanding rather abstract scientific datasets can be ameliorated by adding the auditory modality of sonification. We further emphasize that sonification need not be limited to time-series data: any data matrix is amenable. Timbral sonification is less obviously aesthetic than tonal sonifications, which generate melody, harmony, or rhythm. However, these musical sonifications necessarily sacrifice information content for beauty. Timbral sonification is useful because the entire dataset is represented. Non-musicians can understand the data through the overall color of the sound; audio experts can extract more detailed insight by studying all the features of the sound.
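A minimal sketch of the direct-mapping idea: treat each column of the 2D intensity array as a short-time magnitude spectrum and resynthesize audio by overlap-adding inverse FFTs. The frame length, hop size, and random-phase choice are assumptions, not the authors' parameters:

```python
import numpy as np

def timbral_sonify(intensity, frame_len=1024, hop=256, rng=None):
    """Directly map a 2D intensity array (freq x time) into audio:
    each column becomes a short-time magnitude spectrum, resynthesized
    by windowed overlap-add of inverse real FFTs with random phase."""
    rng = np.random.default_rng(rng)
    n_freq, n_cols = intensity.shape
    bins = frame_len // 2 + 1
    # Resample each column onto the rFFT bin grid.
    spec = np.array([np.interp(np.linspace(0, n_freq - 1, bins),
                               np.arange(n_freq), col)
                     for col in intensity.T])           # (n_cols, bins)
    audio = np.zeros(hop * (n_cols - 1) + frame_len)
    window = np.hanning(frame_len)
    for i, mag in enumerate(spec):
        phase = np.exp(1j * rng.uniform(0, 2 * np.pi, bins))
        frame = np.fft.irfft(mag * phase, n=frame_len)
        audio[i * hop:i * hop + frame_len] += window * frame
    peak = np.max(np.abs(audio))
    return audio / peak if peak > 0 else audio
```

Because every intensity value shapes the spectrum of some frame, the mapping is information-preserving in the sense the abstract describes: nothing is summarized away before it reaches the ear.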
Corvin, Jaime A.; DeBate, Rita; Wolfe-Quintero, Kate; Petersen, Donna J.
2017-01-01
In the twenty-first century, the dynamics of health and health care are changing, necessitating a commitment to revising traditional public health curricula to better meet present-day challenges. This article describes how the College of Public Health at the University of South Florida utilized the Intervention Mapping framework to translate revised core competencies into an integrated, theory-driven core curriculum to meet the training needs of the twenty-first century public health scholar and practitioner. This process resulted in the development of four sequenced courses: History and Systems of Public Health and Population Assessment I delivered in the first semester and Population Assessment II and Translation to Practice delivered in the second semester. While the transformation process, moving from traditional public health core content to an integrated and innovative curriculum, is a challenging and daunting task, Intervention Mapping provides the ideal framework for guiding this process. Intervention Mapping walks the curriculum developers from the broad goals and objectives to the fine details of a lesson plan. Throughout this process, critical lessons were learned, including the importance of being open to new ideologies and frameworks and the critical need to involve key stakeholders in every step of the decision-making process to ensure the sustainability of the resulting integrated and theory-based curriculum. Ultimately, as a stronger curriculum emerged, the developers and instructors themselves were changed, fostering a stronger public health workforce from within. PMID:29164094
Sound Stabilizes Locomotor-Respiratory Coupling and Reduces Energy Cost
Hoffmann, Charles P.; Torregrosa, Gérald; Bardy, Benoît G.
2012-01-01
A natural synchronization between locomotor and respiratory systems is known to exist for various species and various forms of locomotion. This Locomotor-Respiratory Coupling (LRC) is fundamental for the energy transfer between the two subsystems during long-duration exercise and originates from mechanical and neurological interactions. Different methodologies have been used to compute LRC, giving rise to various and often diverging results in terms of synchronization, (de-)stabilization via information, and associated energy cost. In this article, the theory of nonlinear coupled oscillators was adopted to characterize LRC through the sine circle map model, and tested in the context of cycling. Our specific focus was the sound-induced stabilization of LRC and its associated change in energy consumption. In our experimental study, participants were instructed during a cycling exercise to synchronize either their respiration or their pedaling rate with an external auditory stimulus whose rhythm corresponded to their individual preferential breathing or cycling frequencies. Results showed a significant reduction in energy expenditure with auditory stimulation, accompanied by a stabilization of LRC. The sound-induced effect was asymmetrical, with a better stabilizing influence of the metronome on the locomotor system than on the respiratory system. A modification of the respiratory frequency was indeed observed when participants cycled in synchrony with the tone, leading to a transition toward more stable frequency ratios as predicted by the sine circle map. In addition to the classical mechanical and neurological origins of LRC, here we demonstrated using the sine circle map model that information plays an important modulatory role in the synchronization, and has global energetic consequences. PMID:23028849
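The sine circle map invoked here to model LRC frequency ratios has a standard form, θₙ₊₁ = θₙ + Ω − (K/2π)·sin(2πθₙ) mod 1, whose average rotation (winding) number locks onto rational ratios inside Arnold tongues. A short generic sketch (parameter names are not taken from the paper):

```python
import numpy as np

def sine_circle_map(theta, omega, k):
    """One iteration of the sine circle map."""
    return (theta + omega - k / (2 * np.pi) * np.sin(2 * np.pi * theta)) % 1.0

def winding_number(omega, k, n_transient=500, n_iter=2000, theta0=0.0):
    """Estimate the average rotation number; for k > 0 it locks onto
    rational frequency ratios (mode-locking / Arnold tongues)."""
    theta, total = theta0, 0.0
    for i in range(n_transient + n_iter):
        step = omega - k / (2 * np.pi) * np.sin(2 * np.pi * theta)
        theta = (theta + step) % 1.0
        if i >= n_transient:
            total += step
    return total / n_iter
```

In the LRC reading, the winding number plays the role of the breathing-to-pedaling frequency ratio, and mode-locked plateaus correspond to the stable coupling ratios the metronome promotes.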
Role of semantic paradigms for optimization of language mapping in clinical FMRI studies.
Zacà, D; Jarso, S; Pillai, J J
2013-10-01
The optimal paradigm choice for language mapping in clinical fMRI studies is challenging due to the variability in activation among different paradigms, the contribution to activation of cognitive processes other than language, and the difficulties in monitoring patient performance. In this study, we compared language localization and lateralization between 2 commonly used clinical language paradigms and 3 newly designed dual-choice semantic paradigms to define a streamlined and adequate language-mapping protocol. Twelve healthy volunteers performed 5 language paradigms: Silent Word Generation, Sentence Completion, Visual Antonym Pair, Auditory Antonym Pair, and Noun-Verb Association. Group analysis was performed to assess statistically significant differences in fMRI percentage signal change and lateralization index among these paradigms in 5 ROIs: inferior frontal gyrus, superior frontal gyrus, middle frontal gyrus for expressive language activation, middle temporal gyrus, and superior temporal gyrus for receptive language activation. In the expressive ROIs, Silent Word Generation was the most robust and best lateralizing paradigm (greater percentage signal change and lateralization index than semantic paradigms at P < .01 and P < .05 levels, respectively). In the receptive region of interest, Sentence Completion and Noun-Verb Association were the most robust activators (greater percentage signal change than other paradigms, P < .01). All except Auditory Antonym Pair were good lateralizing tasks (the lateralization index was significantly lower than other paradigms, P < .05). The combination of Silent Word Generation and ≥1 visual semantic paradigm, such as Sentence Completion and Noun-Verb Association, is adequate to determine language localization and lateralization; Noun-Verb Association has the additional advantage of objective monitoring of patient performance.
Thalamic input to auditory cortex is locally heterogeneous but globally tonotopic
Vasquez-Lopez, Sebastian A; Weissenberger, Yves; Lohse, Michael; Keating, Peter; King, Andrew J
2017-01-01
Topographic representation of the receptor surface is a fundamental feature of sensory cortical organization. This is imparted by the thalamus, which relays information from the periphery to the cortex. To better understand the rules governing thalamocortical connectivity and the origin of cortical maps, we used in vivo two-photon calcium imaging to characterize the properties of thalamic axons innervating different layers of mouse auditory cortex. Although tonotopically organized at a global level, we found that the frequency selectivity of individual thalamocortical axons is surprisingly heterogeneous, even in layers 3b/4 of the primary cortical areas, where the thalamic input is dominated by the lemniscal projection. We also show that thalamocortical input to layer 1 includes collaterals from axons innervating layers 3b/4 and is largely in register with the main input targeting those layers. Such locally varied thalamocortical projections may be useful in enabling rapid contextual modulation of cortical frequency representations. PMID:28891466
Transcortical sensory aphasia: revisited and revised.
Boatman, D; Gordon, B; Hart, J; Selnes, O; Miglioretti, D; Lenz, F
2000-08-01
Transcortical sensory aphasia (TSA) is characterized by impaired auditory comprehension with intact repetition and fluent speech. We induced TSA transiently by electrical interference during routine cortical function mapping in six adult seizure patients. For each patient, TSA was associated with multiple posterior cortical sites, including the posterior superior and middle temporal gyri, in classical Wernicke's area. A number of TSA sites were immediately adjacent to sites where Wernicke's aphasia was elicited in the same patients. Phonological decoding of speech sounds was assessed by auditory syllable discrimination and found to be intact at all sites where TSA was induced. At a subset of electrode sites where the pattern of language deficits otherwise resembled TSA, naming and word reading remained intact. Language lateralization testing by intracarotid amobarbital injection showed no evidence of independent right hemisphere language. These results suggest that TSA may result from a one-way disruption between left hemisphere phonology and lexical-semantic processing.
Low power adder based auditory filter architecture.
Rahiman, P F Khaleelur; Jayanthi, V S
2014-01-01
Cochlear devices are battery powered and should possess a long working life to avoid replacement of devices at regular intervals of years. Hence, devices with low power consumption are required. Cochlear devices contain numerous filters, each responsible for frequency-variant signals, which helps in identifying speech signals of different audible ranges. In this paper, a multiplierless lookup table (LUT) based auditory filter is implemented. Power-aware adder architectures are utilized to add the output samples of the LUT, available at every clock cycle. The design is developed and modeled using Verilog HDL, simulated using the Mentor Graphics ModelSim simulator, and synthesized using the Synopsys Design Compiler tool. The design was mapped to the TSMC 65 nm technological node. The standard ASIC design methodology has been adopted to carry out the power analysis. The proposed FIR filter architecture reduced leakage power by 15% and increased performance by 2.76%.
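A common way to realize a multiplierless LUT-based FIR is distributed arithmetic: precompute partial sums of the coefficients in a LUT, then accumulate shifted lookups per input bit-plane. This Python sketch mirrors that idea for unsigned integer inputs; it is a generic illustration of the technique, not the paper's Verilog design:

```python
from itertools import product

def build_da_lut(coeffs):
    """Distributed-arithmetic LUT: entry at index b (a bit-vector over the
    taps) holds the sum of coefficients whose input bit is set, so runtime
    filtering needs only LUT lookups, shifts, and adds -- no multipliers."""
    n = len(coeffs)
    return [sum(c for c, bit in zip(coeffs, bits) if bit)
            for bits in product([0, 1], repeat=n)]

def da_fir(samples, coeffs, width=8):
    """Filter unsigned `width`-bit integer samples with a DA LUT FIR."""
    lut = build_da_lut(coeffs)
    n = len(coeffs)
    out, hist = [], [0] * n
    for x in samples:
        hist = [x] + hist[:-1]          # tap delay line
        acc = 0
        for b in range(width):          # one LUT access per input bit-plane
            idx = 0
            for h in hist:              # gather bit b of each delayed sample
                idx = (idx << 1) | ((h >> b) & 1)
            acc += lut[idx] << b
        out.append(acc)
    return out
```

The output is bit-exactly the direct-form convolution, but every multiply has been traded for a LUT access plus shift-add, which is the property that makes the hardware version power-friendly.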
When semantics aids phonology: A processing advantage for iconic word forms in aphasia.
Meteyard, Lotte; Stoppard, Emily; Snudden, Dee; Cappa, Stefano F; Vigliocco, Gabriella
2015-09-01
Iconicity is the non-arbitrary relation between properties of a phonological form and semantic content (e.g. "moo", "splash"). It is a common feature of both spoken and signed languages, and recent evidence shows that iconic forms confer an advantage during word learning. We explored whether iconic forms conferred a processing advantage for 13 individuals with aphasia following left-hemisphere stroke. Iconic and control words were compared in four different tasks: repetition, reading aloud, auditory lexical decision and visual lexical decision. An advantage for iconic words was seen for some individuals in all tasks, with consistent group effects emerging in reading aloud and auditory lexical decision. Both these tasks rely on mapping between semantics and phonology. We conclude that iconicity aids spoken word processing for individuals with aphasia. This advantage is due to a stronger connection between semantic information and phonological forms. Copyright © 2015 Elsevier Ltd. All rights reserved.
Identifying musical pieces from fMRI data using encoding and decoding models.
Hoefle, Sebastian; Engel, Annerose; Basilio, Rodrigo; Alluri, Vinoo; Toiviainen, Petri; Cagy, Maurício; Moll, Jorge
2018-02-02
Encoding models can reveal and decode neural representations in the visual and semantic domains. However, a thorough understanding of how distributed information in auditory cortices and temporal evolution of music contribute to model performance is still lacking in the musical domain. We measured fMRI responses during naturalistic music listening and constructed a two-stage approach that first mapped musical features in auditory cortices and then decoded novel musical pieces. We then probed the influence of stimulus duration (number of time points) and spatial extent (number of voxels) on decoding accuracy. Our approach revealed a linear increase in accuracy with duration and a point of optimal model performance for the spatial extent. We further showed that Shannon entropy is a driving factor, boosting accuracy up to 95% for music with the highest information content. These findings provide key insights for future decoding and reconstruction algorithms and open new avenues for possible clinical applications.
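The two-stage approach (fit an encoding model from stimulus features to voxel responses, then decode by identifying which candidate stimulus best predicts the observed pattern) can be sketched with ridge regression. Everything below is a toy illustration under assumed names, not the authors' pipeline:

```python
import numpy as np

def fit_encoding(features, responses, alpha=1.0):
    """Stage 1: ridge-regression encoding model mapping stimulus
    features (n_samples x n_features) to voxel responses."""
    f, r = np.asarray(features), np.asarray(responses)
    return np.linalg.solve(f.T @ f + alpha * np.eye(f.shape[1]), f.T @ r)

def identify(candidate_features, observed, w):
    """Stage 2: decode by identification -- pick the candidate stimulus
    whose predicted response pattern correlates best with the observed."""
    preds = candidate_features @ w
    scores = [np.corrcoef(p, observed)[0, 1] for p in preds]
    return int(np.argmax(scores))
```

Identification accuracy then grows with the number of time points (more rows to fit) up to the spatial-extent optimum the abstract reports.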
Characterization of active hair-bundle motility by a mechanical-load clamp
NASA Astrophysics Data System (ADS)
Salvi, Joshua D.; Maoiléidigh, Dáibhid Ó.; Fabella, Brian A.; Tobin, Mélanie; Hudspeth, A. J.
2015-12-01
Active hair-bundle motility endows hair cells with several traits that augment auditory stimuli. The activity of a hair bundle might be controlled by adjusting its mechanical properties. Indeed, the mechanical properties of bundles vary between different organisms and along the tonotopic axis of a single auditory organ. Motivated by these biological differences and a dynamical model of hair-bundle motility, we explore how adjusting the mass, drag, stiffness, and offset force applied to a bundle control its dynamics and response to external perturbations. Utilizing a mechanical-load clamp, we systematically mapped the two-dimensional state diagram of a hair bundle. The clamp system used a real-time processor to tightly control each of the virtual mechanical elements. Increasing the stiffness of a hair bundle advances its operating point from a spontaneously oscillating regime into a quiescent regime. As predicted by a dynamical model of hair-bundle mechanics, this boundary constitutes a Hopf bifurcation.
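The Hopf bifurcation separating the oscillating and quiescent regimes can be illustrated with the supercritical normal form r' = μr − r³, where μ plays the role of the control parameter (e.g., decreasing as stiffness is added). This is a generic sketch of the bifurcation, not the paper's hair-bundle model:

```python
def hopf_amplitude(mu, r0=0.1, dt=0.01, steps=20000):
    """Integrate the radial Hopf normal form r' = mu*r - r**3 (Euler).
    For mu > 0 the amplitude settles on the limit cycle sqrt(mu)
    (spontaneous oscillation); for mu <= 0 it decays to the quiescent
    state r = 0."""
    r = r0
    for _ in range(steps):
        r += dt * (mu * r - r ** 3)
    return r
```

Crossing μ = 0 from above reproduces, in miniature, what the clamp experiment does by raising the virtual stiffness: the spontaneous oscillation shrinks continuously to zero at the bifurcation boundary.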
NASA Astrophysics Data System (ADS)
Simonnet, Mathieu; Jacobson, Dan; Vieilledent, Stephane; Tisseau, Jacques
Navigating consists of coordinating egocentric and allocentric spatial frames of reference. Virtual environments have afforded researchers in the spatial community tools to investigate the learning of space. The issue of the transfer between virtual and real situations is not trivial. A central question is the role of frames of reference in mediating spatial knowledge transfer to external surroundings, as is the effect of different sensory modalities accessed in simulated and real worlds. This challenges the capacity of blind people to use virtual reality to explore a scene without graphics. The present experiment involves a haptic and auditory maritime virtual environment. In triangulation tasks, we measured systematic errors, and preliminary results show an ability to learn configurational knowledge and to navigate through it without vision. Subjects appeared to take advantage of getting lost in an egocentric "haptic" view in the virtual environment to improve performance in the real environment.
Sonification of optical coherence tomography data and images
Ahmad, Adeel; Adie, Steven G.; Wang, Morgan; Boppart, Stephen A.
2010-01-01
Sonification is the process of representing data as non-speech audio signals. In this manuscript, we describe the auditory presentation of OCT data and images. OCT acquisition rates frequently exceed our ability to visually analyze image-based data, and multi-sensory input may therefore facilitate rapid interpretation. This conversion will be especially valuable in time-sensitive surgical or diagnostic procedures. In these scenarios, auditory feedback can complement visual data without requiring the surgeon to constantly monitor the screen, or provide additional feedback in non-imaging procedures such as guided needle biopsies which use only axial-scan data. In this paper we present techniques to translate OCT data and images into sound based on the spatial and spatial frequency properties of the OCT data. Results obtained from parameter-mapped sonification of human adipose and tumor tissues are presented, indicating that audio feedback of OCT data may be useful for the interpretation of OCT images. PMID:20588846
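Parameter-mapped sonification, in contrast to direct audification, maps derived scalar features to perceptual parameters. A minimal sketch mapping, say, per-frame mean intensity to pitch follows; all parameter choices (frequency range, tone duration, sample rate) are assumptions, not those of the paper:

```python
import numpy as np

def parameter_map_sonify(features, dur=0.1, f_lo=220.0, f_hi=1760.0,
                         sample_rate=8000):
    """Parameter-mapped sonification: each scalar feature value (e.g. the
    mean intensity of an OCT frame) controls the pitch of a short tone."""
    features = np.asarray(features, dtype=float)
    lo, hi = features.min(), features.max()
    norm = (features - lo) / (hi - lo) if hi > lo else np.zeros_like(features)
    freqs = f_lo * (f_hi / f_lo) ** norm        # log-spaced pitch mapping
    t = np.arange(int(dur * sample_rate)) / sample_rate
    env = np.hanning(t.size)                    # click-free attack/decay
    return np.concatenate([env * np.sin(2 * np.pi * f * t) for f in freqs])
```

The log-spaced pitch mapping keeps equal feature steps perceptually comparable, which matters when the listener, not a screen, is doing the discrimination.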
Long-Term Simultaneous Localization and Mapping in Dynamic Environments
2015-01-01
One of the core competencies required for autonomous mobile robotics is the ability to use sensors to perceive the environment. From this noisy sensor data, the … and mapping (SLAM), is a prerequisite for almost all higher-level autonomous behavior in mobile robotics. By associating the robot's sensory … distributed stochastic neighbor embedding
Covariance Recovery from a Square Root Information Matrix for Data Association
2009-07-02
Data association is one of the core problems of simultaneous localization and mapping (SLAM), and it requires knowledge about the uncertainties of the...back-substitution as well as efficient access to marginal covariances, which is described next. 2.2. Recovering Marginal Covariances Knowledge of the
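The snippet above refers to recovering marginal covariances from a square-root information factor. A minimal numpy sketch of the underlying linear algebra, assuming a dense toy problem rather than the paper's sparse SLAM setting:

```python
import numpy as np

# Toy measurement Jacobian (5 measurements, 3 states); values are illustrative.
rng = np.random.default_rng(0)
J = rng.standard_normal((5, 3))

# Information matrix Lambda and its square-root factor R, with Lambda = R^T R.
Lam = J.T @ J
R = np.linalg.cholesky(Lam).T          # upper-triangular square-root factor

# Recover the full covariance from R by two triangular solves:
#   R^T Y = I,  then  R Sigma = Y   =>   Sigma = Lambda^{-1}.
# (A real SLAM back end exploits sparsity and back-substitutes only the
# entries it needs; generic dense solves are used here for brevity.)
Y = np.linalg.solve(R.T, np.eye(3))
Sigma = np.linalg.solve(R, Y)

# The marginal covariance of state i is the (i, i) entry/block of Sigma.
```

The point of working from R rather than inverting Lambda directly is that the factor is already available from the least-squares solve, so selected covariance entries come almost for free.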
Exploring cosmic origins with CORE: Gravitational lensing of the CMB
NASA Astrophysics Data System (ADS)
Challinor, A.; Allison, R.; Carron, J.; Errard, J.; Feeney, S.; Kitching, T.; Lesgourgues, J.; Lewis, A.; Zubeldía, Í.; Achucarro, A.; Ade, P.; Ashdown, M.; Ballardini, M.; Banday, A. J.; Banerji, R.; Bartlett, J.; Bartolo, N.; Basak, S.; Baumann, D.; Bersanelli, M.; Bonaldi, A.; Bonato, M.; Borrill, J.; Bouchet, F.; Boulanger, F.; Brinckmann, T.; Bucher, M.; Burigana, C.; Buzzelli, A.; Cai, Z.-Y.; Calvo, M.; Carvalho, C.-S.; Castellano, G.; Chluba, J.; Clesse, S.; Colantoni, I.; Coppolecchia, A.; Crook, M.; d'Alessandro, G.; de Bernardis, P.; de Gasperis, G.; De Zotti, G.; Delabrouille, J.; Di Valentino, E.; Diego, J.-M.; Fernandez-Cobos, R.; Ferraro, S.; Finelli, F.; Forastieri, F.; Galli, S.; Genova-Santos, R.; Gerbino, M.; González-Nuevo, J.; Grandis, S.; Greenslade, J.; Hagstotz, S.; Hanany, S.; Handley, W.; Hernandez-Monteagudo, C.; Hervías-Caimapo, C.; Hills, M.; Hivon, E.; Kiiveri, K.; Kisner, T.; Kunz, M.; Kurki-Suonio, H.; Lamagna, L.; Lasenby, A.; Lattanzi, M.; Liguori, M.; Lindholm, V.; López-Caniego, M.; Luzzi, G.; Maffei, B.; Martinez-González, E.; Martins, C. J. A. P.; Masi, S.; Matarrese, S.; McCarthy, D.; Melchiorri, A.; Melin, J.-B.; Molinari, D.; Monfardini, A.; Natoli, P.; Negrello, M.; Notari, A.; Paiella, A.; Paoletti, D.; Patanchon, G.; Piat, M.; Pisano, G.; Polastri, L.; Polenta, G.; Pollo, A.; Poulin, V.; Quartin, M.; Remazeilles, M.; Roman, M.; Rubino-Martin, J.-A.; Salvati, L.; Tartari, A.; Tomasi, M.; Tramonte, D.; Trappe, N.; Trombetti, T.; Tucker, C.; Valiviita, J.; Van de Weijgaert, R.; van Tent, B.; Vennin, V.; Vielva, P.; Vittorio, N.; Young, K.; Zannoni, M.
2018-04-01
Lensing of the cosmic microwave background (CMB) is now a well-developed probe of the clustering of the large-scale mass distribution over a broad range of redshifts. By exploiting the non-Gaussian imprints of lensing in the polarization of the CMB, the CORE mission will allow production of a clean map of the lensing deflections over nearly the full-sky. The number of high-S/N modes in this map will exceed current CMB lensing maps by a factor of 40, and the measurement will be sample-variance limited on all scales where linear theory is valid. Here, we summarise this mission product and discuss the science that will follow from its power spectrum and the cross-correlation with other clustering data. For example, the summed mass of neutrinos will be determined to an accuracy of 17 meV combining CORE lensing and CMB two-point information with contemporaneous measurements of the baryon acoustic oscillation feature in the clustering of galaxies, three times smaller than the minimum total mass allowed by neutrino oscillation measurements. Lensing has applications across many other science goals of CORE, including the search for B-mode polarization from primordial gravitational waves. Here, lens-induced B-modes will dominate over instrument noise, limiting constraints on the power spectrum amplitude of primordial gravitational waves. With lensing reconstructed by CORE, one can "delens" the observed polarization internally, reducing the lensing B-mode power by 60 %. This can be improved to 70 % by combining lensing and measurements of the cosmic infrared background from CORE, leading to an improvement of a factor of 2.5 in the error on the amplitude of primordial gravitational waves compared to no delensing (in the null hypothesis of no primordial B-modes). Lensing measurements from CORE will allow calibration of the halo masses of the tens of thousands of galaxy clusters that it will find, with constraints dominated by the clean polarization-based estimators. 
The 19 frequency channels proposed for CORE will allow accurate removal of Galactic emission from CMB maps. We present initial findings that show that residual Galactic foreground contamination will not be a significant source of bias for lensing power spectrum measurements with CORE.
Cartographic sign as a core of multimedia map prepared by non-cartographers in free map services
NASA Astrophysics Data System (ADS)
Medyńska-Gulij, Beata
2014-06-01
The fundamental importance of cartographic signs in traditional maps is unquestionable, but in the case of multimedia maps their key function is not so obvious. Our aim was to explore the problem of cartographic signs as the core of multimedia maps prepared by non-cartographers in free on-line map services. First, pre-established rules for multimedia map designers were prepared, emphasizing the key role of cartographic signs and the habits of Web users. The comparison of projects completed by a group of designers led us to the general conclusion that a cartographic sign should determine the design of a multimedia map in on-line map services. Despite the selection of five different map topics, one may list general characteristics shared by maps with a cartographic sign at their core.
21st Century Skills Map: English
ERIC Educational Resources Information Center
Partnership for 21st Century Skills, 2008
2008-01-01
This 21st Century Skills Map is the result of hundreds of hours of research, development and feedback from educators and business leaders across the nation. The Partnership for 21st Century Skills has issued this map for the core subject of English.
21st Century Skills Map: Science
ERIC Educational Resources Information Center
Partnership for 21st Century Skills, 2008
2008-01-01
This 21st Century Skills Map is the result of hundreds of hours of research, development and feedback from educators and business leaders across the nation. The Partnership for 21st Century Skills has issued this map for the core subject of Science.
21st Century Skills Map: Geography
ERIC Educational Resources Information Center
Partnership for 21st Century Skills, 2009
2009-01-01
This 21st Century Skills Map is the result of hundreds of hours of research, development and feedback from educators and business leaders across the nation. The Partnership for 21st Century Skills has issued this map for the core subject of Geography.
Azar, Ali; Piccinelli, Chiara; Brown, Helen; Headon, Denis; Cheeseman, Michael
2016-01-01
Hypohidrotic ectodermal dysplasia (HED) results from mutation of the EDA, EDAR or EDARADD genes and is characterized by reduced or absent eccrine sweat glands, hair follicles and teeth, and defective formation of salivary, mammary and craniofacial glands. Mouse models of HED also carry Eda, Edar or Edaradd mutations and have defects that map to the same structures. Patients with HED have ear, nose and throat disease, but this has not been investigated in mice bearing comparable genetic mutations. We report that otitis media, rhinitis and nasopharyngitis occur at high frequency in Eda and Edar mutant mice and explore the pathogenic mechanisms related to glandular function and to microbial and immune parameters in these lines. Nasopharynx auditory tube glands fail to develop in HED mutant mice, and the functional implications include loss of lysozyme secretion, reduced mucociliary clearance and overgrowth of nasal commensal bacteria accompanied by neutrophil exudation. Heavy nasopharynx foreign body load and loss of gland protection alter the auditory tube gating function, and the auditory tubes can become pathologically dilated. Accumulation of large foreign body particles in the bulla stimulates granuloma formation. Analysis of immune cell populations and myeloid cell function shows no evidence of overt immune deficiency in HED mutant mice. Our findings using HED mutant mice as a model for the human condition support the idea that ear and nose pathology in HED patients arises from nasal and nasopharyngeal gland deficits and reduced mucociliary clearance, and that impaired auditory tube gating function underlies the pathological sequelae in the bulla. PMID:27378689
Zong, Liang; Guan, Jing; Ealy, Megan; Zhang, Qiujing; Wang, Dayong; Wang, Hongyang; Zhao, Yali; Shen, Zhirong; Campbell, Colleen A; Wang, Fengchao; Yang, Ju; Sun, Wei; Lan, Lan; Ding, Dalian; Xie, Linyi; Qi, Yue; Lou, Xin; Huang, Xusheng; Shi, Qiang; Chang, Suhua; Xiong, Wenping; Yin, Zifang; Yu, Ning; Zhao, Hui; Wang, Jun; Wang, Jing; Salvi, Richard J; Petit, Christine; Smith, Richard J H; Wang, Qiuju
2015-01-01
Background Auditory neuropathy spectrum disorder (ANSD) is a form of hearing loss in which auditory signal transmission from the inner ear to the auditory nerve and brain stem is distorted, giving rise to speech perception difficulties beyond those expected for the observed degree of hearing loss. For many cases of ANSD, the underlying molecular pathology and the site of lesion remain unclear. The X-linked form of the condition, AUNX1, has been mapped to Xq23-q27.3, although the causative gene has yet to be identified. Methods We performed whole-exome sequencing on DNA samples from the AUNX1 family and another small, phenotypically similar but unrelated ANSD family. Results We identified two missense mutations in AIFM1 in these families: c.1352G>A (p.R451Q) in the AUNX1 family and c.1030C>T (p.L344F) in the second ANSD family. Mutation screening in a large cohort of 3 additional unrelated families and 93 sporadic cases with ANSD identified 9 more missense mutations in AIFM1. Bioinformatics analysis and expression studies support this gene as being causative of ANSD. Conclusions Variants in the AIFM1 gene are a common cause of familial and sporadic ANSD and provide insight into the expanded spectrum of AIFM1-associated diseases. The finding of cochlear nerve hypoplasia in some patients with AIFM1-related ANSD implies that MRI may be of value in localising the site of lesion and suggests that cochlear implantation in these patients may have limited success. PMID:25986071
Lawo, Vera; Fels, Janina; Oberem, Josefa; Koch, Iring
2014-10-01
Using an auditory variant of task switching, we examined the ability to intentionally switch attention in a dichotic-listening task. In our study, participants responded selectively to one of two simultaneously presented auditory number words (spoken by a female and a male, one for each ear) by categorizing its numerical magnitude. The mapping of gender (female vs. male) and ear (left vs. right) was unpredictable. The to-be-attended feature for gender or ear, respectively, was indicated by a visual selection cue prior to auditory stimulus onset. In Experiment 1, explicitly cued switches of the relevant feature dimension (e.g., from gender to ear) and switches of the relevant feature within a dimension (e.g., from male to female) occurred in an unpredictable manner. We found large performance costs when the relevant feature switched, but switches of the relevant feature dimension incurred only small additional costs. The feature-switch costs were larger in ear-relevant than in gender-relevant trials. In Experiment 2, we replicated these findings using a simplified design (i.e., only within-dimension switches with blocked dimensions). In Experiment 3, we examined preparation effects by manipulating the cueing interval and found a preparation benefit only when ear was cued. Together, our data suggest that a large part of attentional switch costs arises from reconfiguration at the level of relevant auditory features (e.g., left vs. right) rather than feature dimensions (ear vs. gender). Additionally, our findings suggest that ear-based target selection benefits more from preparation time (i.e., time to direct attention to one ear) than gender-based target selection.
Musical experience sharpens human cochlear tuning.
Bidelman, Gavin M; Nelms, Caitlin; Bhagat, Shaum P
2016-05-01
The mammalian cochlea functions as a filter bank that performs a spectral, Fourier-like decomposition of the acoustic signal. While tuning can be compromised (e.g., broadened with hearing impairment), whether human cochlear frequency resolution can be sharpened through experiential factors (e.g., training or learning) has not yet been established. Previous studies have demonstrated sharper psychophysical tuning curves in trained musicians compared to nonmusicians, implying superior peripheral tuning. However, these findings are based on perceptual masking paradigms, and reflect engagement of the entire auditory system rather than cochlear tuning per se. Here, by directly mapping physiological tuning curves from stimulus frequency otoacoustic emissions (SFOAEs), cochlear emitted sounds, we show that estimates of human cochlear tuning in a high-frequency cochlear region (4 kHz) are sharpened (by a factor of 1.5×) in musicians and improve with the number of years of their auditory training. These findings were corroborated by measurements of psychophysical tuning curves (PTCs) derived via simultaneous masking, which similarly showed sharper tuning in musicians. Comparisons between SFOAE and PTC curves revealed closer correspondence between physiological and behavioral curves in musicians, indicating that tuning is also more consistent between different levels of auditory processing in trained ears. Our findings demonstrate an experience-dependent enhancement in the resolving power of the cochlear sensory epithelium and the spectral resolution of human hearing, and provide a peripheral account for the auditory perceptual benefits observed in musicians. Both local and feedback (e.g., medial olivocochlear efferent) mechanisms are discussed as potential bases for experience-dependent tuning. Copyright © 2016 Elsevier B.V. All rights reserved.
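Tuning-curve sharpness of the kind compared here is commonly summarized by a Q10 value: characteristic frequency divided by the bandwidth measured 10 dB above the tip. A toy Python sketch of that metric, using made-up curve values and a coarse frequency grid rather than the interpolated curves a real analysis would use:

```python
import numpy as np

def q10(freqs_hz, thresholds_db):
    """Q10 sharpness of a tuning curve: CF / bandwidth at 10 dB above
    the tip. Coarse-grid version for illustration only; real analyses
    interpolate the curve to find the 10-dB crossing points."""
    freqs_hz = np.asarray(freqs_hz, float)
    thresholds_db = np.asarray(thresholds_db, float)
    tip = thresholds_db.argmin()
    cf = freqs_hz[tip]                       # characteristic frequency
    crit = thresholds_db[tip] + 10.0
    # Frequencies where the curve is within 10 dB of the tip.
    inside = freqs_hz[thresholds_db <= crit]
    bw = inside.max() - inside.min()
    return cf / bw

# A symmetric V-shaped curve with its tip at 4 kHz (made-up values).
f = np.array([3000, 3500, 3800, 4000, 4200, 4500, 5000], float)
thr = np.array([40, 25, 12, 5, 12, 25, 40], float)
```

A 1.5× difference in such a sharpness value between groups, as reported above, corresponds directly to a 1.5× narrower 10-dB bandwidth at the same characteristic frequency.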
Auditory decision aiding in supervisory control of multiple unmanned aerial vehicles.
Donmez, Birsen; Cummings, M L; Graham, Hudson D
2009-10-01
This article is an investigation of the effectiveness of sonifications, which are continuous auditory alerts mapped to the state of a monitored task, in supporting unmanned aerial vehicle (UAV) supervisory control. UAV supervisory control requires monitoring a UAV across multiple tasks (e.g., course maintenance) via a predominantly visual display, which currently is supported with discrete auditory alerts. Sonification has been shown to enhance monitoring performance in domains such as anesthesiology by allowing an operator to immediately determine an entity's (e.g., patient) current and projected states, and is a promising alternative to discrete alerts in UAV control. However, minimal research compares sonification to discrete alerts, and no research assesses the effectiveness of sonification for monitoring multiple entities (e.g., multiple UAVs). The authors conducted an experiment with 39 military personnel, using a simulated setup. Participants controlled single and multiple UAVs and received sonifications or discrete alerts based on UAV course deviations and late target arrivals. Regardless of the number of UAVs supervised, the course deviation sonification resulted in reactions to course deviations that were 1.9 s faster, a 19% enhancement, compared with discrete alerts. However, course deviation sonifications interfered with the effectiveness of discrete late arrival alerts in general and with operator responses to late arrivals when supervising multiple vehicles. Sonifications can outperform discrete alerts when designed to aid operators to predict future states of monitored tasks. However, sonifications may mask other auditory alerts and interfere with other monitoring tasks that require divided attention. This research has implications for supervisory control display design.
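A sonification of the kind described, a continuous signal whose character tracks a monitored state, can be sketched as a simple deviation-to-pitch mapping; the ranges and units below are illustrative assumptions, not the study's actual design:

```python
def deviation_to_pitch(deviation, max_dev=50.0,
                       base_hz=440.0, octaves=1.0):
    """Continuous sonification: map course deviation (e.g., meters off
    track) to pitch, rising smoothly as the deviation grows, so the
    operator can hear both the current and the projected state."""
    frac = min(abs(deviation) / max_dev, 1.0)   # clamp to the mapped range
    return base_hz * 2 ** (octaves * frac)      # exponential pitch scale

deviation_to_pitch(0)    # on course -> 440.0 Hz
deviation_to_pitch(50)   # maximum mapped deviation -> 880.0 Hz
```

Unlike a discrete alert, which fires only once a threshold is crossed, the pitch here drifts upward continuously, letting an operator anticipate a violation before it happens; the masking concern raised in the abstract is the flip side of that always-on signal.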
Amin, Noopur; Gastpar, Michael; Theunissen, Frédéric E.
2013-01-01
Previous research has shown that postnatal exposure to simple, synthetic sounds can affect the sound representation in the auditory cortex, as reflected by changes in the tonotopic map or other relatively simple tuning properties, such as AM tuning. However, the functional implications for neural processing in the generation of ethologically based perception remain unexplored. Here we examined the effects of noise-rearing and social isolation on the neural processing of communication sounds, such as species-specific song, in the primary auditory cortex analog of adult zebra finches. Our electrophysiological recordings reveal that neural tuning to simple frequency-based synthetic sounds is initially established in all the laminae independent of patterned acoustic experience; however, we provide the first evidence that early exposure to patterned sound statistics, such as those found in native sounds, is required for the subsequent emergence of neural selectivity for complex vocalizations, for shaping neural spiking precision in superficial and deep cortical laminae, and for creating efficient neural representations of song and a less redundant ensemble code in all the laminae. Our study also provides the first causal evidence for ‘sparse coding’: when the statistics of the stimuli were changed during rearing, as in noise-rearing, the sparse or optimal representation for species-specific vocalizations disappeared. Taken together, these results imply that layer-specific differential development of the auditory cortex requires patterned acoustic input, and that a specialized and robust sensory representation of complex communication sounds in the auditory cortex requires a rich acoustic and social environment. PMID:23630587
Judging the urgency of non-verbal auditory alarms: a case study.
Arrabito, G Robert; Mondor, Todd; Kent, Kimberley
2004-06-22
When designed correctly, non-verbal auditory alarms can convey different levels of urgency to the aircrew, and thereby permit the operator to establish the appropriate level of priority for addressing the alarmed condition. The conveyed level of urgency of five non-verbal auditory alarms presently used in the Canadian Forces CH-146 Griffon helicopter was investigated. Pilots of the CH-146 Griffon helicopter and non-pilots rated the perceived urgency of the signals using a rating scale. The pilots also ranked the urgency of the alarms in a post-experiment questionnaire to reflect their assessment of the actual situations that trigger the alarms. The results of this investigation revealed that participants' ratings of perceived urgency appear to be based on the acoustic properties of the alarms, which are known to affect the listener's perceived level of urgency. Although for 28% of the pilots the mapping of perceived urgency to the urgency of their perception of the triggering situation was statistically significant for three of the five alarms, the overall data suggest that the triggering situations are not adequately conveyed by the acoustic parameters inherent in the alarms. The pilots' judgement of the triggering situation was intended as a means of evaluating the reliability of the alerting system. These data are discussed with respect to proposed enhancements in alerting systems as they relate to the problem of phase of flight. These results call for more serious consideration of incorporating situational awareness in the design and assignment of auditory alarms in aircraft.
Stelzel, Christine; Schauenburg, Gesche; Rapp, Michael A.; Heinzel, Stephan; Granacher, Urs
2017-01-01
Age-related decline in executive functions and postural control due to degenerative processes in the central nervous system has been related to increased fall risk in old age. Many studies have shown cognitive-postural dual-task interference in old adults, but research on the role of specific executive functions in this context has just begun. In this study, we addressed the question of whether postural control is impaired by the coordination of concurrent response-selection processes, manipulated via the compatibility of input- and output-modality mappings, as compared to impairments related to working-memory load in the comparison of cognitive dual and single tasks. Specifically, we measured total center of pressure (CoP) displacements in healthy female participants aged 19–30 and 66–84 years while they performed different versions of a spatial one-back working-memory task during semi-tandem stance on an unstable surface (i.e., a balance pad) while standing on a force plate. The specific working-memory tasks comprised: (i) modality compatible single tasks (i.e., visual-manual or auditory-vocal tasks), (ii) modality compatible dual tasks (i.e., visual-manual and auditory-vocal tasks), (iii) modality incompatible single tasks (i.e., visual-vocal or auditory-manual tasks), and (iv) modality incompatible dual tasks (i.e., visual-vocal and auditory-manual tasks). In addition, participants performed the same tasks while sitting. As expected from previous research, old adults showed generally impaired performance under high working-memory load (i.e., dual vs. single one-back task). In addition, modality compatibility affected one-back performance in dual-task but not in single-task conditions, with strikingly pronounced impairments in old adults. Notably, the modality incompatible dual task also resulted in a selective increase in total CoP displacements compared to the modality compatible dual task in the old but not in the young participants.
These results suggest that in addition to effects of working-memory load, processes related to simultaneously overcoming special linkages between input- and output modalities interfere with postural control in old but not in young female adults. Our preliminary data provide further evidence for the involvement of cognitive control processes in postural tasks. PMID:28484411
Map Metadata: Essential Elements for Search and Storage
ERIC Educational Resources Information Center
Beamer, Ashley
2009-01-01
Purpose: The purpose of this paper is to develop an understanding of the issues surrounding the cataloguing of maps in archives and libraries. An investigation into appropriate metadata formats, such as MARC21, EAD and Dublin Core with RDF, shows how particular map data can be stored. Mathematical map elements, specifically co-ordinates, are…
Mining e-Learning Domain Concept Map from Academic Articles
ERIC Educational Resources Information Center
Chen, Nian-Shing; Kinshuk; Wei, Chun-Wang; Chen, Hong-Jhe
2008-01-01
Recent researches have demonstrated the importance of concept map and its versatile applications especially in e-Learning. For example, while designing adaptive learning materials, designers need to refer to the concept map of a subject domain. Moreover, concept maps can show the whole picture and core knowledge about a subject domain. Research…
Yin, Pingbo; Mishkin, Mortimer; Sutter, Mitchell; Fritz, Jonathan B.
2008-01-01
To explore the effects of acoustic and behavioral context on neuronal responses in the core of auditory cortex (fields A1 and R), two monkeys were trained on a go/no-go discrimination task in which they learned to respond selectively to a four-note target (S+) melody and withhold response to a variety of other nontarget (S−) sounds. We analyzed evoked activity from 683 units in A1/R of the trained monkeys during task performance and from 125 units in A1/R of two naive monkeys. We characterized two broad classes of neural activity that were modulated by task performance. Class I consisted of tone-sequence–sensitive enhancement and suppression responses. Enhanced or suppressed responses to specific tonal components of the S+ melody were frequently observed in trained monkeys, but enhanced responses were rarely seen in naive monkeys. Both facilitatory and suppressive responses in the trained monkeys showed a temporal pattern different from that observed in naive monkeys. Class II consisted of nonacoustic activity, characterized by a task-related component that correlated with bar release, the behavioral response leading to reward. We observed a significantly higher percentage of both Class I and Class II neurons in field R than in A1. Class I responses may help encode a long-term representation of the behaviorally salient target melody. Class II activity may reflect a variety of nonacoustic influences, such as attention, reward expectancy, somatosensory inputs, and/or motor set and may help link auditory perception and behavioral response. Both types of neuronal activity are likely to contribute to the performance of the auditory task. PMID:18842950
Multisensory speech perception in autism spectrum disorder: From phoneme to whole-word perception.
Stevenson, Ryan A; Baum, Sarah H; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Wallace, Mark T
2017-07-01
Speech perception in noisy environments is boosted when a listener can see the speaker's mouth and integrate the auditory and visual speech information. Autistic children have a diminished capacity to integrate sensory information across modalities, which contributes to core symptoms of autism, such as impairments in social communication. We investigated the abilities of autistic and typically-developing (TD) children to integrate auditory and visual speech stimuli at various signal-to-noise ratios (SNRs). Measurements of both whole-word and phoneme recognition were recorded. At the level of whole-word recognition, autistic children exhibited reduced performance in both the auditory and audiovisual modalities. Importantly, autistic children showed reduced behavioral benefit from multisensory integration with whole-word recognition, specifically at low SNRs. At the level of phoneme recognition, autistic children exhibited reduced performance relative to their TD peers in auditory, visual, and audiovisual modalities. However, and in contrast to their performance at the level of whole-word recognition, both autistic and TD children showed benefits from multisensory integration for phoneme recognition. In accordance with the principle of inverse effectiveness, both groups exhibited greater benefit at low SNRs relative to high SNRs. Thus, while autistic children showed typical multisensory benefits during phoneme recognition, these benefits did not translate to typical multisensory benefit of whole-word recognition in noisy environments. We hypothesize that sensory impairments in autistic children raise the SNR threshold needed to extract meaningful information from a given sensory input, resulting in subsequent failure to exhibit behavioral benefits from additional sensory information at the level of whole-word recognition. Autism Res 2017, 10: 1280-1290.
Impey, Danielle; Baddeley, Ashley; Nelson, Renee; Labelle, Alain; Knott, Verner
2017-11-01
Cognitive impairment has been proposed to be the core feature of schizophrenia (Sz). Transcranial direct current stimulation (tDCS) is a non-invasive form of brain stimulation which can improve cognitive function in healthy participants and in psychiatric patients with cognitive deficits. tDCS has been shown to improve cognition and hallucination symptoms in Sz, a disorder also associated with marked sensory processing deficits. Recent findings in healthy controls demonstrate that anodal tDCS increases auditory deviance detection, as measured by the brain-based event-related potential, mismatch negativity (MMN), which is a putative biomarker of Sz that has been proposed as a target for treatment of Sz cognition. This pilot study conducted a randomized, double-blind assessment of the effects of pre- and post-tDCS on MMN-indexed auditory discrimination in 12 Sz patients, moderated by auditory hallucination (AH) presence, as well as working memory performance. Assessments were conducted in three sessions involving temporal and frontal lobe anodal stimulation (to transiently excite local brain activity), and one control session involving 'sham' stimulation (meaning with the device turned off, i.e., no stimulation). Results demonstrated a trend for pitch MMN amplitude to increase with anodal temporal tDCS, which was significant in a subgroup of Sz individuals with AHs. Anodal frontal tDCS significantly increased WM performance on the 2-back task, which was found to positively correlate with MMN-tDCS effects. The findings contribute to our understanding of tDCS effects for sensory processing deficits and working memory performance in Sz and may have implications for psychiatric disorders with sensory deficits.
The representation of order information in auditory-verbal short-term memory.
Kalm, Kristjan; Norris, Dennis
2014-05-14
Here we investigate how order information is represented in auditory-verbal short-term memory (STM). We used fMRI and a serial recall task to dissociate neural activity patterns representing the phonological properties of the items stored in STM from the patterns representing their order. For this purpose, we analyzed fMRI activity patterns elicited by different item sets and different orderings of those items. These fMRI activity patterns were compared with the predictions made by positional and chaining models of serial order. The positional models encode associations between items and their positions in a sequence, whereas the chaining models encode associations between successive items and retain no position information. We show that a set of brain areas in the postero-dorsal stream of auditory processing stores associations between items and order as predicted by a positional model. The chaining model of order representation generates a different pattern similarity prediction, which was shown to be inconsistent with the fMRI data. Our results thus favor a neural model of order representation that stores item codes, position codes, and the mapping between them. This study provides the first fMRI evidence for a specific model of order representation in the human brain.
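The contrast between positional and chaining accounts can be made concrete with toy similarity measures: positional models count shared (item, position) pairs, while chaining models count shared successive-item pairs regardless of where they occur. A minimal sketch, not the authors' pattern-similarity analysis:

```python
def positional_overlap(seq_a, seq_b):
    """Positional models: similarity = number of items occupying
    the same serial position in both sequences."""
    return sum(a == b for a, b in zip(seq_a, seq_b))

def chaining_overlap(seq_a, seq_b):
    """Chaining models: similarity = number of shared successive-item
    pairs (bigrams), irrespective of their positions."""
    bigrams_a = set(zip(seq_a, seq_a[1:]))
    bigrams_b = set(zip(seq_b, seq_b[1:]))
    return len(bigrams_a & bigrams_b)

# Two orderings of the same items pull the two models apart:
a = ["B", "C", "D", "A"]
b = ["A", "B", "C", "D"]
```

Here a rotation of the list shares no (item, position) pairs with the original but keeps most of its bigrams, so the two model classes predict opposite similarity structure, which is exactly the kind of divergence the fMRI pattern analysis exploits.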
New perspectives on the auditory cortex: learning and memory.
Weinberger, Norman M
2015-01-01
Primary ("early") sensory cortices have been viewed as stimulus analyzers devoid of function in learning, memory, and cognition. However, studies combining sensory neurophysiology and learning protocols have revealed that associative learning systematically modifies the encoding of stimulus dimensions in the primary auditory cortex (A1) to accentuate behaviorally important sounds. This "representational plasticity" (RP) is manifest at different levels. Sensitivity and selectivity to signal tones increase near threshold, tuning above threshold shifts toward the frequency of acoustic signals, and the area of their representation can increase within the tonotopic map of A1. The magnitude of area gain encodes the level of behavioral stimulus importance and serves as a substrate of memory strength. RP has the same characteristics as behavioral memory: it is associative, specific, develops rapidly, consolidates, and can last indefinitely. Pairing a tone with stimulation of the cholinergic nucleus basalis induces RP and implants specific behavioral memory, while directly increasing the representational area of a tone in A1 produces matching behavioral memory. Thus, RP satisfies key criteria for serving as a substrate of auditory memory. The findings suggest a basis for posttraumatic stress disorder in abnormally augmented cortical representations and emphasize the need for a new model of the cerebral cortex. © 2015 Elsevier B.V. All rights reserved.
Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z.; Shamma, Shihab A.; Babadi, Behtash
2015-01-01
The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy. PMID:26436490
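As a rough illustration of decoding a listener's attentional state from a scalar neural feature, here is a two-state forward-backward smoother over simulated data. This is a deliberate simplification: the paper's decoder is a biophysically inspired state-space model fit by EM with MAP estimation, whereas this sketch assumes a fixed two-state HMM with hand-picked emission means, noise level, and transition probabilities.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(1)

# Simulated per-second feature: correlation of the neural reconstruction
# with speaker A minus speaker B (positive when A is attended).
T = 200
true_state = np.zeros(T, dtype=int)
true_state[80:150] = 1                       # listener switches attention
means = np.array([0.6, -0.6])                # assumed emission means
feat = rng.normal(loc=means[true_state], scale=0.5)

# Two-state HMM: state 0 = attend speaker A, state 1 = attend speaker B.
p_stay = 0.98                                # attention is sticky
logA = np.log([[p_stay, 1 - p_stay], [1 - p_stay, p_stay]])

def forward_backward(x):
    """Posterior state probabilities via the forward-backward algorithm."""
    le = -0.5 * ((x[:, None] - means) / 0.5) ** 2     # Gaussian log-likelihoods
    la = np.empty((len(x), 2))
    la[0] = np.log([0.5, 0.5]) + le[0]
    for t in range(1, len(x)):                        # forward pass
        la[t] = le[t] + logsumexp(la[t - 1][:, None] + logA, axis=0)
    lb = np.zeros((len(x), 2))
    for t in range(len(x) - 2, -1, -1):               # backward pass
        lb[t] = logsumexp(logA + le[t + 1] + lb[t + 1], axis=1)
    lp = la + lb
    return np.exp(lp - logsumexp(lp, axis=1, keepdims=True))

post = forward_backward(feat)
decoded = post.argmax(axis=1)
accuracy = (decoded == true_state).mean()
```

The smoothing over time is what buys the seconds-scale temporal resolution with confidence estimates: the posterior probabilities in `post` play the role of the decoder's statistical confidence in the attentional state at each instant.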
Brain Mapping in a Patient with Congenital Blindness – A Case for Multimodal Approaches
Roland, Jarod L.; Hacker, Carl D.; Breshears, Jonathan D.; Gaona, Charles M.; Hogan, R. Edward; Burton, Harold; Corbetta, Maurizio; Leuthardt, Eric C.
2013-01-01
Recent advances in basic neuroscience research across a wide range of methodologies have contributed significantly to our understanding of human cortical electrophysiology and functional brain imaging. Translation of this research into clinical neurosurgery has opened doors for advanced mapping of functionality that previously was prohibitively difficult, if not impossible. Here we present the case of a unique individual with congenital blindness and medically refractory epilepsy who underwent neurosurgical treatment of her seizures. Pre-operative evaluation presented the challenge of accurately and robustly mapping the cerebral cortex for an individual with a high probability of significant cortical re-organization. Additionally, a blind individual has unique priorities in one’s ability to read Braille by touch and sense the environment primarily by sound than the non-vision impaired person. For these reasons we employed additional measures to map sensory, motor, speech, language, and auditory perception by employing a number of cortical electrophysiologic mapping and functional magnetic resonance imaging methods. Our data show promising results in the application of these adjunctive methods in the pre-operative mapping of otherwise difficult to localize, and highly variable, functional cortical areas. PMID:23914170
The influence of gender on auditory and language cortical activation patterns: preliminary data.
Kocak, Mehmet; Ulmer, John L; Biswal, Bharat B; Aralasmak, Ayse; Daniels, David L; Mark, Leighton P
2005-10-01
Intersex cortical and functional asymmetry is an ongoing topic of investigation. In this pilot study, we sought to determine the influence of acoustic scanner noise and sex on auditory and language cortical activation patterns of the dominant hemisphere. Echoplanar functional MR imaging (fMRI; 1.5T) was performed on 12 healthy right-handed subjects (6 men and 6 women). Passive text listening tasks were employed in 2 different background acoustic scanner noise conditions (12 sections/2 seconds TR [6 Hz] and 4 sections/2 seconds TR [2 Hz]), with the first 4 sections in identical locations in the left hemisphere. Cross-correlation analysis was used to construct activation maps in subregions of auditory and language relevant cortex of the dominant (left) hemisphere, and activation areas were calculated by using coefficient thresholds of 0.5, 0.6, and 0.7. Text listening caused robust activation in anatomically defined auditory cortex, and weaker activation in language relevant cortex, in all 12 individuals. As a whole, there was no significant difference in regional cortical activation between the 2 background acoustic scanner noise conditions. When sex was considered, men showed a significantly (P < .01) greater change in left hemisphere activation during the high scanner noise rate condition than did women. This effect was significant (P < .05) in the left superior temporal gyrus, the posterior aspect of the left middle temporal gyrus and superior temporal sulcus, and the left inferior frontal gyrus. An increase in the rate of background acoustic scanner noise caused increased activation in auditory and language relevant cortex of the dominant hemisphere in men, whereas no such change in activation was observed in women. Our preliminary data suggest possible methodologic confounds of fMRI research and call for larger investigations to substantiate our findings and further characterize sex-based influences on hemispheric activation patterns.
Nussbaum, Debra; Waddy-Smith, Bettie; Doyle, Jane
2012-11-01
There is a core body of knowledge, experience, and skills integral to facilitating auditory, speech, and spoken language development when working with the general population of students who are deaf and hard of hearing. There are additional issues, strategies, and challenges inherent in speech habilitation/rehabilitation practices essential to the population of deaf and hard of hearing students who also use sign language. This article will highlight philosophical and practical considerations related to practices used to facilitate spoken language development and associated literacy skills for children and adolescents who sign. It will discuss considerations for planning and implementing practices that acknowledge and utilize a student's abilities in sign language, and address how to link these skills to developing and using spoken language. Included will be considerations for children from early childhood through high school with a broad range of auditory access, language, and communication characteristics. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
Hamm, Jordan P; Ethridge, Lauren E; Boutros, Nashaat N; Keshavan, Matcheri S; Sweeney, John A; Pearlson, Godfrey D; Tamminga, Carol A; Clementz, Brett A
2014-04-01
Disrupted sensory processing is a core feature of psychotic disorders. Auditory paired stimuli (PS) evoke a complex neural response, but it is uncertain which aspects reflect shared and/or distinct liability for the most common severe psychoses, schizophrenia (SZ) and psychotic bipolar disorder (BDP). Evoked time-voltage/time-frequency domain responses quantified with EEG during a typical PS paradigm (S1-S2) were compared among proband groups (SZ [n = 232], BDP [181]), their relatives (SZrel [259], BDPrel [220]), and healthy participants (H [228]). Early S1-evoked responses were reduced in SZ and BDP, while later/S2 abnormalities showed SZ/SZrel and BDP/BDPrel specificity. Relatives' effects were absent/small despite significant familiality of the entire auditory neural response. This pattern suggests general and divergent biological pathways associated with psychosis, yet may reflect complications with conditioning solely on clinical phenomenology. Copyright © 2014 Society for Psychophysiological Research.
Bilateral Alternating Auditory Stimulations Facilitate Fear Extinction and Retrieval.
Boukezzi, Sarah; Silva, Catarina; Nazarian, Bruno; Rousseau, Pierre-François; Guedj, Eric; Valenzuela-Moguillansky, Camila; Khalfa, Stéphanie
2017-01-01
Disruption of fear conditioning, its extinction and its retrieval are at the core of posttraumatic stress disorder (PTSD). Such deficits, especially fear extinction delay, disappear after alternating bilateral stimulations (BLS) during eye movement desensitization and reprocessing (EMDR) therapy. An animal model of fear recovery, based on auditory cued fear conditioning and extinction learning, recently showed that BLS facilitate fear extinction and fear extinction retrieval. Our goal was to determine if these previous results found in animals can be reproduced in humans. Twenty-two healthy participants took part in a classical fear conditioning, extinction, and extinction recall paradigm. Behavioral responses (fear expectations) as well as psychophysiological measures (skin conductance responses, SCRs) were recorded. The results showed a significant fear expectation decrease during fear extinction with BLS. Additionally, SCR for fear extinction retrieval were significantly lower with BLS. Our results demonstrate the importance of BLS to reduce negative emotions, and provide a successful model to further explore the neural mechanisms underlying the sole BLS effect in the EMDR.
Bilateral Alternating Auditory Stimulations Facilitate Fear Extinction and Retrieval
Boukezzi, Sarah; Silva, Catarina; Nazarian, Bruno; Rousseau, Pierre-François; Guedj, Eric; Valenzuela-Moguillansky, Camila; Khalfa, Stéphanie
2017-01-01
Disruption of fear conditioning, its extinction and its retrieval are at the core of posttraumatic stress disorder (PTSD). Such deficits, especially fear extinction delay, disappear after alternating bilateral stimulations (BLS) during eye movement desensitization and reprocessing (EMDR) therapy. An animal model of fear recovery, based on auditory cued fear conditioning and extinction learning, recently showed that BLS facilitate fear extinction and fear extinction retrieval. Our goal was to determine if these previous results found in animals can be reproduced in humans. Twenty-two healthy participants took part in a classical fear conditioning, extinction, and extinction recall paradigm. Behavioral responses (fear expectations) as well as psychophysiological measures (skin conductance responses, SCRs) were recorded. The results showed a significant fear expectation decrease during fear extinction with BLS. Additionally, SCR for fear extinction retrieval were significantly lower with BLS. Our results demonstrate the importance of BLS to reduce negative emotions, and provide a successful model to further explore the neural mechanisms underlying the sole BLS effect in the EMDR. PMID:28659851
Sight over sound in the judgment of music performance.
Tsay, Chia-Jung
2013-09-03
Social judgments are made on the basis of both visual and auditory information, with consequential implications for our decisions. To examine the impact of visual information on expert judgment and its predictive validity for performance outcomes, this set of seven experiments in the domain of music offers a conservative test of the relative influence of vision versus audition. People consistently report that sound is the most important source of information in evaluating performance in music. However, the findings demonstrate that people actually depend primarily on visual information when making judgments about music performance. People reliably select the actual winners of live music competitions based on silent video recordings, but neither musical novices nor professional musicians were able to identify the winners based on sound recordings or recordings with both video and sound. The results highlight our natural, automatic, and nonconscious dependence on visual cues. The dominance of visual information emerges to the degree that it is overweighted relative to auditory information, even when sound is consciously valued as the core domain content.
Sight over sound in the judgment of music performance
Tsay, Chia-Jung
2013-01-01
Social judgments are made on the basis of both visual and auditory information, with consequential implications for our decisions. To examine the impact of visual information on expert judgment and its predictive validity for performance outcomes, this set of seven experiments in the domain of music offers a conservative test of the relative influence of vision versus audition. People consistently report that sound is the most important source of information in evaluating performance in music. However, the findings demonstrate that people actually depend primarily on visual information when making judgments about music performance. People reliably select the actual winners of live music competitions based on silent video recordings, but neither musical novices nor professional musicians were able to identify the winners based on sound recordings or recordings with both video and sound. The results highlight our natural, automatic, and nonconscious dependence on visual cues. The dominance of visual information emerges to the degree that it is overweighted relative to auditory information, even when sound is consciously valued as the core domain content. PMID:23959902
Study of Structure and Small-Scale Fragmentation in TMC-1
NASA Technical Reports Server (NTRS)
Langer, W. D.; Velusamy, T.; Kuiper, T. B.; Levin, S.; Olsen, E.; Migenes, V.
1995-01-01
Large-scale C¹⁸O maps show that the Taurus molecular cloud 1 (TMC-1) has numerous cores located along a ridge which extends about 12 minutes by at least 35 minutes. The cores traced by C¹⁸O are about a few arcminutes (0.1-0.2 pc) in extent, typically contain about 0.5-3 solar mass, and are probably gravitationally bound. We present a detailed study of the small-scale fragmentary structure of one of these cores, called core D, within TMC-1 using very high spectral and spatial resolution maps of CCS and CS. The CCS lines are excellent tracers for investigating the density, temperature, and velocity structure in dense cores. The high spectral resolution, 0.008 km/s, data consist mainly of single-dish, Nyquist-sampled maps of CCS at 22 GHz with 45 sec spatial resolution taken with NASA's 70 m DSN antenna at Goldstone. The high spatial resolution spectral line maps were made with the Very Large Array (9 sec resolution) at 22 GHz and with the OVRO millimeter array in CCS and CS at 93 GHz and 98 GHz, respectively, with 6 sec resolution. These maps are supplemented with single-dish observations of CCS and CC³⁴S spectra at 33 GHz using a NASA 34 m DSN antenna, and CCS 93 GHz, C³⁴S (2-1), and C¹⁸O (1-0) single-dish observations made with the AT&T Bell Laboratories 7 m antenna. Our high spectral and spatial CCS and CS maps show that core D is highly fragmented. The single-dish CCS observations map out several clumps which range in size from approx. 45 sec to 90 sec (0.03-0.06 pc). These clumps have very narrow intrinsic line widths, 0.11-0.25 km/s, slightly larger than the thermal line width for CCS at 10 K, and masses about 0.03-0.2 solar mass. Interferometer observations of some of these clumps show that they have considerable additional internal structure, consisting of several condensations ranging in size from approx. 10 sec-30 sec (0.007-0.021 pc), also with narrow line widths. The mass of these smallest fragments is of order 0.01 solar mass.
These small-scale structures traced by CCS appear to be gravitationally unbound by a large factor. Most of these objects have masses that fall below those of the putative proto-brown dwarfs (approx. less than 0.1 solar mass). The presence of many small gravitationally unbound clumps suggests that fragmentation mechanisms other than a purely Jeans gravitational instability may be important for the dynamics of these cold dense cores.
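The "gravitationally unbound by a large factor" claim can be checked with a back-of-the-envelope virial estimate, M_vir ≈ 5σ²R/G for a uniform-density sphere, using representative values from the quoted ranges (the specific FWHM, radius, and mass below are illustrative choices, not values reported for any particular clump):

```python
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
pc = 3.086e16          # m

def virial_mass_msun(fwhm_kms, radius_pc):
    """Virial mass of a uniform sphere: M_vir = 5 * sigma^2 * R / G."""
    sigma = fwhm_kms * 1e3 / 2.355           # FWHM -> 1D velocity dispersion
    return 5 * sigma**2 * (radius_pc * pc) / G / M_sun

# Illustrative small fragment: FWHM ~0.2 km/s, radius ~0.01 pc, mass ~0.01 Msun
m_vir = virial_mass_msun(0.2, 0.01)          # ~0.08 Msun needed for binding
m_obs = 0.01                                 # Msun, quoted fragment mass scale
unbound_factor = m_vir / m_obs               # observed mass falls well short
```

The observed line widths thus imply a virial mass roughly an order of magnitude above the fragment masses, consistent with the conclusion that these structures are not bound by self-gravity alone.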
21st Century Skills Map: The Arts
ERIC Educational Resources Information Center
Dean, Colleen; Ebert, Christie M. Lynch; McGreevy-Nichols, Susan; Quinn, Betsy; Sabol, F. Robert; Schmid, Dale; Shauck, R. Barry; Shuler, Scott C.
2010-01-01
This 21st Century Skills Map is the result of hundreds of hours of research, development and feedback from educators and business leaders across the nation. The Partnership for 21st Century Skills has issued this map for the core subject of the Arts.
21st Century Skills Map: Social Studies
ERIC Educational Resources Information Center
Partnership for 21st Century Skills, 2007
2007-01-01
This 21st Century Skills Map is the result of hundreds of hours of research, development and feedback from educators and business leaders across the nation. The Partnership for 21st Century Skills has issued this map for the core subject of Social Studies.
NASA Astrophysics Data System (ADS)
2008-10-01
Based on bibliometric data from information-services provider Thomson Reuters, this map reveals "core areas" of physics, shown as coloured circular nodes, and the relationship between these subdisciplines, shown as lines.
Processing of band-passed noise in the lateral auditory belt cortex of the rhesus monkey.
Rauschecker, Josef P; Tian, Biao
2004-06-01
Neurons in the lateral belt areas of rhesus monkey auditory cortex were stimulated with band-passed noise (BPN) bursts of different bandwidths and center frequencies. Most neurons responded much more vigorously to these sounds than to tone bursts of a single frequency, and it thus became possible to elicit a clear response in 85% of lateral belt neurons. Tuning to center frequency and bandwidth of the BPN bursts was analyzed. Best center frequency varied along the rostrocaudal direction, with 2 reversals defining borders between areas. We confirmed the existence of 2 belt areas (AL and ML) that were laterally adjacent to the core areas (R and A1, respectively) and a third area (CL) adjacent to area CM on the supratemporal plane (STP). All 3 lateral belt areas were cochleotopically organized with their frequency gradients collinear to those of the adjacent STP areas. Although A1 neurons responded best to pure tones and their responses decreased with increasing bandwidth, 63% of the lateral belt neurons were tuned to bandwidths between 1/3 and 2 octaves and showed either one or multiple peaks. The results are compared with previous data from visual cortex and are discussed in the context of spectral integration, whereby the lateral belt forms a relatively early stage of processing in the cortical hierarchy, giving rise to parallel streams for the identification of auditory objects and their localization in space.
Barrowcliff, Alastair L; Haddock, Gillian
2010-12-01
Elements of voice content and characteristics of a hallucinatory voice are considered to be associated with compliance and resistance to auditory command hallucinations. However, a need for further exploration of such features remains. We aimed to explore the associations of different types of commands (benign, self-harm, harm-other), a range of symptom measures, and a trait measure of expressed compliance with compliance to the most recent command and with command hallucinations over the previous 28 days. Participants meeting Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria for schizophrenia or schizoaffective disorder, with auditory hallucinations in the previous 28 days, were screened. Where commands were reported, a full assessment of positive symptoms, social rank, beliefs about voices, and trait compliance was completed. Compliance with the last self-harm command was associated with elevated voice malevolence, heightened symptom presentation, and perceived consequences for non-compliance. Compliance with the last harm-other command was associated with elevated symptom severity, higher perceived consequences for non-compliance, and higher levels of voice social rank. However, these associations were not maintained for compliance during the previous 28 days. Findings indicate the importance of identifying the content of commands, overall symptom severity, and core variables associated with compliance to specific command categories. The temporal stability of established mediating variables needs further examination.
Increased thalamic resting-state connectivity as a core driver of LSD-induced hallucinations.
Müller, F; Lenz, C; Dolder, P; Lang, U; Schmidt, A; Liechti, M; Borgwardt, S
2017-12-01
It has been proposed that the thalamocortical system is an important site of action of hallucinogenic drugs and an essential component of the neural correlates of consciousness. Hallucinogenic drugs such as LSD can be used to induce profoundly altered states of consciousness, and it is thus of interest to test the effects of these drugs on this system. 100 μg LSD was administered orally to 20 healthy participants prior to fMRI assessment. Whole brain thalamic functional connectivity was measured using ROI-to-ROI and ROI-to-voxel approaches. Correlation analyses were used to explore relationships between thalamic connectivity to regions involved in auditory and visual hallucinations and subjective ratings on auditory and visual drug effects. LSD caused significant alterations in all dimensions of the 5D-ASC scale and significantly increased thalamic functional connectivity to various cortical regions. Furthermore, LSD-induced functional connectivity measures between the thalamus and the right fusiform gyrus and insula correlated significantly with subjective auditory and visual drug effects. Hallucinogenic drug effects might be provoked by facilitations of cortical excitability via thalamocortical interactions. Our findings have implications for the understanding of the mechanism of action of hallucinogenic drugs and provide further insight into the role of the 5-HT2A receptor in altered states of consciousness. © 2017 The Authors Acta Psychiatrica Scandinavica Published by John Wiley & Sons Ltd.
Jorge, João; Figueiredo, Patrícia; Gruetter, Rolf; van der Zwaag, Wietske
2018-06-01
External stimuli and tasks often elicit negative BOLD responses in various brain regions, and growing experimental evidence supports that these phenomena are functionally meaningful. In this work, the high sensitivity available at 7T was explored to map and characterize both positive (PBRs) and negative BOLD responses (NBRs) to visual checkerboard stimulation, occurring in various brain regions within and beyond the visual cortex. Recently-proposed accelerated fMRI techniques were employed for data acquisition, and procedures for exclusion of large draining vein contributions, together with ICA-assisted denoising, were included in the analysis to improve response estimation. Besides the visual cortex, significant PBRs were found in the lateral geniculate nucleus and superior colliculus, as well as the pre-central sulcus; in these regions, response durations increased monotonically with stimulus duration, in tight covariation with the visual PBR duration. Significant NBRs were found in the visual cortex, auditory cortex, default-mode network (DMN) and superior parietal lobule; NBR durations also tended to increase with stimulus duration, but were significantly less sustained than the visual PBR, especially for the DMN and superior parietal lobule. Responses in visual and auditory cortex were further studied for checkerboard contrast dependence, and their amplitudes were found to increase monotonically with contrast, linearly correlated with the visual PBR amplitude. Overall, these findings suggest the presence of dynamic neuronal interactions across multiple brain regions, sensitive to stimulus intensity and duration, and demonstrate the richness of information obtainable when jointly mapping positive and negative BOLD responses at a whole-brain scale, with ultra-high field fMRI. © 2018 Wiley Periodicals, Inc.
Attias, Joseph; Greenstein, Tally; Peled, Miriam; Ulanovski, David; Wohlgelernter, Jay; Raveh, Eyal
The aim of the study was to compare auditory and speech outcomes and electrical parameters on average 8 years after cochlear implantation between children with isolated auditory neuropathy (AN) and children with sensorineural hearing loss (SNHL). The study was conducted at a tertiary, university-affiliated pediatric medical center. The cohort included 16 patients with isolated AN with current age of 5 to 12.2 years who had been using a cochlear implant for at least 3.4 years and 16 control patients with SNHL matched for duration of deafness, age at implantation, type of implant, and unilateral/bilateral implant placement. All participants had had extensive auditory rehabilitation before and after implantation, including the use of conventional hearing aids. Most patients received Cochlear Nucleus devices, and the remainder either Med-El or Advanced Bionics devices. Unaided pure-tone audiograms were evaluated before and after implantation. Implantation outcomes were assessed by auditory and speech recognition tests in quiet and in noise. Data were also collected on the educational setting at 1 year after implantation and at school age. The electrical stimulation measures were evaluated only in the Cochlear Nucleus implant recipients in the two groups. Similar mapping and electrical measurement techniques were used in the two groups. Electrical thresholds, comfortable level, dynamic range, and objective neural response telemetry threshold were measured across the 22-electrode array in each patient. Main outcome measures were between-group differences in the following parameters: (1) Auditory and speech tests. (2) Residual hearing. (3) Electrical stimulation parameters. (4) Correlations of residual hearing at low frequencies with electrical thresholds at the basal, middle, and apical electrodes. The children with isolated AN performed equally well to the children with SNHL on auditory and speech recognition tests in both quiet and noise. 
More children in the AN group than in the SNHL group were attending mainstream educational settings at school age, but the difference was not statistically significant. Significant between-group differences were noted in electrical measurements: the AN group was characterized by a lower current charge to reach subjective electrical thresholds, a lower comfortable level and dynamic range, and a lower telemetric neural response threshold. Based on pure-tone audiograms, the children with AN also had more residual hearing before and after implantation. Highly positive coefficients were found on correlation analysis between T levels across the basal and midcochlear electrodes and low-frequency acoustic thresholds. Prelingual children with isolated AN who fail to show expected oral and auditory progress after extensive rehabilitation with conventional hearing aids should be considered for cochlear implantation. Children with isolated AN showed a similar pattern to children with SNHL on auditory performance tests after cochlear implantation. The lower current charge required to evoke subjective and objective electrical thresholds in children with AN compared with children with SNHL may be attributed to the contribution of electrophonic hearing from the remaining neurons and hair cells. In addition, it is also possible that mechanical stimulation of the basilar membrane, as in acoustic stimulation, is added to the electrical stimulation of the cochlear implant.
Earliest phases of star formation (EPoS). Dust temperature distributions in isolated starless cores
NASA Astrophysics Data System (ADS)
Lippok, N.; Launhardt, R.; Henning, Th.; Balog, Z.; Beuther, H.; Kainulainen, J.; Krause, O.; Linz, H.; Nielbock, M.; Ragan, S. E.; Robitaille, T. P.; Sadavoy, S. I.; Schmiedeke, A.
2016-07-01
Context. Stars form by the gravitational collapse of cold and dense molecular cloud cores. Constraining the temperature and density structure of such cores is fundamental for understanding the initial conditions of star formation. We use Herschel observations of the thermal far-infrared (FIR) dust emission from nearby and isolated molecular cloud cores and combine them with ground-based submillimeter continuum data to derive observational constraints on their temperature and density structure. Aims: The aim of this study is to verify the validity of a ray-tracing inversion technique developed to derive the dust temperature and density structure of nearby and isolated starless cores directly from the dust emission maps and to test if the resulting temperature and density profiles are consistent with physical models. Methods: We have developed a ray-tracing inversion technique that can be used to derive the temperature and density structure of starless cores directly from the observed dust emission maps without the need to make assumptions about the physical conditions. Using this ray-tracing inversion technique, we derive the dust temperature and density structure of six isolated starless molecular cloud cores from dust emission maps in the wavelengths range 100 μm-1.2 mm. We then employ self-consistent radiative transfer modeling to the density profiles derived with the ray-tracing inversion method. In this model, the interstellar radiation field (ISRF) is the only heating source. The local strength of the ISRF as well as the total extinction provided by the outer envelope are treated as semi-free parameters which we scale within defined limits. The best-fit values of both parameters are derived by comparing the self-consistently calculated temperature profiles with those derived by the ray-tracing method. 
Results: We confirm earlier results and show that all starless cores are significantly colder inside than outside, with central core temperatures in the range 7.5-11.9 K and envelope temperatures that are 2.4-9.6 K higher. The core temperatures show a strong negative correlation with peak column density, which suggests that the thermal structure of the cores is dominated by external heating from the ISRF and shielding by dusty envelopes. We find that temperature profiles derived with the ray-tracing inversion method can be well reproduced with self-consistent radiative transfer models if the cores have a geometry that is not too complex and good data coverage, with spatially resolved maps at five or more wavelengths in the range between 100 μm and 1.2 mm. We also confirm results from earlier studies that found that the usually adopted canonical value of the total strength of the ISRF in the solar neighbourhood is incompatible with the most widely used dust opacity models for dense cores. However, with the data available for this study, we cannot uniquely resolve the degeneracy between the dust opacity law and the strength of the ISRF. Final T maps (FITS format) are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/cgi-bin/qcat?J/A+A/592/A61
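For comparison with the ray-tracing inversion used in the paper, the standard simpler approach is a single-temperature modified-blackbody (greybody) fit to the multi-wavelength dust fluxes. The sketch below fits synthetic fluxes at the study's approximate band centers; the emissivity index β = 2, reference frequency, opacity normalization, and noise level are assumptions, and a single-temperature fit deliberately ignores the line-of-sight temperature gradients that the ray-tracing method is designed to recover.

```python
import numpy as np
from scipy.optimize import curve_fit

h, k, c = 6.626e-34, 1.381e-23, 2.998e8   # SI: Planck, Boltzmann, light speed

def greybody(nu, T, log_tau0, beta=2.0, nu0=1e12):
    """Optically thin modified blackbody: S_nu proportional to tau(nu) * B_nu(T)."""
    tau = 10.0**log_tau0 * (nu / nu0) ** beta
    planck = 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))
    return tau * planck

# Synthetic fluxes at roughly the bands used in the study (100 um - 1.2 mm)
wavelengths_um = np.array([100.0, 160.0, 250.0, 350.0, 500.0, 1200.0])
nu = c / (wavelengths_um * 1e-6)
rng = np.random.default_rng(2)
flux = greybody(nu, 10.0, -3.0) * (1 + 0.05 * rng.standard_normal(nu.size))

# Fit T and the opacity normalization; beta is held fixed at its default
(T_fit, log_tau_fit), _ = curve_fit(greybody, nu, flux, p0=[15.0, -3.5])
```

For an externally heated core, such a fit returns a single line-of-sight-averaged temperature; the paper's point is that resolved maps at five or more bands allow the inversion to separate the cold interior from the warmer envelope instead.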
White matter anisotropy in the ventral language pathway predicts sound-to-word learning success
Wong, Francis C. K.; Chandrasekaran, Bharath; Garibaldi, Kyla; Wong, Patrick C. M.
2011-01-01
According to the dual stream model of auditory language processing, the dorsal stream is responsible for mapping sound to articulation while the ventral stream plays the role of mapping sound to meaning. Most researchers agree that the arcuate fasciculus (AF) is the neuroanatomical correlate of the dorsal stream; however, less is known about what constitutes the ventral one. Nevertheless, two hypotheses exist: one suggests that the segment of the AF that terminates in middle temporal gyrus corresponds to the ventral stream, and the other suggests that it is the extreme capsule that underlies this sound-to-meaning pathway. The goal of this study is to evaluate these two competing hypotheses. We trained participants with a sound-to-word learning paradigm in which they learned to use a foreign phonetic contrast for signaling word meaning. Using diffusion tensor imaging (DTI), a brain imaging tool to investigate white matter connectivity in humans, we found that fractional anisotropy in the left parietal-temporal region positively correlated with the performance in sound-to-word learning. In addition, fiber tracking revealed a ventral pathway, composed of the extreme capsule and the inferior longitudinal fasciculus, that mediated auditory comprehension. Our findings provide converging evidence supporting the importance of the ventral stream, an extreme capsule system, in the frontal-temporal language network. Implications for current models of speech processing will also be discussed. PMID:21677162
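Fractional anisotropy, the DTI measure correlated with learning success here, is computed from the three eigenvalues of the fitted diffusion tensor. A minimal sketch (the example eigenvalues are typical illustrative values, not data from this study):

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three eigenvalues of a fitted diffusion tensor."""
    lam = np.asarray(evals, dtype=float)
    md = lam.mean()                                  # mean diffusivity
    return np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))

# Anisotropic (white-matter-like) vs perfectly isotropic diffusion profiles
fa_wm = fractional_anisotropy([1.7e-3, 0.3e-3, 0.3e-3])   # ~0.8
fa_iso = fractional_anisotropy([1.0e-3, 1.0e-3, 1.0e-3])  # 0.0
```

FA ranges from 0 (isotropic diffusion) to 1 (diffusion confined to one axis); higher values in white matter are commonly read as more coherent fiber organization, which is the interpretation behind correlating FA with behavioral learning performance.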
Wan, Catherine Y; Bazen, Loes; Baars, Rebecca; Libenson, Amanda; Zipse, Lauryn; Zuk, Jennifer; Norton, Andrea; Schlaug, Gottfried
2011-01-01
Although up to 25% of children with autism are non-verbal, there are very few interventions that can reliably produce significant improvements in speech output. Recently, a novel intervention called Auditory-Motor Mapping Training (AMMT) has been developed, which aims to promote speech production directly by training the association between sounds and articulatory actions using intonation and bimanual motor activities. AMMT capitalizes on the inherent musical strengths of children with autism, and offers activities that they intrinsically enjoy. It also engages and potentially stimulates a network of brain regions that may be dysfunctional in autism. Here, we report an initial efficacy study to provide 'proof of concept' for AMMT. Six non-verbal children with autism participated. Prior to treatment, the children had no intelligible words. They each received 40 individual sessions of AMMT 5 times per week, over an 8-week period. Probe assessments were conducted periodically during baseline, therapy, and follow-up sessions. After therapy, all children showed significant improvements in their ability to articulate words and phrases, with generalization to items that were not practiced during therapy sessions. Because these children had no or minimal vocal output prior to treatment, the acquisition of speech sounds and word approximations through AMMT represents a critical step in expressive language development in children with autism.
Crosse, Michael J; Lalor, Edmund C
2014-04-01
Visual speech can greatly enhance a listener's comprehension of auditory speech when they are presented simultaneously. Efforts to determine the neural underpinnings of this phenomenon have been hampered by the limited temporal resolution of hemodynamic imaging and the fact that EEG and magnetoencephalographic data are usually analyzed in response to simple, discrete stimuli. Recent research has shown that neuronal activity in human auditory cortex tracks the envelope of natural speech. Here, we exploit this finding by estimating a linear forward-mapping between the speech envelope and EEG data and show that the latency at which the envelope of natural speech is represented in cortex is shortened by >10 ms when continuous audiovisual speech is presented compared with audio-only speech. In addition, we use a reverse-mapping approach to reconstruct an estimate of the speech stimulus from the EEG data and, by comparing the bimodal estimate with the sum of the unimodal estimates, find no evidence of any nonlinear additive effects in the audiovisual speech condition. These findings point to an underlying mechanism that could account for enhanced comprehension during audiovisual speech. Specifically, we hypothesize that low-level acoustic features that are temporally coherent with the preceding visual stream may be synthesized into a speech object at an earlier latency, which may provide an extended period of low-level processing before extraction of semantic information.
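The forward- and reverse-mapping analyses described in this abstract are, at their core, regularized linear regressions over time-lagged stimulus features. Below is a minimal sketch of the forward direction (envelope to one EEG channel); the lag count, regularization value, and toy data are all assumptions for illustration, not the authors' actual pipeline:

```python
import numpy as np

def lagged_design(x, n_lags):
    """Design matrix whose column k is the stimulus delayed by k samples."""
    n = len(x)
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = x[:n - k]
    return X

def fit_forward_model(envelope, eeg, n_lags=16, lam=1e-3):
    """Ridge regression from speech envelope to one EEG channel.

    Returns w, where w[k] estimates the neural response at lag k
    (a temporal response function)."""
    X = lagged_design(envelope, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)

# Toy check: synthetic "EEG" that is the envelope delayed by 5 samples,
# so the recovered response function should peak at lag 5.
rng = np.random.default_rng(0)
env = rng.standard_normal(2000)
eeg = np.roll(env, 5)
eeg[:5] = 0.0
w = fit_forward_model(env, eeg)
print(int(np.argmax(np.abs(w))))  # prints 5
```

The reverse mapping in the abstract works the same way with the roles swapped: lagged (multichannel) EEG predicts the envelope, yielding a stimulus reconstruction whose bimodal and summed-unimodal versions can then be compared.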
McKetin, Rebecca; Baker, Amanda L; Dawe, Sharon; Voce, Alexandra; Lubman, Dan I
2017-05-01
We examined the lifetime experience of hallucinations and delusions associated with transient methamphetamine-related psychosis (MAP), persistent MAP and primary psychosis among a cohort of dependent methamphetamine users. Participants were classified as having (a) no current psychotic symptoms, (n=110); (b) psychotic symptoms only when using methamphetamine (transient MAP, n=85); (c) psychotic symptoms both when using methamphetamine and when abstaining from methamphetamine (persistent MAP, n=37), or (d) meeting DSM-IV criteria for lifetime schizophrenia or mania (primary psychosis, n=52). Current psychotic symptoms were classified as a score of 4 or more on any of the Brief Psychiatric Rating Scale items of suspiciousness, hallucinations or unusual thought content in the past month. Lifetime psychotic diagnoses and symptoms were assessed using the Composite International Diagnostic Interview. Transient MAP was associated with persecutory delusions and tactile hallucinations (compared to the no symptom group). Persistent MAP was additionally associated with delusions of reference, thought interference and complex auditory, visual, olfactory and tactile hallucinations, while primary psychosis was also associated with delusions of thought projection, erotomania and passivity. The presence of non-persecutory delusions and hallucinations across various modalities is a marker for persistent MAP or primary psychosis in people who use methamphetamine. Copyright © 2017. Published by Elsevier B.V.
2010-07-22
dependent, providing a natural bandwidth match between compute cores and the memory subsystem. • High Bandwidth Density. Waveguides crossing the chip...simulate this memory access architecture on a 256-core chip with a concentrated 64-node network using detailed traces of high-performance embedded...memory modules, we place memory access points (MAPs) around the periphery of the chip connected to the network. These MAPs, shown in Figure 4, contain
Of Ivory and Smurfs: Loxodontan MapReduce Experiments for Web Search
2009-11-01
i.e., index construction may involve multiple flushes to local disk and on-disk merge sorts outside of MapReduce). Once the local indexes have been...contained 198 cores, which, with current dual-processor quad-core configurations, could fit into 25 machines—a far more modest cluster with today's...significant impact on effectiveness. Our simple pruning technique was performed at query time and hence could be adapted to query-dependent
Tubbing, Luuk; Harting, Janneke; Stronks, Karien
2015-06-01
While expectations of integrated public health policy (IPHP) promoting public health are high, assessment is hampered by the concept's ambiguity. This paper aims to contribute to conceptual clarification of IPHP as first step in further measurement development. In an online concept mapping procedure, we invited 237 Dutch experts, 62 of whom generated statements on characteristics of IPHP. Next, 100 experts were invited, 24 of whom sorted the statements into piles according to their perceived similarity and rated the statements on relevance and measurability. Data was analyzed using concept mapping software. The concept map consisted of 97 statements, grouped into 11 clusters and five themes. Core themes were 'integration', concerning 'policy coherence' and 'organizing connections', and 'health', concerning 'positioning health' and 'addressing determinants'. Peripheral themes were 'generic aspects', 'capacities', and 'goals and setting', which respectively addressed general notions of integrated policy making, conditions for IPHP, and the variety in manifestations of IPHP. Measurability ratings were low compared to relevance. The concept map gives an overview of interrelated themes, distinguishes core from peripheral dimensions, and provides pointers for theories of the policy process. While low measurability ratings indicate measurement difficulties, the core themes provide pointers for systematic insight into IPHP through measurement. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Spatial representation of pitch height: the SMARC effect.
Rusconi, Elena; Kwan, Bonnie; Giordano, Bruno L; Umiltà, Carlo; Butterworth, Brian
2006-03-01
Through the preferential pairing of response positions to pitch, here we show that the internal representation of pitch height is spatial in nature and affects performance, especially in musically trained participants, when response alternatives are either vertically or horizontally aligned. The finding that our cognitive system maps pitch height onto an internal representation of space, which in turn affects motor performance even when this perceptual attribute is irrelevant to the task, extends previous studies on auditory perception and suggests an interesting analogy between music perception and mathematical cognition. Both the basic elements of mathematical cognition (i.e. numbers) and the basic elements of musical cognition (i.e. pitches) appear to be mapped onto a mental spatial representation in a way that affects motor performance.
Sarigiannis, Amy N.; Boulton, Matthew L.
2012-01-01
Objectives. We evaluated the utility of a competency mapping process for assessing the integration of clinical and public health skills in a newly developed Community Health Center (CHC) rotation at the University of Michigan School of Public Health Preventive Medicine residency. Methods. Learning objectives for the CHC rotation were derived from the Accreditation Council for Graduate Medical Education core clinical preventive medicine competencies. CHC learning objectives were mapped to clinical preventive medicine competencies specific to the specialty of public health and general preventive medicine. Objectives were also mapped to The Council on Linkages Between Academia and Public Health Practice’s tier 2 Core Competencies for Public Health Professionals. Results. CHC learning objectives mapped to all 4 (100%) of the public health and general preventive medicine clinical preventive medicine competencies. CHC population-level learning objectives mapped to 32 (94%) of 34 competencies for public health professionals. Conclusions. Utilizing competency mapping to assess clinical–public health integration in a new CHC rotation proved to be feasible and useful. Clinical preventive medicine learning objectives for a CHC rotation can also address public health competencies. PMID:22690972
Floresco, Stan B; Montes, David R; Tse, Maric M T; van Holstein, Mieke
2018-02-21
The nucleus accumbens (NAc) is a key node within corticolimbic circuitry for guiding action selection and cost/benefit decision making in situations involving reward uncertainty. Preclinical studies have typically assessed risk/reward decision making using assays where decisions are guided by internally generated representations of choice-outcome contingencies. Yet, real-life decisions are often influenced by external stimuli that inform about likelihoods of obtaining rewards. How different subregions of the NAc mediate decision making in such situations is unclear. Here, we used a novel assay colloquially termed the "Blackjack" task that models these types of situations. Male Long-Evans rats were trained to choose between one lever that always delivered a one-pellet reward and another that delivered four pellets with different probabilities [either 50% (good-odds) or 12.5% (poor-odds)], which were signaled by one of two auditory cues. Under control conditions, rats selected the large/risky option more often on good-odds versus poor-odds trials. Inactivation of the NAc core caused indiscriminate choice patterns. In contrast, NAc shell inactivation increased risky choice, more prominently on poor-odds trials. Additional experiments revealed that both subregions contribute to auditory conditional discrimination. NAc core or shell inactivation reduced Pavlovian approach elicited by an auditory CS+, yet shell inactivation also increased responding during presentation of a CS-. These data highlight distinct contributions for NAc subregions in decision making and reward seeking guided by discriminative stimuli. The core is crucial for implementation of conditional rules, whereas the shell refines reward seeking by mitigating the allure of larger, unlikely rewards and reducing expression of inappropriate or non-rewarded actions. SIGNIFICANCE STATEMENT Using external cues to guide decision making is crucial for adaptive behavior. 
Deficits in cue-guided behavior have been associated with neuropsychiatric disorders, such as attention deficit hyperactivity disorder and schizophrenia, which in turn has been linked to aberrant processing in the nucleus accumbens. However, many preclinical studies have often assessed risk/reward decision making in the absence of explicit cues. The current study fills that gap by using a novel task that allows for the assessment of cue-guided risk/reward decision making in rodents. Our findings identified distinct yet complementary roles for the medial versus lateral portions of this nucleus that provide a broader understanding of the differential contributions it makes to decision making and reward seeking guided by discriminative stimuli. Copyright © 2018 the authors 0270-6474/18/381901-14$15.00/0.
Evolutionary status of the pre-protostellar core L1498
NASA Technical Reports Server (NTRS)
Kuiper, T. B.; Langer, W. D.; Velusamy, T.; Levin, S. M. (Principal Investigator)
1996-01-01
L1498 is a classic example of a dense cold pre-protostellar core. To study the evolutionary status, structure, dynamics, and chemical properties of this core, we have obtained high spatial and high spectral resolution observations of molecules tracing densities of 10^3-10^5 cm^-3. We observed CCS, NH3, C3H2, and HC7N with NASA's DSN 70 m antennas. We also present large-scale maps of C18O and 13CO observed with the AT&T 7 m antenna. For the high spatial resolution maps of selected regions within the core we used the VLA for CCS at 22 GHz, and the Owens Valley Radio Observatory (OVRO) MMA for CCS at 94 GHz and CS (2-1). The 22 GHz CCS emission marks a high-density [n(H2) > 10^4 cm^-3] core, which is elongated with a major axis along the SE-NW direction. NH3 and C3H2 emissions are located inside the boundary of the CCS emission. C18O emission traces a lower density gas extending beyond the CCS boundary. Along the major axis of the dense core, CCS, NH3 and C3H2 emission show evidence of limb brightening. The observations are consistent with a chemically differentiated onion-shell structure for the L1498 core, with NH3 in the inner and CCS in the outer parts of the core. The high angular resolution (9"-12") spectral line maps obtained by combining NASA Goldstone 70 m and VLA data resolve the CCS 22 GHz emission in the southeast and northwest boundaries into arclike enhancements, supporting the picture that CCS emission originates in a shell outside the NH3 emitting region. Interferometric maps of CCS at 94 GHz and CS at 98 GHz show that their emitting regions contain several small-scale dense condensations. We suggest that the differences between the CCS, CS, C3H2, and NH3 emission are caused by a time-dependent effect as the core evolves slowly. We interpret the chemical and physical properties of L1498 in terms of a quasi-static (or slowly contracting) dense core in which the outer envelope is still growing.
The growth rate of the core is determined by the density increase in the CCS shell resulting from the accretion of the outer low-density gas traced by C18O. We conclude that L1498 could become unstable to rapid collapse to form a protostar in less than 5 × 10^6 yr.
Fluid flow near the surface of earth's outer core
NASA Technical Reports Server (NTRS)
Bloxham, Jeremy; Jackson, Andrew
1991-01-01
This review examines the recent attempts at extracting information on the pattern of fluid flow near the surface of the outer core from the geomagnetic secular variation. Maps of the fluid flow at the core surface are important as they may provide some insight into the process of the geodynamo and may place useful constraints on geodynamo models. In contrast to the case of mantle convection, only very small lateral variations in core density are necessary to drive the flow; these density variations are, by several orders of magnitude, too small to be imaged seismically; therefore, the geomagnetic secular variation is utilized to infer the flow. As substantial differences exist between maps developed by different researchers, the possible underlying reasons for these differences are examined with particular attention given to the inherent problems of nonuniqueness.
Davis, G L; McMullen, M D; Baysdorfer, C; Musket, T; Grant, D; Staebell, M; Xu, G; Polacco, M; Koster, L; Melia-Hancock, S; Houchins, K; Chao, S; Coe, E H
1999-01-01
We have constructed a 1736-locus maize genome map containing 1156 loci probed by cDNAs, 545 probed by random genomic clones, 16 by simple sequence repeats (SSRs), 14 by isozymes, and 5 by anonymous clones. Sequence information is available for 56% of the loci with 66% of the sequenced loci assigned functions. A total of 596 new ESTs were mapped from a B73 library of 5-wk-old shoots. The map contains 237 loci probed by barley, oat, wheat, rice, or tripsacum clones, which serve as grass genome reference points in comparisons between maize and other grass maps. Ninety core markers selected for low copy number, high polymorphism, and even spacing along the chromosome delineate the 100 bins on the map. The average bin size is 17 cM. Use of bin assignments enables comparison among different maize mapping populations and experiments including those involving cytogenetic stocks, mutants, or quantitative trait loci. Integration of nonmaize markers in the map extends the resources available for gene discovery beyond the boundaries of maize mapping information into the expanse of map, sequence, and phenotype information from other grass species. This map provides a foundation for numerous basic and applied investigations including studies of gene organization, gene and genome evolution, targeted cloning, and dissection of complex traits. PMID:10388831
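The bin concept above (core markers as fixed boundaries, with loci assigned to the interval between flanking markers) can be sketched as a simple lookup. The marker positions below are hypothetical placeholders, not the map's actual core markers:

```python
import bisect

# Hypothetical core-marker positions (cM) along one chromosome; the real
# map uses 90 core markers chosen for low copy number, high polymorphism,
# and even spacing (average bin size ~17 cM).
core_markers_cM = [0.0, 17.0, 34.0, 51.0, 68.0]

def bin_of(locus_cM):
    """Bin index of a locus: the interval between flanking core markers."""
    return bisect.bisect_right(core_markers_cM, locus_cM)

print(bin_of(25.0))  # prints 2 (between the markers at 17 and 34 cM)
```

Because every mapping population shares the same core markers, bin assignments computed this way are comparable across experiments, which is the cross-population integration benefit the abstract describes.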
The snake geothermal drilling project. Innovative approaches to geothermal exploration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shervais, John W.; Evans, James P.; Liberty, Lee M.
2014-02-21
The goal of our project was to test innovative technologies using existing and new data, and to ground-truth these technologies using slim-hole core technology. The slim-hole core allowed us to understand subsurface stratigraphy and alteration in detail, and to correlate lithologies observed in core with surface based geophysical studies. Compiled data included geologic maps, volcanic vent distribution, structural maps, existing well logs and temperature gradient logs, groundwater temperatures, and geophysical surveys (resistivity, magnetics, gravity). New data included high-resolution gravity and magnetic surveys, high-resolution seismic surveys, three slimhole test wells, borehole wireline logs, lithology logs, water chemistry, alteration mineralogy, fracture distribution, and new thermal gradient measurements.
Core Knowledge and the Emergence of Symbols: The Case of Maps
ERIC Educational Resources Information Center
Huang, Yi; Spelke, Elizabeth S.
2015-01-01
Map reading is unique to humans but is present in people of diverse cultures, at ages as young as 4 years old. Here, we explore the nature and sources of this ability and ask both what geometric information young children use in maps and what nonsymbolic systems are associated with their map-reading performance. Four-year-old children were given…
The 'F-complex' and MMN tap different aspects of deviance.
Laufer, Ilan; Pratt, Hillel
2005-02-01
To compare the 'F(fusion)-complex' with the Mismatch negativity (MMN), both components associated with automatic detection of changes in the acoustic stimulus flow. Ten right-handed adult native Hebrew speakers discriminated vowel-consonant-vowel (V-C-V) sequences /ada/ (deviant) and /aga/ (standard) in an active auditory 'Oddball' task, and the brain potentials associated with performance of the task were recorded from 21 electrodes. Stimuli were generated by fusing the acoustic elements of the V-C-V sequences as follows: base was always presented in front of the subject, and formant transitions were presented to the front, left or right in a virtual reality room. An illusion of a lateralized echo (duplex sensation) accompanied base fusion with the lateralized formant locations. Source current density estimates were derived for the net response to the fusion of the speech elements (F-complex) and for the MMN, using low-resolution electromagnetic tomography (LORETA). Statistical non-parametric mapping was used to estimate the current density differences between the brain sources of the F-complex and the MMN. Occipito-parietal regions and prefrontal regions were associated with the F-complex in all formant locations, whereas the vicinity of the supratemporal plane was bilaterally associated with the MMN, but only in case of front-fusion (no duplex effect). MMN is sensitive to the novelty of the auditory object in relation to other stimuli in a sequence, whereas the F-complex is sensitive to the acoustic features of the auditory object and reflects a process of matching them with target categories. The F-complex and MMN reflect different aspects of auditory processing in a stimulus-rich and changing environment: content analysis of the stimulus and novelty detection, respectively.
Towards User-Friendly Spelling with an Auditory Brain-Computer Interface: The CharStreamer Paradigm
Höhne, Johannes; Tangermann, Michael
2014-01-01
Realizing the decoding of brain signals into control commands, brain-computer interfaces (BCI) aim to establish an alternative communication pathway for locked-in patients. In contrast to most visual BCI approaches, which use event-related potentials (ERP) of the electroencephalogram, auditory BCI systems are challenged with ERP responses that are less class-discriminant between attended and unattended stimuli. Furthermore, these auditory approaches have more complex interfaces, which impose a substantial workload on their users. Aiming for a maximally user-friendly spelling interface, this study introduces a novel auditory paradigm: “CharStreamer”. The speller can be used with an instruction as simple as “please attend to what you want to spell”. The stimuli of CharStreamer comprise 30 spoken sounds of letters and actions. As each of them is represented by the sound of itself and not by an artificial substitute, it can be selected in a one-step procedure. The mental mapping effort (sound stimuli to actions) is thus minimized. Usability is further accounted for by an alphabetical stimulus presentation: contrary to random presentation orders, the user can foresee the presentation time of the target letter sound. Healthy, normal-hearing users (n = 10) of the CharStreamer paradigm displayed ERP responses that systematically differed between target and non-target sounds. Class-discriminant features, however, varied individually from the typical N1-P2 complex and P3 ERP components found in control conditions with random sequences. To fully exploit the sequential presentation structure of CharStreamer, novel data analysis approaches and classification methods were introduced. The results of online spelling tests showed that a competitive spelling speed can be achieved with CharStreamer. With respect to user rating, it clearly outperforms a control setup with random presentation sequences. PMID:24886978
Kell, Christian A; Neumann, Katrin; Behrens, Marion; von Gudenberg, Alexander W; Giraud, Anne-Lise
2018-03-01
We previously reported speaking-related activity changes associated with assisted recovery induced by a fluency shaping therapy program and unassisted recovery from developmental stuttering (Kell et al., Brain 2009). While assisted recovery re-lateralized activity to the left hemisphere, unassisted recovery was specifically associated with the activation of the left BA 47/12 in the lateral orbitofrontal cortex. These findings suggested plastic changes in speaking-related functional connectivity between left hemispheric speech network nodes. We reanalyzed these data involving 13 stuttering men before and after fluency shaping, 13 men who recovered spontaneously from their stuttering, and 13 male control participants, and examined functional connectivity during overt vs. covert reading by means of psychophysiological interactions computed across left cortical regions involved in articulation control. Persistent stuttering was associated with reduced auditory-motor coupling and enhanced integration of somatosensory feedback between the supramarginal gyrus and the prefrontal cortex. Assisted recovery reduced this hyper-connectivity and increased functional connectivity between the articulatory motor cortex and the auditory feedback processing anterior superior temporal gyrus. In spontaneous recovery, both auditory-motor coupling and integration of somatosensory feedback were normalized. In addition, activity in the left orbitofrontal cortex and superior cerebellum appeared uncoupled from the rest of the speech production network. These data suggest that therapy and spontaneous recovery normalize the left hemispheric speaking-related activity via an improvement of auditory-motor mapping. By contrast, long-lasting unassisted recovery from stuttering is additionally supported by a functional isolation of the superior cerebellum from the rest of the speech production network, through the pivotal left BA 47/12. Copyright © 2017 Elsevier Inc. All rights reserved.
An open-source java platform for automated reaction mapping.
Crabtree, John D; Mehta, Dinesh P; Kouri, Tina M
2010-09-27
This article presents software applications that have been built upon a modular, open-source, reaction mapping library that can be used in both cheminformatics and bioinformatics research. We first describe the theoretical underpinnings and modular architecture of the core software library. We then describe two applications that have been built upon that core. The first is a generic reaction viewer and mapper, and the second classifies reactions according to rules that can be modified by end users with little or no programming skills.
Immediate effects of AAF devices on the characteristics of stuttering: a clinical analysis.
Unger, Julia P; Glück, Christian W; Cholewa, Jürgen
2012-06-01
The present study investigated the immediate effects of altered auditory feedback (AAF) and one Inactive Condition (AAF parameters set to 0) on clinical attributes of stuttering during scripted and spontaneous speech. Two commercially available, portable AAF devices were used to create the combined delayed auditory feedback (DAF) and frequency altered feedback (FAF) effects. Thirty adults who stutter, aged 18-68 years (M=36.5; SD=15.2), participated in this investigation. Each subject produced four sets of 5-min oral reading, three sets of 5-min monologs as well as 10-min dialogs. These speech samples were analyzed to detect changes in descriptive features of stuttering (frequency, duration, speech/articulatory rate, core behaviors) across the various speech samples and within two SSI-4 (Riley, 2009) based severity ratings. A statistically significant difference was found in the frequency of stuttered syllables (%SS) during both Active Device conditions (p=.000) for all speech samples. The most sizable reductions in %SS occurred within scripted speech. In the analysis of stuttering type, it was found that blocks were reduced significantly (Device A: p=.017; Device B: p=.049). To evaluate the impact on severe and mild stuttering, participants were grouped into two SSI-4 based categories: mild and moderate-severe. During the Inactive Condition, those participants within the moderate-severe group (p=.024) showed a statistically significant reduction in overall disfluencies. This result indicates that active AAF parameters alone may not be the sole cause of a fluency-enhancement when using a technical speech aid.
The reader will learn and be able to describe: (1) currently available scientific evidence on the use of altered auditory feedback (AAF) during scripted and spontaneous speech, (2) which characteristics of stuttering are impacted by an AAF device (frequency, duration, core behaviors, speech & articulatory rate, stuttering severity), (3) the effects of an Inactive Condition on people who stutter (PWS) falling into two severity groups, and (4) how the examined participants perceived the use of AAF devices. Copyright © 2012 Elsevier Inc. All rights reserved.
De Ridder, Dirk; Vanneste, Sven; Weisz, Nathan; Londero, Alain; Schlee, Winnie; Elgoyhen, Ana Belen; Langguth, Berthold
2014-07-01
Tinnitus is considered to be an auditory phantom phenomenon, a persistent conscious percept of a salient memory trace, externally attributed, in the absence of a sound source. It is perceived as a phenomenologically unified coherent percept, binding multiple separable clinical characteristics, such as its loudness, the sidedness, the type (pure tone, noise), the associated distress and so on. A theoretical pathophysiological framework capable of explaining all these aspects in one model is highly needed. The model must incorporate both the deafferentation-based neurophysiological models and the dysfunctional noise canceling model, and propose a 'tinnitus core' subnetwork. The tinnitus core can be defined as the minimal set of brain areas that needs to be jointly activated (=subnetwork) for tinnitus to be consciously perceived, devoid of its affective components. The brain areas involved in the other separable characteristics of tinnitus can be retrieved by studies on spontaneous resting state magnetic and electrical activity in people with tinnitus, evaluated for the specific aspect investigated and controlled for other factors. By combining these functional imaging studies with neuromodulation techniques, some of the correlations are turned into causal relationships. From these, a heuristic pathophysiological framework is constructed, integrating the tinnitus perceptual core with the other tinnitus-related aspects. This phenomenologically unified percept of tinnitus can be considered an emergent property of multiple, parallel, dynamically changing and partially overlapping subnetworks, each with a specific spontaneous oscillatory pattern and functional connectivity signature. Communication between these different subnetworks is proposed to occur at hubs, brain areas that are involved in multiple subnetworks simultaneously. These hubs can take part in each separable subnetwork at different frequencies.
Communication between the subnetworks is proposed to occur at discrete oscillatory frequencies. As such, the brain uses multiple nonspecific networks in parallel, each with their own oscillatory signature, that adapt to the context to construct a unified percept possibly by synchronized activation integrated at hubs at discrete oscillatory frequencies. Copyright © 2013 Elsevier Ltd. All rights reserved.
Peripersonal space representation develops independently from visual experience.
Ricciardi, Emiliano; Menicagli, Dario; Leo, Andrea; Costantini, Marcello; Pietrini, Pietro; Sinigaglia, Corrado
2017-12-15
Our daily-life actions are typically driven by vision. When acting upon an object, we need to represent its visual features (e.g. shape, orientation, etc.) and to map them into our own peripersonal space. But what happens with people who have never had any visual experience? How can they map object features into their own peripersonal space? Do they do it differently from sighted agents? To tackle these questions, we carried out a series of behavioral experiments in sighted and congenitally blind subjects. We took advantage of a spatial alignment effect paradigm, which typically refers to a decrease of reaction times when subjects perform an action (e.g., a reach-to-grasp pantomime) congruent with that afforded by a presented object. To systematically examine peripersonal space mapping, we presented visual or auditory affording objects both within and outside subjects' reach. The results showed that sighted and congenitally blind subjects did not differ in mapping objects into their own peripersonal space. Strikingly, this mapping occurred also when objects were presented outside subjects' reach, but within the peripersonal space of another agent. This suggests that (the lack of) visual experience does not significantly affect the development of both one's own and others' peripersonal space representation.
NASA Astrophysics Data System (ADS)
Stokhof, Harry; de Vries, Bregje; Bastiaens, Theo; Martens, Rob
2018-01-01
Student questioning is an important learning strategy, but rare in many classrooms, because teachers have concerns about whether such questions contribute to attaining curricular objectives. Teachers face the challenge of making student questioning effective for learning the curriculum. To address this challenge, a principle-based scenario for guiding effective student questioning was developed and tested for its relevance and practicality in two previous studies. In the scenario, which consists of a sequence of pedagogical activities, mind maps support teachers and students to explore and elaborate upon a core curriculum, by raising, investigating, and exchanging student questions. In this paper, a follow-up study is presented that tested the effectiveness of the scenario on student outcomes in terms of attainment of curricular objectives. Ten teachers and their 231 students participated in the study. Pre- and posttest mind maps were used to measure individual and collective learning outcomes of student questioning. Findings show that a majority of students progressed in learning the core curriculum and elaborated upon it. The findings suggest that visualizing knowledge construction in a shared mind map supports students to learn a core curriculum and to refine their knowledge structures.
Skill dependent audiovisual integration in the fusiform induces repetition suppression.
McNorgan, Chris; Booth, James R
2015-02-01
Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. Copyright © 2014 Elsevier Inc. All rights reserved.
Secchi, Simone; Lauria, Antonio; Cellai, Gianfranco
2017-01-01
Acoustic wayfinding involves using a variety of auditory cues to create a mental map of the surrounding environment. For blind people, these auditory cues become the primary substitute for visual information in order to understand the features of the spatial context and orient themselves. This can include creating sound waves, such as by tapping a cane. This paper reports the results of research on the "acoustic contrast" parameter between paving materials functioning as a cue and the surrounding or adjacent surface functioning as a background. A number of different materials were selected in order to create a test path, and a procedure was defined to verify the ability of blind people to distinguish different acoustic contrasts. A method is proposed for measuring the acoustic contrast generated by the impact of a cane tip on the ground to provide blind people with environmental information on spatial orientation and wayfinding in urban places. Copyright © 2016 Elsevier Ltd. All rights reserved.
Soto-Cerda, Braulio J; Duguid, Scott; Booker, Helen; Rowland, Gordon; Diederichsen, Axel; Cloutier, Sylvie
2014-04-01
The identification of stable QTL for seed quality traits by association mapping of a diverse panel of linseed accessions establishes the foundation for assisted breeding and future fine mapping in linseed. Linseed oil is valued for its food and non-food applications. Modifying its oil content and fatty acid (FA) profiles to meet market needs in a timely manner requires clear understanding of their quantitative trait loci (QTL) architectures, which have received little attention to date. Association mapping is an efficient approach to identify QTL in germplasm collections. In this study, we explored the quantitative nature of seed quality traits including oil content (OIL), palmitic acid, stearic acid, oleic acid, linoleic acid (LIO), linolenic acid (LIN), and iodine value in a flax core collection of 390 accessions assayed with 460 microsatellite markers. The core collection was grown in a modified augmented design at two locations over 3 years and phenotypic data for all seven traits were obtained from all six environments. Significant phenotypic diversity and moderate to high heritability for each trait (0.73-0.99) were observed. Most of the candidate QTL were stable as revealed by multivariate analyses. Nine candidate QTL were identified, varying from one for OIL to three for LIO and LIN. Candidate QTL for LIO and LIN co-localized with QTL previously identified in bi-parental populations and some mapped near genes known to be involved in the FA biosynthesis pathway. Fifty-eight percent of the QTL alleles were absent (private) in the Canadian cultivars, suggesting that the core collection possesses QTL alleles potentially useful to improve seed quality traits. The candidate QTL identified herein will establish the foundation for future marker-assisted breeding in linseed.
Clumps of Cold Stuff Across the Sky
2011-01-11
This map illustrates the numerous star-forming clouds, called cold cores, that the European Space Agency's Planck observatory observed throughout our Milky Way galaxy. Planck detected around 10,000 of these cores, thousands of which had never been seen before.
Mapping carrier diffusion in single silicon core-shell nanowires with ultrafast optical microscopy.
Seo, M A; Yoo, J; Dayeh, S A; Picraux, S T; Taylor, A J; Prasankumar, R P
2012-12-12
Recent success in the fabrication of axial and radial core-shell heterostructures, composed of one or more layers with different properties, on semiconductor nanowires (NWs) has enabled greater control of NW-based device operation for various applications. (1-3) However, further progress toward significant performance enhancements in a given application is hindered by the limited knowledge of carrier dynamics in these structures. In particular, the strong influence of interfaces between different layers in NWs on transport makes it especially important to understand carrier dynamics in these quasi-one-dimensional systems. Here, we use ultrafast optical microscopy (4) to directly examine carrier relaxation and diffusion in single silicon core-only and Si/SiO₂ core-shell NWs with high temporal and spatial resolution in a noncontact manner. This enables us to reveal strong coherent phonon oscillations and experimentally map electron and hole diffusion currents in individual semiconductor NWs for the first time.
Frequency-specific attentional modulation in human primary auditory cortex and midbrain.
Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Poser, Benedikt A; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina
2018-07-01
Paying selective attention to an audio frequency selectively enhances activity within primary auditory cortex (PAC) at the tonotopic site (frequency channel) representing that frequency. Animal PAC neurons achieve this 'frequency-specific attentional spotlight' by adapting their frequency tuning, yet comparable evidence in humans is scarce. Moreover, whether the spotlight operates in human midbrain is unknown. To address these issues, we studied the spectral tuning of frequency channels in human PAC and inferior colliculus (IC), using 7-T functional magnetic resonance imaging (FMRI) and frequency mapping, while participants focused on different frequency-specific sounds. We found that shifts in frequency-specific attention alter the response gain, but not tuning profile, of PAC frequency channels. The gain modulation was strongest in low-frequency channels and varied near-monotonically across the tonotopic axis, giving rise to the attentional spotlight. We observed less prominent, non-tonotopic spatial patterns of attentional modulation in IC. These results indicate that the frequency-specific attentional spotlight in human PAC as measured with FMRI arises primarily from tonotopic gain modulation, rather than adapted frequency tuning. Moreover, frequency-specific attentional modulation of afferent sound processing in human IC seems to be considerably weaker, suggesting that the spotlight diminishes toward this lower-order processing stage. Our study sheds light on how the human auditory pathway adapts to the different demands of selective hearing. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Pollonini, Luca; Olds, Cristen; Abaya, Homer; Bortfeld, Heather; Beauchamp, Michael S; Oghalai, John S
2014-03-01
The primary goal of most cochlear implant procedures is to improve a patient's ability to discriminate speech. To accomplish this, cochlear implants are programmed so as to maximize speech understanding. However, programming a cochlear implant can be an iterative, labor-intensive process that takes place over months. In this study, we sought to determine whether functional near-infrared spectroscopy (fNIRS), a non-invasive neuroimaging method which is safe to use repeatedly and for extended periods of time, can provide an objective measure of whether a subject is hearing normal speech or distorted speech. We used a 140-channel fNIRS system to measure activation within the auditory cortex in 19 normal-hearing subjects while they listened to speech with different levels of intelligibility. Custom software was developed to analyze the data and compute topographic maps from the measured changes in oxyhemoglobin and deoxyhemoglobin concentration. Normal speech reliably evoked the strongest responses within the auditory cortex. Distorted speech produced less region-specific cortical activation. Environmental sounds were used as a control, and they produced the least cortical activation. These data collected using fNIRS are consistent with the fMRI literature and thus demonstrate the feasibility of using this technique to objectively detect differences in cortical responses to speech of different intelligibility. Copyright © 2013 Elsevier B.V. All rights reserved.
Anomal, Renata; de Villers-Sidani, Etienne; Merzenich, Michael M; Panizzutti, Rogerio
2013-01-01
Sensory experience powerfully shapes cortical sensory representations during an early developmental "critical period" of plasticity. In the rat primary auditory cortex (A1), the experience-dependent plasticity is exemplified by significant, long-lasting distortions in frequency representation after mere exposure to repetitive frequencies during the second week of life. In the visual system, the normal unfolding of critical period plasticity is strongly dependent on the elaboration of brain-derived neurotrophic factor (BDNF), which promotes the establishment of inhibition. Here, we tested the hypothesis that BDNF signaling plays a role in the experience-dependent plasticity induced by pure tone exposure during the critical period in the primary auditory cortex. Elvax resin implants filled with either a blocking antibody against BDNF or the BDNF protein were placed on the A1 of rat pups throughout the critical period window. These pups were then exposed to 7 kHz pure tone for 7 consecutive days and their frequency representations were mapped. BDNF blockade completely prevented the shaping of cortical tuning by experience and resulted in poor overall frequency tuning in A1. By contrast, BDNF infusion on the developing A1 amplified the effect of 7 kHz tone exposure compared to control. These results indicate that BDNF signaling participates in the experience-dependent plasticity induced by pure tone exposure during the critical period in A1.
Hart, Kristen M.; Zawada, David G.; Fujisaki, Ikuko; Lidz, Barbara H.
2010-01-01
The loggerhead sea turtle Caretta caretta faces declining nest numbers and bycatches from commercial longline fishing in the southeastern USA. Understanding spatial and temporal habitat-use patterns of these turtles, especially reproductive females in the neritic zone, is critical for guiding management decisions. To assess marine turtle habitat use within the Dry Tortugas National Park (DRTO), we used satellite telemetry to identify core-use areas for 7 loggerhead females inter-nesting and tracked in 2008 and 2009. This effort represents the first tracking of DRTO loggerheads, a distinct subpopulation that is 1 of 7 recently proposed for upgrading from threatened to endangered under the US Endangered Species Act. We also used a rapid, high-resolution, digital imaging system to map benthic habitats in turtle core-use areas (i.e. 50% kernel density zones). Loggerhead females were seasonal residents of DRTO for 19 to 51 d, and individual inter-nesting habitats were located within 1.9 km (2008) and 2.3 km (2009) of the nesting beach and tagging site. The core area common to all tagged turtles was 4.2 km2 in size and spanned a depth range of 7.6 to 11.5 m. Mapping results revealed the diversity and distributions of benthic cover available in the core-use area, as well as a heavily used corridor to/from the nesting beach. This combined tagging-mapping approach shows potential for planning and improving the effectiveness of marine protected areas and for developing spatially explicit conservation plans.
Looking beyond the Boundaries: Time to Put Landmarks Back on the Cognitive Map?
ERIC Educational Resources Information Center
Lew, Adina R.
2011-01-01
Since the proposal of Tolman (1948) that mammals form maplike representations of familiar environments, cognitive map theory has been at the core of debates on the fundamental mechanisms of animal learning and memory. Traditional formulations of cognitive map theory emphasize relations between landmarks and between landmarks and goal locations as…
21st Century Skills Map: World Languages
ERIC Educational Resources Information Center
Partnership for 21st Century Skills, 2011
2011-01-01
This 21st Century Skills Map is the result of hundreds of hours of research, development and feedback from educators and business leaders across the nation. The Partnership for 21st Century Skills has issued this map for the core subject of World Languages. [Funding for this paper was provided by EF Education.]
Leading to a New Paradigm: The Example of Bioregional Mapping.
ERIC Educational Resources Information Center
Shapiro, David W.
1996-01-01
Examines bioregional mapping as an example of how a different system (educational or otherwise) could be designed through shifting the focus of figure-ground gestalts and revisioning core metaphors. Discusses the notions of community and place, the potential for cognitive restructuring, literal and conceptual maps, and the potential of solving…
An Examination of the Effects of Argument Mapping on Students' Memory and Comprehension Performance
ERIC Educational Resources Information Center
Dwyer, Christopher P.; Hogan, Michael J.; Stewart, Ian
2013-01-01
Argument mapping (AM) is a method of visually diagramming arguments to allow for easy comprehension of core statements and relations. A series of three experiments compared argument map reading and construction with hierarchical outlining, text summarisation, and text reading as learning methods by examining subsequent memory and comprehension…
NASA Astrophysics Data System (ADS)
Bellini, A.; Anderson, J.; van der Marel, R. P.; King, I. R.; Piotto, G.; Bedin, L. R.
2017-06-01
We take advantage of the exquisite quality of the Hubble Space Telescope astro-photometric catalog of the core of ωCen presented in the first paper of this series to derive a high-resolution, high-precision, high-accuracy differential-reddening map of the field. The map has a spatial resolution of 2 × 2 arcsec² over a total field of view of about 4.3 × 4.3 arcmin. The differential reddening itself is estimated via an iterative procedure using five distinct color-magnitude diagrams, which provided consistent results to within the 0.1% level. Assuming an average reddening value E(B - V) = 0.12, the differential reddening within the cluster’s core can vary by up to ±10%, with a typical standard deviation of about 4%. Our differential-reddening map is made available to the astronomical community in the form of a multi-extension FITS file. This differential-reddening map is essential for a detailed understanding of the multiple stellar populations of ωCen, as presented in the next paper in this series. Moreover, it provides unique insight into the level of small spatial-scale extinction variations in the Galactic foreground. Based on archival observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.
Processing of harmonics in the lateral belt of macaque auditory cortex.
Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer; Rauschecker, Josef P
2014-01-01
Many speech sounds and animal vocalizations contain components, referred to as complex tones, that consist of a fundamental frequency (F0) and higher harmonics. In this study we examined single-unit activity recorded in the core (A1) and lateral belt (LB) areas of auditory cortex in two rhesus monkeys as they listened to pure tones and pitch-shifted conspecific vocalizations ("coos"). The latter consisted of complex-tone segments in which F0 was matched to a corresponding pure-tone stimulus. In both animals, neuronal latencies to pure-tone stimuli at the best frequency (BF) were ~10 to 15 ms longer in LB than in A1. This might be expected, since LB is considered to be at a hierarchically higher level than A1. On the other hand, the latency of LB responses to coos was ~10 to 20 ms shorter than to the corresponding pure-tone BF, suggesting facilitation in LB by the harmonics. This latency reduction by coos was not observed in A1, resulting in similar coo latencies in A1 and LB. Multi-peaked neurons were present in both A1 and LB; however, harmonically-related peaks were observed in LB for both early and late response components, whereas in A1 they were observed only for late components. Our results suggest that harmonic features, such as relationships between specific frequency intervals of communication calls, are processed at relatively early stages of the auditory cortical pathway, but preferentially in LB.
Brainstem transcription of speech is disrupted in children with autism spectrum disorders
Russo, Nicole; Nicol, Trent; Trommer, Barbara; Zecker, Steve; Kraus, Nina
2009-01-01
Language impairment is a hallmark of autism spectrum disorders (ASD). The origin of the deficit is poorly understood although deficiencies in auditory processing have been detected in both perception and cortical encoding of speech sounds. Little is known about the processing and transcription of speech sounds at earlier (brainstem) levels or about how background noise may impact this transcription process. Unlike cortical encoding of sounds, brainstem representation preserves stimulus features with a degree of fidelity that enables a direct link between acoustic components of the speech syllable (e.g., onsets) to specific aspects of neural encoding (e.g., waves V and A). We measured brainstem responses to the syllable /da/, in quiet and background noise, in children with and without ASD. Children with ASD exhibited deficits in both the neural synchrony (timing) and phase locking (frequency encoding) of speech sounds, despite normal click-evoked brainstem responses. They also exhibited reduced magnitude and fidelity of speech-evoked responses and inordinate degradation of responses by background noise in comparison to typically developing controls. Neural synchrony in noise was significantly related to measures of core and receptive language ability. These data support the idea that abnormalities in the brainstem processing of speech contribute to the language impairment in ASD. Because it is both passively-elicited and malleable, the speech-evoked brainstem response may serve as a clinical tool to assess auditory processing as well as the effects of auditory training in the ASD population. PMID:19635083
Ethridge, Lauren E; White, Stormi P; Mosconi, Matthew W; Wang, Jun; Pedapati, Ernest V; Erickson, Craig A; Byerly, Matthew J; Sweeney, John A
2017-01-01
Studies in the fmr1 KO mouse demonstrate hyper-excitability and increased high-frequency neuronal activity in sensory cortex. These abnormalities may contribute to prominent and distressing sensory hypersensitivities in patients with fragile X syndrome (FXS). The current study investigated functional properties of auditory cortex using a sensory entrainment task in FXS. EEG recordings were obtained from 17 adolescents and adults with FXS and 17 age- and sex-matched healthy controls. Participants heard an auditory chirp stimulus generated using a 1000-Hz tone that was amplitude modulated by a sinusoid linearly increasing in frequency from 0-100 Hz over 2 s. Single trial time-frequency analyses revealed decreased gamma band phase-locking to the chirp stimulus in FXS, which was strongly coupled with broadband increases in gamma power. Abnormalities in gamma phase-locking and power were also associated with theta-gamma amplitude-amplitude coupling during the pre-stimulus period and with parent reports of heightened sensory sensitivities and social communication deficits. This represents the first demonstration of neural entrainment alterations in FXS patients and suggests that fast-spiking interneurons regulating synchronous high-frequency neural activity have reduced functionality. This reduced ability to synchronize high-frequency neural activity was related to the total power of background gamma band activity. These observations extend findings from fmr1 KO models of FXS, characterize a core pathophysiological aspect of FXS, and may provide a translational biomarker strategy for evaluating promising therapeutics.
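The chirp stimulus described above — a 1000-Hz tone amplitude-modulated by a sinusoid whose frequency ramps linearly from 0 to 100 Hz over 2 s — can be sketched in a few lines. The sampling rate and the 0-to-1 envelope scaling are assumptions not stated in the abstract.

```python
import math

FS = 44_100         # sampling rate in Hz (assumed; not specified in the abstract)
DUR = 2.0           # stimulus duration in seconds
F_CARRIER = 1000.0  # carrier tone frequency (Hz)
F_MOD_END = 100.0   # modulation frequency sweeps linearly from 0 to 100 Hz

def chirp_stimulus():
    """Amplitude-modulated tone: a 1000-Hz carrier whose envelope is a sinusoid
    sweeping linearly from 0 to 100 Hz over 2 s.

    For a linear sweep f(t) = k*t, the modulator's instantaneous phase is the
    integral 2*pi * k*t^2/2 = pi*k*t^2 (sweep rate k = 100/2 = 50 Hz/s)."""
    k = F_MOD_END / DUR
    samples = []
    for n in range(int(FS * DUR)):
        t = n / FS
        carrier = math.sin(2 * math.pi * F_CARRIER * t)
        env = 0.5 * (1.0 + math.sin(math.pi * k * t * t))  # envelope in [0, 1]
        samples.append(env * carrier)
    return samples
```

Driving an entrainment analysis with such a stimulus lets each moment of the 2-s sweep probe a different modulation frequency, so phase-locking can be read out across the whole 0-100 Hz range from a single trial type.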
Listening to Rhythmic Music Reduces Connectivity within the Basal Ganglia and the Reward System.
Brodal, Hans P; Osnes, Berge; Specht, Karsten
2017-01-01
Music can trigger emotional responses in a more direct way than any other stimulus. In particular, music-evoked pleasure involves brain networks that are part of the reward system. Furthermore, rhythmic music stimulates the basal ganglia and may trigger involuntary movements to the beat. In the present study, we created a continuously playing rhythmic, dance floor-like composition where the ambient noise from the MR scanner was incorporated as an additional instrument of rhythm. By treating this continuous stimulation paradigm as a variant of resting-state, the data was analyzed with stochastic dynamic causal modeling (sDCM), which was used for exploring functional dependencies and interactions between core areas of auditory perception, rhythm processing, and reward processing. The sDCM model was a fully connected model with the following areas: auditory cortex, putamen/pallidum, and ventral striatum/nucleus accumbens of both hemispheres. The resulting estimated parameters were compared to ordinary resting-state data, without an additional continuous stimulation. Besides reduced connectivity within the basal ganglia, the results indicated a reduced functional connectivity of the reward system, namely the right ventral striatum/nucleus accumbens from and to the basal ganglia and auditory network, while listening to rhythmic music. In addition, the right ventral striatum/nucleus accumbens also demonstrated a change in its hemodynamic parameter, reflecting an increased level of activation. These converging results may indicate that the dopaminergic reward system reduces its functional connectivity and relinquishes its constraints on other areas when we listen to rhythmic music.
Markl, Daniel; Wahl, Patrick; Pichler, Heinz; Sacher, Stephan; Khinast, Johannes G
2018-01-30
This study demonstrates the use of optical coherence tomography (OCT) to simultaneously characterize the roughness of the tablet core and coating of pharmaceutical tablets. OCT is a high-resolution, non-destructive, and contactless imaging methodology to characterize structural properties of solid dosage forms. Besides measuring the coating thickness, it also facilitates the analysis of the tablet core and coating roughness. An automated data evaluation algorithm extracts information about coating thickness, as well as tablet core and coating roughness. Samples removed periodically from a pan coating process were investigated, on the basis of thickness and profile maps of the tablet core and coating computed from about 480,000 depth measurements (i.e., 3D data) per sample. These data enable the calculation of the root mean square deviation, the skewness and the kurtosis of the assessed profiles. Analyzing these roughness parameters revealed that, for the given coating formulation, small valleys in the tablet core are filled with coating, whereas coarse features of the tablet core are still visible on the final film-coated tablet. Moreover, the impact of the tablet core roughness on the coating thickness is analyzed by correlating the tablet core profile and the coating thickness map. The presented measurement method and processing could in the future be transferred to in-line OCT measurements, to investigate core and coating roughness during the production of film-coated tablets. Copyright © 2017. Published by Elsevier B.V.
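The three roughness parameters extracted from the OCT profiles — root mean square deviation, skewness, and kurtosis — follow standard moment-based surface-metrology definitions, sketched here. The function name is hypothetical and the paper's exact estimator details are not given in the abstract; this is just the textbook form of the three statistics.

```python
import math

def roughness_params(profile):
    """Roughness statistics of a height profile, taken about its mean line:
    Rq  = root mean square deviation (second moment),
    Rsk = skewness (third moment, normalized by Rq^3),
    Rku = kurtosis (fourth moment, normalized by Rq^4)."""
    n = len(profile)
    mean = sum(profile) / n
    dev = [z - mean for z in profile]          # deviations from the mean line
    rq = math.sqrt(sum(d * d for d in dev) / n)
    rsk = sum(d ** 3 for d in dev) / (n * rq ** 3)
    rku = sum(d ** 4 for d in dev) / (n * rq ** 4)
    return rq, rsk, rku
```

For a pure sinusoidal profile sampled over whole periods this returns Rq ≈ 0.707 × amplitude, Rsk ≈ 0 (symmetric about the mean line), and Rku ≈ 1.5; skewness distinguishes valley-dominated from peak-dominated surfaces, which is exactly the distinction behind the observation that coating fills small core valleys.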
ERIC Educational Resources Information Center
Dwyer, Christopher P.; Hogan, Michael J.; Stewart, Ian
2010-01-01
The current study compared the effects on comprehension and memory of learning via text versus learning via argument map. Argument mapping is a method of diagrammatic representation of arguments designed to simplify the reading of an argument structure and allow for easy assimilation of core propositions and relations. In the current study, 400…
Auditory-motor learning influences auditory memory for music.
Brown, Rachel M; Palmer, Caroline
2012-05-01
In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.
Conde, Tatiana; Gonçalves, Oscar F; Pinheiro, Ana P
2016-01-01
Auditory verbal hallucinations (AVH) are a core symptom of schizophrenia. Like "real" voices, AVH carry a rich amount of linguistic and paralinguistic cues that convey not only speech information but also affect and identity information. Disturbed processing of voice identity, affective, and speech information has been reported in patients with schizophrenia. More recent evidence has suggested a link between voice-processing abnormalities and specific clinical symptoms of schizophrenia, especially AVH. It is still not well understood, however, to what extent these dimensions are impaired and how abnormalities in these processes might contribute to AVH. In this review, we consider behavioral, neuroimaging, and electrophysiological data to investigate the speech, identity, and affective dimensions of voice processing in schizophrenia, and we discuss how abnormalities in these processes might help to elucidate the mechanisms underlying specific phenomenological features of AVH. Schizophrenia patients exhibit behavioral and neural disturbances in the three dimensions of voice processing. Evidence suggesting a role of dysfunctional voice processing in AVH seems to be stronger for the identity and speech dimensions than for the affective domain.
Dissociable meta-analytic brain networks contribute to coordinated emotional processing.
Riedel, Michael C; Yanes, Julio A; Ray, Kimberly L; Eickhoff, Simon B; Fox, Peter T; Sutherland, Matthew T; Laird, Angela R
2018-06-01
Meta-analytic techniques for mining the neuroimaging literature continue to exert an impact on our conceptualization of functional brain networks contributing to human emotion and cognition. Traditional theories regarding the neurobiological substrates contributing to affective processing are shifting from regional- towards more network-based heuristic frameworks. To elucidate differential brain network involvement linked to distinct aspects of emotion processing, we applied an emergent meta-analytic clustering approach to the extensive body of affective neuroimaging results archived in the BrainMap database. Specifically, we performed hierarchical clustering on the modeled activation maps from 1,747 experiments in the affective processing domain, resulting in five meta-analytic groupings of experiments demonstrating whole-brain recruitment. Behavioral inference analyses conducted for each of these groupings suggested dissociable networks supporting: (1) visual perception within primary and associative visual cortices, (2) auditory perception within primary auditory cortices, (3) attention to emotionally salient information within insular, anterior cingulate, and subcortical regions, (4) appraisal and prediction of emotional events within medial prefrontal and posterior cingulate cortices, and (5) induction of emotional responses within amygdala and fusiform gyri. These meta-analytic outcomes are consistent with a contemporary psychological model of affective processing in which emotionally salient information from perceived stimuli are integrated with previous experiences to engender a subjective affective response. This study highlights the utility of using emergent meta-analytic methods to inform and extend psychological theories and suggests that emotions are manifest as the eventual consequence of interactions between large-scale brain networks. © 2018 Wiley Periodicals, Inc.
Multimodality language mapping in patients with left-hemispheric language dominance on Wada test
Kojima, Katsuaki; Brown, Erik C.; Rothermel, Robert; Carlson, Alanna; Matsuzaki, Naoyuki; Shah, Aashit; Atkinson, Marie; Mittal, Sandeep; Fuerst, Darren; Sood, Sandeep; Asano, Eishi
2012-01-01
Objective We determined the utility of electrocorticography (ECoG) and stimulation for detecting language-related sites in patients with left-hemispheric language-dominance on Wada test. Methods We studied 13 epileptic patients who underwent language mapping using event-related gamma-oscillations on ECoG and stimulation via subdural electrodes. Sites showing significant gamma-augmentation during an auditory-naming task were defined as language-related ECoG sites. Sites at which stimulation resulted in auditory perceptual changes, failure to verbalize a correct answer, or sensorimotor symptoms involving the mouth were defined as language-related stimulation sites. We determined how frequently these methods revealed language-related sites in the superior-temporal, inferior-frontal, dorsolateral-premotor, and inferior-Rolandic regions. Results Language-related sites in the superior-temporal and inferior-frontal gyri were detected by ECoG more frequently than stimulation (p < 0.05), while those in the dorsolateral-premotor and inferior-Rolandic regions were detected by both methods equally. Stimulation of language-related ECoG sites, compared to the others, more frequently elicited language symptoms (p < 0.00001). One patient developed dysphasia requiring in-patient speech therapy following resection of the dorsolateral-premotor and inferior-Rolandic regions containing language-related ECoG sites not otherwise detected by stimulation. Conclusions Language-related gamma-oscillations may serve as an alternative biomarker of underlying language function in patients with left-hemispheric language-dominance. Significance Measurement of language-related gamma-oscillations is warranted in presurgical evaluation of epileptic patients. PMID:22503906
Phonological and orthographic influences in the bouba-kiki effect.
Cuskley, Christine; Simner, Julia; Kirby, Simon
2017-01-01
We examine a high-profile phenomenon known as the bouba-kiki effect, in which non-word names are assigned to abstract shapes in systematic ways (e.g. rounded shapes are preferentially labelled bouba over kiki). In a detailed evaluation of the literature, we show that most accounts of the effect point to predominantly or entirely iconic cross-sensory mappings between acoustic or articulatory properties of sound and shape as the mechanism underlying the effect. However, these accounts have tended to confound the acoustic or articulatory properties of non-words with another fundamental property: their written form. We compare traditional accounts of direct audio or articulatory-visual mapping with an account in which the effect is heavily influenced by matching between the shapes of graphemes and the abstract shape targets. The results of our two studies suggest that the dominant mechanism underlying the effect for literate subjects is matching based on aligning letter curvature and shape roundedness (i.e. non-words with curved letters are matched to round shapes). We show that letter curvature is strong enough to significantly influence word-shape associations even in auditory tasks, where written word forms are never presented to participants. However, we also find an additional phonological influence in that voiced sounds are preferentially linked with rounded shapes, although this arises only in a purely auditory word-shape association task. We conclude that many previous investigations of the bouba-kiki effect may not have given appropriate consideration or weight to the influence of orthography among literate subjects.
El Bakkali, Ahmed; Haouane, Hicham; Moukhli, Abdelmajid; Costes, Evelyne; Van Damme, Patrick; Khadari, Bouchaib
2013-01-01
Phenotypic characterisation of germplasm collections is a decisive step towards association mapping analyses, but it is particularly expensive and tedious for woody perennial plant species. Characterisation could be more efficient if focused on a reasonably sized subset of accessions, a so-called core collection (CC), reflecting the geographic origin and variability of the germplasm. The questions that arise concern the sample size to use and the genetic parameters that should be optimized in a core collection to make it suitable for association mapping. Here we investigated these questions in olive (Olea europaea L.), a perennial fruit species. By testing different sampling methods and sizes in a worldwide olive germplasm bank (OWGB Marrakech, Morocco) containing 502 unique genotypes characterized by nuclear and plastid loci, a two-step sampling method was proposed. The Shannon-Weaver diversity index was found to be the best criterion to be maximized in the first step using the Core Hunter program. A primary core collection of 50 entries (CC50) was defined that captured more than 80% of the diversity. The latter was subsequently used as a kernel with the Mstrat program to capture the remaining diversity. Two hundred core collections of 94 entries (CC94) were thus built for flexibility in the choice of varieties to be studied. Most entries of both core collections (CC50 and CC94) proved to be unrelated, as indicated by low kinship coefficients, whereas a genetic structure spanning the eastern and western/central Mediterranean regions was noted. Linkage disequilibrium was observed in CC94, which was mainly explained by a genetic structure effect as noted for OWGB Marrakech. Since they reflect the geographic origin and diversity of olive germplasm and are of reasonable size, both core collections will be of major interest to develop long-term association studies and thus enhance genomic selection in olive species. PMID:23667437
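The Shannon-Weaver diversity index maximized in the first sampling step is straightforward to compute from allele counts at a locus; a minimal sketch with illustrative alleles, not the olive genotype data:

```python
import math
from collections import Counter

def shannon_weaver(alleles):
    """Shannon-Weaver diversity index H' = -sum(p_i * ln p_i), where p_i are
    the observed allele frequencies at one locus."""
    counts = Counter(alleles)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Four equally frequent alleles give the maximum possible H' = ln(4).
h = shannon_weaver(["A", "B", "C", "D"] * 25)
```

A core-collection sampler of the kind described would average this index over all genotyped loci and search for the accession subset that maximizes the average.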
VARIATIONS IN MINERAL MATTER CONTENT OF A PEAT DEPOSIT IN MAINE RESTING ON GLACIO-MARINE SEDIMENTS.
Cameron, Cornelia C.; Schruben, Paul
1983-01-01
The Great Heath, Washington County, Maine, is an excellent example of a multidomed ombrotrophic peatland resting on a gently undulating surface of glacio-marine sediments and towering above modern streams. A comprehensive study sponsored by the Geological Survey of Maine in cooperation with the U. S. Geological Survey included preparation of a contoured surficial geology map on which are located 81 core sites. Eight cross sections accompany the map showing occurrence and thickness of three types of organic material and locations of cored sample analyses. Refs.
Fast and slow transitions in frontal ensemble activity during flexible sensorimotor behavior.
Siniscalchi, Michael J; Phoumthipphavong, Victoria; Ali, Farhan; Lozano, Marc; Kwan, Alex C
2016-09-01
The ability to shift between repetitive and goal-directed actions is a hallmark of cognitive control. Previous studies have reported that adaptive shifts in behavior are accompanied by changes of neural activity in frontal cortex. However, neural and behavioral adaptations can occur at multiple time scales, and their relationship remains poorly defined. Here we developed an adaptive sensorimotor decision-making task for head-fixed mice, requiring them to shift flexibly between multiple auditory-motor mappings. Two-photon calcium imaging of secondary motor cortex (M2) revealed different ensemble activity states for each mapping. When adapting to a conditional mapping, transitions in ensemble activity were abrupt and occurred before the recovery of behavioral performance. By contrast, gradual and delayed transitions accompanied shifts toward repetitive responding. These results demonstrate distinct ensemble signatures associated with the start versus end of sensory-guided behavior and suggest that M2 leads in engaging goal-directed response strategies that require sensorimotor associations.
Visual-auditory integration for visual search: a behavioral study in barn owls
Hazan, Yael; Kra, Yonatan; Yarin, Inna; Wagner, Hermann; Gutfreund, Yoram
2015-01-01
Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine if the integration of visual and auditory signals contributes to the process of guiding attention toward salient stimuli. We attached miniature wireless video cameras on barn owls’ heads (OwlCam) to track their target of gaze. We first provide evidence that the area centralis (a retinal area with a maximal density of photoreceptors) is used as a functional fovea in barn owls. Thus, by mapping the projection of the area centralis on the OwlCam’s video frame, it is possible to extract the target of gaze. For the experiment, owls were positioned on a high perch and four food items were scattered in a large arena on the floor. In addition, a hidden loudspeaker was positioned in the arena. The positions of the food items and speaker were changed every session. Video sequences from the OwlCam were saved for offline analysis while the owls spontaneously scanned the room and the food items with abrupt gaze shifts (head saccades). From time to time during the experiment, a brief sound was emitted from the speaker. The fixation points immediately following the sounds were extracted and the distances between the gaze position and the nearest items and loudspeaker were measured. The head saccades were rarely toward the location of the sound source but to salient visual features in the room, such as the door knob or the food items. However, among the food items, the one closest to the loudspeaker had the highest probability of attracting a gaze shift. This result supports the notion that auditory signals are integrated with visual information for the selection of the next visual search target. PMID:25762905
NASA Astrophysics Data System (ADS)
Mekik, F.
2016-12-01
Paleoceanographic work is based on calibrating paleo-environmental proxies using well-preserved core top sediments which represent the last one thousand years or less. However, core top sediments may be in places as old as 9000 years due to various sedimentary and diagenetic processes, such as chemical erosion, bioturbation and lateral sediment redistribution. We hypothesize that in regions with high surface ocean productivity, high organic carbon to calcite ratios reaching the seabed promote calcite dissolution in sediments, even in regions above the lysocline. This process may lead to chemical erosion of core tops which in turn may result in core top aging. The eastern equatorial Pacific (EEP), a popular site for calibration of paleoceanographic proxies, is such a place. Better understanding the relationship between core top age and dissolution will help correct biases inherent in proxy calibration because dissolution of foraminifers alters shell chemistry, and wholesale dissolution of sediments leads to core top aging and loss. We present both new and literature-based core top data of radiocarbon ages from the EEP. We created regional maps of both core top radiocarbon age and calcite preservation measured with the Globorotalia menardii Fragmentation Index (MFI; over 100 core tops). Our maps show a clear pattern of deep sea sedimentary calcite dissolution mimicking the pattern of surface ocean productivity observed from satellites and sediment traps in the EEP. Core top radiocarbon ages generally parallel the dissolution patterns observed in the region. Where this relationship does not hold true, bioturbation and/or lateral sediment redistribution may play a role. Down core radiocarbon and 230Th-normalized sediment accumulation rate data from several cores in the EEP support this hypothesis. Better understanding the role of diagenesis promotes the development of more reliable paleo-environmental proxies.
Mapping of a standard documentation template to the ICF core sets for arthritis and low back pain.
Escorpizo, Reuben; Davis, Kandace; Stumbo, Teri
2010-12-01
To identify the contents of a documentation template in The Guide to Physical Therapist Practice using the International Classification of Functioning, Disability, and Health (ICF) Core Sets for rheumatoid arthritis, osteoarthritis, and low back pain (LBP) as reference. Concepts were identified from items of an outpatient documentation template and mapped to the ICF using established linking rules. The ICF categories that were linked were compared with existing arthritis and LBP Core Sets. Based on the ICF, the template had the highest number (29%) of linked categories under Activities and participation while Body structures had the least (17%). ICF categories in the arthritis and LBP Core Sets had a 37-55% match with the ICF categories found in the template. We found 164 concepts that were not classified or not defined and 37 as personal factors. The arthritis and LBP Core Sets were reflected in the contents of the template. ICF categories in the Core Sets were reflected in the template (demonstrating up to 55% match). Potential integration of ICF in documentation templates could be explored and examined in the future to enhance clinical encounters and multidisciplinary communication. Copyright © 2010 John Wiley & Sons, Ltd.
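The reported match percentages are set overlaps between the ICF categories linked from the template and those in a given Core Set; a minimal sketch (the category codes below are illustrative placeholders, not the study's actual mapping):

```python
def coreset_match(linked_categories, core_set):
    """Percentage of ICF Core Set categories that also appear among the
    categories linked from the documentation template."""
    core = set(core_set)
    return 100.0 * len(core & set(linked_categories)) / len(core)

# Hypothetical ICF category codes, for illustration only.
linked = {"b280", "b710", "d450", "d455", "e310"}
lbp_core = {"b280", "b710", "d450", "d540"}
match = coreset_match(linked, lbp_core)  # 3 of 4 Core Set categories matched
```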
Feasibility of and Design Parameters for a Computer-Based Attitudinal Research Information System
1975-08-01
Auditory Displays, Auditory Evoked Potentials, Auditory Feedback, Auditory Hallucinations, Auditory Localization, Auditory Masking, Auditory Neurons… surprising to hear these problems expressed once again and in the same old refrain. The Navy attitude surveyors were frustrated when they… Audiology, Audiometers, Audiometry, Audiotapes, Audiovisual Communications Media, Audiovisual Instruction, Auditory Cortex, Auditory…
Jiang, Guoqian; Kiefer, Richard; Prud'hommeaux, Eric; Solbrig, Harold R
2017-01-01
The OHDSI Common Data Model (CDM) is a deep information model, in which its vocabulary component plays a critical role in enabling consistent coding and query of clinical data. The objective of the study is to create methods and tools to expose the OHDSI vocabularies and mappings as the vocabulary mapping services using two HL7 FHIR core terminology resources ConceptMap and ValueSet. We discuss the benefits and challenges in building the FHIR-based terminology services.
Mapping the literature of nurse practitioners.
Shams, Marie-Lise Antoun
2006-04-01
This study was designed to identify core journals for the nurse practitioner specialty and to determine the extent of their indexing in bibliographic databases. As part of a larger project for mapping the literature of nursing, this study followed a common methodology based on citation analysis. Four journals designated by nurse practitioners as sources for their practice information were selected. All cited references were analyzed to determine format types and publication years. Bradford's Law of Scattering was applied to identify core journals. Nine bibliographic databases were searched to estimate the index coverage of the core titles. The findings indicate that nurse practitioners rely primarily on journals (72.0%) followed by books (20.4%) for their professional knowledge. The majority of the identified core journals belong to non-nursing disciplines. This is reflected in the indexing coverage results: PubMed/MEDLINE more comprehensively indexes the core titles than CINAHL does. Nurse practitioners, as primary care providers, consult medical as well as nursing sources for their information. The implications of the citation analysis findings are significant for collection development librarians and indexing services.
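Bradford's Law of Scattering, as applied in studies like this one, ranks journals by citation count and takes as the "core" the top-ranked titles that together account for roughly a third of all citations; a minimal sketch with invented citation data:

```python
from collections import Counter

def bradford_core(citations, zones=3):
    """Rank journals by citation count and return the core zone: the
    top-ranked journals jointly accounting for about 1/zones of all
    citations (Bradford's Law of Scattering)."""
    counts = Counter(citations)
    target = sum(counts.values()) / zones
    core, running = [], 0
    for journal, n in counts.most_common():
        core.append(journal)
        running += n
        if running >= target:
            break
    return core

# Invented citation list: one heavily cited title dominates the core zone.
cites = (["J Nurse Pract"] * 50 + ["JAMA"] * 30 + ["NEJM"] * 15
         + ["Misc A"] * 3 + ["Misc B"] * 2)
core = bradford_core(cites)
```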
NASA Astrophysics Data System (ADS)
Williams, Michael L.; Jercinovic, Michael J.; Terry, Michael P.
1999-11-01
High-resolution X-ray mapping and dating of monazite on the electron microprobe are powerful geochronological tools for structural, metamorphic, and tectonic analysis. X-ray maps commonly show complex Th, U, and Pb zoning that reflects monazite growth and overgrowth events. Age maps constructed from the X-ray maps simplify the zoning and highlight age domains. Microprobe dating offers a rapid, in situ method for estimating ages of mapped domains. Application of these techniques has placed new constraints on the tectonic history of three areas. In western Canada, age mapping has revealed multiphase monazite, with older cores and younger rims, included in syntectonic garnet. Microprobe ages show that tectonism occurred ca. 1.9 Ga, 700 m.y. later than mylonitization in the adjacent Snowbird tectonic zone. In New Mexico, age mapping and dating show that the dominant fabric and triple-point metamorphism occurred during a 1.4 Ga reactivation, not during the 1.7 Ga Yavapai-Mazatzal orogeny. In Norway, monazite inclusions in garnet constrain high-pressure metamorphism to ca. 405 Ma, and older cores indicate a previously unrecognized component of ca. 1.0 Ga monazite. In all three areas, microprobe dating and age mapping have provided a critical textural context for geochronologic data and a better understanding of the complex age spectra of these multistage orogenic belts.
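Microprobe ("chemical") dating of monazite rests on total Th, U, and Pb concentrations and the standard decay equations, solving for the age at which the measured radiogenic Pb would have accumulated. A minimal sketch of that solution by bisection on the forward model; the decay constants are standard values, but the sample composition below is invented for illustration:

```python
import math

# Decay constants (1/yr) and natural U isotopic fractions.
L232, L238, L235 = 4.9475e-11, 1.55125e-10, 9.8485e-10
F238, F235 = 0.9928, 0.0072

def radiogenic_pb(th, u, t):
    """Moles of radiogenic Pb produced after t years from th moles of Th
    and u moles of natural-composition U."""
    return (th * (math.exp(L232 * t) - 1.0)
            + u * F238 * (math.exp(L238 * t) - 1.0)
            + u * F235 * (math.exp(L235 * t) - 1.0))

def chemical_age(th, u, pb, lo=0.0, hi=5.0e9, tol=1.0e3):
    """Solve radiogenic_pb(th, u, t) == pb for t by bisection; the forward
    model is monotonically increasing in t, so the bracket always closes."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if radiogenic_pb(th, u, mid) < pb:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Forward-model a 1.0 Ga monazite domain, then recover its age.
pb = radiogenic_pb(1.0, 0.1, 1.0e9)
age = chemical_age(1.0, 0.1, pb)
```

Applied per age-domain pixel of an X-ray map, this is what turns Th, U, and Pb maps into the age maps described above.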
A Decline in Response Variability Improves Neural Signal Detection during Auditory Task Performance.
von Trapp, Gardiner; Buran, Bradley N; Sen, Kamal; Semple, Malcolm N; Sanes, Dan H
2016-10-26
The detection of a sensory stimulus arises from a significant change in neural activity, but a sensory neuron's response is rarely identical to successive presentations of the same stimulus. Large trial-to-trial variability would limit the central nervous system's ability to reliably detect a stimulus, presumably affecting perceptual performance. However, if response variability were to decrease while firing rate remained constant, then neural sensitivity could improve. Here, we asked whether engagement in an auditory detection task can modulate response variability, thereby increasing neural sensitivity. We recorded telemetrically from the core auditory cortex of gerbils, both while they engaged in an amplitude-modulation detection task and while they sat quietly listening to the identical stimuli. Using a signal detection theory framework, we found that neural sensitivity was improved during task performance, and this improvement was closely associated with a decrease in response variability. Moreover, units with the greatest change in response variability had absolute neural thresholds most closely aligned with simultaneously measured perceptual thresholds. Our findings suggest that the limitations imposed by response variability diminish during task performance, thereby improving the sensitivity of neural encoding and potentially leading to better perceptual sensitivity. The detection of a sensory stimulus arises from a significant change in neural activity. However, trial-to-trial variability of the neural response may limit perceptual performance. If the neural response to a stimulus is quite variable, then the response on a given trial could be confused with the pattern of neural activity generated when the stimulus is absent. Therefore, a neural mechanism that served to reduce response variability would allow for better stimulus detection. 
By recording from the cortex of freely moving animals engaged in an auditory detection task, we found that variability of the neural response becomes smaller during task performance, thereby improving neural detection thresholds. Copyright © 2016 the authors.
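The signal detection theory framework referenced above quantifies neural sensitivity as d′: the separation of the signal and noise response distributions relative to their pooled variability. A minimal sketch showing how a variance decline alone, with firing rates held constant, raises d′ (the rate and variability numbers are illustrative, not the gerbil data):

```python
import math

def neural_dprime(mu_signal, mu_noise, sd_signal, sd_noise):
    """Sensitivity index d': mean separation of signal and noise response
    distributions in units of their RMS-averaged standard deviation."""
    pooled_sd = math.sqrt(0.5 * (sd_signal**2 + sd_noise**2))
    return (mu_signal - mu_noise) / pooled_sd

# Identical mean firing rates in both conditions; only trial-to-trial
# variability differs between passive listening and task engagement.
d_passive = neural_dprime(20.0, 12.0, 8.0, 8.0)  # listening quietly
d_task = neural_dprime(20.0, 12.0, 4.0, 4.0)     # engaged in detection
```

Halving the standard deviations doubles d′ even though the rates are unchanged, which is the logic behind linking reduced response variability to improved neural detection thresholds.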
Oron, Anna; Szymaszek, Aneta; Szelag, Elzbieta
2015-01-01
Temporal information processing (TIP) underlies many aspects of cognitive functions like language, motor control, learning, memory, attention, etc. Millisecond timing may be assessed by sequencing abilities, e.g. the perception of event order. It may be measured with auditory temporal-order-threshold (TOT), i.e. a minimum time gap separating two successive stimuli necessary for a subject to report their temporal order correctly, thus the relation 'before-after'. Neuropsychological evidence has indicated elevated TOT values (corresponding to deteriorated time perception) in different clinical groups, such as aphasic patients, dyslexic subjects or children with specific language impairment. To test relationships between elevated TOT and declined cognitive functions in brain-injured patients suffering from post-stroke aphasia. We tested 30 aphasic patients (13 male, 17 female), aged between 50 and 81 years. TIP comprised assessment of TOT. Auditory comprehension was assessed with the selected language tests, i.e. Token Test, Phoneme Discrimination Test (PDT) and Voice-Onset-Time Test (VOT), while two aspects of attentional resources (i.e. alertness and vigilance) were measured using the Test of Attentional Performance (TAP) battery. Significant correlations were indicated between elevated values of TOT and deteriorated performance on all applied language tests. Moreover, significant correlations were evidenced between elevated TOT and alertness. Finally, positive correlations were found between particular language tests, i.e. (1) Token Test and PDT; (2) Token Test and VOT Test; and (3) PDT and VOT Test, as well as between PDT and both attentional tasks. These results provide further clinical evidence supporting the thesis that TIP constitutes the core process incorporated in both language and attentional resources. 
The novel contribution of the present study is that it demonstrates, for the first time in speakers of a Slavic language, a clear coexistence of the 'timing-auditory comprehension-attention' relationships. © 2015 Royal College of Speech and Language Therapists.
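A temporal-order threshold (TOT) of the kind measured here can be read off a psychometric function by interpolating to a criterion percent correct; a minimal sketch with hypothetical data (the authors' actual adaptive procedure may differ):

```python
def temporal_order_threshold(gaps_ms, pct_correct, criterion=75.0):
    """Estimate the TOT: the inter-stimulus gap (ms) at which 'before-after'
    order judgments reach the criterion percent correct, by linear
    interpolation between the bracketing measured gaps."""
    pairs = sorted(zip(gaps_ms, pct_correct))
    for (g0, p0), (g1, p1) in zip(pairs, pairs[1:]):
        if p0 <= criterion <= p1:
            return g0 + (criterion - p0) * (g1 - g0) / (p1 - p0)
    raise ValueError("criterion not bracketed by the measured gaps")

# Hypothetical percent-correct order judgments at each tested gap.
gaps = [10, 20, 40, 80, 160]
pcts = [50, 55, 70, 90, 98]
tot = temporal_order_threshold(gaps, pcts)
```

An elevated TOT on this measure corresponds to the deteriorated temporal-order perception reported for the aphasic patients.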
NASA Technical Reports Server (NTRS)
Alsdorf, Douglas E.; Vonfrese, Ralph R. B.
1994-01-01
The FORTRAN programs supplied in this document provide a complete processing package for statistically extracting residual core, external field and lithospheric components in Magsat observations. To process the individual passes: (1) orbits are separated into dawn and dusk local times and by altitude, (2) passes are selected based on the variance of the magnetic field observations after a least-squares fit of the core field is removed from each pass over the study area, and (3) spatially adjacent passes are processed with a Fourier correlation coefficient filter to separate coherent and non-coherent features between neighboring tracks. In the second stage of map processing: (1) data from the passes are normalized to a common altitude and gridded into dawn and dusk maps with least squares collocation, (2) dawn and dusk maps are correlated with a Fourier correlation coefficient filter to separate coherent and non-coherent features; the coherent features are averaged to produce a total field grid, (3) total field grids from all altitudes are continued to a common altitude, correlation filtered for coherent anomaly features, and subsequently averaged to produce the final total field grid for the study region, and (4) the total field map is differentially reduced to the pole.
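A wavenumber correlation filter of the kind described can be sketched by comparing the phase of the Fourier coefficients of adjacent tracks and keeping only wavenumbers whose phase agreement (cosine of the phase difference) exceeds a threshold. This is a simplified single-profile illustration under those assumptions, not the original FORTRAN implementation:

```python
import numpy as np

def coherent_part(track_a, track_b, threshold=0.9):
    """Retain only the wavenumber components of track_a that are coherent
    with the neighboring track_b, judged by the cosine of the phase
    difference of the Fourier coefficients at each wavenumber."""
    A, B = np.fft.rfft(track_a), np.fft.rfft(track_b)
    corr = np.cos(np.angle(A) - np.angle(B))  # per-wavenumber correlation
    mask = corr >= threshold                  # coherent wavenumbers only
    return np.fft.irfft(A * mask, n=len(track_a))

# Two synthetic adjacent tracks: a shared (coherent) low-wavenumber signal
# plus an opposite-phase (non-coherent) high-wavenumber component.
n = 256
x = np.arange(n)
shared = np.sin(2 * np.pi * 4 * x / n)
noise = np.sin(2 * np.pi * 20 * x / n)
cleaned = coherent_part(shared + noise, shared - noise)
```

The opposite-phase component is rejected and the shared signal survives, which is the behavior relied on when separating coherent from non-coherent anomaly features between neighboring Magsat tracks.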
Impact of socioeconomic factors on paediatric cochlear implant outcomes.
Sharma, Shalabh; Bhatia, Khyati; Singh, Satinder; Lahiri, Asish Kumar; Aggarwal, Asha
2017-11-01
The study was aimed at evaluating the impact of certain socioeconomic factors, such as family income, level of parents' education, distance between the child's home and the auditory verbal therapy clinic, and age of the child at implantation, on postoperative cochlear implant outcomes. Children suffering from congenital bilateral profound sensorineural hearing loss and with a chronologic age of 4 years or younger at the time of implantation were included in the study, provided they were able to complete a prescribed 1-year follow-up period. These children underwent cochlear implantation surgery, and their postoperative outcomes were measured and documented using categories of auditory perception (CAP), meaningful auditory integration scale (MAIS), and speech intelligibility rating (SIR) scores. Children were divided into three groups based on the level of parental education, family income, and distance of their home from the rehabilitation (auditory verbal therapy) clinic. A total of 180 children were studied. The age at implantation had a significant impact on the postoperative outcomes, with an inverse correlation: the younger the child's age at the time of implantation, the better were the postoperative outcomes. However, there were no significant differences among the CAP, MAIS, and SIR scores across the three subgroups. Children from families with an annual income of less than $7,500, between $7,500 and $15,000, and more than $15,000 performed equally well, except for significantly higher SIR scores in children with family incomes of more than $15,000. Children of parents who had attended high school or possessed a bachelor's or master's degree had similar scores, with no significant difference. Also, distance from the auditory verbal therapy clinic failed to have any significant impact on a child's performance. These results have been variable, similar to those of previously published studies.
A few of the earlier studies concurred with our results, but most studies had suggested that children from families of higher socioeconomic status have better speech and language acquisition. Cochlear implantation significantly improves auditory perception and speech intelligibility of children suffering from profound sensorineural hearing loss. The younger the age at implantation, the better the results; hence, early implantation should be promoted and encouraged. Our study suggests that children who followed the designated program of postoperative mapping and auditory verbal therapy for a minimum period of 1 year did equally well in terms of hearing perception and speech intelligibility, irrespective of the socioeconomic status of the family. Further studies are essential to assess the impact of these factors on long-term speech acquisition and language development. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Mokhemar, Mary Ann
This kit for assessing central auditory processing disorders (CAPD), in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…
Lunar map showing traverse plans for Apollo 14 lunar landing mission
1970-09-01
This lunar map shows the traverse plans for the Apollo 14 lunar landing mission. Areas marked include Lunar module landing site, areas for the Apollo Lunar Surface Experiment Package (ALSEP) and areas for gathering of core samples.
Generation of a Maize B Centromere Minimal Map Containing the Central Core Domain.
Ellis, Nathanael A; Douglas, Ryan N; Jackson, Caroline E; Birchler, James A; Dawe, R Kelly
2015-10-28
The maize B centromere has been used as a model for centromere epigenetics and as the basis for building artificial chromosomes. However, there are no sequence resources for this important centromere. Here we used transposon display for the centromere-specific retroelement CRM2 to identify a collection of 40 sequence tags that flank CRM2 insertion points on the B chromosome. These were confirmed to lie within the centromere by assaying deletion breakpoints from centromere misdivision derivatives (intracentromere breakages caused by centromere fission). Markers were grouped together on the basis of their association with other markers in the misdivision series and assembled into a pseudocontig containing 10.1 kb of sequence. To identify sequences that interact directly with centromere proteins, we carried out chromatin immunoprecipitation using antibodies to centromeric histone H3 (CENH3), a defining feature of functional centromeric sequences. The CENH3 chromatin immunoprecipitation map was interpreted relative to the known transmission rates of centromere misdivision derivatives to identify a centromere core domain spanning 33 markers. A subset of seven markers was mapped in additional B centromere misdivision derivatives with the use of unique primer pairs. A derivative previously shown to have no canonical centromere sequences (Telo3-3) lacks these core markers. Our results provide a molecular map of the B chromosome centromere and identify key sequences within the map that interact directly with centromeric histone H3. Copyright © 2015 Ellis et al.
[The "aphasia" article in Villaret's Handwörterbuch].
Menninger, Anneliese
2016-01-01
Freud's authorship is founded on three arguments: 1) the reasoning of the article is close to Charcot's lectures, which Freud had just translated; 2) there is a specific Freudian core thesis, common to the article and his later writings, namely the notion of an associative speech area extending between the "motor fields of the cortex and those of the optic and auditory nerves" and touching them like "corners" of a continuous field; 3) general observations on the revision or non-revision of articles taken over from the 1st to the 2nd edition of Villaret.
A soft X-ray map of the Perseus cluster of galaxies
NASA Technical Reports Server (NTRS)
Cash, W.; Malina, R. F.; Wolff, R. S.
1976-01-01
A 0.5-3-keV X-ray map of the Perseus cluster of galaxies is presented. The map shows a region of strong emission centered near NGC 1275 plus a highly elongated emission region which lies along the line of bright galaxies that dominates the core of the cluster. The data are compared with various models that include point and diffuse sources. One model which adequately represents the data is the superposition of a point source at NGC 1275 and an isothermal ellipsoid resulting from the bremsstrahlung emission of cluster gas. The ellipsoid has a major core radius of 20.5 arcmin and a minor core radius of 5.5 arcmin, consistent with the values obtained from galaxy counts. All acceptable models provide evidence for a compact source (less than 3 arcmin FWHM) at NGC 1275 containing about 25% of the total emission. Since the diffuse X-ray and radio components have radically different morphologies, it is unlikely that the emissions arise from a common source, as proposed in inverse-Compton models.
Daliri, Ayoub; Max, Ludo
2018-02-01
Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. 
Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre-speech modulation is not directly related to limited auditory-motor adaptation; and in AWS, DAF paradoxically tends to normalize their otherwise limited pre-speech auditory modulation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Mapping the literature of nursing informatics.
Guenther, Johanna T
2006-04-01
This study was part of the Medical Library Association's Nursing and Allied Health Resources Section's project to map the nursing literature. It identified core journals in nursing informatics and the journals referenced in them and analyzed coverage of those journals in selected indexes. Five core journals were chosen and analyzed for 1996, 1997, and 1998. The references in the core journal articles were examined for type and number of formats cited during the selected time period. Bradford's Law of Scattering divided the journals into frequency zones. The time interval, 1990 to 1998, produced 71% of the references. Internet references could not be tracked by date before 1990. Twelve journals were the most productive, 119 journals were somewhat productive, and 897 journals were the least productive. Journal of the American Medical Informatics Association was the most prolific core journal. The 1998 journal references were compared in CINAHL, PubMed/MEDLINE, Science Citation Index, and OCLC Article First. PubMed/MEDLINE had the highest indexing score.
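The Bradford's Law partitioning described above (a small core zone, a middle zone, and a long tail of journals, each accounting for roughly a third of the references) can be sketched as a cumulative split over ranked journal counts. This is a minimal illustration, not the study's method; the journal names and counts below are hypothetical.

```python
# Sketch of Bradford-style zoning: rank journals by article count, then
# split the ranked list into three zones each covering ~1/3 of all articles.
# Journal names and counts are hypothetical illustration data.

def bradford_zones(journal_counts):
    """Return three lists of journal names: core, middle, and tail zones."""
    ranked = sorted(journal_counts.items(), key=lambda kv: -kv[1])
    total = sum(journal_counts.values())
    zones, current, cumulative = [[], [], []], 0, 0
    for name, count in ranked:
        zones[current].append(name)
        cumulative += count
        # advance once this zone covers another third of the article total
        if cumulative >= (current + 1) * total / 3 and current < 2:
            current += 1
    return zones

counts = {"JAMIA": 90, "CIN": 40, "MD Computing": 30, "J Nursing Admin": 15,
          "Heart Lung": 10, "Image": 8, "Others A": 4, "Others B": 3}
core, middle, tail = bradford_zones(counts)   # core -> ["JAMIA"]
```

With these made-up counts, the single most prolific journal alone fills the core zone, mirroring the skewed distribution Bradford's Law predicts.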
Magnetic space-based field measurements
NASA Technical Reports Server (NTRS)
Langel, R. A.
1981-01-01
Because the near-Earth magnetic field is a complex combination of fields from outside the Earth, fields from its core, and fields from its crust, measurements from space prove to be the only practical way to obtain timely, global surveys. Due to difficulty in making accurate vector measurements, early satellites such as Sputnik and Vanguard measured only the field magnitude. The attitude accuracy was 20 arc sec. Both the Earth's core fields and the fields arising from its crust were mapped from satellite data. The standard model of the core consists of a scalar potential represented by a spherical harmonic series. Models of the crustal field are relatively new. Mathematical representation is achieved in localized areas by arrays of dipoles appropriately located in the Earth's crust. Measurements of the Earth's field are used in navigation, to map charged particles in the magnetosphere, to study fluid properties in the Earth's core, to infer conductivity of the upper mantle, and to delineate regional-scale geological features.
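The spherical harmonic core-field model mentioned above can be illustrated with its lowest-degree term. The sketch below evaluates only the axial dipole (n = 1, m = 0), whose magnitude is F = |g10| (a/r)^3 sqrt(1 + 3 cos^2 θ); the g10 coefficient is a round illustrative number, not a fitted model value.

```python
import math

# Minimal sketch: magnitude of a pure axial-dipole core field, the n=1, m=0
# term of the spherical-harmonic series described in the abstract.
A_EARTH = 6371.2      # reference radius, km
G10 = -29000.0        # axial dipole coefficient, nT (illustrative, not fitted)

def dipole_field_nT(r_km, colat_deg):
    """F = |g10| * (a/r)^3 * sqrt(1 + 3*cos^2(colatitude))."""
    c = math.cos(math.radians(colat_deg))
    return abs(G10) * (A_EARTH / r_km) ** 3 * math.sqrt(1.0 + 3.0 * c * c)

# At the reference radius, the dipole field at the pole is exactly twice
# the equatorial value.
equator = dipole_field_nT(A_EARTH, 90.0)   # -> 29000.0 nT
pole = dipole_field_nT(A_EARTH, 0.0)       # -> 58000.0 nT
```

The full model sums many such terms with Schmidt-normalized associated Legendre functions; the factor-of-two pole/equator ratio is the dipole's textbook signature.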
NASA Technical Reports Server (NTRS)
Carral, Patricia; Welch, William J.
1992-01-01
This study presents high-resolution observations of the molecular core in the star-forming region G34.3 + 0.2. Maps at 6-arcsec resolution of emission and absorption of the J = 1-0 transitions of HCO(+), H(C-13)N, and H(C-15)N, and of the 2(2) - 1(1) transition of SO were obtained, in addition to a map of the 3.4-mm continuum emission from the compact H II component. The HCO(+) emission toward G34.3 + 0.2 traces a warm molecular core about 0.9 pc in size. Emission from H(C-13)N is detected over about 0.3 pc. The cometary H II region lies near the edge of the molecular core. The blueshift of the radio recombination lines with respect to the molecular emission suggests that gas from the H II region is accelerated in a champagne flow caused by a steep gradient in the ambient gas density.
Catalog of Dense Cores in the Orion A Giant Molecular Cloud
NASA Astrophysics Data System (ADS)
Shimajiri, Yoshito; Kitamura, Y.; Nakamura, F.; Momose, M.; Saito, M.; Tsukagoshi, T.; Hiramatsu, M.; Shimoikura, T.; Dobashi, K.; Hara, C.; Kawabe, R.
2015-03-01
We present Orion A giant molecular cloud core catalogs, which are based on a 1.1 mm map with an angular resolution of 36″ (~0.07 pc) and C18O (J = 1-0) data with an angular resolution of 26.4″ (~0.05 pc). We have cataloged 619 dust cores in the 1.1 mm map using the Clumpfind method. The ranges of the radius, mass, and density of these cores are estimated to be 0.01-0.20 pc, 0.6-1.2 × 10² M⊙, and 0.3 × 10⁴-9.2 × 10⁶ cm⁻³, respectively. We have identified 235 cores from the C18O data. The ranges of the radius, velocity width, LTE mass, and density are 0.13-0.34 pc, 0.31-1.31 km s⁻¹, 1.0-61.8 M⊙, and (0.8-17.5) × 10³ cm⁻³, respectively. From the comparison of the spatial distributions between the dust and C18O cores, four types of spatial relations were revealed: (1) the peak positions of the dust and C18O cores agree with each other (32.4% of the C18O cores), (2) two or more C18O cores are distributed around the peak position of one dust core (10.8% of the C18O cores), (3) 56.8% of the C18O cores are not associated with any dust cores, and (4) 69.3% of the dust cores are not associated with any C18O cores. The data sets and analysis are public. The data sets and annotation files for MIRIAD and KARMA of Tables 2 and 4 are available at the NRO star formation project web site via http://th.nao.ac.jp/MEMBER/nakamrfm/sflegacy/data.html
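Core densities like those quoted above follow from mass and radius via a standard mean-density conversion. The sketch below assumes a uniform sphere and a mean mass of 2.8 m_H per H2 molecule (helium included); the catalog's exact assumptions may differ.

```python
import math

# Sketch of the mass-to-density conversion behind quoted core densities:
# mean H2 number density of a uniform sphere of mass M and radius R.
M_SUN = 1.989e33      # solar mass, g
PC = 3.086e18         # parsec, cm
M_H = 1.6726e-24      # hydrogen atom mass, g
MU = 2.8              # assumed mass per H2 molecule, in units of M_H

def mean_density_cm3(mass_msun, radius_pc):
    """Mean n(H2) in cm^-3 for a uniform spherical core."""
    volume = (4.0 / 3.0) * math.pi * (radius_pc * PC) ** 3
    return mass_msun * M_SUN / (volume * MU * M_H)

# A 10 M_sun core of radius 0.1 pc gives ~3e4 cm^-3, comfortably inside the
# catalog's dust-core density range of 0.3e4 to 9.2e6 cm^-3.
n_h2 = mean_density_cm3(10.0, 0.1)
```

Clumpfind-derived radii and masses are plugged into exactly this kind of relation, so density scatter in such catalogs reflects both measured quantities.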
Acquired hearing loss and brain plasticity.
Eggermont, Jos J
2017-01-01
Acquired hearing loss results in an imbalance of the cochlear output across frequency. Central auditory system homeostatic processes responding to this produce frequency-specific gain changes consequent to the emerging imbalance between excitation and inhibition. Several consequences thereof are increased spontaneous firing rates, increased neural synchrony, and, in adults, a reorganization of tonotopic areas that is potentially restricted to the auditory thalamus and cortex. It does not seem to matter much whether the hearing loss is acquired neonatally or in adulthood. In humans, no clear evidence of tonotopic map changes with hearing loss has so far been provided, but frequency-specific gain changes are well documented. Unilateral hearing loss in addition makes brain activity across hemispheres more symmetrical and more synchronous. Molecular studies indicate that in the brainstem, at 2-5 days post trauma, glutamatergic activity is reduced, whereas glycinergic and GABAergic activity is largely unchanged. At 2 months post trauma, excitatory activity remains decreased but inhibitory activity is significantly increased. In contrast, protein assays related to inhibitory transmission are all decreased or unchanged in the brainstem, midbrain, and auditory cortex. Comparison of neurophysiological data with the molecular findings along a time line of changes following noise trauma suggests that increases in spontaneous firing rates are related to decreases in inhibition, and not to increases in excitation. Because noise-induced hearing loss in cats resulted in a loss of cortical temporal processing capabilities, this may also underlie problems with speech understanding in humans. Copyright © 2016 Elsevier B.V. All rights reserved.
Electrical stimulation of the midbrain excites the auditory cortex asymmetrically.
Quass, Gunnar Lennart; Kurt, Simone; Hildebrandt, Jannis; Kral, Andrej
2018-05-17
Auditory midbrain implant users cannot achieve open speech perception and have limited frequency resolution. It remains unclear whether the spread of excitation contributes to this issue and how much it can be compensated by current-focusing, which is an effective approach in cochlear implants. The present study examined the spread of excitation in the cortex elicited by electric midbrain stimulation. We further tested whether current-focusing via bipolar and tripolar stimulation is effective with electric midbrain stimulation and whether these modes hold any advantage over monopolar stimulation also in conditions when the stimulation electrodes are in direct contact with the target tissue. Using penetrating multielectrode arrays, we recorded cortical population responses to single pulse electric midbrain stimulation in 10 ketamine/xylazine anesthetized mice. We compared monopolar, bipolar, and tripolar stimulation configurations with regard to the spread of excitation and the characteristic frequency difference between the stimulation/recording electrodes. The cortical responses were distributed asymmetrically around the characteristic frequency of the stimulated midbrain region with a strong activation in regions tuned up to one octave higher. We found no significant differences between monopolar, bipolar, and tripolar stimulation in threshold, evoked firing rate, or dynamic range. The cortical responses to electric midbrain stimulation are biased towards higher tonotopic frequencies. Current-focusing is not effective in direct contact electrical stimulation. Electrode maps should account for the asymmetrical spread of excitation when fitting auditory midbrain implants by shifting the frequency-bands downward and stimulating as dorsally as possible. Copyright © 2018 Elsevier Inc. All rights reserved.
How may the basal ganglia contribute to auditory categorization and speech perception?
Lim, Sung-Joo; Fiez, Julie A.; Holt, Lori L.
2014-01-01
Listeners must accomplish two complementary perceptual feats in extracting a message from speech. They must discriminate linguistically-relevant acoustic variability and generalize across irrelevant variability. Said another way, they must categorize speech. Since the mapping of acoustic variability is language-specific, these categories must be learned from experience. Thus, understanding how, in general, the auditory system acquires and represents categories can inform us about the toolbox of mechanisms available to speech perception. This perspective invites consideration of findings from cognitive neuroscience literatures outside of the speech domain as a means of constraining models of speech perception. Although neurobiological models of speech perception have mainly focused on cerebral cortex, research outside the speech domain is consistent with the possibility of significant subcortical contributions in category learning. Here, we review the functional role of one such structure, the basal ganglia. We examine research from animal electrophysiology, human neuroimaging, and behavior to consider characteristics of basal ganglia processing that may be advantageous for speech category learning. We also present emerging evidence for a direct role for basal ganglia in learning auditory categories in a complex, naturalistic task intended to model the incidental manner in which speech categories are acquired. To conclude, we highlight new research questions that arise in incorporating the broader neuroscience research literature in modeling speech perception, and suggest how understanding contributions of the basal ganglia can inform attempts to optimize training protocols for learning non-native speech categories in adulthood. PMID:25136291
Burunat, Iballa; Tsatsishvili, Valeri; Brattico, Elvira; Toiviainen, Petri
2017-01-01
Our sense of rhythm relies on orchestrated activity of several cerebral and cerebellar structures. Although functional connectivity studies have advanced our understanding of rhythm perception, this phenomenon has not been sufficiently studied as a function of musical training and beyond the General Linear Model (GLM) approach. Here, we studied pulse clarity processing during naturalistic music listening using a data-driven approach (independent component analysis; ICA). Participants' (18 musicians and 18 controls) functional magnetic resonance imaging (fMRI) responses were acquired while listening to music. A targeted region of interest (ROI) related to pulse clarity processing was defined, comprising auditory, somatomotor, basal ganglia, and cerebellar areas. The ICA decomposition was performed under different model orders, i.e., under a varying number of assumed independent sources, to avoid relying on prior model order assumptions. The components best predicted by a measure of the pulse clarity of the music, extracted computationally from the musical stimulus, were identified. Their corresponding spatial maps uncovered a network of auditory (perception) and motor (action) areas in an excitatory-inhibitory relationship at lower model orders, while mainly constrained to the auditory areas at higher model orders. Results revealed (a) a strengthened functional integration of action-perception networks associated with pulse clarity perception hidden from GLM analyses, and (b) group differences between musicians and non-musicians in pulse clarity processing, suggesting lifelong musical training as an important factor that may influence beat processing.
Pasqualotto, Achille; Esenkaya, Tayfun
2016-01-01
Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or "soundscapes". Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localizing sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgment of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices (SSD).
Robbers, Lourens F H J; Nijveldt, Robin; Beek, Aernout M; Teunissen, Paul F A; Hollander, Maurits R; Biesbroek, P Stefan; Everaars, Henk; van de Ven, Peter M; Hofman, Mark B M; van Royen, Niels; van Rossum, Albert C
2018-02-01
Native T1 mapping and late gadolinium enhancement (LGE) imaging offer detailed characterisation of the myocardium after acute myocardial infarction (AMI). We evaluated the effects of microvascular injury (MVI) and intramyocardial haemorrhage on local T1 and T2* values in patients with a reperfused AMI. Forty-three patients after reperfused AMI underwent cardiovascular magnetic resonance imaging (CMR) at 4 [3-5] days, including native MOLLI T1 and T2* mapping, STIR, cine imaging and LGE. T1 and T2* values were determined in LGE-defined regions of interest: the MI core incorporating MVI when present, the core-adjacent MI border zone (without any areas of MVI), and remote myocardium. Average T1 in the MI core was higher than in the MI border zone and remote myocardium. However, in the 20 (47%) patients with MVI, MI core T1 was lower than in patients without MVI (MVI 1048±78ms, no MVI 1111±89ms, p=0.02). MI core T2* was significantly lower in patients with MVI than in those without (MVI 20 [18-23]ms, no MVI 31 [26-39]ms, p<0.001). The presence of MVI profoundly affects MOLLI-measured native T1 values. T2* mapping suggested that this may be the result of intramyocardial haemorrhage. These findings have important implications for the interpretation of native T1 values shortly after AMI. • Microvascular injury after acute myocardial infarction affects local T1 and T2* values. • Infarct zone T1 values are lower if microvascular injury is present. • T2* mapping suggests that low infarct T1 values are likely haemorrhage. • T1 and T2* values are complementary for correctly assessing post-infarct myocardium.
Huang, Xin; Gollin, Susanne M.; Raja, Siva; Godfrey, Tony E.
2002-01-01
Amplification of chromosomal band 11q13 is a common event in human cancer. It has been reported in about 45% of head and neck carcinomas and in other cancers including esophageal, breast, liver, lung, and bladder cancer. To understand the mechanism of 11q13 amplification and to identify the potential oncogene(s) driving it, we have fine-mapped the structure of the amplicon in oral squamous cell carcinoma cell lines and localized the proximal and distal breakpoints. A 5-Mb physical map of the region has been prepared from which sequence is available. We quantified copy number of sequence-tagged site markers at 42–550 kb intervals along the length of the amplicon and defined the amplicon core and breakpoints by using TaqMan-based quantitative microsatellite analysis. The core of the amplicon maps to a 1.5-Mb region. The proximal breakpoint localizes to two intervals between sequence-tagged site markers, 550 kb and 160 kb in size, and the distal breakpoint maps to a 250 kb interval. The cyclin D1 gene maps to the amplicon core, as do two new expressed sequence tag clusters. We have analyzed one of these expressed sequence tag clusters and now report that it contains a previously uncharacterized gene, TAOS1 (tumor amplified and overexpressed sequence 1), which is both amplified and overexpressed in oral cancer cells. The data suggest that TAOS1 may be an amplification-dependent candidate oncogene with a role in the development and/or progression of human tumors, including oral squamous cell carcinomas. The approach described here should be useful for characterizing amplified genomic regions in a wide variety of tumors. PMID:12172009
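TaqMan-based quantification of copy number, as used above for the sequence-tagged site markers, rests on relative threshold-cycle arithmetic. The sketch below shows the generic 2^(-ΔΔCt) copy-ratio formula with hypothetical Ct values, assuming ~100% PCR efficiency; it is not necessarily the authors' exact normalization scheme.

```python
# Sketch of the relative-quantification arithmetic behind TaqMan-based
# copy-number estimates: copy ratio ~ 2^(-ddCt), where Ct is the threshold
# cycle. All Ct values here are hypothetical.

def copy_number_ratio(ct_target_tumor, ct_ref_tumor,
                      ct_target_normal, ct_ref_normal):
    """Tumor/normal copy ratio from threshold cycles (assumes ~100% efficiency)."""
    ddct = ((ct_target_tumor - ct_ref_tumor)
            - (ct_target_normal - ct_ref_normal))
    return 2.0 ** (-ddct)

# A marker crossing threshold 3 cycles earlier in tumor DNA (relative to a
# two-copy reference locus) implies roughly an 8-fold copy-number increase.
ratio = copy_number_ratio(22.0, 25.0, 25.0, 25.0)   # -> 8.0
```

Applied along a dense marker ladder, ratios like this trace the amplicon's plateau (the core) and its fall-off at the breakpoints.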
Smyth, Christopher C
2007-05-01
Developers of future forces are implementing automated aiding for driving tasks. In designing such systems, the effect of cognitive task interference on driving performance is important. The crew of such vehicles may have to occasionally perform communication and planning tasks while driving. Subjective questionnaires may aid researchers to parse out the sources of task interference in crew station designs. In this preliminary study, sixteen participants drove a vehicle simulator with automated road-turn cues (i.e., visual, audio, combined, or neither) along a course marked on a map display while replying to spoken test questions (i.e., repeating sentences, math and logical puzzles, route planning, or none) and reporting other vehicles in the scenario. Following each trial, a battery of subjective questionnaires was administered to determine the perceived effects of the loading on their cognitive functionality. Considering the performance, the participants drove significantly faster with the road-turn cues than with just the map. They recalled fewer vehicle sightings with the cognitive tests than without them. Questionnaire results showed that their reasoning was more straightforward, the quantity of information for understanding higher, and the trust greater with the combined cues than the map-only. They reported higher perceived workload with the cognitive tests. The capacity for maintaining situational awareness was reduced with the cognitive tests because of the increased division of attention and the increase in the instability, variability, and complexity of the demands. The association and intuitiveness of cognitive processing were lowest and the subjective stress highest for the route planning test. Finally, the confusability in reasoning was greater for the auditory cue with the route planning than the auditory cue without the cognitive tests. 
The subjective questionnaires are sensitive to the effects of the cognitive loading and, therefore, may be useful for guiding the development of automated aid designs.
Eytan, Danny; Pang, Elizabeth W; Doesburg, Sam M; Nenadovic, Vera; Gavrilovic, Bojan; Laussen, Peter; Guerguerian, Anne-Marie
2016-01-01
Acute brain injury is a common cause of death and critical illness in children and young adults. Fundamental management focuses on early characterization of the extent of injury and optimizing recovery by preventing secondary damage during the days following the primary injury. Currently, bedside technology for measuring neurological function is mainly limited to using electroencephalography (EEG) for detection of seizures and encephalopathic features, and evoked potentials. We present a proof of concept study in patients with acute brain injury in the intensive care setting, featuring a bedside functional imaging set-up designed to map cortical brain activation patterns by combining high density EEG recordings, multi-modal sensory stimulation (auditory, visual, and somatosensory), and EEG source modeling. Use of source-modeling allows for examination of spatiotemporal activation patterns at the cortical region level as opposed to the traditional scalp potential maps. The application of this system in both healthy and brain-injured participants is demonstrated with modality-specific source-reconstructed cortical activation patterns. By combining stimulation obtained with different modalities, most of the cortical surface can be monitored for changes in functional activation without having to physically transport the subject to an imaging suite. The results in patients in an intensive care setting with anatomically well-defined brain lesions suggest a topographic association between their injuries and activation patterns. Moreover, we report the reproducible application of a protocol examining a higher-level cortical processing with an auditory oddball paradigm involving presentation of the patient's own name. This study reports the first successful application of a bedside functional brain mapping tool in the intensive care setting. 
This application has the potential to provide clinicians with an additional dimension of information to manage critically-ill children and adults, and potentially patients not suited for magnetic resonance imaging technologies.
GABAergic Local Interneurons Shape Female Fruit Fly Response to Mating Songs.
Yamada, Daichi; Ishimoto, Hiroshi; Li, Xiaodong; Kohashi, Tsunehiko; Ishikawa, Yuki; Kamikouchi, Azusa
2018-05-02
Many animals use acoustic signals to attract a potential mating partner. In fruit flies (Drosophila melanogaster), the courtship pulse song has a species-specific interpulse interval (IPI) that activates mating. Although a series of auditory neurons in the fly brain exhibit different tuning patterns to IPIs, it is unclear how the response of each neuron is tuned. Here, we studied the neural circuitry regulating the activity of antennal mechanosensory and motor center (AMMC)-B1 neurons, key secondary auditory neurons in the excitatory neural pathway that relay song information. By performing Ca²⁺ imaging in female flies, we found that the IPI selectivity observed in AMMC-B1 neurons differs from that of upstream auditory sensory neurons [Johnston's organ (JO)-B]. Selective knock-down of a GABAA receptor subunit in AMMC-B1 neurons increased their response to short IPIs, suggesting that GABA suppresses AMMC-B1 activity at these IPIs. Connection mapping identified two GABAergic local interneurons that synapse with AMMC-B1 and JO-B. Ca²⁺ imaging combined with neuronal silencing revealed that these local interneurons, AMMC-LN and AMMC-B2, shape the response pattern of AMMC-B1 neurons at a 15 ms IPI. Neuronal silencing studies further suggested that both GABAergic local interneurons suppress the behavioral response to artificial pulse songs in flies, particularly those with a 15 ms IPI. Altogether, we identified a circuit containing two GABAergic local interneurons that affects the temporal tuning of AMMC-B1 neurons in the song relay pathway and the behavioral response to the courtship song. Our findings suggest that feedforward inhibitory pathways adjust the behavioral response to courtship pulse songs in female flies.
SIGNIFICANCE STATEMENT To understand how the brain detects time intervals between sound elements, we studied the neural pathway that relays species-specific courtship song information in female Drosophila melanogaster. We demonstrate that the signal transmission from auditory sensory neurons to key secondary auditory neurons, antennal mechanosensory and motor center (AMMC)-B1, is the first step in generating the time-interval selectivity of neurons in the song relay pathway. Two GABAergic local interneurons are suggested to shape the interval selectivity of AMMC-B1 neurons by receiving auditory inputs and in turn providing feedforward inhibition onto AMMC-B1 neurons. Furthermore, these GABAergic local interneurons suppress the song response behavior in an interval-dependent manner. Our results provide new insights into the neural circuit basis for adjusting neuronal and behavioral responses to a species-specific communication sound. Copyright © 2018 the authors.
Anima, Roberto J.; Clifton, H. Edward; Reiss, Carol; Wong, Florence L.
2005-01-01
A project to study San Francisco Bay sediments collected over 300 sediment gravity cores, six push cores, and three box cores in San Francisco Bay during 1990-91. The purpose of the sampling effort was to establish a database on the Holocene sediment history of the bay. The samples described and mapped here are the first effort to catalog and present the data collected. Thus far the cores have been used in various cooperative studies with state colleges and universities and with other USGS divisions. These cores serve as a base for ongoing multidisciplinary studies. The sediment studies project has initiated subsequent coring efforts within the bay using refined coring techniques to attain deeper cores.
Rockwell, B.W.; Cunningham, C.G.; Breit, G.N.; Rye, R.O.
2006-01-01
Previous studies have demonstrated that the replacement alunite deposits just north of the town of Marysvale, Utah, USA, were formed primarily by low-temperature (100°-170° C), steam-heated processes near the early Miocene paleoground surface, immediately above convecting hydrothermal plumes. Pyrite-bearing propylitically altered rocks occur mainly beneath the steam-heated alunite and represent the sulfidized feeder zone of the H2S-dominated hydrothermal fluids, the oxidation of which at higher levels led to the formation of the alunite. Maps of surface mineralogy at the White Horse deposit generated from Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were used in conjunction with X-ray diffraction studies of field samples to test the accuracy and precision of AVIRIS-based mineral mapping of altered rocks and demonstrate the utility of spectroscopic mapping for ore deposit characterization. The mineral maps identified multiple core zones of alunite that grade laterally outward to kaolinite. Surrounding the core zones are dominantly propylitically altered rocks containing illite, montmorillonite, and chlorite, with minor pyrite, kaolinite, gypsum, and remnant potassium feldspar from the parent rhyodacitic ash-flow tuff. The AVIRIS mapping also identified fracture zones expressed by ridge-forming selvages of quartz + dickite + kaolinite that form a crude ring around the advanced argillic core zones. Laboratory analyses identified the aluminum phosphate-sulfate (APS) minerals woodhouseite and svanbergite in one sample from these dickite-bearing argillic selvages. Reflectance spectroscopy determined that the outer edges of the selvages contain more dickite than do the medial regions. The quartz + dickite ± kaolinite ± APS-mineral selvages demonstrate that fracture control of replacement processes is more prevalent away from the advanced argillic core zones. Although not exposed at the White Horse deposit, pyrophyllite ± ordered illite was identified using AVIRIS in localized, superimposed conduits within propylitically altered rocks in nearby alteration systems of similar age and genesis that have been eroded to deeper levels. The fracture zones bearing pyrophyllite, illite, dickite, natroalunite, and/or APS minerals indicate a magmatic component in the dominantly steam-heated system. © 2006 Society of Economic Geologists, Inc.
Does Core Area Theory Apply to STIs in Rural Environments?
Gesink, Dionne C; Sullivan, Ashleigh B; Norwood, Todd; Serre, Marc L; Miller, William C
2012-01-01
Background: Our objective was to determine the extent to which geographical core areas for gonorrhea and syphilis are located in rural areas, as compared to urban areas. Methods: Incident gonorrhea (January 1, 2005 to December 31, 2010) and syphilis (January 1, 1999 to December 31, 2010) rates were estimated and mapped by census tract and quarter. Rurality was measured using percent rural and rural-urban commuting area (RUCA; rural, small town, micropolitan, or urban). SaTScan was used to identify spatiotemporal clusters of significantly elevated rates of infection. Clusters lasting five years or longer were considered core areas; clusters of shorter duration were considered outbreaks. Clusters were overlaid on maps of rurality and qualitatively assessed for correlation. Results: Twenty gonorrhea core areas were identified: 65% in urban centers, 25% in micropolitan areas, and the remaining 10% geographically large, capturing combinations of urban, micropolitan, small town, and rural environments. Ten syphilis core areas were identified, with 80% in urban centers and 20% capturing two or more RUCAs. All ten of the syphilis core areas (100%) overlapped with gonorrhea core areas. Conclusions: Gonorrhea and syphilis rates were high for rural parts of North Carolina; however, no core areas were identified exclusively for small towns or rural areas. The main pathway of rural STI transmission may be through the interconnectedness of urban, micropolitan, small town, and rural areas. Directly addressing STIs in urban and micropolitan communities may also indirectly help address STI rates in rural and small town communities. PMID:23254115
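The study's core-versus-outbreak distinction reduces to a duration threshold applied to the clusters a scan statistic returns. A minimal sketch in Python, using hypothetical cluster dates rather than the study's data:

```python
from datetime import date

# Hypothetical clusters from a spatiotemporal scan: (id, start, end)
clusters = [
    ("A", date(2005, 1, 1), date(2010, 6, 30)),
    ("B", date(2007, 3, 1), date(2008, 2, 28)),
]

CORE_YEARS = 5  # clusters persisting >= 5 years are treated as core areas

def classify(start, end):
    """Label a cluster as a core area or an outbreak by its duration."""
    duration_years = (end - start).days / 365.25
    return "core area" if duration_years >= CORE_YEARS else "outbreak"

labels = {cid: classify(s, e) for cid, s, e in clusters}
print(labels)  # cluster A persists ~5.5 years, B only ~1 year
```

The threshold and dates are illustrative; the real analysis additionally requires the clusters' rates to be significantly elevated.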
Advanced Wireless Integrated Navy Network - AWINN
2005-09-30
progress report No. 3 on AWINN hardware and software configurations of smart, wideband, multi-function antennas, secure configurable platform, close-in...results to the host PC via a UART soft core. The UART core used is a proprietary Xilinx core which incorporates features described in National...current software uses wheel odometry and visual landmarks to create a map and estimate position on an internal x, y grid. The wheel odometry provides a
Code of Federal Regulations, 2011 CFR
2011-07-01
... fossil content, core analyses, laboratory analyses of physical and chemical properties, logs or charts of... geological information means knowledge, often in the form of schematic cross sections and maps, developed by... geophysical information means knowledge, often in the form of schematic cross sections and maps, developed by...
Dickinson, William R.; digital database by Hirschberg, Douglas M.; Pitts, G. Stephen; Bolm, Karen S.
2002-01-01
The geologic map of Catalina Core Complex and San Pedro Trough by Dickinson (1992) was digitized for input into a geographic information system (GIS) by the U.S. Geological Survey staff and contractors in 2000-2001. This digital geospatial database is one of many being created by the U.S. Geological Survey as an ongoing effort to provide geologic information in a geographic information system (GIS) for use in spatial analysis. The resulting digital geologic map database data can be queried in many ways to produce a variety of geologic maps and derivative products. Digital base map data (topography, roads, towns, rivers, lakes, and so forth) are not included; they may be obtained from a variety of commercial and government sources. This database is not meant to be used or displayed at any scale larger than 1:125,000 (for example, 1:100,000 or 1:24,000). The digital geologic map plot files that are provided herein are representations of the database. The map area is located in southern Arizona. This report lists the geologic map units, the methods used to convert the geologic map data into a digital format, the ArcInfo GIS file structures and relationships, and explains how to download the digital files from the U.S. Geological Survey public access World Wide Web site on the Internet. The manuscript and digital data review by Lorre Moyer (USGS) is greatly appreciated.
Precise Maps of RNA Polymerase Reveal How Promoters Direct Initiation and Pausing
Kwak, Hojoong; Fuda, Nicholas J.; Core, Leighton J.; Lis, John T.
2014-01-01
Transcription regulation occurs frequently through promoter-associated pausing of RNA polymerase II (Pol II). We developed a Precision nuclear Run-On and sequencing assay (PRO-seq) to map the genome-wide distribution of transcriptionally-engaged Pol II at base-pair resolution. Pol II accumulates immediately downstream of promoters, at intron-exon junctions that are efficiently used for splicing, and over 3' poly-adenylation sites. Focused analyses of promoters reveal that pausing is not fixed relative to initiation sites nor is it specified directly by the position of a particular core promoter element or the first nucleosome. Core promoter elements function beyond initiation, and when optimally positioned they act collectively to dictate the position and strength of pausing. We test this ‘Complex Interaction’ model with insertional mutagenesis of the Drosophila Hsp70 core promoter. PMID:23430654
Mice with reduced NMDA receptor expression: more consistent with autism than schizophrenia?
Gandal, M J; Anderson, R L; Billingslea, E N; Carlson, G C; Roberts, T P L; Siegel, S J
2012-08-01
Reduced NMDA-receptor (NMDAR) function has been implicated in the pathophysiology of neuropsychiatric disease, most strongly in schizophrenia but also recently in autism spectrum disorders (ASD). To determine the direct contribution of NMDAR dysfunction to disease phenotypes, a mouse model with constitutively reduced expression of the obligatory NR1 subunit has been developed and extensively investigated. Adult NR1(neo-/-) mice show multiple abnormal behaviors, including reduced social interactions, locomotor hyperactivity, self-injury, deficits in prepulse inhibition (PPI) and sensory hypersensitivity, among others. Whereas such phenotypes have largely been interpreted in the context of schizophrenia, these behavioral abnormalities are rather non-specific and are frequently present across models of diseases characterized by negative symptom domains. This study investigated auditory electrophysiological and behavioral paradigms relevant to autism, to determine whether NMDAR hypofunction may be more consistent with adult ASD-like phenotypes. Indeed, transgenic mice showed behavioral deficits relevant to all core ASD symptoms, including decreased social interactions, altered ultrasonic vocalizations and increased repetitive behaviors. NMDAR disruption recapitulated clinical endophenotypes including reduced PPI, auditory-evoked response N1 latency delay and reduced gamma synchrony. Auditory electrophysiological abnormalities more closely resembled those seen in clinical studies of autism than schizophrenia. These results suggest that NMDAR hypofunction may be associated with a continuum of neuropsychiatric diseases, including schizophrenia and autism. Neural synchrony abnormalities suggest an imbalance of glutamatergic and GABAergic coupling and may provide a target, along with behavioral phenotypes, for preclinical screening of novel therapeutics. © 2012 The Authors. 
Genes, Brain and Behavior © 2012 Blackwell Publishing Ltd and International Behavioural and Neural Genetics Society.
NASA Astrophysics Data System (ADS)
Tan, Jonathan
We describe a research plan to develop and extend the mid-infrared (MIR) extinction mapping technique presented by Butler & Tan (2009), who studied Infrared Dark Clouds (IRDCs) using Spitzer Space Telescope Infrared Array Camera (IRAC) 8 micron images. This method has the ability to probe the detailed spatial structure of very high column density regions, i.e. the gas clouds thought to represent the initial conditions for massive star and star cluster formation. We will analyze the data Spitzer obtained at other wavelengths, i.e. the IRAC bands at 3.6, 4.5 and 5.8 microns, and the Multiband Imaging Photometer (MIPS) bands, especially at 24 microns. This will allow us to measure the dust extinction law across the MIR and search for evidence of dust grain evolution, e.g. grain growth and ice mantle formation, as a function of gas density and column density. We will also study the detailed structure of the extinction features, including individual cores that may form single stars or close binaries, especially focusing on those cores that may form massive stars. By studying independent dark cores in a given IRDC, we will be able to test if they have a common minimum observed intensity, which we will then attribute to the foreground. This is a new method that should allow us to more accurately map distant, high column density IRDCs, probing more extreme regimes of star formation. We will combine MIR extinction mapping, which works best at high column densities, with near-IR mapping based on 2MASS images of star fields, which is most useful at lower columns that probe the extended giant molecular cloud structure. This information is crucial to help understand the formation process of IRDCs, which may be the rate limiting step for global galactic star formation rates.
We will use our new extinction mapping methods to analyze large samples of IRDCs and thus search the Galaxy for the most extreme examples of high column density cores and assess the global star formation efficiency in dense gas. We will estimate the ability of future NASA missions, such as JWST, to carry out MIR extinction mapping science. We will develop the results of this research into an E/PO presentation to be included in the various public outreach events organized and courses taught by the PI.
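The foreground-estimation idea above (attributing the common minimum intensity of independent dark cores to foreground emission, then converting intensity to optical depth) can be sketched as follows. The conversion formula and every number are illustrative assumptions, not values or code from the proposal:

```python
import math

# Sketch of foreground-corrected MIR extinction mapping: assume the
# observed intensity follows I_obs = I_bg * exp(-tau) + I_fg, so that
# tau = -ln((I_obs - I_fg) / (I_bg - I_fg)) once I_fg is known.

def optical_depth(I_obs, I_bg, I_fg):
    """Optical depth along a line of sight after foreground removal."""
    return -math.log((I_obs - I_fg) / (I_bg - I_fg))

core_minima = [21.0, 20.5, 20.8]  # darkest pixels of several cores (arbitrary units)
I_fg = min(core_minima)           # common floor attributed to the foreground
tau = optical_depth(I_obs=30.0, I_bg=60.0, I_fg=I_fg)
print(round(tau, 3))
```

Column density would then follow from tau given a dust opacity, which is where grain-evolution effects studied in the proposal enter.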
Reversing pathological neural activity using targeted plasticity.
Engineer, Navzer D; Riley, Jonathan R; Seale, Jonathan D; Vrana, Will A; Shetake, Jai A; Sudanagunta, Sindhu P; Borland, Michael S; Kilgard, Michael P
2011-02-03
Brain changes in response to nerve damage or cochlear trauma can generate pathological neural activity that is believed to be responsible for many types of chronic pain and tinnitus. Several studies have reported that the severity of chronic pain and tinnitus is correlated with the degree of map reorganization in somatosensory and auditory cortex, respectively. Direct electrical or transcranial magnetic stimulation of sensory cortex can temporarily disrupt these phantom sensations. However, there is as yet no direct evidence for a causal role of plasticity in the generation of pain or tinnitus. Here we report evidence that reversing the brain changes responsible can eliminate the perceptual impairment in an animal model of noise-induced tinnitus. Exposure to intense noise degrades the frequency tuning of auditory cortex neurons and increases cortical synchronization. Repeatedly pairing tones with brief pulses of vagus nerve stimulation completely eliminated the physiological and behavioural correlates of tinnitus in noise-exposed rats. These improvements persisted for weeks after the end of therapy. This method for restoring neural activity to normal may be applicable to a variety of neurological disorders.
Reversing pathological neural activity using targeted plasticity
Engineer, Navzer D.; Riley, Jonathan R.; Seale, Jonathan D.; Vrana, Will A.; Shetake, Jai A.; Sudanagunta, Sindhu P.; Borland, Michael S.; Kilgard, Michael P.
2012-01-01
Brain changes in response to nerve damage or cochlear trauma can generate pathological neural activity that is believed to be responsible for many types of chronic pain and tinnitus [1-3]. Several studies have reported that the severity of chronic pain and tinnitus is correlated with the degree of map reorganization in somatosensory and auditory cortex, respectively [1,4]. Direct electrical or transcranial magnetic stimulation of sensory cortex can temporarily disrupt these phantom sensations [5]. However, there is as yet no direct evidence for a causal role of plasticity in the generation of pain or tinnitus. Here we report evidence that reversing the brain changes responsible can eliminate the perceptual impairment in an animal model of noise-induced tinnitus. Exposure to intense noise degrades the frequency tuning of auditory cortex neurons and increases cortical synchronization. Repeatedly pairing tones with brief pulses of vagus nerve stimulation completely eliminated the physiological and behavioural correlates of tinnitus in noise-exposed rats. These improvements persisted for weeks after the end of therapy. This method for restoring neural activity to normal may be applicable to a variety of neurological disorders. PMID:21228773
Genetic Otx2 mis-localization delays critical period plasticity across brain regions.
Lee, H H C; Bernard, C; Ye, Z; Acampora, D; Simeone, A; Prochiantz, A; Di Nardo, A A; Hensch, T K
2017-05-01
Accumulation of non-cell autonomous Otx2 homeoprotein in postnatal mouse visual cortex (V1) has been implicated in both the onset and closure of critical period (CP) plasticity. Here, we show that a genetic point mutation in the glycosaminoglycan recognition motif of Otx2 broadly delays the maturation of pivotal parvalbumin-positive (PV+) interneurons not only in V1 but also in the primary auditory (A1) and medial prefrontal cortex (mPFC). Consequently, not only visual, but also auditory plasticity is delayed, including the experience-dependent expansion of tonotopic maps in A1 and the acquisition of acoustic preferences in mPFC, which mitigates anxious behavior. In addition, Otx2 mis-localization leads to dynamic turnover of selected perineuronal net (PNN) components well beyond the normal CP in V1 and mPFC. These findings reveal widespread actions of Otx2 signaling in the postnatal cortex controlling the maturational trajectory across modalities. Disrupted PV+ network function and deficits in PNN integrity are implicated in a variety of psychiatric illnesses, suggesting a potential global role for Otx2 function in establishing mental health.
En1 directs superior olivary complex neuron positioning, survival, and expression of FoxP1.
Altieri, Stefanie C; Jalabi, Walid; Zhao, Tianna; Romito-DiGiacomo, Rita R; Maricich, Stephen M
2015-12-01
Little is known about the genetic pathways and transcription factors that control development and maturation of central auditory neurons. En1, a gene expressed by a subset of developing and mature superior olivary complex (SOC) cells, encodes a homeodomain transcription factor important for neuronal development in the midbrain, cerebellum, hindbrain and spinal cord. Using genetic fate-mapping techniques, we show that all En1-lineal cells in the SOC are neurons and that these neurons are glycinergic, cholinergic and GABAergic in neurotransmitter phenotype. En1 deletion does not interfere with specification or neural fate of these cells, but does cause aberrant positioning and subsequent death of all En1-lineal SOC neurons by early postnatal ages. En1-null cells also fail to express the transcription factor FoxP1, suggesting that FoxP1 lies downstream of En1. Our data define important roles for En1 in the development and maturation of a diverse group of brainstem auditory neurons. Copyright © 2015 Elsevier Inc. All rights reserved.
"Let Me Hear Your Handwriting!" Evaluating the Movement Fluency from Its Sonification.
Danna, Jérémy; Paz-Villagrán, Vietminh; Gondre, Charles; Aramaki, Mitsuko; Kronland-Martinet, Richard; Ystad, Sølvi; Velay, Jean-Luc
2015-01-01
The quality of handwriting is evaluated from the visual inspection of its legibility and not from the movement that generates the trace. Although handwriting is achieved in silence, adding sounds to handwriting movement might help towards its perception, provided that these sounds are meaningful. This study evaluated the ability to judge handwriting quality from the auditory perception of the underlying sonified movement, without seeing the written trace. In a first experiment, samples of a word written by children with dysgraphia, proficient children writers, and proficient adult writers were collected with a graphic tablet. Then, the pen velocity, the fluency, and the axial pen pressure were sonified in order to create forty-five audio files. In a second experiment, these files were presented to 48 adult listeners who had to mark the underlying unseen handwriting. In order to evaluate the relevance of the sonification strategy, two experimental conditions were compared. In a first 'implicit' condition, the listeners made their judgment without any knowledge of the mapping between the sounds and the handwriting variables. In a second 'explicit' condition, they knew what the sonified variables corresponded to and the evaluation criteria. Results showed that, under the implicit condition, two thirds of the listeners marked the three groups of writers differently. In the explicit condition, all listeners marked the dysgraphic handwriting lower than that of the two other groups. In a third experiment, the scores given from the auditory evaluation were compared to the scores given by 16 other adults from the visual evaluation of the trace. Results revealed that auditory evaluation was more relevant than the visual evaluation for evaluating a dysgraphic handwriting. Handwriting sonification might therefore be a relevant tool allowing a therapist to complete the visual assessment of the written trace by an auditory control of the handwriting movement quality.
“Let Me Hear Your Handwriting!” Evaluating the Movement Fluency from Its Sonification
Danna, Jérémy; Paz-Villagrán, Vietminh; Gondre, Charles; Aramaki, Mitsuko; Kronland-Martinet, Richard; Ystad, Sølvi; Velay, Jean-Luc
2015-01-01
The quality of handwriting is evaluated from the visual inspection of its legibility and not from the movement that generates the trace. Although handwriting is achieved in silence, adding sounds to handwriting movement might help towards its perception, provided that these sounds are meaningful. This study evaluated the ability to judge handwriting quality from the auditory perception of the underlying sonified movement, without seeing the written trace. In a first experiment, samples of a word written by children with dysgraphia, proficient children writers, and proficient adult writers were collected with a graphic tablet. Then, the pen velocity, the fluency, and the axial pen pressure were sonified in order to create forty-five audio files. In a second experiment, these files were presented to 48 adult listeners who had to mark the underlying unseen handwriting. In order to evaluate the relevance of the sonification strategy, two experimental conditions were compared. In a first ‘implicit’ condition, the listeners made their judgment without any knowledge of the mapping between the sounds and the handwriting variables. In a second ‘explicit’ condition, they knew what the sonified variables corresponded to and the evaluation criteria. Results showed that, under the implicit condition, two thirds of the listeners marked the three groups of writers differently. In the explicit condition, all listeners marked the dysgraphic handwriting lower than that of the two other groups. In a third experiment, the scores given from the auditory evaluation were compared to the scores given by 16 other adults from the visual evaluation of the trace. Results revealed that auditory evaluation was more relevant than the visual evaluation for evaluating a dysgraphic handwriting. Handwriting sonification might therefore be a relevant tool allowing a therapist to complete the visual assessment of the written trace by an auditory control of the handwriting movement quality. 
PMID:26083384
Decoding Multiple Sound Categories in the Human Temporal Cortex Using High Resolution fMRI
Zhang, Fengqing; Wang, Ji-Ping; Kim, Jieun; Parrish, Todd; Wong, Patrick C. M.
2015-01-01
Perception of sound categories is an important aspect of auditory perception. The extent to which the brain’s representation of sound categories is encoded in specialized subregions or distributed across the auditory cortex remains unclear. Recent studies using multivariate pattern analysis (MVPA) of brain activations have provided important insights into how the brain decodes perceptual information. In the large existing literature on brain decoding using MVPA methods, relatively few studies have been conducted on multi-class categorization in the auditory domain. Here, we investigated the representation and processing of auditory categories within the human temporal cortex using high resolution fMRI and MVPA methods. More importantly, we considered decoding multiple sound categories simultaneously through multi-class support vector machine-recursive feature elimination (MSVM-RFE) as our MVPA tool. Results show that for all classifications the model MSVM-RFE was able to learn the functional relation between the multiple sound categories and the corresponding evoked spatial patterns and classify the unlabeled sound-evoked patterns significantly above chance. This indicates the feasibility of decoding multiple sound categories not only within but across subjects. However, the across-subject variation affects classification performance more than the within-subject variation, as the across-subject analysis has significantly lower classification accuracies. Sound category-selective brain maps were identified based on multi-class classification and revealed distributed patterns of brain activity in the superior temporal gyrus and the middle temporal gyrus. This is in accordance with previous studies, indicating that information in the spatially distributed patterns may reflect a more abstract perceptual level of representation of sound categories. 
Further, we show that the across-subject classification performance can be significantly improved by averaging the fMRI images over items, because the irrelevant variations between different items of the same sound category are reduced and in turn the proportion of signals relevant to sound categorization increases. PMID:25692885
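The recursive feature elimination at the heart of the MSVM-RFE method described above can be illustrated with a toy stand-in. Here a crude difference-of-class-means scorer replaces the SVM, and the data are fabricated, so this shows the shape of the algorithm rather than the authors' implementation:

```python
# Toy recursive feature elimination (RFE): repeatedly fit a simple linear
# scorer and discard the weakest feature until the desired number remain.

def fit_weights(X, y):
    """Crude linear scorer: weight = difference of per-class feature means."""
    pos = [x for x, label in zip(X, y) if label == 1]
    neg = [x for x, label in zip(X, y) if label == 0]
    mean = lambda rows, j: sum(r[j] for r in rows) / len(rows)
    return [mean(pos, j) - mean(neg, j) for j in range(len(X[0]))]

def rfe(X, y, n_keep):
    """Return indices of the n_keep surviving features."""
    active = list(range(len(X[0])))
    while len(active) > n_keep:
        Xa = [[row[j] for j in active] for row in X]
        w = fit_weights(Xa, y)
        # drop the feature with the smallest absolute weight
        worst = min(range(len(w)), key=lambda i: abs(w[i]))
        active.pop(worst)
    return active

# Feature 0 separates the classes; features 1 and 2 are noise.
X = [[1.0, 0.2, 0.1], [0.9, 0.1, 0.3], [0.0, 0.2, 0.2], [0.1, 0.1, 0.1]]
y = [1, 1, 0, 0]
print(rfe(X, y, 1))  # the informative feature survives: [0]
```

In the study proper, the scorer is a multi-class SVM trained on voxel patterns, and elimination proceeds over fMRI features rather than this toy matrix.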
Decoding multiple sound categories in the human temporal cortex using high resolution fMRI.
Zhang, Fengqing; Wang, Ji-Ping; Kim, Jieun; Parrish, Todd; Wong, Patrick C M
2015-01-01
Perception of sound categories is an important aspect of auditory perception. The extent to which the brain's representation of sound categories is encoded in specialized subregions or distributed across the auditory cortex remains unclear. Recent studies using multivariate pattern analysis (MVPA) of brain activations have provided important insights into how the brain decodes perceptual information. In the large existing literature on brain decoding using MVPA methods, relatively few studies have been conducted on multi-class categorization in the auditory domain. Here, we investigated the representation and processing of auditory categories within the human temporal cortex using high resolution fMRI and MVPA methods. More importantly, we considered decoding multiple sound categories simultaneously through multi-class support vector machine-recursive feature elimination (MSVM-RFE) as our MVPA tool. Results show that for all classifications the model MSVM-RFE was able to learn the functional relation between the multiple sound categories and the corresponding evoked spatial patterns and classify the unlabeled sound-evoked patterns significantly above chance. This indicates the feasibility of decoding multiple sound categories not only within but across subjects. However, the across-subject variation affects classification performance more than the within-subject variation, as the across-subject analysis has significantly lower classification accuracies. Sound category-selective brain maps were identified based on multi-class classification and revealed distributed patterns of brain activity in the superior temporal gyrus and the middle temporal gyrus. This is in accordance with previous studies, indicating that information in the spatially distributed patterns may reflect a more abstract perceptual level of representation of sound categories. 
Further, we show that the across-subject classification performance can be significantly improved by averaging the fMRI images over items, because the irrelevant variations between different items of the same sound category are reduced and in turn the proportion of signals relevant to sound categorization increases.
A comparison of the IGBP DISCover and University of Maryland 1 km global land cover products
Hansen, M.C.; Reed, B.
2000-01-01
Two global 1 km land cover data sets derived from 1992-1993 Advanced Very High Resolution Radiometer (AVHRR) data are currently available, the International Geosphere-Biosphere Programme Data and Information System (IGBP-DIS) DISCover and the University of Maryland (UMd) 1 km land cover maps. This paper makes a preliminary comparison of the methodologies and results of the two products. The DISCover methodology employed an unsupervised clustering classification scheme on a per-continent basis using 12 monthly maximum NDVI composites as inputs. The UMd approach employed a supervised classification tree method in which temporal metrics derived from all AVHRR bands and the NDVI were used to predict class membership across the entire globe. The DISCover map uses the IGBP classification scheme, while the UMd map employs a modified IGBP scheme minus the classes of permanent wetlands, cropland/natural vegetation mosaic, and ice and snow. Global area totals of aggregated vegetation types are very similar and have a per-pixel agreement of 74%. For tall versus short/no vegetation, the per-pixel agreement is 84%. For broad vegetation types, core areas map similarly, while transition zones around core areas differ significantly. This results in high regional variability between the maps. Individual class agreement between the two 1 km maps is 49%. Comparison of the maps at a nominal 0.5° resolution with two global ground-based maps shows an improvement of thematic concurrency of 46% when viewing average class agreement. The absence of the cropland mosaic class creates a difficulty in comparing the maps, due to its significant extent in the DISCover map. The DISCover map, in general, has more forest, while the UMd map has considerably more area in the intermediate tree cover classes of woody savanna/woodland and savanna/wooded grassland.
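The per-pixel agreement statistic that this comparison relies on is simply the fraction of co-located pixels assigned the same class in both products. A minimal sketch, with tiny fabricated class arrays standing in for the two 1 km rasters:

```python
# Per-pixel agreement between two classified maps (toy 1-D stand-ins
# for co-registered rasters; the class names are hypothetical).

def per_pixel_agreement(map_a, map_b):
    """Fraction of pixels assigned the same class in both maps."""
    assert len(map_a) == len(map_b), "maps must be co-registered"
    same = sum(a == b for a, b in zip(map_a, map_b))
    return same / len(map_a)

a = ["forest", "forest", "savanna", "crop"]
b = ["forest", "savanna", "savanna", "crop"]
print(per_pixel_agreement(a, b))  # 3 of 4 pixels agree: 0.75
```

The paper's 74% (aggregated classes) and 49% (individual classes) figures are this statistic computed over the full global grids, after reconciling the two legends.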
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dill, R.F.; Slosson, J.E.; McEachen, D.B.
1990-05-01
A Macintosh II™ computer and commercially available software were used to analyze and depict the topography, construct an isopach sediment thickness map, plot core positions, and locate the geology of an offshore area facing an active landslide on the southern side of Palos Verdes Peninsula, California. Profile data from side scan sonar, 3.5 kHz and Boomer subbottom high-resolution seismic, diving, echo sounder traverses, and cores - all controlled with a mini Ranger II navigation system - were placed in MacGridzo™ and WingZ™ software programs. The computer-plotted data from seven sources were used to construct maps with overlays for evaluating the possibility of a shoreside landslide extending offshore. The poster session describes the offshore survey system and demonstrates the development of the computer data base, its placement into the MacGridzo™ gridding program, and transfer of gridded navigational locations to the WingZ™ data base and graphics program. Data will be manipulated to show how sea-floor features are enhanced and how isopach data were used to interpret the possibility of landslide displacement and Holocene sea level rise. The software permits rapid assessment of data using computerized overlays and a simple, inexpensive means of constructing and evaluating information in map form and the preparation of final written reports. This system could be useful in many other areas where seismic profiles, precision navigational locations, soundings, diver observations, and cores provide a great volume of information that must be compared on regional plots to develop field maps for geological evaluation and reports.
NASA Astrophysics Data System (ADS)
Phoenix, V. R.; Shukla, M.; Vallatos, A.; Riley, M. S.; Tellam, J. H.; Holmes, W. M.
2015-12-01
Manufactured nanoparticles (NPs) are already utilized in a diverse array of applications, including cosmetics, optics, medical technology, textiles and catalysts. Problematically, once in the natural environment, NPs can have a wide range of toxic effects. To protect groundwater from detrimental NPs we must be able to predict nanoparticle movement within the aquifer. The often complex transport behavior of nanoparticles ensures the development of NP transport models is not a simple task. To enhance our understanding of NP transport processes, we utilize novel magnetic resonance imaging (MRI), which enables us to look inside the rock and image the movement of nanoparticles within. For this, we use nanoparticles that are paramagnetic, making them visible to the MRI and enabling us to collect spatially resolved data from which we can develop more robust transport models. In this work, a core of Bentheimer sandstone (3 x 7 cm) was saturated with water and imaged inside a 7 Tesla Bruker Biospec MRI scanner. First, the porosity of the core was mapped using an MSME MRI sequence. Prior to imaging NP transport, the velocity of water (in the absence of nanoparticles) was mapped using an APGSTE-RARE sequence. Nano-magnetite nanoparticles were then pumped into the core and their transport through the core was imaged using a RARE sequence. These images were calibrated using T2 parameter maps to provide fully quantitative maps of nanoparticle concentration at regular time intervals throughout the column (T2 being the spin-spin relaxation time of 1H nuclei). This work demonstrated that we are able to spatially resolve porosity, water velocity and nanoparticle movement inside rock using a single technique (MRI). Significantly, this provides us with a unique and powerful dataset from which we are now developing new models of nanoparticle transport.
Mapping the literature of cytotechnology
Stevens, Sheryl R.
2000-01-01
The major purpose of this study was to identify and assess indexing coverage of core journals in cytotechnology. It was part of a larger project sponsored by the Nursing and Allied Health Resources Section of the Medical Library Association to map the literature of allied health. Three representative journals in cytotechnology were selected and subjected to citation analysis to determine what journals, other publication types, and years were cited and how often. Bradford's Law of Scattering was applied to the resulting list of cited journals to identify core titles in the discipline, and five indexes were searched to assess coverage of these core titles. Results indicated that the cytotechnology journal literature had a small core but wide dispersion: one third of the 21,021 journal citations appeared in only 3 titles; another third appeared in an additional 26 titles; the remaining third were scattered in 1,069 different titles. Science Citation Index Expanded rated highest in indexing coverage of the core titles, followed by MEDLINE, EMBASE/Excerpta Medica, HealthSTAR, and Cumulative Index to Nursing and Allied Health Literature (CINAHL). The study's results also showed that journals were the predominantly cited format and that citing authors relied strongly on more recent literature. PMID:10783973
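The Bradford partitioning described above can be sketched directly: rank journals by citations received, then cut the ranked list into zones each holding roughly an equal share of the total citations. The function name and toy counts below are illustrative, not the study's data:

```python
def bradford_zones(citation_counts, n_zones=3):
    """Partition journals into Bradford zones, each holding roughly an
    equal share of total citations. citation_counts maps a journal
    name to the number of citations it received."""
    ranked = sorted(citation_counts.items(), key=lambda kv: -kv[1])
    total = sum(citation_counts.values())
    zones, current, cum = [], [], 0
    for journal, count in ranked:
        current.append(journal)
        cum += count
        # close a zone once the cumulative share crosses the next third
        if cum >= total * (len(zones) + 1) / n_zones and len(zones) < n_zones - 1:
            zones.append(current)
            current = []
    zones.append(current)
    return zones

# A steeply skewed toy distribution: a few journals carry most citations.
counts = {"J1": 300, "J2": 250, "J3": 150, "J4": 90, "J5": 80,
          "J6": 60, "J7": 30, "J8": 20, "J9": 10, "J10": 10}
zones = bradford_zones(counts)
print([len(z) for z in zones])  # [2, 1, 7]: small core, wide dispersion
```

The widening zone sizes (here 2, 1, 7 titles) mirror the pattern reported in the study: a small core and a long scattered tail.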
Shared protection based virtual network mapping in space division multiplexing optical networks
NASA Astrophysics Data System (ADS)
Zhang, Huibin; Wang, Wei; Zhao, Yongli; Zhang, Jie
2018-05-01
Space Division Multiplexing (SDM) has been introduced to improve the capacity of optical networks. In SDM optical networks there are multiple cores/modes in each fiber link, and spectrum resources are multiplexed in both the frequency and core/mode dimensions. Enabled by network virtualization technology, one SDM optical network substrate can be shared by several virtual network operators. As with point-to-point connection services, virtual networks (VNs) also need a certain degree of survivability to guard against network failures. Based on customers' heterogeneous requirements for the survivability of their virtual networks, this paper studies the shared-protection-based VN mapping problem and proposes a Minimum Free Frequency Slots (MFFS) mapping algorithm to improve spectrum efficiency. Simulation results show that the proposed algorithm optimizes SDM optical networks significantly in terms of blocking probability and spectrum utilization.
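The abstract names the MFFS heuristic without specifying it, but the general idea suggested by the name - place a demand on the core that currently has the fewest free frequency slots but can still fit it, so that fragmentation is concentrated - can be sketched as follows (the data layout and function name are our assumptions, not the paper's algorithm):

```python
def mffs_select(cores, demand):
    """Pick the core with the fewest free frequency slots that can still
    host `demand` contiguous slots (first-fit within each core).
    cores: list of slot arrays, where True marks an occupied slot.
    Returns (core_index, start_slot) or None if the demand is blocked."""
    best = None  # (free_slot_count, core_index, start_slot)
    for idx, slots in enumerate(cores):
        free = slots.count(False)
        run, start = 0, None
        for i, occupied in enumerate(slots):
            if not occupied:
                run += 1
                if run == demand:       # found a large-enough free run
                    start = i - demand + 1
                    break
            else:
                run = 0
        if start is not None and (best is None or free < best[0]):
            best = (free, idx, start)
    return None if best is None else (best[1], best[2])

# Two cores: core 0 has 5 free slots, core 1 has only 3 free slots.
cores = [
    [False] * 5 + [True] * 3,                   # 5 free, contiguous
    [True, False, False, False] + [True] * 4,   # 3 free, contiguous
]
print(mffs_select(cores, demand=2))  # (1, 1): the tighter core wins
```

Keeping small demands in nearly full cores leaves the larger contiguous free runs available for future requests, which is the intuition behind the reported gains in blocking probability.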
Topographic mapping of a hierarchy of temporal receptive windows using a narrated story
Lerner, Y.; Honey, C.J.; Silbert, L.J.; Hasson, U.
2011-01-01
Real-life activities, such as watching a movie or engaging in conversation, unfold over many minutes. In the course of such activities the brain has to integrate information over multiple time scales. We recently proposed that the brain uses similar strategies for integrating information across space and over time. Drawing a parallel with spatial receptive fields (SRFs), we defined the temporal receptive window (TRW) of a cortical microcircuit as the length of time prior to a response during which sensory information may affect that response. Our previous findings in the visual system are consistent with the hypothesis that TRWs become larger when moving from low-level sensory to high-level perceptual and cognitive areas. In this study, we mapped TRWs in auditory and language areas by measuring fMRI activity in subjects listening to a real-life story scrambled at the time scales of words, sentences and paragraphs. Our results revealed a hierarchical topography of TRWs. In early auditory cortices (A1+), brain responses were driven mainly by the momentary incoming input and were similarly reliable across all scrambling conditions. In areas with an intermediate TRW, coherent information at the sentence time scale or longer was necessary to evoke reliable responses. At the apex of the TRW hierarchy we found parietal and frontal areas which responded reliably only when intact paragraphs were heard in a meaningful sequence. These results suggest that the time scale of processing is a functional property that may provide a general organizing principle for the human cerebral cortex. PMID:21414912
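The scrambling manipulation can be illustrated on plain text with a minimal sketch: shuffle the story at word, sentence, or paragraph granularity while leaving each unit intact (splitting sentences on ". " is a crude simplification, and the stimuli in the study were spoken audio, not text):

```python
import random

def scramble(text, unit="sentence", seed=0):
    """Shuffle a story at one of three timescales: 'word', 'sentence',
    or 'paragraph'. Content within each unit is preserved; only the
    order of units changes. Illustrative only."""
    if unit == "word":
        parts, joiner = text.split(), " "
    elif unit == "sentence":
        parts, joiner = [s for s in text.replace("\n", " ").split(". ") if s], ". "
    elif unit == "paragraph":
        parts, joiner = [p for p in text.split("\n\n") if p], "\n\n"
    else:
        raise ValueError(f"unknown unit: {unit}")
    rng = random.Random(seed)  # seeded for a reproducible stimulus
    rng.shuffle(parts)
    return joiner.join(parts)

story = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."
print(scramble(story, unit="paragraph", seed=42))
```

Word-level scrambling destroys coherence at every timescale longer than a word, while paragraph-level scrambling preserves sentence- and paragraph-scale structure - exactly the gradation the TRW mapping exploits.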
Initial Results With Image-guided Cochlear Implant Programming in Children.
Noble, Jack H; Hedley-Williams, Andrea J; Sunderhaus, Linsey; Dawant, Benoit M; Labadie, Robert F; Camarata, Stephen M; Gifford, René H
2016-02-01
Image-guided cochlear implant (CI) programming can improve hearing outcomes for pediatric CI recipients. CIs have been highly successful for children with severe-to-profound hearing loss, offering potential for mainstreamed education and auditory-oral communication. Despite this, a significant number of recipients still experience poor speech understanding and language delay, and, even among the best performers, restoration to normal auditory fidelity is rare. Although significant research efforts have been devoted to improving stimulation strategies, few developments have led to significant hearing improvement over the past two decades. Recently introduced techniques for image-guided CI programming (IGCIP) permit creating patient-customized CI programs by making it possible, for the first time, to estimate the position of implanted CI electrodes relative to the nerves they stimulate using CT images. This approach permits identification of electrodes with high levels of stimulation overlap and their deactivation from a patient's map. Previous studies have shown that IGCIP can significantly improve hearing outcomes for adults with CIs. The IGCIP technique was tested for 21 ears of 18 pediatric CI recipients. Participants had long-term experience with their CI (5 mo to 13 yr) and ranged in age from 5 to 17 years old. Speech understanding was assessed after approximately 4 weeks of experience with the IGCIP map. Using a two-tailed Wilcoxon signed-rank test, statistically significant improvement (p < 0.05) was observed for word and sentence recognition in quiet and noise, as well as pediatric self-reported quality-of-life (QOL) measures. Our results indicate that image guidance significantly improves hearing and QOL outcomes for pediatric CI recipients.
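The significance test reported above is standard; as an illustration, an exact two-tailed Wilcoxon signed-rank test for a small paired sample can be computed by enumerating all sign patterns under the null. The toy data below are ours, not the study's scores, and zero or tied differences are deliberately not handled:

```python
from itertools import product

def wilcoxon_signed_rank(pre, post):
    """Exact two-tailed Wilcoxon signed-rank test for small paired
    samples. For clarity, zero differences and tied |differences|
    are not handled. Returns (W+, exact two-tailed p)."""
    diffs = [b - a for a, b in zip(pre, post)]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0] * len(diffs)
    for r, i in enumerate(order):
        ranks[i] = r + 1                      # rank 1 = smallest |diff|
    total = len(diffs) * (len(diffs) + 1) // 2
    w_pos = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_obs = min(w_pos, total - w_pos)
    # Under H0 every pattern of signs is equally likely: enumerate all.
    hits, n_patterns = 0, 0
    for signs in product([False, True], repeat=len(diffs)):
        w = sum(r for r, s in zip(ranks, signs) if s)
        if min(w, total - w) <= w_obs:
            hits += 1
        n_patterns += 1
    return w_pos, hits / n_patterns

# All post scores higher than pre: the most extreme outcome for n = 3.
w, p = wilcoxon_signed_rank([10, 12, 14], [15, 18, 21])
print(w, p)  # W+ = 6; exact two-tailed p = 0.25
```

With n = 3 the smallest attainable p is 0.25, which is why studies of this kind need more than a handful of paired observations to reach p < 0.05.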
Multimodality language mapping in patients with left-hemispheric language dominance on Wada test.
Kojima, Katsuaki; Brown, Erik C; Rothermel, Robert; Carlson, Alanna; Matsuzaki, Naoyuki; Shah, Aashit; Atkinson, Marie; Mittal, Sandeep; Fuerst, Darren; Sood, Sandeep; Asano, Eishi
2012-10-01
We determined the utility of electrocorticography (ECoG) and stimulation for detecting language-related sites in patients with left-hemispheric language-dominance on Wada test. We studied 13 epileptic patients who underwent language mapping using event-related gamma-oscillations on ECoG and stimulation via subdural electrodes. Sites showing significant gamma-augmentation during an auditory-naming task were defined as language-related ECoG sites. Sites at which stimulation resulted in auditory perceptual changes, failure to verbalize a correct answer, or sensorimotor symptoms involving the mouth were defined as language-related stimulation sites. We determined how frequently these methods revealed language-related sites in the superior-temporal, inferior-frontal, dorsolateral-premotor, and inferior-Rolandic regions. Language-related sites in the superior-temporal and inferior-frontal gyri were detected by ECoG more frequently than stimulation (p < 0.05), while those in the dorsolateral-premotor and inferior-Rolandic regions were detected by both methods equally. Stimulation of language-related ECoG sites, compared to the others, more frequently elicited language symptoms (p < 0.00001). One patient developed dysphasia requiring in-patient speech therapy following resection of the dorsolateral-premotor and inferior-Rolandic regions containing language-related ECoG sites not otherwise detected by stimulation. Language-related gamma-oscillations may serve as an alternative biomarker of underlying language function in patients with left-hemispheric language-dominance. Measurement of language-related gamma-oscillations is warranted in presurgical evaluation of epileptic patients. Copyright © 2012 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Procedures for central auditory processing screening in schoolchildren.
Carvalho, Nádia Giulian de; Ubiali, Thalita; Amaral, Maria Isabel Ramos do; Santos, Maria Francisca Colella
2018-03-22
Central auditory processing screening in schoolchildren has led to debates in literature, both regarding the protocol to be used and the importance of actions aimed at prevention and promotion of auditory health. Defining effective screening procedures for central auditory processing is a challenge in Audiology. This study aimed to analyze the scientific research on central auditory processing screening and discuss the effectiveness of the procedures utilized. A search was performed in the SciELO and PubMed databases by two researchers. The descriptors used in Portuguese and English were: auditory processing, screening, hearing, auditory perception, children, auditory tests, and their respective terms in Portuguese. Inclusion criteria were: original articles involving schoolchildren, auditory screening of central auditory skills, and articles in Portuguese or English. Exclusion criteria were: studies with adult and/or neonatal populations, peripheral auditory screening only, and duplicate articles. After applying the described criteria, 11 articles were included. At the international level, the central auditory processing screening methods used were: the screening test for auditory processing disorder and its revised version, the screening test for auditory processing, the scale of auditory behaviors, the children's auditory performance scale, and Feather Squadron. In the Brazilian scenario, the procedures used were the simplified auditory processing assessment and Zaidan's battery of tests. At the international level, the screening test for auditory processing and Feather Squadron batteries stand out as the most comprehensive evaluation of hearing skills. At the national level, there is a paucity of studies that use methods evaluating more than four skills and that are normalized by age group. The use of the simplified auditory processing assessment and questionnaires can be complementary in the search for an easy-access and low-cost alternative in the auditory screening of Brazilian schoolchildren.
Interactive tools should be proposed, that allow the selection of as many hearing skills as possible, validated by comparison with the battery of tests used in the diagnosis. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
A Case For Free-free Absorption In The GPS Sources 1321+410 And 0026+346
NASA Astrophysics Data System (ADS)
Marr, Jonathan M.; Perry, T. M.; Read, J. W.; Taylor, G. B.
2010-05-01
We report on the results of VLBI observations of two gigahertz-peaked spectrum sources, 1321+410 and 0026+346, at five frequencies bracketing the spectral peaks. By comparing the three lower-frequency flux-density maps with extrapolations of the high-frequency spectra we obtained maps of the optical depths as a function of frequency. The morphologies of the optical depth maps of 1321+410, at all frequencies, are strikingly uniform, consistent with there being a foreground screen of absorbing gas. We also find that the flux densities across the map fit free-free absorption spectra within the uncertainties. The required free-free optical depths are satisfied with reasonable gas parameters (n_e ≈ 4000 cm^-3, T ≈ 10^4 K, and L ≈ 1 pc). We conclude that the case for free-free absorption in 1321+410 is strong. In 0026+346, there is a compact feature with an inverted spectrum at the highest frequencies which we take to be the core. The optical depth maps, even excluding the possible core component, exhibit a noticeable amount of structure, but the morphology does not correlate with that in the flux-density maps, as would be expected if the absorption were due to synchrotron self-absorption. Additionally, the spectra (except at the core component) are consistent with free-free absorption, to within the uncertainties, and require column depths about one half of that in 1321+410. We conclude that free-free absorption by a relatively thin amount of gas with structure apparent on the scale of our maps in 0026+346 is likely, although the case is weaker than in 1321+410. This research was supported by an award from the Research Corporation, a NASA NY Space Grant, and by a Booth-Ferris Research Fellowship. The VLBA is operated by the National Radio Astronomy Observatory, a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.
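The optical-depth maps follow from the flux ratio: with S_obs = S_ext e^(-tau), each pixel gives tau = ln(S_ext / S_obs), and a free-free absorbing screen scales roughly as tau ∝ nu^-2.1. A minimal sketch of both relations (the frequencies below are illustrative, not the observed bands):

```python
import math

def optical_depth(s_observed, s_extrapolated):
    """Optical depth from the ratio of the observed flux density to the
    flux density extrapolated from the optically thin spectrum:
    S_obs = S_ext * exp(-tau)  =>  tau = ln(S_ext / S_obs)."""
    return math.log(s_extrapolated / s_observed)

def tau_free_free(tau_ref, nu_ref, nu):
    """Scale a free-free optical depth with frequency, tau ~ nu^-2.1."""
    return tau_ref * (nu / nu_ref) ** -2.1

# A pixel whose observed flux is 1/e of the extrapolated value has
# tau = 1; at twice the frequency the same screen is ~4.3x thinner.
tau = optical_depth(2.0 / math.e, 2.0)
print(tau, tau_free_free(tau, 1.4, 2.8))
```

Fitting this nu^-2.1 dependence across the maps, pixel by pixel, is what distinguishes a foreground free-free screen from synchrotron self-absorption, whose opacity tracks the source's own flux structure.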
ERIC Educational Resources Information Center
Bornstein, Joan L.
The booklet outlines ways to help children with learning disabilities in specific subject areas. Characteristic behavior and remedial exercises are listed for seven areas of auditory problems: auditory reception, auditory association, auditory discrimination, auditory figure ground, auditory closure and sound blending, auditory memory, and grammar…
Experience and information loss in auditory and visual memory.
Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K
2017-07-01
Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.
ERIC Educational Resources Information Center
Partnership for 21st Century Skills, 2011
2011-01-01
The Partnership for 21st Century Skills (P21) has forged alliances with key national organizations representing the core academic subjects, including Social Studies, English, Math, Science, Geography, World Languages and the Arts. These collaborations have resulted in the development of 21st Century Skills Maps that illustrate the essential…
Mapping spatial patterns with morphological image processing
Peter Vogt; Kurt H. Riitters; Christine Estreguil; Jacek Kozak; Timothy G. Wade; James D. Wickham
2006-01-01
We use morphological image processing for classifying spatial patterns at the pixel level on binary land-cover maps. Land-cover pattern is classified as 'perforated,' 'edge,' 'patch,' and 'core' with higher spatial precision and thematic accuracy compared to a previous approach based on image convolution, while retaining the...
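The pixel-level classification can be illustrated with a toy version of the core/edge rule (a deliberate simplification of the published morphological method, which also separates 'perforated' and 'patch' classes):

```python
def classify_pixels(grid):
    """Toy pixel classifier in the spirit of morphological pattern
    mapping: a foreground pixel whose 8 neighbours are all foreground
    is 'core'; any other foreground pixel is 'edge'. grid holds 1 for
    land-cover foreground and 0 for background."""
    rows, cols = len(grid), len(grid[0])
    out = [["background"] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if not grid[r][c]:
                continue
            nbrs = [grid[rr][cc]
                    for rr in range(r - 1, r + 2)
                    for cc in range(c - 1, c + 2)
                    if (rr, cc) != (r, c)
                    and 0 <= rr < rows and 0 <= cc < cols]
            # map-boundary pixels lack a full neighbourhood -> 'edge'
            out[r][c] = "core" if len(nbrs) == 8 and all(nbrs) else "edge"
    return out

# A solid 3x3 patch: only the centre pixel is interior 'core'.
labels = classify_pixels([[1, 1, 1], [1, 1, 1], [1, 1, 1]])
print(labels[1][1], labels[0][0])  # core edge
```

Because the rule is purely local (a fixed structuring element per pixel), it delivers the per-pixel precision the abstract contrasts with coarser convolution-based approaches.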
Auditory Learning. Dimensions in Early Learning Series.
ERIC Educational Resources Information Center
Zigmond, Naomi K.; Cicci, Regina
The monograph discusses the psycho-physiological operations for processing of auditory information, the structure and function of the ear, the development of auditory processes from fetal responses through discrimination, language comprehension, auditory memory, and auditory processes related to written language. Disorders of auditory learning…