Milner, Rafał; Rusiniak, Mateusz; Lewandowska, Monika; Wolak, Tomasz; Ganc, Małgorzata; Piątkowska-Janko, Ewa; Bogorodzki, Piotr; Skarżyński, Henryk
2014-01-01
Background The neural underpinnings of auditory information processing have often been investigated using the odd-ball paradigm, in which infrequent sounds (deviants) are presented within a regular train of frequent stimuli (standards). Traditionally, this paradigm has been applied using either high temporal resolution (EEG) or high spatial resolution (fMRI, PET). However, used separately, these techniques cannot provide information on both the location and time course of particular neural processes. The goal of this study was to investigate the neural correlates of auditory processes with a fine spatio-temporal resolution. A simultaneous auditory evoked potentials (AEP) and functional magnetic resonance imaging (fMRI) technique (AEP-fMRI), together with an odd-ball paradigm, was used. Material/Methods Six healthy volunteers, aged 20–35 years, participated in an odd-ball simultaneous AEP-fMRI experiment. AEPs in response to acoustic stimuli were used to model bioelectric intracerebral generators, and electrophysiological results were integrated with fMRI data. Results fMRI activation evoked by standard stimuli was found to occur mainly in the primary auditory cortex. Activity in these regions overlapped with intracerebral bioelectric sources (dipoles) of the N1 component. Dipoles of the N1/P2 complex in response to standard stimuli were also found in the auditory pathway between the thalamus and the auditory cortex. Deviant stimuli induced fMRI activity in the anterior cingulate gyrus, insula, and parietal lobes. Conclusions The present study showed that neural processes evoked by standard stimuli occur predominantly in subcortical and cortical structures of the auditory pathway. Deviants activate areas non-specific for auditory information processing. PMID:24413019
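For readers unfamiliar with the design, the standard/deviant trial sequence used in such odd-ball experiments can be generated in a few lines. A minimal Python sketch follows; the deviant probability and the minimum spacing between deviants are illustrative assumptions, not the parameters of this study.

```python
import numpy as np

def make_oddball_sequence(n_trials=400, p_deviant=0.2, min_gap=2, seed=0):
    """Generate an oddball trial sequence (0 = standard, 1 = deviant).

    Deviants occur with probability p_deviant and are separated by at least
    min_gap standards, a constraint commonly imposed in oddball designs.
    """
    rng = np.random.default_rng(seed)
    sequence, since_last_deviant = [], min_gap
    for _ in range(n_trials):
        if since_last_deviant >= min_gap and rng.random() < p_deviant:
            sequence.append(1)
            since_last_deviant = 0
        else:
            sequence.append(0)
            since_last_deviant += 1
    return np.array(sequence)

trials = make_oddball_sequence()
print("proportion of deviants:", trials.mean())
```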
Morlet, Dominique; Ruby, Perrine; André-Obadia, Nathalie; Fischer, Catherine
2017-11-01
Active paradigms requiring subjects to engage in a mental task on request have been developed to detect consciousness in behaviorally unresponsive patients. Using auditory ERPs, the active condition consists of orienting the patient's attention toward oddball stimuli. In comparison with passive listening, a larger P300 in the active condition identifies voluntary processes. However, the contrast between these two conditions is usually too weak to be detected at the individual level. To improve test sensitivity, we propose as a control condition to actively divert the subject's attention from the auditory stimuli with a mental imagery task that has been demonstrated to be within the grasp of the targeted patients: navigating in one's home. Twenty healthy subjects were presented with a two-tone oddball paradigm in the following three conditions: (a) passive listening, (b) mental imagery, (c) silent counting of deviant stimuli. Mental imagery proved more efficient than passive listening at lessening the P300 response to deviant tones as compared with the active counting condition. An effect of attention manipulation (oriented vs. diverted) was observed in 19/20 subjects, of whom 18 showed the expected P300 effect and 1 showed an effect restricted to the N2 component. The only subject showing no effect also showed insufficient engagement in the tasks. Our study demonstrated the efficiency of diverting attention using mental imagery to improve the sensitivity of the active oddball paradigm. Using recorded instructions and requiring a small number of electrodes, the test was designed to be conveniently and economically used at the patient's bedside. © 2017 Society for Psychophysiological Research.
Stefanics, G; Thuróczy, G; Kellényi, L; Hernádi, I
2008-11-19
We investigated the potential effects of 20-min irradiation from a new-generation Universal Mobile Telecommunication System (UMTS) 3G mobile phone on human event-related potentials (ERPs) in an auditory oddball paradigm. In a double-blind task design, subjects were exposed to either genuine or sham irradiation in two separate sessions. Before and after irradiation, subjects were presented with a random series of 50-ms tone bursts (frequent standards: 1 kHz, P=0.8; rare deviants: 1.5 kHz, P=0.2) at a mean repetition interval of 1500 ms while the electroencephalogram (EEG) was recorded. The subjects' task was to silently count the appearance of targets. The amplitude and latency of the N100, N200, P200 and P300 components for targets and standards were analyzed in 29 subjects. We found no significant effects of electromagnetic field (EMF) irradiation on the amplitude or latency of the above ERP components. In order to study possible effects of EMF on attentional processes, we applied a wavelet-based time-frequency method to analyze the early gamma component of brain responses to auditory stimuli. We found that the early evoked gamma activity was insensitive to UMTS RF exposure. Our results support the notion that a single 20-min irradiation from new-generation 3G mobile phones does not induce measurable changes in the latency or amplitude of ERP components or in oscillatory gamma-band activity in an auditory oddball paradigm.
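The wavelet-based estimation of the evoked gamma component mentioned above can be sketched with a complex Morlet wavelet applied to the averaged ERP. The sampling rate, center frequency, and number of cycles below are illustrative assumptions, not the study's settings.

```python
import numpy as np

def morlet_power(signal, sfreq, freq=40.0, n_cycles=7):
    """Time course of power at freq, via convolution with a complex Morlet wavelet."""
    sigma_t = n_cycles / (2.0 * np.pi * freq)                # width of the Gaussian envelope (s)
    t = np.arange(-5 * sigma_t, 5 * sigma_t, 1.0 / sfreq)    # wavelet support (+/- 5 SD)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))         # unit-energy normalization
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.abs(analytic) ** 2

# Evoked gamma: apply the wavelet to the averaged epoch (random placeholder data here).
sfreq = 1000.0
averaged_erp = np.random.randn(int(0.6 * sfreq))
evoked_gamma_power = morlet_power(averaged_erp, sfreq)
```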
An auditory oddball brain-computer interface for binary choices.
Halder, S; Rea, M; Andreoni, R; Nijboer, F; Hammer, E M; Kleih, S C; Birbaumer, N; Kübler, A
2010-04-01
Brain-computer interfaces (BCIs) provide non-muscular communication for individuals diagnosed with late-stage motoneuron disease (e.g., amyotrophic lateral sclerosis (ALS)). In the final stages of the disease, a BCI cannot rely on the visual modality. This study examined a method to achieve high accuracies using auditory stimuli only. We propose an auditory BCI based on a three-stimulus paradigm. This paradigm is similar to the standard oddball but includes an additional target (i.e., two target stimuli and one frequent stimulus). Three versions of the task were evaluated in which the target stimuli differed in loudness, pitch or direction. Twenty healthy participants achieved an average information transfer rate (ITR) of up to 2.46 bits/min and accuracies of 78.5%. Most subjects (14 of 20) achieved their best performance with targets differing in pitch. With this study, the viability of the paradigm was shown for healthy participants and will next be evaluated with individuals diagnosed with ALS or locked-in syndrome (LIS) after stroke. The BCI presented here offers communication with binary choices (yes/no) independently of vision. As it requires little time per selection, it may constitute a reliable means of communication for patients who have lost all motor function and have a short attention span. 2009 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
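The information transfer rate quoted above is conventionally obtained from accuracy, the number of classes, and the selection speed via the Wolpaw formula. A minimal sketch; the selection rate in the example is a hypothetical value, not a figure taken from the study.

```python
import math

def wolpaw_itr(accuracy, n_classes=2, selections_per_min=10.0):
    """Wolpaw information transfer rate in bits per minute."""
    p, n = accuracy, n_classes
    if p <= 1.0 / n:
        return 0.0
    if p >= 1.0:
        return math.log2(n) * selections_per_min
    bits = math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

# 78.5% accuracy in a binary (yes/no) task; 10 selections/min is an assumed rate.
print(round(wolpaw_itr(0.785), 2), "bits/min")
```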
Diminished N1 Auditory Evoked Potentials to Oddball Stimuli in Misophonia Patients
Schröder, Arjan; van Diepen, Rosanne; Mazaheri, Ali; Petropoulos-Petalas, Diamantis; Soto de Amesti, Vicente; Vulink, Nienke; Denys, Damiaan
2014-01-01
Misophonia (hatred of sound) is a newly defined psychiatric condition in which ordinary human sounds, such as breathing and eating, trigger impulsive aggression. In the current study, we investigated whether a dysfunction in the brain's early auditory processing system could be present in misophonia. We screened 20 patients meeting the diagnostic criteria for misophonia and 14 matched healthy controls, and investigated potential deficits in auditory processing using auditory event-related potentials (ERPs) during an oddball task. Subjects watched a neutral silent movie while being presented with a stream of repeated 1000 Hz standard tones in which oddball tones of 250 and 4000 Hz were randomly embedded. We examined the P1, N1, and P2 components locked to the onset of the tones. In misophonia patients, the N1 peak evoked by the oddball tones had a smaller mean amplitude than in the control group. However, no significant differences were found in the P1 and P2 components evoked by the oddball tones. There were no significant differences between the misophonia patients and their controls in any of the ERP components to the standard tones. The diminished N1 component to oddball tones in misophonia patients suggests an underlying neurobiological deficit in misophonia patients. This reduction might reflect a basic impairment in auditory processing in misophonia patients. PMID:24782731
MANGALATHU-ARUMANA, J.; BEARDSLEY, S. A.; LIEBENTHAL, E.
2012-01-01
The integration of event-related potential (ERP) and functional magnetic resonance imaging (fMRI) can contribute to characterizing neural networks with high temporal and spatial resolution. This research aimed to determine the sensitivity and limitations of applying joint independent component analysis (jICA) within-subjects, for ERP and fMRI data collected simultaneously in a parametric auditory frequency oddball paradigm. In a group of 20 subjects, an increase in ERP peak amplitude ranging from 1 to 8 μV in the time window of the P300 (350–700 ms), and a correlated increase in fMRI signal in a network of regions including the right superior temporal and supramarginal gyri, was observed with the increase in deviant frequency difference. jICA of the same ERP and fMRI group data revealed activity in a similar network, albeit with stronger amplitude and larger extent. In addition, activity in the left pre- and post-central gyri, likely associated with the right-hand somato-motor response, was observed only with the jICA approach. Within-subject, the jICA approach revealed significantly stronger and more extensive activity in the brain regions associated with the auditory P300 than the P300 linear regression analysis. The results suggest that with the incorporation of spatial and temporal information from both imaging modalities, jICA may be a more sensitive method for extracting common sources of activity between ERP and fMRI. PMID:22377443
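Conceptually, jICA stacks, for each observation, the ERP time course and the flattened fMRI map into one joint feature vector and decomposes the stacked matrix so that both modalities share a common loading per component. A minimal sketch with random placeholder data and assumed array shapes; it is not the authors' pipeline.

```python
import numpy as np
from scipy.stats import zscore
from sklearn.decomposition import FastICA

# Assumed inputs: one ERP time course and one flattened fMRI map per subject.
n_subjects, n_timepoints, n_voxels = 20, 350, 5000
erp = np.random.randn(n_subjects, n_timepoints)
fmri = np.random.randn(n_subjects, n_voxels)

# Normalize each modality, then concatenate along the feature axis.
joint = np.hstack([zscore(erp, axis=1), zscore(fmri, axis=1)])

ica = FastICA(n_components=5, random_state=0, max_iter=1000)
loadings = ica.fit_transform(joint)          # (n_subjects, n_components) shared mixing weights
components = ica.components_                 # (n_components, n_timepoints + n_voxels)
erp_part = components[:, :n_timepoints]      # temporal portion of each joint component
fmri_part = components[:, n_timepoints:]     # spatial portion of each joint component
```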
Simultaneous ERP and fMRI of the auditory cortex in a passive oddball paradigm.
Liebenthal, Einat; Ellingson, Michael L; Spanaki, Marianna V; Prieto, Thomas E; Ropella, Kristina M; Binder, Jeffrey R
2003-08-01
Infrequent occurrences of a deviant sound within a sequence of repetitive standard sounds elicit the automatic mismatch negativity (MMN) event-related potential (ERP). The main MMN generators are located in the superior temporal cortex, but their number, precise location, and temporal sequence of activation remain unclear. In this study, ERP and functional magnetic resonance imaging (fMRI) data were obtained simultaneously during a passive frequency oddball paradigm. There were three conditions, a STANDARD, a SMALL deviant, and a LARGE deviant. A clustered image acquisition technique was applied to prevent contamination of the fMRI data by the acoustic noise of the scanner and to limit contamination of the electroencephalogram (EEG) by the gradient-switching artifact. The ERP data were used to identify areas in which the blood oxygenation (BOLD) signal varied with the magnitude of the negativity in each condition. A significant ERP MMN was obtained, with larger peaks to LARGE deviants and with frontocentral scalp distribution, consistent with the MMN reported outside the magnetic field. This result validates the experimental procedures for simultaneous ERP/fMRI of the auditory cortex. Main foci of increased BOLD signal were observed in the right superior temporal gyrus [STG; Brodmann area (BA) 22] and right superior temporal plane (STP; BA 41 and 42). The imaging results provide new information supporting the idea that generators in the right lateral aspect of the STG are implicated in processes of frequency deviant detection, in addition to generators in the right and left STP.
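The MMN referred to above is a deviant-minus-standard difference wave. A minimal sketch of measuring its mean amplitude; the measurement window and array layout are assumptions for illustration.

```python
import numpy as np

def mmn_amplitude(standard_epochs, deviant_epochs, times, window=(0.10, 0.25)):
    """Mean amplitude of the deviant-minus-standard difference wave within `window`.

    standard_epochs and deviant_epochs are (n_trials, n_timepoints) arrays from one
    electrode (e.g., Fz); times holds the epoch time axis in seconds.
    """
    difference = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    return difference[mask].mean()
```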
Tavakoli, Paniz; Campbell, Kenneth
2016-10-01
A rarely occurring, highly relevant auditory stimulus outside the current focus of attention can cause a switching of attention. Such attention capture is often studied in oddball paradigms consisting of a frequently occurring "standard" stimulus which is changed at odd times to form a "deviant". The deviant may result in the capturing of attention. An auditory ERP, the P3a, is often associated with this process. Collecting a sufficient amount of data is, however, very time-consuming. A multi-feature "optimal" paradigm has been proposed, but it is not known whether it is appropriate for the study of attention capture. An optimal paradigm was run in which 6 different rare deviants (p=.08) were separated by a standard stimulus (p=.50), and the results were compared with those from 4 separate oddball paradigms. A large P3a was elicited by some of the deviants in the optimal paradigm but not by others. However, very similar results were observed when separate oddball paradigms were run. The present study indicates that the optimal paradigm provides a very time-saving method to study attention capture and the P3a. Copyright © 2016 Elsevier B.V. All rights reserved.
Lifespan differences in nonlinear dynamics during rest and auditory oddball performance.
Müller, Viktor; Lindenberger, Ulman
2012-07-01
Electroencephalographic recordings (EEG) were used to assess age-associated differences in nonlinear brain dynamics during both rest and auditory oddball performance in children aged 9.0-12.8 years, younger adults, and older adults. We computed nonlinear coupling dynamics and dimensional complexity, and also determined spectral alpha power as an indicator of cortical reactivity. During rest, both nonlinear coupling and spectral alpha power decreased with age, whereas dimensional complexity increased. In contrast, when attending to the deviant stimulus, nonlinear coupling increased with age, and complexity decreased. Correlational analyses showed that nonlinear measures assessed during auditory oddball performance were reliably related to an independently assessed measure of perceptual speed. We conclude that cortical dynamics during rest and stimulus processing undergo substantial reorganization from childhood to old age, and propose that lifespan age differences in nonlinear dynamics during stimulus processing reflect lifespan changes in the functional organization of neuronal cell assemblies. © 2012 Blackwell Publishing Ltd.
Justen, Christoph; Herbert, Cornelia
2018-04-19
Numerous studies have investigated the neural underpinnings of passive and active deviance and target detection in the well-known auditory oddball paradigm by means of event-related potentials (ERPs) or functional magnetic resonance imaging (fMRI). The present auditory oddball study investigates the spatio-temporal dynamics of passive versus active deviance and target detection by analyzing amplitude modulations of early and late ERPs while at the same time exploring the neural sources underlying this modulation with standardized low-resolution brain electromagnetic tomography (sLORETA). A 64-channel EEG was recorded from twelve healthy right-handed participants while listening to 'standards' and 'deviants' (500 vs. 1000 Hz pure tones) during a passive (block 1) and an active (block 2) listening condition. During passive listening, participants simply had to listen to the tones. During active listening they had to attend and press a key in response to the deviant tones. Passive and active listening elicited an N1 component, a mismatch negativity (MMN) as a difference potential (whose amplitude overlapped temporally with the N1) and a P3 component. N1/MMN and P3 amplitudes were significantly more pronounced for deviants as compared to standards during both listening conditions. Active listening augmented P3 modulation to deviants significantly compared to passive listening, whereas deviance detection as indexed by N1/MMN modulation was unaffected by the task. During passive listening, sLORETA contrasts (deviants > standards) revealed significant activations in the right superior temporal gyrus (STG) and the lingual gyri bilaterally (N1/MMN) as well as in the left and right insulae (P3). During active listening, significant activations were found for the N1/MMN in the right inferior parietal lobule (IPL) and for the P3 in multiple cortical regions (e.g., precuneus). The results provide evidence for the hypothesis that passive as well as active deviance and
Beauchamp, Chris M.; Stelmack, Robert M.
2006-01-01
The relation between intelligence and speed of auditory discrimination was investigated during an auditory oddball task with backward masking. In target discrimination conditions that varied in the interval between the target and the masking stimuli and in the tonal frequency of the target and masking stimuli, higher ability participants (HA)…
Prediction of Auditory and Visual P300 Brain-Computer Interface Aptitude
Halder, Sebastian; Hammer, Eva Maria; Kleih, Sonja Claudia; Bogdan, Martin; Rosenstiel, Wolfgang; Birbaumer, Niels; Kübler, Andrea
2013-01-01
Objective Brain-computer interfaces (BCIs) provide a non-muscular communication channel for patients with late-stage motoneuron disease (e.g., amyotrophic lateral sclerosis (ALS)) or otherwise motor-impaired people and are also used for motor rehabilitation in chronic stroke. Differences in the ability to use a BCI vary from person to person and from session to session. A reliable predictor of aptitude would allow for the selection of suitable BCI paradigms. For this reason, we investigated whether P300 BCI aptitude could be predicted from a short experiment with a standard auditory oddball. Methods Forty healthy participants performed an electroencephalography (EEG)-based visual and auditory P300-BCI spelling task in a single session. In addition, prior to each session an auditory oddball was presented. Features extracted from the auditory oddball were analyzed with respect to predictive power for BCI aptitude. Results Correlation between auditory oddball response and P300 BCI accuracy revealed a strong relationship between accuracy and N2 amplitude and the amplitude of a late ERP component between 400 and 600 ms. Interestingly, the P3 amplitude of the auditory oddball response was not correlated with accuracy. Conclusions Event-related potentials recorded during a standard auditory oddball session moderately predict aptitude in an auditory P300 BCI and strongly predict aptitude in a visual P300 BCI. The predictor will allow for faster paradigm selection. Significance Our method will reduce strain on patients because unsuccessful training may be avoided, provided the results can be generalized to the patient population. PMID:23457444
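The predictor described above amounts to correlating single ERP features from the auditory oddball with later BCI accuracy across participants. A sketch of the computation; the arrays are random placeholders, purely to show the call.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n2_amplitude = rng.normal(-5.0, 1.5, size=40)   # placeholder N2 amplitudes (microvolts), one per participant
bci_accuracy = rng.uniform(0.4, 1.0, size=40)   # placeholder P300-BCI spelling accuracies

r, p = pearsonr(n2_amplitude, bci_accuracy)
print(f"r = {r:.2f}, p = {p:.3f}")
```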
Robotic and Virtual Reality BCIs Using Spatial Tactile and Auditory Oddball Paradigms
Rutkowski, Tomasz M.
2016-01-01
The paper reviews nine robotic and virtual reality (VR) brain–computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI–lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed in applications to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from the brainwaves in real time using non-invasive electroencephalography (EEG) and translated into purely thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using the user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the feasibility of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms. PMID:27999538
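The BCI-to-robot link described above is UDP-based. A minimal sketch of pushing one decoded command to a robot or virtual agent over UDP; the host, port, and message format are assumptions, not the BCI-lab protocol.

```python
import socket

def send_bci_command(command: str, host: str = "127.0.0.1", port: int = 5005) -> None:
    """Send one decoded BCI command (e.g., 'forward') to a robot or VR agent over UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(command.encode("utf-8"), (host, port))

send_bci_command("forward")
```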
Calhoun, V D; Adali, T; Giuliani, N R; Pekar, J J; Kiehl, K A; Pearlson, G D
2006-01-01
The acquisition of both structural MRI (sMRI) and functional MRI (fMRI) data for a given study is a very common practice. However, these data are typically examined in separate analyses, rather than in a combined model. We propose a novel methodology to perform independent component analysis across image modalities, specifically, gray matter images and fMRI activation images as well as a joint histogram visualization technique. Joint independent component analysis (jICA) is used to decompose a matrix with a given row consisting of an fMRI activation image resulting from auditory oddball target stimuli and an sMRI gray matter segmentation image, collected from the same individual. We analyzed data collected on a group of schizophrenia patients and healthy controls using the jICA approach. Spatially independent joint-components are estimated and resulting components were further analyzed only if they showed a significant difference between patients and controls. The main finding was that group differences in bilateral parietal and frontal as well as posterior temporal regions in gray matter were associated with bilateral temporal regions activated by the auditory oddball target stimuli. A finding of less patient gray matter and less hemodynamic activity for target detection in these bilateral anterior temporal lobe regions was consistent with previous work. An unexpected corollary to this finding was that, in the regions showing the largest group differences, gray matter concentrations were larger in patients vs. controls, suggesting that more gray matter may be related to less functional connectivity in the auditory oddball fMRI task. Hum Brain Mapp, 2005. (c) 2005 Wiley-Liss, Inc.
Xin, Zhao; Ting, Liu X.; Yi, Zan X.; Li, Dai; Bao, Zhou A.
2015-01-01
Behavioral inhibitory control has been shown to play an important role in a variety of addictive behaviors. A number of studies involving the use of Go/NoGo and stop-signal paradigms have shown that smokers have reduced response inhibition for cigarette-related cues. However, it is not known whether male light smokers’ response inhibition for cigarette-related cues is lower than that of non-smokers in the two-choice oddball paradigm. The objective of the current study was to provide further behavioral evidence of male light smokers’ impaired response inhibition for cigarette-related cues, using the two-choice oddball paradigm. Sixty-two male students (31 smokers, 31 non-smokers), who were recruited via an advertisement, took part in this two-choice oddball experiment. Cigarette-related pictures (deviant stimuli) and pictures unrelated to cigarettes (standard stimuli) were used. Response inhibition for cigarette-related cues was measured by comparing accuracy (ACC) and reaction time (RT) for deviant and standard stimuli in the two groups of subjects. An analysis of variance (ANOVA) showed that in all the participants, ACC was significantly lower for deviant stimuli than for standard stimuli. For deviant stimuli, the RTs were significantly longer for male light smokers than for male non-smokers; however, there was no significant difference in RTs for standard stimuli. Compared to male non-smokers, male light smokers seem to have a reduced ability to inhibit responses to cigarette-related cues. PMID:26528200
Yang, Ming-Tao; Hsu, Chun-Hsien; Yeh, Pei-Wen; Lee, Wang-Tso; Liang, Jao-Shwann; Fu, Wen-Mei; Lee, Chia-Ying
2015-01-01
Inattention (IA) has been a major problem in children with attention deficit/hyperactivity disorder (ADHD), accounting for their behavioral and cognitive dysfunctions. However, there are at least three processing steps underlying attentional control for auditory change detection, namely pre-attentive change detection, involuntary attention orienting, and attention reorienting for further evaluation. This study aimed to examine whether children with ADHD would show deficits in any of these subcomponents by using mismatch negativity (MMN), P3a, and late discriminative negativity (LDN) as event-related potential (ERP) markers, under the passive auditory oddball paradigm. Two types of stimuli (pure tones and Mandarin lexical tones) were used to examine whether the deficits were general across linguistic and non-linguistic domains. Participants included 15 native Mandarin-speaking children with ADHD and 16 age-matched controls (across groups, age ranged between 6 and 15 years). Two passive auditory oddball paradigms (lexical tones and pure tones) were applied. The pure tone oddball paradigm included a standard stimulus (1000 Hz, 80%) and two deviant stimuli (1015 and 1090 Hz, 10% each). The Mandarin lexical tone oddball paradigm's standard stimulus was /yi3/ (80%) and its two deviant stimuli were /yi1/ and /yi2/ (10% each). The results showed no MMN difference, but did show attenuated P3a and enhanced LDN to the large deviants for both pure and lexical tone changes in the ADHD group. Correlation analysis showed that children with higher ADHD tendency, as indexed by parents' and teachers' ratings of ADHD symptoms, showed less positive P3a amplitudes when responding to large lexical tone deviants. Thus, children with ADHD showed impaired auditory change detection for both pure tones and lexical tones in both involuntary attention switching and attention reorienting for further evaluation. These ERP markers may therefore be used for the evaluation of anti-ADHD drugs that aim to
Bachiller, Alejandro; Poza, Jesús; Gómez, Carlos; Molina, Vicente; Suazo, Vanessa; Hornero, Roberto
2015-02-01
Objective. The aim of this research is to explore the coupling patterns of brain dynamics during an auditory oddball task in schizophrenia (SCH). Approach. Event-related electroencephalographic (ERP) activity was recorded from 20 SCH patients and 20 healthy controls. The coupling changes between auditory response and pre-stimulus baseline were calculated in conventional EEG frequency bands (theta, alpha, beta-1, beta-2 and gamma), using three coupling measures: coherence, phase-locking value and Euclidean distance. Main results. Our results showed a statistically significant increase from baseline to response in theta coupling and a statistically significant decrease in beta-2 coupling in controls. No statistically significant changes were observed in SCH patients. Significance. Our findings support the aberrant salience hypothesis, since SCH patients failed to change their coupling dynamics between stimulus response and baseline when performing an auditory cognitive task. This result may reflect an impaired communication among neural areas, which may be related to abnormal cognitive functions.
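Of the coupling measures listed, the phase-locking value is the simplest to sketch: it is the length of the mean phase-difference vector between two signals. The sketch below assumes the inputs have already been band-pass filtered to the band of interest (e.g., theta).

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV between two band-limited signals of equal length (1.0 = perfect phase locking)."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
```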
2011-01-01
Background The measurement of electrical brain signals with event-related potentials (ERPs) during auditory and visual oddball paradigms is recommended for examining neuronal activity in schizophrenic patients and normal subjects. The aim of this study is to discriminate the activation changes evoked by different auditory and visual stimuli between schizophrenic patients and normal subjects. Methods Forty-three schizophrenic patients were selected as the experimental group, and 40 healthy subjects with no medical history of psychiatric disease, neurological disease, or drug abuse were recruited as the control group. Auditory and visual ERPs were studied with an oddball paradigm. All data were analyzed with SPSS statistical software version 10.0. Results In the comparison of auditory and visual ERPs between the schizophrenic patients and healthy subjects, the P300 amplitude and the N100, N200, and P200 latencies at Fz, Cz, and Pz were shown to differ significantly. The cognitive processing reflected by the auditory and visual P300 latency to rare target stimuli is probably an indicator of cognitive function in schizophrenic patients. Conclusions This study shows that the auditory and visual oddball methodology identifies task-relevant sources of activity and allows separation of regions that have different response properties. Our findings indicate that automatic and controlled cognitive processing of visual ERPs may be slower than that of auditory ERPs in schizophrenic patients. The activation changes of visual evoked potentials are more regionally specific than those of auditory evoked potentials. PMID:21542917
Hamm, Jordan P; Ethridge, Lauren E; Shapiro, John R; Pearlson, Godfrey D; Tamminga, Carol A; Sweeney, John A; Keshavan, Matcheri S; Thaker, Gunvant K; Clementz, Brett A
2017-01-01
Objectives Bipolar I disorder is a disabling illness affecting 1% of people worldwide. Family and twin studies suggest that psychotic bipolar disorder (BDP) represents a homogenous subgroup with an etiology distinct from non-psychotic bipolar disorder (BDNP) and partially shared with schizophrenia. Studies of auditory electrophysiology [e.g., paired-stimulus and oddball measured with electroencephalography (EEG)] consistently report deviations in psychotic groups (schizophrenia, BDP), yet such studies comparing BDP and BDNP are sparse and, in some cases, conflicting. Auditory EEG responses are significantly reduced in unaffected relatives of psychosis patients, suggesting that they may relate to both psychosis liability and expression. Methods While 64-sensor EEGs were recorded, age- and gender-matched samples of 70 BDP, 35 BDNP {20 with a family history of psychosis [BDNP(+)]}, and 70 psychiatrically healthy subjects were presented typical auditory paired-stimuli and auditory oddball paradigms. Results Oddball P3b reductions were present and indistinguishable across all patient groups. P2s to paired-stimuli were abnormal only in BDP and BDNP(+). Conversely, N1 reductions to stimuli in both paradigms and P3a reductions were present in both BDP and BDNP(−) groups but were absent in BDNP(+). Conclusions While nearly all auditory neural response components studied were abnormal in BDP, BDNP abnormalities at early- and mid-latencies were moderated by family psychosis history. The relationship between psychosis expression, heritable psychosis risk, and neurophysiology within bipolar disorder, therefore, may be complex. Consideration of such clinical disease heterogeneity may be important for future investigations of the pathophysiology of major psychiatric disturbance. PMID:23941660
Anderson, Nathaniel E; Maurer, J Michael; Steele, Vaughn R; Kiehl, Kent A
2018-06-01
Psychopathy is a personality disorder accompanied by abnormalities in emotional processing and attention. Recent theoretical applications of network-based models of cognition have been used to explain the diverse range of abnormalities apparent in psychopathy. Still, the physiological basis for these abnormalities is not well understood. A significant body of work has examined psychopathy-related abnormalities in simple attention-based tasks, but these studies have largely been performed using electrocortical measures, such as event-related potentials (ERPs), and they often have been carried out among individuals with low levels of psychopathic traits. In this study, we examined neural activity using functional magnetic resonance imaging (fMRI) during a simple auditory target detection (oddball) task among 168 incarcerated adult males, with psychopathic traits assessed via the Hare Psychopathy Checklist-Revised (PCL-R). Event-related contrasts demonstrated that the largest psychopathy-related effects were apparent between the frequent standard stimulus condition and a task-off, implicit baseline. Negative correlations with interpersonal-affective dimensions (Factor 1) of the PCL-R were apparent in regions comprising default mode and salience networks. These findings support models of psychopathy describing impaired integration across functional networks. They additionally corroborate reports which have implicated failures of efficient transition between default mode and task-positive networks. Finally, they demonstrate a neurophysiological basis for abnormal mobilization of attention and reduced engagement with stimuli that have little motivational significance among those with high psychopathic traits.
Liao, Hsin-I; Yoneya, Makoto; Kidani, Shunsuke; Kashino, Makio; Furukawa, Shigeto
2016-01-01
A unique sound that deviates from a repetitive background sound induces signature neural responses, such as mismatch negativity and the novelty P3 response, in electroencephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residual attention to the to-be-ignored oddballs due to their concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR appears to be independent of attention. PMID:26924959
Cortical processing of speech in individuals with auditory neuropathy spectrum disorder.
Apeksha, Kumari; Kumar, U Ajith
2018-06-01
Auditory neuropathy spectrum disorder (ANSD) is a condition in which cochlear amplification function (involving outer hair cells) is normal but neural conduction in the auditory pathway is disordered. This study was done to investigate the cortical representation of speech in individuals with ANSD and to compare it with that of individuals with normal hearing. Forty-five participants, including 21 individuals with ANSD and 24 individuals with normal hearing, were considered for the study. Individuals with ANSD had hearing thresholds ranging from normal hearing to moderate hearing loss. Auditory cortical evoked potentials were recorded in an oddball paradigm for the /ba/-/da/ stimuli, using 64 electrodes placed on the scalp. Onset cortical responses were also recorded in a repetitive paradigm using /da/ stimuli. Sensitivity and reaction time required to identify the oddball stimuli were also obtained. Behavioural results indicated that individuals in the ANSD group had significantly lower sensitivity and longer reaction times compared to individuals with normal hearing sensitivity. Reliable P300 could be elicited in both groups. However, a significant difference in scalp topographies was observed between the two groups in both the repetitive and oddball paradigms. Source localization using local autoregressive analyses revealed that activations were more diffuse in individuals with ANSD when compared to individuals with normal hearing sensitivity. Results indicated that the brain networks and regions activated in individuals with ANSD during detection and discrimination of speech sounds are different from those of normal hearing individuals. In general, normal hearing individuals showed more focused activations, while in individuals with ANSD activations were diffuse.
Lebedeva, I S; Akhadov, T A; Petriaĭkin, A V; Kaleda, V G; Barkhatova, A N; Golubev, S A; Rumiantseva, E E; Vdovenko, A M; Fufaeva, E A; Semenova, N A
2011-01-01
Six patients in a state of remission after the first episode of juvenile schizophrenia and seven sex- and age-matched mentally healthy subjects were examined by fMRI and ERP methods. The auditory oddball paradigm was applied. Differences in P300 parameters did not reach the level of significance; however, a significantly higher hemodynamic response to target stimuli was found in patients bilaterally in the supramarginal gyrus and in the right medial frontal gyrus, which points to pathology of these brain areas in supporting auditory selective attention.
Action-related auditory ERP attenuation: Paradigms and hypotheses.
Horváth, János
2015-11-11
A number of studies have shown that the auditory N1 event-related potential (ERP) is attenuated when elicited by self-induced or self-generated sounds. Because N1 is a correlate of auditory feature- and event-detection, it was generally assumed that N1 attenuation reflected the cancellation of auditory re-afference, enabled by the internal forward modeling of the predictable sensory consequences of the given action. Focusing on paradigms utilizing non-speech actions, the present review summarizes recent progress on action-related auditory attenuation. Following a critical analysis of the most widely used, contingent paradigm, two further hypotheses on the possible causes of action-related auditory ERP attenuation are presented. The attention hypotheses suggest that auditory ERP attenuation is brought about by a temporary division of attention between the action and the auditory stimulation. The pre-activation hypothesis suggests that the attenuation is caused by the activation of a sensory template during the initiation of the action, which interferes with the incoming stimulation. Although each hypothesis can account for a number of findings, none of them can accommodate the whole spectrum of results. It is suggested that a better understanding of auditory ERP attenuation phenomena could be achieved by systematic investigations of the types of actions, the degree of action-effect contingency, and the temporal characteristics of action-effect contingency representation buildup and deactivation. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2015. Published by Elsevier B.V.
The human auditory evoked response
Galambos, R.
1974-01-01
Figures are presented of computer-averaged auditory evoked responses (AERs) that point to the existence of a completely endogenous brain event. A series of regular clicks or tones was administered to the ear, and 'odd-balls' of different intensity or frequency, respectively, were included. Subjects were asked either to ignore the sounds (to read or do something else) or to attend to the stimuli. When they listened and counted the odd-balls, a P3 wave occurred at 300 msec after the stimulus. When the odd-balls consisted of omitted clicks or tone bursts, a similar response was observed. This could not have come from the auditory nerve, but only from the cortex. It is evidence of recognition, a conscious process.
Usage of drip drops as stimuli in an auditory P300 BCI paradigm.
Huang, Minqiang; Jin, Jing; Zhang, Yu; Hu, Dewen; Wang, Xingyu
2018-02-01
Many auditory BCIs currently use beeps as auditory stimuli, although beeps sound unnatural and unpleasant to some people. Natural sounds have been shown to make people feel comfortable, decrease fatigue, and improve the performance of auditory BCI systems. Drip drops are a kind of natural sound that makes humans feel relaxed and comfortable. In this work, three kinds of drip drops were used as stimuli in an auditory-based BCI system to improve the user-friendliness of the system. This study explored whether drip drops could be used as stimuli in the auditory BCI system. The auditory BCI paradigm with drip-drop stimuli, called the drip-drop paradigm (DP), was compared with the auditory paradigm with beep stimuli, known as the beep paradigm (BP), in terms of event-related potential amplitudes, online accuracies, and scores on likability and difficulty, to demonstrate the advantages of DP. DP obtained significantly higher online accuracy and information transfer rate than BP (p < 0.05, Wilcoxon signed-rank test; p < 0.05, Wilcoxon signed-rank test). In addition, DP obtained higher scores on likability (p < 0.05, Wilcoxon signed-rank test), with no significant difference in difficulty. The results showed that drip drops are reliable acoustic materials for use as stimuli in an auditory BCI system.
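The paired comparisons reported above rely on the Wilcoxon signed-rank test across subjects. A sketch of the computation with placeholder accuracy arrays (random values, purely illustrative).

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
acc_dp = rng.uniform(0.6, 1.0, size=12)   # placeholder per-subject online accuracies, drip-drop paradigm
acc_bp = rng.uniform(0.5, 0.9, size=12)   # placeholder per-subject online accuracies, beep paradigm

stat, p = wilcoxon(acc_dp, acc_bp)        # paired, non-parametric comparison
print(f"W = {stat:.1f}, p = {p:.3f}")
```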
Tağluk, M E; Cakmak, E D; Karakaş, S
2005-04-30
Cognitive brain responses to external stimuli, as measured by event-related potentials (ERPs), have been analyzed from a variety of perspectives to investigate brain dynamics. Here, the brain responses of healthy subjects to auditory oddball paradigms (standard and deviant stimuli), recorded at the Fz electrode site, were studied using a short-term version of the smoothed Wigner-Ville distribution (STSW) method. A smoothing kernel was designed to preserve the auto energy of the signal with maximum time and frequency resolutions. Analysis was conducted mainly on the time-frequency distributions (TFDs) of sweeps recorded during successive trials, including the TFD of averaged single sweeps as the evoked time-frequency (ETF) brain response and the average of TFDs of single sweeps as the time-frequency (TF) brain response. Also, the power entropy and the phase angles of the signal at frequency f and time t locked to the stimulus onset were studied across single trials as the TF power-locked and the TF phase-locked brain responses, respectively. TFDs represented in this way demonstrated the ERP spectro-temporal characteristics from multiple perspectives. The time-varying energy of the individual components manifested interesting TF structures in the form of amplitude-modulated (AM) and frequency-modulated (FM) energy bursts. The TF power-locked and phase-locked brain responses provoked ERP energies in a manner modulated by cognitive functions, an observation requiring further investigation. These results may lead to a better understanding of integrative brain dynamics.
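At the core of this analysis is the Wigner-Ville distribution of each sweep. The sketch below implements the plain discrete WVD of an analytic signal; the smoothing kernel and short-term windowing described in the study are omitted, so treat it only as an illustration of the underlying transform.

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(signal):
    """Discrete Wigner-Ville distribution of a real 1-D signal.

    Returns an (n, n) array over (frequency, time); row k corresponds to
    frequency k * fs / (2 * n). The analytic signal is used to suppress
    interference from negative frequencies.
    """
    x = hilbert(np.asarray(signal, dtype=float))
    n = len(x)
    wvd = np.zeros((n, n))
    for t in range(n):
        tau_max = min(t, n - 1 - t)
        tau = np.arange(-tau_max, tau_max + 1)
        acf = np.zeros(n, dtype=complex)            # instantaneous autocorrelation at time t
        acf[tau % n] = x[t + tau] * np.conj(x[t - tau])
        wvd[:, t] = np.real(np.fft.fft(acf))
    return wvd
```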
Pre-Attentive Auditory Processing of Lexicality
Jacobsen, Thomas; Horvath, Janos; Schroger, Erich; Lattner, Sonja; Widmann, Andreas; Winkler, Istvan
2004-01-01
The effects of lexicality on auditory change detection based on auditory sensory memory representations were investigated by presenting oddball sequences of repeatedly presented stimuli, while participants ignored the auditory stimuli. In a cross-linguistic study of Hungarian and German participants, stimulus sequences were composed of words that…
Do resting brain dynamics predict oddball evoked-potential?
2011-01-01
Background The oddball paradigm is widely applied to the investigation of cognitive function in neuroscience and in neuropsychiatry. Whether cortical oscillation in the resting state can predict the elicited oddball event-related potential (ERP) is still not clear. This study explored the relationship between resting electroencephalography (EEG) and oddball ERPs. The regional powers of 18 electrodes across delta, theta, alpha and beta frequencies were correlated with the amplitude and latency of the N1, P2, N2 and P3 components of oddball ERPs. A multivariate analysis based on partial least squares (PLS) was applied to further examine the spatial pattern revealed by multiple correlations. Results Higher synchronization in the resting state, especially in the alpha band, is associated with higher neural responsiveness and faster neural propagation, as indicated by the higher amplitude change of N1/N2 and shorter latency of P2. None of the resting quantitative EEG indices predicts P3 latency or amplitude. The PLS analysis confirms that the resting cortical dynamics which explains N1/N2 amplitude and P2 latency does not show regional specificity, indicating a global property of the brain. Conclusions This study differs from previous approaches by relating dynamics in the resting state to neural responsiveness in the activation state. Our analyses suggest that the neural characteristics carried by resting brain dynamics modulate the earlier/automatic stage of target detection. PMID:22114868
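The resting-state regional powers entered into these correlations can be computed per electrode with Welch's method. The band limits below are common conventions assumed here for illustration, not necessarily those used in the study.

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, sfreq):
    """Absolute power per frequency band for one resting-state EEG channel."""
    freqs, psd = welch(signal, fs=sfreq, nperseg=int(2 * sfreq))
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = np.trapz(psd[mask], freqs[mask])   # integrate the PSD over the band
    return powers
```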
Emri, Miklós; Glaub, Teodóra; Berecz, Roland; Lengyel, Zsolt; Mikecz, Pál; Repa, Imre; Bartók, Eniko; Degrell, István; Trón, Lajos
2006-05-01
Cognitive deficit is an essential feature of schizophrenia. One of the generally used simple cognitive tasks to characterize specific cognitive dysfunctions is the auditory "oddball" paradigm. During this task, two different tones are presented with different repetition frequencies and the subject is asked to pay attention and to respond to the less frequent tone. The aim of the present study was to apply positron emission tomography (PET) to measure the regional brain blood flow changes induced by an auditory oddball task in healthy volunteers and in stable schizophrenic patients in order to detect activation differences between the two groups. Eight healthy volunteers and 11 schizophrenic patients were studied. The subjects carried out a specific auditory oddball task while cerebral activation, measured via the regional distribution of [15O]-butanol activity changes in the PET camera, was recorded. Task-related activation differed significantly across the patients and controls. The healthy volunteers displayed significant activation in the anterior cingulate area (Brodmann Area - BA32), while in the schizophrenic patients the area was wider, including the mediofrontal regions (BA32 and BA10). The distance between the locations of maximal activation of the two populations was 33 mm and the cluster size was about twice as large in the patient group. The present results demonstrate that the perfusion changes induced in the schizophrenic patients by this cognitive task extend over a larger part of the mediofrontal cortex than in the healthy volunteers. The different pattern of activation observed during the auditory oddball task in the schizophrenic patients suggests that a larger cortical area, and consequently a larger variety of neuronal networks, is involved in the cognitive processes in these patients. The dispersion of stimulus processing during a cognitive task requiring sustained attention and stimulus discrimination may play an important role in the
Aliakbaryhosseinabadi, Susan; Kostic, Vladimir; Pavlovic, Aleksandra; Radovanovic, Sasa; Nlandu Kamavuako, Ernest; Jiang, Ning; Petrini, Laura; Dremstrup, Kim; Farina, Dario; Mrachacz-Kersting, Natalie
2017-01-01
In this study, we analyzed the influence of artificially imposed attention variations, using the auditory oddball paradigm, on the cortical activity associated with motor preparation/execution. EEG signals from Cz and its surrounding channels were recorded during three sets of ankle dorsiflexion movements. Each set was interspersed with either a complex or a simple auditory oddball task for healthy participants and a complex auditory oddball task for stroke patients. The amplitude of the movement-related cortical potentials (MRCPs) decreased with the complex oddball paradigm, while MRCP variability increased. Both oddball paradigms increased the detection latency significantly (p<0.05) and the complex paradigm decreased the true positive rate (TPR) (p=0.04). In patients, the negativity of the MRCP decreased while pre-phase variability increased, and the detection latency and accuracy deteriorated with attention diversion. Attention diversion has a significant influence on MRCP features and detection parameters, although these changes were counteracted by the application of the Laplacian method. Brain-computer interfaces for neuromodulation that use the MRCP as the control signal are robust to changes in attention. However, attention must be monitored since it plays a key role in plasticity induction. Here we demonstrate that this can be achieved using the single channel Cz. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
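The Laplacian derivation mentioned above re-references Cz against the average of its surrounding electrodes. A minimal sketch; the particular set of neighbouring channels is an assumption.

```python
import numpy as np

def small_laplacian(eeg, channel_names, center="Cz",
                    neighbours=("FCz", "CPz", "C1", "C2")):
    """Surface-Laplacian-style derivation: center channel minus the mean of its neighbours.

    eeg is an (n_channels, n_samples) array; channel_names lists its row labels.
    """
    index = {name: i for i, name in enumerate(channel_names)}
    center_signal = eeg[index[center]]
    neighbour_signal = np.mean([eeg[index[n]] for n in neighbours], axis=0)
    return center_signal - neighbour_signal
```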
Auditory attention strategy depends on target linguistic properties and spatial configuration
McCloy, Daniel R.; Lee, Adrian K. C.
2015-01-01
Whether crossing a busy intersection or attending a large dinner party, listeners sometimes need to attend to multiple spatially distributed sound sources or streams concurrently. How they achieve this is not clear—some studies suggest that listeners cannot truly simultaneously attend to separate streams, but instead combine attention switching with short-term memory to achieve something resembling divided attention. This paper presents two oddball detection experiments designed to investigate whether directing attention to phonetic versus semantic properties of the attended speech impacts listeners' ability to divide their auditory attention across spatial locations. Each experiment uses four spatially distinct streams of monosyllabic words, variation in cue type (providing phonetic or semantic information), and requiring attention to one or two locations. A rapid button-press response paradigm is employed to minimize the role of short-term memory in performing the task. Results show that differences in the spatial configuration of attended and unattended streams interact with linguistic properties of the speech streams to impact performance. Additionally, listeners may leverage phonetic information to make oddball detection judgments even when oddballs are semantically defined. Both of these effects appear to be mediated by the overall complexity of the acoustic scene. PMID:26233011
Delorme, Arnaud; Polich, John
2013-01-01
Long-term Vipassana meditators sat in meditation vs. a control (instructed mind wandering) state for 25 min while electroencephalography (EEG) was recorded; condition order was counterbalanced. For the last 4 min, a three-stimulus auditory oddball series was presented during both meditation and control periods through headphones, and no task was imposed. Time-frequency analysis demonstrated that meditation, relative to the control condition, evinced decreased evoked delta (2–4 Hz) power to distracter stimuli concomitantly with a greater event-related reduction of late (500–900 ms) alpha-1 (8–10 Hz) activity, which indexed altered dynamics of attentional engagement to distracters. Additionally, standard stimuli were associated with increased early event-related alpha phase synchrony (inter-trial coherence) and evoked theta (4–8 Hz) phase synchrony, suggesting enhanced processing of the habituated standard background stimuli. Finally, during meditation, there was a greater differential early-evoked gamma power to the different stimulus classes. Correlation analysis indicated that this effect stemmed from a meditation state-related increase in early distracter-evoked gamma power and phase synchrony specific to longer-term expert practitioners. The findings suggest that Vipassana meditation evokes a brain state of enhanced perceptual clarity and decreased automated reactivity. PMID:22648958
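Inter-trial coherence, as reported for the standard stimuli, is the length of the mean unit phase vector across trials at each time point of a single-frequency decomposition. A minimal sketch assuming complex wavelet coefficients of shape (n_trials, n_times).

```python
import numpy as np

def inter_trial_coherence(tf_coefficients):
    """ITC time course from complex time-frequency coefficients, shape (n_trials, n_times).

    Values near 1.0 indicate identical phase across trials; values near 0 indicate random phases.
    """
    unit_phases = tf_coefficients / np.abs(tf_coefficients)
    return np.abs(unit_phases.mean(axis=0))
```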
Robson, Holly; Cloutman, Lauren; Keidel, James L; Sage, Karen; Drakesmith, Mark; Welbourne, Stephen
2014-10-01
Auditory discrimination is significantly impaired in Wernicke's aphasia (WA) and thought to be causatively related to the language comprehension impairment which characterises the condition. This study used mismatch negativity (MMN) to investigate the neural responses corresponding to successful and impaired auditory discrimination in WA. Behavioural auditory discrimination thresholds of consonant-vowel-consonant (CVC) syllables and pure tones (PTs) were measured in WA (n = 7) and control (n = 7) participants. Threshold results were used to develop multiple deviant MMN oddball paradigms containing deviants which were either perceptibly or non-perceptibly different from the standard stimuli. MMN analysis investigated differences associated with group, condition and perceptibility as well as the relationship between MMN responses and comprehension (within which behavioural auditory discrimination profiles were examined). MMN waveforms were observable to both perceptible and non-perceptible auditory changes. Perceptibility was only distinguished by MMN amplitude in the PT condition. The WA group could be distinguished from controls by an increase in MMN response latency to CVC stimuli change. Correlation analyses displayed a relationship between behavioural CVC discrimination and MMN amplitude in the control group, where greater amplitude corresponded to better discrimination. The WA group displayed the inverse effect; both discrimination accuracy and auditory comprehension scores were reduced with increased MMN amplitude. In the WA group, a further correlation was observed between the lateralisation of MMN response and CVC discrimination accuracy; the greater the bilateral involvement the better the discrimination accuracy. The results from this study provide further evidence for the nature of auditory comprehension impairment in WA and indicate that the auditory discrimination deficit is grounded in a reduced ability to engage in efficient hierarchical
Verleger, Rolf; Śmigasiewicz, Kamila
2016-01-01
The P3 component of event-related potentials increases when stimuli are rarely presented. It has been assumed that this oddball effect (rare-frequent difference) reflects the unexpectedness of rare stimuli. The assumption of unexpectedness and its link to P3 amplitude were tested here. A standard oddball task requiring alternative key-press responses to frequent and rare stimuli was compared with an oddball-prediction task where stimuli had to be first predicted and then confirmed by key-pressing. Oddball effects in the prediction task depended on whether the frequent or the rare stimulus had been predicted. Oddball effects on P3 amplitudes and error rates in the standard oddball task closely resembled effects after frequent predictions. This corroborates the notion that these effects occur because frequent stimuli are expected and rare stimuli are unexpected. However, a closer look at the prediction task put this notion into doubt because the modifications of oddball effects on P3 by expectancies were entirely due to effects on frequent stimuli, whereas the large P3 amplitudes evoked by rare stimuli were insensitive to predictions (unlike response times and error rates). Therefore, rare stimuli cannot be said to evoke large P3 amplitudes because they are unexpected. We discuss these diverging effects of frequency and expectancy, as well as general differences between tasks, with respect to concepts and hypotheses about P3b's function and conclude that each discussed concept or hypothesis encounters some problems, with a conception in terms of subjective relevance assigned to stimuli offering the most consistent account of these basic effects. PMID:27512527
Nieto-Diego, Javier; Malmierca, Manuel S.
2016-01-01
Stimulus-specific adaptation (SSA) in single neurons of the auditory cortex was suggested to be a potential neural correlate of the mismatch negativity (MMN), a widely studied component of the auditory event-related potentials (ERP) that is elicited by changes in the auditory environment. However, several aspects on this SSA/MMN relation remain unresolved. SSA occurs in the primary auditory cortex (A1), but detailed studies on SSA beyond A1 are lacking. To study the topographic organization of SSA, we mapped the whole rat auditory cortex with multiunit activity recordings, using an oddball paradigm. We demonstrate that SSA occurs outside A1 and differs between primary and nonprimary cortical fields. In particular, SSA is much stronger and develops faster in the nonprimary than in the primary fields, paralleling the organization of subcortical SSA. Importantly, strong SSA is present in the nonprimary auditory cortex within the latency range of the MMN in the rat and correlates with an MMN-like difference wave in the simultaneously recorded local field potentials (LFP). We present new and strong evidence linking SSA at the cellular level to the MMN, a central tool in cognitive and clinical neuroscience. PMID:26950883
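Stimulus-specific adaptation in this literature is often quantified with a common SSA index (CSI) that contrasts a unit's responses to each of two tones when it is deviant versus when it is standard. The abstract does not give the exact metric used in the mapping above, so the sketch below is only a hedged illustration with hypothetical spike counts.

```python
# Minimal sketch of a common SSA index (CSI) used in this literature: compare
# responses to each of two tones when it is the deviant vs. when it is the
# standard. The spike counts are hypothetical, not values from the study.
def csi(d_f1, d_f2, s_f1, s_f2):
    """CSI near +1: responds mainly to deviants (strong SSA); 0: no adaptation."""
    deviant, standard = d_f1 + d_f2, s_f1 + s_f2
    return (deviant - standard) / (deviant + standard)

# Mean spike counts per tone for one recording site (made-up numbers).
print(csi(d_f1=12.0, d_f2=9.0, s_f1=5.0, s_f2=4.0))  # -> 0.4
```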
Melodic multi-feature paradigm reveals auditory profiles in music-sound encoding.
Tervaniemi, Mari; Huotilainen, Minna; Brattico, Elvira
2014-01-01
Musical expertise modulates preattentive neural sound discrimination. However, this evidence to a great extent originates from paradigms using very simple stimulation. Here we use a novel melody paradigm (revealing the auditory profile for six sound parameters in parallel) to compare memory-related mismatch negativity (MMN) and attention-related P3a responses recorded from non-musicians and Finnish folk musicians. MMN emerged in both groups of participants for all sound changes (except for rhythmic changes in non-musicians). In folk musicians, the MMN was enlarged for mistuned sounds when compared with non-musicians. This is taken to reflect their familiarity with pitch information, which plays a key role in Finnish folk music when compared with, e.g., rhythmic information. The MMN was followed by P3a after timbre changes, rhythm changes, and melody transposition. The MMN and P3a topographies differentiated the groups for all sound changes. Thus, the melody paradigm offers a fast and cost-effective means for determining the auditory profile for music-sound encoding and also, importantly, for probing the effects of musical expertise on it.
Nir, Yuval; Vyazovskiy, Vladyslav V.; Cirelli, Chiara; Banks, Matthew I.; Tononi, Giulio
2015-01-01
Sleep entails a disconnection from the external environment. By and large, sensory stimuli do not trigger behavioral responses and are not consciously perceived as they usually are in wakefulness. Traditionally, sleep disconnection was ascribed to a thalamic “gate,” which would prevent signal propagation along ascending sensory pathways to primary cortical areas. Here, we compared single-unit and LFP responses in core auditory cortex as freely moving rats spontaneously switched between wakefulness and sleep states. Despite robust differences in baseline neuronal activity, both the selectivity and the magnitude of auditory-evoked responses were comparable across wakefulness, Nonrapid eye movement (NREM) and rapid eye movement (REM) sleep (pairwise differences <8% between states). The processing of deviant tones was also compared in sleep and wakefulness using an oddball paradigm. Robust stimulus-specific adaptation (SSA) was observed following the onset of repetitive tones, and the strength of SSA effects (13–20%) was comparable across vigilance states. Thus, responses in core auditory cortex are preserved across sleep states, suggesting that evoked activity in primary sensory cortices is driven by external physical stimuli with little modulation by vigilance state. We suggest that sensory disconnection during sleep occurs at a stage later than primary sensory areas. PMID:24323498
Müller, Viktor; Perdikis, Dionysios; von Oertzen, Timo; Sleimen-Malkoun, Rita; Jirsa, Viktor; Lindenberger, Ulman
2016-01-01
Resting-state and task-related recordings are characterized by oscillatory brain activity and widely distributed networks of synchronized oscillatory circuits. Electroencephalographic recordings (EEG) were used to assess network structure and network dynamics during resting state with eyes open and closed, and auditory oddball performance through phase synchronization between EEG channels. For this assessment, we constructed a hyper-frequency network (HFN) based on within- and cross-frequency coupling (WFC and CFC, respectively) at 10 oscillation frequencies ranging between 2 and 20 Hz. We found that CFC generally differentiates between task conditions better than WFC. CFC was the highest during resting state with eyes open. Using a graph-theoretical approach (GTA), we found that HFNs possess small-world network (SWN) topology with a slight tendency to random network characteristics. Moreover, analysis of the temporal fluctuations of HFNs revealed specific network topology dynamics (NTD), i.e., temporal changes of different graph-theoretical measures such as strength, clustering coefficient, characteristic path length (CPL), local, and global efficiency determined for HFNs at different time windows. The different topology metrics showed significant differences between conditions in the mean and standard deviation of these metrics both across time and nodes. In addition, using an artificial neural network approach, we found stimulus-related dynamics that varied across the different network topology metrics. We conclude that functional connectivity dynamics (FCD), or NTD, which was found using the HFN approach during rest and stimulus processing, reflects temporal and topological changes in the functional organization and reorganization of neuronal cell assemblies. PMID:27799906
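The graph-theoretical quantities listed above (strength, clustering coefficient, characteristic path length) can be computed from a weighted connectivity matrix such as a phase-synchronization matrix over channel-frequency nodes. The following sketch illustrates this for one time window using networkx; the random matrix, network size and the 1/weight distance convention are illustrative assumptions rather than the exact pipeline of the study above.

```python
# Minimal sketch: graph-theoretical summary of one network time window,
# starting from a (nodes x nodes) phase-synchronization matrix. The random
# matrix, network size and the 1/weight distance convention are illustrative
# assumptions, not the exact pipeline of the study above.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_nodes = 20                                     # e.g. channel-frequency nodes
sync = rng.uniform(0.1, 1.0, (n_nodes, n_nodes))
sync = (sync + sync.T) / 2                       # symmetric coupling matrix
np.fill_diagonal(sync, 0.0)

G = nx.from_numpy_array(sync)                    # weighted, undirected graph
for _, _, edge in G.edges(data=True):
    edge["distance"] = 1.0 / edge["weight"]      # strong coupling = short path

strength = sync.sum(axis=1)                                     # node strength
clustering = np.mean(list(nx.clustering(G, weight="weight").values()))
cpl = nx.average_shortest_path_length(G, weight="distance")     # CPL
print(strength.mean(), clustering, cpl)
```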
Language impairment is reflected in auditory evoked fields.
Pihko, Elina; Kujala, Teija; Mickos, Annika; Alku, Paavo; Byring, Roger; Korkman, Marit
2008-05-01
Specific language impairment (SLI) is diagnosed when a child has problems in producing or understanding language despite having a normal IQ and there being no other obvious explanation. There can be several associated problems, and no single underlying cause has yet been identified. Some theories propose problems in auditory processing, specifically in the discrimination of sound frequency or rapid temporal frequency changes. We compared automatic cortical speech-sound processing and discrimination between a group of children with SLI and control children with normal language development (mean age: 6.6 years; range: 5-7 years). We measured auditory evoked magnetic fields using two sets of CV syllables, one with a changing consonant /da/ba/ga/ and another one with a changing vowel /su/so/sy/ in an oddball paradigm. The P1m responses for onsets of repetitive stimuli were weaker in the SLI group whereas no significant group differences were found in the mismatch responses. The results indicate that the SLI group, having weaker responses to the onsets of sounds, might have slightly depressed sensory encoding.
Involvement of the human midbrain and thalamus in auditory deviance detection.
Cacciaglia, Raffaele; Escera, Carles; Slabu, Lavinia; Grimm, Sabine; Sanjuán, Ana; Ventura-Campos, Noelia; Ávila, César
2015-02-01
Prompt detection of unexpected changes in the sensory environment is critical for survival. In the auditory domain, the occurrence of a rare stimulus triggers a cascade of neurophysiological events spanning over multiple time-scales. Besides the role of the mismatch negativity (MMN), whose cortical generators are located in supratemporal areas, cumulative evidence suggests that violations of auditory regularities can be detected earlier and lower in the auditory hierarchy. Recent human scalp recordings have shown signatures of auditory mismatch responses at shorter latencies than those of the MMN. Moreover, animal single-unit recordings have demonstrated that rare stimulus changes cause a release from stimulus-specific adaptation in neurons of the primary auditory cortex, the medial geniculate body (MGB), and the inferior colliculus (IC). Although these data suggest that change detection is a pervasive property of the auditory system which may reside upstream of cortical sites, direct evidence for the involvement of subcortical stages in the human auditory novelty system is lacking. Using event-related functional magnetic resonance imaging during a frequency oddball paradigm, we here report that auditory deviance detection occurs in the MGB and the IC of healthy human participants. By implementing a random condition controlling for neural refractoriness effects, we show that auditory change detection in these subcortical stations involves the encoding of statistical regularities from the acoustic input. These results provide the first direct evidence of the existence of multiple mismatch detectors nested at different levels along the human ascending auditory pathway. Copyright © 2015 Elsevier Ltd. All rights reserved.
Donkers, Franc C.L.; Schipul, Sarah E.; Baranek, Grace T.; Cleary, Katherine M.; Willoughby, Michael T.; Evans, Anna M.; Bulluck, John C.; Lovmo, Jeanne E.; Belger, Aysenil
2015-01-01
Neurobiological underpinnings of unusual sensory features in individuals with autism are unknown. Event-related potentials (ERPs) elicited by task-irrelevant sounds were used to elucidate neural correlates of auditory processing and associations with three common sensory response patterns (hyperresponsiveness; hyporesponsiveness; sensory seeking). Twenty-eight children with autism and 39 typically developing children (4–12 year-olds) completed an auditory oddball paradigm. Results revealed marginally attenuated P1 and N2 to standard tones and attenuated P3a to novel sounds in autism versus controls. Exploratory analyses suggested that within the autism group, attenuated N2 and P3a amplitudes were associated with greater sensory seeking behaviors for specific ranges of P1 responses. Findings suggest that attenuated early sensory as well as later attention-orienting neural responses to stimuli may underlie selective sensory features via complex mechanisms. PMID:24072639
Deviance-Related Responses along the Auditory Hierarchy: Combined FFR, MLR and MMN Evidence
Shiga, Tetsuya; Althen, Heike; Cornella, Miriam; Zarnowiec, Katarzyna; Yabe, Hirooki; Escera, Carles
2015-01-01
The mismatch negativity (MMN) provides a correlate of automatic auditory discrimination in human auditory cortex that is elicited in response to violation of any acoustic regularity. Recently, deviance-related responses were found at much earlier cortical processing stages as reflected by the middle latency response (MLR) of the auditory evoked potential, and even at the level of the auditory brainstem as reflected by the frequency following response (FFR). However, no study has reported deviance-related responses in the FFR, MLR and long latency response (LLR) concurrently in a single recording protocol. Amplitude-modulated (AM) sounds were presented to healthy human participants in a frequency oddball paradigm to investigate deviance-related responses along the auditory hierarchy in the ranges of FFR, MLR and LLR. AM frequency deviants modulated the FFR, the Na and Nb components of the MLR, and the LLR eliciting the MMN. These findings demonstrate that it is possible to elicit deviance-related responses at three different levels (FFR, MLR and LLR) in one single recording protocol, highlight the involvement of the whole auditory hierarchy in deviance detection and have implications for cognitive and clinical auditory neuroscience. Moreover, the present protocol provides a new research tool into clinical neuroscience so that the functional integrity of the auditory novelty system can now be tested as a whole in a range of clinical populations where the MMN was previously shown to be defective. PMID:26348628
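One simple way to visualize how a single epoch from such a protocol contributes to the FFR, MLR and LLR ranges is to band-pass filter it into the frequency bands conventionally used to emphasize each response. The cut-off frequencies, sampling rate and synthetic data below are illustrative conventions, not the filters reported by the authors.

```python
# Minimal sketch: emphasizing FFR-, MLR- and LLR-range activity in the same
# single-channel epoch via band-pass filtering. The cut-offs and sampling
# rate are illustrative conventions, not the filters reported by the authors.
import numpy as np
from scipy.signal import butter, filtfilt

sfreq = 5000.0                                    # high rate needed for the FFR
epoch = np.random.randn(int(0.5 * sfreq))         # fake 500-ms deviant epoch

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

ffr_range = bandpass(epoch, 80.0, 1500.0, sfreq)  # frequency-following response
mlr_range = bandpass(epoch, 15.0, 200.0, sfreq)   # middle-latency components
llr_range = bandpass(epoch, 1.0, 30.0, sfreq)     # long-latency (MMN) range
```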
Emotionally negative pictures increase attention to a subsequent auditory stimulus.
Tartar, Jaime L; de Almeida, Kristen; McIntosh, Roger C; Rosselli, Monica; Nash, Allan J
2012-01-01
Emotionally negative stimuli serve as a mechanism of biological preparedness to enhance attention. We hypothesized that emotionally negative stimuli would also serve as motivational priming to increase attention resources for subsequent stimuli. To that end, we tested 11 participants in a dual sensory modality task, wherein emotionally negative pictures were contrasted with emotionally neutral pictures and each picture was followed 600 ms later by a tone in an auditory oddball paradigm. Each trial began with a picture displayed for 200 ms; half of the trials began with an emotionally negative picture and half of the trials began with an emotionally neutral picture; 600 ms following picture presentation, the participants heard either an oddball tone or a standard tone. At the end of each trial (picture followed by tone), the participants categorized, with a button press, the picture and tone combination. As expected, and consistent with previous studies, we found an enhanced visual late positive potential (latency range=300-700 ms) to the negative picture stimuli. We further found that compared to neutral pictures, negative pictures resulted in early attention and orienting effects to subsequent tones (measured through an enhanced N1 and N2) and sustained attention effects only to the subsequent oddball tones (measured through late processing negativity, latency range=400-700 ms). Number pad responses to both the picture and tone category showed the shortest response latencies and greatest percentage of correct picture-tone categorization on the negative picture followed by oddball tone trials. Consistent with previous work on natural selective attention, our results support the idea that emotional stimuli can alter attention resource allocation. This finding has broad implications for human attention and performance as it specifically shows the conditions in which an emotionally negative stimulus can result in extended stimulus evaluation.
Hale, Matthew D; Zaman, Arshad; Morrall, Matthew C H J; Chumas, Paul; Maguire, Melissa J
2018-03-01
Presurgical evaluation for temporal lobe epilepsy routinely assesses speech and memory lateralization and anatomic localization of the motor and visual areas but not baseline musical processing. This is paramount in a musician. Although validated tools exist to assess musical ability, there are no reported functional magnetic resonance imaging (fMRI) paradigms to assess musical processing. We examined the utility of a novel fMRI paradigm in an 18-year-old left-handed pianist who underwent surgery for a left temporal low-grade ganglioglioma. Preoperative evaluation consisted of neuropsychological evaluation, T1-weighted and T2-weighted magnetic resonance imaging, and fMRI. Auditory blood oxygen level-dependent fMRI was performed using a dedicated auditory scanning sequence. Three separate auditory investigations were conducted: listening to, humming, and thinking about a musical piece. All auditory fMRI paradigms activated the primary auditory cortex with varying degrees of auditory lateralization. Thinking about the piece additionally activated the primary visual cortices (bilaterally) and right dorsolateral prefrontal cortex. Humming demonstrated left-sided predominance of auditory cortex activation with activity observed in close proximity to the tumor. This study demonstrated an fMRI paradigm for evaluating musical processing that could form part of preoperative assessment for patients undergoing temporal lobe surgery for epilepsy. Copyright © 2017 Elsevier Inc. All rights reserved.
Effects of Visual Game Experience on Auditory Processing Speed.
Shin, Kyung Soon; Yim, Yoon Kyoung; Kim, Yuwon; Park, Soowon; Lee, Jun-Young
2017-03-01
Games are one of the fastest growing and most exciting forms of entertainment. Whether casual mobile game playing has a cognitive, physiological, or behavioral effect on players whose game use is not pathological is unknown. Here we explored whether preattentive auditory processing is linked to the behavioral inhibition system (BIS) in frequent and infrequent game players. A total of 74 subjects were enrolled in our study and divided into two groups: 40 subjects were frequent gamers and 34 subjects were age-, gender-, IQ-, and education-matched infrequent gamers. All participants underwent a passive auditory oddball paradigm and completed the behavioral inhibition/behavioral activation system scales. The mismatch negativity (MMN) latency was shorter for the frequent gamers relative to the infrequent gamers, whereas no difference in MMN amplitude was found between groups. MMN amplitude was negatively associated with the degree of behavioral inhibition in both the frequent and infrequent gaming groups. We also found that those who frequently play games show an enhanced processing speed, which could be an effect of game practice. Greater behavioral inhibition induces increased vigilance, and this may have enhanced the MMN amplitude in the infrequent gamers. This differential pattern of correlations suggests that differences in the BIS could lead to different approaches to auditory information processing.
Change detection and difference detection of tone duration discrimination.
Okazaki, Shuntaro; Kanoh, Shin'ichiro; Takaura, Kana; Tsukada, Minoru; Oka, Kotaro
2006-03-20
An event-related potential called mismatch negativity is known to provide physiological evidence of sensory memory. Mismatch negativity is believed to represent complicated neuronal mechanisms in a variety of animals and in humans. We employed the auditory oddball paradigm varying sound durations and observed two types of duration mismatch negativity in anesthetized guinea pigs. One was a duration mismatch negativity whose increase in peak amplitude occurred immediately after onset of the stimulus difference in a decrement oddball paradigm. The other exhibited a peak amplitude increase closer to the offset of the longer stimulus in an increment oddball paradigm. These results demonstrated a mechanism for perceiving differences in stimulus duration and revealed the importance of the stimulus offset for this perception.
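Duration MMN of the kind described above is typically measured from the deviant-minus-standard difference wave, with the peak located relative to stimulus onset or offset. The sketch below shows that generic computation; the epoch length, search window and synthetic averages are assumptions, not parameters of this study.

```python
# Minimal sketch: deviant-minus-standard difference wave and its peak latency.
# Epoch length, search window and the synthetic averages are assumptions,
# not parameters of the guinea-pig study above.
import numpy as np

sfreq = 1000.0
times = np.arange(-0.05, 0.35, 1 / sfreq)             # -50 to 350 ms epoch
standard_erp = np.random.randn(times.size) * 0.1      # averaged standard ERP
deviant_erp = np.random.randn(times.size) * 0.1       # averaged deviant ERP

difference = deviant_erp - standard_erp
window = (times >= 0.10) & (times <= 0.25)            # search 100-250 ms
peak_idx = np.argmax(np.abs(difference[window]))
peak_latency_ms = times[window][peak_idx] * 1e3
peak_amplitude = difference[window][peak_idx]
print(peak_latency_ms, peak_amplitude)
```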
ERP evaluation of auditory sensory memory systems in adults with intellectual disability.
Ikeda, Kazunari; Hashimoto, Souichi; Hayashi, Akiko; Kanno, Atsushi
2009-01-01
The auditory sensory memory stage can be functionally divided into two subsystems: a transient-detector system and a permanent feature-detector system (Naatanen, 1992). We assessed these systems in persons with intellectual disability by measuring the event-related potentials (ERPs) N1 and mismatch negativity (MMN), which reflect the two auditory subsystems, respectively. In addition, P3a (an ERP reflecting a stage after sensory memory) was evaluated. Either synthesized vowels or simple tones were delivered during a passive oddball paradigm to adults with and without intellectual disability. ERPs were recorded from midline scalp sites (Fz, Cz, and Pz). Relative to the control group, participants with the disability exhibited greater N1 latency and less MMN amplitude. The results for N1 amplitude and MMN latency were basically comparable between the groups. IQ scores in participants with the disability revealed no significant relation with N1 and MMN measures, whereas the IQ scores tended to increase significantly as P3a latency decreased. These outcomes suggest that persons with intellectual disability might have distinct malfunctions of the two detector systems at the auditory sensory-memory stage. Moreover, the processes following sensory memory might be partly related to a determinant of mental development.
Assessing the validity of subjective reports in the auditory streaming paradigm.
Farkas, Dávid; Denham, Susan L; Bendixen, Alexandra; Winkler, István
2016-04-01
While subjective reports provide a direct measure of perception, their validity is not self-evident. Here, the authors tested three possible biasing effects on perceptual reports in the auditory streaming paradigm: errors due to imperfect understanding of the instructions, voluntary perceptual biasing, and susceptibility to implicit expectations. (1) Analysis of the responses to catch trials separately promoting each of the possible percepts allowed the authors to exclude participants who likely have not fully understood the instructions. (2) Explicit biasing instructions led to markedly different behavior than the conventional neutral-instruction condition, suggesting that listeners did not voluntarily bias their perception in a systematic way under the neutral instructions. Comparison with a random response condition further supported this conclusion. (3) No significant relationship was found between social desirability, a scale-based measure of susceptibility to implicit social expectations, and any of the perceptual measures extracted from the subjective reports. This suggests that listeners did not significantly bias their perceptual reports due to possible implicit expectations present in the experimental context. In sum, these results suggest that valid perceptual data can be obtained from subjective reports in the auditory streaming paradigm.
Prediction of P300 BCI Aptitude in Severe Motor Impairment
Halder, Sebastian; Ruf, Carolin Anne; Furdea, Adrian; Pasqualotto, Emanuele; De Massari, Daniele; van der Heiden, Linda; Bogdan, Martin; Rosenstiel, Wolfgang; Birbaumer, Niels; Kübler, Andrea; Matuz, Tamara
2013-01-01
Brain-computer interfaces (BCIs) provide a non-muscular communication channel for persons with severe motor impairments. Previous studies have shown that the aptitude with which a BCI can be controlled varies from person to person. A reliable predictor of performance could facilitate selection of a suitable BCI paradigm. Eleven severely motor impaired participants performed three sessions of a P300 BCI web browsing task. Before each session auditory oddball data were collected to predict the BCI aptitude of the participants exhibited in the current session. We found a strong relationship of early positive and negative potentials around 200 ms (elicited with the auditory oddball task) with performance. The amplitude of the P2 (r = −0.77) and of the N2 (r = −0.86) had the strongest correlations. Aptitude prediction using an auditory oddball was successful. The finding that the N2 amplitude is a stronger predictor of performance than P3 amplitude was reproduced after initially showing this effect with a healthy sample of BCI users. This will reduce strain on the end-users by minimizing the time needed to find suitable paradigms and inspire new approaches to improve performance. PMID:24204597
Luo, Hao; Ni, Jing-Tian; Li, Zhi-Hao; Li, Xiao-Ou; Zhang, Da-Ren; Zeng, Fan-Gang; Chen, Lin
2006-01-01
In tonal languages such as Mandarin Chinese, a lexical tone carries semantic information and is preferentially processed in the left brain hemisphere of native speakers as revealed by the functional MRI or positron emission tomography studies, which likely measure the temporally aggregated neural events including those at an attentive stage of auditory processing. Here, we demonstrate that early auditory processing of a lexical tone at a preattentive stage is actually lateralized to the right hemisphere. We frequently presented to native Mandarin Chinese speakers a meaningful auditory word with a consonant-vowel structure and infrequently varied either its lexical tone or initial consonant using an odd-ball paradigm to create a contrast resulting in a change in word meaning. The lexical tone contrast evoked a stronger preattentive response, as revealed by whole-head electric recordings of the mismatch negativity, in the right hemisphere than in the left hemisphere, whereas the consonant contrast produced an opposite pattern. Given the distinct acoustic features between a lexical tone and a consonant, this opposite lateralization pattern suggests the dependence of hemisphere dominance mainly on acoustic cues before speech input is mapped into a semantic representation in the processing stream. PMID:17159136
Harris, Jill; Kamke, Marc R
2014-11-01
Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normally-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks attention was directed to either the visual or auditory modality. For the visual stimuli attention boosted the early N1 potential, but this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in the later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Lifespan Differences in Cortical Dynamics of Auditory Perception
Muller, Viktor; Gruber, Walter; Klimesch, Wolfgang; Lindenberger, Ulman
2009-01-01
Using electroencephalographic recordings (EEG), we assessed differences in oscillatory cortical activity during auditory-oddball performance between children aged 9-13 years, younger adults, and older adults. From childhood to old age, phase synchronization increased within and between electrodes, whereas whole power and evoked power decreased. We…
Development of auditory event-related potentials in infants prenatally exposed to methadone.
Paul, Jonathan A; Logan, Beth A; Krishnan, Ramesh; Heller, Nicole A; Morrison, Deborah G; Pritham, Ursula A; Tisher, Paul W; Troese, Marcia; Brown, Mark S; Hayes, Marie J
2014-07-01
Developmental features of the P2 auditory ERP in a change detection paradigm were examined in infants prenatally exposed to methadone. Opiate-dependent pregnant women maintained on methadone replacement therapy were recruited during pregnancy (N = 60). Current and historical alcohol and substance use, SES, and psychiatric status were assessed with a maternal interview during the third trimester. Medical records were used to collect information regarding maternal medications, monthly urinalysis, and breathalyzer results to confirm comorbid drug and alcohol exposures. Between birth and 4 months of age, infant ERP change detection performance was evaluated on one occasion with the oddball paradigm (0.2 oddball probability) using pure-tone stimuli (standard = 1 kHz and oddball = 2 kHz) at the midline electrode sites Fz, Cz, and Pz. Infant groups were examined in the following developmental windows: 4-15, 16-32, or 33-120 days PNA. Older groups showed increased P2 amplitude at Fz and effective change detection performance at P2 not seen in the newborn group. Developmental maturation of amplitude and stimulus discrimination for P2 has been reported in developing infants at all of the ages tested, and the data reported here in the older infants are consistent with typical development. However, it has been previously reported that the P2 amplitude difference is detectable in neonates; therefore, the absence of a difference in P2 amplitude between stimuli in the 4-15 days group may reflect ERP performance impaired by neonatal abstinence syndrome or prenatal methadone exposure. © 2013 Wiley Periodicals, Inc.
Justen, Christoph; Herbert, Cornelia
2016-01-01
So far, neurophysiological studies have investigated implicit and explicit self-related processing particularly for self-related stimuli such as one's own face or name. The present study extends previous research to the implicit processing of self-related movement sounds and explores their spatio-temporal dynamics. Event-related potentials (ERPs) were assessed while participants (N = 12 healthy subjects) listened passively to previously recorded self- and other-related finger snapping sounds, presented either as deviants or standards during an oddball paradigm. Passive listening to low (500 Hz) and high (1000 Hz) pure tones served as additional control. For self- vs. other-related finger snapping sounds, analysis of ERPs revealed significant differences in the time windows of the N2a/MMN and P3. A subsequent source localization analysis with standardized low-resolution brain electromagnetic tomography (sLORETA) revealed increased cortical activation in distinct motor areas such as the supplementary motor area (SMA) in the N2a/mismatch negativity (MMN) as well as the P3 time window during processing of self- and other-related finger snapping sounds. In contrast, brain regions associated with self-related processing [e.g., right anterior/posterior cingulate cortex (ACC/PCC)] as well as the right inferior parietal lobule (IPL) showed increased activation particularly during processing of self- vs. other-related finger snapping sounds in the time windows of the N2a/MMN (ACC/PCC) or the P3 (IPL). None of these brain regions showed enhanced activation while listening passively to low (500 Hz) and high (1000 Hz) pure tones. Taken together, the current results indicate (1) a specific role of motor regions such as SMA during auditory processing of movement-related information, regardless of whether this information is self- or other-related, (2) activation of neural sources such as the ACC/PCC and the IPL during implicit processing of self-related movement stimuli, and (3
The Development of Visual and Auditory Selective Attention Using the Central-Incidental Paradigm.
Conroy, Robert L.; Weener, Paul
Analogous auditory and visual central-incidental learning tasks were administered to 24 students from each of the second, fourth, and sixth grades. The visual tasks served as another modification of Hagen's central-incidental learning paradigm, with the interpretation that focal attention processes continue to develop until the age of 12 or 13…
Xiao, Jun; Xie, Qiuyou; He, Yanbin; Yu, Tianyou; Lu, Shenglin; Huang, Ningmeng; Yu, Ronghao; Li, Yuanqing
2016-09-13
The Coma Recovery Scale-Revised (CRS-R) is a consistent and sensitive behavioral assessment standard for disorders of consciousness (DOC) patients. However, the CRS-R has limitations due to its dependence on behavioral markers, which has led to a high rate of misdiagnosis. Brain-computer interfaces (BCIs), which directly detect brain activities without any behavioral expression, can be used to evaluate a patient's state. In this study, we explored the application of BCIs in assisting CRS-R assessments of DOC patients. Specifically, an auditory passive EEG-based BCI system with an oddball paradigm was proposed to facilitate the evaluation of one item of the auditory function scale in the CRS-R - the auditory startle. The results obtained from five healthy subjects validated the efficacy of the BCI system. Nineteen DOC patients participated in the CRS-R and BCI assessments, of which three patients exhibited no responses in the CRS-R assessment but were responsive to auditory startle in the BCI assessment. These results revealed that a proportion of DOC patients who have no behavioral responses in the CRS-R assessment can generate neural responses, which can be detected by our BCI system. Therefore, the proposed BCI may provide more sensitive results than the CRS-R and thus assist CRS-R behavioral assessments.
Teki, Sundeep; Barnes, Gareth R; Penny, William D; Iverson, Paul; Woodhead, Zoe V J; Griffiths, Timothy D; Leff, Alexander P
2013-06-01
In this study, we used magnetoencephalography and a mismatch paradigm to investigate speech processing in stroke patients with auditory comprehension deficits and age-matched control subjects. We probed connectivity within and between the two temporal lobes in response to phonemic (different word) and acoustic (same word) oddballs using dynamic causal modelling. We found stronger modulation of self-connections as a function of phonemic differences for control subjects versus aphasics in left primary auditory cortex and bilateral superior temporal gyrus. The patients showed stronger modulation of connections from right primary auditory cortex to right superior temporal gyrus (feed-forward) and from left primary auditory cortex to right primary auditory cortex (interhemispheric). This differential connectivity can be explained on the basis of a predictive coding theory which suggests increased prediction error and decreased sensitivity to phonemic boundaries in the aphasics' speech network in both hemispheres. Within the aphasics, we also found behavioural correlates with connection strengths: a negative correlation between phonemic perception and an inter-hemispheric connection (left superior temporal gyrus to right superior temporal gyrus), and positive correlation between semantic performance and a feedback connection (right superior temporal gyrus to right primary auditory cortex). Our results suggest that aphasics with impaired speech comprehension have less veridical speech representations in both temporal lobes, and rely more on the right hemisphere auditory regions, particularly right superior temporal gyrus, for processing speech. Despite this presumed compensatory shift in network connectivity, the patients remain significantly impaired.
Milne, Alice E; Petkov, Christopher I; Wilson, Benjamin
2017-07-05
Language flexibly supports the human ability to communicate using different sensory modalities, such as writing and reading in the visual modality and speaking and listening in the auditory domain. Although it has been argued that nonhuman primate communication abilities are inherently multisensory, direct behavioural comparisons between human and nonhuman primates are scant. Artificial grammar learning (AGL) tasks and statistical learning experiments can be used to emulate ordering relationships between words in a sentence. However, previous comparative work using such paradigms has primarily investigated sequence learning within a single sensory modality. We used an AGL paradigm to evaluate how humans and macaque monkeys learn and respond to identically structured sequences of either auditory or visual stimuli. In the auditory and visual experiments, we found that both species were sensitive to the ordering relationships between elements in the sequences. Moreover, the humans and monkeys produced largely similar response patterns to the visual and auditory sequences, indicating that the sequences are processed in comparable ways across the sensory modalities. These results provide evidence that human sequence processing abilities stem from an evolutionarily conserved capacity that appears to operate comparably across the sensory modalities in both human and nonhuman primates. The findings set the stage for future neurobiological studies to investigate the multisensory nature of these sequencing operations in nonhuman primates and how they compare to related processes in humans. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.
Affective ERP Processing in a Visual Oddball Task: Arousal, Valence, and Gender
Rozenkrants, Bella; Polich, John
2008-01-01
Objective To assess affective event-related brain potentials (ERPs) using visual pictures that were highly distinct on arousal level/valence category ratings and a response task. Methods Images from the International Affective Pictures System (IAPS) were selected to obtain distinct affective arousal (low, high) and valence (negative, positive) rating levels. The pictures were used as target stimuli in an oddball paradigm, with a visual pattern as the standard stimulus. Participants were instructed to press a button whenever a picture occurred and to ignore the standard. Task performance and response time did not differ across conditions. Results High-arousal compared to low-arousal stimuli produced larger amplitudes for the N2, P3, early slow wave, and late slow wave components. Valence amplitude effects were weak overall and originated primarily from the later waveform components and interactions with electrode position. Gender differences were negligible. Conclusion The findings suggest that arousal level is the primary determinant of affective oddball processing, and valence minimally influences ERP amplitude. Significance Affective processing engages selective attentional mechanisms that are primarily sensitive to the arousal properties of emotional stimuli. The application and nature of task demands are important considerations for interpreting these effects. PMID:18783987
Lopez Valdes, Alejandro; Mc Laughlin, Myles; Viani, Laura; Walshe, Peter; Smith, Jaclyn; Zeng, Fan-Gang; Reilly, Richard B.
2014-01-01
Cochlear implants (CIs) can partially restore functional hearing in deaf individuals. However, multiple factors affect CI listener's speech perception, resulting in large performance differences. Non-speech based tests, such as spectral ripple discrimination, measure acoustic processing capabilities that are highly correlated with speech perception. Currently spectral ripple discrimination is measured using standard psychoacoustic methods, which require attentive listening and active response that can be difficult or even impossible in special patient populations. Here, a completely objective cortical evoked potential based method is developed and validated to assess spectral ripple discrimination in CI listeners. In 19 CI listeners, using an oddball paradigm, cortical evoked potential responses to standard and inverted spectrally rippled stimuli were measured. In the same subjects, psychoacoustic spectral ripple discrimination thresholds were also measured. A neural discrimination threshold was determined by systematically increasing the number of ripples per octave and determining the point at which there was no longer a significant difference between the evoked potential response to the standard and inverted stimuli. A correlation was found between the neural and the psychoacoustic discrimination thresholds (R2 = 0.60, p<0.01). This method can objectively assess CI spectral resolution performance, providing a potential tool for the evaluation and follow-up of CI listeners who have difficulty performing psychoacoustic tests, such as pediatric or new users. PMID:24599314
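The thresholding logic described above (increase ripples per octave until the evoked responses to standard and inverted stimuli no longer differ significantly) can be expressed as a simple loop over ripple densities with a paired test at each step. The sketch below only illustrates that logic; the densities, the stand-in amplitude measure and the alpha level are hypothetical, not the authors' exact procedure.

```python
# Minimal sketch of the thresholding logic described above: step up ripple
# density until the responses to standard and inverted stimuli no longer
# differ significantly. Densities, the stand-in amplitude measure and the
# alpha level are hypothetical, not the authors' exact procedure.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
ripple_densities = [0.5, 1, 2, 4, 8]            # ripples per octave (made up)

def evoked_amplitude(density, inverted, n_trials=40):
    """Stand-in for a per-trial evoked-response amplitude measure."""
    separation = max(0.0, 1.5 - 0.4 * density) if inverted else 0.0
    return rng.normal(loc=separation, scale=1.0, size=n_trials)

threshold = None
for density in ripple_densities:
    std_amp = evoked_amplitude(density, inverted=False)
    inv_amp = evoked_amplitude(density, inverted=True)
    _, p = ttest_rel(std_amp, inv_amp)
    if p >= 0.05:                                # no reliable difference left
        threshold = density
        break
print("neural discrimination threshold:", threshold, "ripples/octave")
```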
Zink, Rob; Hunyadi, Borbála; Van Huffel, Sabine; De Vos, Maarten
2016-08-01
Objective. In the past few years there has been a growing interest in studying brain functioning in natural, real-life situations. Mobile EEG allows the brain to be studied in real unconstrained environments, but it faces the intrinsic challenge that it is impossible to disentangle observed changes in brain activity due to the increased cognitive demands of the complex natural environment from those due to physical involvement. In this work we aim to disentangle the influence of cognitive demands and distractions that arise from such outdoor unconstrained recordings. Approach. We evaluate the ERP and single-trial characteristics of a three-class auditory oddball paradigm recorded in outdoor scenarios while pedalling on a fixed bike or biking freely around. In addition we also carefully evaluate the trial-specific motion artifacts through independent gyroscope measurements and control for muscle artifacts. Main results. A decrease in P300 amplitude was observed in the free biking condition as compared to the fixed bike conditions. Above-chance P300 single-trial classification in highly dynamic real-life environments while biking outdoors was achieved. Certain significant artifact patterns were identified in the free biking condition, but neither these nor the increase in movement (as derived from continuous gyroscope measurements) can fully explain the differences in classification accuracy and P300 waveform. The increased cognitive load in real-life scenarios is shown to play a major role in the observed differences. Significance. Our findings suggest that auditory oddball results measured in natural real-life scenarios are influenced mainly by increased cognitive load due to being in an unconstrained environment.
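Single-trial P300 classification of the kind reported above is commonly done by extracting down-sampled post-stimulus amplitudes per channel and feeding them to a regularized linear classifier. The sketch below illustrates that generic recipe with scikit-learn on synthetic epochs; the feature extraction, classifier settings and data are assumptions rather than the authors' pipeline.

```python
# Minimal sketch: single-trial target vs. non-target classification for an
# auditory oddball. The feature extraction (decimated post-stimulus samples),
# classifier settings and synthetic epochs are assumptions, not the authors'
# pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_channels, n_times = 300, 8, 200            # fake mobile-EEG epochs
X_epochs = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)                        # 1 = target tone
X_epochs[y == 1, :, 100:150] += 0.5                     # crude "P300" bump

# Features: every 10th post-stimulus sample per channel, flattened per trial.
X = X_epochs[:, :, ::10].reshape(n_trials, -1)

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(clf, X, y, cv=5)
print("mean single-trial accuracy:", scores.mean())
```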
Evaluation of auditory perception development in neonates by event-related potential technique.
Zhang, Qinfen; Li, Hongxin; Zheng, Aibin; Dong, Xuan; Tu, Wenjuan
2017-08-01
To investigate auditory perception development in neonates and correlate it with days after birth, left and right hemisphere development, and sex using the event-related potential (ERP) technique. Sixty full-term neonates, consisting of 32 males and 28 females, aged 2-28 days were included in this study. An auditory oddball paradigm was used to elicit ERPs. N2 wave latencies and areas were recorded at different days after birth to study the relationship between auditory perception and age and to compare the left and right hemispheres and male and female neonates. Average ERP waveforms in neonates developed from relatively irregular, flat-bottomed troughs into relatively regular, steep-sided ripples. A good linear relationship between ERPs and days after birth in neonates was observed. As days after birth increased, N2 latencies gradually and significantly shortened, and N2 areas gradually and significantly increased (both P<0.01). N2 areas in the central part of the brain were significantly greater, and N2 latencies in the central part were significantly shorter, in the left hemisphere compared with the right, indicative of left hemisphere dominance (both P<0.05). N2 areas were greater and N2 latencies shorter in female neonates compared with males. The neonatal period is one of rapid auditory perception development. In the days following birth, the auditory perception ability of neonates gradually increases. This occurs predominantly in the left hemisphere, with auditory perception ability appearing to develop earlier in female neonates than in males. ERP can be used as an objective index to evaluate auditory perception development in neonates. Copyright © 2017 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
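The linear relationship between days after birth and N2 latency reported above can be quantified with an ordinary least-squares fit. The sketch below shows that computation on made-up example values that only mimic the reported direction of the effect; they are not the study's data.

```python
# Minimal sketch: ordinary least-squares fit of N2 latency against days after
# birth. The values are made-up examples that only mimic the reported
# direction of the effect; they are not the study's data.
import numpy as np
from scipy.stats import linregress

days = np.array([3, 5, 8, 12, 15, 19, 22, 26, 28])
n2_latency_ms = np.array([410, 402, 395, 380, 372, 360, 355, 348, 342])

fit = linregress(days, n2_latency_ms)
print(f"slope = {fit.slope:.1f} ms/day, r = {fit.rvalue:.2f}, p = {fit.pvalue:.3g}")
```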
Bennur, Sharath; Tsunada, Joji; Cohen, Yale E; Liu, Robert C
2013-11-01
Acoustic communication between animals requires them to detect, discriminate, and categorize conspecific or heterospecific vocalizations in their natural environment. Laboratory studies of the auditory-processing abilities that facilitate these tasks have typically employed a broad range of acoustic stimuli, ranging from natural sounds like vocalizations to "artificial" sounds like pure tones and noise bursts. However, even when using vocalizations, laboratory studies often test abilities like categorization in relatively artificial contexts. Consequently, it is not clear whether neural and behavioral correlates of these tasks (1) reflect extensive operant training, which drives plastic changes in auditory pathways, or (2) the innate capacity of the animal and its auditory system. Here, we review a number of recent studies, which suggest that adopting more ethological paradigms utilizing natural communication contexts are scientifically important for elucidating how the auditory system normally processes and learns communication sounds. Additionally, since learning the meaning of communication sounds generally involves social interactions that engage neuromodulatory systems differently than laboratory-based conditioning paradigms, we argue that scientists need to pursue more ethological approaches to more fully inform our understanding of how the auditory system is engaged during acoustic communication. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.
Aghamolaei, Maryam; Zarnowiec, Katarzyna; Grimm, Sabine; Escera, Carles
2016-02-01
Auditory deviance detection based on regularity encoding appears as one of the basic functional properties of the auditory system. It has traditionally been assessed with the mismatch negativity (MMN) long-latency component of the auditory evoked potential (AEP). Recent studies have found earlier correlates of deviance detection based on regularity encoding. They occur in humans in the first 50 ms after sound onset, at the level of the middle-latency response of the AEP, and parallel findings of stimulus-specific adaptation observed in animal studies. However, the functional relationship between these different levels of regularity encoding and deviance detection along the auditory hierarchy has not yet been clarified. Here we addressed this issue by examining deviant-related responses at different levels of the auditory hierarchy to stimulus changes varying in their degree of deviation regarding the spatial location of a repeated standard stimulus. Auditory stimuli were presented randomly from five loudspeakers at azimuthal angles of 0°, 12°, 24°, 36° and 48° during oddball and reversed-oddball conditions. Middle-latency responses and MMN were measured. Our results revealed that middle-latency responses were sensitive to deviance but not the degree of deviation, whereas the MMN amplitude increased as a function of deviance magnitude. These findings indicated that acoustic regularity can be encoded at the level of the middle-latency response but that it takes a higher step in the auditory hierarchy for deviance magnitude to be encoded, thus providing a functional dissociation between regularity encoding and deviance detection along the auditory hierarchy. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
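The deviant-minus-standard difference wave underlying the MMN measures described above can be illustrated with a minimal NumPy sketch; the array shapes, sampling rate, and latency window below are assumptions for illustration, not the parameters of the study.

```python
import numpy as np

# Hypothetical epoched EEG arrays: (n_trials, n_channels, n_times),
# baseline-corrected and time-locked to sound onset.
rng = np.random.default_rng(0)
deviant_epochs = rng.normal(size=(80, 32, 300))    # placeholder data
standard_epochs = rng.normal(size=(400, 32, 300))  # placeholder data

# Average across trials, then subtract: the MMN is the deviant-minus-standard
# difference wave, typically most negative ~100-250 ms at fronto-central sites.
deviant_erp = deviant_epochs.mean(axis=0)
standard_erp = standard_epochs.mean(axis=0)
difference_wave = deviant_erp - standard_erp       # (n_channels, n_times)

# Quantify MMN amplitude as the mean of the difference wave in a latency
# window (here 100-200 ms, assuming a 1000 Hz sampling rate and onset at 0).
fs = 1000
win = slice(int(0.100 * fs), int(0.200 * fs))
mmn_amplitude = difference_wave[:, win].mean(axis=1)  # one value per channel
print(mmn_amplitude.shape)
```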
Happiness increases distraction by auditory deviant stimuli.
Pacheco-Unguetti, Antonia Pilar; Parmentier, Fabrice B R
2016-08-01
Rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards) capture attention and impair behavioural performance in an ongoing visual task. Recent evidence indicates that this effect is increased by sadness in a task involving neutral stimuli. We tested the hypothesis that such an effect may not be limited to negative emotions but may instead reflect a general depletion of attentional resources, by examining whether a positive emotion (happiness) would also increase deviance distraction. Prior to performing an auditory-visual oddball task, happiness or a neutral mood was induced in participants by means of exposure to music and the recollection of an autobiographical event. Results from the oddball task showed significantly larger deviance distraction following the induction of happiness. Interestingly, the small amount of distraction typically observed on the standard trial following a deviant trial (post-deviance distraction) was not increased by happiness. We speculate that happiness might interfere with the disengagement of attention from the deviant sound back towards the target stimulus (through the depletion of cognitive resources and/or mind wandering) but help subsequent cognitive control to recover from distraction. © 2015 The British Psychological Society.
Park, M; Choi, J-S; Park, S M; Lee, J-Y; Jung, H Y; Sohn, B K; Kim, S N; Kim, D J; Kwon, J S
2016-01-01
Internet gaming disorder (IGD), which leads to serious impairments in cognitive, psychological and social functions, has gradually been increasing. However, very few studies conducted to date have addressed issues related to the event-related potential (ERP) patterns in IGD. Identifying the neurobiological characteristics of IGD is important to elucidate the pathophysiology of this condition. P300 is a useful ERP component for investigating electrophysiological features of the brain. The aims of the present study were to investigate differences between patients with IGD and healthy controls (HCs), with regard to the P300 component of the ERP during an auditory oddball task, and to examine the relationship of this component to the severity of IGD symptoms in identifying the relevant neurophysiological features of IGD. Twenty-six patients diagnosed with IGD and 23 age-, sex-, education- and intelligence quotient-matched HCs participated in this study. During an auditory oddball task, participants had to respond to the rare, deviant tones presented in a sequence of frequent, standard tones. The IGD group exhibited a significant reduction in response to deviant tones compared with the HC group in the P300 amplitudes at the midline centro-parietal electrode regions. We also found a negative correlation between the severity of IGD and P300 amplitudes. The reduced amplitude of the P300 component in an auditory oddball task may reflect dysfunction in auditory information processing and cognitive capabilities in IGD. These findings suggest that reduced P300 amplitudes may be a candidate neurobiological marker for IGD. PMID:26812042
Kanemoto, Mari; Asai, Tomohisa; Sugimori, Eriko; Tanno, Yoshihiko
2013-01-01
Previous studies have suggested that a tendency to externalize internal thought is related to auditory hallucinations or even proneness to auditory hallucinations (AHp) in the general population. However, although auditory hallucinations are related to emotional phenomena, few studies have investigated the effect of emotional valence on the aforementioned relationship. In addition, we do not know what component of psychotic phenomena relates to externalizing bias. The current study replicated our previous research, which suggested that individual differences in auditory hallucination-like experiences are strongly correlated with the external misattribution of internal thoughts, conceptualized in terms of false memory, using the Deese–Roediger–McDermott (DRM) paradigm. We found a significant relationship between experimental performance and total scores on the Launay–Slade Hallucination Scale (LSHS). Among the LSHS factors, only vivid mental image, which is said to be a predictor of auditory hallucinations, was significantly related to experimental performance. We then investigated the potential effect of emotional valence using the DRM paradigm. The results indicate that participants with low scores on the LSHS (the low-AHp group in the current study) showed an increased discriminability index (d′) for positive words and a decreased d′ for negative words. However, no effects of emotional valence were found for participants with high LSHS scores (high-AHp group). This study indicated that external misattribution of internal thoughts predicts AHp, and that the high-AHp group showed a smaller emotional valence effect in the DRM paradigm compared with the low-AHp group. We discuss this outcome from the perspective of the dual-process activation-monitoring framework in the DRM paradigm in regard to emotion-driven automatic thought in false memory. PMID:23847517
Pei, Yu-Cheng; Chen, Chia-Ling; Chung, Chia-Ying; Chou, Shi-Wei; Wong, Alice M K; Tang, Simon F T
2004-02-01
Auditory event-related potentials (ERPs) were investigated in an oddball paradigm to verify electrophysiological evidence of music expectation, which is a key component of artistic presentation. The non-target condition consisted of four-chord harmonic chord sequences, while the target condition was manifested by a partially violating third chord and a resolving fourth chord. The results showed that the specific mismatch negativity (MMN) elicited by the resolving chord is as robust as that elicited by the partially violating chord. Moreover, the P3b (P300) elicited by the resolving chord was smaller than the one elicited by the violating chord. Taken together, these data indicate that the human brain may be able to pre-attentively anticipate a subsequent resolving chord when music expectation is generated by a partially violating chord.
Effect of EEG Referencing Methods on Auditory Mismatch Negativity
Mahajan, Yatin; Peter, Varghese; Sharma, Mridula
2017-01-01
Auditory event-related potentials (ERPs) have consistently been used in the investigation of auditory and cognitive processing in research and clinical laboratories. There is currently no consensus on the choice of appropriate reference for auditory ERPs. The most commonly used references in auditory ERP research are the mathematically linked-mastoids (LM) and average referencing (AVG). Since LM and AVG referencing procedures do not solve the issue of an electrically-neutral reference, the Reference Electrode Standardization Technique (REST) was developed to create a neutral reference for EEG recordings. The aim of the current research is to compare the influence of the reference on amplitude and latency of auditory mismatch negativity (MMN) as a function of magnitude of frequency deviance across three commonly used electrode montages (16, 32, and 64-channel) using REST, LM, and AVG reference procedures. The current study was designed to determine if the three reference methods capture the variation in amplitude and latency of MMN with the deviance magnitude. We recorded MMN from 12 normal hearing young adults in an auditory oddball paradigm with a 1,000 Hz pure tone as the standard and 1,030, 1,100, and 1,200 Hz tones as small, medium and large frequency deviants, respectively. The EEG data recorded in response to these sounds were re-referenced using REST, LM, and AVG methods across 16-, 32-, and 64-channel EEG electrode montages. Results revealed that while the latency of MMN decreased with increment in frequency of deviant sounds, no effect of frequency deviance was present for amplitude of MMN. There was no effect of referencing procedure on the experimental effect tested. The amplitude of MMN was largest when the ERP was computed using LM referencing, and the REST referencing produced the largest amplitude of MMN for the 64-channel montage. There was no effect of electrode montage on AVG-referenced ERPs. Contrary to our predictions, the results suggest that the auditory MMN elicited
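The arithmetic behind the LM and AVG referencing schemes compared here can be sketched directly on a data array; the channel names and data below are hypothetical, and REST is only noted in a comment because it additionally requires a head (forward) model.

```python
import numpy as np

# Hypothetical EEG segment: (n_channels, n_times), recorded against some
# online reference. Channel order is assumed; 'M1'/'M2' are mastoid channels.
channels = ['Fz', 'Cz', 'Pz', 'M1', 'M2']
rng = np.random.default_rng(1)
eeg = rng.normal(size=(len(channels), 1000))

# Average reference (AVG): subtract the instantaneous mean over all channels.
eeg_avg = eeg - eeg.mean(axis=0, keepdims=True)

# Linked mastoids (LM): subtract the mean of the two mastoid channels.
m1, m2 = channels.index('M1'), channels.index('M2')
eeg_lm = eeg - eeg[[m1, m2]].mean(axis=0, keepdims=True)

# REST cannot be expressed as a simple channel combination: it re-references
# the data to an approximation of infinity using a head (forward) model,
# so it is omitted from this arithmetic sketch.
```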
Auditory selective attention in adolescents with major depression: An event-related potential study.
Greimel, E; Trinkl, M; Bartling, J; Bakos, S; Grossheinrich, N; Schulte-Körne, G
2015-02-01
Major depression (MD) is associated with deficits in selective attention. Previous studies in adults with MD using event-related potentials (ERPs) reported abnormalities in the neurophysiological correlates of auditory selective attention. However, it is as yet unclear whether these findings can be generalized to MD in adolescence. Thus, the aim of the present ERP study was to explore the neural mechanisms of auditory selective attention in adolescents with MD. 24 male and female unmedicated adolescents with MD and 21 control subjects were included in the study. ERPs were collected during an auditory oddball paradigm. Depressive adolescents tended to show a longer N100 latency to target and non-target tones. Moreover, MD subjects showed a prolonged latency of the P200 component to targets. Across groups, longer P200 latency was associated with a decreased tendency of disinhibited behavior as assessed by a behavioral questionnaire. To be able to draw more precise conclusions about differences between the neural bases of selective attention in adolescents vs. adults with MD, future studies should include both age groups and apply the same experimental setting across all subjects. The study provides strong support for abnormalities in the neurophysiological bases of selective attention in adolescents with MD at early stages of auditory information processing. Absent group differences in later ERP components reflecting voluntary attentional processes stand in contrast to results reported in adults with MD and may suggest that adolescents with MD possess mechanisms to compensate for abnormalities in the early stages of selective attention. Copyright © 2014 Elsevier B.V. All rights reserved.
Perdikis, Dionysios; Müller, Viktor; Blanc, Jean-Luc; Huys, Raoul; Temprado, Jean-Jacques
2015-01-01
The present work focused on the study of fluctuations of cortical activity across time scales in young and older healthy adults. The main objective was to offer a comprehensive characterization of the changes of brain (cortical) signal variability during aging, and to make the link with known underlying structural, neurophysiological, and functional modifications, as well as aging theories. We analyzed electroencephalogram (EEG) data of young and elderly adults, which were collected at resting state and during an auditory oddball task. We used a wide battery of metrics that typically are separately applied in the literature, and we compared them with more specific ones that address their limits. Our procedure aimed to overcome some of the methodological limitations of earlier studies and verify whether previous findings can be reproduced and extended to different experimental conditions. In both rest and task conditions, our results mainly revealed that EEG signals presented systematic age-related changes that were time-scale-dependent with regard to the structure of fluctuations (complexity) but not with regard to their magnitude. Namely, compared with young adults, the cortical fluctuations of the elderly were more complex at shorter time scales, but less complex at longer scales, although always showing a lower variance. Additionally, the elderly showed signs of dedifferentiation across space as well as between experimental conditions. By integrating these so far isolated findings across time scales, metrics, and conditions, the present study offers an overview of age-related changes in the fluctuations of electrocortical activity while making the link with underlying brain dynamics. PMID:26464983
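Scale-wise analyses of this kind typically rest on coarse-graining the signal at successive time scales; the sketch below shows that step with the per-scale standard deviation as the magnitude measure, under the assumption that complexity metrics (e.g., multiscale entropy) would then be computed on the same coarse-grained series.

```python
import numpy as np

def coarse_grain(signal, scale):
    """Average consecutive, non-overlapping windows of length `scale`."""
    n = len(signal) // scale
    return signal[:n * scale].reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(2)
eeg_channel = rng.normal(size=10_000)  # placeholder single-channel EEG

# Magnitude of fluctuations across time scales: SD of each coarse-grained
# series. Complexity measures (e.g., multiscale sample entropy) would be
# computed on the same coarse-grained series in place of the SD.
for scale in (1, 2, 4, 8, 16):
    sd = coarse_grain(eeg_channel, scale).std()
    print(f"scale {scale:2d}: SD = {sd:.3f}")
```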
Habituation deficit of auditory N100m in patients with fibromyalgia.
Choi, W; Lim, M; Kim, J S; Chung, C K
2016-11-01
Habituation refers to the brain's inhibitory mechanism against sensory overload, and its brain correlate has been investigated in the form of a well-defined event-related potential, N100 (N1). Fibromyalgia is an extensively described chronic pain syndrome with concurrent manifestations of reduced tolerance and enhanced sensation of painful and non-painful stimulation, suggesting an association with central amplification of all sensory domains. Among diverse sensory modalities, we utilized repetitive auditory stimulation to explore the anomalous sensory information processing in fibromyalgia as evidenced by N1 habituation. Auditory N1 was assessed in 19 fibromyalgia patients and 21 age-, education- and gender-matched healthy control subjects under the duration-deviant passive oddball paradigm and magnetoencephalography recording. The brain signals of the first standard stimulus (following each deviant) and the last standard stimulus (preceding each deviant) were analysed to identify N1 responses. N1 amplitude difference and adjusted amplitude ratio were computed as habituation indices. Fibromyalgia patients showed lower N1 amplitude difference (left hemisphere: p = 0.004; right hemisphere: p = 0.034) and adjusted N1 amplitude ratio (left hemisphere: p = 0.001; right hemisphere: p = 0.052) than healthy control subjects, indicating deficient auditory habituation. Further, an augmented N1 amplitude pattern (p = 0.029) during the stimulus repetition was observed in fibromyalgia patients. Fibromyalgia patients failed to demonstrate auditory N1 habituation to repetitively presented stimuli, which indicates their compromised early auditory information processing. Our findings provide neurophysiological evidence of inhibitory failure and cortical augmentation in fibromyalgia. WHAT'S ALREADY KNOWN ABOUT THIS TOPIC?: Fibromyalgia has been associated with altered filtering of irrelevant somatosensory input. However, whether this abnormality can extend to the auditory sensory
Longitudinal auditory learning facilitates auditory cognition as revealed by microstate analysis.
Giroud, Nathalie; Lemke, Ulrike; Reich, Philip; Matthes, Katarina L; Meyer, Martin
2017-02-01
The current study investigates cognitive processes as reflected in late auditory-evoked potentials as a function of longitudinal auditory learning. A normal-hearing adult sample (n=15) performed an active oddball task, during which EEG was recorded, at three consecutive time points (TPs) arranged at two-week intervals. The stimuli comprised syllables consisting of a natural fricative (/sh/,/s/,/f/) embedded between two /a/ sounds, as well as morphed transitions of the two syllables that served as deviants. Perceptual and cognitive modulations as reflected in the onset and the mean global field power (GFP) of N2b- and P3b-related microstates across four weeks were investigated. We found that the onset of P3b-like, but not N2b-like, microstates decreased across TPs, more strongly for difficult deviants, leading to similar onsets for difficult and easy stimuli after repeated exposure. The mean GFP of all N2b-like and P3b-like microstates increased more for spectrally strong deviants compared to weak deviants, leading to a distinctive activation for each stimulus after learning. Our results indicate that auditory-related cognitive mechanisms such as stimulus categorization, attention, and memory updating are an indispensable part of successful longitudinal auditory learning. This suggests that future studies should focus on the potential benefits of cognitive processes in auditory training. Copyright © 2016 Elsevier B.V. All rights reserved.
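Mean global field power, the quantity tracked across microstates here, is commonly computed as the spatial standard deviation across channels at each time point; the following sketch assumes a hypothetical average-referenced evoked array and an illustrative latency window.

```python
import numpy as np

def global_field_power(erp):
    """GFP(t): spatial standard deviation across channels at each time point.

    `erp` is an average-referenced evoked response of shape
    (n_channels, n_times).
    """
    return erp.std(axis=0)

rng = np.random.default_rng(3)
erp = rng.normal(size=(64, 500))           # placeholder evoked response
gfp = global_field_power(erp)              # (n_times,)

# Mean GFP within a hypothetical P3b-like microstate window (e.g., 300-500 ms
# at 1000 Hz with stimulus onset at sample 0).
print(gfp[300:500].mean())
```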
Visual Task Demands and the Auditory Mismatch Negativity: An Empirical Study and a Meta-Analysis
Wiens, Stefan; Szychowska, Malina; Nilsson, Mats E.
2016-01-01
Because the auditory system is particularly useful in monitoring the environment, previous research has examined whether task-irrelevant, auditory distracters are processed even if subjects focus their attention on visual stimuli. This research suggests that attentionally demanding visual tasks decrease the auditory mismatch negativity (MMN) to simultaneously presented auditory distractors. Because a recent behavioral study found that high visual perceptual load decreased detection sensitivity of simultaneous tones, we used a similar task (n = 28) to determine if high visual perceptual load would reduce the auditory MMN. Results suggested that perceptual load did not decrease the MMN. At face value, these nonsignificant findings may suggest that effects of perceptual load on the MMN are smaller than those of other demanding visual tasks. If so, effect sizes should differ systematically between the present and previous studies. We conducted a selective meta-analysis of published studies in which the MMN was derived from the EEG, the visual task demands were continuous and varied between high and low within the same task, and the task-irrelevant tones were presented in a typical oddball paradigm simultaneously with the visual stimuli. Because the meta-analysis suggested that the present (null) findings did not differ systematically from previous findings, the available evidence was combined. Results of this meta-analysis confirmed that demanding visual tasks reduce the MMN to auditory distracters. However, because the meta-analysis was based on small studies and because of the risk for publication biases, future studies should be preregistered with large samples (n > 150) to provide confirmatory evidence for the results of the present meta-analysis. These future studies should also use control conditions that reduce confounding effects of neural adaptation, and use load manipulations that are defined independently from their effects on the MMN. PMID:26741815
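A generic inverse-variance random-effects combination of study effect sizes, of the kind used in such meta-analyses, can be sketched as follows; the effect sizes and variances are placeholders, and the DerSimonian-Laird estimator shown is not necessarily the exact model used by the authors.

```python
import numpy as np

# Hypothetical per-study effect sizes (e.g., MMN reduction under high load)
# and their variances; values are placeholders, not the studies in the paper.
effects = np.array([0.45, 0.30, 0.62, 0.15, 0.50])
variances = np.array([0.04, 0.09, 0.06, 0.05, 0.08])

# DerSimonian-Laird random-effects combination.
w = 1.0 / variances
fixed_mean = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - fixed_mean) ** 2)
df = len(effects) - 1
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

w_star = 1.0 / (variances + tau2)
pooled = np.sum(w_star * effects) / np.sum(w_star)
pooled_se = np.sqrt(1.0 / np.sum(w_star))
print(f"pooled effect = {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI half-width)")
```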
P300 Event-Related Potentials in Children with Dyslexia
ERIC Educational Resources Information Center
Papagiannopoulou, Eleni A.; Lagopoulos, Jim
2017-01-01
To elucidate the timing and the nature of neural disturbances in dyslexia and to further understand the topographical distribution of these, we examined entire brain regions employing the non-invasive auditory oddball P300 paradigm in children with dyslexia and neurotypical controls. Our findings revealed abnormalities for the dyslexia group in…
Towards User-Friendly Spelling with an Auditory Brain-Computer Interface: The CharStreamer Paradigm
Höhne, Johannes; Tangermann, Michael
2014-01-01
Realizing the decoding of brain signals into control commands, brain-computer interfaces (BCI) aim to establish an alternative communication pathway for locked-in patients. In contrast to most visual BCI approaches which use event-related potentials (ERP) of the electroencephalogram, auditory BCI systems are challenged with ERP responses, which are less class-discriminant between attended and unattended stimuli. Furthermore, these auditory approaches have more complex interfaces which impose a substantial workload on their users. Aiming for a maximally user-friendly spelling interface, this study introduces a novel auditory paradigm: “CharStreamer”. The speller can be used with an instruction as simple as “please attend to what you want to spell”. The stimuli of CharStreamer comprise 30 spoken sounds of letters and actions. As each of them is represented by the sound of itself and not by an artificial substitute, it can be selected in a one-step procedure. The mental mapping effort (sound stimuli to actions) is thus minimized. Usability is further accounted for by an alphabetical stimulus presentation: contrary to random presentation orders, the user can foresee the presentation time of the target letter sound. Healthy, normal hearing users (n = 10) of the CharStreamer paradigm displayed ERP responses that systematically differed between target and non-target sounds. Class-discriminant features, however, varied individually from the typical N1-P2 complex and P3 ERP components found in control conditions with random sequences. To fully exploit the sequential presentation structure of CharStreamer, novel data analysis approaches and classification methods were introduced. The results of online spelling tests showed that a competitive spelling speed can be achieved with CharStreamer. With respect to user rating, it clearly outperforms a control setup with random presentation sequences. PMID:24886978
Spectral-temporal EEG dynamics of speech discrimination processing in infants during sleep.
Gilley, Phillip M; Uhler, Kristin; Watson, Kaylee; Yoshinaga-Itano, Christine
2017-03-22
Oddball paradigms are frequently used to study auditory discrimination by comparing event-related potential (ERP) responses to a standard, high-probability sound and to a deviant, low-probability sound. Previous research has established that such paradigms, including the mismatch response or mismatch negativity, are useful for examining auditory processes in young children and infants across various sleep and attention states. The extent to which oddball ERP responses may reflect subtle discrimination effects, such as speech discrimination, is largely unknown, especially in infants that have not yet acquired speech and language. Mismatch responses for three contrasts (non-speech, vowel, and consonant) were computed as a spectral-temporal probability function in 24 infants, and analyzed at the group level by a modified multidimensional scaling. Immediately following an onset gamma response (30-50 Hz), the emergence of a beta oscillation (12-30 Hz) was temporally coupled with a lower frequency theta oscillation (2-8 Hz). The spectral-temporal probability of this coupling effect relative to a subsequent theta modulation corresponds with discrimination difficulty for non-speech, vowel, and consonant contrast features. The theta modulation effect suggests that unexpected sounds are encoded as a probabilistic measure of surprise. These results support the notion that auditory discrimination is driven by the development of brain networks for predictive processing, and can be measured in infants during sleep. The results presented here have implications for the interpretation of discrimination as a probabilistic process, and may provide a basis for the development of single-subject and single-trial classification in a clinically useful context. An infant's brain is processing information about the environment and performing computations, even during sleep. These computations reflect subtle differences in acoustic feature processing that are necessary for language
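A generic Morlet wavelet decomposition of epoched data, covering the theta-to-gamma range discussed above, can be sketched with MNE-Python; the sampling rate, array shapes, and frequency grid are assumptions, and this is not the spectral-temporal probability function used in the study.

```python
import numpy as np
from mne.time_frequency import tfr_array_morlet

sfreq = 500.0                               # assumed sampling rate (Hz)
rng = np.random.default_rng(7)
epochs = rng.normal(size=(60, 32, 500))     # placeholder (trials, channels, times)

# Morlet wavelet power from theta through gamma; a generic time-frequency
# decomposition of the epoched data.
freqs = np.arange(2.0, 51.0, 2.0)
power = tfr_array_morlet(epochs, sfreq=sfreq, freqs=freqs,
                         n_cycles=freqs / 2.0, output="power")
print(power.shape)                          # (trials, channels, freqs, times)
```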
Psychopathy, attention, and oddball target detection: New insights from PCL-R facet scores.
Anderson, Nathaniel E; Steele, Vaughn R; Maurer, J Michael; Bernat, Edward M; Kiehl, Kent A
2015-09-01
Psychopathy is a disorder accompanied by cognitive deficits including abnormalities in attention. Prior studies examining cognitive features of psychopaths using ERPs have produced some inconsistent results. We examined psychopathy-related differences in ERPs during an auditory oddball task in a sample of incarcerated adult males. We extend previous work by deriving ERPs with principal component analysis (PCA) and relate these to the four facets of Hare's Psychopathy Checklist Revised (PCL-R). Features of psychopathy were associated with increased target N1 amplitude (facets 1, 4), decreased target P3 amplitude (facet 1), and reduced slow wave amplitude for frequent standard stimuli (facets 1, 3, 4). We conclude that employing PCA and examining PCL-R facets improve sensitivity and help clarify previously reported associations. Furthermore, attenuated slow wave during standards may be a novel marker for psychopaths' abnormalities in attention. © 2015 Society for Psychophysiological Research.
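A temporal PCA of ERP waveforms, in the spirit of the component derivation mentioned above, can be sketched with scikit-learn; the trial matrix is hypothetical, and this generic temporal PCA stands in for the authors' exact decomposition and does not include the facet-score analysis.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical single-channel, single-trial ERP matrix: rows are trials
# (or subject/condition averages), columns are time points.
rng = np.random.default_rng(4)
erps = rng.normal(size=(200, 400))

# Temporal PCA: each component is a time course (e.g., a P3- or slow-wave-like
# waveform); the scores index how strongly each trial expresses it.
pca = PCA(n_components=5)
scores = pca.fit_transform(erps)        # (n_trials, 5) component amplitudes
components = pca.components_            # (5, n_times) component time courses

# Scores for a component of interest could then be correlated with, e.g.,
# PCL-R facet scores (not shown; those data are not part of this sketch).
print(scores.shape, components.shape, pca.explained_variance_ratio_.round(2))
```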
Auditory event-related potentials in methadone substituted opiate users.
Wang, Grace Y; Kydd, Robert; Russell, Bruce R
2015-09-01
The effects of methadone maintenance treatment (MMT) on neurophysiological function are unclear. Using an auditory oddball paradigm, event-related potential (ERP) amplitudes and latencies were measured in 32 patients undertaking MMT, 17 opiate users who were addicted but not receiving substitution treatment and 25 healthy control subjects. Compared with healthy control subjects, the MMT and opiate user groups showed an increased P200 amplitude in response to target stimuli. The opiate user group also exhibited a decreased amplitude and an increased latency of N200, and a greater number of task-related errors than either healthy control subjects or patients undertaking MMT. There were no significant group differences in the P300 amplitude. However, it is noteworthy that the frontal P300 amplitude of the MMT group was greater than that of opiate users or healthy controls. Our findings suggest that altered sensory information processing associated with a history of opiate use remains in patients undertaking MMT. However, there are less marked ERP abnormalities in those receiving MMT than in active opiate users. The deficits in information processing associated with illicit opiate use are likely to be reduced during MMT. © The Author(s) 2015.
Laursen, Bettina; Mørk, Arne; Kristiansen, Uffe; Bastlund, Jesper Frank
2014-01-01
P300 (P3) event-related potentials (ERPs) have been suggested to be an endogenous marker of cognitive function and auditory oddball paradigms are frequently used to evaluate P3 ERPs in clinical settings. Deficits in P3 amplitude and latency reflect some of the neurological dysfunctions related to several psychiatric and neurological diseases, e.g., Alzheimer's disease (AD). However, only a very limited number of rodent studies have addressed the back-translational validity of the P3-like ERPs as suitable markers of cognition. Thus, the potential of rodent P3-like ERPs to predict pro-cognitive effects in humans remains to be fully validated. The current study characterizes P3-like ERPs in the 192-IgG-SAP (SAP) rat model of the cholinergic degeneration associated with AD. Following training in a combined auditory oddball and lever-press setup, rats were subjected to bilateral intracerebroventricular infusion of 1.25 μg SAP or PBS (sham lesion) and recording electrodes were implanted in hippocampal CA1. Relative to sham-lesioned rats, SAP-lesioned rats had significantly reduced amplitude of P3-like ERPs. P3 amplitude was significantly increased in SAP-treated rats following pre-treatment with 1 mg/kg donepezil. Infusion of SAP reduced the hippocampal choline acetyltransferase activity by 75%. Behaviorally defined cognitive performance was comparable between treatment groups. The present study suggests that AD-like deficits in P3-like ERPs may be mimicked by the basal forebrain cholinergic degeneration induced by SAP. SAP-lesioned rats may constitute a suitable model to test the efficacy of pro-cognitive substances in an applied experimental setup.
Early sensory encoding of affective prosody: neuromagnetic tomography of emotional category changes.
Thönnessen, Heike; Boers, Frank; Dammers, Jürgen; Chen, Yu-Han; Norra, Christine; Mathiak, Klaus
2010-03-01
In verbal communication, prosodic codes may be phylogenetically older than lexical ones. Little is known, however, about early, automatic encoding of emotional prosody. This study investigated the neuromagnetic analogue of mismatch negativity (MMN) as an index of early stimulus processing of emotional prosody using whole-head magnetoencephalography (MEG). We applied two different paradigms to study MMN; in addition to the traditional oddball paradigm, the so-called optimum design was adapted to emotion detection. In a sequence of randomly changing disyllabic pseudo-words produced by one male speaker in neutral intonation, a traditional oddball design with emotional deviants (10% happy and angry each) and an optimum design with emotional (17% happy and sad each) and nonemotional gender deviants (17% female) elicited the mismatch responses. The emotional category changes demonstrated early responses (<200 ms) at both auditory cortices with larger amplitudes at the right hemisphere. Responses to the nonemotional change from male to female voices emerged later (approximately 300 ms). Source analysis pointed at bilateral auditory cortex sources without robust contributions from other sources, such as frontal ones. Conceivably, both auditory cortices encode categorical representations of emotional prosody. Processing of cognitive feature extraction and automatic emotion appraisal may overlap at this level, enabling rapid attentional shifts to important social cues. Copyright (c) 2009 Elsevier Inc. All rights reserved.
Lebedeva, I S
2015-01-01
The search for structural and functional brain characteristics is one of the most studied directions in modern biological psychiatry. However, in spite of numerous studies, the results remain controversial. As the need for a shift in the current paradigm of schizophrenia research has become apparent, it has been suggested that not only abnormal but also stably functioning neuronal circuits should be identified. The aim is consequently formulated as the search for the minimal brain damage sufficient for disease development. The author analyzed the auditory oddball P300 latency (as a marker of information processing speed), the N-acetylaspartate level in the dorsolateral prefrontal cortex (as a marker of neuronal integrity in this brain area) and the fractional anisotropy of the uncinate fasciculus, which connects the frontal and temporal lobes (as a marker of white matter bundle microstructure), in 30 patients with schizophrenia and 27 healthy people. The findings showed that none of the tested characteristics is "obligatory" for schizophrenia.
Melcher, Tobias; Gruber, Oliver
2006-11-22
The aim of this fMRI study was to investigate and compare the neural mechanisms of selective attention during two different operationalizations of competition between task-relevant and task-irrelevant information: Stroop-incongruity and oddballs. For this purpose, we employed a Stroop-like oddball task in which subjects responded to the font size of presented word stimuli. Stroop-incongruity was created by (response-)incongruent word information while oddballs comprised low-frequency events in a task-irrelevant, unattended dimension. In order to elucidate the influence of the processing domain from which competition emanates, oddball conditions were created in two different attribute dimensions: color and word meaning. Each oddball condition was expected to evoke an orienting response, which participants would have to override in order to maintain adequate performance. Incongruent Stroop trials were expected to produce Stroop-interference so that subjects would have to override the predominant tendency to read and respond to word meaning. All competition conditions exhibited significantly prolonged reaction times compared to control trials, demonstrating that our experimental manipulation was indeed effective. fMRI data analyses delineated two discriminative components of competition: one component mainly related to motor preparation and another, primarily attentional component. Regarding the first, Stroop-interference increased activation mainly in regions implicated in motor control or response preparation. Regarding the second, Word-oddballs increased activation in a frontoparietal "attention network". Furthermore, Word-oddballs and Color-oddballs exhibited striking activation overlap mainly in prefrontal regions but also in posterior processing areas. Here, the data emphasized a prominent role of posterior lateral PFC in implementing top-down attentional control.
Tensor-based classification of an auditory mobile BCI without a subject-specific calibration phase
NASA Astrophysics Data System (ADS)
Zink, Rob; Hunyadi, Borbála; Van Huffel, Sabine; De Vos, Maarten
2016-04-01
Objective. One of the major drawbacks in EEG brain-computer interfaces (BCI) is the need for subject-specific training of the classifier. By removing the need for a supervised calibration phase, new users could potentially explore a BCI faster. In this work we aim to remove this subject-specific calibration phase and allow direct classification. Approach. We explore canonical polyadic decompositions and block term decompositions of the EEG. These methods exploit structure in higher dimensional data arrays called tensors. The BCI tensors are constructed by concatenating ERP templates from other subjects to a target and non-target trial and the inherent structure guides a decomposition that allows accurate classification. We illustrate the new method on data from a three-class auditory oddball paradigm. Main results. The presented approach leads to a fast and intuitive classification with accuracies competitive with a supervised and cross-validated LDA approach. Significance. The described methods are a promising new way of classifying BCI data with a forthright link to the original P300 ERP signal over the conventional and widely used supervised approaches.
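A canonical polyadic decomposition of a trial-plus-templates tensor can be sketched with the tensorly library; the tensor construction and the distance-based decision rule below are simplifying assumptions rather than the exact pipeline of the study.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

# Hypothetical tensor for one unlabeled trial: the trial's (channels x time)
# matrix stacked with target and non-target ERP templates from other
# subjects, giving a 3-way array (slice x channels x time).
rng = np.random.default_rng(5)
trial = rng.normal(size=(32, 200))
target_template = rng.normal(size=(32, 200))
nontarget_template = rng.normal(size=(32, 200))
tensor = tl.tensor(np.stack([trial, target_template, nontarget_template]))

# Canonical polyadic decomposition; the first (slice) mode factors describe
# how strongly each slice expresses the shared spatio-temporal components.
cp = parafac(tensor, rank=2, n_iter_max=200, random_state=0)
slice_factors = cp.factors[0]            # shape (3, rank)

# Simplified decision rule (an assumption, not the paper's exact criterion):
# label the trial as "target" if its loadings lie closer to the target
# template's loadings than to the non-target template's loadings.
d_target = np.linalg.norm(slice_factors[0] - slice_factors[1])
d_nontarget = np.linalg.norm(slice_factors[0] - slice_factors[2])
print("target" if d_target < d_nontarget else "non-target")
```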
Rao, Aparna; Rishiq, Dania; Yu, Luodi; Zhang, Yang; Abrams, Harvey
The objectives of this study were to investigate the effects of hearing aid use and the effectiveness of ReadMyQuips (RMQ), an auditory training program, on speech perception performance and auditory selective attention using electrophysiological measures. RMQ is an audiovisual training program designed to improve speech perception in everyday noisy listening environments. Participants were adults with mild to moderate hearing loss who were first-time hearing aid users. After 4 weeks of hearing aid use, the experimental group completed RMQ training in 4 weeks, and the control group received listening practice on audiobooks during the same period. Cortical late event-related potentials (ERPs) and the Hearing in Noise Test (HINT) were administered at prefitting, pretraining, and post-training to assess effects of hearing aid use and RMQ training. An oddball paradigm allowed tracking of changes in P3a and P3b ERPs to distractors and targets, respectively. Behavioral measures were also obtained while ERPs were recorded from participants. After 4 weeks of hearing aid use but before auditory training, HINT results did not show a statistically significant change, but there was a significant P3a reduction. This reduction in P3a was correlated with improvement in d prime (d') in the selective attention task. Increased P3b amplitudes were also correlated with improvement in d' in the selective attention task. After training, this correlation between P3b and d' remained in the experimental group, but not in the control group. Similarly, HINT testing showed improved speech perception post training only in the experimental group. The criterion calculated in the auditory selective attention task showed a reduction only in the experimental group after training. ERP measures in the auditory selective attention task did not show any changes related to training. Hearing aid use was associated with a decrement in involuntary attention switch to distractors in the auditory selective
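The d' (d prime) and criterion measures reported for the selective attention task follow standard signal detection formulas; a minimal sketch with hypothetical response counts and a common log-linear correction for extreme rates is shown below.

```python
from scipy.stats import norm

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Signal-detection d' and criterion c with a log-linear correction
    to avoid infinite z-scores when rates are 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)
    return d_prime, criterion

# Hypothetical counts from a selective attention task (targets vs. distractors).
print(dprime_and_criterion(hits=42, misses=8, false_alarms=5, correct_rejections=45))
```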
Kamal, Brishna; Holman, Constance; de Villers-Sidani, Etienne
2013-01-01
Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization, and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signal and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function. PMID:24062649
Wehner, Daniel T.; Ahlfors, Seppo P.; Mody, Maria
2007-01-01
Poor readers perform worse than their normal reading peers on a variety of speech perception tasks, which may be linked to their phonological processing abilities. The purpose of the study was to compare the brain activation patterns of normal and impaired readers on speech perception to better understand the phonological basis in reading disability. Whole-head magnetoencephalography (MEG) was recorded as good and poor readers, 7-13 years of age, performed an auditory word discrimination task. We used an auditory oddball paradigm in which the ‘deviant’ stimuli (/bat/, /kat/, /rat/) differed in the degree of phonological contrast (1 vs. 3 features) from a repeated standard word (/pat/). Both good and poor readers responded more slowly to deviants that were phonologically similar compared to deviants that were phonologically dissimilar to the standard word. Source analysis of the MEG data using Minimum Norm Estimation (MNE) showed that compared to good readers, poor readers had reduced left-hemisphere activation to the most demanding phonological condition reflecting their difficulties with phonological processing. Furthermore, unlike good readers, poor readers did not show differences in activation as a function of the degree of phonological contrast. These results are consistent with a phonological account of reading disability. PMID:17675109
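A minimum-norm source estimate of the kind referred to above (MNE) can be outlined with MNE-Python; the evoked response, forward solution, and noise covariance are assumed to exist already, and the regularization settings are illustrative rather than those of the study.

```python
# A minimal MNE-Python sketch of a minimum-norm source estimate for an evoked
# response; `evoked`, `fwd` (forward solution) and `noise_cov` are assumed to
# have been computed beforehand (e.g., from an MEG oddball recording).
from mne.minimum_norm import make_inverse_operator, apply_inverse

def source_estimate(evoked, fwd, noise_cov, snr=3.0):
    """Return a minimum-norm source estimate for one evoked condition."""
    inv = make_inverse_operator(evoked.info, fwd, noise_cov, loose=0.2, depth=0.8)
    lambda2 = 1.0 / snr ** 2
    return apply_inverse(evoked, inv, lambda2=lambda2, method="MNE")

# stc = source_estimate(evoked_deviant, fwd, noise_cov)
# The resulting source estimate could then be compared between groups, e.g.,
# over left-hemisphere regions implicated in phonological processing.
```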
Statistical context shapes stimulus-specific adaptation in human auditory cortex.
Herrmann, Björn; Henry, Molly J; Fromboluti, Elisa Kim; McAuley, J Devin; Obleser, Jonas
2015-04-01
Stimulus-specific adaptation is the phenomenon whereby neural response magnitude decreases with repeated stimulation. Inconsistencies between recent nonhuman animal recordings and computational modeling suggest dynamic influences on stimulus-specific adaptation. The present human electroencephalography (EEG) study investigates the potential role of statistical context in dynamically modulating stimulus-specific adaptation by examining the auditory cortex-generated N1 and P2 components. As in previous studies of stimulus-specific adaptation, listeners were presented with oddball sequences in which the presentation of a repeated tone was infrequently interrupted by rare spectral changes taking on three different magnitudes. Critically, the statistical context varied with respect to the probability of small versus large spectral changes within oddball sequences (half of the time a small change was most probable; in the other half a large change was most probable). We observed larger N1 and P2 amplitudes (i.e., release from adaptation) for all spectral changes in the small-change compared with the large-change statistical context. The increase in response magnitude also held for responses to tones presented with high probability, indicating that statistical adaptation can overrule stimulus probability per se in its influence on neural responses. Computational modeling showed that the degree of coadaptation in auditory cortex changed depending on the statistical context, which in turn affected stimulus-specific adaptation. Thus the present data demonstrate that stimulus-specific adaptation in human auditory cortex critically depends on statistical context. Finally, the present results challenge the implicit assumption of stationarity of neural response magnitudes that governs the practice of isolating established deviant-detection responses such as the mismatch negativity. Copyright © 2015 the American Physiological Society.
Qiao, Zhengxue; Yang, Aiying; Qiu, Xiaohui; Yang, Xiuxian; Zhang, Congpei; Zhu, Xiongzhao; He, Jincai; Wang, Lin; Bai, Bing; Sun, Hailian; Zhao, Lun; Yang, Yanjie
2015-10-30
Gender differences in rates of major depressive disorder (MDD) are well established, but gender differences in cognitive function have been little studied. Auditory mismatch negativity (MMN) was used to investigate gender differences in pre-attentive information processing in first episode MDD. In the deviant-standard reverse oddball paradigm, duration auditory MMN was obtained in 30 patients (15 males) and 30 age-/education-matched controls. Over frontal-central areas, mean amplitude of increment MMN (to a 150-ms deviant tone) was smaller in female than male patients; there was no sex difference in decrement MMN (to a 50-ms deviant tone). Neither increment nor decrement MMN differed between female and male patients over temporal areas. Frontal-central MMN and temporal MMN did not differ between male and female controls in any condition. Over frontal-central areas, mean amplitude of increment MMN was smaller in female patients than female controls; there was no difference in decrement MMN. Neither increment nor decrement MMN differed between female patients and female controls over temporal areas. Frontal-central MMN and temporal MMN did not differ between male patients and male controls. Mean amplitude of increment MMN in female patients did not correlate with symptoms, suggesting this sex-specific deficit is a trait- not a state-dependent phenomenon. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Cycowicz, Yael M; Friedman, David
2007-01-01
The orienting response, the brain's reaction to novel and/or out of context familiar events, is reflected by the novelty P3 of the ERP. Contextually novel events also engender high rates of recognition memory. We examined, under incidental and intentional conditions, the effects of visual symbol familiarity on the novelty P3 recorded during an oddball task and on the parietal episodic memory (EM) effect, an index of recollection. Repetition of familiar, but not unfamiliar, symbols elicited a reduction in the novelty P3. Better recognition performance for the familiar symbols was associated with a robust parietal EM effect, which was absent for the unfamiliar symbols in the incidental task. These data demonstrate that processing of novel events depends on expectation and whether stimuli have preexisting representations in long-term semantic memory.
Recording visual evoked potentials and auditory evoked P300 at 9.4T static magnetic field.
Arrubla, Jorge; Neuner, Irene; Hahn, David; Boers, Frank; Shah, N Jon
2013-01-01
Simultaneous recording of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) has shown a number of advantages that make this multimodal technique superior to fMRI alone. The feasibility of recording EEG at ultra-high static magnetic field up to 9.4 T was recently demonstrated and promises to be implemented soon in fMRI studies at ultra-high magnetic fields. Recording visual evoked potentials is expected to be among the simplest options for simultaneous EEG/fMRI at ultra-high magnetic field due to the ready accessibility of the visual cortex. Auditory evoked P300 measurements are of interest since it is believed that they represent the earliest stage of cognitive processing. In this study, we investigate the feasibility of recording visual evoked potentials and auditory evoked P300 in a 9.4 T static magnetic field. For this purpose, EEG data were recorded from 26 healthy volunteers inside a 9.4 T MR scanner using a 32-channel MR compatible EEG system. Visual stimulation and an auditory oddball paradigm were presented in order to elicit event-related potentials (ERPs). Recordings made outside the scanner were performed using the same stimuli and EEG system for comparison purposes. We were able to retrieve visual P100 and auditory P300 evoked potentials in the 9.4 T static magnetic field after correction of the ballistocardiogram artefact using independent component analysis. The latencies of the ERPs recorded at 9.4 T were not different from those recorded at 0 T. The amplitudes of ERPs were higher at 9.4 T when compared to recordings at 0 T. Nevertheless, it seems that the increased amplitudes of the ERPs are due to the effect of the ultra-high field on the EEG recording system rather than alterations in the intrinsic processes that generate the electrophysiological responses.
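The ICA-based removal of the ballistocardiogram artefact described here follows a standard decompose-exclude-reconstruct workflow; the MNE-Python sketch below assumes a preloaded Raw object with at least 20 EEG channels, and the excluded component indices are purely illustrative.

```python
# A minimal MNE-Python sketch of ICA-based removal of ballistocardiogram-like
# components from EEG recorded inside the scanner; `raw` is assumed to be a
# preloaded, filtered mne.io.Raw object.
from mne.preprocessing import ICA

def remove_bcg_components(raw, exclude_idx):
    ica = ICA(n_components=20, random_state=97)
    ica.fit(raw)                      # decompose the EEG into independent components
    ica.exclude = list(exclude_idx)   # components judged to reflect the BCG artefact
    return ica.apply(raw.copy())      # reconstruct the EEG without those components

# raw_clean = remove_bcg_components(raw, exclude_idx=[0, 3])
```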
Rissling, Anthony J.; Miyakoshi, Makoto; Sugar, Catherine A.; Braff, David L.; Makeig, Scott; Light, Gregory A.
2014-01-01
Although sensory processing abnormalities contribute to widespread cognitive and psychosocial impairments in schizophrenia (SZ) patients, scalp-channel measures of averaged event-related potentials (ERPs) mix contributions from distinct cortical source-area generators, diluting the functional relevance of channel-based ERP measures. SZ patients (n = 42) and non-psychiatric comparison subjects (n = 47) participated in a passive auditory duration oddball paradigm, eliciting a triphasic (Deviant−Standard) tone ERP difference complex, here termed the auditory deviance response (ADR), comprised of a mid-frontal mismatch negativity (MMN), P3a positivity, and re-orienting negativity (RON) peak sequence. To identify its cortical sources and to assess possible relationships between their response contributions and clinical SZ measures, we applied independent component analysis to the continuous 68-channel EEG data and clustered the resulting independent components (ICs) across subjects on spectral, ERP, and topographic similarities. Six IC clusters centered in right superior temporal, right inferior frontal, ventral mid-cingulate, anterior cingulate, medial orbitofrontal, and dorsal mid-cingulate cortex each made triphasic response contributions. Although correlations between measures of SZ clinical, cognitive, and psychosocial functioning and standard (Fz) scalp-channel ADR peak measures were weak or absent, for at least four IC clusters one or more significant correlations emerged. In particular, differences in MMN peak amplitude in the right superior temporal IC cluster accounted for 48% of the variance in SZ-subject performance on tasks necessary for real-world functioning and medial orbitofrontal cluster P3a amplitude accounted for 40%/54% of SZ-subject variance in positive/negative symptoms. Thus, source-resolved auditory deviance response measures including MMN may be highly sensitive to SZ clinical, cognitive, and functional characteristics. PMID:25379456
Visual processing affects the neural basis of auditory discrimination.
Kislyuk, Daniel S; Möttönen, Riikka; Sams, Mikko
2008-12-01
The interaction between auditory and visual speech streams is a seamless and surprisingly effective process. An intriguing example is the "McGurk effect": The acoustic syllable /ba/ presented simultaneously with a mouth articulating /ga/ is typically heard as /da/ [McGurk, H., & MacDonald, J. Hearing lips and seeing voices. Nature, 264, 746-748, 1976]. Previous studies have demonstrated the interaction of auditory and visual streams at the auditory cortex level, but the importance of these interactions for the qualitative perception change remained unclear because the change could result from interactions at higher processing levels as well. In our electroencephalogram experiment, we combined the McGurk effect with mismatch negativity (MMN), a response that is elicited in the auditory cortex at a latency of 100-250 msec by any above-threshold change in a sequence of repetitive sounds. An "odd-ball" sequence of acoustic stimuli consisting of frequent /va/ syllables (standards) and infrequent /ba/ syllables (deviants) was presented to 11 participants. Deviant stimuli in the unisensory acoustic stimulus sequence elicited a typical MMN, reflecting discrimination of acoustic features in the auditory cortex. When the acoustic stimuli were dubbed onto a video of a mouth constantly articulating /va/, the deviant acoustic /ba/ was heard as /va/ due to the McGurk effect and was indistinguishable from the standards. Importantly, such deviants did not elicit MMN, indicating that the auditory cortex failed to discriminate between the acoustic stimuli. Our findings show that visual stream can qualitatively change the auditory percept at the auditory cortex level, profoundly influencing the auditory cortex mechanisms underlying early sound discrimination.
Koda, Hiroki; Basile, Muriel; Olivier, Marion; Remeuf, Kevin; Nagumo, Sumiharu; Blois-Heulin, Catherine; Lemasson, Alban
2013-08-01
The central position and universality of music in human societies raises the question of its phylogenetic origin. One of the most important properties of music involves harmonic musical intervals, in response to which humans show a spontaneous preference for consonant over dissonant sounds starting from early human infancy. Comparative studies conducted with organisms at different levels of the primate lineage are needed to understand the evolutionary scenario under which this phenomenon emerged. Although previous research found no preference for consonance in a New World monkey species, the question remained open for Old World monkeys. We used an experimental paradigm based on a sensory reinforcement procedure to test auditory preferences for consonant sounds in Campbell's monkeys (Cercopithecus campbelli campbelli), an Old World monkey species. Although a systematic preference for soft (70 dB) over loud (90 dB) control white noise was found, Campbell's monkeys showed no preference for either consonant or dissonant sounds. The preference for soft white noise validates our noninvasive experimental paradigm, which can easily be reused in any captive facility to test for auditory preferences. These results suggest that the human preference for consonant sounds is not systematically shared with New and Old World monkeys, and that sensitivity to harmonic musical intervals probably emerged very late in the primate lineage.
Proulx, Nicole; Samadani, Ali-Akbar; Chau, Tom
2018-05-16
Event-related potentials (ERPs) have previously been used to confirm the existence of the fast optical signal (FOS) but validation methods have mainly been limited to exploring the temporal correspondence of FOS peaks to those of ERPs. The purpose of this study was to systematically quantify the relationship between FOS and ERP responses to a visual oddball task in both time and frequency domains. Near-infrared spectroscopy (NIRS) and electroencephalography (EEG) sensors were co-located over the prefrontal cortex while participants performed a visual oddball task. Fifteen participants completed 2 data collection sessions each, where they were instructed to keep a mental count of oddball images. The oddball condition produced a positive ERP at 200 ms followed by a negativity 300-500 ms after image onset in the frontal electrodes. In contrast to previous FOS studies, a FOS response was identified only in DC intensity signals and not in phase delay signals. A decrease in DC intensity was found 150-250 ms after oddball image onset with a 400-trial average in 10 of 15 participants. The latency of the positive 200 ms ERP and the FOS DC intensity decrease were significantly correlated for only 6 (out of 15) participants due to the low signal-to-noise ratio of the FOS response. Coherence values between the FOS and ERP oddball responses were found to be significant in the 3-5 Hz frequency band for 10 participants. A significant Granger causal influence of the ERP on the FOS oddball response was uncovered in the 2-6 Hz frequency band for 7 participants. Collectively, our findings suggest that, for a majority of participants, the ERP and the DC intensity signal of the FOS are spectrally coherent, specifically in narrow frequency bands previously associated with event-related oscillations in the prefrontal cortex. However, these electro-optical relationships were only found in a subset of participants. Further research on enhancing the quality of the event-related FOS
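The spectral comparisons reported above, coherence in a narrow low-frequency band and a Granger-causal influence of the ERP on the FOS, can be sketched with standard library calls. The signals below are synthetic stand-ins for trial-averaged ERP and FOS DC-intensity traces; the sampling rate, window length, and lag order are illustrative assumptions.

```python
import numpy as np
from scipy.signal import coherence
from statsmodels.tsa.stattools import grangercausalitytests

fs = 250                                   # assumed sampling rate (Hz)
n = 5000
rng = np.random.default_rng(2)

# Synthetic ERP trace and a FOS trace that weakly follows it with a short lag.
erp = np.sin(2 * np.pi * 4 * np.arange(n) / fs) + rng.normal(0, 0.5, n)
fos = 0.3 * np.roll(erp, 10) + rng.normal(0, 0.5, n)

# Magnitude-squared coherence; inspect the 3-5 Hz band.
f, cxy = coherence(erp, fos, fs=fs, nperseg=512)
band = (f >= 3) & (f <= 5)
print(f"mean coherence 3-5 Hz: {cxy[band].mean():.2f}")

# Granger causality: does the ERP help predict the FOS?
# Column order is (effect, cause) for grangercausalitytests.
results = grangercausalitytests(np.column_stack([fos, erp]), maxlag=10)
```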
Scharinger, Mathias; Monahan, Philip J; Idsardi, William J
2016-03-01
While previous research has established that language-specific knowledge influences early auditory processing, it is still controversial as to what aspects of speech sound representations determine early speech perception. Here, we propose that early processing primarily depends on information propagated top-down from abstractly represented speech sound categories. In particular, we assume that mid-vowels (as in 'bet') exert less top-down effects than the high-vowels (as in 'bit') because of their less specific (default) tongue height position as compared to either high- or low-vowels (as in 'bat'). We tested this assumption in a magnetoencephalography (MEG) study where we contrasted mid- and high-vowels, as well as the low- and high-vowels in a passive oddball paradigm. Overall, significant differences between deviants and standards indexed reliable mismatch negativity (MMN) responses between 200 and 300ms post-stimulus onset. MMN amplitudes differed in the mid/high-vowel contrasts and were significantly reduced when a mid-vowel standard was followed by a high-vowel deviant, extending previous findings. Furthermore, mid-vowel standards showed reduced oscillatory power in the pre-stimulus beta-frequency band (18-26Hz), compared to high-vowel standards. We take this as converging evidence for linguistic category structure to exert top-down influences on auditory processing. The findings are interpreted within the linguistic model of underspecification and the neuropsychological predictive coding framework. Copyright © 2016 Elsevier Inc. All rights reserved.
Magnetoencephalographic signatures of numerosity discrimination in fetuses and neonates.
Schleger, Franziska; Landerl, Karin; Muenssinger, Jana; Draganova, Rossitza; Reinl, Maren; Kiefer-Schmidt, Isabelle; Weiss, Magdalene; Wacker-Gußmann, Annette; Huotilainen, Minna; Preissl, Hubert
2014-01-01
Numerosity discrimination has been demonstrated in newborns, but not in fetuses. Fetal magnetoencephalography allows non-invasive investigation of neural responses in neonates and fetuses. During an oddball paradigm with auditory sequences differing in numerosity, evoked responses were recorded and mismatch responses were quantified as an indicator for auditory discrimination. Thirty pregnant women with healthy fetuses (last trimester) and 30 healthy term neonates participated. Fourteen adults were included as a control group. Based on measurements eligible for analysis, all adults, all neonates, and 74% of fetuses showed numerical mismatch responses. Numerosity discrimination appears to exist in the last trimester of pregnancy.
A novel hybrid auditory BCI paradigm combining ASSR and P300.
Kaongoen, Netiwit; Jo, Sungho
2017-03-01
Brain-computer interface (BCI) is a technology that provides an alternative way of communication by translating brain activities into digital commands. Because vision-dependent BCIs cannot be used by patients who have visual impairment, auditory stimuli have been used to substitute for the conventional visual stimuli. This paper introduces a hybrid auditory BCI that utilizes and combines auditory steady state response (ASSR) and spatial-auditory P300 BCI to improve the performance of the auditory BCI system. The system works by simultaneously presenting auditory stimuli with different pitches and amplitude modulation (AM) frequencies to the user, with beep sounds occurring randomly among all sound sources. Attention to different auditory stimuli yields different ASSRs, and beep sounds trigger the P300 response when they occur in the target channel, so the system can utilize both features for classification. The proposed ASSR/P300-hybrid auditory BCI system achieved 85.33% accuracy with a 9.11 bits/min information transfer rate (ITR) in a binary classification problem. The proposed system outperformed the P300 BCI system (74.58% accuracy with 4.18 bits/min ITR) and the ASSR BCI system (66.68% accuracy with 2.01 bits/min ITR) on the binary-class problem. The system is completely vision-independent. This work demonstrates that combining ASSR and P300 BCI into a hybrid system can result in better performance and could aid the development of future auditory BCIs. Copyright © 2017 Elsevier B.V. All rights reserved.
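The accuracy and information-transfer-rate figures quoted above can be related through the standard Wolpaw ITR formula. The sketch below computes bits per selection from the reported binary accuracies; converting to bits/min additionally requires the selection time, which the abstract does not state, so the selections-per-minute value used here is purely illustrative.

```python
import math

def wolpaw_bits_per_selection(n_classes: int, accuracy: float) -> float:
    """Wolpaw ITR in bits per selection for an n-class problem (0 < accuracy <= 1)."""
    p = accuracy
    if p >= 1.0:
        return math.log2(n_classes)
    return (math.log2(n_classes)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_classes - 1)))

for name, acc in [("hybrid", 0.8533), ("P300", 0.7458), ("ASSR", 0.6668)]:
    bits = wolpaw_bits_per_selection(2, acc)
    selections_per_min = 20          # illustrative assumption, not from the paper
    print(f"{name}: {bits:.3f} bits/selection, "
          f"{bits * selections_per_min:.2f} bits/min at {selections_per_min} selections/min")
```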
Shibasaki, Manabu; Namba, Mari; Oshiro, Misaki; Crandall, Craig G; Nakata, Hiroki
2016-07-01
The effect of hyperthermia on cognitive function remains equivocal, perhaps because of methodological discrepancies. Using electroencephalographic event-related potentials (ERPs), we tested the hypothesis that passive heat stress impairs cognitive processing. Thirteen volunteers performed repeated auditory oddball paradigms under two thermal conditions, normothermic time control and heat stress, on different days. For the heat stress trial, these paradigms were performed at the pre-heat-stress (i.e., normothermic) baseline, when esophageal temperature had increased by ∼0.8°C, when esophageal temperature had increased by ∼2.0°C, and during cooling following the heat stress. The reaction time and ERPs were recorded in each session. For the time control trial, subjects performed the auditory oddball paradigms at approximately the same time intervals as in the heat stress trial. The peak latency and amplitude of an indicator of auditory processing (N100) were not altered regardless of thermal condition. An indicator of stimulus classification/evaluation time (latency of P300) and the reaction time were shortened during heat stress; moreover, an indicator of cognitive processing (the amplitude of P300) was significantly reduced during severe heat stress (8.3 ± 1.3 μV) relative to baseline (12.2 ± 1.0 μV, P < 0.01). No changes in these indexes occurred during the time control trial. During subsequent whole body cooling, the amplitude of P300 remained reduced, and the reaction time and latency of P300 remained shortened. These results suggest that excessive elevations in internal temperature reduce cognitive processing capacity while shortening stimulus classification time. Copyright © 2016 the American Physiological Society.
Kabella, Danielle M; Flynn, Lucinda; Peters, Amanda; Kodituwakku, Piyadasa; Stephen, Julia M
2018-05-24
Prior studies indicate that the auditory mismatch response is sensitive to early alterations in brain development in multiple developmental disorders. Prenatal alcohol exposure is known to impact early auditory processing. The current study hypothesized alterations in the mismatch response in young children with fetal alcohol spectrum disorders (FASD). Participants in this study were 9 children with a FASD and 17 control children (Control) aged 3 to 6 years. Participants underwent magnetoencephalography and structural magnetic resonance imaging scans separately. We compared groups on neurophysiological mismatch negativity (MMN) responses to auditory stimuli measured using the auditory oddball paradigm. Frequent (1,000 Hz) and rare (1,200 Hz) tones were presented at 72 dB. There was no significant group difference in MMN response latency or amplitude represented by the peak located ~200 ms after stimulus presentation in the difference time course between frequent and infrequent tones. Examining the time courses to the frequent and infrequent tones separately, repeated measures analysis of variance with condition (frequent vs. rare), peak (N100m and N200m), and hemisphere as within-subject factors and diagnosis and sex as the between-subject factors showed a significant interaction of peak by diagnosis (p = 0.001), with a pattern of decreased amplitude from N100m to N200m in Control children and the opposite pattern in children with FASD. However, no significant difference was found with the simple effects comparisons. No group differences were found in the response latencies of the rare auditory evoked fields. The results indicate that there was no detectable effect of alcohol exposure on the amplitude or latency of the MMNm response to simple tones modulated by frequency change in preschool-aged children with FASD. However, while discrimination abilities to simple tones may be intact, early auditory sensory processing revealed by the interaction between N100
Tapper, Anthony; Gonzalez, Dave; Roy, Eric; Niechwiej-Szwedo, Ewa
2017-02-01
The purpose of this study was to examine executive functions in team sport athletes with and without a history of concussion. Executive functions comprise many cognitive processes, including working memory, attention and multi-tasking. Past research has shown that concussions cause difficulties in vestibular-visual and vestibular-auditory dual-tasking; however, visual-auditory tasks have rarely been examined. Twenty-nine intercollegiate varsity ice hockey athletes (age = 19.13, SD = 1.56; 15 females) performed an experimental dual-task paradigm that required simultaneously processing visual and auditory information. A brief interview, event description and self-report questionnaires were used to assign participants to each group (concussion, no-concussion). Eighteen athletes had a history of concussion and 11 had no concussion history. The two tests involved visuospatial working memory (i.e., the Corsi block test) and auditory tone discrimination. Participants completed both tasks individually, then simultaneously. Two outcome variables were measured: Corsi block memory span and auditory tone discrimination accuracy. No differences were found when each task was performed alone; however, athletes with a history of concussion performed significantly worse on the tone discrimination task in the dual-task condition. In conclusion, long-term deficits in executive functions were associated with a prior history of concussion when cognitive resources were stressed. Evaluations of executive functions and divided attention appear to be helpful in discriminating participants with and without a history of concussion.
Auditory discrimination training for tinnitus treatment: the effect of different paradigms.
Herraiz, Carlos; Diges, I; Cobo, P; Aparicio, J M; Toledano, A
2010-07-01
Acoustic deprivation, i.e. hearing loss, is responsible for a cascade of processes resulting in reorganisation of the cortex. Tinnitus mechanisms are explained by synchronization of neural spontaneous activity and might be related to cortical re-mapping. Auditory discrimination training (ADT) has been demonstrated in both animals and humans to induce tonotopic changes in the auditory pathways through neural plasticity. We hypothesize that ADT could have some effect on tinnitus perception. The objective of this study was to compare the effect on tinnitus of two paradigms of ADT. Only patients from 20 to 60 years of age were recruited. Inclusion criteria were pure tone tinnitus of mild or moderate handicap according to the Tinnitus Handicap Inventory score (<56). ADT patients were randomized into two groups: SAME (ADT at the same frequency as the tinnitus pitch, 20 patients) and NONSAME (ADT at the frequency one octave below the tinnitus pitch, 21 patients). Pairs of tones (70% standard tones ST, 30% deviant tones ST + 0.1-0.5 kHz) were randomly mixed and presented for 20 min/day for 1 month. Patients had to indicate whether the two sounds of each pair were the same or different. The control group included 26 patients from the waiting list (WLG). Patients were also divided according to the trained frequency and the frequency with the deepest hearing loss. Outcome parameters were the answer to the question "is your tinnitus better, the same, or worse with the treatment?" (RESP), the Tinnitus Handicap Inventory (THI) and a visual analogue scale from 1 to 10 of tinnitus intensity (VAS). Tinnitus improved in 42.2% of the patients (RESP). VAS and THI scores were reduced, but only the THI differences were statistically significant (P = 0.003). ADT patients improved significantly compared with the WLG in RESP and THI scores (P < 0.01). Training at frequencies one octave below the tinnitus pitch (NONSAME) decreased THI scores significantly compared with patients trained at frequencies similar to
Fujimoto, Toshiro; Okumura, Eiichi; Kodabashi, Atsushi; Takeuchi, Kouzou; Otsubo, Toshiaki; Nakamura, Katsumi; Yatsushiro, Kazutaka; Sekine, Masaki; Kamiya, Shinichiro; Shimooki, Susumu; Tamura, Toshiyo
2016-01-01
We studied sex-related differences in gamma oscillation during an auditory oddball task, using magnetoencephalography and electroencephalography assessment of imaginary coherence (IC). We obtained a statistical source map of event-related desynchronization (ERD) / event-related synchronization (ERS) and compared females and males regarding ERD / ERS. Based on the results, we chose seed regions for IC determinations in the low (30-50 Hz), mid (50-100 Hz) and high gamma (100-150 Hz) bands. In males, ERD was increased in the left posterior cingulate cortex (CGp) at 500 ms in the low gamma band, and in the right caudal anterior cingulate cortex (cACC) at 125 ms in the mid-gamma band. ERS was increased in the left rostral anterior cingulate cortex (rACC) at 375 ms in the high gamma band. We chose the CGp, cACC and rACC as seeds, and examined IC between each seed and certain target regions using the IC map. IC changes depended on the gamma sub-band and on the time window. Although IC in the mid and high gamma bands did not show sex-specific differences, IC at 30-50 Hz in males was increased between the left rACC and the frontal, orbitofrontal, inferior temporal and fusiform target regions. The increased IC in males suggests that males may accomplish the task constructively, analytically and emotionally, and that their information processing was more complicated in the cortico-cortical circuit. On the other hand, females showed few differences in IC. Females appeared to approach the task with general attention and economical, well-balanced processing, explained by higher overall functional cortical connectivity. The CGp, cACC and rACC were involved in sex differences in information processing and are likely related to differences in neuroanatomy, hormones and neurotransmitter systems.
Kaufmann, Tobias; Holz, Elisa M; Kübler, Andrea
2013-01-01
This paper describes a case study with a patient in the classic locked-in state, who currently has no means of independent communication. Following a user-centered approach, we investigated event-related potentials (ERP) elicited in different modalities for use in brain-computer interface (BCI) systems. Such systems could provide her with an alternative communication channel. To investigate the most viable modality for achieving BCI based communication, classic oddball paradigms (1 rare and 1 frequent stimulus, ratio 1:5) in the visual, auditory and tactile modality were conducted (2 runs per modality). Classifiers were built on one run and tested offline on another run (and vice versa). In these paradigms, the tactile modality was clearly superior to other modalities, displaying high offline accuracy even when classification was performed on single trials only. Consequently, we tested the tactile paradigm online and the patient successfully selected targets without any error. Furthermore, we investigated use of the visual or tactile modality for different BCI systems with more than two selection options. In the visual modality, several BCI paradigms were tested offline. Neither matrix-based nor so-called gaze-independent paradigms constituted a means of control. These results may thus question the gaze-independence of current gaze-independent approaches to BCI. A tactile four-choice BCI resulted in high offline classification accuracies. Yet, online use raised various issues. Although performance was clearly above chance, practical daily life use appeared unlikely when compared to other communication approaches (e.g., partner scanning). Our results emphasize the need for user-centered design in BCI development including identification of the best stimulus modality for a particular user. Finally, the paper discusses feasibility of EEG-based BCI systems for patients in classic locked-in state and compares BCI to other AT solutions that we also tested during the
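The cross-run offline evaluation described above, training a classifier on one run and testing it on the other and vice versa, can be sketched as follows. The features here are simulated stand-ins for single-trial ERP amplitudes; the classifier choice (shrinkage LDA), the feature dimensionality, and the trial counts are assumptions for illustration, not details from the study.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)

def simulate_run(n_rare=20, n_frequent=100, n_features=32):
    """Simulated single-trial feature vectors for one oddball run (ratio 1:5)."""
    rare = rng.normal(0.5, 1.0, size=(n_rare, n_features))
    frequent = rng.normal(0.0, 1.0, size=(n_frequent, n_features))
    X = np.vstack([rare, frequent])
    y = np.r_[np.ones(n_rare), np.zeros(n_frequent)]
    return X, y

(X1, y1), (X2, y2) = simulate_run(), simulate_run()

# Build the classifier on one run and test offline on the other, and vice versa.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
acc_12 = clf.fit(X1, y1).score(X2, y2)
acc_21 = clf.fit(X2, y2).score(X1, y1)
print(f"offline accuracy: run1->run2 {acc_12:.2f}, run2->run1 {acc_21:.2f}")
```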
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are processed with differential effectiveness depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing of auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for the visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
Opposite brain laterality in analogous auditory and visual tests.
Oltedal, Leif; Hugdahl, Kenneth
2017-11-01
Laterality for language processing can be assessed by auditory and visual tasks. Typically, a right ear/right visual half-field (VHF) advantage is observed, reflecting left-hemispheric lateralization for language. Historically, auditory tasks have shown more consistent and reliable results than VHF tasks. While few studies have compared analogous tasks applied to both sensory modalities in the same participants, one such study by Voyer and Boudreau [(2003). Cross-modal correlation of auditory and visual language laterality tasks: a serendipitous finding. Brain Cogn, 53(2), 393-397] found opposite laterality for visual and auditory language tasks. We adapted an experimental paradigm based on a dichotic listening and VHF approach, and applied the combined language paradigm in two separate experiments, including fMRI in the second experiment to measure brain activation in addition to behavioural data. The first experiment showed a right-ear advantage for the auditory task, but a left half-field advantage for the visual task. The second experiment confirmed the findings, with opposite laterality effects for the visual and auditory tasks. In conclusion, we replicate the finding by Voyer and Boudreau (2003) and support their interpretation that these visual and auditory language tasks measure different cognitive processes.
The effect of changing the secondary task in dual-task paradigms for measuring listening effort.
Picou, Erin M; Ricketts, Todd A
2014-01-01
The purpose of this study was to evaluate the effect of changing the secondary task in dual-task paradigms that measure listening effort. Specifically, the effects of increasing the secondary task complexity or the depth of processing on a paradigm's sensitivity to changes in listening effort were quantified in a series of two experiments. Specific factors investigated within each experiment were background noise and visual cues. Participants in Experiment 1 were adults with normal hearing (mean age 23 years) and participants in Experiment 2 were adults with mild sloping to moderately severe sensorineural hearing loss (mean age 60.1 years). In both experiments, participants were tested using three dual-task paradigms. These paradigms had identical primary tasks, which were always monosyllabic word recognition. The secondary tasks were all physical reaction time measures. The stimulus for the secondary task varied by paradigm and was (1) a simple visual probe, (2) a complex visual probe, or (3) the category of the word presented. In this way, the secondary tasks mainly varied from the simple paradigm by either complexity or depth of speech processing. Using all three paradigms, participants were tested in four conditions: (1) auditory-only stimuli in quiet, (2) auditory-only stimuli in noise, (3) auditory-visual stimuli in quiet, and (4) auditory-visual stimuli in noise. During auditory-visual conditions, the talker's face was visible. Signal-to-noise ratios used during conditions with background noise were set individually so that word recognition performance was matched in auditory-only and auditory-visual conditions. In noise, word recognition performance was approximately 80% and 65% for Experiments 1 and 2, respectively. For both experiments, word recognition performance was stable across the three paradigms, confirming that none of the secondary tasks interfered with the primary task. In Experiment 1 (listeners with normal hearing), analysis of median reaction times
Effects of auditory selective attention on chirp evoked auditory steady state responses.
Bohr, Andreas; Bernarding, Corinna; Strauss, Daniel J; Corona-Strauss, Farah I
2011-01-01
Auditory steady state responses (ASSRs) are frequently used to assess auditory function. Recently, interest in the effects of attention on ASSRs has increased. In this paper, we investigated for the first time possible effects of attention on ASSRs evoked by amplitude-modulated and frequency-modulated chirp paradigms. Different paradigms were designed using chirps with low and high frequency content, and the stimulation was presented in a monaural and a dichotic modality. A total of 10 young subjects participated in the study; they were instructed to ignore the stimuli and, after a second repetition, they had to detect a deviant stimulus. In the time domain analysis, we found enhanced amplitudes for the attended conditions. Furthermore, we observed higher amplitude values for the condition using frequency-modulated low frequency chirps evoked by monaural stimulation. The largest difference between the attended and unattended modalities was exhibited in the dichotic case of the amplitude-modulated condition using chirps with low frequency content.
Bristle-sensors—low-cost flexible passive dry EEG electrodes for neurofeedback and BCI applications
NASA Astrophysics Data System (ADS)
Grozea, Cristian; Voinescu, Catalin D.; Fazli, Siamac
2011-04-01
In this paper, we present a new, low-cost dry electrode for EEG that is made of flexible metal-coated polymer bristles. We examine various standard EEG paradigms, such as capturing occipital alpha rhythms, testing for event-related potentials in an auditory oddball paradigm and performing a sensory motor rhythm-based event-related (de-) synchronization paradigm to validate the performance of the novel electrodes in terms of signal quality. Our findings suggest that the dry electrodes that we developed result in high-quality EEG recordings and are thus suitable for a wide range of EEG studies and BCI applications. Furthermore, due to the flexibility of the novel electrodes, greater comfort is achieved in some subjects, this being essential for long-term use.
Stable Scalp EEG Spatiospectral Patterns Across Paradigms Estimated by Group ICA.
Labounek, René; Bridwell, David A; Mareček, Radek; Lamoš, Martin; Mikl, Michal; Slavíček, Tomáš; Bednařík, Petr; Baštinec, Jaromír; Hluštík, Petr; Brázdil, Milan; Jan, Jiří
2018-01-01
Electroencephalography (EEG) oscillations reflect the superposition of different cortical sources with potentially different frequencies. Various blind source separation (BSS) approaches have been developed and implemented in order to decompose these oscillations, and a subset of approaches have been developed for decomposition of multi-subject data. Group independent component analysis (Group ICA) is one such approach, revealing spatiospectral maps at the group level with distinct frequency and spatial characteristics. The reproducibility of these distinct maps across subjects and paradigms is a relatively unexplored domain and the topic of the present study. To address this, we conducted separate group ICA decompositions of EEG spatiospectral patterns on data collected during three different paradigms or tasks (resting-state, a semantic decision task and a visual oddball task). K-means clustering analysis of back-reconstructed individual subject maps demonstrates that fourteen different independent spatiospectral maps are present across the different paradigms/tasks, i.e., they are generally stable.
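The stability claim above rests on clustering back-reconstructed subject-level spatiospectral maps and checking whether each cluster draws on all three paradigms. A minimal sketch of that check on simulated maps follows; the dimensions, map counts, and feature length are hypothetical, and the real maps would come from the Group ICA back-reconstruction rather than random numbers.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)

# Hypothetical dimensions: 20 subjects, 3 paradigms, 14 maps per decomposition,
# each map flattened to a channel-by-frequency feature vector of length 500.
n_subjects, n_paradigms, n_maps, n_features = 20, 3, 14, 500

maps, paradigm_of_map = [], []
for paradigm in range(n_paradigms):
    # Back-reconstructed subject-level spatiospectral maps for this paradigm.
    maps.append(rng.normal(size=(n_subjects * n_maps, n_features)))
    paradigm_of_map.append(np.full(n_subjects * n_maps, paradigm))
maps = np.vstack(maps)
paradigm_of_map = np.concatenate(paradigm_of_map)

# Cluster all maps together and ask whether every cluster mixes all paradigms.
labels = KMeans(n_clusters=14, n_init=10, random_state=0).fit_predict(maps)
for cluster in range(14):
    present = np.unique(paradigm_of_map[labels == cluster])
    print(f"cluster {cluster:2d}: paradigms represented {present.tolist()}")
```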
Recent advances in exploring the neural underpinnings of auditory scene perception
Snyder, Joel S.; Elhilali, Mounya
2017-01-01
Studies of auditory scene analysis have traditionally relied on paradigms using artificial sounds—and conventional behavioral techniques—to elucidate how we perceptually segregate auditory objects or streams from each other. In the past few decades, however, there has been growing interest in uncovering the neural underpinnings of auditory segregation using human and animal neuroscience techniques, as well as computational modeling. This largely reflects the growth in the fields of cognitive neuroscience and computational neuroscience and has led to new theories of how the auditory system segregates sounds in complex arrays. The current review focuses on neural and computational studies of auditory scene perception published in the past few years. Following the progress that has been made in these studies, we describe (1) theoretical advances in our understanding of the most well-studied aspects of auditory scene perception, namely segregation of sequential patterns of sounds and concurrently presented sounds; (2) the diversification of topics and paradigms that have been investigated; and (3) how new neuroscience techniques (including invasive neurophysiology in awake humans, genotyping, and brain stimulation) have been used in this field. PMID:28199022
Richard, Nelly; Laursen, Bettina; Grupe, Morten; Drewes, Asbjørn M; Graversen, Carina; Sørensen, Helge B D; Bastlund, Jesper F
2017-04-01
Active auditory oddball paradigms are simple tone discrimination tasks used to study the P300 deflection of event-related potentials (ERPs). These ERPs may be quantified by time-frequency analysis. As auditory stimuli cause early high frequency and late low frequency ERP oscillations, the continuous wavelet transform (CWT) is often chosen for decomposition due to its multi-resolution properties. However, as the conventional CWT traditionally applies only one mother wavelet to represent the entire spectrum, the time-frequency resolution is not optimal across all scales. To account for this, we developed and validated a novel method specifically refined to analyse P300-like ERPs in rats. An adapted CWT (aCWT) was implemented to preserve high time-frequency resolution across all scales by employing multiple wavelets operating at different scales. First, decomposition of simulated ERPs was illustrated using the classical CWT and the aCWT. Next, the two methods were applied to EEG recordings obtained from the prefrontal cortex of rats performing a two-tone auditory discrimination task. While only early ERP frequency changes between responses to target and non-target tones were detected by the CWT, both early and late changes were successfully described with high accuracy by the aCWT in rat ERPs. Increased frontal gamma power and phase synchrony were observed particularly within the theta and gamma frequency bands during deviant tones. The study suggests superior performance of the aCWT over the CWT in terms of detailed quantification of time-frequency properties of ERPs. Our methodological investigation indicates that accurate and complete assessment of time-frequency components of short-time neural signals is feasible with the novel analysis approach, which may be advantageous for characterisation of several types of evoked potentials, particularly in rodents.
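For readers unfamiliar with the decomposition step, the conventional CWT of a single-channel ERP can be computed with the PyWavelets package as sketched below. This shows only the standard single-mother-wavelet transform that the paper contrasts with its adapted aCWT; the synthetic waveform, sampling rate, and frequency range are illustrative assumptions.

```python
import numpy as np
import pywt

fs = 1000                                    # assumed sampling rate (Hz)
times = np.arange(-0.1, 0.6, 1 / fs)
rng = np.random.default_rng(5)

# Synthetic ERP-like trace: an early high-frequency burst plus a late slow wave.
erp = (np.exp(-((times - 0.05) / 0.01) ** 2) * np.sin(2 * np.pi * 40 * times)
       + 0.8 * np.exp(-((times - 0.35) / 0.08) ** 2)
       + rng.normal(0, 0.05, times.size))

# Conventional CWT with a single Morlet mother wavelet across all scales.
# (The paper's aCWT instead assigns different wavelets to different scale bands.)
frequencies = np.arange(2, 80, 1.0)                   # target frequencies in Hz
fc = pywt.central_frequency("morl")                   # wavelet center frequency
scales = fc * fs / frequencies                        # scales for those frequencies
coefs, freqs = pywt.cwt(erp, scales, "morl", sampling_period=1 / fs)
power = np.abs(coefs) ** 2                            # time-frequency power map
print(power.shape, freqs[:3])
```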
Effects of semantic relatedness on recall of stimuli preceding emotional oddballs.
Smith, Ryan M; Beversdorf, David Q
2008-07-01
Semantic and episodic memory networks function as highly interconnected systems, both relying on the hippocampal/medial temporal lobe complex (HC/MTL). Episodic memory encoding triggers the retrieval of semantic information, serving to incorporate contextual relationships between the newly acquired memory and existing semantic representations. While emotional material augments episodic memory encoding at the time of stimulus presentation, interactions between emotion and semantic memory that contribute to subsequent episodic recall are not well understood. Using a modified oddball task, we examined the modulatory effects of negative emotion on semantic interactions with episodic memory by measuring the free-recall of serially presented neutral or negative words varying in semantic relatedness. We found increased free-recall for words related to and preceding emotionally negative oddballs, suggesting that negative emotion can indirectly facilitate episodic free-recall by enhancing semantic contributions during encoding. Our findings demonstrate the ability of emotion and semantic memory to interact to mutually enhance free-recall.
Impact of Language on Development of Auditory-Visual Speech Perception
ERIC Educational Resources Information Center
Sekiyama, Kaoru; Burnham, Denis
2008-01-01
The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various…
Mood Modulates Auditory Laterality of Hemodynamic Mismatch Responses during Dichotic Listening
Schock, Lisa; Dyck, Miriam; Demenescu, Liliana R.; Edgar, J. Christopher; Hertrich, Ingo; Sturm, Walter; Mathiak, Klaus
2012-01-01
Hemodynamic mismatch responses can be elicited by deviant stimuli in a sequence of standard stimuli even during cognitively demanding tasks. Emotional context is known to modulate lateralized processing. Right-hemispheric negative emotion processing may bias attention to the right and enhance processing of right-ear stimuli. The present study examined the influence of induced mood on lateralized pre-attentive auditory processing of dichotic stimuli using functional magnetic resonance imaging (fMRI). Faces expressing emotions (sad/happy/neutral) were presented in a blocked design while a dichotic oddball sequence with consonant-vowel (CV) syllables in an event-related design was simultaneously administered. Twenty healthy participants were instructed to feel the emotion perceived on the images and to ignore the syllables. Deviant sounds reliably activated bilateral auditory cortices and confirmed attention effects by modulation of visual activity. Sad mood induction activated visual, limbic and right prefrontal areas. A lateralization effect of emotion-attention interaction was reflected in a stronger response to right-ear deviants in the right auditory cortex during sad mood. This imbalance of resources may be a neurophysiological correlate of laterality in sad mood and depression. Conceivably, the compensatory right-hemispheric enhancement of resources elicits increased ipsilateral processing. PMID:22384105
Exploring Auditory Saltation Using the "Reduced-Rabbit" Paradigm
ERIC Educational Resources Information Center
Getzmann, Stephan
2009-01-01
Sensory saltation is a spatiotemporal illusion in which the judged positions of stimuli are shifted toward subsequent stimuli that follow closely in time. So far, studies on saltation in the auditory domain have usually employed subjective rating techniques, making it difficult to exactly quantify the extent of saltation. In this study, temporal…
Diminished auditory sensory gating during active auditory verbal hallucinations.
Thoma, Robert J; Meier, Andrew; Houck, Jon; Clark, Vincent P; Lewine, Jeffrey D; Turner, Jessica; Calhoun, Vince; Stephen, Julia
2017-10-01
Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the ratio of the peak amplitude of the event-related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and are considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream, resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired-click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptoms Rating Scales AVH Total score (PSYRATS). The results of a mixed model ANOVA revealed an overall effect of AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. The PSYRATS score was significantly and negatively correlated with the N100 gating ratio only in the AVH-off state. These findings link the onset of AVH with a failure of an empirically-defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. Copyright © 2017 Elsevier B.V. All rights reserved.
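The gating ratio defined in the opening sentences of this abstract reduces to a peak-amplitude division, as in the sketch below. The simulated waveforms and the P50 search window are illustrative assumptions rather than parameters from the study.

```python
import numpy as np

fs = 1000
times = np.arange(0, 0.400, 1 / fs)          # 0-400 ms after each click
rng = np.random.default_rng(6)

def gaussian_peak(center_s, width_s, amp):
    """Simple Gaussian bump used to mimic an ERP component."""
    return amp * np.exp(-((times - center_s) / width_s) ** 2)

# Simulated ERPs to the first (S1) and second (S2) clicks of a pair (microvolts).
erp_s1 = gaussian_peak(0.050, 0.010, 4.0) + rng.normal(0, 0.2, times.size)
erp_s2 = gaussian_peak(0.050, 0.010, 2.0) + rng.normal(0, 0.2, times.size)

def p50_amplitude(erp):
    """Peak amplitude in an assumed P50 window (40-80 ms)."""
    window = (times >= 0.040) & (times <= 0.080)
    return erp[window].max()

gating_ratio = p50_amplitude(erp_s2) / p50_amplitude(erp_s1)
print(f"P50 gating ratio (S2/S1): {gating_ratio:.2f}")
```

The N100 and P200 ratios follow the same pattern with different search windows and peak polarities.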
Daliri, Ayoub; Max, Ludo
2018-02-01
Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre
Cerebral responses to local and global auditory novelty under general anesthesia
Uhrig, Lynn; Janssen, David; Dehaene, Stanislas; Jarraya, Béchir
2017-01-01
Primate brains can detect a variety of unexpected deviations in auditory sequences. The local-global paradigm dissociates two hierarchical levels of auditory predictive coding by examining the brain responses to first-order (local) and second-order (global) sequence violations. Using the macaque model, we previously demonstrated that, in the awake state, local violations cause focal auditory responses while global violations activate a brain circuit comprising prefrontal, parietal and cingulate cortices. Here we used the same local-global auditory paradigm to clarify the encoding of hierarchical auditory regularities in anesthetized monkeys and compared their brain responses to those obtained in the awake state, as measured with fMRI. Both propofol, a GABAA agonist, and ketamine, an NMDA antagonist, left intact or even enhanced the cortical response to auditory inputs. The local effect vanished during propofol anesthesia and shifted spatially during ketamine anesthesia compared with wakefulness. Under increasing levels of propofol, we observed a progressive disorganization of the global effect in prefrontal, parietal and cingulate cortices, and its complete suppression under ketamine anesthesia. Anesthesia also suppressed thalamic activations to the global effect. These results suggest that anesthesia preserves initial auditory processing but disturbs both short-term and long-term auditory predictive coding mechanisms. The disorganization of auditory novelty processing under anesthesia relates to a loss of thalamic responses to novelty and to a disruption of higher-order functional cortical networks in parietal, prefrontal and cingulate cortices. PMID:27502046
Phonological Processing in Human Auditory Cortical Fields
Woods, David L.; Herron, Timothy J.; Cate, Anthony D.; Kang, Xiaojian; Yund, E. W.
2011-01-01
We used population-based cortical-surface analysis of functional magnetic imaging data to characterize the processing of consonant–vowel–consonant syllables (CVCs) and spectrally matched amplitude-modulated noise bursts (AMNBs) in human auditory cortex as subjects attended to auditory or visual stimuli in an intermodal selective attention paradigm. Average auditory cortical field (ACF) locations were defined using tonotopic mapping in a previous study. Activations in auditory cortex were defined by two stimulus-preference gradients: (1) Medial belt ACFs preferred AMNBs and lateral belt and parabelt fields preferred CVCs. This preference extended into core ACFs with medial regions of primary auditory cortex (A1) and the rostral field preferring AMNBs and lateral regions preferring CVCs. (2) Anterior ACFs showed smaller activations but more clearly defined stimulus preferences than did posterior ACFs. Stimulus preference gradients were unaffected by auditory attention suggesting that ACF preferences reflect the automatic processing of different spectrotemporal sound features. PMID:21541252
The Complex Pre-Execution Stage of Auditory Cognitive Control: ERPs Evidence from Stroop Tasks
Yu, Bo; Wang, Xunda; Ma, Lin; Li, Liang; Li, Haifeng
2015-01-01
Cognitive control has been extensively studied from an Event-Related Potential (ERP) point of view in the visual modality using Stroop paradigms. Little work has been done with auditory Stroop paradigms, and inconsistent conclusions have been reported, especially regarding the conflict detection stage of cognitive control. This study investigated the early ERP components in an auditory Stroop paradigm, during which participants were asked to identify the volume of spoken words and ignore the word meanings. A series of significant ERP components that distinguished incongruent from congruent trials was revealed: two declined negative-polarity waves (the N1 and the N2) and three declined positive-polarity waves (the P1, the P2 and the P3) over the fronto-central area for the incongruent trials. These early ERP components imply that both a perceptual stage and an identification stage exist in the auditory Stroop effect. A 3-stage cognitive control model was thus proposed for a more detailed description of the human cognitive control mechanism in auditory Stroop tasks. PMID:26368570
Directional Effects between Rapid Auditory Processing and Phonological Awareness in Children
ERIC Educational Resources Information Center
Johnson, Erin Phinney; Pennington, Bruce F.; Lee, Nancy Raitano; Boada, Richard
2009-01-01
Background: Deficient rapid auditory processing (RAP) has been associated with early language impairment and dyslexia. Using an auditory masking paradigm, children with language disabilities perform selectively worse than controls at detecting a tone in a backward masking (BM) condition (tone followed by white noise) compared to a forward masking…
Event-related potential study to aversive auditory stimuli.
Czigler, István; Cox, Trevor J; Gyimesi, Kinga; Horváth, János
2007-06-15
In an auditory oddball task, emotionally negative (aversive) sounds (e.g. rubbing together of polystyrene) and everyday sounds (e.g. ringing of a bicycle bell) were presented as task-irrelevant (novel) sounds. Both the aversive and the everyday sounds elicited the orientation-related P3a component of the event-related potentials (ERPs). In the 154-250 ms range, the ERPs to the aversive sounds were more negative than the ERPs to the everyday sounds. For the aversive sounds, this negativity was followed by a frontal positive wave (372-456 ms). The aversive sounds also elicited a larger late positive shift than the everyday sounds. The early negativity is considered an initial effect in a broad neural network including limbic structures, while the later positive waves are related to the cognitive assessment of the stimuli and to memory-related processes.
Working memory capacity affects the interference control of distractors at auditory gating.
Tsuchida, Yukio; Katayama, Jun'ichi; Murohashi, Harumitsu
2012-05-10
It is important to understand the role of individual differences in working memory capacity (WMC). We investigated the relation between differences in WMC and N1 in event-related brain potentials as a measure of early selective attention for an auditory distractor in three-stimulus oddball tasks that required minimum memory. A high-WMC group (n=13) showed a smaller N1 in response to a distractor and target than did a low-WMC group (n=13) in the novel condition with high distraction. However, in the simple condition with low distraction, there was no difference in N1 between the groups. For all participants (n=52), the correlation between the scores for WMC and N1 peak amplitude was strong for distractors in the novel condition, whereas there was no relation in the simple condition. These results suggest that WMC can predict the interference control for a salient distractor at auditory gating even during a selective attention task. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Auditory Temporal Conditioning in Neonates.
ERIC Educational Resources Information Center
Franz, W. K.; And Others
Twenty normal newborns, approximately 36 hours old, were tested using an auditory temporal conditioning paradigm which consisted of a slow rise, 75 db tone played for five seconds every 25 seconds, ten times. Responses to the tones were measured by instantaneous, beat-to-beat heartrate; and the test trial was designated as the 2 1/2-second period…
Event-related potentials to visual, auditory, and bimodal (combined auditory-visual) stimuli.
Isoğlu-Alkaç, Ummühan; Kedzior, Karina; Keskindemirci, Gonca; Ermutlu, Numan; Karamursel, Sacit
2007-02-01
The purpose of this study was to investigate the response properties of event-related potentials to unimodal and bimodal stimulation. The amplitudes of N1 and P2 were larger during bimodal evoked potentials (BEPs) than auditory evoked potentials (AEPs) at the anterior sites, and the amplitudes of P1 were larger during BEPs than VEPs, especially at the parieto-occipital locations. Responses to bimodal stimulation had longer latencies than responses to unimodal stimulation. The N1 and P2 components were larger in amplitude and longer in latency during the bimodal paradigm and predominantly occurred at the anterior sites. Therefore, the current bimodal paradigm can be used to investigate the involvement and location of specific neural generators that contribute to higher processing of sensory information. Moreover, this paradigm may be a useful tool to investigate the level of sensory dysfunction in clinical samples.
Briggs, Kate E; Martin, Frances H
2009-06-01
There are two dominant theories of affective picture processing: the first holds that attention is more deeply engaged by motivationally relevant stimuli (i.e., stimuli that activate both the appetitive and aversive systems), and the second that attention is more deeply engaged by aversive stimuli, described as the negativity bias. In order to identify the theory that can best account for affective picture processing, event-related potentials (ERPs) were recorded from 34 participants during a modified oddball paradigm in which levels of stimulus valence, arousal, and motivational relevance were systematically varied. Results were partially consistent with motivated attention models of emotional perception, as P3b amplitude was enhanced in response to highly arousing and motivationally relevant sexual and unpleasant stimuli compared to the respective low arousing and less motivationally relevant stimuli. However, P3b amplitudes were significantly larger in response to the highly arousing sexual stimuli compared to all other affective stimuli, which is not consistent with either dominant theory. The current study therefore highlights the need for a revised model of affective picture processing and provides a platform for further research investigating the independent effects of sexual arousal on cognitive processing.
Deviance sensitivity in the auditory cortex of freely moving rats
2018-01-01
Deviance sensitivity is the specific response to a surprising stimulus, one that violates expectations set by the past stimulation stream. In audition, deviance sensitivity is often conflated with stimulus-specific adaptation (SSA), the decrease in responses to a common stimulus that only partially generalizes to other, rare stimuli. SSA is usually measured using oddball sequences, where a common (standard) tone and a rare (deviant) tone are randomly intermixed. However, larger responses to a tone when deviant do not necessarily represent deviance sensitivity. Deviance sensitivity is commonly tested using a control sequence in which many different tones serve as the standard, eliminating the expectations set by the standard ('deviant among many standards'). When the response to a tone when deviant (against a single standard) is larger than the response to the same tone in the control sequence, it is concluded that true deviance sensitivity occurs. In the primary auditory cortex of anesthetized rats, responses to deviants and to the same tones in the control condition are comparable in size. We recorded local field potentials and multiunit activity from the auditory cortex of awake, freely moving rats, implanted with 32-channel drivable microelectrode arrays, using telemetry. We observed highly significant SSA in the awake state. Moreover, the responses to a tone when deviant were significantly larger than the responses to the same tone in the control condition. These results establish the presence of true deviance sensitivity in primary auditory cortex in awake rats. PMID:29874246
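One way to quantify what this abstract describes is to compare the response to the same tone in three roles: standard, deviant, and embedded in the many-standards control. The sketch below computes a commonly used SSA index, (d − s) / (d + s), and the deviant-versus-control contrast; the index formula is a standard convention in the SSA literature rather than something stated in this paper, and the response values are invented for illustration.

```python
# Hypothetical mean response magnitudes (e.g., spike counts or LFP amplitudes)
# for the same tone presented as standard, as deviant, and within a
# many-standards control sequence.
resp_standard = 12.0
resp_deviant = 20.0
resp_control = 15.0

# Stimulus-specific adaptation index as conventionally defined in the SSA
# literature: positive values mean larger responses when the tone is deviant.
ssa_index = (resp_deviant - resp_standard) / (resp_deviant + resp_standard)

# True deviance sensitivity requires the deviant response to exceed the
# response to the same tone in the many-standards control condition.
deviance_contrast = resp_deviant - resp_control

print(f"SSA index: {ssa_index:.2f}")
verdict = "consistent with" if deviance_contrast > 0 else "no evidence of"
print(f"deviant - control: {deviance_contrast:.1f} ({verdict} true deviance sensitivity)")
```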
Pérez, Miguel Ángel; Pérez-Valenzuela, Catherine; Rojas-Thomas, Felipe; Ahumada, Juan; Fuenzalida, Marco; Dagnino-Subiabre, Alexies
2013-08-29
Chronic stress induces dendritic atrophy in the rat primary auditory cortex (A1), a key brain area for auditory attention. The aim of this study was to determine whether repeated restraint stress affects auditory attention and synaptic transmission in A1. Male Sprague-Dawley rats were trained in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance over 80% of correct trials in the 2-ACT were randomly assigned to control and restraint stress experimental groups. To analyze the effects of restraint stress on the auditory attention, trained rats of both groups were subjected to 50 2-ACT trials one day before and one day after of the stress period. A difference score was determined by subtracting the number of correct trials after from those before the stress protocol. Another set of rats was used to study the synaptic transmission in A1. Restraint stress decreased the number of correct trials by 28% compared to the performance of control animals (p < 0.001). Furthermore, stress reduced the frequency of spontaneous inhibitory postsynaptic currents (sIPSC) and miniature IPSC in A1, whereas glutamatergic efficacy was not affected. Our results demonstrate that restraint stress decreased auditory attention and GABAergic synaptic efficacy in A1. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
Music training and working memory: an ERP study.
George, Elyse M; Coch, Donna
2011-04-01
While previous research has suggested that music training is associated with improvements in various cognitive and linguistic skills, the mechanisms mediating or underlying these associations are mostly unknown. Here, we addressed the hypothesis that previous music training is related to improved working memory. Using event-related potentials (ERPs) and a standardized test of working memory, we investigated both neural and behavioral aspects of working memory in college-aged, non-professional musicians and non-musicians. Behaviorally, musicians outperformed non-musicians on standardized subtests of visual, phonological, and executive memory. ERPs were recorded in standard auditory and visual oddball paradigms (participants responded to infrequent deviant stimuli embedded in lists of standard stimuli). Electrophysiologically, musicians demonstrated faster updating of working memory (shorter latency P300s) in both the auditory and visual domains and musicians allocated more neural resources to auditory stimuli (larger amplitude P300), showing increased sensitivity to the auditory standard/deviant difference and less effortful updating of auditory working memory. These findings demonstrate that long-term music training is related to improvements in working memory, in both the auditory and visual domains and in terms of both behavioral and ERP measures. Copyright © 2011 Elsevier Ltd. All rights reserved.
Auditory spatial processing in Alzheimer’s disease
Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.
2015-01-01
The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer
Incidental Auditory Category Learning
Gabay, Yafit; Dick, Frederic K.; Zevin, Jason D.; Holt, Lori L.
2015-01-01
Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in one of four possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from one of four distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. PMID:26010588
Auditory perceptual simulation: Simulating speech rates or accents?
Zhou, Peiyun; Christianson, Kiel
2016-07-01
When readers engage in Auditory Perceptual Simulation (APS) during silent reading, they mentally simulate characteristics of voices attributed to a particular speaker or a character depicted in the text. Previous research found that auditory perceptual simulation of a faster native English speaker during silent reading led to shorter reading times than auditory perceptual simulation of a slower non-native English speaker. Yet, it was uncertain whether this difference was triggered by the different speech rates of the speakers, or by the difficulty of simulating an unfamiliar accent. The current study investigates this question by comparing faster Indian-English speech and slower American-English speech in the auditory perceptual simulation paradigm. Analyses of reading times of individual words and the full sentence reveal that the auditory perceptual simulation effect again modulated reading rate, and auditory perceptual simulation of the faster Indian-English speech led to faster reading rates compared to auditory perceptual simulation of the slower American-English speech. The comparison between this experiment and the data from Zhou and Christianson (2016) demonstrates further that the "speakers'" speech rate, rather than the difficulty of simulating a non-native accent, is the primary mechanism underlying auditory perceptual simulation effects. Copyright © 2016 Elsevier B.V. All rights reserved.
Auditory Cortex Is Required for Fear Potentiation of Gap Detection
Weible, Aldis P.; Liu, Christine; Niell, Cristopher M.
2014-01-01
Auditory cortex is necessary for the perceptual detection of brief gaps in noise, but is not necessary for many other auditory tasks such as frequency discrimination, prepulse inhibition of startle responses, or fear conditioning with pure tones. It remains unclear why auditory cortex should be necessary for some auditory tasks but not others. One possibility is that auditory cortex is causally involved in gap detection and other forms of temporal processing in order to associate meaning with temporally structured sounds. This predicts that auditory cortex should be necessary for associating meaning with gaps. To test this prediction, we developed a fear conditioning paradigm for mice based on gap detection. We found that pairing a 10 or 100 ms gap with an aversive stimulus caused a robust enhancement of gap detection measured 6 h later, which we refer to as fear potentiation of gap detection. Optogenetic suppression of auditory cortex during pairing abolished this fear potentiation, indicating that auditory cortex is critically involved in associating temporally structured sounds with emotionally salient events. PMID:25392510
Finke, Mareike; Büchner, Andreas; Ruigendijk, Esther; Meyer, Martin; Sandmann, Pascale
2016-07-01
There is a high degree of variability in speech intelligibility outcomes across cochlear-implant (CI) users. To better understand how auditory cognition affects speech intelligibility with the CI, we performed an electroencephalography study in which we examined the relationship between central auditory processing, cognitive abilities, and speech intelligibility. Postlingually deafened CI users (N=13) and matched normal-hearing (NH) listeners (N=13) performed an oddball task with words presented in different background conditions (quiet, stationary noise, modulated noise). Participants had to categorize words as living (targets) or non-living entities (standards). We also assessed participants' working memory (WM) capacity and verbal abilities. For the oddball task, we found lower hit rates and prolonged response times in CI users when compared with NH listeners. Noise-related prolongation of the N1 amplitude was found for all participants. Further, we observed group-specific modulation effects of event-related potentials (ERPs) as a function of background noise. While NH listeners showed stronger noise-related modulation of the N1 latency, CI users revealed enhanced modulation effects of the N2/N4 latency. In general, higher-order processing (N2/N4, P3) was prolonged in CI users in all background conditions when compared with NH listeners. Longer N2/N4 latency in CI users suggests that these individuals have difficulties to map acoustic-phonetic features to lexical representations. These difficulties seem to be increased for speech-in-noise conditions when compared with speech in quiet background. Correlation analyses showed that shorter ERP latencies were related to enhanced speech intelligibility (N1, N2/N4), better lexical fluency (N1), and lower ratings of listening effort (N2/N4) in CI users. In sum, our findings suggest that CI users and NH listeners differ with regards to both the sensory and the higher-order processing of speech in quiet as well as in
Stress improves selective attention towards emotionally neutral left ear stimuli.
Hoskin, Robert; Hunter, M D; Woodruff, P W R
2014-09-01
Research concerning the impact of psychological stress on visual selective attention has produced mixed results. The current paper describes two experiments which utilise a novel auditory oddball paradigm to test the impact of psychological stress on auditory selective attention. Participants had to report the location of emotionally-neutral auditory stimuli, while ignoring task-irrelevant changes in their content. The results of the first experiment, in which speech stimuli were presented, suggested that stress improves the ability to selectively attend to left, but not right ear stimuli. When this experiment was repeated using tonal stimuli the same result was evident, but only for female participants. Females were also found to experience greater levels of distraction in general across the two experiments. These findings support the goal-shielding theory which suggests that stress improves selective attention by reducing the attentional resources available to process task-irrelevant information. The study also demonstrates, for the first time, that this goal-shielding effect extends to auditory perception. Copyright © 2014 Elsevier B.V. All rights reserved.
Neurophysiological Effects of Meditation Based on Evoked and Event Related Potential Recordings
Singh, Nilkamal; Telles, Shirley
2015-01-01
Evoked potentials (EPs) are a relatively noninvasive method to assess the integrity of sensory pathways. As the neural generators for most of the components are relatively well worked out, EPs have been used to understand the changes occurring during meditation. Event-related potentials (ERPs) yield useful information about the response to tasks, usually assessing attention. A brief review of the literature yielded eleven studies on EPs and seventeen on ERPs from 1978 to 2014. The EP studies covered short, mid, and long latency EPs, using both auditory and visual modalities. ERP studies reported the effects of meditation on tasks such as the auditory oddball paradigm, the attentional blink task, mismatched negativity, and affective picture viewing among others. Both EP and ERPs were recorded in several meditations detailed in the review. Maximum changes occurred in mid latency (auditory) EPs suggesting that maximum changes occur in the corresponding neural generators in the thalamus, thalamic radiations, and primary auditory cortical areas. ERP studies showed meditation can increase attention and enhance efficiency of brain resource allocation with greater emotional control. PMID:26137479
Towards a truly mobile auditory brain-computer interface: exploring the P300 to take away.
De Vos, Maarten; Gandras, Katharina; Debener, Stefan
2014-01-01
In a previous study we presented a low-cost, small, and wireless 14-channel EEG system suitable for field recordings (Debener et al., 2012, Psychophysiology). In the present follow-up study we investigated whether a single-trial P300 response can be reliably measured with this system, while subjects freely walk outdoors. Twenty healthy participants performed a three-class auditory oddball task, which included rare target and non-target distractor stimuli presented with equal probabilities of 16%. Data were recorded in a seated (control condition) and in a walking condition, both of which were realized outdoors. A significantly larger P300 event-related potential amplitude was evident for targets compared to distractors (p<.001), but no significant interaction with recording condition emerged. P300 single-trial analysis was performed with regularized stepwise linear discriminant analysis and revealed above chance-level classification accuracies for most participants (19 out of 20 for the seated, 16 out of 20 for the walking condition), with mean classification accuracies of 71% (seated) and 64% (walking). Moreover, the resulting information transfer rates for the seated and walking conditions were comparable to a recently published laboratory auditory brain-computer interface (BCI) study. This leads us to conclude that a truly mobile auditory BCI system is feasible. © 2013.
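A hedged sketch of single-trial target-versus-distractor classification. The study used regularized stepwise LDA; the snippet below substitutes scikit-learn's shrinkage LDA and simulated feature vectors, so it illustrates the general approach rather than reproducing the authors' pipeline:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Simulated single-trial features: mean amplitudes in consecutive time windows
# over a handful of channels (illustrative dimensions, not real EEG).
n_targets, n_distractors, n_features = 80, 80, 70
X_target = rng.normal(0.8, 1.0, (n_targets, n_features))   # larger P300-like signal
X_distr = rng.normal(0.0, 1.0, (n_distractors, n_features))
X = np.vstack([X_target, X_distr])
y = np.r_[np.ones(n_targets), np.zeros(n_distractors)]

# Shrinkage-regularized LDA keeps the covariance estimate stable when the number
# of features approaches the number of trials.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"Single-trial target vs distractor accuracy: {acc:.2f}")
```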
Seibold, Julia C; Nolden, Sophie; Oberem, Josefa; Fels, Janina; Koch, Iring
2018-06-01
In an auditory attention-switching paradigm, participants heard two simultaneously spoken number-words, each presented to one ear, and decided whether the target number was smaller or larger than 5 by pressing a left or right key. An instructional cue in each trial indicated which feature had to be used to identify the target number (e.g., female voice). Auditory attention-switch costs were found when this feature changed compared to when it repeated in two consecutive trials. Earlier studies employing this paradigm showed mixed results when they examined whether such cued auditory attention-switches can be prepared actively during the cue-stimulus interval. This study systematically assessed which preconditions are necessary for the advance preparation of auditory attention-switches. Three experiments were conducted that controlled for cue-repetition benefits, modality switches between cue and stimuli, as well as for predictability of the switch-sequence. Only in the third experiment, in which predictability for an attention-switch was maximal due to a pre-instructed switch-sequence and predictable stimulus onsets, active switch-specific preparation was found. These results suggest that the cognitive system can prepare auditory attention-switches, and this preparation seems to be triggered primarily by the memorised switching-sequence and valid expectations about the time of target onset.
Terreros, Gonzalo; Jorratt, Pascal; Aedo, Cristian; Elgoyhen, Ana Belén; Delano, Paul H
2016-07-06
During selective attention, subjects voluntarily focus their cognitive resources on a specific stimulus while ignoring others. Top-down filtering of peripheral sensory responses by higher structures of the brain has been proposed as one of the mechanisms responsible for selective attention. A prerequisite to accomplish top-down modulation of the activity of peripheral structures is the presence of corticofugal pathways. The mammalian auditory efferent system is a unique neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear bundle, and it has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear neurons in selective attention paradigms. Here, we trained wild-type and α-9 nicotinic receptor subunit knock-out (KO) mice, which lack cholinergic transmission between medial olivocochlear neurons and outer hair cells, in a two-choice visual discrimination task and studied the behavioral consequences of adding different types of auditory distractors. In addition, we evaluated the effects of contralateral noise on auditory nerve responses as a measure of the individual strength of the olivocochlear reflex. We demonstrate that KO mice have a reduced olivocochlear reflex strength and perform poorly in a visual selective attention paradigm. These results confirm that an intact medial olivocochlear transmission aids in ignoring auditory distraction during selective attention to visual stimuli. The auditory efferent system is a neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear system. It has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear
Bachiller, Alejandro; Romero, Sergio; Molina, Vicente; Alonso, Joan F; Mañanas, Miguel A; Poza, Jesús; Hornero, Roberto
2015-12-01
The present study investigates the neural substrates underlying cognitive processing in schizophrenia (Sz) patients. To this end, an auditory 3-stimulus oddball paradigm was used to identify P3a and P3b components, elicited by rare-distractor and rare-target tones, respectively. Event-related potentials (ERP) were recorded from 31 Sz patients and 38 healthy controls. The P3a and P3b brain-source generators were identified by time-averaging of low-resolution brain electromagnetic tomography (LORETA) current density images. In contrast with the commonly used fixed window of interest (WOI), we proposed to apply an adaptive WOI, which takes into account subjects' P300 latency variability. Our results showed different P3a and P3b source activation patterns in both groups. P3b sources included frontal, parietal and limbic lobes, whereas P3a response generators were localized over bilateral frontal and superior temporal regions. These areas have been related to the discrimination of auditory stimulus and to the inhibition (P3a) or the initiation (P3b) of motor response in a cognitive task. In addition, differences in source localization between Sz and control groups were observed. Sz patients showed lower P3b source activity in bilateral frontal structures and the cingulate. P3a generators were less widespread for Sz patients than for controls in right superior, medial and middle frontal gyrus. Our findings suggest that target and distractor processing involves distinct attentional subsystems, both being altered in Sz. Hence, the study of neuroelectric brain information can provide further insights to understand cognitive processes and underlying mechanisms in Sz. Copyright © 2015 Elsevier B.V. All rights reserved.
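The adaptive window-of-interest idea can be sketched in a few lines: instead of one fixed latency window for every subject, the window is centred on each individual's P300 peak. The search range, half-width, and simulated waveform below are assumptions for illustration, not the study's parameters:

```python
import numpy as np

def adaptive_woi(erp, times, search=(0.25, 0.6), half_width=0.05):
    """Centre the window of interest on the individual P300 peak latency
    rather than using a fixed window across subjects."""
    mask = (times >= search[0]) & (times <= search[1])
    peak_latency = times[mask][np.argmax(erp[mask])]
    return peak_latency - half_width, peak_latency + half_width

# Illustrative subject with a late P3b peaking near 420 ms.
times = np.arange(0.0, 0.9, 0.004)
erp = np.exp(-((times - 0.42) ** 2) / (2 * 0.04 ** 2))
lo, hi = adaptive_woi(erp, times)
print(f"Adaptive WOI: {lo * 1000:.0f}-{hi * 1000:.0f} ms")
```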
Rieger, Kathryn; Rarra, Marie-Helene; Diaz Hernandez, Laura; Hubl, Daniela; Koenig, Thomas
2018-03-01
Auditory verbal hallucinations depend on a broad neurobiological network ranging from the auditory system to language as well as memory-related processes. As part of this, the auditory N100 event-related potential (ERP) component is attenuated in patients with schizophrenia, with stronger attenuation occurring during auditory verbal hallucinations. Changes in the N100 component assumingly reflect disturbed responsiveness of the auditory system toward external stimuli in schizophrenia. With this premise, we investigated the therapeutic utility of neurofeedback training to modulate the auditory-evoked N100 component in patients with schizophrenia and associated auditory verbal hallucinations. Ten patients completed electroencephalography neurofeedback training for modulation of N100 (treatment condition) or another unrelated component, P200 (control condition). On a behavioral level, only the control group showed a tendency for symptom improvement in the Positive and Negative Syndrome Scale total score in a pre-/postcomparison ( t (4) = 2.71, P = .054); however, no significant differences were found in specific hallucination related symptoms ( t (7) = -0.53, P = .62). There was no significant overall effect of neurofeedback training on ERP components in our paradigm; however, we were able to identify different learning patterns, and found a correlation between learning and improvement in auditory verbal hallucination symptoms across training sessions ( r = 0.664, n = 9, P = .05). This effect results, with cautious interpretation due to the small sample size, primarily from the treatment group ( r = 0.97, n = 4, P = .03). In particular, a within-session learning parameter showed utility for predicting symptom improvement with neurofeedback training. In conclusion, patients with schizophrenia and associated auditory verbal hallucinations who exhibit a learning pattern more characterized by within-session aptitude may benefit from electroencephalography neurofeedback
Neuronal chronometry of target detection: fusion of hemodynamic and event-related potential data.
Calhoun, V D; Adali, T; Pearlson, G D; Kiehl, K A
2006-04-01
Event-related potential (ERP) studies of the brain's response to infrequent, target (oddball) stimuli elicit a sequence of physiological events, the most prominent and well-studied being the P300 (or P3) complex, peaking approximately 300 ms post-stimulus for simple stimuli and slightly later for more complex stimuli. Localization of the neural generators of the human oddball response remains challenging due to the lack of a single imaging technique with good spatial and temporal resolution. Here, we use independent component analyses to fuse ERP and fMRI modalities in order to examine the dynamics of the auditory oddball response with high spatiotemporal resolution across the entire brain. Initial activations in auditory and motor planning regions are followed by auditory association cortex and motor execution regions. The P3 response is associated with brainstem, temporal lobe, and medial frontal activity and finally a late temporal lobe "evaluative" response. We show that fusing imaging modalities with different advantages can provide new information about the brain.
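A minimal sketch of the fusion idea (often described as joint ICA): ERP and fMRI features are concatenated per subject and decomposed with a single ICA, so that each component couples an ERP profile with an fMRI pattern that covary across subjects. The dimensions and the use of scikit-learn's FastICA below are assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)

# Illustrative dimensions: one row per subject; columns are the subject's ERP
# samples concatenated with the subject's vectorised fMRI contrast map.
n_subjects, n_erp, n_vox = 20, 300, 2000
erp_features = rng.normal(size=(n_subjects, n_erp))
fmri_features = rng.normal(size=(n_subjects, n_vox))
joint = np.hstack([erp_features, fmri_features])

# One ICA over the fused matrix: every component pairs an ERP time course with
# an fMRI spatial pattern, and the sources give per-subject loadings.
ica = FastICA(n_components=5, random_state=0, max_iter=1000)
subject_loadings = ica.fit_transform(joint)   # (subjects x components)
profiles = ica.mixing_.T                      # (components x concatenated features)
erp_part, fmri_part = profiles[:, :n_erp], profiles[:, n_erp:]
print(subject_loadings.shape, erp_part.shape, fmri_part.shape)
```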
Role of semantic paradigms for optimization of language mapping in clinical FMRI studies.
Zacà, D; Jarso, S; Pillai, J J
2013-10-01
The optimal paradigm choice for language mapping in clinical fMRI studies is challenging due to the variability in activation among different paradigms, the contribution to activation of cognitive processes other than language, and the difficulties in monitoring patient performance. In this study, we compared language localization and lateralization between 2 commonly used clinical language paradigms and 3 newly designed dual-choice semantic paradigms to define a streamlined and adequate language-mapping protocol. Twelve healthy volunteers performed 5 language paradigms: Silent Word Generation, Sentence Completion, Visual Antonym Pair, Auditory Antonym Pair, and Noun-Verb Association. Group analysis was performed to assess statistically significant differences in fMRI percentage signal change and lateralization index among these paradigms in 5 ROIs: inferior frontal gyrus, superior frontal gyrus, middle frontal gyrus for expressive language activation, middle temporal gyrus, and superior temporal gyrus for receptive language activation. In the expressive ROIs, Silent Word Generation was the most robust and best lateralizing paradigm (greater percentage signal change and lateralization index than semantic paradigms at P < .01 and P < .05 levels, respectively). In the receptive region of interest, Sentence Completion and Noun-Verb Association were the most robust activators (greater percentage signal change than other paradigms, P < .01). All except Auditory Antonym Pair were good lateralizing tasks (the lateralization index was significantly lower than other paradigms, P < .05). The combination of Silent Word Generation and ≥1 visual semantic paradigm, such as Sentence Completion and Noun-Verb Association, is adequate to determine language localization and lateralization; Noun-Verb Association has the additional advantage of objective monitoring of patient performance.
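The lateralization index referred to here is conventionally computed as (L − R)/(L + R) over homologous left/right regions of interest. A minimal sketch with made-up voxel counts (the threshold-based counting is one common convention, not necessarily the one used in this study):

```python
def lateralization_index(left_activation, right_activation):
    """LI = (L - R) / (L + R): +1 is fully left-lateralised, -1 fully right.
    L and R are typically suprathreshold voxel counts (or summed signal change)
    in homologous left and right ROIs."""
    return (left_activation - right_activation) / (left_activation + right_activation)

# Example: 420 suprathreshold voxels in left IFG vs 140 in right IFG.
print(f"LI = {lateralization_index(420, 140):+.2f}")   # +0.50 -> left-lateralised
```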
Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.
Teki, Sundeep; Kumar, Sukhbinder; Griffiths, Timothy D
2016-01-01
The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance-the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.
Trunk, Attila; Stefanics, Gábor; Zentai, Norbert; Kovács-Bálint, Zsófia; Thuróczy, György; Hernádi, István
2013-01-01
Potential effects of a 30 min exposure to third generation (3G) Universal Mobile Telecommunications System (UMTS) mobile phone-like electromagnetic fields (EMFs) were investigated on human brain electrical activity in two experiments. In the first experiment, spontaneous electroencephalography (sEEG) was analyzed (n = 17); in the second experiment, auditory event-related potentials (ERPs) and automatic deviance detection processes reflected by mismatch negativity (MMN) were investigated in a passive oddball paradigm (n = 26). Both sEEG and ERP experiments followed a double-blind protocol where subjects were exposed to either genuine or sham irradiation in two separate sessions. In both experiments, electroencephalograms (EEG) were recorded at midline electrode sites before and after exposure while subjects were watching a silent documentary. Spectral power of sEEG data was analyzed in the delta, theta, alpha, and beta frequency bands. In the ERP experiment, subjects were presented with a random series of standard (90%) and frequency-deviant (10%) tones in a passive binaural oddball paradigm. The amplitude and latency of the P50, N100, P200, MMN, and P3a components were analyzed. We found no measurable effects of a 30 min 3G mobile phone irradiation on the EEG spectral power in any frequency band studied. Also, we found no significant effects of EMF irradiation on the amplitude and latency of any of the ERP components. In summary, the present results do not support the notion that a 30 min unilateral 3G EMF exposure interferes with human sEEG activity, auditory evoked potentials or automatic deviance detection indexed by MMN. Copyright © 2012 Wiley Periodicals, Inc.
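Spectral power in the delta, theta, alpha, and beta bands of the kind analyzed here is commonly estimated with Welch's method. A self-contained sketch with a simulated midline trace; the band edges and window length are conventional assumptions rather than the study's exact settings:

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power(eeg, sfreq, bands=BANDS):
    """Mean spectral power per canonical band from a single-channel trace."""
    freqs, psd = welch(eeg, fs=sfreq, nperseg=int(2 * sfreq))
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}

# Illustrative 60 s recording: 10 Hz alpha rhythm plus broadband noise.
sfreq = 250.0
t = np.arange(0, 60, 1 / sfreq)
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t)
eeg += np.random.default_rng(4).normal(0, 5e-6, t.size)
print(band_power(eeg, sfreq))
```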
Duration of Auditory Sensory Memory in Parents of Children with SLI: A Mismatch Negativity Study
ERIC Educational Resources Information Center
Barry, Johanna G.; Hardiman, Mervyn J.; Line, Elizabeth; White, Katherine B.; Yasin, Ifat; Bishop, Dorothy V. M.
2008-01-01
In a previous behavioral study, we showed that parents of children with SLI had a subclinical deficit in phonological short-term memory. Here, we tested the hypothesis that they also have a deficit in nonverbal auditory sensory memory. We measured auditory sensory memory using a paradigm involving an electrophysiological component called the…
Transitioning EEG experiments away from the laboratory using a Raspberry Pi 2.
Kuziek, Jonathan W P; Shienh, Axita; Mathewson, Kyle E
2017-02-01
Electroencephalography (EEG) experiments are typically performed in controlled laboratory settings to minimise noise and produce reliable measurements. These controlled conditions also reduce the applicability of the obtained results to more varied environments and may limit their relevance to everyday situations. Advances in computer portability may increase the mobility and applicability of EEG results while decreasing costs. In this experiment we show that stimulus presentation using a Raspberry Pi 2 computer provides a low cost, reliable alternative to a traditional desktop PC in the administration of EEG experimental tasks. Significant and reliable MMN and P3 activity, typical event-related potentials (ERPs) associated with an auditory oddball paradigm, were measured while experiments were administered using the Raspberry Pi 2. While latency differences in ERP triggering were observed between systems, these differences reduced power only marginally, likely due to the reduced processing power of the Raspberry Pi 2. An auditory oddball task administered using the Raspberry Pi 2 produced similar ERPs to those derived from a desktop PC in a laboratory setting. Despite temporal differences and slight increases in trials needed for similar statistical power, the Raspberry Pi 2 can be used to design and present auditory experiments comparable to a PC. Our results show that the Raspberry Pi 2 is a low cost alternative to the desktop PC when administering EEG experiments and, due to its small size and low power consumption, will enable mobile EEG experiments unconstrained by a traditional laboratory setting. Copyright © 2016 Elsevier B.V. All rights reserved.
Nambu, Isao; Ebisawa, Masashi; Kogure, Masumi; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro
2013-01-01
The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As a cue, auditory BCIs can deal with many of the characteristics of stimuli such as tone, pitch, and voices. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction, through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify whether the subject attended the direction of a presented stimulus from EEG signals. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables us to provide a high-performance and loudspeaker-less portable BCI system. PMID:23437338
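The abstract reports SVM classification of attended direction from single-trial and trial-averaged EEG. The sketch below uses scikit-learn's linear SVC on simulated feature vectors (the kernel, feature layout, and numbers are assumptions); its only point is to show how averaging trials of the same class before classification raises accuracy, as in the 10-trial-averaging result:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)

# Simulated P300-window features for attended (1) vs unattended (0) directions.
n_trials, n_features = 300, 40
X = rng.normal(0, 1, (n_trials, n_features))
y = rng.integers(0, 2, n_trials)
X[y == 1, :10] += 0.5          # attended trials carry a small P300-like boost

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
print("single-trial accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))

def average_trials(X, y, k, rng):
    """Average non-overlapping groups of k same-class trials to boost SNR."""
    Xa, ya = [], []
    for label in np.unique(y):
        trials = rng.permutation(X[y == label])
        for i in range(0, len(trials) - k + 1, k):
            Xa.append(trials[i:i + k].mean(axis=0))
            ya.append(label)
    return np.array(Xa), np.array(ya)

Xa, ya = average_trials(X, y, k=10, rng=rng)
print("10-trial-average accuracy:", cross_val_score(clf, Xa, ya, cv=5).mean().round(2))
```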
How stimulation speed affects Event-Related Potentials and BCI performance.
Höhne, Johannes; Tangermann, Michael
2012-01-01
In most paradigms for Brain-Computer Interfaces (BCIs) that are based on Event-Related Potentials (ERPs), stimuli are presented with a pre-defined and constant speed. In order to boost BCI performance by optimizing the parameters of stimulation, this offline study investigates the impact of the stimulus onset asynchrony (SOA) on ERPs and the resulting classification accuracy. The SOA is defined as the time between the onsets of two consecutive stimuli, which represents a measure for stimulation speed. A simple auditory oddball paradigm was tested in 14 SOA conditions with a SOA between 50 ms and 1000 ms. Based on an offline ERP analysis, the BCI performance (quantified by the Information Transfer Rate, ITR in bits/min) was simulated. A great variability in the simulated BCI performance was observed within subjects (N=11). This indicates a potential increase in BCI performance (≥ 1.6 bits/min) for ERP-based paradigms, if the stimulation speed is specified for each user individually.
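The ITR figure of merit used here is usually the Wolpaw formula, which converts accuracy, number of classes, and time per selection into bits/min. A sketch is below; the mapping from SOA to selection time and the accuracy values are illustrative assumptions, not numbers from the study:

```python
import numpy as np

def itr_bits_per_min(p_correct, n_classes, seconds_per_selection):
    """Wolpaw information transfer rate in bits/min."""
    p = np.clip(p_correct, 1e-9, 1 - 1e-9)
    bits = (np.log2(n_classes)
            + p * np.log2(p)
            + (1 - p) * np.log2((1 - p) / (n_classes - 1)))
    return max(bits, 0.0) * 60.0 / seconds_per_selection

# Illustrative trade-off: shorter SOA means faster selections but usually lower
# single-trial accuracy (assumed 10 stimulus presentations per selection).
for soa, acc in [(0.1, 0.65), (0.25, 0.75), (1.0, 0.85)]:
    t_selection = soa * 10
    print(f"SOA {soa * 1000:>4.0f} ms, acc {acc:.2f} -> "
          f"{itr_bits_per_min(acc, 2, t_selection):.1f} bits/min")
```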
Colin, C; Radeau, M; Soquet, A; Demolin, D; Colin, F; Deltenre, P
2002-04-01
The McGurk-MacDonald illusory percept is obtained by dubbing an incongruent articulatory movement on an auditory phoneme. This type of audiovisual speech perception contributes to the assessment of theories of speech perception. The mismatch negativity (MMN) reflects the detection of a deviant stimulus within auditory short-term memory and, besides an acoustic component, possesses a phonetic component under certain conditions. The present study assessed the existence of an MMN evoked by McGurk-MacDonald percepts elicited by audiovisual stimuli with constant auditory components. Cortical evoked potentials were recorded from 8 adults using the oddball paradigm in 3 experimental conditions: auditory alone, visual alone and audiovisual stimulation. The occurrence of illusory percepts was confirmed in an additional psychophysical condition. The auditory deviant syllables and the audiovisual incongruent syllables elicited a significant MMN at Fz. In the visual condition, no negativity was observed at either Fz or Oz. An MMN can be evoked by visual articulatory deviants, provided they are presented in a suitable auditory context leading to a phonetically significant interaction. The recording of an MMN elicited by illusory McGurk percepts suggests that audiovisual integration mechanisms in speech take place rather early during the perceptual processes.
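As background on how an MMN like the one reported at Fz is quantified: it is measured on the deviant-minus-standard difference wave. The simulated waveforms, latencies, and measurement window below are assumptions for illustration, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(6)
sfreq = 500.0
times = np.arange(-0.1, 0.5, 1 / sfreq)

def simulated_erp(mmn_amp, n_trials):
    """Average over simulated single trials at Fz; deviants carry an extra
    negativity around 170 ms (purely illustrative waveforms)."""
    base = 2e-6 * np.exp(-((times - 0.1) ** 2) / (2 * 0.02 ** 2))
    mmn = mmn_amp * np.exp(-((times - 0.17) ** 2) / (2 * 0.03 ** 2))
    trials = base + mmn + rng.normal(0, 2e-6, (n_trials, times.size))
    return trials.mean(axis=0)

standard_erp = simulated_erp(0.0, 800)
deviant_erp = simulated_erp(-1.5e-6, 120)

# MMN: mean amplitude of the deviant-minus-standard difference wave in a
# window around its peak.
diff = deviant_erp - standard_erp
win = (times >= 0.13) & (times <= 0.21)
print(f"MMN mean amplitude at Fz: {diff[win].mean() * 1e6:.2f} uV")
```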
Synchronization with competing visual and auditory rhythms: bouncing ball meets metronome.
Hove, Michael J; Iversen, John R; Zhang, Allen; Repp, Bruno H
2013-07-01
Synchronization of finger taps with periodically flashing visual stimuli is known to be much more variable than synchronization with an auditory metronome. When one of these rhythms is the synchronization target and the other serves as a distracter at various temporal offsets, strong auditory dominance is observed. However, it has recently been shown that visuomotor synchronization improves substantially with moving stimuli such as a continuously bouncing ball. The present study pitted a bouncing ball against an auditory metronome in a target-distracter synchronization paradigm, with the participants being auditory experts (musicians) and visual experts (video gamers and ball players). Synchronization was still less variable with auditory than with visual target stimuli in both groups. For musicians, auditory stimuli tended to be more distracting than visual stimuli, whereas the opposite was the case for the visual experts. Overall, there was no main effect of distracter modality. Thus, a distracting spatiotemporal visual rhythm can be as effective as a distracting auditory rhythm in its capacity to perturb synchronous movement, but its effectiveness also depends on modality-specific expertise.
Neural correlates of emotional intelligence in a visual emotional oddball task: an ERP study.
Raz, Sivan; Dan, Orrie; Zysberg, Leehu
2014-11-01
The present study was aimed at identifying potential behavioral and neural correlates of Emotional Intelligence (EI) by using scalp-recorded Event-Related Potentials (ERPs). EI levels were defined according to both self-report questionnaire and a performance-based ability test. We identified ERP correlates of emotional processing by using a visual-emotional oddball paradigm, in which subjects were confronted with one frequent standard stimulus (a neutral face) and two deviant stimuli (a happy and an angry face). The effects of these faces were then compared across groups with low and high EI levels. The ERP results indicate that participants with high EI exhibited significantly greater mean amplitudes of the P1, P2, N2, and P3 ERP components in response to emotional and neutral faces, at frontal, posterior-parietal and occipital scalp locations. P1, P2 and N2 are considered indexes of attention-related processes and have been associated with early attention to emotional stimuli. The later P3 component has been thought to reflect more elaborative, top-down, emotional information processing including emotional evaluation and memory encoding and formation. These results may suggest greater recruitment of resources to process all emotional and non-emotional faces at early and late processing stages among individuals with higher EI. The present study underscores the usefulness of ERP methodology as a sensitive measure for the study of emotional stimuli processing in the research field of EI. Copyright © 2014 Elsevier Inc. All rights reserved.
Task difficulty modulates brain activation in the emotional oddball task.
Siciliano, Rachel E; Madden, David J; Tallman, Catherine W; Boylan, Maria A; Kirste, Imke; Monge, Zachary A; Packard, Lauren E; Potter, Guy G; Wang, Lihong
2017-06-01
Previous functional magnetic resonance imaging (fMRI) studies have reported that task-irrelevant, emotionally salient events can disrupt target discrimination, particularly when attentional demands are low, while others demonstrate alterations in the distracting effects of emotion in behavior and neural activation in the context of attention-demanding tasks. We used fMRI, in conjunction with an emotional oddball task, at different levels of target discrimination difficulty, to investigate the effects of emotional distractors on the detection of subsequent targets. In addition, we distinguished different behavioral components of target detection representing decisional, nondecisional, and response criterion processes. Results indicated that increasing target discrimination difficulty led to increased time required for both the decisional and nondecisional components of the detection response, as well as to increased target-related neural activation in frontoparietal regions. The emotional distractors were associated with activation in ventral occipital and frontal regions and dorsal frontal regions, but this activation was attenuated with increased difficulty. Emotional distraction did not alter the behavioral measures of target detection, but did lead to increased target-related frontoparietal activation for targets following emotional images as compared to those following neutral images. This latter effect varied with target discrimination difficulty, with an increased influence of the emotional distractors on subsequent target-related frontoparietal activation in the more difficult discrimination condition. This influence of emotional distraction was in addition associated specifically with the decisional component of target detection. These findings indicate that emotion-cognition interactions, in the emotional oddball task, vary depending on the difficulty of the target discrimination and the associated limitations on processing resources. Copyright © 2017
What is extinguished in auditory extinction?
Deouell, L Y; Soroker, N
2000-09-11
Extinction is a frequent sequel of brain damage, whereupon patients disregard (extinguish) a contralesional stimulus, and report only the more ipsilesional stimulus, of a pair of stimuli presented simultaneously. We investigated the possibility of a dissociation between the detection and the identification of extinguished phonemes. Fourteen right hemisphere damaged patients with severe auditory extinction were examined using a paradigm that separated the localization of stimuli and the identification of their phonetic content. Patients reported the identity of left-sided phonemes, while extinguishing them at the same time, in the traditional sense of the term. This dissociation suggests that auditory extinction is more about acknowledging the existence of a stimulus in the contralesional hemispace than about the actual processing of the stimulus.
Frequency-specific adaptation and its underlying circuit model in the auditory midbrain
Shen, Li; Zhao, Lingyun; Hong, Bo
2015-01-01
Receptive fields of sensory neurons are considered to be dynamic and depend on the stimulus history. In the auditory system, evidence of dynamic frequency-receptive fields has been found following stimulus-specific adaptation (SSA). However, the underlying mechanism and circuitry of SSA have not been fully elucidated. Here, we studied how frequency-receptive fields of neurons in rat inferior colliculus (IC) changed when exposed to a biased tone sequence. A pure tone of one specific frequency (the adaptor) was presented markedly more often than the others. The adapted tuning was compared with the original tuning measured with an unbiased sequence. We found inhomogeneous changes in frequency tuning in IC, exhibiting a center-surround pattern with respect to the neuron's best frequency. Central adaptors elicited strong suppressive and repulsive changes while flank adaptors induced facilitative and attractive changes. Moreover, we proposed a two-layer model of the underlying network, which not only reproduced the adaptive changes in the receptive fields but also predicted novelty responses to oddball sequences. These results suggest that frequency-specific adaptation in auditory midbrain can be accounted for by an adapted frequency channel and its lateral spreading of adaptation, which sheds light on the organization of the underlying circuitry. PMID:26483641
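A toy version of a two-layer model with laterally spreading adaptation can illustrate the kind of mechanism proposed here; all parameters below are invented for illustration and this is not the published model:

```python
import numpy as np

# Layer 1: a bank of frequency channels whose gain adapts with use, with the
# adaptation spreading to neighbouring channels. Layer 2: an IC-like neuron
# reading the channels out through a Gaussian tuning curve.
n_channels = 41
channels = np.arange(n_channels)
best = n_channels // 2                                        # layer-2 best frequency
tuning = np.exp(-((channels - best) ** 2) / (2 * 4.0 ** 2))   # layer-2 readout weights

def kernel(center, spread):
    return np.exp(-((channels - center) ** 2) / (2 * spread ** 2))

def probe_tuning(adaptation):
    """Layer-2 response to a probe tone at every channel, given channel adaptation."""
    gains = 1.0 - adaptation
    return np.array([tuning @ (gains * kernel(c, 1.0)) for c in channels])

# Adapt the channel bank with a biased sequence dominated by one adaptor tone.
rng = np.random.default_rng(7)
adaptor = best + 2
seq = np.where(rng.random(400) < 0.8, adaptor, rng.integers(0, n_channels, 400))
adaptation = np.zeros(n_channels)
for tone in seq:
    adaptation += 0.02 * kernel(tone, 2.0)   # adaptation spreads laterally
    adaptation *= 0.98                       # and decays between tones
adaptation = np.clip(adaptation, 0.0, 0.95)

before, after = probe_tuning(np.zeros(n_channels)), probe_tuning(adaptation)
print("suppression at adaptor frequency:", round(1 - after[adaptor] / before[adaptor], 2))
print("suppression at a distant frequency:", round(1 - after[adaptor - 10] / before[adaptor - 10], 2))
```

In this toy, suppression is strongest at the adaptor frequency and falls off with spectral distance, which is the "adapted channel plus lateral spread" intuition the abstract describes.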
Biagianti, Bruno; Roach, Brian J.; Fisher, Melissa; Loewy, Rachel; Ford, Judith M.; Vinogradov, Sophia; Mathalon, Daniel H.
2017-01-01
Background Individuals with schizophrenia have heterogeneous impairments of the auditory processing system that likely mediate differences in the cognitive gains induced by auditory training (AT). Mismatch negativity (MMN) is an event-related potential component reflecting auditory echoic memory, and its amplitude reduction in schizophrenia has been linked to cognitive deficits. Therefore, MMN may predict response to AT and identify individuals with schizophrenia who have the most to gain from AT. Furthermore, to the extent that AT strengthens auditory deviance processing, MMN may also serve as a readout of the underlying changes in the auditory system induced by AT. Methods Fifty-six individuals early in the course of a schizophrenia-spectrum illness (ESZ) were randomly assigned to 40 h of AT or Computer Games (CG). Cognitive assessments and EEG recordings during a multi-deviant MMN paradigm were obtained before and after AT and CG. Changes in these measures were compared between the treatment groups. Baseline and trait-like MMN data were evaluated as predictors of treatment response. MMN data collected with the same paradigm from a sample of Healthy Controls (HC; n = 105) were compared to baseline MMN data from the ESZ group. Results Compared to HC, ESZ individuals showed significant MMN reductions at baseline (p = .003). Reduced Double-Deviant MMN was associated with greater general cognitive impairment in ESZ individuals (p = .020). Neither ESZ intervention group showed significant change in MMN. We found high correlations in all MMN deviant types (rs = .59–.68, all ps < .001) between baseline and post-intervention amplitudes irrespective of treatment group, suggesting trait-like stability of the MMN signal. Greater deficits in trait-like Double-Deviant MMN predicted greater cognitive improvements in the AT group (p = .02), but not in the CG group. Conclusions In this sample of ESZ individuals, AT had no effect on auditory deviance processing as assessed by
Pérez-Valenzuela, Catherine; Gárate-Pérez, Macarena F.; Sotomayor-Zárate, Ramón; Delano, Paul H.; Dagnino-Subiabre, Alexies
2016-01-01
Chronic stress impairs auditory attention in rats and monoamines regulate neurotransmission in the primary auditory cortex (A1), a brain area that modulates auditory attention. In this context, we hypothesized that norepinephrine (NE) levels in A1 correlate with the auditory attention performance of chronically stressed rats. The first objective of this research was to evaluate whether chronic stress affects monoamine levels in A1. Male Sprague–Dawley rats were subjected to chronic stress (restraint stress) and monoamine levels were measured by high-performance liquid chromatography (HPLC) with electrochemical detection. Chronically stressed rats had lower levels of NE in A1 than did controls, while chronic stress did not affect serotonin (5-HT) and dopamine (DA) levels. The second aim was to determine the effects of reboxetine (a selective inhibitor of NE reuptake) on auditory attention and NE levels in A1. Rats were trained to discriminate between two tones of different frequencies in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance of ≥80% correct trials in the 2-ACT were randomly assigned to control and stress experimental groups. To analyze the effects of chronic stress on the auditory task, trained rats of both groups were subjected to 50 2-ACT trials 1 day before and 1 day after the chronic stress period. A difference score (DS) was determined by subtracting the number of correct trials after the chronic stress protocol from those before. An unexpected result was that vehicle-treated control rats and vehicle-treated chronically stressed rats had similar performances in the attentional task, suggesting that repeated injections with vehicle were stressful for control animals and deteriorated their auditory attention. In this regard, both auditory attention and NE levels in A1 were higher in chronically stressed rats treated with reboxetine than in vehicle
Multistability in auditory stream segregation: a predictive coding view
Winkler, István; Denham, Susan; Mill, Robert; Bőhm, Tamás M.; Bendixen, Alexandra
2012-01-01
Auditory stream segregation involves linking temporally separate acoustic events into one or more coherent sequences. For any non-trivial sequence of sounds, many alternative descriptions can be formed, only one or very few of which emerge in awareness at any time. Evidence from studies showing bi-/multistability in auditory streaming suggest that some, perhaps many of the alternative descriptions are represented in the brain in parallel and that they continuously vie for conscious perception. Here, based on a predictive coding view, we consider the nature of these sound representations and how they compete with each other. Predictive processing helps to maintain perceptual stability by signalling the continuation of previously established patterns as well as the emergence of new sound sources. It also provides a measure of how well each of the competing representations describes the current acoustic scene. This account of auditory stream segregation has been tested on perceptual data obtained in the auditory streaming paradigm. PMID:22371621
Unattended processing of hierarchical pitch variations in spoken sentences.
Li, Xiaoqing; Chen, Yiya
2018-05-16
An auditory oddball paradigm was employed to examine the unattended processing of pitch variation which functions to signal hierarchically different levels of meaning contrasts. Four oddball conditions were constructed by varying the pitch contour of critical words embedded in a Mandarin Chinese sentence. Two conditions included lexical-level word meaning contrasts (i.e. TONE condition) and the other two sentence-level information-status contrasts (i.e. ACCENTUATION condition). Both included stimuli with early vs. late acoustic cue divergence points. Results showed that the two early-cue conditions elicited earlier Mismatch Negativities, regardless of their functional hierarchy. The deviant stimuli induced theta-band power increases in the TONE condition but beta-band power decreases in the ACCENTUATION condition, regardless of the timing of their acoustic cues. These results suggest that, in an inattentive state, the human brain can functionally disentangle hierarchically different levels of pitch variation, and the brain responses to these pitch variations are time-locked to the presence of the acoustic cues. Copyright © 2018. Published by Elsevier Inc.
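Time-frequency power of the kind contrasted here (theta increases vs. beta decreases) is commonly computed by convolving the signal with complex Morlet wavelets. A self-contained sketch with a simulated epoch; the wavelet settings and the toy theta burst are assumptions, not the study's analysis parameters:

```python
import numpy as np

def morlet_power(x, sfreq, freqs, n_cycles=5.0):
    """Single-trial time-frequency power by convolving with complex Morlet
    wavelets (5-cycle wavelets are a common default)."""
    power = np.empty((len(freqs), x.size))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / sfreq)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t ** 2 / (2 * sigma_t ** 2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))      # unit energy
        power[i] = np.abs(np.convolve(x, wavelet, mode="same")) ** 2
    return power

sfreq = 500.0
t = np.arange(0, 2.0, 1 / sfreq)
rng = np.random.default_rng(8)
# Illustrative deviant-epoch trace: a 5 Hz (theta) burst in the first 500 ms.
eeg = 10e-6 * np.sin(2 * np.pi * 5 * t) * (t < 0.5) + rng.normal(0, 2e-6, t.size)

theta = morlet_power(eeg, sfreq, np.arange(4.0, 8.0)).mean(axis=0)
beta = morlet_power(eeg, sfreq, np.arange(13.0, 30.0, 2.0)).mean(axis=0)
print(f"theta power, burst vs rest: {theta[t < 0.5].mean():.2e} vs {theta[t >= 0.5].mean():.2e}")
print(f"beta power, burst vs rest:  {beta[t < 0.5].mean():.2e} vs {beta[t >= 0.5].mean():.2e}")
```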
Reliance on auditory feedback in children with childhood apraxia of speech.
Iuzzini-Seigel, Jenya; Hogan, Tiffany P; Guarino, Anthony J; Green, Jordan R
2015-01-01
Children with childhood apraxia of speech (CAS) have been hypothesized to continuously monitor their speech through auditory feedback to minimize speech errors. We used an auditory masking paradigm to determine the effect of attenuating auditory feedback on speech in 30 children: 9 with CAS, 10 with speech delay, and 11 with typical development. The masking only affected the speech of children with CAS as measured by voice onset time and vowel space area. These findings provide preliminary support for greater reliance on auditory feedback among children with CAS. Readers of this article should be able to (i) describe the motivation for investigating the role of auditory feedback in children with CAS; (ii) report the effects of feedback attenuation on speech production in children with CAS, speech delay, and typical development, and (iii) understand how the current findings may support a feedforward program deficit in children with CAS. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
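Vowel space area, one of the two outcome measures here, is typically the area of the polygon spanned by corner vowels in F1–F2 space. A small sketch using the shoelace formula; the formant values are rough illustrative numbers, not study data:

```python
def vowel_space_area(formants):
    """Area (Hz^2) of the polygon spanned by vowel means in F1-F2 space,
    computed with the shoelace formula. `formants` lists (F1, F2) corner
    vowels in order around the polygon."""
    area = 0.0
    n = len(formants)
    for i in range(n):
        x1, y1 = formants[i]
        x2, y2 = formants[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Corner vowels /i/, /ae/, /a/, /u/ with rough adult-like F1/F2 values (Hz).
corners = [(300, 2300), (700, 1800), (750, 1100), (350, 900)]
print(f"Vowel space area: {vowel_space_area(corners):.0f} Hz^2")
```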
Event-related potentials and secondary task performance during simulated driving.
Wester, A E; Böcker, K B E; Volkerts, E R; Verster, J C; Kenemans, J L
2008-01-01
Inattention and distraction account for a substantial number of traffic accidents. Therefore, we examined the impact of secondary task performance (an auditory oddball task) on a primary driving task (lane keeping). Twenty healthy participants performed two 20-min tests in the Divided Attention Steering Simulator (DASS). The visual secondary task of the DASS was replaced by an auditory oddball task to allow recording of brain activity. The driving task and the secondary (distracting) oddball task were presented in isolation and simultaneously, to assess their mutual interference. In addition to performance measures (lane keeping in the primary driving task and reaction speed in the secondary oddball task), brain activity, i.e. event-related potentials (ERPs), was recorded. Performance parameters on the driving test and the secondary oddball task did not differ between performance in isolation and simultaneous performance. However, when both tasks were performed simultaneously, reaction time variability increased in the secondary oddball task. Analysis of brain activity indicated that ERP amplitude (P3a amplitude) related to the secondary task, was significantly reduced when the task was performed simultaneously with the driving test. This study shows that when performing a simple secondary task during driving, performance of the driving task and this secondary task are both unaffected. However, analysis of brain activity shows reduced cortical processing of irrelevant, potentially distracting stimuli from the secondary task during driving.
Multivariate sensitivity to voice during auditory categorization.
Lee, Yune Sang; Peelle, Jonathan E; Kraemer, David; Lloyd, Samuel; Granger, Richard
2015-09-01
Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex. Copyright © 2015 the American Physiological Society.
Incorporating Auditory Models in Speech/Audio Applications
NASA Astrophysics Data System (ADS)
Krishnamoorthi, Harish
2011-12-01
its equivalent time/frequency representation. This avoids the repeated application of auditory model stages to test different candidate time/frequency vectors in minimizing perceptual objective functions. In this dissertation, a constrained mapping scheme is developed by linearizing certain auditory model stages that ensures obtaining a time/frequency mapping corresponding to the estimated auditory representation. This paradigm was successfully incorporated in a perceptual speech enhancement algorithm and a sinusoidal component selection task.
Gonzálvez, Gloria G; Trimmel, Karin; Haag, Anja; van Graan, Louis A; Koepp, Matthias J; Thompson, Pamela J; Duncan, John S
2016-12-01
Verbal fluency functional MRI (fMRI) is used for predicting language deficits after anterior temporal lobe resection (ATLR) for temporal lobe epilepsy (TLE), but primarily engages frontal lobe areas. In this observational study we investigated fMRI paradigms using visual and auditory stimuli, which predominately involve language areas resected during ATLR. Twenty-three controls and 33 patients (20 left (LTLE), 13 right (RTLE)) were assessed using three fMRI paradigms: verbal fluency, auditory naming with a contrast of auditory reversed speech; picture naming with a contrast of scrambled pictures and blurred faces. Group analysis showed bilateral temporal activations for auditory naming and picture naming. Correcting for auditory and visual input (by subtracting activations resulting from auditory reversed speech and blurred pictures/scrambled faces respectively) resulted in left-lateralised activations for patients and controls, which was more pronounced for LTLE compared to RTLE patients. Individual subject activations at a threshold of T>2.5, extent >10 voxels, showed that verbal fluency activated predominantly the left inferior frontal gyrus (IFG) in 90% of LTLE, 92% of RTLE, and 65% of controls, compared to right IFG activations in only 15% of LTLE and RTLE and 26% of controls. Middle temporal (MTG) or superior temporal gyrus (STG) activations were seen on the left in 30% of LTLE, 23% of RTLE, and 52% of controls, and on the right in 15% of LTLE, 15% of RTLE, and 35% of controls. Auditory naming activated temporal areas more frequently than did verbal fluency (LTLE: 93%/73%; RTLE: 92%/58%; controls: 82%/70% (left/right)). Controlling for auditory input resulted in predominantly left-sided temporal activations. Picture naming resulted in temporal lobe activations less frequently than did auditory naming (LTLE 65%/55%; RTLE 53%/46%; controls 52%/35% (left/right)). Controlling for visual input had left-lateralising effects. Auditory and picture naming activated
Statistical learning of multisensory regularities is enhanced in musicians: An MEG study.
Paraskevopoulos, Evangelos; Chalas, Nikolas; Kartsidis, Panagiotis; Wollbrink, Andreas; Bamidis, Panagiotis
2018-07-15
The present study used magnetoencephalography (MEG) to identify the neural correlates of audiovisual statistical learning, while disentangling the differential contributions of uni- and multi-modal statistical mismatch responses in humans. The applied paradigm was based on a combination of a statistical learning paradigm and a multisensory oddball one, combining an audiovisual, an auditory and a visual stimulation stream, along with the corresponding deviances. Plasticity effects due to musical expertise were investigated by comparing the behavioral and MEG responses of musicians to non-musicians. The behavioral results indicated that the learning was successful for both musicians and non-musicians. The unimodal MEG responses are consistent with previous studies, revealing the contribution of Heschl's gyrus for the identification of auditory statistical mismatches and the contribution of medial temporal and visual association areas for the visual modality. The cortical network underlying audiovisual statistical learning was found to be partly common and partly distinct from the corresponding unimodal networks, comprising right temporal and left inferior frontal sources. Musicians showed enhanced activation in superior temporal and superior frontal gyrus. Connectivity and information processing flow amongst the sources comprising the cortical network of audiovisual statistical learning, as estimated by transfer entropy, was reorganized in musicians, indicating enhanced top-down processing. This neuroplastic effect showed a cross-modal stability between the auditory and audiovisual modalities. Copyright © 2018 Elsevier Inc. All rights reserved.
Auditory rhythmic cueing in movement rehabilitation: findings and possible mechanisms
Schaefer, Rebecca S.
2014-01-01
Moving to music is intuitive and spontaneous, and music is widely used to support movement, most commonly during exercise. Auditory cues are increasingly also used in the rehabilitation of disordered movement, by aligning actions to sounds such as a metronome or music. Here, the effect of rhythmic auditory cueing on movement is discussed and representative findings of cued movement rehabilitation are considered for several movement disorders, specifically post-stroke motor impairment, Parkinson's disease and Huntington's disease. There are multiple explanations for the efficacy of cued movement practice. Potentially relevant, non-mutually exclusive mechanisms include the acceleration of learning; qualitatively different motor learning owing to an auditory context; effects of increased temporal skills through rhythmic practices and motivational aspects of musical rhythm. Further considerations of rehabilitation paradigm efficacy focus on specific movement disorders, intervention methods and complexity of the auditory cues. Although clinical interventions using rhythmic auditory cueing do not show consistently positive results, it is argued that internal mechanisms of temporal prediction and tracking are crucial, and further research may inform rehabilitation practice to increase intervention efficacy. PMID:25385780
Classifying the auditory P300 using mobile EEG recordings without calibration phase.
Zink, R; Hunyádi, B; Van Huffel, S; De Vos, M
2015-08-01
One of the major drawbacks of mobile EEG brain-computer interfaces (BCIs) is the need for subject-specific training data to train a classifier. By removing the need for supervised classification and a calibration phase, new users could start immediate interaction with a BCI. We propose a solution that exploits the structure of three-class auditory oddball data by means of canonical polyadic decomposition (CPD), without the need for subject-specific information. We achieve this by adding average event-related potential (ERP) templates to the CPD model. This constitutes a novel similarity measure between single-trial pairs and known templates, which results in a fast and interpretable classifier. The approach achieves accuracy similar to that of the supervised and cross-validated stepwise LDA approach, but without the need for subject-dependent data. Therefore, the described CPD method has a significant practical advantage over the traditional and widely used approach.
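The key ingredient described above is a similarity measure between single trials and known ERP templates. The sketch below illustrates only that template-matching idea in plain NumPy, classifying each trial by its correlation with class-average templates; the full CPD model of the paper is not reproduced here, and all data shapes are illustrative.

```python
import numpy as np

def template_classifier(train_epochs, train_labels, test_epochs):
    """Classify single trials by correlating them with class-average ERP templates.

    epochs: array (n_trials, n_channels, n_samples); labels: 1-D array of class ids.
    Sketches only the template-similarity idea from the abstract, not the CPD model.
    """
    classes = np.unique(train_labels)
    # Class-average templates, flattened to vectors
    templates = {c: train_epochs[train_labels == c].mean(axis=0).ravel() for c in classes}
    preds = []
    for trial in test_epochs:
        v = trial.ravel()
        corrs = {c: np.corrcoef(v, t)[0, 1] for c, t in templates.items()}
        preds.append(max(corrs, key=corrs.get))
    return np.array(preds)

# Toy usage with random data (shapes only; real data would come from an oddball recording)
rng = np.random.default_rng(0)
X_tr, y_tr = rng.standard_normal((60, 8, 100)), rng.integers(0, 3, 60)
X_te = rng.standard_normal((10, 8, 100))
print(template_classifier(X_tr, y_tr, X_te))
```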
Decomposing delta, theta, and alpha time–frequency ERP activity from a visual oddball task using PCA
Bernat, Edward M.; Malone, Stephen M.; Williams, William J.; Patrick, Christopher J.; Iacono, William G.
2008-01-01
Objective Time–frequency (TF) analysis has become an important tool for assessing electrical and magnetic brain activity from event-related paradigms. In electrical potential data, theta and delta activities have been shown to underlie P300 activity, and alpha has been shown to be inhibited during P300 activity. Measures of delta, theta, and alpha activity are commonly taken from TF surfaces. However, methods for extracting relevant activity do not commonly go beyond taking means of windows on the surface, analogous to measuring activity within a defined P300 window in time-only signal representations. The current objective was to use a data driven method to derive relevant TF components from event-related potential data from a large number of participants in an oddball paradigm. Methods A recently developed PCA approach was employed to extract TF components [Bernat, E. M., Williams, W. J., and Gehring, W. J. (2005). Decomposing ERP time-frequency energy using PCA. Clin Neurophysiol, 116(6), 1314–1334] from an ERP dataset of 2068 17 year olds (979 males). TF activity was taken from both individual trials and condition averages. Activity including frequencies ranging from 0 to 14 Hz and time ranging from stimulus onset to 1312.5 ms were decomposed. Results A coordinated set of time–frequency events was apparent across the decompositions. Similar TF components representing earlier theta followed by delta were extracted from both individual trials and averaged data. Alpha activity, as predicted, was apparent only when time–frequency surfaces were generated from trial level data, and was characterized by a reduction during the P300. Conclusions Theta, delta, and alpha activities were extracted with predictable time-courses. Notably, this approach was effective at characterizing data from a single-electrode. Finally, decomposition of TF data generated from individual trials and condition averages produced similar results, but with predictable differences
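As a rough illustration of decomposing time-frequency surfaces with PCA, the sketch below builds a spectrogram per single-trial epoch (keeping 0-14 Hz, as in the abstract) and applies PCA across trials. Bernat et al. use a different time-frequency transform and rotation scheme; this is a simplified stand-in on made-up data.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.decomposition import PCA

fs = 256                                   # sampling rate (Hz) -- illustrative
rng = np.random.default_rng(1)
trials = rng.standard_normal((200, fs))    # 200 single-trial epochs of 1 s at one electrode

# One time-frequency surface per trial, restricted to 0-14 Hz as in the abstract
surfaces = []
for x in trials:
    f, t, S = spectrogram(x, fs=fs, nperseg=64, noverlap=48)
    surfaces.append(S[f <= 14].ravel())
surfaces = np.array(surfaces)

# Data-driven decomposition of the stacked TF surfaces (plain PCA as a stand-in
# for the TF-PCA procedure cited in the abstract)
pca = PCA(n_components=3).fit(surfaces)
print(pca.explained_variance_ratio_)
```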
Functional neuroanatomy of auditory scene analysis in Alzheimer's disease
Golden, Hannah L.; Agustus, Jennifer L.; Goll, Johanna C.; Downey, Laura E.; Mummery, Catherine J.; Schott, Jonathan M.; Crutch, Sebastian J.; Warren, Jason D.
2015-01-01
Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known ‘cocktail party effect’ as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory ‘foreground’ and ‘background’. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology. PMID:26029629
Temporal Organization of Sound Information in Auditory Memory.
Song, Kun; Luo, Huan
2017-01-01
Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory transferring study, by combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured the memory transferring from a random white noise sound to its locally temporal reversed version on various temporal scales in seven experiments. We demonstrate a U-shape memory-transferring pattern with the minimum value around temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulating temporal scale can account for the memory-transferring results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured at discrete temporal chunks in long-term auditory memory representation.
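The "locally temporal reversed" stimuli described above can be generated by reversing a sound within successive fixed-length chunks. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

def locally_reverse(signal, fs, chunk_ms):
    """Reverse a sound within successive chunks of chunk_ms milliseconds.

    Sketch of the local temporal reversal manipulation described in the abstract;
    the sampling rate and chunk duration below are illustrative.
    """
    n = int(round(fs * chunk_ms / 1000.0))
    out = signal.copy()
    for start in range(0, len(signal), n):
        out[start:start + n] = signal[start:start + n][::-1]
    return out

fs = 16000
noise = np.random.default_rng(2).standard_normal(fs)   # 1 s of white noise
reversed_200ms = locally_reverse(noise, fs, chunk_ms=200)
```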
Evidence for Auditory-Motor Impairment in Individuals with Hyperfunctional Voice Disorders
ERIC Educational Resources Information Center
Stepp, Cara E.; Lester-Smith, Rosemary A.; Abur, Defne; Daliri, Ayoub; Noordzij, J. Pieter; Lupiani, Ashling A.
2017-01-01
Purpose: The vocal auditory-motor control of individuals with hyperfunctional voice disorders was examined using a sensorimotor adaptation paradigm. Method: Nine individuals with hyperfunctional voice disorders and 9 individuals with typical voices produced sustained vowels over 160 trials in 2 separate conditions: (a) while experiencing gradual…
Altered auditory processing and effective connectivity in 22q11.2 deletion syndrome.
Larsen, Kit Melissa; Mørup, Morten; Birknow, Michelle Rosgaard; Fischer, Elvira; Hulme, Oliver; Vangkilde, Anders; Schmock, Henriette; Baaré, William Frans Christiaan; Didriksen, Michael; Olsen, Line; Werge, Thomas; Siebner, Hartwig R; Garrido, Marta I
2018-01-30
22q11.2 deletion syndrome (22q11.2DS) is one of the most common copy number variants and confers a markedly increased risk for schizophrenia. As such, 22q11.2DS is a homogeneous genetic liability model which enables studies to delineate functional abnormalities that may precede disease onset. Mismatch negativity (MMN), a brain marker of change detection, is reduced in people with schizophrenia compared to healthy controls. Using dynamic causal modelling (DCM), previous studies showed that top-down effective connectivity linking the frontal and temporal cortex is reduced in schizophrenia relative to healthy controls in MMN tasks. In the search for early risk markers for schizophrenia, we investigated the neural basis of change detection in a group with 22q11.2DS. We recorded high-density EEG from 19 young non-psychotic 22q11.2 deletion carriers, as well as from 27 healthy non-carriers with comparable age distribution and sex ratio, while they listened to a sequence of sounds arranged in a roving oddball paradigm. Despite finding no significant reduction in the MMN responses, whole-scalp spatiotemporal analysis of responses to the tones revealed a greater fronto-temporal N1 component in the 22q11.2 deletion carriers. DCM showed reduced intrinsic connectivity within the right primary auditory cortex, as well as a reduced top-down connection from the right inferior frontal gyrus to the right superior temporal gyrus, in 22q11.2 deletion carriers, although these effects did not survive correction for multiple comparisons. We discuss these findings in terms of reduced adaptation and a generally increased sensitivity to tones in 22q11.2DS. Copyright © 2018. Published by Elsevier B.V.
Effects of training and motivation on auditory P300 brain-computer interface performance.
Baykara, E; Ruf, C A; Fioravanti, C; Käthner, I; Simon, N; Kleih, S C; Kübler, A; Halder, S
2016-01-01
Brain-computer interface (BCI) technology aims at helping end-users with severe motor paralysis to communicate with their environment without using the natural output pathways of the brain. For end-users in complete paralysis, loss of gaze control may necessitate non-visual BCI systems. The present study investigated the effect of training on performance with an auditory P300 multi-class speller paradigm. For half of the participants, spatial cues were added to the auditory stimuli to see whether performance can be further optimized. The influence of motivation, mood and workload on performance and P300 component was also examined. In five sessions, 16 healthy participants were instructed to spell several words by attending to animal sounds representing the rows and columns of a 5 × 5 letter matrix. 81% of the participants achieved an average online accuracy of ⩾ 70%. From the first to the fifth session information transfer rates increased from 3.72 bits/min to 5.63 bits/min. Motivation significantly influenced P300 amplitude and online ITR. No significant facilitative effect of spatial cues on performance was observed. Training improves performance in an auditory BCI paradigm. Motivation influences performance and P300 amplitude. The described auditory BCI system may help end-users to communicate independently of gaze control with their environment. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
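The information transfer rates reported above are conventionally computed with the Wolpaw formula, which combines the number of selectable items, the selection accuracy, and the selection rate. A sketch follows; the selection rate in the example call is an assumed placeholder, since the abstract does not report the study's timing parameters.

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Wolpaw information transfer rate in bits/min.

    n_classes: number of selectable items (e.g., 25 letters of a 5 x 5 matrix);
    accuracy: proportion of correct selections; selections_per_min: selection rate.
    Below-chance accuracy is treated as 0 bits.
    """
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        return 0.0
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

# Illustrative call only -- the selection rate is an assumed placeholder.
print(f"{wolpaw_itr(n_classes=25, accuracy=0.70, selections_per_min=2.0):.2f} bits/min")
```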
Effects of white noise on event-related potentials in somatosensory Go/No-go paradigms.
Ohbayashi, Wakana; Kakigi, Ryusuke; Nakata, Hiroki
2017-09-06
Exposure to auditory white noise has been shown to facilitate human cognitive function. This phenomenon is termed stochastic resonance, and a moderate amount of auditory noise has been suggested to benefit individuals in hypodopaminergic states. The present study investigated the effects of white noise on the N140 and P300 components of event-related potentials in somatosensory Go/No-go paradigms. A Go or No-go stimulus was presented to the second or fifth digit of the left hand, respectively, with equal probability. Participants performed somatosensory Go/No-go paradigms while hearing three different white noise levels (45, 55, and 65 dB conditions). The peak amplitudes of Go-P300 and No-go-P300 in ERP waveforms were significantly larger under the 55 dB condition than under the 45 and 65 dB conditions. White noise did not affect the peak latency of N140 or P300, or the peak amplitude of N140. Behavioral data (reaction time, SD of reaction time, and error rates) showed no effect of white noise. This is the first event-related potential study to show that exposure to auditory white noise at 55 dB enhanced the amplitude of P300 during Go/No-go paradigms, reflecting changes in the neural activation of response execution and inhibition processing.
Characterizing the roles of alpha and theta oscillations in multisensory attention.
Keller, Arielle S; Payne, Lisa; Sekuler, Robert
2017-05-01
Cortical alpha oscillations (8-13Hz) appear to play a role in suppressing distractions when just one sensory modality is being attended, but do they also contribute when attention is distributed over multiple sensory modalities? For an answer, we examined cortical oscillations in human subjects who were dividing attention between auditory and visual sequences. In Experiment 1, subjects performed an oddball task with auditory, visual, or simultaneous audiovisual sequences in separate blocks, while the electroencephalogram was recorded using high-density scalp electrodes. Alpha oscillations were present continuously over posterior regions while subjects were attending to auditory sequences. This supports the idea that the brain suppresses processing of visual input in order to advantage auditory processing. During a divided-attention audiovisual condition, an oddball (a rare, unusual stimulus) occurred in either the auditory or the visual domain, requiring that attention be divided between the two modalities. Fronto-central theta band (4-7Hz) activity was strongest in this audiovisual condition, when subjects monitored auditory and visual sequences simultaneously. Theta oscillations have been associated with both attention and with short-term memory. Experiment 2 sought to distinguish these possible roles of fronto-central theta activity during multisensory divided attention. Using a modified version of the oddball task from Experiment 1, Experiment 2 showed that differences in theta power among conditions were independent of short-term memory load. Ruling out theta's association with short-term memory, we conclude that fronto-central theta activity is likely a marker of multisensory divided attention. Copyright © 2017 Elsevier Ltd. All rights reserved.
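Alpha (8-13 Hz) and theta (4-7 Hz) activity of the kind analyzed above is commonly quantified as band-limited spectral power. A minimal sketch using Welch's method on simulated single-channel data, not the authors' exact analysis:

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def band_power(x, fs, fmin, fmax):
    """Integrated spectral power of x in the [fmin, fmax] Hz band (Welch PSD)."""
    f, psd = welch(x, fs=fs, nperseg=fs * 2)
    sel = (f >= fmin) & (f <= fmax)
    return trapezoid(psd[sel], f[sel])

fs = 250
eeg = np.random.default_rng(3).standard_normal(fs * 10)   # 10 s of a simulated channel
print("theta (4-7 Hz):", band_power(eeg, fs, 4, 7))
print("alpha (8-13 Hz):", band_power(eeg, fs, 8, 13))
```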
Evidence for pitch chroma mapping in human auditory cortex.
Briley, Paul M; Breakey, Charlotte; Krumbholz, Katrin
2013-11-01
Some areas in auditory cortex respond preferentially to sounds that elicit pitch, such as musical sounds or voiced speech. This study used human electroencephalography (EEG) with an adaptation paradigm to investigate how pitch is represented within these areas and, in particular, whether the representation reflects the physical or perceptual dimensions of pitch. Physically, pitch corresponds to a single monotonic dimension: the repetition rate of the stimulus waveform. Perceptually, however, pitch has to be described with 2 dimensions, a monotonic, "pitch height," and a cyclical, "pitch chroma," dimension, to account for the similarity of the cycle of notes (c, d, e, etc.) across different octaves. The EEG adaptation effect mirrored the cyclicality of the pitch chroma dimension, suggesting that auditory cortex contains a representation of pitch chroma. Source analysis indicated that the centroid of this pitch chroma representation lies somewhat anterior and lateral to primary auditory cortex.
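The two perceptual dimensions described above can be written down directly: pitch height as the (monotonic) logarithm of the repetition rate, and pitch chroma as the cyclical position within the octave. The sketch below uses this common textbook decomposition; it is not the specific model used in the study, and the reference frequency is an arbitrary choice.

```python
import math

def pitch_height_and_chroma(freq_hz, ref_hz=440.0):
    """Decompose a repetition rate into a monotonic 'height' and a cyclical 'chroma'.

    Height is log2 of frequency relative to ref_hz (octave number);
    chroma is the fractional position within the octave, a value in [0, 1).
    """
    octaves = math.log2(freq_hz / ref_hz)
    chroma = octaves - math.floor(octaves)
    return octaves, chroma

# Notes an octave apart share the same chroma but differ in height
for f in (220.0, 440.0, 466.2, 880.0):
    h, c = pitch_height_and_chroma(f)
    print(f"{f:7.1f} Hz -> height {h:+.2f} oct, chroma {c:.3f}")
```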
Auditory phonological priming in children and adults during word repetition
NASA Astrophysics Data System (ADS)
Cleary, Miranda; Schwartz, Richard G.
2004-05-01
Short-term auditory phonological priming effects involve changes in the speed with which words are processed by a listener as a function of recent exposure to other similar-sounding words. Activation of phonological/lexical representations appears to persist beyond the immediate offset of a word, influencing subsequent processing. Priming effects are commonly cited as demonstrating concurrent activation of word/phonological candidates during word identification. Phonological priming is controversial, the direction of effects (facilitating versus slowing) varying with the prime-target relationship. In adults, it has repeatedly been demonstrated, however, that hearing a prime word that rhymes with the following target word (ISI=50 ms) decreases the time necessary to initiate repetition of the target, relative to when the prime and target have no phonemic overlap. Activation of phonological representations in children has not typically been studied using this paradigm, auditory-word + picture-naming tasks being used instead. The present study employed an auditory phonological priming paradigm being developed for use with normal-hearing and hearing-impaired children. Initial results from normal-hearing adults replicate previous reports of faster naming times for targets following a rhyming prime word than for targets following a prime having no phonemes in common. Results from normal-hearing children will also be reported. [Work supported by NIH-NIDCD T32DC000039.]
Neural Entrainment to Rhythmically Presented Auditory, Visual, and Audio-Visual Speech in Children
Power, Alan James; Mead, Natasha; Barnes, Lisa; Goswami, Usha
2012-01-01
Auditory cortical oscillations have been proposed to play an important role in speech perception. It is suggested that the brain may take temporal “samples” of information from the speech stream at different rates, phase resetting ongoing oscillations so that they are aligned with similar frequency bands in the input (“phase locking”). Information from these frequency bands is then bound together for speech perception. To date, there are no explorations of neural phase locking and entrainment to speech input in children. However, it is clear from studies of language acquisition that infants use both visual speech information and auditory speech information in learning. In order to study neural entrainment to speech in typically developing children, we use a rhythmic entrainment paradigm (underlying 2 Hz or delta rate) based on repetition of the syllable “ba,” presented in either the auditory modality alone, the visual modality alone, or as auditory-visual speech (via a “talking head”). To ensure attention to the task, children aged 13 years were asked to press a button as fast as possible when the “ba” stimulus violated the rhythm for each stream type. Rhythmic violation depended on delaying the occurrence of a “ba” in the isochronous stream. Neural entrainment was demonstrated for all stream types, and individual differences in standardized measures of language processing were related to auditory entrainment at the theta rate. Further, there was significant modulation of the preferred phase of auditory entrainment in the theta band when visual speech cues were present, indicating cross-modal phase resetting. The rhythmic entrainment paradigm developed here offers a method for exploring individual differences in oscillatory phase locking during development. In particular, a method for assessing neural entrainment and cross-modal phase resetting would be useful for exploring developmental learning difficulties thought to involve temporal
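Entrainment of the kind measured above is often quantified as inter-trial phase coherence (ITC) at the stimulation rate (here 2 Hz): the length of the mean resultant vector of single-trial phases. A minimal sketch on simulated epochs, not the authors' analysis pipeline:

```python
import numpy as np

def inter_trial_phase_coherence(epochs, fs, freq):
    """Inter-trial phase coherence at freq Hz: length of the mean resultant phase vector.

    epochs: array (n_trials, n_samples), time-locked to the rhythmic stream;
    a value near 1 indicates consistent phase across trials (entrainment), near 0 none.
    """
    n = epochs.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - freq))            # FFT bin closest to the stimulation rate
    phases = np.angle(np.fft.rfft(epochs, axis=1)[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))

fs = 100
t = np.arange(0, 2.0, 1.0 / fs)
# 30 simulated trials with a phase-consistent 2 Hz component plus noise
rng = np.random.default_rng(5)
epochs = np.sin(2 * np.pi * 2 * t) + rng.standard_normal((30, t.size))
print(f"ITC at 2 Hz: {inter_trial_phase_coherence(epochs, fs, 2.0):.2f}")
```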
Scanlon, Joanna E M; Townsend, Kimberley A; Cormier, Danielle L; Kuziek, Jonathan W P; Mathewson, Kyle E
2017-12-14
Mobile EEG allows the investigation of brain activity in increasingly complex environments. In this study, EEG equipment was adapted for use and transportation in a backpack while cycling. Participants performed an auditory oddball task while cycling outside and while sitting in an isolated chamber inside the lab. Cycling increased EEG noise and marginally diminished alpha amplitude. However, this increased noise did not influence the ability to measure reliable event-related potentials (ERPs). The P3 was similar in topography and morphology when outside on the bike, albeit with a lower amplitude in the outside cycling condition. There was only a minor decrease in the statistical power to measure reliable ERP effects. Unexpectedly, when biking outside, a significantly decreased P2 and an increased N1 amplitude were observed in response to both standards and targets compared with sitting in the lab. This may be due to attentional processes filtering out the overlap between the tones used and similar environmental frequencies. This study established methods for mobile recording of ERP signals. Future directions include investigating auditory P2 filtering inside the laboratory. Copyright © 2017. Published by Elsevier B.V.
Farkas, Dávid; Denham, Susan L.; Bendixen, Alexandra; Tóth, Dénes; Kondo, Hirohito M.; Winkler, István
2016-01-01
Multi-stability refers to the phenomenon of perception stochastically switching between possible interpretations of an unchanging stimulus. Despite considerable variability, individuals show stable idiosyncratic patterns of switching between alternative perceptions in the auditory streaming paradigm. We explored correlates of the individual switching patterns with executive functions, personality traits, and creativity. The main dimensions on which individual switching patterns differed from each other were identified using multidimensional scaling. Individuals with high scores on the dimension explaining the largest portion of the inter-individual variance switched more often between the alternative perceptions than those with low scores. They also perceived the most unusual interpretation more often, and experienced all perceptual alternatives with a shorter delay from stimulus onset. The ego-resiliency personality trait, which reflects a tendency for adaptive flexibility and experience seeking, was significantly positively related to this dimension. Taking these results together we suggest that this dimension may reflect the individual’s tendency for exploring the auditory environment. Executive functions were significantly related to some of the variables describing global properties of the switching patterns, such as the average number of switches. Thus individual patterns of perceptual switching in the auditory streaming paradigm are related to some personality traits and executive functions. PMID:27135945
Shepard, Kathryn N; Chong, Kelly K; Liu, Robert C
2016-01-01
Tonotopic map plasticity in the adult auditory cortex (AC) is a well established and oft-cited measure of auditory associative learning in classical conditioning paradigms. However, its necessity as an enduring memory trace has been debated, especially given a recent finding that the areal expansion of core AC tuned to a newly relevant frequency range may arise only transiently to support auditory learning. This has been reinforced by an ethological paradigm showing that map expansion is not observed for ultrasonic vocalizations (USVs) or for ultrasound frequencies in postweaning dams for whom USVs emitted by pups acquire behavioral relevance. However, whether transient expansion occurs during maternal experience is not known, and could help to reveal the generality of cortical map expansion as a correlate for auditory learning. We thus mapped the auditory cortices of maternal mice at postnatal time points surrounding the peak in pup USV emission, but found no evidence of frequency map expansion for the behaviorally relevant high ultrasound range in AC. Instead, regions tuned to low frequencies outside of the ultrasound range show progressively greater suppression of activity in response to the playback of ultrasounds or pup USVs for maternally experienced animals assessed at their pups' postnatal day 9 (P9) to P10, or postweaning. This provides new evidence for a lateral-band suppression mechanism elicited by behaviorally meaningful USVs, likely enhancing their population-level signal-to-noise ratio. These results demonstrate that tonotopic map enlargement has limits as a construct for conceptualizing how experience leaves neural memory traces within sensory cortex in the context of ethological auditory learning.
The gap-startle paradigm to assess auditory temporal processing: Bridging animal and human research.
Fournier, Philippe; Hébert, Sylvie
2016-05-01
The gap-prepulse inhibition of the acoustic startle (GPIAS) paradigm is the primary test used in animal research to identify gap detection thresholds and impairment. When a silent gap is presented shortly before a loud startling stimulus, the startle reflex is inhibited and the extent of inhibition is assumed to reflect detection. Here, we applied the same paradigm in humans. One hundred and fifty-seven normal-hearing participants were tested using one of five gap durations (5, 25, 50, 100, 200 ms) in one of the following two paradigms: gap-embedded in, or gap-following, the continuous background noise. The duration-inhibition relationship was observable for both conditions but followed different patterns. In the gap-embedded paradigm, GPIAS increased significantly with gap duration up to 50 ms and then more slowly up to 200 ms (trend only). In contrast, in the gap-following paradigm, significant inhibition (different from 0) was observable only at gap durations from 50 to 200 ms. The finding that different patterns are found depending on gap position within the background noise is compatible with distinct mechanisms underlying each of the two paradigms. © 2016 Society for Psychophysiological Research.
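GPIAS is conventionally summarized as the percent reduction of the startle amplitude on gap trials relative to no-gap trials. A minimal sketch with made-up amplitude values:

```python
import numpy as np

def gpias_percent_inhibition(startle_no_gap, startle_gap):
    """Percent inhibition of the startle reflex by a preceding silent gap.

    100 * (1 - mean gap-trial amplitude / mean no-gap-trial amplitude);
    the usual GPIAS summary measure, sketched here with made-up values.
    """
    return 100.0 * (1.0 - np.mean(startle_gap) / np.mean(startle_no_gap))

no_gap = np.array([1.10, 0.95, 1.20, 1.05])    # startle amplitudes, arbitrary units
gap_50ms = np.array([0.60, 0.55, 0.70, 0.65])
print(f"GPIAS at 50 ms gap: {gpias_percent_inhibition(no_gap, gap_50ms):.1f}%")
```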
An fMRI investigation into the effect of preceding stimuli during visual oddball tasks.
Fajkus, Jiří; Mikl, Michal; Shaw, Daniel Joel; Brázdil, Milan
2015-08-15
This study investigates the modulatory effect of stimulus sequence on neural responses to novel stimuli. A group of 34 healthy volunteers underwent event-related functional magnetic resonance imaging while performing a three-stimulus visual oddball task, involving randomly presented frequent stimuli and two types of infrequent stimuli - targets and distractors. We developed a modified categorization of rare stimuli that incorporated the type of preceding rare stimulus, and analyzed the event-related functional data according to this sequence categorization; specifically, we explored hemodynamic response modulation associated with increasing rare-to-rare stimulus interval. For two consecutive targets, a modulation of brain function was evident throughout posterior midline and lateral temporal cortex, while responses to targets preceded by distractors were modulated in a widely distributed fronto-parietal system. As for distractors that follow targets, brain function was modulated throughout a set of posterior brain structures. For two successive distractors, however, no significant modulation was observed, which is consistent with previous studies and our primary hypothesis. The addition of the aforementioned technique extends the possibilities of conventional oddball task analysis, enabling researchers to explore the effects of the whole range of rare stimuli intervals. This methodology can be applied to study a wide range of associated cognitive mechanisms, such as decision making, expectancy and attention. Copyright © 2015 Elsevier B.V. All rights reserved.
Cognitive effects of rhythmic auditory stimulation in Parkinson's disease: A P300 study.
Lei, Juan; Conradi, Nadine; Abel, Cornelius; Frisch, Stefan; Brodski-Guerniero, Alla; Hildner, Marcel; Kell, Christian A; Kaiser, Jochen; Schmidt-Kassow, Maren
2018-05-16
Rhythmic auditory stimulation (RAS) may compensate for dysfunctions of the basal ganglia (BG), which are involved in the intrinsic evaluation of temporal intervals and in action initiation or continuation. In the cognitive domain, RAS containing periodically presented tones facilitates young healthy participants' attention allocation to anticipated time points, as indicated by better performance and larger P300 amplitudes to periodic compared to random stimuli. Additionally, active auditory-motor synchronization (AMS) leads to a more precise temporal encoding of stimuli via embodied timing encoding than stimulus presentation adapted to the participants' actual movements. Here we investigated the effect of RAS and AMS in Parkinson's disease (PD). 23 PD patients and 23 healthy age-matched controls underwent an auditory oddball task. We manipulated the timing (periodic/random/adaptive) and setting (pedaling/sitting still) of stimulation. While patients showed a general timing effect, i.e., larger P300 amplitudes for periodic versus random tones in both the sitting and pedaling conditions, controls showed a timing effect only for the sitting but not for the pedaling condition. However, a correlation between P300 amplitudes and motor variability in the periodic pedaling condition was obtained in control participants only. We conclude that RAS facilitates attentional processing of temporally predictable external events in PD patients as well as healthy controls, but embodied timing encoding via body movement does not affect stimulus processing due to BG impairment in patients. Moreover, even with intact embodied timing encoding, as in healthy elderly participants, the effect of AMS depends on the degree of movement synchronization performance, which was very low in the current study. Copyright © 2018 Elsevier B.V. All rights reserved.
Cacace, Anthony T; McFarland, Dennis J
2013-01-01
Tests of auditory perception, such as those used in the assessment of central auditory processing disorders ([C]APDs), represent a domain in audiological assessment where measurement of this theoretical construct is often confounded by nonauditory abilities due to methodological shortcomings. These confounds include the effects of cognitive variables such as memory and attention and suboptimal testing paradigms, including the use of verbal reproduction as a form of response selection. We argue that these factors need to be controlled more carefully and/or modified so that their impact on tests of auditory and visual perception is only minimal. To advocate for a stronger theoretical framework than currently exists and to suggest better methodological strategies to improve assessment of auditory processing disorders (APDs). Emphasis is placed on adaptive forced-choice psychophysical methods and the use of matched tasks in multiple sensory modalities to achieve these goals. Together, this approach has potential to improve the construct validity of the diagnosis, enhance and develop theory, and evolve into a preferred method of testing. Examination of methods commonly used in studies of APDs. Where possible, currently used methodology is compared to contemporary psychophysical methods that emphasize computer-controlled forced-choice paradigms. In many cases, the procedures used in studies of APD introduce confounding factors that could be minimized if computer-controlled forced-choice psychophysical methods were utilized. Ambiguities of interpretation, indeterminate diagnoses, and unwanted confounds can be avoided by minimizing memory and attentional demands on the input end and precluding the use of response-selection strategies that use complex motor processes on the output end. Advocated are the use of computer-controlled forced-choice psychophysical paradigms in combination with matched tasks in multiple sensory modalities to enhance the prospect of obtaining a
Lavigne, Katie M; Woodward, Todd S
2018-04-01
Hypercoupling of activity in speech-perception-specific brain networks has been proposed to play a role in the generation of auditory-verbal hallucinations (AVHs) in schizophrenia; however, it is unclear whether this hypercoupling extends to nonverbal auditory perception. We investigated this by comparing schizophrenia patients with and without AVHs, and healthy controls, on task-based functional magnetic resonance imaging (fMRI) data combining verbal speech perception (SP), inner verbal thought generation (VTG), and nonverbal auditory oddball detection (AO). Data from two previously published fMRI studies were simultaneously analyzed using group constrained principal component analysis for fMRI (group fMRI-CPCA), which allowed for comparison of task-related functional brain networks across groups and tasks while holding the brain networks under study constant, leading to determination of the degree to which networks are common to verbal and nonverbal perception conditions, and which show coordinated hyperactivity in hallucinations. Three functional brain networks emerged: (a) auditory-motor, (b) language processing, and (c) default-mode (DMN) networks. Combining the AO and sentence tasks allowed the auditory-motor and language networks to separately emerge, whereas they were aggregated when individual tasks were analyzed. AVH patients showed greater coordinated activity (deactivity for DMN regions) than non-AVH patients during SP in all networks, but this did not extend to VTG or AO. This suggests that the hypercoupling in AVH patients in speech-perception-related brain networks is specific to perceived speech, and does not extend to perceived nonspeech or inner verbal thought generation. © 2017 Wiley Periodicals, Inc.
Electrophysiological revelations of trial history effects in a color oddball search task.
Shin, Eunsam; Chong, Sang Chul
2016-12-01
In visual oddball search tasks, viewing a no-target scene (i.e., no-target selection trial) leads to the facilitation or delay of the search time for a target in a subsequent trial. Presumably, this selection failure leads to biasing attentional set and prioritizing stimulus features unseen in the no-target scene. We observed attention-related ERP components and tracked the course of attentional biasing as a function of trial history. Participants were instructed to identify color oddballs (i.e., targets) shown in varied trial sequences. The number of no-target scenes preceding a target scene was increased from zero to two to reinforce attentional biasing, and colors presented in two successive no-target scenes were repeated or changed to systematically bias attention to specific colors. For the no-target scenes, the presentation of a second no-target scene resulted in an early selection of, and sustained attention to, the changed colors (mirrored in the frontal selection positivity, the anterior N2, and the P3b). For the target scenes, the N2pc indicated an earlier allocation of attention to the targets with unseen or remotely seen colors. Inhibitory control of attention, shown in the anterior N2, was greatest when the target scene was followed by repeated no-target scenes with repeated colors. Finally, search times and the P3b were influenced by both color previewing and its history. The current results demonstrate that attentional biasing can occur on a trial-by-trial basis and be influenced by both feature previewing and its history. © 2016 Society for Psychophysiological Research.
Auditory-motor learning influences auditory memory for music.
Brown, Rachel M; Palmer, Caroline
2012-05-01
In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.
ERIC Educational Resources Information Center
Koegel, Robert L.; Openden, Daniel; Koegel, Lynn Kern
2004-01-01
Many children with autism display reactions to auditory stimuli that seem as if the stimuli were painful or otherwise extremely aversive. This article describes, within the contexts of three experimental designs, how procedures of systematic desensitization can be used to treat hypersensitivity to auditory stimuli in three young children with…
Jansson-Verkasalo, Eira; Eggers, Kurt; Järvenpää, Anu; Suominen, Kalervo; Van den Bergh, Bea; De Nil, Luc; Kujala, Teija
2014-09-01
Recent theoretical conceptualizations suggest that disfluencies in stuttering may arise from several factors, one of them being atypical auditory processing. The main purpose of the present study was to investigate whether speech sound encoding and central auditory discrimination are affected in children who stutter (CWS). Participants were 10 CWS and 12 typically developing children with fluent speech (TDC). Event-related potentials (ERPs) for syllables and syllable changes [consonant, vowel, vowel-duration, frequency (F0), and intensity changes], all of which are critical for speech perception and language development, were compared between CWS and TDC. There were no significant group differences in the amplitudes or latencies of the P1 or N2 responses elicited by the standard stimuli. However, the Mismatch Negativity (MMN) amplitude was significantly smaller in CWS than in TDC. For TDC, all deviants of the linguistic multifeature paradigm elicited significant MMN amplitudes, comparable with the results found earlier with the same paradigm in 6-year-old children. In contrast, only the duration change elicited a significant MMN in CWS. The results showed that central auditory speech-sound processing was typical at the level of sound encoding in CWS. In contrast, central speech-sound discrimination, as indexed by the MMN for multiple sound features (both phonetic and prosodic), was atypical in the group of CWS. Findings were linked to existing conceptualizations on stuttering etiology. The reader will be able (a) to describe recent findings on central auditory speech-sound processing in individuals who stutter, (b) to describe the measurement of auditory reception and central auditory speech-sound discrimination, and (c) to describe the findings of central auditory speech-sound discrimination, as indexed by the mismatch negativity (MMN), in children who stutter. Copyright © 2014 Elsevier Inc. All rights reserved.
Auditory stream segregation in children with Asperger syndrome
Lepistö, T.; Kuitunen, A.; Sussman, E.; Saalasti, S.; Jansson-Verkasalo, E.; Nieminen-von Wendt, T.; Kujala, T.
2009-01-01
Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related brain potentials were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically developed for studying stream segregation. Differences in the amplitudes of ERP components were found between groups only in the stream segregation conditions and not for simple feature discrimination. The results indicated that children with AS have difficulties in segregating concurrent sound streams, which ultimately may contribute to the difficulties in speech-in-noise perception. PMID:19751798
Event-related potentials (ERPs) in ecstasy (MDMA) users during a visual oddball task.
Mejias, S; Rossignol, M; Debatisse, D; Streel, E; Servais, L; Guérit, J M; Philippot, P; Campanella, S
2005-07-01
Ecstasy is the common name for a drug mainly containing a substance identified as 3,4-methylenedioxymethamphetamine (MDMA). It has become popular with participants in "raves" because it enhances energy, endurance and sexual arousal, together with the widespread belief that MDMA is a safe drug [Byard, R.W., Gilbert, J., James, R., Lokan, R.J., 1998. Amphetamine derivative fatalities in South Australia. Is "ecstasy" the culprit? Am. J. Forensic Med. Pathol. 19, 261-265]. However, it is suggested that this drug causes neurotoxicity to the serotonergic system that could lead to permanent physical and cognitive problems. In order to investigate this issue, during an ERP recording with 32 channels we used a visual oddball design in which subjects (14 MDMA abusers and 14 paired normal controls) saw frequent stimuli (neutral faces) while they had to detect, as quickly as possible, rare stimuli with a happy or fearful expression. At the behavioral level, MDMA users showed longer latencies than normal controls in detecting rare stimuli. At the neurophysiological level, the main ERP finding was that the N200 component, which is involved in attention orienting associated with the detection of stimulus novelty (e.g. [Campanella, S., Gaspard, C., Debatisse, D., Bruyer, R., Crommelinck, M., Guerit, J.M., 2002. Discrimination of emotional facial expression in a visual oddball task: an ERP study. Biol. Psychol. 59, 171-186]), showed shorter latencies for fearful rare stimuli (as compared to happy ones), but only in normal controls. The absence of this latency difference in MDMA users was interpreted as an attentional deficit due to MDMA consumption.
Is sensorimotor BCI performance influenced differently by mono, stereo, or 3-D auditory feedback?
McCreadie, Karl A; Coyle, Damien H; Prasad, Girijesh
2014-05-01
Imagination of movement can be used as a control method for a brain-computer interface (BCI), allowing communication for the physically impaired. Visual feedback within such a closed-loop system excludes those with visual problems, and hence there is a need for alternative sensory feedback pathways. In the context of replacing the visual channel with the auditory channel, this study aims to add to the limited evidence that visual feedback can be substituted by its auditory equivalent, and to assess the impact this has on BCI performance. Secondly, the study aims to determine for the first time whether the type of auditory feedback method significantly influences motor imagery performance. Auditory feedback is presented using a stepped approach of single (mono), double (stereo), and multiple (vector base amplitude panning as an audio game) loudspeaker arrangements. Visual feedback involves a ball-basket paradigm and a spaceship game. Each session consists of either auditory or visual feedback only, with runs of each type of feedback presentation method applied in each session. Results from seven subjects across five sessions of each feedback type (visual, auditory) (10 sessions in total) show that auditory feedback is a suitable substitute for the visual equivalent and that there are no statistical differences between the types of auditory feedback presented across the five sessions.
Leske, Sabine; Ruhnau, Philipp; Frey, Julia; Lithari, Chrysa; Müller, Nadia; Hartmann, Thomas; Weisz, Nathan
2015-01-01
An ever-increasing number of studies are pointing to the importance of network properties of the brain for understanding behavior such as conscious perception. However, with regards to the influence of prestimulus brain states on perception, this network perspective has rarely been taken. Our recent framework predicts that brain regions crucial for a conscious percept are coupled prior to stimulus arrival, forming pre-established pathways of information flow and influencing perceptual awareness. Using magnetoencephalography (MEG) and graph theoretical measures, we investigated auditory conscious perception in a near-threshold (NT) task and found strong support for this framework. Relevant auditory regions showed an increased prestimulus interhemispheric connectivity. The left auditory cortex was characterized by a hub-like behavior and an enhanced integration into the brain functional network prior to perceptual awareness. Right auditory regions were decoupled from non-auditory regions, presumably forming an integrated information processing unit with the left auditory cortex. In addition, we show for the first time for the auditory modality that local excitability, measured by decreased alpha power in the auditory cortex, increases prior to conscious percepts. Importantly, we were able to show that connectivity states seem to be largely independent from local excitability states in the context of a NT paradigm. PMID:26408799
Influence of sleep deprivation and auditory intensity on reaction time and response force.
Włodarczyk, Dariusz; Jaśkowski, Piotr; Nowik, Agnieszka
2002-06-01
Arousal and activation are two variables thought to underlie changes in response force. This study was undertaken to examine their roles, specifically for strong auditory stimuli and sleep deficit. Loud auditory stimuli can evoke phasic overarousal, whereas sleep deficit leads to general underarousal. Moreover, Van der Molen and Keuss (1979, 1981) showed that paradoxically long reaction times occurred with extremely strong auditory stimuli when the task was difficult, e.g., choice reaction or the Simon paradigm. It was argued that this paradoxical lengthening of reaction time is due to active disconnecting of the coupling between arousal and activation to prevent false responses. If so, we predicted that for extremely loud stimuli and for difficult tasks, the lengthening of reaction time should be associated with a reduction of response force. The effects of loudness and sleep deficit on response time and force were investigated in three different tasks: simple response, choice response, and the Simon paradigm. Consistent with our expectations, we found a detrimental effect of sleep deficit on reaction time and on response force. In contrast to Van der Molen and Keuss, we found no increase in reaction time for loud stimuli (up to 110 dB), even on the Simon task.
Earl, Brian R.; Chertoff, Mark E.
2012-01-01
Future implementation of regenerative treatments for sensorineural hearing loss may be hindered by the lack of diagnostic tools that specify the target(s) within the cochlea and auditory nerve for delivery of therapeutic agents. Recent research has indicated that the amplitude of high-level compound action potentials (CAPs) is a good predictor of overall auditory nerve survival, but does not pinpoint the location of neural damage. A location-specific estimate of nerve pathology may be possible by using a masking paradigm and high-level CAPs to map auditory nerve firing density throughout the cochlea. This initial study in gerbil utilized a high-pass masking paradigm to determine normative ranges for CAP-derived neural firing density functions using broadband chirp stimuli and low-frequency tonebursts, and to determine if cochlear outer hair cell (OHC) pathology alters the distribution of neural firing in the cochlea. Neural firing distributions for moderate-intensity (60 dB pSPL) chirps were affected by OHC pathology whereas those derived with high-level (90 dB pSPL) chirps were not. These results suggest that CAP-derived neural firing distributions for high-level chirps may provide an estimate of auditory nerve survival that is independent of OHC pathology. PMID:22280596
Adaptation in the auditory midbrain of the barn owl (Tyto alba) induced by tonal double stimulation.
Singheiser, Martin; Ferger, Roland; von Campenhausen, Mark; Wagner, Hermann
2012-02-01
During hunting, the barn owl typically listens to several successive sounds as generated, for example, by rustling mice. As auditory cells exhibit adaptive coding, the earlier stimuli may influence the detection of the later stimuli. This situation was mimicked with two double-stimulus paradigms, and adaptation was investigated in neurons of the barn owl's central nucleus of the inferior colliculus. Each double-stimulus paradigm consisted of a first or reference stimulus and a second stimulus (probe). In one paradigm (second level tuning), the probe level was varied, whereas in the other paradigm (inter-stimulus interval tuning), the stimulus interval between the first and second stimulus was changed systematically. Neurons were stimulated with monaural pure tones at the best frequency, while the response was recorded extracellularly. The responses to the probe were significantly reduced when the reference stimulus and probe had the same level and the inter-stimulus interval was short. This indicated response adaptation, which could be compensated for by an increase of the probe level of 5-7 dB over the reference level, if the latter was in the lower half of the dynamic range of a neuron's rate-level function. Recovery from adaptation could be best fitted with a double exponential showing a fast (1.25 ms) and a slow (800 ms) component. These results suggest that neurons in the auditory system show dynamic coding properties to tonal double stimulation that might be relevant for faithful upstream signal propagation. Furthermore, the overall stimulus level of the masker also seems to affect the recovery capabilities of auditory neurons. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
Shrem, Talia; Murray, Micah M; Deouell, Leon Y
2017-11-01
Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.
Auditory perception in the aging brain: the role of inhibition and facilitation in early processing.
Stothart, George; Kazanina, Nina
2016-11-01
Aging affects the interplay between peripheral and cortical auditory processing. Previous studies have demonstrated that older adults are less able to regulate afferent sensory information and are more sensitive to distracting information. Using auditory event-related potentials we investigated the role of cortical inhibition on auditory and audiovisual processing in younger and older adults. Across pure-tone, auditory speech, and audiovisual speech paradigms, older adults showed a consistent pattern of inhibitory deficits, manifested as increased P50 and/or N1 amplitudes and an absent or significantly reduced N2. Older adults were still able to use congruent visual articulatory information to aid auditory processing but appeared to require greater neural effort to resolve conflicts generated by incongruent visual information. In combination, the results provide support for the Inhibitory Deficit Hypothesis of aging. They extend previous findings into the audiovisual domain and highlight older adults' ability to benefit from congruent visual information during speech processing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Auditory Temporal Order Discrimination and Backward Recognition Masking in Adults with Dyslexia
ERIC Educational Resources Information Center
Griffiths, Yvonne M.; Hill, Nicholas I.; Bailey, Peter J.; Snowling, Margaret J.
2003-01-01
The ability of 20 adult dyslexic readers to extract frequency information from successive tone pairs was compared with that of IQ-matched controls using temporal order discrimination and auditory backward recognition masking (ABRM) tasks. In both paradigms, the interstimulus interval (ISI) between tones in a pair was either short (20 ms) or long…
Analysis of the Auditory Feedback and Phonation in Normal Voices.
Arbeiter, Mareike; Petermann, Simon; Hoppe, Ulrich; Bohr, Christopher; Doellinger, Michael; Ziethe, Anke
2018-02-01
The aim of this study was to investigate the auditory feedback mechanisms and voice quality during phonation in response to a spontaneous pitch change in the auditory feedback. Does the pitch shift reflex (PSR) change voice pitch and voice quality? Quantitative and qualitative voice characteristics were analyzed during the PSR. Twenty-eight healthy subjects underwent transnasal high-speed video endoscopy (HSV) at 8000 fps during sustained phonation of [a]. While phonating, the subjects heard their own voice in their auditory feedback pitched up by 700 cents (an interval of a fifth) for 300 milliseconds. Electroencephalography (EEG), the acoustic voice signal, electroglottography (EGG), and HSV were analyzed to statistically compare feedback mechanisms between the pitched and unpitched conditions of the phonation paradigm. Furthermore, quantitative and qualitative voice characteristics were analyzed. The PSR was successfully detected in all signals recorded by the experimental tools (EEG, EGG, acoustic voice signal, HSV). A significant increase of the perturbation measures and an increase of the values of the acoustic parameters during the PSR were observed, especially for the audio signal. The auditory feedback mechanism seems to control not only voice pitch but also aspects of voice quality.
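For readers unfamiliar with the cent scale used above, the following minimal sketch converts the 700-cent shift into a frequency ratio (2^(700/1200) ≈ 1.498, close to a perfect fifth); the 220 Hz fundamental is an arbitrary example, not a value from the study.

```python
# Sketch: converting a pitch shift in cents to a frequency ratio.
# A 700-cent upward shift corresponds to a ratio of 2**(700/1200) ~= 1.498,
# i.e., close to a just perfect fifth (3/2).
def cents_to_ratio(cents: float) -> float:
    return 2.0 ** (cents / 1200.0)

f0 = 220.0                        # hypothetical fundamental of the sustained [a]
shifted = f0 * cents_to_ratio(700)
print(round(shifted, 1))          # ~329.6 Hz
```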
fMRI paradigm designing and post-processing tools
James, Jija S; Rajesh, PG; Chandran, Anuvitha VS; Kesavadas, Chandrasekharan
2014-01-01
In this article, we first review some aspects of functional magnetic resonance imaging (fMRI) paradigm design for major cognitive functions using stimulus delivery systems such as Cogent, E-Prime, Presentation, etc., along with their technical aspects. We also review the stimulus presentation possibilities (block, event-related) for visual or auditory paradigms and their advantages in both clinical and research settings. The second part mainly focuses on various fMRI data post-processing tools such as Statistical Parametric Mapping (SPM) and Brain Voyager, and discusses the particulars of the various preprocessing steps involved (realignment, co-registration, normalization, smoothing) in these software packages, as well as the statistical principles of the General Linear Model used for the final interpretation of a functional activation result. PMID:24851001
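To make the block-design idea concrete, here is a hedged sketch of how a block regressor is typically built for a GLM analysis: a boxcar of task blocks convolved with a canonical double-gamma haemodynamic response function. The TR, block timing, and HRF parameters are illustrative, not SPM's exact defaults.

```python
# Sketch: block-design regressor = boxcar convolved with a double-gamma HRF.
# Parameter values are illustrative only.
import numpy as np
from scipy.stats import gamma

TR = 2.0                                    # repetition time in seconds
n_scans = 150
t = np.arange(0, 32, TR)                    # HRF support in seconds

def double_gamma_hrf(t):
    peak = gamma.pdf(t, 6)                  # positive response peaking ~5-6 s
    undershoot = gamma.pdf(t, 16) / 6.0     # late undershoot
    hrf = peak - undershoot
    return hrf / hrf.sum()

boxcar = np.zeros(n_scans)
for onset in range(10, n_scans, 30):        # hypothetical 20 s task blocks every 60 s
    boxcar[onset:onset + 10] = 1.0

regressor = np.convolve(boxcar, double_gamma_hrf(t))[:n_scans]
design = np.column_stack([regressor, np.ones(n_scans)])  # task + constant column
# Betas would then be estimated as: beta = np.linalg.pinv(design) @ bold_timeseries
```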
Neural signature of the conscious processing of auditory regularities
Bekinschtein, Tristan A.; Dehaene, Stanislas; Rohaut, Benjamin; Tadel, François; Cohen, Laurent; Naccache, Lionel
2009-01-01
Can conscious processing be inferred from neurophysiological measurements? Some models stipulate that the active maintenance of perceptual representations across time requires consciousness. Capitalizing on this assumption, we designed an auditory paradigm that evaluates cerebral responses to violations of temporal regularities that are either local in time or global across several seconds. Local violations led to an early response in auditory cortex, independent of attention or the presence of a concurrent visual task, whereas global violations led to a late and spatially distributed response that was only present when subjects were attentive and aware of the violations. We could detect the global effect in individual subjects using functional MRI and both scalp and intracerebral event-related potentials. Recordings from 8 noncommunicating patients with disorders of consciousness confirmed that only conscious individuals presented a global effect. Taken together these observations suggest that the presence of the global effect is a signature of conscious processing, although it can be absent in conscious subjects who are not aware of the global auditory regularities. This simple electrophysiological marker could thus serve as a useful clinical tool. PMID:19164526
Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI.
Zhou, Sijie; Allison, Brendan Z; Kübler, Andrea; Cichocki, Andrzej; Wang, Xingyu; Jin, Jing
2016-01-01
Several studies have explored brain computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills, and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged.
Saint-Amour, Dave; De Sanctis, Pierfilippo; Molholm, Sophie; Ritter, Walter; Foxe, John J
2007-02-01
Seeing a speaker's facial articulatory gestures powerfully affects speech perception, helping us overcome noisy acoustical environments. One particularly dramatic illustration of visual influences on speech perception is the "McGurk illusion", where dubbing an auditory phoneme onto video of an incongruent articulatory movement can often lead to illusory auditory percepts. This illusion is so strong that even in the absence of any real change in auditory stimulation, it activates the automatic auditory change-detection system, as indexed by the mismatch negativity (MMN) component of the auditory event-related potential (ERP). We investigated the putative left hemispheric dominance of McGurk-MMN using high-density ERPs in an oddball paradigm. Topographic mapping of the initial McGurk-MMN response showed a highly lateralized left hemisphere distribution, beginning at 175 ms. Subsequently, scalp activity was also observed over bilateral fronto-central scalp with a maximal amplitude at approximately 290 ms, suggesting later recruitment of right temporal cortices. Strong left hemisphere dominance was again observed during the last phase of the McGurk-MMN waveform (350-400 ms). Source analysis indicated bilateral sources in the temporal lobe just posterior to primary auditory cortex. While a single source in the right superior temporal gyrus (STG) accounted for the right hemisphere activity, two separate sources were required, one in the left transverse gyrus and the other in STG, to account for left hemisphere activity. These findings support the notion that visually driven multisensory illusory phonetic percepts produce an auditory-MMN cortical response and that left hemisphere temporal cortex plays a crucial role in this process.
A P300 event related potential technique for assessment of sexually oriented interest.
Vardi, Yoram; Volos, Michal; Sprecher, Elliot; Granovsky, Yelena; Gruenwald, Ilan; Yarnitsky, David
2006-12-01
Despite all of the modern, sophisticated tests that exist for diagnosing and assessing male and female sexual disorders, to our knowledge there is no objective psychophysiological test to evaluate sexual arousal and interest. We provide preliminary data showing a decrease in auditory P300 wave amplitude during exposure to sexually explicit video clips and a significant correlation between the auditory P300 amplitude decrease and self-reported scores of sexual arousal and interest in the clips. A total of 30 healthy subjects were exposed to several blocks of auditory stimuli administered using an oddball paradigm. Baseline auditory P300 amplitudes were obtained and auditory stimuli were then delivered while viewing visual clips with 3 types of content, including sport, scenery and sex. Auditory P300 amplitude significantly decreased during viewing clips of all contents. Viewing sexual content clips caused a maximal decrease in P300 amplitude (p <0.0001). In addition, a high correlation was found between the amplitude decrease and scores on the sexual arousal questionnaire regarding the viewed clips (r = 0.61, p <0.001). In addition, the P300 amplitude decrease was significantly related to the sexual interest score (r = 0.37, p = 0.042) but not to interest in clips of nonsexual content. The change in auditory P300 amplitude during exposure to visual stimuli with sexual context seems to be an objective measure of subject sexual interest. This method might be applied to assess therapeutic intervention and as a diagnostic tool for assessing disorders of impaired libido or psychogenic sexual dysfunction.
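The reported association (r = 0.61) is a simple Pearson correlation between per-subject P300 amplitude decreases and questionnaire scores; a minimal sketch with placeholder data is shown below.

```python
# Sketch: Pearson correlation between P300 amplitude decrease and arousal ratings.
# `p300_decrease` and `arousal_score` are hypothetical per-subject values.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
p300_decrease = rng.normal(3.0, 1.0, 30)             # microvolts, one value per subject
arousal_score = 2.0 * p300_decrease + rng.normal(0, 1.5, 30)

r, p = pearsonr(p300_decrease, arousal_score)
print(f"r = {r:.2f}, p = {p:.4f}")
```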
Halder, S; Käthner, I; Kübler, A
2016-02-01
Auditory brain-computer interfaces are an assistive technology that can restore communication for motor-impaired end-users. Such non-visual brain-computer interface paradigms are of particular importance for end-users that may lose or have lost gaze control. We attempted to show that motor-impaired end-users can learn to control an auditory speller on the basis of event-related potentials. Five end-users with motor impairments, two of whom had additional visual impairments, participated in five sessions. We applied a newly developed auditory brain-computer interface paradigm with natural sounds and directional cues. Three of the five end-users learned to select symbols using this method. Averaged over all five end-users, the information transfer rate increased by more than 1800% from the first session (0.17 bits/min) to the last session (3.08 bits/min). The two best end-users achieved information transfer rates of 5.78 bits/min and accuracies of 92%. Our results show that an auditory BCI with a combination of natural sounds and directional cues can be controlled by end-users with motor impairment. Training improves the performance of end-users to the level of healthy controls. To our knowledge, this is the first time end-users with motor impairments have controlled an auditory brain-computer interface speller with such high accuracy and information transfer rates. Further, our results demonstrate that operating a BCI with event-related potentials benefits from training, and that end-users may require more than one session to develop their full potential. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Modelling the Emergence and Dynamics of Perceptual Organisation in Auditory Streaming
Mill, Robert W.; Bőhm, Tamás M.; Bendixen, Alexandra; Winkler, István; Denham, Susan L.
2013-01-01
Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives—a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success, and conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the dynamics of perception in auditory streaming.
Sensory Intelligence for Extraction of an Abstract Auditory Rule: A Cross-Linguistic Study.
Guo, Xiao-Tao; Wang, Xiao-Dong; Liang, Xiu-Yuan; Wang, Ming; Chen, Lin
2018-02-21
In a complex linguistic environment, while speech sounds can greatly vary, some shared features are often invariant. These invariant features constitute so-called abstract auditory rules. Our previous study has shown that with auditory sensory intelligence, the human brain can automatically extract the abstract auditory rules in the speech sound stream, presumably serving as the neural basis for speech comprehension. However, whether the sensory intelligence for extraction of abstract auditory rules in speech is inherent or experience-dependent remains unclear. To address this issue, we constructed a complex speech sound stream using auditory materials in Mandarin Chinese, in which syllables had a flat lexical tone but differed in other acoustic features to form an abstract auditory rule. This rule was occasionally and randomly violated by the syllables with the rising, dipping or falling tone. We found that both Chinese and foreign speakers detected the violations of the abstract auditory rule in the speech sound stream at a pre-attentive stage, as revealed by the whole-head recordings of mismatch negativity (MMN) in a passive paradigm. However, MMNs peaked earlier in Chinese speakers than in foreign speakers. Furthermore, Chinese speakers showed different MMN peak latencies for the three deviant types, which paralleled recognition points. These findings indicate that the sensory intelligence for extraction of abstract auditory rules in speech sounds is innate but shaped by language experience. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
Practiced musical style shapes auditory skills.
Vuust, Peter; Brattico, Elvira; Seppänen, Miia; Näätänen, Risto; Tervaniemi, Mari
2012-04-01
Musicians' processing of sounds depends highly on instrument, performance practice, and level of expertise. Here, we measured the mismatch negativity (MMN), a preattentive brain response, to six types of musical feature change in musicians playing three distinct styles of music (classical, jazz, and rock/pop) and in nonmusicians, using a novel, fast, and musical-sounding multifeature MMN paradigm. We found an MMN to all six deviants, showing that MMN paradigms can be adapted to resemble a musical context. Furthermore, we found that jazz musicians had larger MMN amplitudes than all other experimental groups across all sound features, indicating greater overall sensitivity to auditory outliers. We also observed a tendency toward shorter MMN latencies to all feature changes in jazz musicians compared to band musicians. These findings indicate that the characteristics of the style of music played by musicians influence their perceptual skills and the brain processing of sound features embedded in music. © 2012 New York Academy of Sciences.
BALDEY: A database of auditory lexical decisions.
Ernestus, Mirjam; Cutler, Anne
2015-01-01
In an auditory lexical decision experiment, 5541 spoken content words and pseudowords were presented to 20 native speakers of Dutch. The words vary in phonological make-up and in number of syllables and stress pattern, and are further representative of the native Dutch vocabulary in that most are morphologically complex, comprising two stems or one stem plus derivational and inflectional suffixes, with inflections representing both regular and irregular paradigms; the pseudowords were matched in these respects to the real words. The BALDEY ("biggest auditory lexical decision experiment yet") data file includes response times and accuracy rates, together with, for each item, morphological information plus phonological and acoustic information derived from automatic phonemic segmentation of the stimuli. Two initial analyses illustrate how this data set can be used. First, we discuss several measures of the point at which a word has no further neighbours and compare the degree to which each measure predicts our lexical decision response outcomes. Second, we investigate how well four different measures of frequency of occurrence (from written corpora, spoken corpora, subtitles, and frequency ratings by 75 participants) predict the same outcomes. These analyses motivate general conclusions about the auditory lexical decision task. The (publicly available) BALDEY database lends itself to many further analyses.
Cognitive mechanisms associated with auditory sensory gating
Jones, L.A.; Hills, P.J.; Dick, K.M.; Jones, S.P.; Bright, P.
2016-01-01
Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However, once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification. PMID:26716891
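Sensory gating in paired-stimulus (S1-S2) paradigms is commonly summarized as the ratio of the P50 amplitude to S2 over the P50 amplitude to S1, with lower ratios indicating stronger gating. The sketch below assumes synthetic single-channel epochs and an illustrative 40-80 ms search window; it is not the authors' exact scoring procedure.

```python
# Sketch: P50 sensory gating ratio from a paired-stimulus (S1-S2) paradigm.
# Gating is commonly summarized as amplitude(S2) / amplitude(S1); lower ratios
# indicate stronger gating. Epochs and the 40-80 ms window are illustrative.
import numpy as np

fs = 1000                                       # sampling rate (Hz)
times = np.arange(-0.1, 0.4, 1 / fs)            # epoch time axis in seconds
rng = np.random.default_rng(2)
epochs_s1 = rng.normal(0, 1, (60, times.size))  # trials x samples, e.g., at Cz
epochs_s2 = rng.normal(0, 0.5, (60, times.size))

def p50_amplitude(epochs):
    erp = epochs.mean(axis=0)                   # average across trials
    window = (times >= 0.04) & (times <= 0.08)  # nominal P50 latency range
    return erp[window].max()

ratio = p50_amplitude(epochs_s2) / p50_amplitude(epochs_s1)
print(f"P50 gating ratio (S2/S1): {ratio:.2f}")
```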
Combaz, Adrien; Van Hulle, Marc M
2015-01-01
We study the feasibility of a hybrid Brain-Computer Interface (BCI) combining simultaneous visual oddball and Steady-State Visually Evoked Potential (SSVEP) paradigms, where both types of stimuli are superimposed on a computer screen. Potentially, such a combination could result in a system being able to operate faster than a purely P300-based BCI and encode more targets than a purely SSVEP-based BCI. We analyse the interactions between the brain responses of the two paradigms, and assess the possibility of simultaneously detecting the brain activity evoked by both paradigms, in a series of 3 experiments where EEG data are analysed offline. Despite differences in the shape of the P300 response between the pure oddball and hybrid conditions, we observe that the classification accuracy of this P300 response is not affected by the SSVEP stimulation. Nor do we observe any effect of the oddball stimulation on the power of the SSVEP response at the frequency of stimulation. Finally, results from the last experiment show the possibility of detecting both types of brain responses simultaneously and suggest not only the feasibility of such a hybrid BCI but also a gain over purely oddball- and purely SSVEP-based BCIs in terms of communication rate.
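A hedged sketch of the two detection steps such a hybrid BCI combines is shown below: spectral power at the SSVEP flicker frequency (Welch's method) and an averaged ERP time-locked to rare oddball targets. The data, the 15 Hz flicker frequency, and the epoch timing are assumptions for illustration only.

```python
# Sketch: the two detection steps of a hybrid oddball/SSVEP BCI.
# Synthetic single-channel EEG; the 15 Hz flicker frequency is an assumption.
import numpy as np
from scipy.signal import welch

fs = 250
rng = np.random.default_rng(3)
eeg = rng.normal(0, 1, fs * 60)                     # 60 s of synthetic EEG
eeg += 0.5 * np.sin(2 * np.pi * 15 * np.arange(eeg.size) / fs)  # embedded SSVEP

# (1) SSVEP: power at the stimulation frequency via Welch's method
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
ssvep_power = psd[np.argmin(np.abs(freqs - 15.0))]

# (2) P300: average epochs time-locked to rare (target) oddball stimuli
target_onsets = np.arange(fs, eeg.size - fs, 5 * fs)             # hypothetical onsets
epochs = np.stack([eeg[o:o + int(0.8 * fs)] for o in target_onsets])
p300_erp = epochs.mean(axis=0)                                   # P300 expected ~300-500 ms

print(ssvep_power, p300_erp.max())
```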
Tactile and bone-conduction auditory brain computer interface for vision and hearing impaired users.
Rutkowski, Tomasz M; Mori, Hiromu
2015-04-15
The paper presents a report on a recently developed BCI alternative for users suffering from impaired vision (lack of focus or eye-movements) or from the so-called "ear-blocking syndrome" (limited hearing). We report on our recent studies of the extent to which vibrotactile stimuli delivered to the head of a user can serve as a platform for a brain computer interface (BCI) paradigm. In the proposed tactile and bone-conduction auditory BCI, multiple novel head positions are used to evoke combined somatosensory and auditory (via the bone-conduction effect) P300 brain responses, in order to define a multimodal tactile and bone-conduction auditory brain computer interface (tbcaBCI). In order to further remove EEG interferences and to improve P300 response classification, the synchrosqueezing transform (SST) is applied. SST outperforms classical time-frequency analysis methods for non-linear and non-stationary signals such as EEG. The proposed method is also computationally more efficient compared to empirical mode decomposition. The SST filtering allows for online EEG preprocessing, which is essential in the case of BCI. Experimental results with healthy BCI-naive users performing online tbcaBCI validate the paradigm, while the feasibility of the concept is illustrated through information transfer rate case studies. We present a comparison of the proposed SST-based preprocessing method, combined with a logistic regression (LR) classifier, with classical preprocessing and LDA-based classification BCI techniques. The proposed tbcaBCI paradigm together with data-driven preprocessing methods is a step forward in robust BCI applications research. Copyright © 2014 Elsevier B.V. All rights reserved.
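The classifier comparison described above can be illustrated, in very reduced form, with scikit-learn: logistic regression versus linear discriminant analysis on feature vectors extracted from P300 epochs. The features here are synthetic placeholders, and no SST preprocessing is included.

```python
# Sketch: comparing logistic regression and LDA on (synthetic) P300 feature vectors,
# mirroring the classifier comparison described above. Features are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_features = 200, 32
X = rng.normal(0, 1, (n_trials, n_features))
y = rng.integers(0, 2, n_trials)              # target vs non-target labels
X[y == 1, :8] += 0.8                          # inject a class difference

for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("LDA", LinearDiscriminantAnalysis())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(name, round(acc, 2))
```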
Cannabis Cue Reactivity and Craving Among Never, Infrequent and Heavy Cannabis Users
Henry, Erika A; Kaye, Jesse T; Bryan, Angela D; Hutchison, Kent E; Ito, Tiffany A
2014-01-01
Substance cue reactivity is theorized as having a significant role in addiction processes, promoting compulsive patterns of drug-seeking and drug-taking behavior. However, research extending this phenomenon to cannabis has been limited. To that end, the goal of the current work was to examine the relationship between cannabis cue reactivity and craving in a sample of 353 participants varying in self-reported cannabis use. Participants completed a visual oddball task whereby neutral, exercise, and cannabis cue images were presented, and a neutral auditory oddball task while event-related brain potentials (ERPs) were recorded. Consistent with past research, greater cannabis use was associated with greater reactivity to cannabis images, as reflected in the P300 component of the ERP, but not to neutral auditory oddball cues. The latter indicates the specificity of cue reactivity differences as a function of substance-related cues and not generalized cue reactivity. Additionally, cannabis cue reactivity was significantly related to self-reported cannabis craving as well as problems associated with cannabis use. Implications for cannabis use and addiction more generally are discussed. PMID:24264815
P300 event-related potentials in children with dyslexia.
Papagiannopoulou, Eleni A; Lagopoulos, Jim
2017-04-01
To elucidate the timing and the nature of neural disturbances in dyslexia, and to further understand their topographical distribution, we examined entire brain regions employing the non-invasive auditory oddball P300 paradigm in children with dyslexia and neurotypical controls. Our findings revealed abnormalities in the dyslexia group in the form of (i) prolonged P300 latency, globally, but greatest in frontal brain regions, and (ii) decreased P300 amplitude confined to the central brain regions. These findings reflect abnormalities associated with a diminished capacity to process mental workload as well as delayed processing of this information in children with dyslexia. Furthermore, the topographical distribution of these findings suggests a distinct spatial distribution for the observed P300 abnormalities. This information may be useful in future therapeutic or brain stimulation intervention trials.
Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T
2016-01-01
Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of different cognitive control operations in visual attention paradigms. However, the synchronization source of these signals, as defined by fMRI BOLD activity, and the extent to which theta oscillations play a role in multimodal attention remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Negative correlations were localized to cortical regions associated with the default mode network (DMN), and positive correlations to ventral premotor areas. Modality-associated attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in fronto-parietal areas that was not observed in the auditory condition. A positive correlation of theta and BOLD activity was observed in auditory cortex, while a negative correlation was observed in visual cortex during auditory attention. The data support a supramodal interaction of theta activity with DMN function, and modality-associated processes within fronto-parietal networks related to top-down, theta-related cognitive control in cross-modal visual attention. On the other hand, in sensory cortices there are opposing effects of theta activity during cross-modal auditory attention.
Stability of auditory discrimination and novelty processing in physiological aging.
Raggi, Alberto; Tasca, Domenica; Rundo, Francesco; Ferri, Raffaele
2013-01-01
Complex higher-order cognitive functions and their possible changes with aging are central objectives of cognitive neuroscience. Event-related potentials (ERPs) allow investigators to probe the earliest stages of information processing. N100, mismatch negativity (MMN) and P3a are auditory ERP components that reflect automatic sensory discrimination. The aim of the present study was to determine whether N100, MMN and P3a parameters are stable in healthy aged subjects compared to those of normal young adults. Normal young adults and older participants were assessed using standardized cognitive functional instruments, and their ERPs were obtained with auditory stimulation at two different interstimulus intervals during a passive paradigm. All individuals were within the normal range on cognitive tests. No significant differences were found for any ERP parameter obtained from the two age groups. This study shows that aging is characterized by stability of auditory discrimination and novelty processing. This is important for establishing normative data for the detection of subtle preclinical changes due to abnormal brain aging.
Motor (but not auditory) attention affects syntactic choice.
Pokhoday, Mikhail; Scheepers, Christoph; Shtyrov, Yury; Myachykov, Andriy
2018-01-01
Understanding the determinants of syntactic choice in sentence production is a salient topic in psycholinguistics. Existing evidence suggests that syntactic choice results from an interplay between linguistic and non-linguistic factors, and a speaker's attention to the elements of a described event represents one such factor. Whereas multimodal accounts of attention suggest a role for different modalities in this process, existing studies examining attention effects in syntactic choice are primarily based on visual cueing paradigms. Hence, it remains unclear whether attentional effects on syntactic choice are limited to the visual modality or are indeed more general. This issue is addressed by the current study. Native English participants viewed and described line drawings of simple transitive events while their attention was directed to the location of the agent or the patient of the depicted event by means of either an auditory (monaural beep) or a motor (unilateral key press) lateral cue. Our results show an effect of cue location, with participants producing more passive-voice descriptions in the patient-cued conditions. Crucially, this cue location effect emerged in the motor-cue but not (or substantially less so) in the auditory-cue condition, as confirmed by a reliable interaction between cue location (agent vs. patient) and cue type (auditory vs. motor). Our data suggest that attentional effects on the speaker's syntactic choices are modality-specific and limited to the visual and motor, but not the auditory, domain.
Auditory priming improves neural synchronization in auditory-motor entrainment.
Crasta, Jewel E; Thaut, Michael H; Anderson, Charles W; Davies, Patricia L; Gavin, William J
2018-05-22
Neurophysiological research has shown that auditory and motor systems interact during movement to rhythmic auditory stimuli through a process called entrainment. This study explores the neural oscillations underlying auditory-motor entrainment using electroencephalography. Forty young adults were randomly assigned to one of two control conditions, an auditory-only condition or a motor-only condition, prior to a rhythmic auditory-motor synchronization condition (referred to as the combined condition). Participants assigned to the auditory-only condition (auditory-first group) listened to 400 trials of auditory stimuli presented every 800 ms, while those in the motor-only condition (motor-first group) were asked to tap rhythmically every 800 ms without any external stimuli. Following their control condition, all participants completed an auditory-motor combined condition that required tapping along with auditory stimuli every 800 ms. As expected, the neural processes for the combined condition in each group were different compared to their respective control condition. Time-frequency analysis of total power at an electrode site on the left central scalp (C3) indicated that the neural oscillations elicited by auditory stimuli, especially in the beta and gamma range, drove the auditory-motor entrainment. For the combined condition, the auditory-first group had significantly lower evoked power in a region of interest representing sensorimotor processing (4-20 Hz) and less total power in a region associated with anticipation and predictive timing (13-16 Hz) than the motor-first group. Thus, the auditory-only condition served as a priming facilitator of the neural processes in the combined condition, more so than the motor-only condition. Results suggest that even brief periods of rhythmic training of the auditory system lead to neural efficiency facilitating the motor system during the process of entrainment. These findings have implications for interventions
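A minimal sketch of the kind of single-channel time-frequency total-power analysis described above (complex Morlet wavelet convolution, power computed per trial and then averaged) is given below. The synthetic data, frequency range, and cycle count are assumptions, not the authors' parameters.

```python
# Sketch: time-frequency total power at one channel via complex Morlet wavelets.
# Synthetic trials; frequency range and cycle count are illustrative.
import numpy as np

fs = 500
t = np.arange(-0.5, 1.0, 1 / fs)
rng = np.random.default_rng(5)
trials = rng.normal(0, 1, (40, t.size))               # trials x samples at "C3"

def morlet_wavelet(f, fs, n_cycles=7):
    sigma_t = n_cycles / (2 * np.pi * f)               # temporal width in seconds
    tw = np.arange(-5 * sigma_t, 5 * sigma_t, 1 / fs)  # wavelet time axis
    return np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * sigma_t**2))

freqs = np.arange(4, 41, 2.0)                          # 4-40 Hz, spanning beta/gamma
power = np.zeros((freqs.size, t.size))
for i, f in enumerate(freqs):
    w = morlet_wavelet(f, fs)
    for trial in trials:
        analytic = np.convolve(trial, w, mode="same")
        power[i] += np.abs(analytic) ** 2              # total power: per trial, then summed
power /= trials.shape[0]                               # average across trials
```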
Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.
Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal
2016-01-01
Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
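As a rough illustration of the GAM approach described above, the sketch below fits smooth terms for two stimulus dimensions plus a tensor-product interaction term to synthetic firing rates, using the pygam package (an assumption; the authors' implementation may differ).

```python
# Sketch: a generalized additive model relating firing rate to two stimulus
# dimensions plus their interaction. Uses pygam as an assumed implementation;
# data are synthetic.
import numpy as np
from pygam import LinearGAM, s, te

rng = np.random.default_rng(6)
n = 500
X = rng.uniform(0, 1, (n, 2))                        # two stimulus dimensions
rate = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 1]
y = rate + rng.normal(0, 0.1, n)                     # noisy "spike rate"

gam = LinearGAM(s(0) + s(1) + te(0, 1)).fit(X, y)    # smooths + tensor interaction
gam.summary()                                        # inspect term-wise significance
```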
Effects of auditory and visual modalities in recall of words.
Gadzella, B M; Whitehead, D A
1975-02-01
Ten experimental conditions were used to study the effects of auditory and visual (printed words, uncolored and colored pictures) modalities and their various combinations with college students. A recall paradigm was employed in which subjects responded in a written test. Analysis of the data showed that the auditory modality was superior to the visual (picture) modalities but was not significantly different from the visual (printed words) modality. Within the visual modalities, printed words were superior to colored pictures. Generally, recall for conditions with multiple modes of representation of stimuli was significantly higher than for conditions with single modes. Multiple modalities, consisting of two or three modes, did not differ significantly from each other. It was concluded that any two modalities of the stimuli presented simultaneously were just as effective as three in recall of stimulus words.
Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.
Bigelow, James; Poremba, Amy
2014-01-01
Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required, however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.
Shtyrov, Yury; Osswald, Katja; Pulvermüller, Friedemann
2008-01-01
The mismatch negativity response, considered a brain correlate of automatic preattentive auditory processing, is enhanced for word stimuli as compared with acoustically matched pseudowords. This lexical enhancement, taken as a signature of activation of language-specific long-term memory traces, was investigated here using functional magnetic resonance imaging to complement previous electrophysiological studies. In a passive oddball paradigm, word stimuli were randomly presented as rare deviants among frequent pseudowords; the reverse conditions employed infrequent pseudowords among word stimuli. Random-effect analysis indicated clearly distinct patterns for the different lexical types. Whereas the hemodynamic mismatch response was significant for the word deviants, it did not reach significance for the pseudoword conditions. This difference, more pronounced in the left than the right hemisphere, was also assessed by analyzing average parameter estimates in regions of interest within both temporal lobes. A significant hemisphere-by-lexicality interaction confirmed stronger blood oxygenation level-dependent mismatch responses to words than pseudowords in the left but not in the right superior temporal cortex. The increased left superior temporal activation and the laterality of cortical sources elicited by spoken words compared with pseudowords may indicate the activation of cortical circuits for lexical material even in passive oddball conditions and suggest involvement of the left superior temporal areas in housing such word-processing neuronal circuits.
Response to Own Name in Children: ERP Study of Auditory Social Information Processing
Key, Alexandra P.; Jones, Dorita; Peters, Sarika U.
2016-01-01
Auditory processing is an important component of cognitive development, and names are among the most frequently occurring receptive language stimuli. Although own name processing has been examined in infants and adults, surprisingly little data exist on responses to own name in children. The present ERP study examined spoken name processing in 32 children (M=7.85 years) using a passive listening paradigm. Our results demonstrated that children differentiate own and close other’s names from unknown names, as reflected by the enhanced parietal P300 response. The responses to own and close other names did not differ between each other. Repeated presentations of an unknown name did not result in the same familiarity as the known names. These results suggest that auditory ERPs to known/unknown names are a feasible means to evaluate complex auditory processing without the need for overt behavioral responses. PMID:27456543
Rapid extraction of auditory feature contingencies.
Bendixen, Alexandra; Prinz, Wolfgang; Horváth, János; Trujillo-Barreto, Nelson J; Schröger, Erich
2008-07-01
Contingent relations between sensory events render the environment predictable and thus facilitate adaptive behavior. The human capacity to detect such relations has been comprehensively demonstrated in paradigms in which contingency rules were task-relevant or in which they applied to motor behavior. The extent to which contingencies can also be extracted from events that are unrelated to the current goals of the organism has remained largely unclear. The present study addressed the emergence of contingency-related effects for behaviorally irrelevant auditory stimuli and the cortical areas involved in the processing of such contingency rules. Contingent relations between different features of temporally separate events were embedded in a new dynamic protocol. Participants were presented with the auditory stimulus sequences while their attention was captured by a video. The mismatch negativity (MMN) component of the event-related brain potential (ERP) was employed as an electrophysiological correlate of contingency detection. MMN generators were localized by means of scalp current density (SCD) and primary current density (PCD) analyses with variable resolution electromagnetic tomography (VARETA). Results show that task-irrelevant contingencies can be extracted from about fifteen to twenty successive events conforming to the contingent relation. Topographic and tomographic analyses reveal the involvement of the auditory cortex in the processing of contingency violations. The present data provide evidence for the rapid encoding of complex extrapolative relations in sensory areas. This capacity is of fundamental importance for the organism in its attempt to model the sensory environment outside the focus of attention.
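For context, the MMN index used in paradigms like this is typically derived as the deviant-minus-standard difference wave, with the most negative deflection sought in a window of roughly 100-250 ms. A sketch with synthetic epochs:

```python
# Sketch: deriving an MMN estimate as the deviant-minus-standard difference wave.
# Synthetic epochs; the 100-250 ms search window is a common convention.
import numpy as np

fs = 500
times = np.arange(-0.1, 0.5, 1 / fs)
rng = np.random.default_rng(7)
std_epochs = rng.normal(0, 1, (300, times.size))   # frequent rule-conforming events
dev_epochs = rng.normal(0, 1, (60, times.size))    # rare contingency violations

difference = dev_epochs.mean(axis=0) - std_epochs.mean(axis=0)
window = (times >= 0.10) & (times <= 0.25)
mmn_amplitude = difference[window].min()           # MMN is a negative deflection
mmn_latency = times[window][difference[window].argmin()]
print(mmn_amplitude, mmn_latency)
```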
Discrimination of timbre in early auditory responses of the human brain.
Seol, Jaeho; Oh, MiAe; Kim, June Sic; Jin, Seung-Hyun; Kim, Sun Il; Chung, Chun Kee
2011-01-01
The issue of how differences in timbre are represented in the neural response still has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employ phasing and clipping of tones to produce auditory stimuli that differ in timbre, capturing its multidimensional nature. We investigated the auditory response and sensory gating as well, using magnetoencephalography (MEG). Thirty-five healthy subjects without hearing deficit participated in the experiments. Pairs of tones, either the same or different in timbre, were presented in a conditioning (S1)-testing (S2) paradigm with an inter-stimulus interval of 500 ms. The magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres. This result supports the idea that timbre, at least as manipulated by phasing and clipping, is discriminated during early auditory processing. An effect of S1 on the response to the second stimulus in a pair was evident in the M100 of the left hemisphere, whereas only in the right hemisphere did both M50 and M100 responses to S2 reflect whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes differed with presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, they reveal that auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre.
Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task
Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.
2016-01-01
Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829
Møller, Cecilie; Højlund, Andreas; Bærentsen, Klaus B; Hansen, Niels Chr; Skewes, Joshua C; Vuust, Peter
2018-05-01
Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states that multisensory gain is maximal when responses to the unisensory constituents of the stimuli are weak. It is one of the basic principles underlying multisensory processing of spatiotemporally corresponding crossmodal stimuli and is well established at both the behavioral and neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues to pitch discrimination follows the PoIE at the interindividual level (i.e., varies with varying levels of auditory-only pitch discrimination abilities). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than incongruent condition. The magnitude of the gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.
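The sensitivity index d' reported above is computed from hit and false-alarm rates as the difference of their z-transforms. A minimal sketch, using a log-linear correction for extreme proportions (one common convention, not necessarily the authors' choice):

```python
# Sketch: computing the sensitivity index d' from hit and false-alarm counts.
# The log-linear correction for extreme rates is one common convention.
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    hit_rate = (hits + 0.5) / (hits + misses + 1)              # log-linear correction
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts from an oddball pitch discrimination block:
print(round(dprime(hits=42, misses=8, false_alarms=12, correct_rejections=138), 2))
```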
Slevc, L Robert; Shell, Alison R
2015-01-01
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.
The Time-Course of Auditory and Visual Distraction Effects in a New Crossmodal Paradigm
ERIC Educational Resources Information Center
Bendixen, Alexandra; Grimm, Sabine; Deouell, Leon Y.; Wetzel, Nicole; Mädebach, Andreas; Schröger, Erich
2010-01-01
Vision often dominates audition when attentive processes are involved (e.g., the ventriloquist effect), yet little is known about the relative potential of the two modalities to initiate a "break through of the unattended". The present study was designed to systematically compare the capacity of task-irrelevant auditory and visual events to…
Auditory display as feedback for a novel eye-tracking system for sterile operating room interaction.
Black, David; Unger, Michael; Fischer, Nele; Kikinis, Ron; Hahn, Horst; Neumuth, Thomas; Glaser, Bernhard
2018-01-01
The growing number of technical systems in the operating room has increased attention on developing touchless interaction methods for sterile conditions. However, touchless interaction paradigms lack the tactile feedback found in common input devices such as mice and keyboards. We propose a novel touchless eye-tracking interaction system with auditory display as a feedback method for completing typical operating room tasks. Auditory display provides feedback concerning the selected input into the eye-tracking system as well as a confirmation of the system response. An eye-tracking system with a novel auditory display using both earcons and parameter-mapping sonification was developed to allow touchless interaction for six typical scrub nurse tasks. An evaluation with novice participants compared auditory display with visual display with respect to reaction time and a series of subjective measures. When using auditory display to substitute for the lost tactile feedback during eye-tracking interaction, participants exhibit reduced reaction time compared to using visual-only display. In addition, the auditory feedback led to lower subjective workload and higher usefulness and system acceptance ratings. Due to the absence of tactile feedback for eye-tracking and other touchless interaction methods, auditory display is shown to be a useful and necessary addition to new interaction concepts for the sterile operating room, reducing reaction times while improving subjective measures, including usefulness, user satisfaction, and cognitive workload.
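For readers unfamiliar with parameter-mapping sonification, the hedged sketch below maps a single normalized input value onto the pitch of a short tone; the mapping range, tone duration, and the idea of mapping a gaze-related quantity are illustrative assumptions, not the study's implementation.

```python
# Minimal parameter-mapping sonification sketch: a normalized input value
# (e.g., gaze distance from a target; hypothetical) is mapped onto the pitch
# of a short tone. Frequency range and duration are illustrative choices.
import numpy as np
from scipy.io import wavfile

def sonify(value, f_min=220.0, f_max=880.0, duration=0.2, sr=44100):
    """Map value in [0, 1] to a pure tone between f_min and f_max (Hz)."""
    value = float(np.clip(value, 0.0, 1.0))
    freq = f_min * (f_max / f_min) ** value          # logarithmic pitch mapping
    t = np.arange(int(duration * sr)) / sr
    tone = 0.5 * np.sin(2 * np.pi * freq * t)
    envelope = np.hanning(tone.size)                 # avoid clicks at onset/offset
    return (tone * envelope).astype(np.float32)

wavfile.write("feedback.wav", 44100, sonify(0.75))   # higher value -> higher pitch
```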
ERIC Educational Resources Information Center
Passow, Susanne; Müller, Maike; Westerhausen, René; Hugdahl, Kenneth; Wartenburger, Isabell; Heekeren, Hauke R.; Lindenberger, Ulman; Li, Shu-Chen
2013-01-01
Multitalker situations confront listeners with a plethora of competing auditory inputs, and hence require selective attention to relevant information, especially when the perceptual saliency of distracting inputs is high. This study augmented the classical forced-attention dichotic listening paradigm by adding an interaural intensity manipulation…
Youssofzadeh, Vahab; Prasad, Girijesh; Naeem, Muhammad; Wong-Lin, KongFatt
2016-01-01
Partial Granger causality (PGC) has been applied to analyse causal functional neural connectivity after effectively mitigating confounding influences caused by endogenous latent variables and exogenous environmental inputs. However, it is not known how this connectivity obtained from PGC evolves over time. Furthermore, PGC has yet to be tested on realistic nonlinear neural circuit models and multi-trial event-related potential (ERP) data. In this work, we first applied a time-domain PGC technique to evaluate simulated neural circuit models, and demonstrated that the PGC measure is more accurate and robust in detecting connectivity patterns as compared to conditional Granger causality and partial directed coherence, especially when the circuit is intrinsically nonlinear. Moreover, the connectivity in PGC settles faster into a stable and correct configuration over time. After method verification, we applied PGC to reveal the causal connections of ERP trials of a mismatch negativity auditory oddball paradigm. The PGC analysis revealed significant bilateral but asymmetrical localised activity in the temporal lobe close to the auditory cortex, and causal influences in the frontal, parietal and cingulate cortical areas, consistent with previous studies. Interestingly, the time to reach a stable connectivity configuration (~250–300 ms) coincides with the deviation of ensemble ERPs to oddball from standard tones. Finally, using a sliding time window, we showed higher resolution dynamics of causal connectivity within an ERP trial. In summary, time-domain PGC is promising for accurately deciphering directed functional connectivity in nonlinear circuits and ERP trials, and at a sufficiently early stage. This data-driven approach can reduce computational time, and determine the key architecture for neural circuit modeling.
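Partial Granger causality itself is not part of standard Python toolboxes; as a simplified stand-in for the kind of analysis described above, the sketch below runs ordinary pairwise Granger causality tests with statsmodels on synthetic channels. PGC additionally conditions out latent and exogenous influences, so this only illustrates the underlying "does the past of X improve prediction of Y" test.

```python
# Simplified stand-in for the connectivity analysis: pairwise Granger causality
# between two synthetic "channels" using statsmodels (the call also prints a
# per-lag summary). This is NOT partial Granger causality as used in the study.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)                               # hypothetical source channel
y = 0.6 * np.roll(x, 2) + 0.4 * rng.standard_normal(n)   # y lags x by 2 samples

# Column order: the test asks whether the SECOND column Granger-causes the FIRST.
data = np.column_stack([y, x])
results = grangercausalitytests(data, maxlag=4)
for lag, res in results.items():
    p_value = res[0]["ssr_ftest"][1]
    print(f"lag {lag}: p = {p_value:.4g}")
```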
Saint-Amour, Dave; De Sanctis, Pierfilippo; Molholm, Sophie; Ritter, Walter; Foxe, John J.
2006-01-01
Seeing a speaker’s facial articulatory gestures powerfully affects speech perception, helping us overcome noisy acoustical environments. One particularly dramatic illustration of visual influences on speech perception is the “McGurk illusion”, where dubbing an auditory phoneme onto video of an incongruent articulatory movement can often lead to illusory auditory percepts. This illusion is so strong that even in the absence of any real change in auditory stimulation, it activates the automatic auditory change-detection system, as indexed by the mismatch negativity (MMN) component of the auditory event-related potential (ERP). We investigated the putative left hemispheric dominance of McGurk-MMN using high-density ERPs in an oddball paradigm. Topographic mapping of the initial McGurk-MMN response showed a highly lateralized left hemisphere distribution, beginning at 175 ms. Subsequently, scalp activity was also observed over bilateral fronto-central scalp with a maximal amplitude at ~290 ms, suggesting later recruitment of right temporal cortices. Strong left hemisphere dominance was again observed during the last phase of the McGurk-MMN waveform (350–400 ms). Source analysis indicated bilateral sources in the temporal lobe just posterior to primary auditory cortex. While a single source in the right superior temporal gyrus (STG) accounted for the right hemisphere activity, two separate sources were required, one in the left transverse gyrus and the other in STG, to account for left hemisphere activity. These findings support the notion that visually driven multisensory illusory phonetic percepts produce an auditory-MMN cortical response and that left hemisphere temporal cortex plays a crucial role in this process. PMID:16757004
Neural correlates of distraction and conflict resolution for nonverbal auditory events.
Stewart, Hannah J; Amitay, Sygal; Alain, Claude
2017-05-09
In everyday situations auditory selective attention requires listeners to suppress task-irrelevant stimuli and to resolve conflicting information in order to make appropriate goal-directed decisions. Traditionally, these two processes (i.e. distractor suppression and conflict resolution) have been studied separately. In the present study we measured neuroelectric activity while participants performed a new paradigm in which both processes are quantified. In separate blocks of trials, participants indicated whether two sequential tones share the same pitch or location depending on the block's instruction. For the distraction measure, a positive component peaking at ~250 ms was found - a distraction positivity. Brain electrical source analysis of this component suggests different generators when listeners attended to frequency and location, with the distraction by location more posterior than the distraction by frequency, providing support for the dual-pathway theory. For the conflict resolution measure, a negative frontocentral component (270-450 ms) was found, which showed similarities with those of prior studies on auditory and visual conflict resolution tasks. The timing and distribution are consistent with two distinct neural processes with suppression of task-irrelevant information occurring before conflict resolution. This new paradigm may prove useful in clinical populations to assess impairments in filtering out task-irrelevant information and/or resolving conflicting information.
Selective Attention to Auditory Memory Neurally Enhances Perceptual Precision.
Lim, Sung-Joo; Wöstmann, Malte; Obleser, Jonas
2015-12-09
Selective attention to a task-relevant stimulus facilitates encoding of that stimulus into a working memory representation. It is less clear whether selective attention also improves the precision of a stimulus already represented in memory. Here, we investigate the behavioral and neural dynamics of selective attention to representations in auditory working memory (i.e., auditory objects) using psychophysical modeling and model-based analysis of electroencephalographic signals. Human listeners performed a syllable pitch discrimination task where two syllables served as to-be-encoded auditory objects. Valid (vs neutral) retroactive cues were presented during retention to allow listeners to selectively attend to the to-be-probed auditory object in memory. Behaviorally, listeners represented auditory objects in memory more precisely (expressed by steeper slopes of a psychometric curve) and made faster perceptual decisions when valid compared to neutral retrocues were presented. Neurally, valid compared to neutral retrocues elicited a larger frontocentral sustained negativity in the evoked potential as well as enhanced parietal alpha/low-beta oscillatory power (9-18 Hz) during memory retention. Critically, individual magnitudes of alpha oscillatory power (7-11 Hz) modulation predicted the degree to which valid retrocues benefitted individuals' behavior. Our results indicate that selective attention to a specific object in auditory memory does benefit human performance not by simply reducing memory load, but by actively engaging complementary neural resources to sharpen the precision of the task-relevant object in memory. Can selective attention improve the representational precision with which objects are held in memory? And if so, what are the neural mechanisms that support such improvement? These issues have been rarely examined within the auditory modality, in which acoustic signals change and vanish on a milliseconds time scale. Introducing a new auditory memory
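The precision measure mentioned above (steeper psychometric slopes) can be illustrated with a logistic psychometric function fit; the function form, stimulus levels, and proportions below are hypothetical and serve only to show how a slope parameter is estimated.

```python
# Sketch: fitting a logistic psychometric function to proportion-correct data
# and reading off its slope as a precision estimate (data values are made up).
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, threshold, slope, lapse=0.02):
    """Logistic function rising from 0.5 (guessing) to 1 - lapse."""
    return 0.5 + (0.5 - lapse) / (1.0 + np.exp(-slope * (x - threshold)))

pitch_diff = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # semitone differences (hypothetical)
p_correct = np.array([0.52, 0.61, 0.78, 0.93, 0.98])   # observed proportions (hypothetical)

params, _ = curve_fit(psychometric, pitch_diff, p_correct, p0=[2.0, 1.0])
threshold, slope = params
print(f"threshold = {threshold:.2f} semitones, slope = {slope:.2f}")
```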
Burnham, Denis; Dodd, Barbara
2004-12-01
The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. Copyright 2004 Wiley Periodicals, Inc.
Linguistic processing in idiopathic generalized epilepsy: an auditory event-related potential study.
Henkin, Yael; Kishon-Rabin, Liat; Pratt, Hillel; Kivity, Sara; Sadeh, Michelle; Gadoth, Natan
2003-09-01
Auditory processing of increasing acoustic and linguistic complexity was assessed in children with idiopathic generalized epilepsy (IGE) by using auditory event-related potentials (AERPs) as well as reaction time and performance accuracy. Twenty-four children with IGE [12 with generalized tonic-clonic seizures (GTCSs), and 12 with absence seizures (ASs)] with average intelligence and age-appropriate scholastic skills, uniformly medicated with valproic acid (VPA), and 20 healthy controls, performed oddball discrimination tasks that consisted of the following stimuli: (a) pure tones; (b) nonmeaningful monosyllables that differed by their phonetic features (i.e., phonetic stimuli); and (c) meaningful monosyllabic words from two semantic categories (i.e., semantic stimuli). AERPs elicited by nonlinguistic stimuli were similar in healthy children and children with epilepsy, whereas those elicited by linguistic stimuli (i.e., phonetic and semantic) differed significantly in latency, amplitude, and scalp distribution. In children with GTCSs, phonetic and semantic processing were characterized by slower processing time, manifested by prolonged N2 and P3 latencies during phonetic processing, and prolongation of all AERP latencies during semantic processing. In children with ASs, phonetic and semantic processing were characterized by increased allocation of attentional resources, manifested by enhanced N2 amplitudes. Semantic processing was also characterized by prolonged P3 latency. In both patient groups, processing of linguistic stimuli resulted in different patterns of brain-activity lateralization compared with those in healthy controls. Reaction time and performance accuracy did not differ among the study groups. AERPs exposed linguistic-processing deficits related to seizure type in children with IGE. Neurologic follow-up should therefore include evaluation of linguistic functions, and remedial intervention should be provided accordingly.
Impact of language on development of auditory-visual speech perception.
Sekiyama, Kaoru; Burnham, Denis
2008-03-01
The McGurk effect paradigm was used to examine the developmental onset of inter-language differences between Japanese and English in auditory-visual speech perception. Participants were asked to identify syllables in audiovisual (with congruent or discrepant auditory and visual components), audio-only, and video-only presentations at various signal-to-noise levels. In Experiment 1 with two groups of adults, native speakers of Japanese and native speakers of English, the results on both percent visually influenced responses and reaction time supported previous reports of a weaker visual influence for Japanese participants. In Experiment 2, an additional three age groups (6, 8, and 11 years) in each language group were tested. The results showed that the degree of visual influence was low and equivalent for Japanese and English language 6-year-olds, and increased over age for English language participants, especially between 6 and 8 years, but remained the same for Japanese participants. This may be related to the fact that English language adults and older children processed visual speech information relatively faster than auditory information whereas no such inter-modal differences were found in the Japanese participants' reaction times.
ERIC Educational Resources Information Center
Hessler, Dorte; Jonkers, Roel; Stowe, Laurie; Bastiaanse, Roelien
2013-01-01
In the current ERP study, an active oddball task was carried out, testing pure tones and auditory, visual and audiovisual syllables. For pure tones, an MMN, an N2b, and a P3 were found, confirming traditional findings. Auditory syllables evoked an N2 and a P3. We found that the amplitude of the P3 depended on the distance between standard and…
Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale
2017-04-01
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with NH listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. Their findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.
Frontal P300 decrement and executive dysfunction in adolescents with conduct problems.
Kim, M S; Kim, J J; Kwon, J S
2001-01-01
This study investigated the cognitive and cerebral function of adolescents with conduct problems by neuropsychological battery (STIM) and event-related potential (ERP). Eighteen adolescents with conduct disorder, and 18 age-matched normal subjects were included. Such cognitive functions as attention, memory, executive function and problem solving were evaluated using subtests of STIM. ERP was measured using an auditory oddball paradigm. The conduct group showed a significantly lower hit rate on the Wisconsin Card Sorting Test (WCST) than the control group. In addition, the conduct group showed reduced P300 amplitude at Fz and Cz, and prolonged P300 latency at Fz, and there was a significant correlation between P300 amplitude and Stroop test performance. These results indicate that adolescents with conduct problems have impairments of executive function and inhibition, and that these impairments are associated with frontal dysfunction.
Auditory Reserve and the Legacy of Auditory Experience
Skoe, Erika; Kraus, Nina
2014-01-01
Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how individuals cope with auditory impoverishment or loss of sensorineural function. PMID:25405381
Stimulation of the human auditory nerve with optical radiation
NASA Astrophysics Data System (ADS)
Fishman, Andrew; Winkler, Piotr; Mierzwinski, Jozef; Beuth, Wojciech; Izzo Matic, Agnella; Siedlecki, Zygmunt; Teudt, Ingo; Maier, Hannes; Richter, Claus-Peter
2009-02-01
A novel, spatially selective method to stimulate cranial nerves has been proposed: contact free stimulation with optical radiation. The radiation source is an infrared pulsed laser. This case report is the first to show that optical stimulation of the auditory nerve is possible in the human. The ethical approach to conduct any measurements or tests in humans requires efficacy and safety studies in animals, which have been conducted in gerbils. This report represents the first step in a translational research project to initiate a paradigm shift in neural interfaces. A patient was selected who required surgical removal of a large meningioma angiomatum WHO I by a planned transcochlear approach. Prior to cochlear ablation by drilling and subsequent tumor resection, the cochlear nerve was stimulated with a pulsed infrared laser at low radiation energies. Stimulation with optical radiation evoked compound action potentials from the human auditory nerve. Stimulation of the auditory nerve with infrared laser pulses is possible in the human inner ear. The finding is an important step for translating results from animal experiments to humans and furthers the development of a novel interface that uses optical radiation to stimulate neurons. Additional measurements are required to optimize the stimulation parameters.
Disbergen, Niels R.; Valente, Giancarlo; Formisano, Elia; Zatorre, Robert J.
2018-01-01
Polyphonic music listening well exemplifies processes typically involved in daily auditory scene analysis situations, relying on an interactive interplay between bottom-up and top-down processes. Most studies investigating scene analysis have used elementary auditory scenes, however real-world scene analysis is far more complex. In particular, music, contrary to most other natural auditory scenes, can be perceived by either integrating or, under attentive control, segregating sound streams, often carried by different instruments. One of the prominent bottom-up cues contributing to multi-instrument music perception is their timbre difference. In this work, we introduce and validate a novel paradigm designed to investigate, within naturalistic musical auditory scenes, attentive modulation as well as its interaction with bottom-up processes. Two psychophysical experiments are described, employing custom-composed two-voice polyphonic music pieces within a framework implementing a behavioral performance metric to validate listener instructions requiring either integration or segregation of scene elements. In Experiment 1, the listeners' locus of attention was switched between individual instruments or the aggregate (i.e., both instruments together), via a task requiring the detection of temporal modulations (i.e., triplets) incorporated within or across instruments. Subjects responded post-stimulus whether triplets were present in the to-be-attended instrument(s). Experiment 2 introduced the bottom-up manipulation by adding a three-level morphing of instrument timbre distance to the attentional framework. The task was designed to be used within neuroimaging paradigms; Experiment 2 was additionally validated behaviorally in the functional Magnetic Resonance Imaging (fMRI) environment. Experiment 1 subjects (N = 29, non-musicians) completed the task at high levels of accuracy, showing no group differences between any experimental conditions. Nineteen listeners also
Cogné, Mélanie; Knebel, Jean-François; Klinger, Evelyne; Bindschaedler, Claire; Rapin, Pierre-André; Joseph, Pierre-Alain; Clarke, Stephanie
2018-01-01
Topographical disorientation is a frequent deficit among patients suffering from brain injury. Spatial navigation can be explored in this population using virtual reality environments, even in the presence of motor or sensory disorders. Furthermore, the positive or negative impact of specific stimuli can be investigated. We studied how auditory stimuli influence the performance of brain-injured patients in a navigational task, using the Virtual Action Planning-Supermarket (VAP-S) with the addition of contextual ("sonar effect" and "name of product") and non-contextual ("periodic randomised noises") auditory stimuli. The study included 22 patients with a first unilateral hemispheric brain lesion and 17 healthy age-matched control subjects. After a software familiarisation, all subjects were tested without auditory stimuli, with a sonar effect or periodic random sounds in a random order, and with the stimulus "name of product". Contextual auditory stimuli improved patient performance more than control group performance. Contextual stimuli benefited most patients with severe executive dysfunction or with severe unilateral neglect. These results indicate that contextual auditory stimuli are useful in the assessment of navigational abilities in brain-damaged patients and that they should be used in rehabilitation paradigms.
Stevens, Courtney; Paulsen, David; Yasen, Alia; Mitsunaga, Leila; Neville, Helen
2012-01-01
Previous research indicates that at least some children with specific language impairment (SLI) show a reduced neural response when non-linguistic tones were presented at rapid rates. However, this past research has examined older children, and it is unclear whether such deficits emerge earlier in development. It is also unclear whether atypical refractory effects differ for linguistic versus non-linguistic stimuli or can be explained by deficits in selective auditory attention reported among children with SLI. In the present study, auditory refractory periods were compared in a group of 24 young children with SLI (age 3–8 years) and 24 matched control children. Event-related brain potentials (ERPs) were recorded and compared to 100 ms linguistic and non-linguistic probe stimuli presented at inter-stimulus intervals (ISIs) of 200, 500, or 1000 ms. These probes were superimposed on story narratives when attended and ignored, permitting an experimental manipulation of selective attention within the same paradigm. Across participants, clear refractory effects were observed with this paradigm, evidenced as a reduced amplitude response from 100 to 200 ms at shorter ISIs. Children with SLI showed reduced amplitude ERPs relative to the typically-developing group at only the shortest, 200 ms, ISI and this difference was over the left-hemisphere for linguistic probes and over the right-hemisphere for non-linguistic probes. None of these effects was influenced by the direction of selective attention. Taken together, these findings suggest that deficits in the neural representation of rapidly presented auditory stimuli may be one risk factor for atypical language development. PMID:22265331
Top-down and bottom-up neurodynamic evidence in patients with tinnitus.
Hong, Sung Kwang; Park, Sejik; Ahn, Min-Hee; Min, Byoung-Kyong
2016-12-01
Although a peripheral auditory (bottom-up) deficit is an essential prerequisite for the generation of tinnitus, central cognitive (top-down) impairment has also been shown to be an inherent neuropathological mechanism. Using an auditory oddball paradigm (for top-down analyses) and a passive listening paradigm (for bottom-up analyses) while recording electroencephalograms (EEGs), we investigated whether top-down or bottom-up components were more critical in the neuropathology of tinnitus, independent of peripheral hearing loss. We observed significantly reduced P300 amplitudes (reflecting fundamental cognitive processes such as attention) and evoked theta power (reflecting top-down regulation in memory systems) for target stimuli at the tinnitus frequency of patients with tinnitus but without hearing loss. The contingent negative variation (reflecting top-down expectation of a subsequent event prior to stimulation) and N100 (reflecting auditory bottom-up selective attention) were different between the healthy and patient groups. Interestingly, when tinnitus patients were divided into two subgroups based on their P300 amplitudes, their P170 and N200 components, and annoyance and distress indices to their tinnitus sound were different. EEG theta-band power and its Granger causal neurodynamic results consistently support a double dissociation of these two groups in both top-down and bottom-up tasks. Directed cortical connectivity corroborates that the tinnitus network involves the anterior cingulate and the parahippocampal areas, where higher-order top-down control is generated. Together, our observations provide neurophysiological and neurodynamic evidence revealing a differential engagement of top-down impairment along with deficits in bottom-up processing in patients with tinnitus but without hearing loss. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Demopoulos, Carly; Hopkins, Joyce; Kopald, Brandon E; Paulson, Kim; Doyle, Lauren; Andrews, Whitney E; Lewine, Jeffrey David
2015-11-01
The primary aim of this study was to examine whether there is an association between magnetoencephalography-based (MEG) indices of basic cortical auditory processing and vocal affect recognition (VAR) ability in individuals with autism spectrum disorder (ASD). MEG data were collected from 25 children/adolescents with ASD and 12 control participants using a paired-tone paradigm to measure quality of auditory physiology, sensory gating, and rapid auditory processing. Group differences were examined in auditory processing and vocal affect recognition ability. The relationship between differences in auditory processing and vocal affect recognition deficits was examined in the ASD group. Replicating prior studies, participants with ASD showed longer M1n latencies and impaired rapid processing compared with control participants. These variables were significantly related to VAR, with the linear combination of auditory processing variables accounting for approximately 30% of the variability after controlling for age and language skills in participants with ASD. VAR deficits in ASD are typically interpreted as part of a core, higher order dysfunction of the "social brain"; however, these results suggest they also may reflect basic deficits in auditory processing that compromise the extraction of socially relevant cues from the auditory environment. As such, they also suggest that therapeutic targeting of sensory dysfunction in ASD may have additional positive implications for other functional deficits. (c) 2015 APA, all rights reserved.
Zatorre, Robert J.; Delhommeau, Karine; Zarate, Jean Mary
2012-01-01
We tested changes in cortical functional response to auditory patterns in a configural learning paradigm. We trained 10 human listeners to discriminate micromelodies (consisting of smaller pitch intervals than normally used in Western music) and measured covariation in blood oxygenation signal to increasing pitch interval size in order to dissociate global changes in activity from those specifically associated with the stimulus feature that was trained. A psychophysical staircase procedure with feedback was used for training over a 2-week period. Behavioral tests of discrimination ability performed before and after training showed significant learning on the trained stimuli, and generalization to other frequencies and tasks; no learning occurred in an untrained control group. Before training the functional MRI data showed the expected systematic increase in activity in auditory cortices as a function of increasing micromelody pitch interval size. This function became shallower after training, with the maximal change observed in the right posterior auditory cortex. Global decreases in activity in auditory regions, along with global increases in frontal cortices also occurred after training. Individual variation in learning rate was related to the hemodynamic slope to pitch interval size, such that those who had a higher sensitivity to pitch interval variation prior to learning achieved the fastest learning. We conclude that configural auditory learning entails modulation in the response of auditory cortex to the trained stimulus feature. Reduction in blood oxygenation response to increasing pitch interval size suggests that fewer computational resources, and hence lower neural recruitment, is associated with learning, in accord with models of auditory cortex function, and with data from other modalities. PMID:23227019
Using neuroimaging to understand the cortical mechanisms of auditory selective attention
Lee, Adrian KC; Larson, Eric; Maddox, Ross K; Shinn-Cunningham, Barbara G
2013-01-01
Over the last four decades, a range of different neuroimaging tools have been used to study human auditory attention, spanning from classic event-related potential studies using electroencephalography to modern multimodal imaging approaches (e.g., combining anatomical information based on magnetic resonance imaging with magneto- and electroencephalography). This review begins by exploring the different strengths and limitations inherent to different neuroimaging methods, and then outlines some common behavioral paradigms that have been adopted to study auditory attention. We argue that in order to design a neuroimaging experiment that produces interpretable, unambiguous results, the experimenter must not only have a deep appreciation of the imaging technique employed, but also a sophisticated understanding of perception and behavior. Only with the proper caveats in mind can one begin to infer how the cortex supports a human in solving the “cocktail party” problem. PMID:23850664
Auditory evoked potentials in patients with major depressive disorder measured by Emotiv system.
Wang, Dongcui; Mo, Fongming; Zhang, Yangde; Yang, Chao; Liu, Jun; Chen, Zhencheng; Zhao, Jinfeng
2015-01-01
In a previous study (unpublished), the Emotiv headset was validated for capturing event-related potentials (ERPs) from normal subjects. In the present follow-up study, the signal quality of the Emotiv headset was tested by the accuracy of discriminating Major Depressive Disorder (MDD) patients from normal subjects. ERPs of 22 MDD patients and 15 normal subjects were induced by an auditory oddball task, and the amplitudes of the N1, N2 and P3 ERP components were specifically analyzed. The features of the ERPs were statistically investigated. It was found that the Emotiv headset is capable of discriminating the abnormal N1, N2 and P3 components in MDD patients. The Relief-F algorithm was applied to all features for feature selection. The selected features were then input to a linear discriminant analysis (LDA) classifier with leave-one-out cross-validation to characterize the ERP features of MDD. All 127 possible combinations of the 7 selected ERP features were classified using LDA. The best classification accuracy achieved was 89.66%. These results suggest that MDD patients are identifiable from normal subjects by ERPs measured with the Emotiv headset.
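A minimal sketch of the classification step described above, assuming a generic feature matrix: linear discriminant analysis evaluated with leave-one-out cross-validation in scikit-learn. The placeholder data mirror the study only in group sizes and feature count; the actual ERP features are not reproduced here.

```python
# Sketch: LDA classifier evaluated with leave-one-out cross-validation over a
# small set of ERP features (the feature matrix here is random placeholder data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_features = 37, 7                        # 22 patients + 15 controls, 7 features
X = rng.standard_normal((n_subjects, n_features))     # placeholder ERP amplitudes/latencies
y = np.array([1] * 22 + [0] * 15)                     # 1 = MDD, 0 = control

# Feature subsets (e.g., all 127 nonempty combinations of 7 features) could be
# looped over with itertools.combinations; here a single subset is evaluated.
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2%}")
```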
Mainsah, B O; Reeves, G; Collins, L M; Throckmorton, C S
2017-08-01
The role of a brain-computer interface (BCI) is to discern a user's intended message or action by extracting and decoding relevant information from brain signals. Stimulus-driven BCIs, such as the P300 speller, rely on detecting event-related potentials (ERPs) in response to a user attending to relevant or target stimulus events. However, this process is error-prone because the ERPs are embedded in noisy electroencephalography (EEG) data, representing a fundamental problem in communication of the uncertainty in the information that is received during noisy transmission. A BCI can be modeled as a noisy communication system and an information-theoretic approach can be exploited to design a stimulus presentation paradigm to maximize the information content that is presented to the user. However, previous methods that focused on designing error-correcting codes failed to provide significant performance improvements due to underestimating the effects of psycho-physiological factors on the P300 ERP elicitation process and a limited ability to predict online performance with their proposed methods. Maximizing the information rate favors the selection of stimulus presentation patterns with increased target presentation frequency, which exacerbates refractory effects and negatively impacts performance within the context of an oddball paradigm. An information-theoretic approach that seeks to understand the fundamental trade-off between information rate and reliability is desirable. We developed a performance-based paradigm (PBP) by tuning specific parameters of the stimulus presentation paradigm to maximize performance while minimizing refractory effects. We used a probabilistic-based performance prediction method as an evaluation criterion to select a final configuration of the PBP. With our PBP, we demonstrate statistically significant improvements in online performance, both in accuracy and spelling rate, compared to the conventional row-column paradigm. By accounting for
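One standard way to quantify the rate-reliability trade-off mentioned above is Wolpaw's information transfer rate per selection; the paper's information-theoretic treatment is more elaborate, so the formula below is only a commonly used reference point, with an illustrative speller size and accuracy.

```python
# Sketch: Wolpaw's information transfer rate (bits per selection), a standard
# way of quantifying the rate/reliability trade-off in a P300 speller.
import math

def wolpaw_bits_per_selection(n_classes, accuracy):
    """Bits conveyed per selection for an N-class speller at a given accuracy."""
    p = accuracy
    if p <= 1.0 / n_classes:
        return 0.0                      # at or below chance, report zero bits
    bits = math.log2(n_classes) + p * math.log2(p)
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n_classes - 1))
    return bits

# Example: a 36-character speller at 90% accuracy, one selection every 10 s.
bits = wolpaw_bits_per_selection(36, 0.90)
print(f"{bits:.2f} bits/selection, {bits * 6:.1f} bits/min")
```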
Detection of P300 waves in single trials by the wavelet transform (WT).
Demiralp, T; Ademoglu, A; Schürmann, M; Başar-Eroglu, C; Başar, E
1999-01-01
The P300 response is conventionally obtained by averaging the responses to the task-relevant (target) stimuli of the oddball paradigm. However, it is well known that cognitive ERP components show a high variability due to changes of cognitive state during an experimental session. With simple tasks such changes may not be demonstrable by the conventional method of averaging the sweeps chosen according to task-relevance. Therefore, the present work employed a response-based classification procedure to choose the trials containing the P300 component from the whole set of sweeps of an auditory oddball paradigm. For this purpose, the most significant response property reflecting the P300 wave was identified by using the wavelet transform (WT). The application of a 5 octave quadratic B-spline-WT on single sweeps yielded discrete coefficients in each octave with an appropriate time resolution for each frequency range. The main feature indicating a P300 response was the positivity of the 4th delta (0.5-4 Hz) coefficient (310-430 ms) after stimulus onset. The average of selected single sweeps from the whole set of data according to this criterion yielded more enhanced P300 waves compared with the average of the target responses, and the average of the remaining sweeps showed a significantly smaller positivity in the P300 latency range compared with the average of the non-target responses. The combination of sweeps classified according to the task-based and response-based criteria differed significantly. This suggests an influence of changes in cognitive state on the presence of the P300 wave which cannot be assessed by task performance alone. Copyright 1999 Academic Press.
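The selection rule described above can be sketched as follows, assuming PyWavelets: each sweep is decomposed into five wavelet levels and kept if a chosen coefficient of the slowest detail band is positive. PyWavelets has no quadratic B-spline wavelet, so a biorthogonal spline wavelet stands in for it, and which coefficient corresponds to roughly 310-430 ms depends on sampling rate and epoch length; both choices below are assumptions.

```python
# Sketch of the response-based selection rule: decompose each single sweep into
# 5 wavelet levels and keep sweeps whose chosen slow-band ("delta") coefficient
# is positive, taken here as a proxy for a P300 in the 310-430 ms window.
import numpy as np
import pywt

def select_p300_sweeps(sweeps, wavelet="bior3.3", level=5, coeff_index=3):
    """Return the subset of single sweeps classified as containing a P300."""
    selected = []
    for sweep in sweeps:
        coeffs = pywt.wavedec(sweep, wavelet, level=level)
        slow_detail = coeffs[1]               # coarsest detail band after the approximation
        if slow_detail[coeff_index] > 0:      # positive slow coefficient ~ P300-like positivity
            selected.append(sweep)
    return np.array(selected)

rng = np.random.default_rng(0)
sweeps = rng.standard_normal((40, 256))       # 40 placeholder sweeps, 2 s at 128 Hz (assumed)
print(select_p300_sweeps(sweeps).shape)
```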
Mittermeier, Verena; Leicht, Gregor; Karch, Susanne; Hegerl, Ulrich; Möller, Hans-Jürgen; Pogarell, Oliver; Mulert, Christoph
2011-03-01
Several studies suggest that attention to emotional content is related to specific changes in central information processing. In particular, event-related potential (ERP) studies focusing on emotion recognition in pictures and faces or word processing have pointed toward a distinct component of the visual-evoked potential, the EPN ('early posterior negativity'), which has been shown to be related to attention to emotional content. In the present study, we were interested in the existence of a corresponding ERP component in the auditory modality and a possible relationship with the personality dimension extraversion-introversion, as assessed by the NEO Five-Factors Inventory. We investigated 29 healthy subjects using three types of auditory choice tasks: (1) the distinction of syllables with emotional intonation, (2) the identification of the emotional content of adjectives and (3) a purely cognitive control task. Compared with the cognitive control task, emotional paradigms using auditory stimuli evoked an EPN component with a distinct peak after 170 ms (EPN 170). Interestingly, subjects with high scores in the personality trait extraversion showed significantly higher EPN amplitudes for emotional paradigms (syllables and words) than introverted subjects.
ERIC Educational Resources Information Center
Steinbrink, Claudia; Groth, Katarina; Lachmann, Thomas; Riecker, Axel
2012-01-01
This fMRI study investigated phonological vs. auditory temporal processing in developmental dyslexia by means of a German vowel length discrimination paradigm (Groth, Lachmann, Riecker, Muthmann, & Steinbrink, 2011). Behavioral and fMRI data were collected from dyslexics and controls while performing same-different judgments of vowel duration in…
Effects of aging on neuromagnetic mismatch responses to pitch changes.
Cheng, Chia-Hsiung; Baillet, Sylvain; Hsiao, Fu-Jung; Lin, Yung-Yang
2013-06-07
Although aging-related alterations in auditory sensory memory and involuntary change discrimination have been widely studied, it remains controversial whether the mismatch negativity (MMN) or its magnetic counterpart (MMNm) is modulated by physiological aging. This study aimed to examine the effects of aging on mismatch activity to pitch deviants by using whole-head magnetoencephalography (MEG) together with distributed source modeling analysis. The neuromagnetic responses to oddball paradigms consisting of standards (1000 Hz, p=0.85) and deviants (1100 Hz, p=0.15) were recorded in healthy young (n=20) and aged (n=18) male adults. We used minimum norm estimates for source reconstruction to characterize the spatiotemporal neural dynamics of MMNm responses. Distributed activations to MMNm were identified in the bilateral fronto-temporo-parietal areas. Compared to younger participants, the elderly exhibited a significant reduction of cortical activation in bilateral superior temporal gyri, superior temporal sulci, inferior frontal gyri, orbitofrontal cortices and right inferior parietal lobules. In conclusion, our results suggest an aging-related decline in auditory sensory memory and automatic change detection as indexed by MMNm. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Natural stimuli improve auditory BCIs with respect to ergonomics and performance
NASA Astrophysics Data System (ADS)
Höhne, Johannes; Krenzlin, Konrad; Dähne, Sven; Tangermann, Michael
2012-08-01
Moving from well-controlled, brisk artificial stimuli to natural and less-controlled stimuli seems counter-intuitive for event-related potential (ERP) studies. As natural stimuli typically contain a richer internal structure, they might introduce higher levels of variance and jitter in the ERP responses. Both characteristics are unfavorable for a good single-trial classification of ERPs in the context of a multi-class brain-computer interface (BCI) system, where the class-discriminant information between target stimuli and non-target stimuli must be maximized. For the application in an auditory BCI system, however, the transition from simple artificial tones to natural syllables can be useful despite the variance introduced. In the presented study, healthy users (N = 9) participated in an offline auditory nine-class BCI experiment with artificial and natural stimuli. It is shown that the use of syllables as natural stimuli not only improves the users’ ergonomic ratings but also increases the classification performance. Moreover, natural stimuli obtain a better balance in multi-class decisions, such that the number of systematic confusions between the nine classes is reduced. Hopefully, our findings may contribute to making auditory BCI paradigms more user friendly and applicable for patients.
ERIC Educational Resources Information Center
Mokhemar, Mary Ann
This kit for assessing central auditory processing disorders (CAPD), in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…
Auditory training improves auditory performance in cochlear implanted children.
Roman, Stephane; Rochette, Françoise; Triglia, Jean-Michel; Schön, Daniele; Bigand, Emmanuel
2016-07-01
While the positive benefits of pediatric cochlear implantation on language perception skills are now proven, the heterogeneity of outcomes remains high. The understanding of this heterogeneity and possible strategies to minimize it is of utmost importance. Our aim here is to test the effects of an auditory training strategy, "Sound in Hands", using playful tasks grounded in the theoretical and empirical findings of cognitive sciences. Indeed, several basic auditory operations, such as auditory scene analysis (ASA), are not trained in the usual therapeutic interventions in deaf children. However, as they constitute a fundamental basis of auditory cognition, their development should yield general benefits in auditory processing and in turn enhance speech perception. The purpose of the present study was to determine whether cochlear implanted children could improve auditory performance on trained tasks and whether they could develop a transfer of learning to a phonetic discrimination test. Nineteen prelingually deaf children with unilateral cochlear implants and no additional handicap (4-10 years old) were recruited. The four main auditory cognitive processes (identification, discrimination, ASA and auditory memory) were stimulated and trained in the Experimental Group (EG) using Sound in Hands. The EG followed 20 weekly training sessions of 30 min and the untrained group was the control group (CG). Two measures were taken for both groups: before training (T1) and after training (T2). EG showed a significant improvement in the identification, discrimination and auditory memory tasks. The improvement in the ASA task did not reach significance. CG did not show any significant improvement in any of the tasks assessed. Most importantly, improvement was visible in the phonetic discrimination test for EG only. Moreover, younger children benefited more from the auditory training program to develop their phonetic abilities compared to older children, supporting the idea that
Auditory Perceptual Abilities Are Associated with Specific Auditory Experience
Zaltz, Yael; Globerson, Eitan; Amir, Noam
2017-01-01
The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which miniscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks that were found superior for musicians. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels), and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels), were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant discrimination
Meyer, Georg F.; Wong, Li Ting; Timson, Emma; Perfect, Philip; White, Mark D.
2012-01-01
We argue that objective fidelity evaluation of virtual environments, such as flight simulation, should be human-performance-centred and task-specific rather than measure the match between simulation and physical reality. We show how principled experimental paradigms and behavioural models to quantify human performance in simulated environments that have emerged from research in multisensory perception provide a framework for the objective evaluation of the contribution of individual cues to human performance measures of fidelity. We present three examples in a flight simulation environment as a case study: Experiment 1: Detection and categorisation of auditory and kinematic motion cues; Experiment 2: Performance evaluation in a target-tracking task; Experiment 3: Transferrable learning of auditory motion cues. We show how the contribution of individual cues to human performance can be robustly evaluated for each task and that the contribution is highly task dependent. The same auditory cues that can be discriminated and are optimally integrated in experiment 1, do not contribute to target-tracking performance in an in-flight refuelling simulation without training, experiment 2. In experiment 3, however, we demonstrate that the auditory cue leads to significant, transferrable, performance improvements with training. We conclude that objective fidelity evaluation requires a task-specific analysis of the contribution of individual cues. PMID:22957068
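The notion of cues being "optimally integrated" is commonly formalized as inverse-variance-weighted (maximum-likelihood) cue combination; the sketch below shows that rule with made-up auditory and visual estimates, and is not the specific model used in the study.

```python
# Sketch of the maximum-likelihood cue-combination rule often used to formalize
# "optimal integration": each cue is weighted by its reliability (inverse
# variance), and the combined variance is lower than either cue alone.
def integrate(est_a, var_a, est_v, var_v):
    """Inverse-variance-weighted combination of an auditory and a visual estimate."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    w_v = 1.0 - w_a
    combined_est = w_a * est_a + w_v * est_v
    combined_var = 1.0 / (1.0 / var_a + 1.0 / var_v)
    return combined_est, combined_var

# A noisy auditory motion estimate combined with a more reliable visual one
# (numbers are illustrative, not from the study):
print(integrate(est_a=10.0, var_a=4.0, est_v=12.0, var_v=1.0))  # -> (11.6, 0.8)
```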
Oscillatory support for rapid frequency change processing in infants.
Musacchia, Gabriella; Choudhury, Naseem A; Ortiz-Mantilla, Silvia; Realpe-Bonilla, Teresa; Roesler, Cynthia P; Benasich, April A
2013-11-01
Rapid auditory processing and auditory change detection abilities are crucial aspects of speech and language development, particularly in the first year of life. Animal models and adult studies suggest that oscillatory synchrony, and in particular low-frequency oscillations play key roles in this process. We hypothesize that infant perception of rapid pitch and timing changes is mediated, at least in part, by oscillatory mechanisms. Using event-related potentials (ERPs), source localization and time-frequency analysis of event-related oscillations (EROs), we examined the neural substrates of rapid auditory processing in 4-month-olds. During a standard oddball paradigm, infants listened to tone pairs with invariant standard (STD, 800-800 Hz) and variant deviant (DEV, 800-1200 Hz) pitch. STD and DEV tone pairs were first presented in a block with a short inter-stimulus interval (ISI) (Rapid Rate: 70 ms ISI), followed by a block of stimuli with a longer ISI (Control Rate: 300 ms ISI). Results showed greater ERP peak amplitude in response to the DEV tone in both conditions and later and larger peaks during Rapid Rate presentation, compared to the Control condition. Sources of neural activity, localized to right and left auditory regions, showed larger and faster activation in the right hemisphere for both rate conditions. Time-frequency analysis of the source activity revealed clusters of theta band enhancement to the DEV tone in right auditory cortex for both conditions. Left auditory activity was enhanced only during Rapid Rate presentation. These data suggest that local low-frequency oscillatory synchrony underlies rapid processing and can robustly index auditory perception in young infants. Furthermore, left hemisphere recruitment during rapid frequency change discrimination suggests a difference in the spectral and temporal resolution of right and left hemispheres at a very young age. © 2013 Elsevier Ltd. All rights reserved.
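As an illustration of the time-frequency step described above, the sketch below extracts theta-band (4-7 Hz) power from a simulated source waveform with a continuous Morlet wavelet transform in PyWavelets; the sampling rate, epoch, and band edges are illustrative assumptions.

```python
# Sketch: event-related theta-band (4-7 Hz) power from a simulated source
# waveform using a continuous Morlet wavelet transform (PyWavelets).
import numpy as np
import pywt

fs = 250.0                                    # Hz, hypothetical sampling rate
t = np.arange(-0.2, 0.8, 1.0 / fs)            # one epoch around stimulus onset
signal = np.sin(2 * np.pi * 5.5 * t) * np.exp(-((t - 0.3) ** 2) / 0.02)  # theta burst
signal += 0.3 * np.random.default_rng(0).standard_normal(t.size)

freqs = np.arange(4.0, 7.5, 0.5)              # theta band, in Hz
scales = pywt.central_frequency("morl") * fs / freqs
coefs, actual_freqs = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
theta_power = np.abs(coefs) ** 2              # time-frequency power, shape (n_freqs, n_times)
print("peak mean theta power at sample", theta_power.mean(axis=0).argmax())
```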
Neural effects of cognitive control load on auditory selective attention
Sabri, Merav; Humphries, Colin; Verber, Matthew; Liebenthal, Einat; Binder, Jeffrey R.; Mangalathu, Jain; Desai, Anjali
2014-01-01
Whether and how working memory disrupts or alters auditory selective attention is unclear. We compared simultaneous event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across high and low working memory load in a dichotic-listening paradigm. Participants performed n-back tasks (1-back, 2-back) in one ear (Attend ear) while ignoring task-irrelevant speech sounds in the other ear (Ignore ear). The effects of working memory load on selective attention were observed at 130-210 msec, with higher load resulting in greater irrelevant syllable-related activation in localizer-defined regions in auditory cortex. The interaction between memory load and presence of irrelevant information revealed stronger activations primarily in frontal and parietal areas due to presence of irrelevant information in the higher memory load. Joint independent component analysis of ERP and fMRI data revealed that the ERP component in the N1 time-range is associated with activity in superior temporal gyrus and medial prefrontal cortex. These results demonstrate a dynamic relationship between working memory load and auditory selective attention, in agreement with the load model of attention and the idea of common neural resources for memory and attention. PMID:24946314
Early electrophysiological markers of atypical language processing in prematurely born infants.
Paquette, Natacha; Vannasing, Phetsamone; Tremblay, Julie; Lefebvre, Francine; Roy, Marie-Sylvie; McKerral, Michelle; Lepore, Franco; Lassonde, Maryse; Gallagher, Anne
2015-12-01
Because nervous system development may be affected by prematurity, many prematurely born children present language or cognitive disorders at school age. The goal of this study is to investigate whether these impairments can be identified early in life using electrophysiological auditory event-related potentials (AERPs) and mismatch negativity (MMN). Brain responses to speech and non-speech stimuli were assessed in prematurely born children to identify early electrophysiological markers of language and cognitive impairments. Participants were 74 children (41 full-term, 33 preterm) aged 3, 12, and 36 months. Pre-attentional auditory responses (MMN and AERPs) were assessed using an oddball paradigm, with speech and non-speech stimuli presented in counterbalanced order between participants. Language and cognitive development were assessed using the Bayley Scale of Infant Development, Third Edition (BSID-III). Results show that preterms as young as 3 months old had delayed MMN response to speech stimuli compared to full-terms. A significant negative correlation was also found between MMN latency to speech sounds and the BSID-III expressive language subscale. However, no significant differences between full-terms and preterms were found for the MMN to non-speech stimuli, suggesting preserved pre-attentional auditory discrimination abilities in these children. Identification of early electrophysiological markers for delayed language development could facilitate timely interventions. Copyright © 2015 Elsevier Ltd. All rights reserved.
Ranaweera, Ruwan D; Kwon, Minseok; Hu, Shuowen; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M
2016-01-01
This study investigated the hemisphere-specific effects of the temporal pattern of imaging related acoustic noise on auditory cortex activation. Hemodynamic responses (HDRs) to five temporal patterns of imaging noise corresponding to noise generated by unique combinations of imaging volume and effective repetition time (TR), were obtained using a stroboscopic event-related paradigm with extra-long (≥27.5 s) TR to minimize inter-acquisition effects. In addition to confirmation that fMRI responses in auditory cortex do not behave in a linear manner, temporal patterns of imaging noise were found to modulate both the shape and spatial extent of hemodynamic responses, with classically non-auditory areas exhibiting responses to longer duration noise conditions. Hemispheric analysis revealed the right primary auditory cortex to be more sensitive than the left to the presence of imaging related acoustic noise. Right primary auditory cortex responses were significantly larger during all the conditions. This asymmetry of response to imaging related acoustic noise could lead to different baseline activation levels during acquisition schemes using short TR, inducing an observed asymmetry in the responses to an intended acoustic stimulus through limitations of dynamic range, rather than due to differences in neuronal processing of the stimulus. These results emphasize the importance of accounting for the temporal pattern of the acoustic noise when comparing findings across different fMRI studies, especially those involving acoustic stimulation. Copyright © 2015 Elsevier B.V. All rights reserved.
Neurophysiological and Behavioral Responses of Mandarin Lexical Tone Processing
Yu, Yan H.; Shafer, Valerie L.; Sussman, Elyse S.
2017-01-01
Language experience enhances discrimination of speech contrasts at a behavioral-perceptual level, as well as at a pre-attentive level, as indexed by event-related potential (ERP) mismatch negativity (MMN) responses. The enhanced sensitivity could be the result of changes in acoustic resolution and/or long-term memory representations of the relevant information in the auditory cortex. To examine these possibilities, we used a short (ca. 600 ms) vs. long (ca. 2,600 ms) interstimulus interval (ISI) in a passive, oddball discrimination task while obtaining ERPs. These ISI differences were used to test whether cross-linguistic differences in processing Mandarin lexical tone are a function of differences in acoustic resolution and/or differences in long-term memory representations. Bisyllabic nonword tokens that differed in lexical tone categories were presented using a passive listening multiple oddball paradigm. Behavioral discrimination and identification data were also collected. The ERP results revealed robust MMNs to both easy and difficult lexical tone differences for both groups at short ISIs. At long ISIs, there was either no change or an enhanced MMN amplitude for the Mandarin group, but reduced MMN amplitude for the English group. In addition, the Mandarin listeners showed a larger late negativity (LN) discriminative response than the English listeners for lexical tone contrasts in the long ISI condition. Mandarin speakers outperformed English speakers in the behavioral tasks, especially under the long ISI conditions with the more similar lexical tone pair. These results suggest that the acoustic correlates of lexical tone are fairly robust and easily discriminated at short ISIs, when the auditory sensory memory trace is strong. At longer ISIs, beyond 2.5 s, language-specific experience is necessary for robust discrimination. PMID:28321179
Pauletti, C; Mannarelli, D; Locuratolo, N; Vanacore, N; De Lucia, M C; Fattapposta, F
2014-04-01
To investigate whether pre-attentive auditory discrimination is impaired in patients with essential tremor (ET) and to evaluate the role of age at onset in this function. Seventeen non-demented patients with ET and seventeen age- and sex-matched healthy controls underwent an EEG recording during a classical auditory MMN paradigm. MMN latency was significantly prolonged in patients with elderly-onset ET (>65 years) (p=0.046), while no differences emerged in either latency or amplitude between young-onset ET patients and controls. This study represents a tentative indication of a dysfunction of auditory automatic change detection in elderly-onset ET patients, pointing to a selective attentive deficit in this subgroup of ET patients. The delay in pre-attentive auditory discrimination, which affects elderly-onset ET patients alone, further supports the hypothesis that ET represents a heterogeneous family of diseases united by tremor; these diseases are characterized by cognitive differences that may range from a disturbance in a selective cognitive function, such as the automatic part of the orienting response, to more widespread and complex cognitive dysfunctions. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Tesche, Claudia D; Kodituwakku, Piyadasa W; Garcia, Christopher M; Houck, Jon M
2015-01-01
Children exposed to substantial amounts of alcohol in utero display a broad range of morphological and behavioral outcomes, which are collectively referred to as fetal alcohol spectrum disorders (FASDs). Common to all children on the spectrum are cognitive and behavioral problems that reflect central nervous system dysfunction. Little is known, however, about the potential effects of variables such as sex on alcohol-induced brain damage. The goal of the current research was to utilize magnetoencephalography (MEG) to examine the effect of sex on brain dynamics in adolescents and young adults with FASD during the performance of an auditory oddball task. The stimuli were short trains of 1 kHz "standard" tone bursts (80%) randomly interleaved with 1.5 kHz "target" tone bursts (10%) and "novel" digital sounds (10%). Participants made motor responses to the target tones. Results are reported for 44 individuals (18 males and 26 females) ages 12 through 22 years. Nine males and 13 females had a diagnosis of FASD and the remainder were typically-developing age- and sex-matched controls. The main finding was widespread sex-specific differential activation of the frontal, medial and temporal cortex in adolescents with FASD compared to typically developing controls. Significant differences in evoked-response and time-frequency measures of brain dynamics were observed for all stimulus types in the auditory cortex, inferior frontal sulcus and hippocampus. These results underscore the importance of considering the influence of sex when analyzing neurophysiological data in children with FASD.
EEG phase reset due to auditory attention: an inverse time-scale approach.
Low, Yin Fen; Strauss, Daniel J
2009-08-01
We propose a novel tool to evaluate electroencephalographic (EEG) phase reset due to auditory attention by utilizing, for the first time, an inverse analysis of the instantaneous phase. EEGs were acquired through auditory attention experiments with a maximum entropy stimulation paradigm. We examined single sweeps of the auditory late response (ALR) with the complex continuous wavelet transform. The phase in the frequency band associated with auditory attention (6-10 Hz, termed the theta-alpha border) was reset to the mean phase of the averaged EEGs. The inverse transform was applied to reconstruct the phase-modified signal. We found significant enhancement of the N100 wave in the reconstructed signal. Analysis of the phase noise shows the effects of phase jittering on the generation of the N100 wave, implying that a preferred phase is necessary to generate the event-related potential (ERP). Power spectrum analysis shows a remarkable increase of evoked power but little change of total power after stabilizing the phase of the EEGs. Furthermore, resetting the phase only at the theta-alpha border of the no-attention data to the mean phase of the attention data yields a result that resembles the attention data. These results show strong connections between EEG and ERPs; in particular, we suggest that the presentation of an auditory stimulus triggers the phase reset process at the theta-alpha border, which leads to the emergence of the N100 wave. It is concluded that our study reinforces other studies on the importance of the EEG in ERP genesis.
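The study above performs the phase reset with a complex continuous wavelet transform and its inverse. The sketch below only approximates that idea under simpler assumptions (band-pass filtering plus the Hilbert analytic signal, an assumed 500 Hz sampling rate): the 6-10 Hz phase of each single sweep is replaced by the phase of the band-limited average ERP, while the sweep's own envelope and out-of-band content are kept.

```python
# Approximate sketch of the phase-reset idea (not the authors' wavelet code):
# replace the 6-10 Hz instantaneous phase of a single sweep with the phase of
# the band-limited average ERP, keeping the sweep's envelope and the residual.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                                   # assumed sampling rate (Hz)
b, a = butter(4, [6 / (fs / 2), 10 / (fs / 2)], btype="band")

def reset_theta_alpha_phase(sweep, avg_erp):
    band_sweep = filtfilt(b, a, sweep)       # 6-10 Hz component of the single sweep
    band_avg = filtfilt(b, a, avg_erp)       # 6-10 Hz component of the average ERP
    env = np.abs(hilbert(band_sweep))        # keep the sweep's own amplitude envelope
    target_phase = np.angle(hilbert(band_avg))
    reset_band = env * np.cos(target_phase)  # band component with the "reset" phase
    return (sweep - band_sweep) + reset_band # add back the out-of-band residual

# toy usage with synthetic sweeps (n_sweeps x n_samples)
rng = np.random.default_rng(1)
sweeps = rng.standard_normal((40, int(fs)))
avg = sweeps.mean(axis=0)
reconstructed = np.array([reset_theta_alpha_phase(s, avg) for s in sweeps])
print(reconstructed.shape)
```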
Direct recordings from the auditory cortex in a cochlear implant user.
Nourski, Kirill V; Etler, Christine P; Brugge, John F; Oya, Hiroyuki; Kawasaki, Hiroto; Reale, Richard A; Abbas, Paul J; Brown, Carolyn J; Howard, Matthew A
2013-06-01
Electrical stimulation of the auditory nerve with a cochlear implant (CI) is the method of choice for treatment of severe-to-profound hearing loss. Understanding how the human auditory cortex responds to CI stimulation is important for advances in stimulation paradigms and rehabilitation strategies. In this study, auditory cortical responses to CI stimulation were recorded intracranially in a neurosurgical patient to examine directly the functional organization of the auditory cortex and compare the findings with those obtained in normal-hearing subjects. The subject was a bilateral CI user with a 20-year history of deafness and refractory epilepsy. As part of the epilepsy treatment, a subdural grid electrode was implanted over the left temporal lobe. Pure tones, click trains, sinusoidal amplitude-modulated noise, and speech were presented via the auxiliary input of the right CI speech processor. Additional experiments were conducted with bilateral CI stimulation. Auditory event-related changes in cortical activity, characterized by the averaged evoked potential and event-related band power, were localized to posterolateral superior temporal gyrus. Responses were stable across recording sessions and were abolished under general anesthesia. Response latency decreased and magnitude increased with increasing stimulus level. More apical intracochlear stimulation yielded the largest responses. Cortical evoked potentials were phase-locked to the temporal modulations of periodic stimuli and speech utterances. Bilateral electrical stimulation resulted in minimal artifact contamination. This study demonstrates the feasibility of intracranial electrophysiological recordings of responses to CI stimulation in a human subject, shows that cortical response properties may be similar to those obtained in normal-hearing individuals, and provides a basis for future comparisons with extracranial recordings.
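Event-related band power of the kind reported here is commonly computed by band-passing the signal, taking the Hilbert envelope, and expressing power in decibels relative to a pre-stimulus baseline. The sketch below follows that generic recipe with an assumed sampling rate and epoch layout; it is not the authors' analysis code.

```python
# Hedged sketch of an event-related band power (ERBP) computation in the
# high-gamma range (70-150 Hz): band-pass, Hilbert envelope, power in dB
# relative to a pre-stimulus baseline. Sampling rate and epoching are assumed.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 2000.0                                   # assumed ECoG sampling rate
b, a = butter(4, [70 / (fs / 2), 150 / (fs / 2)], btype="band")

def high_gamma_erbp(epochs, baseline_samples):
    """epochs: (n_trials, n_samples); returns trial-averaged ERBP in dB."""
    hg = filtfilt(b, a, epochs, axis=-1)
    power = np.abs(hilbert(hg, axis=-1)) ** 2
    base = power[:, :baseline_samples].mean(axis=-1, keepdims=True)
    return (10 * np.log10(power / base)).mean(axis=0)

rng = np.random.default_rng(2)
epochs = rng.standard_normal((60, int(1.0 * fs)))    # 60 toy trials, 1 s each
erbp = high_gamma_erbp(epochs, baseline_samples=int(0.2 * fs))
print(erbp.shape)
```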
Auditory-visual object recognition time suggests specific processing for animal sounds.
Suied, Clara; Viaud-Delmon, Isabelle
2009-01-01
Recognizing an object requires binding together several cues, which may be distributed across different sensory modalities, and ignoring competing information originating from other objects. In addition, knowledge of the semantic category of an object is fundamental to determine how we should react to it. Here we investigate the role of semantic categories in the processing of auditory-visual objects. We used an auditory-visual object-recognition task (go/no-go paradigm). We compared recognition times for two categories: a biologically relevant one (animals) and a non-biologically relevant one (means of transport). Participants were asked to react as fast as possible to target objects, presented in the visual and/or the auditory modality, and to withhold their response for distractor objects. A first main finding was that, when participants were presented with unimodal or bimodal congruent stimuli (an image and a sound from the same object), similar reaction times were observed for all object categories. Thus, there was no advantage in the speed of recognition for biologically relevant compared to non-biologically relevant objects. A second finding was that, in the presence of a biologically relevant auditory distractor, the processing of a target object was slowed down, whether or not it was itself biologically relevant. It seems impossible to effectively ignore an animal sound, even when it is irrelevant to the task. These results suggest a specific and mandatory processing of animal sounds, possibly due to phylogenetic memory and consistent with the idea that hearing is particularly efficient as an alerting sense. They also highlight the importance of taking into account the auditory modality when investigating the way object concepts of biologically relevant categories are stored and retrieved.
Sensory-motor interactions for vocal pitch monitoring in non-primary human auditory cortex.
Greenlee, Jeremy D W; Behroozmand, Roozbeh; Larson, Charles R; Jackson, Adam W; Chen, Fangxiang; Hansen, Daniel R; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A
2013-01-01
The neural mechanisms underlying processing of auditory feedback during self-vocalization are poorly understood. One technique used to study the role of auditory feedback involves shifting the pitch of the feedback that a speaker receives, known as pitch-shifted feedback. We utilized a pitch shift self-vocalization and playback paradigm to investigate the underlying neural mechanisms of audio-vocal interaction. High-resolution electrocorticography (ECoG) signals were recorded directly from auditory cortex of 10 human subjects while they vocalized and received brief downward (-100 cents) pitch perturbations in their voice auditory feedback (speaking task). ECoG was also recorded when subjects passively listened to playback of their own pitch-shifted vocalizations. Feedback pitch perturbations elicited average evoked potential (AEP) and event-related band power (ERBP) responses, primarily in the high gamma (70-150 Hz) range, in focal areas of non-primary auditory cortex on superior temporal gyrus (STG). The AEPs and high gamma responses were both modulated by speaking compared with playback in a subset of STG contacts. From these contacts, a majority showed significant enhancement of high gamma power and AEP responses during speaking while the remaining contacts showed attenuated response amplitudes. The speaking-induced enhancement effect suggests that engaging the vocal motor system can modulate auditory cortical processing of self-produced sounds in such a way as to increase neural sensitivity for feedback pitch error detection. It is likely that mechanisms such as efference copies may be involved in this process, and modulation of AEP and high gamma responses imply that such modulatory effects may affect different cortical generators within distinctive functional networks that drive voice production and control.
Seeing the Song: Left Auditory Structures May Track Auditory-Visual Dynamic Alignment
Mossbridge, Julia A.; Grabowecky, Marcia; Suzuki, Satoru
2013-01-01
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873
Kiiski, Hanni; Jollans, Lee; Donnchadha, Seán Ó; Nolan, Hugh; Lonergan, Róisín; Kelly, Siobhán; O'Brien, Marie Claire; Kinsella, Katie; Bramham, Jessica; Burke, Teresa; Hutchinson, Michael; Tubridy, Niall; Reilly, Richard B; Whelan, Robert
2018-05-01
Event-related potentials (ERPs) show promise to be objective indicators of cognitive functioning. The aim of the study was to examine whether ERPs recorded during an oddball task would predict cognitive functioning and information processing speed in Multiple Sclerosis (MS) patients and controls at the individual level. Seventy-eight participants (35 MS patients, 43 healthy age-matched controls) completed visual and auditory 2- and 3-stimulus oddball tasks with 128-channel EEG, and a neuropsychological battery, at baseline (month 0) and at months 13 and 26. ERPs from 0 to 700 ms and across the whole scalp were transformed into 1728 individual spatio-temporal datapoints per participant. A machine learning method that included penalized linear regression used the entire spatio-temporal ERP to predict composite scores of both cognitive functioning and processing speed at baseline (month 0) and at months 13 and 26. The results showed that ERPs during the visual oddball tasks could predict cognitive functioning and information processing speed at baseline and a year later in a sample of MS patients and healthy controls. In contrast, ERPs during auditory tasks were not predictive of cognitive performance. These objective neurophysiological indicators of cognitive functioning and processing speed, and machine learning methods that can interrogate high-dimensional data, show promise in outcome prediction.
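The abstract specifies only that a penalized linear regression was applied to 1728 spatio-temporal ERP datapoints per participant. A minimal sketch under assumed choices (an elastic-net penalty, synthetic data with the stated dimensions) might look like this:

```python
# Sketch of predicting a composite cognitive score from spatio-temporal ERP
# features with penalized linear regression (assumed: elastic net; the exact
# penalty and tuning in the study may differ). Data here are synthetic.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict, KFold

rng = np.random.default_rng(3)
X = rng.standard_normal((78, 1728))        # 78 participants x 1728 ERP datapoints
w = np.zeros(1728); w[:25] = rng.standard_normal(25)
y = X @ w + rng.standard_normal(78)        # toy composite cognitive score

model = make_pipeline(
    StandardScaler(),
    ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5, max_iter=5000),
)
pred = cross_val_predict(model, X, y, cv=KFold(5, shuffle=True, random_state=0))
print("out-of-sample r =", np.corrcoef(pred, y)[0, 1])
```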
Decoding power-spectral profiles from FMRI brain activities during naturalistic auditory experience.
Hu, Xintao; Guo, Lei; Han, Junwei; Liu, Tianming
2017-02-01
Recent studies have demonstrated a close relationship between computational acoustic features and neural brain activities, and have largely advanced our understanding of auditory information processing in the human brain. Along this line, we proposed a multidisciplinary study to examine whether power spectral density (PSD) profiles can be decoded from brain activities during naturalistic auditory experience. The study was performed on a high resolution functional magnetic resonance imaging (fMRI) dataset acquired when participants freely listened to the audio-description of the movie "Forrest Gump". Representative PSD profiles existing in the audio-movie were identified by clustering the audio samples according to their PSD descriptors. Support vector machine (SVM) classifiers were trained to differentiate the representative PSD profiles using the corresponding fMRI brain activities. Based on PSD profile decoding, we explored how the neural decodability correlated with power intensity and frequency deviants. Our experimental results demonstrated that PSD profiles can be reliably decoded from brain activities. We also suggested a sigmoidal relationship between the neural decodability and the power intensity deviants of PSD profiles. Our study additionally substantiates the feasibility and advantage of the naturalistic paradigm for studying neural encoding of complex auditory information.
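A simplified sketch of the two stages described above, using synthetic data and assumed parameter choices (Welch PSDs, k-means with four clusters, a linear SVM), could look like the following; it is not the study's actual pipeline.

```python
# Simplified sketch of the described pipeline (synthetic data): PSD descriptors
# from audio segments via Welch's method, k-means to define representative PSD
# profiles, then a linear SVM decoding the profile label from fMRI patterns.
import numpy as np
from scipy.signal import welch
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
sr = 16000
audio_segments = rng.standard_normal((200, sr))        # 200 one-second toy segments

# Stage 1: PSD descriptors and representative profiles.
_, psd = welch(audio_segments, fs=sr, nperseg=1024, axis=-1)
log_psd = np.log(psd + 1e-12)
profile_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(log_psd)

# Stage 2: decode the profile label from (toy) fMRI activity patterns
# time-locked to the same segments.
fmri_patterns = rng.standard_normal((200, 3000))       # 200 segments x 3000 voxels
acc = cross_val_score(SVC(kernel="linear"), fmri_patterns, profile_labels, cv=5)
print("decoding accuracy:", acc.mean())
```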
Bruggemann, Jason M; Stockill, Helen V; Lenroot, Rhoshel K; Laurens, Kristin R
2013-09-01
Identification of markers of abnormal brain function in children at-risk of schizophrenia may inform early intervention and prevention programs. Individuals with schizophrenia are characterised by attenuation of MMN amplitude, which indexes automatic auditory sensory processing. The current aim was to examine whether children who may be at increased risk of schizophrenia due to their presenting multiple putative antecedents of schizophrenia (ASz) are similarly characterised by MMN amplitude reductions, relative to typically developing (TD) children. EEG was recorded from 22 ASz and 24 TD children aged 9 to 12 years (matched on age, sex, and IQ) during a passive auditory oddball task (15% duration deviant). ASz children were those presenting: (1) speech and/or motor development lags/problems; (2) social, emotional, or behavioural problems in the clinical range; and (3) psychotic-like experiences. TD children presented no antecedents, and had no family history of a schizophrenia spectrum disorder. MMN amplitude, but not latency, was significantly greater at frontal sites in the ASz group than in the TD group. Although the MMN exhibited by the children at risk of schizophrenia was unlike that of their typically developing peers, it also differed from the reduced MMN amplitude observed in adults with schizophrenia. This may reflect developmental and disease effects in a pre-prodromal phase of psychosis onset. Longitudinal follow-up is necessary to establish the developmental trajectory of MMN in at-risk children. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
EEG Responses to Auditory Stimuli for Automatic Affect Recognition
Hettich, Dirk T.; Bolinger, Elaina; Matuz, Tamara; Birbaumer, Niels; Rosenstiel, Wolfgang; Spüler, Martin
2016-01-01
Brain state classification for communication and control has been well established in the area of brain-computer interfaces over the last decades. Recently, the passive and automatic extraction of additional information regarding the psychological state of users from neurophysiological signals has gained increased attention in the interdisciplinary field of affective computing. We investigated how well specific emotional reactions, induced by auditory stimuli, can be detected in EEG recordings. We introduce an auditory emotion induction paradigm based on the International Affective Digitized Sounds 2nd Edition (IADS-2) database also suitable for disabled individuals. Stimuli are grouped in three valence categories: unpleasant, neutral, and pleasant. Significant differences in time-domain event-related potentials are found in the electroencephalogram (EEG) between unpleasant and neutral, as well as pleasant and neutral conditions over midline electrodes. Time-domain data were classified in three binary classification problems using a linear support vector machine (SVM) classifier. We discuss three classification performance measures in the context of affective computing and outline some strategies for conducting and reporting affect classification studies. PMID:27375410
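The three binary classification problems described above can be sketched with a linear SVM and cross-validated balanced accuracy. Everything below is synthetic and illustrative; the feature extraction, channel selection, and performance measures used in the study are not reproduced.

```python
# Sketch of three binary valence classification problems with a linear SVM
# on time-domain EEG features (synthetic data; feature extraction simplified).
import numpy as np
from itertools import combinations
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials_per_class, n_features = 60, 64 * 128          # e.g. channels x time points
data = {c: rng.standard_normal((n_trials_per_class, n_features))
        for c in ("unpleasant", "neutral", "pleasant")}

for c1, c2 in combinations(data, 2):
    X = np.vstack([data[c1], data[c2]])
    y = np.r_[np.zeros(n_trials_per_class), np.ones(n_trials_per_class)]
    acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5,
                          scoring="balanced_accuracy")
    print(f"{c1} vs {c2}: balanced accuracy = {acc.mean():.2f}")
```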
Zingiber officinale Improves Cognitive Function of the Middle-Aged Healthy Women
Saenghong, Naritsara; Wattanathorn, Jintanaporn; Muchimapura, Supaporn; Tongun, Terdthai; Piyavhatkul, Nawanant; Banchonglikitkul, Chuleratana; Kajsongkram, Tanwarat
2012-01-01
The development of cognitive enhancers from plants possessing antioxidants has gained much attention due to the role of oxidative stress-induced cognitive impairment. Thus, this study aimed to determine the effect of ginger extract, or Zingiber officinale, on the cognitive function of middle-aged, healthy women. Sixty participants were randomly assigned to receive a placebo or standardized plant extract at doses of 400 and 800 mg once daily for 2 months. They were evaluated for working memory and cognitive function using computerized battery tests and the auditory oddball paradigm of event-related potentials at three different time periods: before receiving the intervention, one month, and two months. We found that the ginger-treated groups had significantly decreased P300 latencies, increased N100 and P300 amplitudes, and exhibited enhanced working memory. Therefore, ginger is a potential cognitive enhancer for middle-aged women. PMID:22235230
Blom, Jan Dirk
2015-01-01
Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.
The Development of Auditory Perception in Children Following Auditory Brainstem Implantation
Colletti, Liliana; Shannon, Robert V.; Colletti, Vittorio
2014-01-01
Auditory brainstem implants (ABI) can provide useful auditory perception and language development in deaf children who are not able to use a cochlear implant (CI). We prospectively followed up a consecutive group of 64 deaf children for up to 12 years following ABI implantation. The etiology of deafness in these children was: cochlear nerve aplasia in 49, auditory neuropathy in 1, cochlear malformations in 8, bilateral cochlear post-meningitic ossification in 3, NF2 in 2, and bilateral cochlear fractures due to a head injury in 1. Thirty-five children had other congenital non-auditory disabilities. Twenty-two children had previous CIs with no benefit. Fifty-eight children were fitted with the Cochlear 24 ABI device and six with the MedEl ABI device, and all children followed the same rehabilitation program. Auditory perceptual abilities were evaluated on the Categories of Auditory Performance (CAP) scale. No child was lost to follow-up and there were no exclusions from the study. All children showed significant improvement in auditory perception with implant experience. Seven children (11%) were able to achieve the highest score on the CAP test; they were able to converse on the telephone within 3 years of implantation. Twenty children (31.3%) achieved open set speech recognition (CAP score of 5 or greater) and 30 (46.9%) achieved a CAP level of 4 or greater. Of the 29 children without non-auditory disabilities, 18 (62%) achieved a CAP score of 5 or greater with the ABI. All children showed continued improvements in auditory skills over time. The long-term results of ABI implantation reveal significant auditory benefit in most children, and open set auditory recognition in many. PMID:25377987
Tavakoli, Paniz; Boafo, Addo; Dale, Allyson; Robillard, Rebecca; Greenham, Stephanie L; Campbell, Kenneth
2018-01-01
Impaired executive functions, modulated by the frontal lobes, have been suggested to be associated with suicidal behavior. The present study examines one of these executive functions, attentional control, maintaining attention to the task-at-hand. A group of inpatient adolescents with acute suicidal behavior and healthy controls were studied using a passively presented auditory optimal paradigm. This "optimal" paradigm consisted of a series of frequently presented homogenous pure tone "standards" and different "deviants," constructed by changing one or more features of the standard. The optimal paradigm has been shown to be a more time-efficient replacement to the traditional oddball paradigm, which makes it suitable for use in clinical populations. The extent of processing of these "to-be-ignored" auditory stimuli was measured by recording event-related potentials (ERPs). The P3a ERP component is thought to reflect processes associated with the capturing of attention. Rare and novel stimuli may result in an executive decision to switch attention away from the current cognitive task and toward a probe of the potentially more relevant "interrupting" auditory input. On the other hand, stimuli that are quite similar to the standard should not elicit P3a. The P3a has been shown to be larger in immature brains in early compared to later adolescence. An overall enhanced P3a was observed in the suicidal group. The P3a was larger in this group for both the environmental sound and white noise deviants, although only the environmental sound P3a attained significance. Other deviants representing only a small change from the standard did not elicit a P3a in healthy controls. They did elicit a small P3a in the suicidal group. These findings suggest a lowered threshold for the triggering of the involuntary switch of attention in these patients, which may play a role in their reported distractibility. The enhanced P3a is also suggestive of an immature frontal central executive
Auditory Spatial Perception: Auditory Localization
2012-05-01
[Only figure-caption fragments were recovered from this entry.] Figure 5. Auditory pathways in the central nervous system. LE – left ear, RE – right ear, AN – auditory nerve, CN – cochlear nucleus, TB – trapezoid body, SOC – superior olivary complex, LL – lateral lemniscus, IC – inferior colliculus. Adapted from Aharonson and...
Duque, Daniel; Wang, Xin; Nieto-Diego, Javier; Krumbholz, Katrin; Malmierca, Manuel S.
2016-01-01
Electrophysiological and psychophysical responses to a low-intensity probe sound tend to be suppressed by a preceding high-intensity adaptor sound. Nevertheless, rare low-intensity deviant sounds presented among frequent high-intensity standard sounds in an intensity oddball paradigm can elicit an electroencephalographic mismatch negativity (MMN) response. This has been taken to suggest that the MMN is a correlate of true change or “deviance” detection. A key question is where in the ascending auditory pathway true deviance sensitivity first emerges. Here, we addressed this question by measuring low-intensity deviant responses from single units in the inferior colliculus (IC) of anesthetized rats. If the IC exhibits true deviance sensitivity to intensity, IC neurons should show enhanced responses to low-intensity deviant sounds presented among high-intensity standards. Contrary to this prediction, deviant responses were only enhanced when the standards and deviants differed in frequency. The results could be explained with a model assuming that IC neurons integrate over multiple frequency-tuned channels and that adaptation occurs within each channel independently. We used an adaptation paradigm with multiple repeated adaptors to measure the tuning widths of these adaptation channels in relation to the neurons’ overall tuning widths. PMID:27066835
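The channel-adaptation account invoked above can be illustrated with a toy model (parameters and tuning widths are arbitrary, not fitted to the recordings): a model neuron sums several frequency-tuned channels, each of which adapts to its own drive and recovers between stimuli, so a frequency deviant escapes adaptation while a low-intensity deviant does not.

```python
# Toy version of the adaptation-channel idea (illustrative parameters only):
# a model neuron sums frequency-tuned channels; each channel adapts to its own
# drive and recovers between stimuli. A frequency deviant lands on a fresher
# channel and is enhanced; a low-intensity deviant drives the adapted channel.
import numpy as np

channels = np.array([1000.0, 2000.0, 4000.0])     # channel best frequencies (Hz)
sigma_oct = 0.5                                    # channel tuning width (octaves)

def drive(freq):
    return np.exp(-0.5 * (np.log2(freq / channels) / sigma_oct) ** 2)

def run_oddball(seq, adapt=0.5, recover=0.2):
    """seq: list of (frequency_hz, intensity); returns response per stimulus."""
    state = np.ones(len(channels))                 # 1 = unadapted
    responses = []
    for freq, level in seq:
        d = drive(freq) * level
        responses.append(float(np.sum(state * d)))
        state -= adapt * state * drive(freq)       # channel-specific adaptation
        state += recover * (1.0 - state)           # recovery between stimuli
    return responses

std = [(2000.0, 1.0)] * 20
freq_oddball = std + [(4000.0, 1.0)]               # frequency deviant
int_oddball = std + [(2000.0, 0.3)]                # low-intensity deviant
print("frequency deviant:", run_oddball(freq_oddball)[-1],
      "vs adapted standard:", run_oddball(std)[-1])
print("intensity deviant:", run_oddball(int_oddball)[-1])
```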
Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa
2011-07-01
A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.
Beckers, Gabriël J L; Gahr, Manfred
2012-08-01
Auditory systems bias responses to sounds that are unexpected on the basis of recent stimulus history, a phenomenon that has been widely studied using sequences of unmodulated tones (mismatch negativity; stimulus-specific adaptation). Such a paradigm, however, does not directly reflect problems that neural systems normally solve for adaptive behavior. We recorded multiunit responses in the caudomedial auditory forebrain of anesthetized zebra finches (Taeniopygia guttata) at 32 sites simultaneously, to contact calls that recur probabilistically at a rate that is used in communication. Neurons in secondary, but not primary, auditory areas respond preferentially to calls when they are unexpected (deviant) compared with the same calls when they are expected (standard). This response bias is predominantly due to sites more often not responding to standard events than to deviant events. When two call stimuli alternate between standard and deviant roles, most sites exhibit a response bias to deviant events of both stimuli. This suggests that biases are not based on a use-dependent decrease in response strength but involve a more complex mechanism that is sensitive to auditory deviance per se. Furthermore, between many secondary sites, responses are tightly synchronized, a phenomenon that is driven by internal neuronal interactions rather than by the timing of stimulus acoustic features. We hypothesize that this deviance-sensitive, internally synchronized network of neurons is involved in the involuntary capturing of attention by unexpected and behaviorally potentially relevant events in natural auditory scenes.
Auditory short-term memory in the primate auditory cortex.
Scott, Brian H; Mishkin, Mortimer
2016-06-01
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.
Neural Responses to Complex Auditory Rhythms: The Role of Attending
Chapin, Heather L.; Zanto, Theodore; Jantzen, Kelly J.; Kelso, Scott J. A.; Steinberg, Fred; Large, Edward W.
2010-01-01
The aim of this study was to explore the role of attention in pulse and meter perception using complex rhythms. We used a selective attention paradigm in which participants attended to either a complex auditory rhythm or a visually presented word list. Performance on a reproduction task was used to gauge whether participants were attending to the appropriate stimulus. We hypothesized that attention to complex rhythms – which contain no energy at the pulse frequency – would lead to activations in motor areas involved in pulse perception. Moreover, because multiple repetitions of a complex rhythm are needed to perceive a pulse, activations in pulse-related areas would be seen only after sufficient time had elapsed for pulse perception to develop. Selective attention was also expected to modulate activity in sensory areas specific to the modality. We found that selective attention to rhythms led to increased BOLD responses in basal ganglia, and basal ganglia activity was observed only after the rhythms had cycled enough times for a stable pulse percept to develop. These observations suggest that attention is needed to recruit motor activations associated with the perception of pulse in complex rhythms. Moreover, attention to the auditory stimulus enhanced activity in an attentional sensory network including primary auditory cortex, insula, anterior cingulate, and prefrontal cortex, and suppressed activity in sensory areas associated with attending to the visual stimulus. PMID:21833279
Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L
2017-12-13
Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation (acoustic frequency) might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although
Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de
2017-12-07
To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter with and without central auditory processing disorders. The participants were twenty individuals who stutter, aged 7 to 17 years, divided into two groups: the Stuttering Group with Auditory Processing Disorders (SGAPD), 10 individuals with central auditory processing disorders, and the Stuttering Group (SG), 10 individuals without central auditory processing disorders. Procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), and assessment of stuttering severity and central auditory processing (CAP). Phono Tools software was used to impose a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. The DAF caused a statistically significant reduction in the SG: in the frequency score of stuttering-like disfluencies in the Stuttering Severity Instrument analysis, in the number of blocks and repetitions of monosyllabic words, and in the duration of stuttering-like disfluencies. Delayed auditory feedback did not have statistically significant effects on the fluency of the SGAPD, the individuals who stutter with auditory processing disorders. The effect of delayed auditory feedback on the speech fluency of individuals who stutter thus differed between the two groups: fluency improved only in the individuals without an auditory processing disorder.
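The study used Phono Tools to delay the auditory feedback by 100 ms. A generic real-time sketch of the same manipulation, assuming a full-duplex sound card and the python-sounddevice package (an assumption, not the software used in the study), is shown below.

```python
# Generic 100 ms delayed-auditory-feedback loop (not the Phono Tools software
# used in the study): a duplex audio stream writes the microphone signal back
# out 100 ms later via a simple FIFO delay line. Requires a full-duplex device.
import numpy as np
import sounddevice as sd

FS = 44100
DELAY = int(0.100 * FS)                       # 100 ms in samples
buffer = np.zeros(DELAY, dtype=np.float32)    # FIFO delay line

def callback(indata, outdata, frames, time, status):
    global buffer
    if status:
        print(status)
    mono_in = indata[:, 0]
    delayed, buffer = buffer[:frames], np.concatenate([buffer[frames:], mono_in])
    outdata[:, 0] = delayed

with sd.Stream(samplerate=FS, blocksize=1024, channels=1,
               dtype="float32", callback=callback):
    sd.sleep(10_000)                          # run the feedback loop for 10 s
```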
Speech processing: from peripheral to hemispheric asymmetry of the auditory system.
Lazard, Diane S; Collette, Jean-Louis; Perrot, Xavier
2012-01-01
Language processing from the cochlea to auditory association cortices shows side-dependent specificities with an apparent left hemispheric dominance. The aim of this article was to propose to nonspeech specialists a didactic review of two complementary theories about hemispheric asymmetry in speech processing. Starting from anatomico-physiological and clinical observations of auditory asymmetry and interhemispheric connections, this review then exposes behavioral (dichotic listening paradigm) as well as functional (functional magnetic resonance imaging and positron emission tomography) experiments that assessed hemispheric specialization for speech processing. Even though speech at an early phonological level is regarded as being processed bilaterally, a left-hemispheric dominance exists for higher-level processing. This asymmetry may arise from a segregation of the speech signal, broken apart within nonprimary auditory areas in two distinct temporal integration windows – a fast one on the left and a slower one on the right – modeled through the asymmetric sampling in time theory, or from a spectro-temporal trade-off, with a higher temporal resolution in the left hemisphere and a higher spectral resolution in the right hemisphere, modeled through the spectral/temporal resolution trade-off theory. Both theories deal with the concept that lower-order tuning principles for the acoustic signal might drive higher-order organization for speech processing. However, the precise nature, mechanisms, and origin of speech processing asymmetry are still being debated. Finally, an example of hemispheric asymmetry alteration, which has direct clinical implications, is given through the case of auditory aging that mixes peripheral disorder and modifications of central processing. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.
P300 component of event-related potentials in persons with asperger disorder.
Iwanami, Akira; Okajima, Yuka; Ota, Haruhisa; Tani, Masayuki; Yamada, Takashi; Yamagata, Bun; Hashimoto, Ryuichiro; Kanai, Chieko; Takashio, Osamu; Inamoto, Atsuko; Ono, Taisei; Takayama, Yukiko; Kato, Nobumasa
2014-10-01
In the present study, we investigated auditory event-related potentials in adults with Asperger disorder and normal controls using an auditory oddball task and a novelty oddball task. Task performance and the latencies of P300 evoked by both target and novel stimuli in the two tasks did not differ between the two groups. Analysis of variance revealed that there was a significant interaction effect between group and electrode site on the mean amplitude of the P300 evoked by novel stimuli, which indicated that there was an altered distribution of the P300 in persons with Asperger disorder. In contrast, there was no significant interaction effect on the mean P300 amplitude elicited by target stimuli. Considering that P300 comprises two main subcomponents, frontal-central-dominant P3a and parietal-dominant P3b, our results suggested that persons with Asperger disorder have enhanced amplitude of P3a, which indicated activated prefrontal function in this task.
Auditory short-term memory in the primate auditory cortex
Scott, Brian H.; Mishkin, Mortimer
2015-01-01
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active ‘working memory’ bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a ‘match’ stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. PMID:26541581
Matsuzaki, Junko; Kagitani-Shimono, Kuriko; Goto, Tetsu; Sanefuji, Wakako; Yamamoto, Tomoka; Sakai, Saeko; Uchida, Hiroyuki; Hirata, Masayuki; Mohri, Ikuko; Yorifuji, Shiro; Taniike, Masako
2012-01-25
The aim of this study was to investigate the differential responses of the primary auditory cortex to auditory stimuli in autistic spectrum disorder with or without auditory hypersensitivity. Auditory-evoked field values were obtained from 18 boys (nine with and nine without auditory hypersensitivity) with autistic spectrum disorder and 12 age-matched controls. The group with autistic spectrum disorder and hypersensitivity showed significantly more delayed M50/M100 peak latencies than the group without hypersensitivity or the controls. M50 dipole moments in the hypersensitivity group were larger than those in the other two groups [corrected]. M50/M100 peak latencies were correlated with the severity of auditory hypersensitivity; furthermore, severe hypersensitivity induced more behavioral problems. This study indicates that auditory hypersensitivity in autistic spectrum disorder is a characteristic response of the primary auditory cortex, possibly resulting from neurological immaturity or functional abnormalities of this region. © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins.
Pillai, Roshni; Yathiraj, Asha
2017-09-01
The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.
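Agreement between modality conditions was assessed with Bland-Altman plots. A minimal sketch of that computation on synthetic paired scores (bias and 95% limits of agreement) follows.

```python
# Sketch of a Bland-Altman agreement analysis between two modality conditions
# (synthetic paired memory scores): plot per-child differences against means,
# with the bias and 95% limits of agreement (mean diff +/- 1.96 SD).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(6)
auditory = rng.normal(20, 3, size=30)             # toy scores, 30 children
visual = auditory - 2 + rng.normal(0, 1.5, size=30)

mean_scores = (auditory + visual) / 2
diff = auditory - visual
bias, sd = diff.mean(), diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)

plt.scatter(mean_scores, diff)
plt.axhline(bias, color="k")
for lim in loa:
    plt.axhline(lim, color="k", linestyle="--")
plt.xlabel("mean of auditory and visual scores")
plt.ylabel("auditory minus visual")
plt.title(f"bias = {bias:.2f}, LoA = [{loa[0]:.2f}, {loa[1]:.2f}]")
plt.show()
```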
Olivetti Belardinelli, Marta; Santangelo, Valerio
2005-07-08
This paper examines the characteristics of spatial attention orienting in situations of visual impairment. Two groups of subjects, schizophrenic and blind respectively, with different degrees of visual spatial information impairment, were tested. In Experiment 1, the schizophrenic subjects were instructed to detect an auditory target, which was preceded by a visual cue. The cue could appear in the same location as the target, or separated from it by the vertical visual meridian (VM), the vertical head-centered meridian (HCM), or another meridian. Similarly to normal subjects tested with the same paradigm (Ferlazzo, Couyoumdjian, Padovani, and Olivetti Belardinelli, 2002), schizophrenic subjects showed slower reaction times (RTs) when cue and target locations were on opposite sides of the HCM. This HCM effect strengthens the assumption that different auditory and visual spatial maps underlie the representation of attention orienting mechanisms. In Experiment 2, blind subjects were asked to detect an auditory target, which had been preceded by an auditory cue, while staring at an imaginary point. The point was located either to the left or to the right, in order to control for ocular movements and maintain the dissociation between the HCM and the VM. Differences between the HCM crossing and no-crossing conditions were not found. Therefore, it is possible to consider the HCM effect as a consequence of the interaction between the visual and auditory modalities. Related theoretical issues are also discussed.
Mainsah, B. O.; Reeves, G.; Collins, L. M.; Throckmorton, C. S.
2017-08-01
Objective. The role of a brain-computer interface (BCI) is to discern a user’s intended message or action by extracting and decoding relevant information from brain signals. Stimulus-driven BCIs, such as the P300 speller, rely on detecting event-related potentials (ERPs) in response to a user attending to relevant or target stimulus events. However, this process is error-prone because the ERPs are embedded in noisy electroencephalography (EEG) data, representing a fundamental problem in communication of the uncertainty in the information that is received during noisy transmission. A BCI can be modeled as a noisy communication system and an information-theoretic approach can be exploited to design a stimulus presentation paradigm to maximize the information content that is presented to the user. However, previous methods that focused on designing error-correcting codes failed to provide significant performance improvements due to underestimating the effects of psycho-physiological factors on the P300 ERP elicitation process and a limited ability to predict online performance with their proposed methods. Maximizing the information rate favors the selection of stimulus presentation patterns with increased target presentation frequency, which exacerbates refractory effects and negatively impacts performance within the context of an oddball paradigm. An information-theoretic approach that seeks to understand the fundamental trade-off between information rate and reliability is desirable. Approach. We developed a performance-based paradigm (PBP) by tuning specific parameters of the stimulus presentation paradigm to maximize performance while minimizing refractory effects. We used a probabilistic-based performance prediction method as an evaluation criterion to select a final configuration of the PBP. Main results. With our PBP, we demonstrate statistically significant improvements in online performance, both in accuracy and spelling rate, compared to the conventional
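The probabilistic performance-prediction method referred to above is not reproduced here; as a hedged illustration of the rate-versus-reliability trade-off it addresses, the sketch below uses the standard Wolpaw information-transfer-rate formula for an N-choice speller with assumed accuracies and selection times.

```python
# The standard Wolpaw information-transfer-rate formula for an N-choice speller,
# illustrating the rate/reliability trade-off discussed above (this is not the
# authors' probabilistic performance-prediction method).
import numpy as np

def bits_per_selection(accuracy, n_choices=36):
    p, n = accuracy, n_choices
    if p >= 1.0:
        return np.log2(n)
    return (np.log2(n) + p * np.log2(p)
            + (1 - p) * np.log2((1 - p) / (n - 1)))

def itr_bits_per_min(accuracy, seconds_per_selection, n_choices=36):
    return bits_per_selection(accuracy, n_choices) * 60.0 / seconds_per_selection

# Faster but less reliable selections do not always win:
for acc, dur in [(0.95, 12.0), (0.75, 6.0), (0.55, 4.0)]:
    print(f"accuracy {acc:.2f}, {dur:>4.1f} s/selection ->",
          f"{itr_bits_per_min(acc, dur):.2f} bits/min")
```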
Exploring Modality Compatibility in the Response-Effect Compatibility Paradigm.
Földes, Noémi; Philipp, Andrea M; Badets, Arnaud; Koch, Iring
2017-01-01
According to ideomotor theory, action planning is based on anticipatory perceptual representations of action-effects. This aspect of action control has been investigated in studies using the response-effect compatibility (REC) paradigm, in which responses have been shown to be facilitated if ensuing perceptual effects share codes with the response based on dimensional overlap (i.e., REC). Additionally, according to the notion of ideomotor compatibility, certain response-effect (R-E) mappings will be stronger than others because some response features resemble the anticipated sensory response effects more strongly than others (e.g., since vocal responses usually produce auditory effects, an auditory stimulus should be anticipated more strongly following vocal responses than following manual responses). Yet, systematic research on this matter is lacking. In the present study, two REC experiments aimed to explore the influence of R-E modality mappings. In Experiment 1, vocal number word responses produced visual effects on the screen (digits vs. number words; i.e., visual-symbolic vs. visual-verbal effect codes). The REC effect was only marginally larger for visual-verbal than for visual-symbolic effects. Using verbal effect codes in Experiment 2, we found that the REC effect was larger with auditory-verbal R-E mapping than with visual-verbal R-E mapping. Overall, the findings support the hypothesis of a role of R-E modality mappings in REC effects, providing further evidence for ideomotor accounts as well as for code-specific and modality-specific contributions to effect anticipation.
Hearing with Two Ears: Evidence for Cortical Binaural Interaction during Auditory Processing.
Henkin, Yael; Yaar-Soffer, Yifat; Givon, Lihi; Hildesheimer, Minka
2015-04-01
Integration of information presented to the two ears has been shown to manifest in binaural interaction components (BICs) that occur along the ascending auditory pathways. In humans, BICs have been studied predominantly at the brainstem and thalamocortical levels; however, understanding of higher cortically driven mechanisms of binaural hearing is limited. To explore whether BICs are evident in auditory event-related potentials (AERPs) during the advanced perceptual and postperceptual stages of cortical processing. The AERPs N1, P3, and a late negative component (LNC) were recorded from multiple site electrodes while participants performed an oddball discrimination task that consisted of natural speech syllables (/ka/ vs. /ta/) that differed by place-of-articulation. Participants were instructed to respond to the target stimulus (/ta/) while performing the task in three listening conditions: monaural right, monaural left, and binaural. Fifteen (21-32 yr) young adults (6 females) with normal hearing sensitivity. By subtracting the response to target stimuli elicited in the binaural condition from the sum of responses elicited in the monaural right and left conditions, the BIC waveform was derived and the latencies and amplitudes of the components were measured. The maximal interaction was calculated by dividing BIC amplitude by the summed right and left response amplitudes. In addition, the latencies and amplitudes of the AERPs to target stimuli elicited in the monaural right, monaural left, and binaural listening conditions were measured and subjected to analysis of variance with repeated measures testing the effect of listening condition and laterality. Three consecutive BICs were identified at a mean latency of 129, 406, and 554 msec, and were labeled N1-BIC, P3-BIC, and LNC-BIC, respectively. Maximal interaction increased significantly with progression of auditory processing from perceptual to postperceptual stages and amounted to 51%, 55%, and 75% of the sum of
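The subtraction logic described above is simple to express in code; the sketch below uses made-up Gaussian waveforms in place of recorded AERPs, so all amplitudes are illustrative only.

```python
import numpy as np

# toy ERP-like waveforms (arbitrary units): 600 samples = 600 ms at 1 kHz
t = np.arange(600) / 1000.0
right    = 3.0 * np.exp(-((t - 0.13) / 0.03) ** 2)   # monaural right response
left     = 2.8 * np.exp(-((t - 0.13) / 0.03) ** 2)   # monaural left response
binaural = 4.5 * np.exp(-((t - 0.13) / 0.03) ** 2)   # binaural response

# BIC = (right + left) - binaural, as described in the abstract
bic = (right + left) - binaural
peak = np.argmax(np.abs(bic))
maximal_interaction = np.abs(bic[peak]) / np.abs(right[peak] + left[peak])
print(f"BIC peak at {t[peak] * 1000:.0f} ms, maximal interaction = {maximal_interaction:.0%}")
```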
Karns, Christina M; Isbell, Elif; Giuliano, Ryan J; Neville, Helen J
2015-06-01
Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3-5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Karns, Christina M.; Isbell, Elif; Giuliano, Ryan J.; Neville, Helen J.
2015-01-01
Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) in human children across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults using a naturalistic dichotic listening paradigm, characterizing the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. PMID:26002721
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.; Jenison, Rick
1995-01-01
All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.
Transplantation of conditionally immortal auditory neuroblasts to the auditory nerve.
Sekiya, Tetsuji; Holley, Matthew C; Kojima, Ken; Matsumoto, Masahiro; Helyer, Richard; Ito, Juichi
2007-04-01
Cell transplantation is a realistic potential therapy for replacement of auditory sensory neurons and could benefit patients with cochlear implants or acoustic neuropathies. The procedure involves many experimental variables, including the nature and conditioning of donor cells, surgical technique and degree of degeneration in the host tissue. It is essential to control these variables in order to develop cell transplantation techniques effectively. We have characterized a conditionally immortal, mouse cell line suitable for transplantation to the auditory nerve. Structural and physiological markers defined the cells as early auditory neuroblasts that lacked neuronal, voltage-gated sodium or calcium currents and had an undifferentiated morphology. When transplanted into the auditory nerves of rats in vivo, the cells migrated peripherally and centrally and aggregated to form coherent, ectopic 'ganglia'. After 7 days they expressed beta 3-tubulin and adopted a similar morphology to native spiral ganglion neurons. They also developed bipolar projections aligned with the host nerves. There was no evidence for uncontrolled proliferation in vivo and cells survived for at least 63 days. If cells were transplanted with the appropriate surgical technique then the auditory brainstem responses were preserved. We have shown that immortal cell lines can potentially be used in the mammalian ear, that it is possible to differentiate significant numbers of cells within the auditory nerve tract and that surgery and cell injection can be achieved with no damage to the cochlea and with minimal degradation of the auditory brainstem response.
2014-01-01
Background We propose a mathematical model for multichannel assessment of the trial-to-trial variability of auditory evoked brain responses in magnetoencephalography (MEG). Methods Following the work of de Munck et al., our approach is based on the maximum likelihood estimation and involves an approximation of the spatio-temporal covariance of the contaminating background noise by means of the Kronecker product of its spatial and temporal covariance matrices. Extending the work of de Munck et al., where the trial-to-trial variability of the responses was considered identical to all channels, we evaluate it for each individual channel. Results Simulations with two equivalent current dipoles (ECDs) with different trial-to-trial variability, one seeded in each of the auditory cortices, were used to study the applicability of the proposed methodology on the sensor level and revealed spatial selectivity of the trial-to-trial estimates. In addition, we simulated a scenario with neighboring ECDs, to show limitations of the method. We also present an illustrative example of the application of this methodology to real MEG data taken from an auditory experimental paradigm, where we found hemispheric lateralization of the habituation effect to multiple stimulus presentation. Conclusions The proposed algorithm is capable of reconstructing lateralization effects of the trial-to-trial variability of evoked responses, i.e. when an ECD of only one hemisphere habituates, whereas the activity of the other hemisphere is not subject to habituation. Hence, it may be a useful tool in paradigms that assume lateralization effects, like, e.g., those involving language processing. PMID:24939398
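A minimal numerical sketch of the Kronecker covariance assumption described above, using tiny arbitrary dimensions in place of realistic MEG sensor and sample counts:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical toy dimensions: 5 sensors, 8 time samples
n_ch, n_t = 5, 8
A = rng.standard_normal((n_ch, n_ch))
C_spatial = A @ A.T + n_ch * np.eye(n_ch)    # spatial noise covariance
B = rng.standard_normal((n_t, n_t))
C_temporal = B @ B.T + n_t * np.eye(n_t)     # temporal noise covariance

# Kronecker approximation of the full spatio-temporal noise covariance
C_full = np.kron(C_spatial, C_temporal)

# the structure allows factor-wise inversion, which is what keeps the
# maximum likelihood machinery tractable at realistic MEG dimensions
C_inv = np.kron(np.linalg.inv(C_spatial), np.linalg.inv(C_temporal))
print(np.allclose(C_inv @ C_full, np.eye(n_ch * n_t)))   # True
```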
Studying auditory verbal hallucinations using the RDoC framework.
Ford, Judith M
2016-03-01
In this paper, I explain why I adopted a Research Domain Criteria (RDoC) approach to study the neurobiology of auditory verbal hallucinations (AVH), or voices. I explain that the RDoC construct of "agency" fits well with AVH phenomenology. To the extent that voices sound nonself, voice hearers lack a sense of agency over the voices. Using a vocalization paradigm like those used with nonhuman primates to study mechanisms subserving the sense of agency, we find that the auditory N1 ERP is suppressed during vocalization, that EEG synchrony preceding speech onset is related to N1 suppression, and that both are reduced in patients with schizophrenia. Reduced cortical suppression is also seen across multiple psychotic disorders and in clinically high-risk youth, but it is not related to AVH. The motor activity preceding talking and connectivity between frontal and temporal lobes during talking have both proved sensitive to AVH, suggesting neural activity and connectivity associated with intentions to act may be a better way to study agency and predictions based on agency. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
Auditory evoked potential could reflect emotional sensitivity and impulsivity
Kim, Ji Sun; Kim, Sungkean; Jung, Wookyoung; Im, Chang-Hwan; Lee, Seung-Hwan
2016-01-01
Emotional sensitivity and impulsivity could cause interpersonal conflicts and neuropsychiatric problems. Serotonin is correlated with behavioral inhibition and impulsivity. This study evaluated whether the loudness dependence of auditory evoked potential (LDAEP), a potential biological marker of central serotonergic activity, could reflect emotional sensitivity and impulsivity. A total of 157 healthy individuals were recruited, who completed LDAEP and Go/Nogo paradigms during electroencephalogram measurement. The Barratt impulsivity scale (BIS), Conners' Adult ADHD rating scale (CAARS), and affective lability scale (ALS) were evaluated. Comparisons between low and high LDAEP groups were conducted for behavioural, psychological, and event-related potential (ERP) measures. The high LDAEP group showed significantly increased BIS, a subscale of the CAARS, ALS, and false alarm rate for Nogo stimuli compared to the low LDAEP group. LDAEP showed significant positive correlations with the depression scale, ALS scores, a subscale of the CAARS, and Nogo-P3 amplitude. In the source activity of Nogo-P3, cuneus, lingual gyrus, and precentral gyrus activities were significantly increased in the high LDAEP group. Our study revealed that LDAEP could reflect emotional sensitivity and impulsivity. LDAEP, an auditory evoked potential, could be a useful tool to evaluate emotional regulation. PMID:27910865
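LDAEP is commonly quantified as the slope of the N1/P2 amplitude regressed on stimulus intensity; the sketch below shows that computation on invented values (the intensities and amplitudes are not data from this study).

```python
import numpy as np

intensities = np.array([60, 70, 80, 90, 100], dtype=float)   # dB SPL (illustrative)
n1p2_amplitude = np.array([4.1, 5.0, 6.2, 7.5, 8.1])          # microvolts (made up)

slope, intercept = np.polyfit(intensities, n1p2_amplitude, 1)
# in the LDAEP literature, a steeper slope is taken to indicate
# lower central serotonergic activity
print(f"LDAEP (slope) = {slope:.3f} uV/dB")
```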
Segregation and Integration of Auditory Streams when Listening to Multi-Part Music
Ragert, Marie; Fairhurst, Merle T.; Keller, Peter E.
2014-01-01
In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to make an assessment of the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in western music in which the melody generally dominates (leads) the accompaniment. Moreover, the assessment varies as a function of both cognitive load, as shown through difficulty ratings and the interaction of the temporal and the structural relationship factors. Neurally, we see that the temporal relationship between parts, as one important cue for stream segregation, revealed distinct neural activity in the planum temporale. By contrast, integration used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus resulted in activation of the intraparietal sulcus. These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of auditory streams
Ouyang, Jessica; Pace, Edward; Lepczyk, Laura; Kaufman, Michael; Zhang, Jessica; Perrine, Shane A; Zhang, Jinsheng
2017-07-07
Blast-induced tinnitus is the number one service-connected disability that currently affects military personnel and veterans. To elucidate its underlying mechanisms, we subjected 13 Sprague Dawley adult rats to unilateral 14 psi blast exposure to induce tinnitus and measured auditory and limbic brain activity using manganese-enhanced MRI (MEMRI). Tinnitus was evaluated with a gap detection acoustic startle reflex paradigm, while hearing status was assessed with prepulse inhibition (PPI) and auditory brainstem responses (ABRs). Both anxiety and cognitive functioning were assessed using the elevated plus maze and Morris water maze, respectively. Five weeks after blast exposure, 8 of the 13 blasted rats exhibited chronic tinnitus. While acoustic PPI remained intact and ABR thresholds recovered, the ABR wave P1-N1 amplitude reduction persisted in all blast-exposed rats. No differences in spatial cognition were observed, but blasted rats as a whole exhibited increased anxiety. MEMRI data revealed a bilateral increase in activity along the auditory pathway and in certain limbic regions of rats with tinnitus compared to age-matched controls. Taken together, our data suggest that while blast-induced tinnitus may play a role in auditory and limbic hyperactivity, the non-auditory effects of blast and potential traumatic brain injury may also exert an effect.
Kaipio, M L; Novitski, N; Tervaniemi, M; Alho, K; Ohman, J; Salonen, O; Näätänen, R
2001-05-25
Event-related potentials (ERPs) were measured from 24 chronic closed head injury (CHI) patients and 18 age- and education-matched controls. The oddball paradigm was applied while subjects were watching a silent movie. The standard (p=0.8) sound of 75 ms duration had a basic frequency of 500 Hz with harmonic partials of 1000 Hz and 1500 Hz, whereas these frequencies were each 10% higher for the pitch deviant. The frequencies of the duration deviant matched those of the standard, but its duration was 25 ms. The MMN (mismatch negativity), generated by the brain's automatic auditory change-detector mechanism, was elicited by both deviants. No significant differences in MMN latency or amplitude for either pitch or duration deviants were found between the groups. However, the MMN amplitude for the pitch deviant decreased considerably faster in the patient group than in controls during the experiment, suggesting a faster vigilance decrement in the patients.
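The stimuli can be reconstructed from the parameters given above; the sketch below synthesizes the standard and both deviants, adding a short onset/offset ramp as an assumption (ramping is not specified in the abstract).

```python
import numpy as np

FS = 44100  # sampling rate in Hz, chosen for illustration

def harmonic_tone(freqs, duration_s, fs=FS, ramp_s=0.005):
    """Sum of sinusoids with a brief linear onset/offset ramp to avoid clicks."""
    t = np.arange(int(fs * duration_s)) / fs
    tone = sum(np.sin(2 * np.pi * f * t) for f in freqs)
    tone /= np.max(np.abs(tone))
    n_ramp = int(fs * ramp_s)
    env = np.ones_like(tone)
    env[:n_ramp] = np.linspace(0.0, 1.0, n_ramp)
    env[-n_ramp:] = np.linspace(1.0, 0.0, n_ramp)
    return tone * env

standard         = harmonic_tone([500, 1000, 1500], 0.075)   # p = 0.8
pitch_deviant    = harmonic_tone([550, 1100, 1650], 0.075)   # each partial 10% higher
duration_deviant = harmonic_tone([500, 1000, 1500], 0.025)   # same spectrum, 25 ms
```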
[FMRI-study of speech perception impairment in post-stroke patients with sensory aphasia].
Maĭorova, L A; Martynova, O V; Fedina, O N; Petrushevskiĭ, A G
2013-01-01
The aim of the study was to identify neurophysiological correlates of impairment at the primary stage of speech perception, namely phonemic discrimination, in patients with sensory aphasia after acute ischemic stroke in the left hemisphere, using noninvasive fMRI. For this purpose we recorded the fMRI equivalent of mismatch negativity (MMN) in response to the speech phonemes (the syllables "ba" and "pa") in an oddball paradigm in 20 healthy subjects and 23 patients with post-stroke sensory aphasia. In healthy subjects, brain areas revealed by the MMN contrast were observed in the superior temporal and inferior frontal gyri of both hemispheres. In the patient group there was significant activation of the auditory cortex in the right hemisphere only; this activation was smaller in volume and intensity than in healthy subjects and correlated with the degree of speech preservation. Thus, recording the fMRI equivalent of the MMN is a sensitive method for studying speech perception impairment.
Using ERPs for assessing the (sub) conscious perception of noise.
Porbadnigk, Anne K; Antons, Jan-N; Blankertz, Benjamin; Treder, Matthias S; Schleicher, Robert; Moller, Sebastian; Curio, Gabriel
2010-01-01
In this paper, we investigate the use of event-related potentials (ERPs) as a quantitative measure for quality assessment of disturbed audio signals. For this purpose, we ran an EEG study (N=11) using an oddball paradigm, during which subjects were presented with the phoneme /a/ superimposed with varying degrees of signal-correlated noise. Based on this data set, we address the question of the degree to which the degradation of the auditory stimuli is reflected on a neural level, even if the disturbance is below the threshold of conscious perception. For stimuli that are consciously recognized as being disturbed, we suggest the use of the amplitude and latency of the P300 component for assessing the level of disturbance. For disturbed stimuli for which the noise is not perceived consciously, we show for two subjects that a classifier based on shrinkage LDA can be applied successfully to single out stimuli for which the noise was presumably processed subconsciously.
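As a sketch of the classification step mentioned above, the code below fits a shrinkage-regularized LDA to synthetic single-trial feature vectors; the data, dimensions, and effect size are stand-ins and not the study's EEG.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# synthetic single-trial features standing in for ERP epochs (channels x time, flattened)
n_trials, n_features = 200, 300
X_clean = rng.standard_normal((n_trials, n_features))
X_disturbed = rng.standard_normal((n_trials, n_features))
X_disturbed[:, :20] += 0.4                 # small "P300-like" shift on a few features
X = np.vstack([X_clean, X_disturbed])
y = np.r_[np.zeros(n_trials), np.ones(n_trials)]

# shrinkage-regularized LDA, the classifier family named in the abstract
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```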
Wester, Anne E; Verster, Joris C; Volkerts, Edmund R; Böcker, Koen B E; Kenemans, J Leon
2010-09-01
Driving is a complex task and is susceptible to inattention and distraction. Moreover, alcohol has a detrimental effect on driving performance, possibly due to alcohol-induced attention deficits. The aim of the present study was to assess the effects of alcohol on simulated driving performance and attention orienting and allocation, as assessed by event-related potentials (ERPs). Thirty-two participants completed two test runs in the Divided Attention Steering Simulator (DASS) with blood alcohol concentrations (BACs) of 0.00%, 0.02%, 0.05%, 0.08% and 0.10%. Sixteen participants performed the second DASS test run with a passive auditory oddball to assess alcohol effects on involuntary attention shifting. Sixteen other participants performed the second DASS test run with an active auditory oddball to assess alcohol effects on dual-task performance and active attention allocation. Dose-dependent impairments were found for reaction times, the number of misses and steering error, even more so in dual-task conditions, especially in the active oddball group. ERP amplitudes to novel irrelevant events were also attenuated in a dose-dependent manner. The P3b amplitude to deviant target stimuli decreased with blood alcohol concentration only in the dual-task condition. It is concluded that alcohol increases distractibility and interference from secondary task stimuli, as well as reduces attentional capacity and dual-task integrality.
Degrading emotional memories induced by a virtual reality paradigm.
Cuperus, Anne A; Laken, Maarten; van den Hout, Marcel A; Engelhard, Iris M
2016-09-01
In Eye Movement Desensitization and Reprocessing (EMDR) therapy, a dual-task approach is used: patients make horizontal eye movements while they recall aversive memories. Studies have shown that this reduces memory vividness and/or emotionality. A strong explanation is provided by working memory theory, which suggests that other taxing dual tasks are also effective. Experiment 1 tested whether a visuospatial task carried out while participants were blindfolded taxes working memory. Experiment 2 tested whether this task degrades negative memories induced by a virtual reality (VR) paradigm. In Experiment 1, participants responded to auditory cues with or without simultaneously carrying out the visuospatial task. In Experiment 2, participants recalled negative memories induced by a VR paradigm. The experimental group simultaneously carried out the visuospatial task, and a control group merely recalled the memories. Changes in self-rated memory vividness and emotionality were measured. The slowing of reaction times due to the visuospatial task indicated that its cognitive load was greater than that of the eye movement task in previous studies. The task also led to reductions in the emotionality (but not vividness) of memories induced by the VR paradigm. Weaknesses are that only males were tested in Experiment 1, and that the effectiveness of the VR fear/trauma induction was not assessed with ratings of mood or intrusions in Experiment 2. The results suggest that the visuospatial task may be applicable in clinical settings, and that the VR paradigm may provide a useful method of inducing negative memories. Copyright © 2016 Elsevier Ltd. All rights reserved.
Su, Bobo; Wang, Sha; Sumich, Alexander; Li, Shaomei; Yang, Ling; Cai, Yueyue; Wang, Grace Y
2017-11-01
Chronic heroin use can cause deficits in response inhibition, leading to a loss of control over drug use, particularly in the context of drug-related cues. Unfortunately, heightened incentive salience and motivational bias in response to drug-related cues may persist following abstinence from heroin use. The present study aimed to examine the effect of drug-related cues on response inhibition in long-term heroin abstainers. Sixteen long-term (8-24 months) male heroin abstainers and 16 male healthy controls completed a modified two-choice oddball paradigm, in which a neutral "chair" picture served as the frequent standard stimulus, and neutral and drug-related pictures served as infrequent deviant stimuli in separate conditions. Event-related potentials were compared across groups and conditions. Our results showed that heroin abstainers exhibited smaller N2d amplitude (deviant minus standard) in the drug cue condition compared to the neutral condition, due to a smaller drug-cue deviant-N2 amplitude compared to the neutral deviant-N2. Moreover, heroin abstainers had smaller N2d amplitude than the healthy controls in the drug cue condition, because the heroin abstainers showed reduced deviant-N2 amplitude relative to standard-N2 in the drug cue condition, a pattern that was reversed in the healthy controls. Our findings suggest that heroin addicts still show response inhibition deficits specifically for drug-related cues after long-term abstinence. The inhibition-related N2 modulation for drug-related cues could be used as a novel electrophysiological index with clinical implications for assessing the risk of relapse and treatment outcome for heroin users.
Wang, Qiuju; Gu, Rui; Han, Dongyi; Yang, Weiyan
2003-09-01
Auditory neuropathy is a sensorineural hearing disorder characterized by absent or abnormal auditory brainstem responses and normal cochlear outer hair cell function as measured by otoacoustic emission recordings. Many risk factors are thought to be involved in its etiology and pathophysiology. Four Chinese pedigrees with familial auditory neuropathy were presented to demonstrate involvement of genetic factors in the etiology of auditory neuropathy. Probands of the above-mentioned pedigrees, who had been diagnosed with auditory neuropathy, were evaluated and followed in the Department of Otolaryngology-Head and Neck Surgery, China People Liberation Army General Hospital (Beijing, China). Their family members were studied, and the pedigree maps established. History of illness, physical examination, pure-tone audiometry, acoustic reflex, auditory brainstem responses, and transient evoked and distortion-product otoacoustic emissions were obtained from members of these families. Some subjects received vestibular caloric testing, computed tomography scan of the temporal bone, and electrocardiography to exclude other possible neuropathic disorders. In most affected patients, hearing loss of various degrees and speech discrimination difficulties started at 10 to 16 years of age. Their audiological evaluation showed absence of acoustic reflex and auditory brainstem responses. As expected in auditory neuropathy, these patients exhibited near-normal cochlear outer hair cell function as shown in distortion product otoacoustic emission recordings. Pure-tone audiometry revealed hearing loss ranging from mild to profound in these patients. Different inheritance patterns were observed in the four families. In Pedigree I, 7 male patients were identified among 43 family members, exhibiting an X-linked recessive pattern. Affected brothers were found in Pedigrees II and III, whereas in pedigree IV, two sisters were affected. All the patients were otherwise normal without evidence of
Maclin, Edward L; Mathewson, Kyle E; Low, Kathy A; Boot, Walter R; Kramer, Arthur F; Fabiani, Monica; Gratton, Gabriele
2011-09-01
Changes in attention allocation with complex task learning reflect processing automatization and more efficient control. We studied these changes using ERP and EEG spectral analyses in subjects playing Space Fortress, a complex video game comprising standard cognitive task components. We hypothesized that training would free up attentional resources for a secondary auditory oddball task. Both P3 and delta EEG showed a processing trade-off between game and oddball tasks, but only some game events showed reduced attention requirements with practice. Training magnified a transient increase in alpha power following both primary and secondary task events. This contrasted with alpha suppression observed when the oddball task was performed alone, suggesting that alpha may be related to attention switching. Hence, P3 and EEG spectral data are differentially sensitive to changes in attentional processing occurring with complex task training. Copyright © 2011 Society for Psychophysiological Research.
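Event-related alpha power of the kind reported above is typically estimated as band-limited spectral power in a window around the task event; a minimal sketch with an assumed sampling rate and a synthetic epoch:

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed sampling rate in Hz

def alpha_power(epoch, fs=FS, band=(8.0, 12.0)):
    """Mean power spectral density in the alpha band for one epoch (1-D array)."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# toy epoch: 1 s of noise plus a 10 Hz component standing in for an alpha burst
rng = np.random.default_rng(3)
t = np.arange(FS) / FS
epoch = rng.standard_normal(FS) + 1.5 * np.sin(2 * np.pi * 10 * t)
print(f"alpha-band power: {alpha_power(epoch):.3f}")
```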
Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.
Vercillo, Tiziana; Gori, Monica
2015-01-01
The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each one consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli, with conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis, were presented sequentially. In the primary task, a space bisection task, participants had to judge the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task they had to report occasional changes in the duration of the second auditory stimulus. In the non-attentional condition participants performed only the primary task (space bisection). Our results showed enhanced auditory precision (and increased auditory weights) in the auditory-attention condition relative to the non-attentional control condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
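The MLE prediction invoked above reduces to inverse-variance weighting of the unisensory estimates; a minimal sketch with illustrative numbers:

```python
import numpy as np

def mle_fusion(x_a, sigma_a, x_t, sigma_t):
    """Reliability-weighted (maximum-likelihood) fusion of an auditory and a
    tactile location estimate; weights are proportional to inverse variances."""
    w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_t**2)
    w_t = 1 - w_a
    fused = w_a * x_a + w_t * x_t
    fused_sigma = np.sqrt(1 / (1 / sigma_a**2 + 1 / sigma_t**2))
    return fused, fused_sigma, w_a

# illustrative numbers only: audition less precise (larger sigma) than touch
print(mle_fusion(x_a=2.0, sigma_a=3.0, x_t=0.0, sigma_t=1.5))
```

On this account, attending to the sound acts like a reduction of the effective auditory sigma and hence an increase of the auditory weight, which is the pattern reported here.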
Auditory hallucinations induced by trazodone
Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji
2014-01-01
A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients. PMID:24700048
Response properties of the refractory auditory nerve fiber.
Miller, C A; Abbas, P J; Robinson, B K
2001-09-01
The refractory characteristics of auditory nerve fibers limit their ability to accurately encode temporal information. Therefore, they are relevant to the design of cochlear prostheses. It is also possible that the refractory property could be exploited by prosthetic devices to improve information transfer, as refractoriness may enhance the nerve's stochastic properties. Furthermore, refractory data are needed for the development of accurate computational models of auditory nerve fibers. We applied a two-pulse forward-masking paradigm to a feline model of the human auditory nerve to assess refractory properties of single fibers. Each fiber was driven to refractoriness by a single (masker) current pulse delivered intracochlearly. Properties of firing efficiency, latency, jitter, spike amplitude, and relative spread (a measure of dynamic range and stochasticity) were examined by exciting fibers with a second (probe) pulse and systematically varying the masker-probe interval (MPI). Responses to monophasic cathodic current pulses were analyzed. We estimated the mean absolute refractory period to be about 330 µs and the mean recovery time constant to be about 410 µs. A significant proportion of fibers (13 of 34) responded to the probe pulse with MPIs as short as 500 µs. Spike amplitude decreased with decreasing MPI, a finding relevant to the development of computational nerve-fiber models, interpretation of gross evoked potentials, and models of more central neural processing. A small mean decrement in spike jitter was noted at small MPI values. Some trends (such as spike latency vs. MPI) varied across fibers, suggesting that sites of excitation varied across fibers. Relative spread was found to increase with decreasing MPI values, providing direct evidence that stochastic properties of fibers are altered under conditions of refractoriness.
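Taking the reported mean estimates at face value, a single-exponential recovery function (a common modeling choice, not the paper's own fitted equation) gives a feel for how quickly excitability returns after a masker pulse:

```python
import numpy as np

T_ABS = 330e-6   # mean absolute refractory period (s) reported above
TAU   = 410e-6   # mean recovery time constant (s) reported above

def threshold_elevation(mpi_s):
    """Assumed single-exponential recovery: factor by which the probe threshold
    is elevated at masker-probe interval mpi_s (infinite within T_ABS)."""
    mpi_s = np.asarray(mpi_s, dtype=float)
    factor = np.full(mpi_s.shape, np.inf)
    ok = mpi_s > T_ABS
    factor[ok] = 1.0 / (1.0 - np.exp(-(mpi_s[ok] - T_ABS) / TAU))
    return factor

mpis = np.array([0.5e-3, 1e-3, 2e-3, 5e-3])
for mpi, f in zip(mpis, threshold_elevation(mpis)):
    print(f"MPI = {mpi * 1e3:.1f} ms -> threshold x {f:.2f}")
```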
He hears, she hears: are there sex differences in auditory processing?
Yoder, Kathleen M; Phan, Mimi L; Lu, Kai; Vicario, David S
2015-03-01
Songbirds learn individually unique songs through vocal imitation and use them in courtship and territorial displays. Previous work has identified a forebrain auditory area, the caudomedial nidopallium (NCM), that appears specialized for discriminating and remembering conspecific vocalizations. In zebra finches (ZFs), only males produce learned vocalizations, but both sexes process these and other signals. This study assessed sex differences in auditory processing by recording extracellular multiunit activity at multiple sites within NCM. Juvenile female ZFs (n = 46) were reared in individual isolation and artificially tutored with song. In adulthood, songs were played back to assess auditory responses, stimulus-specific adaptation, neural bias for conspecific song, and memory for the tutor's song, as well as recently heard songs. In a subset of females (n = 36), estradiol (E2) levels were manipulated to test the contribution of E2, known to be synthesized in the brain, to auditory responses. Untreated females (n = 10) showed significant differences in response magnitude and stimulus-specific adaptation compared to males reared in the same paradigm (n = 9). In hormone-manipulated females, E2 augmentation facilitated the memory for recently heard songs in adulthood, but neither E2 augmentation (n = 15) nor E2 synthesis blockade (n = 9) affected tutor song memory or the neural bias for conspecific song. The results demonstrate subtle sex differences in processing communication signals, and show that E2 levels in female songbirds can affect the memory for songs of potential suitors, thus contributing to the process of mate selection. The results also have potential relevance to clinical interventions that manipulate E2 in human patients. © 2014 Wiley Periodicals, Inc.
Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers.
Hoppe, Christian; Splittstößer, Christoph; Fliessbach, Klaus; Trautner, Peter; Elger, Christian E; Weber, Bernd
2014-11-01
In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e. musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences were comprised of six elements, white squares or tones, which were low, middle, or high regarding vertical screen position or pitch, respectively (presentation duration: 1.5s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high "notes" indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite identity of applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas which exceeded auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. The auditory brain showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information and
Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude
2016-06-01
Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli with only few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention to memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
Putkinen, Vesa; Tervaniemi, Mari; Saarikivi, Katri; de Vent, Nathalie; Huotilainen, Minna
2014-04-01
Sensitivity to changes in various musical features was investigated by recording the mismatch negativity (MMN) auditory event-related potential (ERP) in musically trained and nontrained children semi-longitudinally at the ages of 9, 11, and 13 years. The responses were recorded using a novel Melodic multi-feature paradigm which allows fast (<15 min) recording of an MMN profile for changes in melody, rhythm, musical key, timbre, tuning and timing. When compared to the nontrained children, the musically trained children displayed enlarged MMNs for the melody modulations by the age of 13 and for the rhythm modulations, timbre deviants and slightly mistuned tones already at the age of 11. Also, a positive mismatch response elicited by delayed tones was larger in amplitude in the musically trained than in the nontrained children at the age of 13. No group differences were found at the age of 9, suggesting that the later enhancement of the MMN in the musically trained children resulted from training and not from pre-existing differences between the groups. The current study demonstrates the applicability of the Melodic multi-feature paradigm in school-aged children and indicates that musical training enhances auditory discrimination for musically central sound dimensions in pre-adolescence. Copyright © 2014 Elsevier Inc. All rights reserved.
Gong, Diankun; Hu, Jiehui; Yao, Dezhong
2012-04-01
With the two-choice go/no-go paradigm, we investigated whether the timbre attribute can be transmitted as partial information from the stimulus identification stage to the response preparation stage in auditory tone processing. We manipulated two attributes of the stimulus, timbre (piano vs. violin) and acoustic intensity (soft vs. loud), to ensure an earlier processing of timbre than intensity. We associated the timbre attribute more with go trials. Results showed that lateralized readiness potentials (LRPs) were consistently elicited in no-go trials. This showed that the timbre attribute had been transmitted to the response preparation stage before the intensity attribute was processed in the stimulus identification stage. Such a result provides evidence for the continuous model and the asynchronous discrete coding (ADC) model of information processing. We suggest that partial information can be transmitted in an auditory channel. Copyright © 2011 Society for Psychophysiological Research.
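For reference, one common averaging convention for deriving the LRP from activity at electrodes over the two motor cortices is sketched below on made-up waveforms; the channel names and the toy preparation component are illustrative only.

```python
import numpy as np

def lrp(c3_right, c4_right, c3_left, c4_left):
    """Averaging-method LRP (one common convention): mean of the
    contralateral-minus-ipsilateral difference over the two response hands.
    Inputs are trial-averaged waveforms at C3 and C4, separately for trials
    in which the right or the left hand was the (intended) response."""
    return 0.5 * ((c3_right - c4_right) + (c4_left - c3_left))

# toy waveforms: a negative-going deflection at the contralateral site only
t = np.arange(500) / 1000.0                       # 500 ms at 1 kHz
prep = -2.0 * np.exp(-((t - 0.35) / 0.05) ** 2)   # made-up "preparation" component
c3_right, c4_right = prep, 0.2 * prep             # right hand: C3 is contralateral
c4_left, c3_left = prep, 0.2 * prep               # left hand:  C4 is contralateral
print(lrp(c3_right, c4_right, c3_left, c4_left).min())   # negative LRP = preparation
```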
Word learning in deaf children with cochlear implants: effects of early auditory experience.
Houston, Derek M; Stewart, Jessica; Moberly, Aaron; Hollich, George; Miyamoto, Richard T
2012-05-01
Word-learning skills were tested in normal-hearing 12- to 40-month-olds and in deaf 22- to 40-month-olds 12 to 18 months after cochlear implantation. Using the Intermodal Preferential Looking Paradigm (IPLP), children were tested for their ability to learn two novel-word/novel-object pairings. Normal-hearing children demonstrated learning on this task at approximately 18 months of age and older. For deaf children, performance on this task was significantly correlated with early auditory experience: Children whose cochlear implants were switched on by 14 months of age or who had relatively more hearing before implantation demonstrated learning in this task, but later implanted profoundly deaf children did not. Performance on this task also correlated with later measures of vocabulary size. Taken together, these findings suggest that early auditory experience facilitates word learning and that the IPLP may be useful for identifying children who may be at high risk for poor vocabulary development. © 2012 Blackwell Publishing Ltd.
Effects of sound intensity on temporal properties of inhibition in the pallid bat auditory cortex.
Razak, Khaleel A
2013-01-01
Auditory neurons in bats that use frequency modulated (FM) sweeps for echolocation are selective for the behaviorally relevant rates and direction of frequency change. Such selectivity arises through spectrotemporal interactions between excitatory and inhibitory components of the receptive field. In the pallid bat auditory system, the relationship between FM sweep direction/rate selectivity and the spectral and temporal properties of sideband inhibition has been characterized. Of note is the temporal asymmetry in sideband inhibition, with low-frequency inhibition (LFI) exhibiting faster arrival times compared to high-frequency inhibition (HFI). Using the two-tone inhibition over time (TTI) stimulus paradigm, this study investigated the interactions between two sound parameters in shaping sideband inhibition: intensity and time. Specifically, the impact of changing the relative intensities of the excitatory and inhibitory tones on the arrival time of inhibition was studied. Using this stimulation paradigm, single-unit data from the auditory cortex of pentobarbital-anesthetized bats show that the threshold for LFI is on average ~8 dB lower than for HFI. For equal intensity tones near threshold, LFI is stronger than HFI. When the inhibitory tone intensity is increased further from threshold, the strength asymmetry decreased. The temporal asymmetry in LFI vs. HFI arrival time is strongest when the excitatory and inhibitory tones are of equal intensities or if the excitatory tone is louder. As inhibitory tone intensity is increased, the temporal asymmetry decreased, suggesting that the relative magnitude of excitatory and inhibitory inputs shapes the arrival time of inhibition and FM sweep rate and direction selectivity. Given that most FM bats use downward sweeps as echolocation calls, a similar asymmetry in the threshold and strength of LFI vs. HFI may be a general adaptation to enhance direction selectivity while maintaining sweep-rate selective responses to downward sweeps.
Auditory, Visual, and Auditory-Visual Perception of Vowels by Hearing-Impaired Children.
ERIC Educational Resources Information Center
Hack, Zarita Caplan; Erber, Norman P.
1982-01-01
Vowels were presented through auditory, visual, and auditory-visual modalities to 18 hearing impaired children (12 to 15 years old) having good, intermediate, and poor auditory word recognition skills. All the groups had difficulty with acoustic information and visual information alone. The first two groups had only moderate difficulty identifying…
Auditory mismatch impairments are characterized by core neural dysfunctions in schizophrenia
Gaebler, Arnim Johannes; Mathiak, Klaus; Koten, Jan Willem; König, Andrea Anna; Koush, Yury; Weyer, David; Depner, Conny; Matentzoglu, Simeon; Edgar, James Christopher; Willmes, Klaus; Zvyagintsev, Mikhail
2015-01-01
data performed similarly or worse for up to about 10 features. However, connectivity data yielded a better performance when including more than 10 features yielding up to 90% accuracy. Among others, the most discriminating features represented functional connections between the auditory cortex and the anterior cingulate cortex as well as adjacent prefrontal areas. Auditory mismatch impairments incorporate major neural dysfunctions in schizophrenia. Our data suggest synergistic effects of sensory processing deficits, aberrant salience attribution, prefrontal hypoactivation as well as a disrupted connectivity between temporal and prefrontal cortices. These deficits are associated with subsequent disturbances in modality-specific resource allocation. Capturing different schizophrenic core dysfunctions, functional magnetic resonance imaging during this optimized mismatch paradigm reveals processing impairments on the individual patient level, rendering it a potential biomarker of schizophrenia. PMID:25743635
Penhune, V B; Zatorre, R J; Feindel, W H
1999-03-01
This experiment examined the participation of the auditory cortex of the temporal lobe in the perception and retention of rhythmic patterns. Four patient groups were tested on a paradigm contrasting reproduction of auditory and visual rhythms: those with right or left anterior temporal lobe removals which included Heschl's gyrus (HG), the region of primary auditory cortex (RT-A and LT-A); and patients with right or left anterior temporal lobe removals which did not include HG (RT-a and LT-a). Estimation of lesion extent in HG using an MRI-based probabilistic map indicated that, in the majority of subjects, the lesion was confined to the anterior secondary auditory cortex located on the anterior-lateral extent of HG. On the rhythm reproduction task, RT-A patients were impaired in retention of auditory but not visual rhythms, particularly when accurate reproduction of stimulus durations was required. In contrast, LT-A patients as well as both RT-a and LT-a patients were relatively unimpaired on this task. None of the patient groups was impaired in the ability to make an adequate motor response. Further, they were unimpaired when using a dichotomous response mode, indicating that they were able to adequately differentiate the stimulus durations and, when given an alternative method of encoding, to retain them. Taken together, these results point to a specific role for the right anterior secondary auditory cortex in the retention of a precise analogue representation of auditory tonal patterns.
Mismatch negativity to acoustical illusion of beat: how and where the change detection takes place?
Chakalov, Ivan; Paraskevopoulos, Evangelos; Wollbrink, Andreas; Pantev, Christo
2014-10-15
When two tones with slightly different frequencies are presented binaurally, one to each ear, brainstem structures can no longer follow the interaural time differences (ITDs), resulting in an illusory perception of a beat at the frequency difference between the two prime tones. Hence, the beat frequency is not physically present in the tone delivered to either ear. This study used binaural beats to explore the nature of acoustic deviance detection in humans by means of magnetoencephalography (MEG). Recent research suggests that auditory change detection is a multistage process. To test this, we employed 26-Hz binaural beats in a classical oddball paradigm; however, the prime tones (250 Hz and 276 Hz) were switched between the ears in the case of the deviant beat. Consequently, when the deviant is presented, the cochleae and auditory nerves receive a "new afferent" input, although the standards and the deviants sound identical (26-Hz beats). This allowed us to explore the contribution of the auditory periphery to the change detection process and, furthermore, to evaluate its influence on beat-related auditory steady-state responses (ASSRs). LORETA source current density estimates of the evoked fields in a typical mismatch negativity (MMN) time window and the subsequent difference-ASSRs were determined and compared. The results revealed an MMN generated by a complex neural network including the right parietal lobe and the left middle frontal gyrus. Furthermore, the difference-ASSR was generated in the paracentral gyrus. Additionally, psychophysical measures showed no perceptual difference between the standard and deviant beats when isolated by noise. These results suggest that the auditory periphery makes an important contribution to novelty detection already at the subcortical level. Overall, the present findings support the notion of a hierarchically organized acoustic novelty detection system. Copyright © 2014 Elsevier Inc. All rights reserved.
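The standard and deviant stimuli described above are straightforward to synthesize; in the sketch below the stimulus duration and sampling rate are assumptions, since the abstract does not give them.

```python
import numpy as np

FS = 44100            # sampling rate in Hz, chosen for illustration
DUR = 1.0             # assumed stimulus duration in seconds
t = np.arange(int(FS * DUR)) / FS

def tone(freq_hz):
    return np.sin(2 * np.pi * freq_hz * t)

# standard: 250 Hz to one ear, 276 Hz to the other -> illusory 26-Hz binaural beat
standard = np.column_stack([tone(250.0), tone(276.0)])   # columns = left, right

# deviant: the same two tones with the ears switched; the 26-Hz beat percept is
# unchanged, but each cochlea receives a new input frequency
deviant = np.column_stack([tone(276.0), tone(250.0)])
```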
Auditory pathways: anatomy and physiology.
Pickles, James O
2015-01-01
This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external ear, middle ear, and cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream and a dorsal, mainly pattern-recognition, stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus at the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from the cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.
Development of the auditory system
Litovsky, Ruth
2015-01-01
Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262
Choudhury, Naseem; Leppanen, Paavo H.T.; Leevers, Hilary J.; Benasich, April A.
2007-01-01
An infant’s ability to process auditory signals presented in rapid succession (i.e. rapid auditory processing abilities [RAP]) has been shown to predict differences in language outcomes in toddlers and preschool children. Early deficits in RAP abilities may serve as a behavioral marker for language-based learning disabilities. The purpose of this study is to determine if performance on infant information processing measures designed to tap RAP and global processing skills differ as a function of family history of specific language impairment (SLI) and/or the particular demand characteristics of the paradigm used. Seventeen 6- to 9-month-old infants from families with a history of specific language impairment (FH+) and 29 control infants (FH−) participated in this study. Infants’ performance on two different RAP paradigms (head-turn procedure [HT] and auditory-visual habituation/recognition memory [AVH/RM]) and on a global processing task (visual habituation/recognition memory [VH/RM]) was assessed at 6 and 9 months. Toddler language and cognitive skills were evaluated at 12 and 16 months. A number of significant group differences were seen: FH+ infants showed significantly poorer discrimination of fast rate stimuli on both RAP tasks, took longer to habituate on both habituation/recognition memory measures, and had lower novelty preference scores on the visual habituation/recognition memory task. Infants’ performance on the two RAP measures provided independent but converging contributions to outcome. Thus, different mechanisms appear to underlie performance on operantly conditioned tasks as compared to habituation/recognition memory paradigms. Further, infant RAP processing abilities predicted to 12- and 16-month language scores above and beyond family history of SLI. The results of this study provide additional support for the validity of infant RAP abilities as a behavioral marker for later language outcome. Finally, this is the first study to use a
Distraction and Facilitation--Two Faces of the Same Coin?
ERIC Educational Resources Information Center
Wetzel, Nicole; Widmann, Andreas; Schroger, Erich
2012-01-01
Unexpected and task-irrelevant sounds can capture our attention and may cause distraction effects reflected by impaired performance in a primary task unrelated to the perturbing sound. The present auditory-visual oddball study examines the effect of the informational content of a sound on the performance in a visual discrimination task. The…
The Effect of Spatial Smoothing on Representational Similarity in a Simple Motor Paradigm
Hendriks, Michelle H. A.; Daniels, Nicky; Pegado, Felipe; Op de Beeck, Hans P.
2017-01-01
Multi-voxel pattern analyses (MVPA) are often performed on unsmoothed data, which is very different from the general practice of large smoothing extents in standard voxel-based analyses. In this report, we studied the effect of smoothing on MVPA results in a motor paradigm. Subjects pressed four buttons with two different fingers of the two hands in response to auditory commands. Overall, independent of the degree of smoothing, correlational MVPA showed distinctive patterns for the different hands in all studied regions of interest (motor cortex, prefrontal cortex, and auditory cortices). With regard to the effect of smoothing, our findings suggest that results from correlational MVPA show a minor sensitivity to smoothing. Moderate amounts of smoothing (in this case, 1−4 times the voxel size) improved MVPA correlations, from a slight improvement to large improvements depending on the region involved. None of the regions showed signs of a detrimental effect of moderate levels of smoothing. Even higher amounts of smoothing sometimes had a positive effect, most clearly in low-level auditory cortex. We conclude that smoothing seems to have a minor positive effect on MVPA results, thus researchers should be mindful about the choices they make regarding the level of smoothing. PMID:28611726
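As a concrete illustration of the analysis logic, a correlational MVPA index (within-condition minus between-condition split-half pattern correlation) can be computed on data smoothed with different kernels. The Python sketch below uses synthetic volumes; the ROI size, kernel widths, and two-condition design are assumptions for demonstration, not the authors' pipeline.

```python
# Minimal sketch of correlational MVPA under different smoothing kernels
# (synthetic data; not the study's preprocessing or ROIs).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
SHAPE = (20, 20, 20)                       # toy ROI volume (assumed)
conditions = ["left_hand", "right_hand"]

def run_patterns(signal_seed):
    """One 'run': a condition-specific signal pattern plus independent noise."""
    rng_sig = np.random.default_rng(signal_seed)
    signal = {c: rng_sig.normal(0, 1, SHAPE) for c in conditions}
    noise = {c: rng.normal(0, 1, SHAPE) * 0.8 for c in conditions}
    return {c: signal[c] + noise[c] for c in conditions}

run1, run2 = run_patterns(42), run_patterns(42)   # shared signal, new noise

def mvpa_index(smooth_fwhm_vox):
    """Within- minus between-condition split-half correlation after smoothing."""
    sigma = smooth_fwhm_vox / 2.355            # FWHM -> Gaussian sigma
    def prep(vol):
        return (gaussian_filter(vol, sigma) if sigma > 0 else vol).ravel()
    within = np.mean([np.corrcoef(prep(run1[c]), prep(run2[c]))[0, 1]
                      for c in conditions])
    between = np.corrcoef(prep(run1[conditions[0]]),
                          prep(run2[conditions[1]]))[0, 1]
    return within - between

for fwhm in [0, 1, 2, 4, 8]:                   # smoothing in voxel units
    print(f"FWHM {fwhm} vox: correlation index = {mvpa_index(fwhm):.3f}")
```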
Rizzo, John-Ross; Raghavan, Preeti; McCrery, J R; Oh-Park, Mooyeon; Verghese, Joe
2015-04-01
cognitive impairment. A divided attention task using emotionally charged auditory stimuli might be able to elicit compensatory improvement in gait performance in cognitively intact older individuals, but lead to decompensation in those with minimal cognitive impairment. Further investigation is needed to compare gait performance under this task to gait on other dual-task paradigms and to separately examine the effect of physiological aging versus cognitive impairment on gait during walking under auditory constraints. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Auditory Imagery: Empirical Findings
ERIC Educational Resources Information Center
Hubbard, Timothy L.
2010-01-01
The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…
Sugi, Miho; Hagimoto, Yutaka; Nambu, Isao; Gonzalez, Alejandro; Takei, Yoshinori; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro
2018-01-01
Recently, a brain-computer interface (BCI) using virtual sound sources has been proposed for estimating user intention via electroencephalogram (EEG) in an oddball task. However, its performance is still insufficient for practical use. In this study, we examine the impact that shortening the stimulus onset asynchrony (SOA) has on this auditory BCI. While a very short SOA might improve performance, sound perception and task performance become difficult, and event-related potentials (ERPs) may not be induced if the SOA is too short. Therefore, we carried out behavioral and EEG experiments to determine the optimal SOA. In the experiments, participants were instructed to direct attention to one of six virtual sounds (target direction). We used eight different SOA conditions: 200, 300, 400, 500, 600, 700, 800, and 1,100 ms. In the behavioral experiment, we recorded participants' behavioral responses to the target direction and evaluated recognition performance for the stimuli. In all SOA conditions, recognition accuracy was over 85%, indicating that participants could recognize the target stimuli correctly. Next, using a silent counting task in the EEG experiment, we found significant differences between target and non-target sound directions in all but the 200-ms SOA condition. When we calculated identification accuracy using Fisher discriminant analysis (FDA), the SOA could be shortened to 400 ms without decreasing identification accuracy. Thus, improvements in performance (evaluated by BCI utility) could be achieved. On average, higher BCI utilities were obtained in the 400 and 500-ms SOA conditions. Thus, auditory BCI performance can be optimized for both behavioral and neurophysiological responses by shortening the SOA. PMID:29535602
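The single-trial classification step reported above (Fisher discriminant analysis of target versus non-target epochs) can be sketched as follows. This is a hedged, synthetic-data illustration: the channel count, epoch length, simulated P300-like deflection, and the use of scikit-learn's shrinkage LDA are assumptions, not the study's actual pipeline, and the paper's BCI utility measure (which also accounts for selection speed) is not computed here.

```python
# Sketch of target vs. non-target epoch classification with a linear (Fisher)
# discriminant; all data are simulated for illustration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
N_CH, N_SAMP = 8, 100                 # channels x samples per epoch (assumed)
n_target, n_nontarget = 60, 300

def epochs(n, p300=False):
    x = rng.normal(0, 1, (n, N_CH, N_SAMP))
    if p300:                          # add a small positive deflection near 300 ms
        x[:, :, 55:75] += 0.5
    return x.reshape(n, -1)           # flatten channels x time into features

X = np.vstack([epochs(n_target, p300=True), epochs(n_nontarget)])
y = np.r_[np.ones(n_target), np.zeros(n_nontarget)]

clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated target/non-target accuracy: {acc:.2f}")
```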
Auditory models for speech analysis
NASA Astrophysics Data System (ADS)
Maybury, Mark T.
This paper reviews the psychophysical basis for auditory models and discusses their application to automatic speech recognition. First, an overview of the human auditory system is presented, followed by a review of current knowledge gleaned from neurological and psychoacoustic experimentation. Next, a general framework describes established peripheral auditory models, which are based on well-understood properties of the peripheral auditory system. This is followed by a discussion of current enhancements to these models to include nonlinearities and synchrony information, as well as other higher auditory functions. Finally, the initial performance of auditory models in the task of speech recognition is examined and additional applications are mentioned.
Larson, Eric; Lee, Adrian K C
2014-01-01
Switching attention between different stimuli of interest based on particular task demands is important in many everyday settings. In audition in particular, switching attention between different speakers of interest that are talking concurrently is often necessary for effective communication. Recently, it has been shown by multiple studies that auditory selective attention suppresses the representation of unwanted streams in auditory cortical areas in favor of the target stream of interest. However, the neural processing that guides this selective attention process is not well understood. Here we investigated the cortical mechanisms involved in switching attention based on two different types of auditory features. By combining magneto- and electro-encephalography (M-EEG) with an anatomical MRI constraint, we examined the cortical dynamics involved in switching auditory attention based on either spatial or pitch features. We designed a paradigm where listeners were cued in the beginning of each trial to switch or maintain attention halfway through the presentation of concurrent target and masker streams. By allowing listeners time to switch during a gap in the continuous target and masker stimuli, we were able to isolate the mechanisms involved in endogenous, top-down attention switching. Our results show a double dissociation between the involvement of right temporoparietal junction (RTPJ) and the left inferior parietal supramarginal part (LIPSP) in tasks requiring listeners to switch attention based on space and pitch features, respectively, suggesting that switching attention based on these features involves at least partially separate processes or behavioral strategies. © 2013 Elsevier Inc. All rights reserved.
Spatiotemporal differentiation in auditory and motor regions during auditory phoneme discrimination.
Aerts, Annelies; Strobbe, Gregor; van Mierlo, Pieter; Hartsuiker, Robert J; Corthals, Paul; Santens, Patrick; De Letter, Miet
2017-06-01
Auditory phoneme discrimination (APD) is supported by both auditory and motor regions through a sensorimotor interface embedded in a fronto-temporo-parietal cortical network. However, the specific spatiotemporal organization of this network during APD with respect to different types of phonemic contrasts is still unclear. Here, we use source reconstruction, applied to event-related potentials in a group of 47 participants, to uncover a potential spatiotemporal differentiation in these brain regions during a passive and an active APD task with respect to place of articulation (PoA), voicing and manner of articulation (MoA). Results demonstrate that in an early stage (50-110 ms), auditory, motor and sensorimotor regions elicit more activation during the passive and active APD tasks with MoA and the active APD task with voicing than with PoA. In a later stage (130-175 ms), the same auditory and motor regions elicit more activation during the APD task with PoA than with MoA and voicing, yet only in the active condition, implying important timing differences. The degree of attention influences a frontal network during the APD task with PoA, whereas auditory regions are more affected during the APD tasks with MoA and voicing. Based on these findings, it can be cautiously suggested that APD is supported by the integration of early activation of auditory-acoustic properties in superior temporal regions, more pronounced for MoA and voicing, and later auditory-to-motor integration in sensorimotor areas, more pronounced for PoA.
Familiar auditory sensory training in chronic traumatic brain injury: a case study.
Sullivan, Emily Galassi; Guernon, Ann; Blabas, Brett; Herrold, Amy A; Pape, Theresa L-B
2018-04-01
The evaluation and treatment of patients with prolonged periods of seriously impaired consciousness following traumatic brain injury (TBI), such as a vegetative or minimally conscious state, pose considerable challenges, particularly in the chronic phases of recovery. This blinded crossover study explored the effects of familiar auditory sensory training (FAST) compared with a sham stimulation in a patient seven years post severe TBI. Baseline data were collected over 4 weeks to account for variability in status, using neurobehavioral measures including the Disorders of Consciousness scale (DOCS), Coma Near Coma scale (CNC), and Consciousness Screening Algorithm. Pre-stimulation neurophysiological assessments were completed as well, namely Brainstem Auditory Evoked Potentials (BAEP) and Somatosensory Evoked Potentials (SSEP). Results revealed a significant improvement in the DOCS neurobehavioral findings after FAST, which was not maintained during the sham. BAEP findings also improved, and these improvements were maintained following sham stimulation, as evidenced by repeat testing. The results emphasize the importance of continued evaluation and treatment of individuals in chronic states of seriously impaired consciousness with a variety of tools. Further study of auditory stimulation as a passive treatment paradigm for this population is warranted. Implications for Rehabilitation Clinicians should be equipped with treatment options to enhance neurobehavioral improvements when traditional treatment methods fail to deliver or maintain functional behavioral changes. Routine assessment is crucial to detect subtle changes in neurobehavioral function even in chronic states of disordered consciousness and to determine potentially preserved cognitive abilities that may not be evident because of unreliable motor responses given motor impairments. Familiar Auditory Stimulation Training (FAST) is an ideal passive stimulation that can be supplied by families, allied health
Salicylate-induced changes in auditory thresholds of adolescent and adult rats.
Brennan, J F; Brown, C A; Jastreboff, P J
1996-01-01
Shifts in auditory intensity thresholds after salicylate administration were examined in postweanling and adult pigmented rats at frequencies ranging from 1 to 35 kHz. A total of 132 subjects from both age levels were tested under two-way active avoidance or one-way active avoidance paradigms. Estimated thresholds were inferred from behavioral responses to presentations of descending and ascending series of intensities for each test frequency value. Reliable threshold estimates were found under both avoidance conditioning methods, and compared to controls, subjects at both age levels showed threshold shifts at selective higher frequency values after salicylate injection, and the extent of shifts was related to salicylate dose level.
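The threshold-estimation logic described above (inferring thresholds from behavioral responses to descending and ascending intensity series) resembles a method-of-limits procedure. The sketch below shows one simple way to derive a threshold estimate from such series; the intensities, step size, and responses are invented for illustration and are not the study's data.

```python
# Method-of-limits style threshold estimate from descending/ascending series
# (illustrative only; all values are placeholders).
import numpy as np

def series_threshold(intensities, responses):
    """Intensity at the first response transition in a series.

    `responses` are booleans (True = behavioral response present). The midpoint
    of the two intensities straddling the transition is taken as the estimate.
    """
    responses = np.asarray(responses, dtype=bool)
    change = np.flatnonzero(responses[1:] != responses[:-1])
    if change.size == 0:
        return None                       # no transition found in this series
    i = change[0]
    return (intensities[i] + intensities[i + 1]) / 2.0

# Descending series: 80 -> 10 dB SPL in 10-dB steps (assumed step size).
desc_int = list(range(80, 0, -10))
desc_resp = [True, True, True, True, True, False, False, False]
# Ascending series: 10 -> 80 dB SPL.
asc_int = list(range(10, 90, 10))
asc_resp = [False, False, False, False, True, True, True, True]

estimates = [series_threshold(desc_int, desc_resp),
             series_threshold(asc_int, asc_resp)]
print("estimated threshold (dB SPL):", np.mean(estimates))
```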
Lu, Xi; Siu, Ka-Chun; Fu, Siu N; Hui-Chan, Christina W Y; Tsang, William W N
2013-08-01
To compare the performance of older experienced Tai Chi practitioners and healthy controls in dual-task versus single-task paradigms, namely stepping down with and without performing an auditory response task, a cross-sectional study was conducted in the Center for East-meets-West in Rehabilitation Sciences at The Hong Kong Polytechnic University, Hong Kong. Twenty-eight Tai Chi practitioners (73.6 ± 4.2 years) and 30 healthy control subjects (72.4 ± 6.1 years) were recruited. Participants were asked to step down from a 19-cm-high platform and maintain a single-leg stance for 10 s with and without a concurrent cognitive task. The cognitive task was an auditory Stroop test in which the participants were required to respond to different tones of voice regardless of word meaning. Postural stability after stepping down under the single- and dual-task paradigms, in terms of the excursion of the subject's center of pressure (COP), and cognitive performance were measured for comparison between the two groups. Our findings demonstrated significant between-group differences in more outcome measures during dual-task than single-task performance. The auditory Stroop test showed that Tai Chi practitioners achieved not only a significantly lower error rate in the single-task condition but also significantly faster reaction times in the dual-task condition when compared with healthy controls similar in age and other relevant demographics. Similarly, the stepping-down task showed that Tai Chi practitioners displayed not only a significantly smaller COP sway area in the single-task condition but also a significantly shorter COP sway path than healthy controls in the dual-task condition. These results showed that Tai Chi practitioners achieved better postural stability after stepping down, as well as better performance in the auditory response task, than healthy controls. The improved performance, which was magnified under dual motor-cognitive task conditions, may point to the benefits of Tai Chi as a mind-and-body exercise.
Cognitive Processing in Non-Communicative Patients: What Can Event-Related Potentials Tell Us?
Lugo, Zulay R.; Quitadamo, Lucia R.; Bianchi, Luigi; Pellas, Fréderic; Veser, Sandra; Lesenfants, Damien; Real, Ruben G. L.; Herbert, Cornelia; Guger, Christoph; Kotchoubey, Boris; Mattia, Donatella; Kübler, Andrea; Laureys, Steven; Noirhomme, Quentin
2016-01-01
Event-related potentials (ERP) have been proposed to improve the differential diagnosis of non-responsive patients. We investigated the potential of the P300 as a reliable marker of conscious processing in patients with locked-in syndrome (LIS). Eleven chronic LIS patients and 10 healthy subjects (HS) listened to a complex-tone auditory oddball paradigm, first in a passive condition (listen to the sounds) and then in an active condition (counting the deviant tones). Seven out of nine HS displayed a P300 waveform in the passive condition and all in the active condition. HS showed statistically significant changes in peak and area amplitude between conditions. Three out of seven LIS patients showed the P3 waveform in the passive condition and five of seven in the active condition. No changes in peak amplitude and only a significant difference at one electrode in area amplitude were observed in this group between conditions. We conclude that, in spite of keeping full consciousness and intact or nearly intact cortical functions, compared to HS, LIS patients present less reliable results when testing with ERP, specifically in the passive condition. We thus strongly recommend applying ERP paradigms in an active condition when evaluating consciousness in non-responsive patients. PMID:27895567
Evaluating the loudness of phantom auditory perception (tinnitus) in rats.
Jastreboff, P J; Brennan, J F
1994-01-01
Using our behavioral paradigm for evaluating tinnitus, the loudness of salicylate-induced tinnitus was evaluated in 144 rats by comparing their behavioral responses induced by different doses of salicylate to those induced by different intensities of a continuous reference tone mimicking tinnitus. Group differences in resistance to extinction were linearly related to salicylate dose and, at moderate intensities, to the reference tone as well. Comparison of regression equations for salicylate versus tone effects permitted estimation of the loudness of salicylate-induced tinnitus. These results extend the animal model of tinnitus and provide evidence that the loudness of phantom auditory perception is expressed through observable behavior, can be evaluated, and its changes detected.
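The core analysis idea above, matching a salicylate dose to an "equivalent" reference-tone intensity by comparing two regression equations, can be illustrated with a few lines of Python. All numbers below are synthetic placeholders, not the study's data.

```python
# Hedged sketch of the regression-matching logic: fit resistance-to-extinction
# against salicylate dose and against reference-tone intensity, then map a dose
# onto the tone intensity with the same predicted resistance.
import numpy as np

# Synthetic group means for resistance to extinction (arbitrary units).
dose_mg_kg = np.array([0.0, 100.0, 200.0, 300.0])
resist_dose = np.array([2.0, 3.1, 4.3, 5.2])

tone_db_spl = np.array([0.0, 20.0, 40.0, 60.0])
resist_tone = np.array([2.1, 3.0, 4.1, 5.0])

# Linear fits: resistance = slope * x + intercept
a_d, b_d = np.polyfit(dose_mg_kg, resist_dose, 1)
a_t, b_t = np.polyfit(tone_db_spl, resist_tone, 1)

def equivalent_loudness(dose):
    """Tone intensity whose predicted resistance matches that of `dose`."""
    predicted_resistance = a_d * dose + b_d
    return (predicted_resistance - b_t) / a_t

print(f"300 mg/kg salicylate ~ {equivalent_loudness(300.0):.1f} dB SPL reference tone")
```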
Expectation, information processing, and subjective duration.
Simchy-Gross, Rhimmon; Margulis, Elizabeth Hellmuth
2018-01-01
In research on psychological time, it is important to examine the subjective duration of entire stimulus sequences, such as those produced by music (Teki, Frontiers in Neuroscience, 10, 2016). Yet research on the temporal oddball illusion (according to which oddball stimuli seem longer than standard stimuli of the same duration) has examined only the subjective duration of single events contained within sequences, not the subjective duration of sequences themselves. Does the finding that oddballs seem longer than standards translate to entire sequences, such that entire sequences that contain oddballs seem longer than those that do not? Is this potential translation influenced by the mode of information processing-whether people are engaged in direct or indirect temporal processing? Two experiments aimed to answer both questions using different manipulations of information processing. In both experiments, musical sequences either did or did not contain oddballs (auditory sliding tones). To manipulate information processing, we varied the task (Experiment 1), the sequence event structure (Experiments 1 and 2), and the sequence familiarity (Experiment 2) independently within subjects. Overall, in both experiments, the sequences that contained oddballs seemed shorter than those that did not when people were engaged in direct temporal processing, but longer when people were engaged in indirect temporal processing. These findings support the dual-process contingency model of time estimation (Zakay, Attention, Perception & Psychophysics, 54, 656-664, 1993). Theoretical implications for attention-based and memory-based models of time estimation, the pacemaker accumulator and coding efficiency hypotheses of time perception, and dynamic attending theory are discussed.
Sanju, Himanshu Kumar; Kumar, Prawin
2016-10-01
Introduction Mismatch negativity (MMN) is a negative component of the event-related potential (ERP) elicited by any discriminable change in auditory stimulation. Objective The present study aimed to assess pre-attentive auditory discrimination skill with fine and gross differences between auditory stimuli. Method Seventeen normal-hearing individuals participated in the study; all gave informed consent. To assess pre-attentive auditory discrimination skill with a fine difference between auditory stimuli, we recorded MMN with a pair of pure tones, /1000 Hz/ and /1010 Hz/, with /1000 Hz/ as the frequent stimulus and /1010 Hz/ as the infrequent stimulus. Similarly, we used /1000 Hz/ and /1100 Hz/, with /1000 Hz/ as the frequent stimulus and /1100 Hz/ as the infrequent stimulus, to assess pre-attentive auditory discrimination skill with a gross difference between auditory stimuli. We analyzed the MMN for onset latency, offset latency, peak latency, peak amplitude, and area under the curve. Result MMN was present in only 64% of the individuals in both conditions. Further, multivariate analysis of variance (MANOVA) showed no significant difference in any of the MMN measures (onset latency, offset latency, peak latency, peak amplitude, and area under the curve) between the two conditions. Conclusion The present study showed similar pre-attentive skills for both conditions, fine (1000 Hz and 1010 Hz) and gross (1000 Hz and 1100 Hz) differences between auditory stimuli, at a higher (endogenous) level of the auditory system.
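For readers unfamiliar with the listed MMN measures, the sketch below shows one way the onset, offset and peak latencies, peak amplitude, and area under the curve can be derived from a deviant-minus-standard difference wave. The simulated waveforms, sampling rate, and analysis window are assumptions for illustration only.

```python
# Illustrative derivation of MMN measures from a difference wave (synthetic data).
import numpy as np

FS = 500                                       # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.4, 1 / FS)               # epoch from -100 to 400 ms

rng = np.random.default_rng(3)
standard = rng.normal(0, 0.2, t.size)
# Deviant carries an extra negativity around 150 ms (the simulated MMN).
deviant = standard - 1.5 * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2))

diff = deviant - standard                      # deviant-minus-standard wave
win = (t >= 0.1) & (t <= 0.25)                 # MMN search window (assumed)

peak_idx = np.argmin(diff[win])                # MMN is a negativity
peak_latency = t[win][peak_idx]
peak_amplitude = diff[win][peak_idx]

# Onset/offset: first/last sample in the window below half the peak amplitude.
below = np.flatnonzero(diff[win] <= peak_amplitude / 2)
onset_latency, offset_latency = t[win][below[0]], t[win][below[-1]]

# Area under the curve: integrate the negative-going part of the window.
area = np.sum(np.clip(diff[win], None, 0)) / FS

print(f"peak {peak_amplitude:.2f} at {peak_latency*1000:.0f} ms, "
      f"onset {onset_latency*1000:.0f} ms, offset {offset_latency*1000:.0f} ms, "
      f"area {area:.3f} (amplitude*s)")
```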
Mehraei, Golbarg; Gallardo, Andreu Paredes; Shinn-Cunningham, Barbara G.; Dau, Torsten
2017-01-01
In rodent models, acoustic exposure too modest to elevate hearing thresholds can nonetheless cause auditory nerve fiber deafferentation, interfering with the coding of supra-threshold sound. Low-spontaneous rate nerve fibers, important for encoding acoustic information at supra-threshold levels and in noise, are more susceptible to degeneration than high-spontaneous rate fibers. The change in auditory brainstem response (ABR) wave-V latency with noise level has been shown to be associated with auditory nerve deafferentation. Here, we measured ABR in a forward masking paradigm and evaluated wave-V latency changes with increasing masker-to-probe intervals. In the same listeners, behavioral forward masking detection thresholds were measured. We hypothesized that 1) auditory nerve fiber deafferentation increases forward masking thresholds and increases wave-V latency and 2) a preferential loss of low-SR fibers results in a faster recovery of wave-V latency as the slow contribution of these fibers is reduced. Results showed that in young audiometrically normal listeners, a larger change in wave-V latency with increasing masker-to-probe interval was related to a greater effect of a preceding masker behaviorally. Further, the amount of wave-V latency change with masker-to-probe interval was positively correlated with the rate of change in forward masking detection thresholds. Although we cannot rule out central contributions, these findings are consistent with the hypothesis that auditory nerve fiber deafferentation occurs in humans and may predict how well individuals can hear in noisy environments. PMID:28159652
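The relationship tested above can be summarized per listener by two slopes: how ABR wave-V latency changes across masker-to-probe intervals, and how behavioral forward-masking thresholds change across the same intervals. The following sketch simulates such data and correlates the two; the interval values, units, and effect sizes are assumptions, not the study's measurements.

```python
# Hedged sketch: per-listener slopes of wave-V latency and behavioral threshold
# across masker-to-probe intervals, then their correlation (simulated data).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
intervals_ms = np.array([4, 8, 16, 32, 64])    # masker-to-probe intervals (assumed)
n_listeners = 20

# Give listeners a shared latent factor so the two measures covary.
latent = rng.normal(0, 1, n_listeners)
latency_slopes, threshold_slopes = [], []
for s in latent:
    wave_v_ms = (6.0 - (0.01 + 0.004 * s) * np.log2(intervals_ms)
                 + rng.normal(0, 0.02, intervals_ms.size))
    thresh_db = (40.0 - (2.0 + 0.8 * s) * np.log2(intervals_ms)
                 + rng.normal(0, 1.0, intervals_ms.size))
    latency_slopes.append(np.polyfit(np.log2(intervals_ms), wave_v_ms, 1)[0])
    threshold_slopes.append(np.polyfit(np.log2(intervals_ms), thresh_db, 1)[0])

r, p = pearsonr(latency_slopes, threshold_slopes)
print(f"latency-change vs threshold-change slopes: r = {r:.2f}, p = {p:.3f}")
```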
Auditory psychophysics and perception.
Hirsh, I J; Watson, C S
1996-01-01
In this review of auditory psychophysics and perception, we cite some important books, research monographs, and research summaries from the past decade. Within auditory psychophysics, we have singled out some topics of current importance: Cross-Spectral Processing, Timbre and Pitch, and Methodological Developments. Complex sounds and complex listening tasks have been the subject of new studies in auditory perception. We review especially work that concerns auditory pattern perception, with emphasis on temporal aspects of the patterns and on patterns that do not depend on the cognitive structures often involved in the perception of speech and music. Finally, we comment on some aspects of individual difference that are sufficiently important to question the goal of characterizing auditory properties of the typical, average, adult listener. Among the important factors that give rise to these individual differences are those involved in selective processing and attention.
Barham, Michael P; Clark, Gillian M; Hayden, Melissa J; Enticott, Peter G; Conduit, Russell; Lum, Jarrad A G
2017-09-01
This study compared the performance of a low-cost wireless EEG system to a research-grade EEG system on an auditory oddball task designed to elicit N200 and P300 ERP components. Participants were 15 healthy adults (6 female) aged between 19 and 40 (M = 28.56; SD = 6.38). An auditory oddball task was presented, comprising 1,200 presentations of a standard tone interspersed with 300 presentations of a deviant tone. EEG was simultaneously recorded from a modified Emotiv EPOC and a NeuroScan SynAmps RT EEG system. The modifications made to the Emotiv system included attaching research-grade electrodes to the Bluetooth transmitter. Additional modifications enabled the Emotiv system to connect to a portable impedance meter. The cost of these modifications and the portable impedance meter approached the purchase value of the Emotiv system. Preliminary analyses revealed that significantly more trials were rejected from data acquired by the modified Emotiv than by the SynAmps system. However, the ERP waveforms captured by the Emotiv system were found to be highly similar to the corresponding waveforms from the SynAmps system. The latency and peak amplitude of the N200 and P300 components were also found to be similar between systems. Overall, the results indicate that, in the context of an oddball task, the ERPs acquired by a low-cost wireless EEG system can be of comparable quality to those acquired by research-grade EEG equipment. © 2017 Society for Psychophysiological Research.
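A minimal version of the between-system comparison reported above (waveform similarity plus N200/P300 peak latency and amplitude) might look like the following. The data are synthetic, and the time windows, sampling rate, and component shapes are assumptions rather than the study's recordings.

```python
# Illustrative between-system ERP comparison on simulated grand averages.
import numpy as np

FS = 250                                       # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.6, 1 / FS)
rng = np.random.default_rng(5)

def erp(noise_sd):
    """Grand-average ERP with an N200-like and a P300-like component plus noise."""
    n200 = -2.0 * np.exp(-((t - 0.20) ** 2) / (2 * 0.02 ** 2))
    p300 = 4.0 * np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2))
    return n200 + p300 + rng.normal(0, noise_sd, t.size)

research_grade = erp(noise_sd=0.2)
low_cost = erp(noise_sd=0.4)                   # the consumer system is noisier

def peak(wave, window, polarity):
    """Peak latency (ms) and amplitude within a window; polarity -1 for negativities."""
    mask = (t >= window[0]) & (t <= window[1])
    idx = np.argmax(polarity * wave[mask])
    return t[mask][idx] * 1000, wave[mask][idx]

print(f"waveform correlation: {np.corrcoef(research_grade, low_cost)[0, 1]:.2f}")
for name, wave in [("research-grade", research_grade), ("low-cost", low_cost)]:
    n2 = peak(wave, (0.15, 0.25), polarity=-1)
    p3 = peak(wave, (0.25, 0.50), polarity=+1)
    print(f"{name}: N200 {n2[0]:.0f} ms / {n2[1]:.1f}, P300 {p3[0]:.0f} ms / {p3[1]:.1f}")
```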
Cutanda, Diana; Correa, Ángel; Sanabria, Daniel
2015-06-01
The present study investigated whether participants can develop temporal preparation driven by auditory isochronous rhythms when concurrently performing an auditory working memory (WM) task. In Experiment 1, participants had to respond to an auditory target presented after a regular or an irregular sequence of auditory stimuli while concurrently performing a Sternberg-type WM task. Results showed that participants responded faster after regular compared with irregular rhythms and that this effect was not affected by WM load; however, the lack of a significant main effect of WM load made it difficult to draw any conclusion regarding the influence of the dual-task manipulation in Experiment 1. In order to enhance dual-task interference, Experiment 2 combined the auditory rhythm procedure with an auditory N-Back task, which required WM updating (monitoring and coding of the information) and was presumably more demanding than the mere rehearsal of the WM task used in Experiment 1. Results now clearly showed dual-task interference effects (slower reaction times [RTs] in the high- vs. the low-load condition). However, such interference did not affect temporal preparation induced by rhythms, with faster RTs after regular than after irregular sequences in the high-load and low-load conditions. These results revealed that secondary tasks demanding memory updating, relative to tasks just demanding rehearsal, produced larger interference effects on overall RTs in the auditory rhythm task. Nevertheless, rhythm regularity exerted a strong temporal preparation effect that survived the interference of the WM task even when both tasks competed for processing resources within the auditory modality. (c) 2015 APA, all rights reserved.
Ghai, Shashank; Schmitz, Gerd; Hwang, Tong-Hun; Effenberg, Alfred O.
2018-01-01
The purpose of the study was to assess the influence of real-time auditory feedback on knee proprioception. Thirty healthy participants were randomly allocated to a control group (n = 15) and experimental group I (n = 15). The participants performed an active knee-repositioning task using their dominant leg, with or without additional real-time auditory feedback in which the frequency was mapped in a convergent manner to two different target angles (40° and 75°). Statistical analysis revealed significant enhancement of knee re-positioning accuracy for the constant and absolute error with real-time auditory feedback, within and across the groups. Besides this convergent condition, we established a second, divergent condition. Here, a step-wise transposition of frequency was performed to explore whether a systematic tuning between auditory feedback and proprioceptive repositioning exists. No significant effects were identified in this divergent auditory feedback condition. An additional experimental group II (n = 20) was also included. Here, we investigated the influence of a larger magnitude and a directional change of the step-wise transposition of frequency. In a first step, the results confirmed the findings of experiment I. Moreover, significant effects on auditory-proprioceptive knee repositioning were evident when divergent auditory feedback was applied. During the step-wise transposition, participants showed systematic modulation of knee movements in the opposite direction of the transposition. We confirm that knee re-positioning accuracy can be enhanced with concurrent application of real-time auditory feedback and that knee re-positioning can be modulated in a goal-directed manner with step-wise transposition of frequency. Clinical implications are discussed with respect to joint position sense in rehabilitation settings. PMID:29568259
Martins, Mauricio Dias; Gingras, Bruno; Puig-Waldmueller, Estela; Fitch, W Tecumseh
2017-04-01
The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built either through recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general 'musical' aptitudes played a role in both recursion and iteration, although the influence of musical training was somewhat independent of melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Short-term plasticity in auditory cognition.
Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko
2007-12-01
Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.
Broderick, Patricia A.; Rosenbaum, Taylor
2013-01-01
Cocaine is a psychostimulant in the pharmacological class of drugs called local anesthetics. Interestingly, cocaine is the only drug in this class whose chemical structure contains a tropane ring and that is, moreover, addictive. The correlation between tropane and addiction is well studied. Another well-studied correlation is that between the psychosis induced by cocaine and the psychosis endogenously present in the schizophrenic patient. Indeed, both of these psychoses exhibit much the same behavioral and neurochemical properties across species. Therefore, in order to study the link between schizophrenia and cocaine addiction, we used a behavioral paradigm called acoustic startle. We used this acoustic startle paradigm in female versus male Sprague-Dawley animals to discriminate possible sex differences in startle responses. The startle method operates through auditory pathways in the brain via a network of sensorimotor gating processes within the auditory cortex, cochlear nuclei, inferior and superior colliculi, and pontine reticular nuclei, in addition to mesocorticolimbic brain reward and nigrostriatal motor circuitries. This paper is the first to report sex differences in responses to acoustic stimuli in Sprague-Dawley animals (Rattus norvegicus), although such gender differences in acoustic startle have been reported in humans (Swerdlow et al. 1997 [1]). The startle method monitors pre-pulse inhibition (PPI) as a measure of the loss of sensorimotor gating in the brain's neuronal auditory network; auditory deficiencies can lead to sensory overload and subsequently cognitive dysfunction. Cocaine addicts and schizophrenic patients, as well as cocaine-treated animals, are reported to exhibit symptoms of defective PPI (Geyer et al., 2001 [2]). Key findings are: (a) Cocaine significantly reduced PPI in both sexes. (b) Females were significantly more sensitive than males; reduced PPI was greater in females than in males. (c) Physiological saline had no effect on startle in either sex
Valence and arousal of emotional stimuli impact cognitive-motor performance in an oddball task.
Lu, Yingzhi; Jaquess, Kyle J; Hatfield, Bradley D; Zhou, Chenglin; Li, Hong
2017-04-01
It is widely recognized that emotions impact an individual's ability to perform a given task. However, little is known about how emotion impacts the various aspects of cognitive-motor performance. We recorded event-related potentials (ERPs) and chronometric responses from twenty-six participants while they performed a cognitive-motor oddball task with four categories of emotional stimuli (high-arousing positive-valence, low-arousing positive-valence, high-arousing negative-valence, and low-arousing negative-valence) serving as "deviant" stimuli. Six chronometric responses (reaction time, press time, return time, choice time, movement time, and total time) and three ERP components (P2, N2, and the late positive potential) were measured. Results indicated that reaction time was significantly affected by the presentation of emotional stimuli. Also observed was a negative relationship between N2 amplitude and performance measures involving reaction time in the low-arousing positive-valence condition. This study provides further evidence that emotional stimuli influence cognitive-motor performance in a specific manner. Copyright © 2017 Elsevier B.V. All rights reserved.
Li, Chunlin; Chen, Kewei; Han, Hongbin; Chui, Dehua; Wu, Jinglong
2012-01-01
Top-down attention to spatial and temporal cues has been thoroughly studied in the visual domain. However, because the neural systems that are important for auditory top-down temporal attention (i.e., attention based on time-interval cues) remain undefined, the differences in brain activity between attention directed to auditory spatial location and attention directed to time intervals are unclear. Using functional magnetic resonance imaging (fMRI), we measured the activation evoked in a cue-target paradigm in which a visual cue directed attention to an auditory target within the spatial or the temporal domain. Imaging results showed that the dorsal frontoparietal network (dFPN), which consists of the bilateral intraparietal sulcus and the frontal eye field (FEF), responded to spatial orienting of attention, but activity was absent in the bilateral FEF during temporal orienting of attention. Furthermore, the fMRI results indicated that activity in the right ventrolateral prefrontal cortex (VLPFC) was significantly stronger during spatial orienting of attention than during temporal orienting of attention, while the dorsolateral prefrontal cortex (DLPFC) showed no significant differences between the two processes. We conclude that the bilateral dFPN and the right VLPFC contribute to auditory spatial orienting of attention. Furthermore, specific activations related to temporal cognition were confirmed within the superior occipital gyrus, tegmentum, motor area, thalamus and putamen. PMID:23166800
Corticofugal modulation of peripheral auditory responses
Terreros, Gonzalo; Delano, Paul H.
2015-01-01
The auditory efferent system originates in the auditory cortex and projects to the medial geniculate body (MGB), inferior colliculus (IC), cochlear nucleus (CN) and superior olivary complex (SOC) reaching the cochlea through olivocochlear (OC) fibers. This unique neuronal network is organized in several afferent-efferent feedback loops including: the (i) colliculo-thalamic-cortico-collicular; (ii) cortico-(collicular)-OC; and (iii) cortico-(collicular)-CN pathways. Recent experiments demonstrate that blocking ongoing auditory-cortex activity with pharmacological and physical methods modulates the amplitude of cochlear potentials. In addition, auditory-cortex microstimulation independently modulates cochlear sensitivity and the strength of the OC reflex. In this mini-review, anatomical and physiological evidence supporting the presence of a functional efferent network from the auditory cortex to the cochlear receptor is presented. Special emphasis is given to the corticofugal effects on initial auditory processing, that is, on CN, auditory nerve and cochlear responses. A working model of three parallel pathways from the auditory cortex to the cochlea and auditory nerve is proposed. PMID:26483647
Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony
2009-01-01
It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…
Beneficial auditory and cognitive effects of auditory brainstem implantation in children.
Colletti, Liliana
2007-09-01
This preliminary study demonstrates the development of hearing ability and shows that there is a significant improvement in some cognitive parameters related to selective visual/spatial attention and to fluid or multisensory reasoning in children fitted with an auditory brainstem implant (ABI). The improvement in cognitive parameters is due to several factors, among which there is certainly, as demonstrated in the literature on cochlear implants (CIs), the activation of the auditory sensory channel, which was previously absent. The findings of the present study indicate that children with cochlear or cochlear nerve abnormalities and associated cognitive deficits should not be excluded from ABI implantation. The indications for ABI have been extended over the last 10 years to adults with non-tumoral (NT) cochlear or cochlear nerve abnormalities who cannot benefit from a CI. We demonstrated that an ABI with surface electrodes may provide sufficient stimulation of the central auditory system in adults for open-set speech recognition. These favourable results motivated us to extend ABI indications to children with profound hearing loss who were not candidates for a CI. This study investigated the performance of young deaf children undergoing ABI, in terms of their auditory perceptual development and their non-verbal cognitive abilities. In our department, from 2000 to 2006, 24 children aged 14 months to 16 years received an ABI for different tumour and non-tumour diseases. Two children had NF2 tumours. Eighteen children had bilateral cochlear nerve aplasia. In this group, nine children had associated cochlear malformations, two had unilateral facial nerve agenesis and two had combined microtia, aural atresia and middle ear malformations. Four of these children had previously been fitted elsewhere with a CI, with no auditory results. One child had bilateral incomplete cochlear partition (type II); one child, who had previously been fitted unsuccessfully elsewhere
Romero, Ana Carla Leite; Alfaya, Lívia Marangoni; Gonçales, Alina Sanches; Frizzo, Ana Claudia Figueiredo; Isaac, Myriam de Lima
2016-01-01
Introduction The auditory system of HIV-positive children may have deficits at various levels, such as a high incidence of middle ear problems that can cause hearing loss. Objective The objective of this study is to characterize the performance of children infected by the Human Immunodeficiency Virus (HIV) on the Simplified Auditory Processing Test (SAPT) and the Staggered Spondaic Word Test. Methods We performed behavioral tests composed of the SAPT and the Portuguese version of the Staggered Spondaic Word Test (SSW). The participants were 15 children infected by HIV, all using antiretroviral medication. Results The children had abnormal auditory processing, as verified by the SAPT and the Portuguese version of the SSW. In the SAPT, 60% of the children presented hearing impairment, and the memory test for verbal sounds showed the most errors (53.33%); in the SSW, 86.67% of the children showed deficiencies indicating deficits in figure-ground, attention, and auditory memory skills. Furthermore, there were more errors under background noise in both age groups; most errors occurred in the left ear in the group of 8-year-olds, with similar results for the group aged 9 years. Conclusion The high incidence of hearing loss in children with HIV, and its comorbidity with several biological and environmental factors, indicate the need for: 1) family and professional awareness of the impact of auditory alterations on the development and learning of children with HIV, and 2) access to educational plans and follow-up with multidisciplinary teams as early as possible to minimize the damage caused by auditory deficits. PMID:28050213
Chernyshev, Boris V; Bryzgalov, Dmitri V; Lazarev, Ivan E; Chernysheva, Elena G
2016-08-03
Current understanding of feature binding remains controversial. Studies involving mismatch negativity (MMN) measurement show a low level of binding, whereas behavioral experiments suggest a higher level. We examined the possibility that the two levels of feature binding coexist and may be shown within one experiment. The electroencephalogram was recorded while participants were engaged in an auditory two-alternative choice task, which was a combination of the oddball and the condensation tasks. Two types of deviant target stimuli were used: complex stimuli, which required feature conjunction to be identified, and simple stimuli, which differed from standard stimuli in a single feature. Two behavioral outcomes, correct responses and errors, were analyzed separately. Responses to complex stimuli were slower and less accurate than responses to simple stimuli. MMN was prominent and its amplitude was similar for simple and complex stimuli, even though these stimuli differed from the standards in one and two features, respectively. Errors in response to complex stimuli, but not to simple stimuli, were associated with decreased MMN amplitude. P300 amplitude was greater for complex stimuli than for simple stimuli. Our data are compatible with the explanation that feature binding in the auditory modality depends on two concurrent levels of processing. We speculate that the earlier level, related to MMN generation, is an essential and critical stage. Yet a later analysis is also carried out, affecting P300 amplitude and response time. The current findings provide resolution to conflicting views on the nature of feature binding and show that feature binding is a distributed, multilevel process.
Rogenmoser, Lars; Elmer, Stefan; Jäncke, Lutz
2015-03-01
Absolute pitch (AP) is the rare ability to identify or produce different pitches without using reference tones. At least two sequential processing stages are assumed to contribute to this phenomenon. The first recruits a pitch memory mechanism at an early stage of auditory processing, whereas the second is driven by a later cognitive mechanism (pitch labeling). Several investigations have used active tasks, but it is unclear how these two mechanisms contribute to AP during passive listening. The present work investigated the temporal dynamics of tone processing in AP and non-AP (NAP) participants by using EEG. We applied a passive oddball paradigm with between- and within-tone category manipulations and analyzed the MMN reflecting the early stage of auditory processing and the P3a response reflecting the later cognitive mechanism during the second processing stage. Results did not reveal between-group differences in MMN waveforms. By contrast, the P3a response was specifically associated with AP and sensitive to the processing of different pitch types. Specifically, AP participants exhibited smaller P3a amplitudes, especially in between-tone category conditions, and P3a responses correlated significantly with the age of commencement of musical training, suggesting an influence of early musical exposure on AP. Our results reinforce the current opinion that the representation of pitches at the processing level of the auditory-related cortex is comparable among AP and NAP participants, whereas the later processing stage is critical for AP. Results are interpreted as reflecting cognitive facilitation in AP participants, possibly driven by the availability of multiple codes for tones.
Tamayo-Orrego, Lukas; Osorio Forero, Alejandro; Quintero Giraldo, Lina Paola; Parra Sánchez, José Hernán; Varela, Vilma; Restrepo, Francia
2015-01-01
To better understand the neurophysiological substrates of attention deficit/hyperactivity disorder (ADHD), a study of event-related potentials (ERPs) was performed in Colombian patients with inattentive and combined ADHD. A case-control, cross-sectional study was designed. The sample was composed of 180 subjects between 5 and 15 years of age (mean, 9.25±2.6) from local schools in Manizales. The sample was divided equally into ADHD and control groups, and the subjects were paired by age and gender. The diagnosis was made using the DSM-IV-TR criteria, the Conners and WISC-III tests, a psychiatric interview (MINIKID), and a medical evaluation. ERPs were recorded in a visual and auditory passive oddball paradigm. Latency and amplitude of the N100, N200 and P300 components for common and rare stimuli were used for statistical comparisons. ADHD subjects showed differences in N200 amplitude and P300 latency in the auditory task. N200 amplitude was reduced in response to visual stimuli. ADHD subjects with combined symptoms showed a delayed P300 in response to auditory stimuli, whereas inattentive subjects exhibited differences in the amplitude of N100 and N200. Combined ADHD patients showed longer N100 latency and smaller N200-P300 amplitudes compared with inattentive ADHD subjects. The results show differences in event-related potentials between combined and inattentive ADHD subjects. Copyright © 2014 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved.
Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam
2011-01-01
The aim was to assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. The material consisted of 50 children aged 7-9 years. Children with articulation disorders remained under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry and consultations with speech therapists and a psychologist. Additionally, a set of electrophysiological examinations was performed: registration of the N2, P2 and P300 waves, together with a psychoacoustic test of central auditory function, the frequency pattern test (FPT). Next, the children took part in regular auditory training and attended speech therapy. After treatment and therapy, speech was assessed again, the psychoacoustic tests were repeated, and P300 cortical potentials were recorded. Statistical analyses were then performed. The analyses revealed that auditory training is very effective in patients with dyslalia and other central auditory disorders. Auditory training may be a very effective therapy supporting speech therapy in children suffering from dyslalia coexisting with articulation and central auditory disorders, and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.
Henshall, Katherine R; Sergejew, Alex A; McKay, Colette M; Rance, Gary; Shea, Tracey L; Hayden, Melissa J; Innes-Brown, Hamish; Copolov, David L
2012-05-01
Central auditory processing in schizophrenia patients with a history of auditory hallucinations has been reported to be impaired, and abnormalities of interhemispheric transfer have been implicated in these patients. This study examined interhemispheric functional connectivity between auditory cortical regions, using temporal information obtained from latency measures of the auditory N1 evoked potential. Interhemispheric Transfer Times (IHTTs) were compared across 3 subject groups: schizophrenia patients who had experienced auditory hallucinations, schizophrenia patients without a history of auditory hallucinations, and normal controls. Pure tones and single-syllable words were presented monaurally to each ear, while EEG was recorded continuously. IHTT was calculated for each stimulus type by comparing the latencies of the auditory N1 evoked potential recorded contralaterally and ipsilaterally to the ear of stimulation. The IHTTs for pure tones did not differ between groups. For word stimuli, the IHTT was significantly different across the 3 groups: the IHTT was close to zero in normal controls, was highest in the AH group, and was negative (shorter latencies ipsilaterally) in the nonAH group. Differences in IHTTs may be attributed to transcallosal dysfunction in the AH group, but altered or reversed cerebral lateralization in nonAH participants is also possible. Copyright © 2012 Elsevier B.V. All rights reserved.
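The IHTT measure used above is simply the difference between N1 latencies recorded contralaterally and ipsilaterally to the stimulated ear. A minimal sketch, with an assumed N1 search window and toy waveforms rather than real recordings:

```python
# Toy IHTT computation: ipsilateral minus contralateral N1 latency.
import numpy as np

def n1_latency(waveform, times, window=(0.08, 0.15)):
    """Latency (s) of the most negative point in an assumed 80-150 ms N1 window."""
    mask = (times >= window[0]) & (times <= window[1])
    return times[mask][np.argmin(waveform[mask])]

def ihtt(contra_wave, ipsi_wave, times):
    """Interhemispheric transfer time: ipsilateral minus contralateral N1 latency."""
    return n1_latency(ipsi_wave, times) - n1_latency(contra_wave, times)

# Example: the ipsilateral N1 peaks about 12 ms later than the contralateral N1.
fs = 1000
times = np.arange(0, 0.3, 1 / fs)
contra = -np.exp(-((times - 0.100) ** 2) / (2 * 0.01 ** 2))
ipsi = -np.exp(-((times - 0.112) ** 2) / (2 * 0.01 ** 2))
print(f"IHTT = {ihtt(contra, ipsi, times) * 1000:.0f} ms")
```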
Scarbel, Lucie; Beautemps, Denis; Schwartz, Jean-Luc; Sato, Marc
2014-01-01
One classical argument in favor of a functional role of the motor system in speech perception comes from the close-shadowing task, in which a subject has to identify and repeat an auditory speech stimulus as quickly as possible. The fact that close-shadowing can occur very rapidly, and much faster than manual identification of the speech target, is taken to suggest that perceptually induced speech representations are already shaped in a motor-compatible format. Another argument is provided by audiovisual interactions, often interpreted as referring to a multisensory-motor framework. In this study, we attempted to combine these two paradigms by testing whether the visual modality could speed up the motor response in a close-shadowing task. To this aim, both oral and manual responses were evaluated during the perception of auditory and audiovisual speech stimuli, clear or embedded in white noise. Overall, oral responses were faster than manual ones, but it also appeared that they were less accurate in noise, which suggests that the motor representations evoked by the speech input could be rough at a first processing stage. In the presence of acoustic noise, the audiovisual modality led to both faster and more accurate responses than the auditory modality. No interaction was observed, however, between modality and response type. Altogether, these results are interpreted within a two-stage sensory-motor framework, in which the auditory and visual streams are integrated together and with internally generated motor representations before a final decision may be available. PMID:25009512
Neuromechanistic Model of Auditory Bistability
Rankin, James; Sussman, Elyse; Rinzel, John
2015-01-01
Sequences of higher-frequency A and lower-frequency B tones repeating in an ABA- triplet pattern are widely used to study auditory streaming. One may experience either an integrated percept, a single ABA-ABA- stream, or a segregated percept, separate but simultaneous streams A-A-A-A- and -B---B--. During minutes-long presentations, subjects may report irregular alternations between these interpretations. We combine neuromechanistic modeling and psychoacoustic experiments to study these persistent alternations and to characterize the effects of manipulating stimulus parameters. Unlike many phenomenological models with abstract, percept-specific competition and fixed inputs, our network model comprises neuronal units with sensory-feature-dependent inputs that mimic the pulsatile-like A1 responses to tones in the ABA- triplets. It embodies a neuronal computation for percept competition thought to occur beyond primary auditory cortex (A1). Mutual inhibition, adaptation and noise are implemented. We include slow NMDA-mediated recurrent excitation for local temporal memory that enables linkage across sound gaps from one triplet to the next. Percepts in our model are identified from the firing patterns of the neuronal units. We predict with the model that manipulations of the frequency difference between tones A and B should affect the dominance durations of the stronger percept, the one dominant for a larger fraction of the time, more than those of the weaker percept, a property that has been previously established and generalized across several visual bistable paradigms. We confirm the qualitative prediction with our psychoacoustic experiments and use the behavioral data to further constrain and improve the model, achieving quantitative agreement between experimental and modeling results. Our work and model provide a platform that can be extended to consider other stimulus conditions, including the effects of context and volition. PMID:26562507
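The competition stage the model describes (mutual inhibition between percept-selective units, slow adaptation, and noise) can be caricatured with a two-unit firing-rate model. The sketch below is a generic textbook-style implementation under assumed parameters; it is not the authors' network and omits the tonotopic inputs and NMDA-mediated memory term.

```python
# Two-unit rivalry caricature: mutual inhibition + slow adaptation + noise.
# Parameters are generic illustrative values, not those fit in the paper.
import numpy as np

rng = np.random.default_rng(6)
dt, T = 1e-3, 60.0                     # time step (s) and simulated duration (s)
n = int(T / dt)

beta, g_a = 1.2, 0.6                   # cross-inhibition and adaptation strengths
tau_r, tau_a = 0.01, 2.0               # firing-rate and adaptation time constants
noise_sd = 0.08
inputs = np.array([1.0, 1.0])          # equal drive to the two percept units

def gain(x):
    """Sigmoidal input-output function for the firing-rate units."""
    return 1.0 / (1.0 + np.exp(-(x - 0.5) / 0.1))

r = np.zeros((n, 2))                   # firing rates of the two percept units
a = np.zeros(2)                        # slow adaptation variables
for k in range(1, n):
    drive = inputs - beta * r[k - 1, ::-1] - g_a * a   # each unit inhibited by the other
    noise = noise_sd * np.sqrt(dt) * rng.normal(size=2)
    r[k] = r[k - 1] + dt / tau_r * (-r[k - 1] + gain(drive)) + noise
    a += dt / tau_a * (-a + r[k])

dominant = np.argmax(r, axis=1)        # which percept "wins" at each time step
switches = np.count_nonzero(np.diff(dominant))
print(f"perceptual switches in {T:.0f} s: {switches}")
```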
Auditory Learning. Dimensions in Early Learning Series.
ERIC Educational Resources Information Center
Zigmond, Naomi K.; Cicci, Regina
The monograph discusses the psycho-physiological operations for processing of auditory information, the structure and function of the ear, the development of auditory processes from fetal responses through discrimination, language comprehension, auditory memory, and auditory processes related to written language. Disorders of auditory learning…
AUDITORY ASSOCIATIVE MEMORY AND REPRESENTATIONAL PLASTICITY IN THE PRIMARY AUDITORY CORTEX
Weinberger, Norman M.
2009-01-01
Historically, the primary auditory cortex has been largely ignored as a substrate of auditory memory, perhaps because studies of associative learning could not reveal the plasticity of receptive fields (RFs). The use of a unified experimental design, in which RFs are obtained before and after standard training (e.g., classical and instrumental conditioning) revealed associative representational plasticity, characterized by facilitation of responses to tonal conditioned stimuli (CSs) at the expense of other frequencies, producing CS-specific tuning shifts. Associative representational plasticity (ARP) possesses the major attributes of associative memory: it is highly specific, discriminative, rapidly acquired, consolidates over hours and days and can be retained indefinitely. The nucleus basalis cholinergic system is sufficient both for the induction of ARP and for the induction of specific auditory memory, including control of the amount of remembered acoustic details. Extant controversies regarding the form, function and neural substrates of ARP appear largely to reflect different assumptions, which are explicitly discussed. The view that the forms of plasticity are task-dependent is supported by ongoing studies in which auditory learning involves CS-specific decreases in threshold or bandwidth without affecting frequency tuning. Future research needs to focus on the factors that determine ARP and their functions in hearing and in auditory memory. PMID:17344002
Chonchaiya, Weerasak; Tardif, Twila; Mai, Xiaoqin; Xu, Lin; Li, Mingyan; Kaciroti, Niko; Kileny, Paul R; Shao, Jie; Lozoff, Betsy
2013-03-01
Auditory processing capabilities at the subcortical level have been hypothesized to impact an individual's development of both language and reading abilities. The present study examined whether auditory processing capabilities relate to language development in healthy 9-month-old infants. Participants were 71 infants (31 boys and 40 girls) with both Auditory Brainstem Response (ABR) and language assessments. At 6 weeks and/or 9 months of age, the infants underwent ABR testing using both a standard hearing screening protocol with 30 dB clicks and a second protocol using click pairs separated by 8, 16, and 64-ms intervals presented at 80 dB. We evaluated the effects of interval duration on ABR latency and amplitude elicited by the second click. At 9 months, language development was assessed via parent report on the Chinese Communicative Development Inventory - Putonghua version (CCDI-P). Wave V latency z-scores of the 64-ms condition at 6 weeks showed strong direct relationships with Wave V latency in the same condition at 9 months. More importantly, shorter Wave V latencies at 9 months showed strong relationships with the CCDI-P composite consisting of phrases understood, gestures, and words produced. Likewise, infants who had greater decreases in Wave V latencies from 6 weeks to 9 months had higher CCDI-P composite scores. Females had higher language development scores and shorter Wave V latencies at both ages than males. Interestingly, when the ABR Wave V latencies at both ages were taken into account, the direct effects of gender on language disappeared. In conclusion, these results support the importance of low-level auditory processing capabilities for early language acquisition in a population of typically developing young infants. Moreover, the auditory brainstem response in this paradigm shows promise as an electrophysiological marker to predict individual differences in language development in young children. © 2012 Blackwell Publishing Ltd.
Electrocortical changes associated with minocycline treatment in fragile X syndrome.
Schneider, Andrea; Leigh, Mary Jacena; Adams, Patrick; Nanakul, Rawi; Chechi, Tasleem; Olichney, John; Hagerman, Randi; Hessl, David
2013-10-01
Minocycline normalizes synaptic connections and behavior in the knockout mouse model of fragile X syndrome (FXS). Human targeted treatment trials with minocycline have shown benefits in behavioral measures and parent reports. Event-related potentials (ERPs) may provide a sensitive method of monitoring treatment response and changes in coordinated brain activity. Measurement of electrocortical changes due to minocycline was performed in a double-blind, placebo-controlled crossover treatment trial in children with FXS. Children with FXS (mean age 10.5 years) were randomized to minocycline or placebo treatment for 3 months and then changed to the other treatment for 3 months. The minocycline dosage ranged from 25 to 100 mg daily, based on weight. Twelve individuals with FXS (eight male, four female) completed ERP studies using a passive auditory oddball paradigm. Current source density (CSD) and ERP analysis at baseline showed high-amplitude, long-latency components over temporal regions. After 3 months of treatment with minocycline, the temporal N1 and P2 amplitudes were significantly reduced compared with placebo. There was a significant amplitude increase of the central P2 component on minocycline. Electrocortical habituation to auditory stimuli improved with minocycline treatment. Our study demonstrated improvements of the ERP in children with FXS treated with minocycline, and the potential feasibility and sensitivity of ERPs as a cognitive biomarker in FXS treatment trials.
Verhulst, Sarah; Altoè, Alessandro; Vasilkov, Viacheslav
2018-03-01
Models of the human auditory periphery range from very basic functional descriptions of auditory filtering to detailed computational models of cochlear mechanics, inner-hair cell (IHC), auditory-nerve (AN) and brainstem signal processing. It is challenging to include detailed physiological descriptions of cellular components into human auditory models because single-cell data stems from invasive animal recordings while human reference data only exists in the form of population responses (e.g., otoacoustic emissions, auditory evoked potentials). To embed physiological models within a comprehensive human auditory periphery framework, it is important to capitalize on the success of basic functional models of hearing and render their descriptions more biophysical where possible. At the same time, comprehensive models should capture a variety of key auditory features, rather than fitting their parameters to a single reference dataset. In this study, we review and improve existing models of the IHC-AN complex by updating their equations and expressing their fitting parameters into biophysical quantities. The quality of the model framework for human auditory processing is evaluated using recorded auditory brainstem response (ABR) and envelope-following response (EFR) reference data from normal and hearing-impaired listeners. We present a model with 12 fitting parameters from the cochlea to the brainstem that can be rendered hearing impaired to simulate how cochlear gain loss and synaptopathy affect human population responses. The model description forms a compromise between capturing well-described single-unit IHC and AN properties and human population response features. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha
2016-12-01
Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.
Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M
1991-06-01
An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.
SanMiguel, Iria; Corral, María-José; Escera, Carles
2008-07-01
The sensitivity of involuntary attention to top-down modulation was tested using an auditory-visual distraction task and a working memory (WM) load manipulation in subjects performing a simple visual classification task while ignoring contingent auditory stimulation. The sounds were repetitive standard tones (80%) and environmental novel sounds (20%). Distraction caused by the novel sounds was compared across a 1-back WM condition and a no-memory control condition, both involving the comparison of two digits. Event-related brain potentials (ERPs) to the sounds were recorded, and the N1/MMN (mismatch negativity), novelty-P3, and RON components were identified in the novel minus standard difference waveforms. Distraction was reduced in the WM condition, both behaviorally and as indexed by an attenuation of the late phase of the novelty-P3. The transient/change detection mechanism indexed by MMN was not affected by the WM manipulation. Sustained slow frontal and parietal waveforms related to WM processes were found on the standard ERPs. The present results indicate that distraction caused by irrelevant novel sounds is reduced when a WM component is involved in the task, and that this modulation by WM load takes place at a late stage of the orienting response, all in all confirming that involuntary attention is under the control of top-down mechanisms. Moreover, as these results contradict predictions of the load theory of selective attention and cognitive control, it is suggested that the WM load effects on distraction depend on the nature of the distractor-target relationships.
Auditory and non-auditory effects of noise on health
Basner, Mathias; Babisch, Wolfgang; Davis, Adrian; Brink, Mark; Clark, Charlotte; Janssen, Sabine; Stansfeld, Stephen
2014-01-01
Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mechanisms involved in noise-induced hair-cell and nerve damage has substantially increased, and preventive and therapeutic drugs will probably become available within 10 years. Evidence of the non-auditory effects of environmental noise exposure on public health is growing. Observational and experimental studies have shown that noise exposure leads to annoyance, disturbs sleep and causes daytime sleepiness, affects patient outcomes and staff performance in hospitals, increases the occurrence of hypertension and cardiovascular disease, and impairs cognitive performance in schoolchildren. In this Review, we stress the importance of adequate noise prevention and mitigation strategies for public health. PMID:24183105
Prinz, P; Ronacher, B
2002-08-01
The temporal resolution of auditory receptors of locusts was investigated by applying noise stimuli with sinusoidal amplitude modulations and by computing temporal modulation transfer functions. These transfer functions showed mostly bandpass characteristics, which are rarely found in other species at the level of receptors. From the upper cut-off frequencies of the modulation transfer functions, the minimum integration times were calculated. Minimum integration times showed no significant correlation with the receptor spike rates but depended strongly on body temperature. At 20 degrees C the average minimum integration time was 1.7 ms, dropping to 0.95 ms at 30 degrees C. The values found in this study correspond well to the range of minimum integration times found in birds and mammals. Gap detection is another standard paradigm to investigate temporal resolution. In locusts and other grasshoppers, application of this paradigm has yielded minimum detectable gap widths that are approximately twice as large as the minimum integration times reported here.
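As an illustration of this type of analysis, the hedged sketch below estimates a temporal modulation transfer function from simulated spike times using vector strength, reads off an upper cut-off at the -3 dB point, and converts it to an integration time constant under one common convention (tau = 1/(2*pi*f_c)); the paper's exact response measure and conversion formula are not reproduced here.

```python
# Generic tMTF recipe on simulated spike data (not the paper's own analysis).
import numpy as np

def vector_strength(spike_times, f_mod):
    """Synchronization of spikes to one modulation cycle (0 = none, 1 = perfect)."""
    phases = 2 * np.pi * f_mod * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(1)
f_mods = np.array([10.0, 20.0, 40.0, 80.0, 160.0, 320.0])   # modulation frequencies (Hz)
tmtf = []
for f in f_mods:
    cycles = np.arange(int(f))                               # one toy spike per cycle, 1 s of stimulation
    jitter = rng.normal(0.0, 0.5e-3 * (f / 10.0), size=cycles.size)
    spikes = cycles / f + jitter                             # phase locking degrades with frequency
    tmtf.append(vector_strength(spikes, f))
tmtf = np.array(tmtf)

# upper cut-off: highest frequency still within 3 dB (factor 1/sqrt(2)) of the maximum
cutoff = f_mods[tmtf >= tmtf.max() / np.sqrt(2)].max()
tau = 1.0 / (2 * np.pi * cutoff)                             # one common convention only
print(f"upper cut-off ~{cutoff:.0f} Hz, tau ~ {tau * 1e3:.2f} ms under that convention")
```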
Predictive uncertainty in auditory sequence processing
Hansen, Niels Chr.; Pearce, Marcus T.
2014-01-01
Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty—a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music. PMID:25295018
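Both uncertainty measures described in the abstract reduce to Shannon entropies of different distributions. The sketch below uses invented numbers; in the study the predictive probabilities came from a variable-order Markov model of melodic continuation, and inferred uncertainty was derived from probe-tone expectedness ratings.

```python
# Shannon entropy as a measure of predictive (prospective) and inferred uncertainty.
import numpy as np

def shannon_entropy(p):
    """Entropy in bits of a (possibly unnormalized) non-negative distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0] / p[p > 0].sum()
    return float(-(p * np.log2(p)).sum())

# hypothetical predictive distribution over 12 chromatic continuations of a melody
predictive = np.array([0.30, 0.20, 0.15, 0.10, 0.08, 0.05,
                       0.04, 0.03, 0.02, 0.01, 0.01, 0.01])
print(f"predictive uncertainty: {shannon_entropy(predictive):.2f} bits")

# hypothetical probe-tone expectedness ratings (1-7) for the same 12 probes,
# renormalized into a distribution before taking the entropy ("inferred uncertainty")
ratings = np.array([6.5, 5.8, 4.9, 3.2, 3.0, 2.5, 2.1, 2.0, 1.8, 1.5, 1.4, 1.2])
print(f"inferred uncertainty: {shannon_entropy(ratings):.2f} bits")
```

A flat predictive distribution gives the maximum entropy (log2 of the number of alternatives, here about 3.58 bits), while a sharply peaked one approaches 0 bits, which is the sense in which entropy captures prospective uncertainty.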
ERIC Educational Resources Information Center
Tillery, Kim L.; Katz, Jack; Keller, Warren D.
2000-01-01
A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…
Engineer, C.T.; Centanni, T.M.; Im, K.W.; Borland, M.S.; Moreno, N.A.; Carraway, R.S.; Wilson, L.G.; Kilgard, M.P.
2014-01-01
Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments which may contribute to the deficits in communication observed in individuals with autism. In this study, we document auditory cortex responses in rats prenatally exposed to VPA. We recorded local field potentials and multiunit responses to speech sounds in primary auditory cortex, anterior auditory field, ventral auditory field, and posterior auditory field in VPA exposed and control rats. Prenatal VPA exposure severely degrades the precise spatiotemporal patterns evoked by speech sounds in secondary, but not primary, auditory cortex. This result parallels findings in humans and suggests that secondary auditory fields may be more sensitive to environmental disturbances and may provide insight into possible mechanisms related to auditory deficits in individuals with autism. PMID:24639033
NASA Astrophysics Data System (ADS)
Bohórquez, Jorge; Özdamar, Özcan; Morawski, Krzysztof; Telischi, Fred F.; Delgado, Rafael E.; Yavuz, Erdem
2005-06-01
A system capable of comprehensive and detailed intraoperative monitoring of the cochlea and the auditory nerve was developed. The cochlear blood flow (CBF) and the electrocochleogram (ECochGm) were recorded at the round window (RW) niche using a specially designed otic probe. The ECochGm was further processed to obtain cochlear microphonics (CM) and compound action potentials (CAP). The amplitude and phase of the CM were used to quantify the activity of outer hair cells (OHC); CAP amplitude and latency were used to describe the auditory nerve and the synaptic activity of the inner hair cells (IHC). In addition, concurrent monitoring with a second electrophysiological channel was achieved by recording the compound nerve action potential (CNAP) obtained directly from the auditory nerve. Stimulation paradigms, instrumentation and signal processing methods were developed to extract and differentiate the activity of the OHC and the IHC in response to three different frequencies. Narrow-band acoustical stimuli elicited CM signals indicating mainly nonlinear operation of the mechano-electrical transduction of the OHCs. Special envelope detectors were developed and applied to the ECochGm to extract the CM fundamental component and its harmonics in real time. The system was extensively validated in experimental animal surgeries by performing nerve compressions and manipulations.
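The abstract does not specify the envelope detectors beyond their purpose, so the sketch below shows one standard way to track the amplitude and phase of the CM fundamental at the stimulus frequency: quadrature (lock-in style) demodulation followed by a low-pass filter, run here offline on a simulated ECochGm trace with invented parameters.

```python
# Quadrature demodulation of a simulated ECochGm at the stimulus frequency.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 20000.0                        # sampling rate (Hz)
f0 = 1000.0                         # stimulus frequency = CM fundamental (Hz)
rng = np.random.default_rng(0)
t = np.arange(0, 0.2, 1 / fs)
ecochg = 2.0 * np.sin(2 * np.pi * f0 * t + 0.3) + 0.5 * rng.standard_normal(t.size)

# multiply by quadrature references and low-pass the products (zero-phase, offline)
b, a = butter(4, 50.0 / (fs / 2))   # 50 Hz low-pass
I = filtfilt(b, a, ecochg * np.cos(2 * np.pi * f0 * t))
Q = filtfilt(b, a, ecochg * np.sin(2 * np.pi * f0 * t))

amplitude = 2.0 * np.sqrt(I**2 + Q**2)      # instantaneous CM amplitude estimate
phase = np.arctan2(I, Q)                    # phase relative to the reference
mid = t.size // 2
print(f"estimated amplitude ~{amplitude[mid]:.2f} (true 2.0), phase ~{phase[mid]:.2f} rad (true 0.3)")
```

A real-time version would replace the zero-phase filter with a causal low-pass filter, and the same demodulation can be repeated at 2*f0, 3*f0, and so on to follow the harmonics.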
Giroud, Nathalie; Lemke, Ulrike; Reich, Philip; Bauer, Julia; Widmer, Susann; Meyer, Martin
2018-01-01
Cognitive abilities such as attention or working memory can support older adults during speech perception. However, cognitive abilities as well as speech perception decline with age, leading to the expenditure of effort during speech processing. This longitudinal study therefore investigated age-related differences in electrophysiological processes during speech discrimination and assessed the extent of enhancement to such cognitive auditory processes through repeated auditory exposure. For that purpose, accuracy and reaction time were compared between 13 older adults (62-76 years) and 15 middle-aged (28-52 years) controls in an active oddball paradigm which was administered at three consecutive measurement time points at intervals of 2 weeks, while EEG was recorded. As a standard stimulus, the nonsense syllable /'a:ʃa/ was used, while the nonsense syllable /'a:sa/ and a morphing between /'a:ʃa/ and /'a:sa/ served as deviants. N2b and P3b ERP responses were evaluated as a function of age, deviant, and measurement time point using a data-driven topographical microstate analysis. From middle age to old age, age-related decline in attentive perception (as reflected in the N2b-related microstates) and in memory updating and attentional processes (as reflected in the P3b-related microstates) was found, as indicated by both lower neural responses and later onsets of the respective cortical networks, and in age-related changes in frontal activation during attentional stimulus processing. Importantly, N2b- and P3b-related microstates changed as a function of repeated stimulus exposure in both groups. This research therefore suggests that experience with auditory stimuli can support auditory neurocognitive processes in normal hearing adults into advanced age. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Threlkeld, Steven W; McClure, Melissa M; Rosen, Glenn D; Fitch, R Holly
2006-09-13
Induction of a focal freeze lesion to the skullcap of a 1-day-old rat pup leads to the formation of microgyria similar to those identified postmortem in human dyslexics. Rats with microgyria exhibit rapid auditory processing deficits similar to those seen in language-impaired (LI) children and infants at risk for LI, and these effects are particularly marked in juvenile as compared with adult subjects. In the current study, a startle response paradigm was used to investigate gap detection in juvenile and adult rats that received bilateral freezing lesions or sham surgery on postnatal day (P) 1, 3 or 5. Microgyria were confirmed in P1 and P3 lesion rats, but not in the P5 lesion group. We found a significant reduction in brain weight and neocortical volume in P1 and P3 lesioned brains relative to shams. Juvenile (P27-39) behavioral data indicated significant rapid auditory processing deficits in all three lesion groups as compared to sham subjects, while adult (P60+) data revealed a persistent disparity only between P1-lesioned rats and shams. Combined results suggest that generalized pathology affecting neocortical development is responsible for the presence of rapid auditory processing deficits, rather than factors specific to the formation of microgyria per se. Finally, results show that the window for the induction of rapid auditory processing deficits through disruption of neurodevelopment appears to extend beyond the endpoint for cortical neuronal migration, although the persistent deficits exhibited by P1-lesioned subjects suggest a secondary neurodevelopmental window, at the time of cortical neuromigration, representing a peak period of vulnerability.
McGurk illusion recalibrates subsequent auditory perception
Lüttke, Claudia S.; Ekman, Matthias; van Gerven, Marcel A. J.; de Lange, Floris P.
2016-01-01
Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory /aba/ and a visual /aga/ are merged into the percept ‘ada’. It is less clear, however, whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as ‘ada’. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as ‘ada’, activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input. PMID:27611960
[Auditory training in workshops: group therapy option].
Santos, Juliana Nunes; do Couto, Isabel Cristina Plais; Amorim, Raquel Martins da Costa
2006-01-01
BACKGROUND: auditory training in groups. AIM: to verify, in a group of individuals with mental retardation, the efficacy of auditory training in a workshop environment. METHOD: a longitudinal prospective study with 13 mentally retarded individuals from the Associação de Pais e Amigos do Excepcional (APAE) of Congonhas, divided into two groups, case (n=5) and control (n=8), who were submitted to ten auditory training sessions after the integrity of the peripheral auditory system was verified through evoked otoacoustic emissions. Participants were evaluated using a specific protocol assessing auditory abilities (sound localization, auditory identification, memory, sequencing, auditory discrimination and auditory comprehension) at the beginning and at the end of the project. Data entry, processing and analysis were carried out with the Epi Info 6.04 software. RESULTS: the groups did not differ in age (mean = 23.6 years) or gender (40% male). In the first evaluation both groups presented similar performances. In the final evaluation an improvement in auditory abilities was observed for the individuals in the case group. When comparing the mean number of correct answers obtained by both groups in the first and final evaluations, statistically significant results were obtained for sound localization (p=0.02), auditory sequencing (p=0.006) and auditory discrimination (p=0.03). CONCLUSION: group auditory training proved to be effective in individuals with mental retardation, with an observed improvement in auditory abilities. More studies, with a larger number of participants, are necessary to confirm the findings of the present research. These results will help public health professionals to reanalyze the therapy models used, so that they can apply specific methods according to individual needs, such as auditory training workshops.
Kawasaki, Toshihiko; Tanaka, Shin; Wang, Jijun; Hokama, Hiroto; Hiramatsu, Kenichi
2004-02-01
The purpose of the present study was to investigate the neural substrates underlying event-related potential (ERP) abnormalities, with respect to the generators of the ERP components in depressed patients. Using an oddball paradigm, ERP from auditory stimuli were recorded from 22 unmedicated patients with current depressive episodes and compared with those from 22 age- and gender-matched normal controls. Cortical current densities of the N100 and P300 components were analyzed using low-resolution electromagnetic tomography (LORETA). Group differences in cortical current density were mapped on a 3-D cortex model. The results revealed that N100 cortical current densities did not differ between the two groups, while P300 cortical current densities were significantly lower in depressed patients over the bilateral temporal lobes, the left frontal region, and the right temporal-parietal area. Furthermore, the cortical area in which the group difference in P300 current density had been identified was remarkably larger over the right than the left hemisphere, thus supporting the hypothesis of right hemisphere dysfunction in depression.
Study of cognitive functions in newly diagnosed cases of subclinical and clinical hypothyroidism.
Sharma, Kirti; Behera, Joshil Kumar; Sood, Sushma; Rajput, Rajesh; Satpal; Praveen, Prashant
2014-01-01
Hypothyroidism is associated with significant neurocognitive deficits because hypothyroidism prevents the brain from adequately sustaining the energy-consuming processes needed for neurotransmission, memory, and other higher brain functions. Hence, this study was done to assess the cognitive functions of newly diagnosed subclinical and clinical hypothyroid patients using the evoked response potential P300. 75 patients each with newly diagnosed subclinical and clinical hypothyroidism attending the endocrinology clinic, and 75 healthy age- and sex-matched euthyroid controls, were included in the study. P300 was recorded with the Record Medicare System Polyrite, Chandigarh, using an auditory "oddball" paradigm. The data were analyzed using ANOVA followed by a post hoc Tukey's test. Newly diagnosed clinical hypothyroid patients showed a significant increase in P300 latency compared with controls (P < 0.05) and subclinical cases (P < 0.01), while there was no significant difference between the P300 latency of subclinical cases and the control group. Also, there was no significant difference in P300 amplitude among the three groups. P300 latency in newly diagnosed clinical hypothyroid cases is significantly increased compared with newly diagnosed subclinical cases and controls.
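The reported analysis (one-way ANOVA followed by a post hoc Tukey test on P300 latency across the three groups) follows a standard pipeline, sketched below on invented latency values; the study's actual data are not reproduced.

```python
# One-way ANOVA plus Tukey HSD on hypothetical P300 latencies (ms).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
control     = rng.normal(330, 20, 75)     # invented group means and spreads
subclinical = rng.normal(340, 20, 75)
clinical    = rng.normal(365, 25, 75)

F, p = stats.f_oneway(control, subclinical, clinical)
print(f"ANOVA: F = {F:.2f}, p = {p:.4g}")

latency = np.concatenate([control, subclinical, clinical])
group = ["control"] * 75 + ["subclinical"] * 75 + ["clinical"] * 75
print(pairwise_tukeyhsd(latency, group))   # pairwise group comparisons with adjusted p-values
```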
Cai, Shanqing; Beal, Deryk S.; Ghosh, Satrajit S.; Tiede, Mark K.; Guenther, Frank H.; Perkell, Joseph S.
2012-01-01
Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants’ compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls’ and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands. PMID:22911857
Cortical auditory evoked potentials in the assessment of auditory neuropathy: two case studies.
Pearce, Wendy; Golding, Maryanne; Dillon, Harvey
2007-05-01
Infants with auditory neuropathy and possible hearing impairment are being identified at very young ages through the implementation of hearing screening programs. The diagnosis is commonly based on evidence of normal cochlear function but abnormal brainstem function. This lack of normal brainstem function is highly problematic when prescribing amplification in young infants because prescriptive formulae require the input of hearing thresholds that are normally estimated from auditory brainstem responses to tonal stimuli. Without this information, there is great uncertainty surrounding the final fitting. Cortical auditory evoked potentials may, however, still be evident and reliably recorded to speech stimuli presented at conversational levels. The case studies of two infants are presented that demonstrate how these higher order electrophysiological responses may be utilized in the audiological management of some infants with auditory neuropathy.
Procedures for central auditory processing screening in schoolchildren.
Carvalho, Nádia Giulian de; Ubiali, Thalita; Amaral, Maria Isabel Ramos do; Santos, Maria Francisca Colella
2018-03-22
Central auditory processing screening in schoolchildren has led to debates in the literature, regarding both the protocol to be used and the importance of actions aimed at prevention and promotion of auditory health. Defining effective screening procedures for central auditory processing is a challenge in Audiology. This study aimed to analyze the scientific research on central auditory processing screening and discuss the effectiveness of the procedures utilized. A search was performed in the SciELO and PubMed databases by two researchers. The descriptors used were: auditory processing, screening, hearing, auditory perception, children, auditory tests, and their respective terms in Portuguese. Inclusion criteria: original articles involving schoolchildren, auditory screening of central auditory skills, and articles in Portuguese or English. Exclusion criteria: studies with adult and/or neonatal populations, peripheral auditory screening only, and duplicate articles. After applying the described criteria, 11 articles were included. At the international level, the central auditory processing screening methods used were: the screening test for auditory processing disorder and its revised version, the screening test for auditory processing, the scale of auditory behaviors, the children's auditory performance scale, and Feather Squadron. In the Brazilian scenario, the procedures used were the simplified auditory processing assessment and Zaidan's battery of tests. At the international level, the screening test for auditory processing and Feather Squadron batteries stand out as the most comprehensive evaluations of hearing skills. At the national level, there is a paucity of studies that use methods evaluating more than four skills and normalized by age group. The use of the simplified auditory processing assessment and questionnaires can be complementary in the search for an easily accessible and low-cost alternative for the auditory screening of Brazilian schoolchildren. Interactive tools should also be proposed.
Auditory motion processing after early blindness
Jiang, Fang; Stecker, G. Christopher; Fine, Ione
2014-01-01
Studies showing that occipital cortex responds to auditory and tactile stimuli after early blindness are often interpreted as demonstrating that early blind subjects “see” auditory and tactile stimuli. However, it is not clear whether these occipital responses directly mediate the perception of auditory/tactile stimuli, or simply modulate or augment responses within other sensory areas. We used fMRI pattern classification to categorize the perceived direction of motion for both coherent and ambiguous auditory motion stimuli. In sighted individuals, perceived motion direction was accurately categorized based on neural responses within the planum temporale (PT) and right lateral occipital cortex (LOC). Within early blind individuals, auditory motion decisions for both stimuli were successfully categorized from responses within the human middle temporal complex (hMT+), but not the PT or right LOC. These findings suggest that early blind responses within hMT+ are associated with the perception of auditory motion, and that these responses in hMT+ may usurp some of the functions of nondeprived PT. Thus, our results provide further evidence that blind individuals do indeed “see” auditory motion. PMID:25378368
Neural coding strategies in auditory cortex.
Wang, Xiaoqin
2007-07-01
In contrast to the visual system, the auditory system has longer subcortical pathways and more spiking synapses between the peripheral receptors and the cortex. This unique organization reflects the needs of the auditory system to extract behaviorally relevant information from a complex acoustic environment using strategies different from those used by other sensory systems. The neural representations of acoustic information in auditory cortex can be characterized by three types: (1) isomorphic (faithful) representations of acoustic structures; (2) non-isomorphic transformations of acoustic features and (3) transformations from acoustical to perceptual dimensions. The challenge facing auditory neurophysiologists is to understand the nature of the latter two transformations. In this article, I will review recent studies from our laboratory regarding temporal discharge patterns in auditory cortex of awake marmosets and cortical representations of time-varying signals. Findings from these studies show that (1) firing patterns of neurons in auditory cortex are dependent on stimulus optimality and context and (2) the auditory cortex forms internal representations of sounds that are no longer faithful replicas of their acoustic structures.
Neuroplasticity in the auditory system.
Gil-Loyzaga, P
2005-01-01
An increasing interest in neuroplasticity and nerve regeneration within the auditory receptor and pathway has developed in recent years. The receptor and the auditory pathway are controlled by highly complex circuits that appear during embryonic development. During this early maturation process of the auditory sensory elements, we observe the development of two types of nerve fibers: permanent fibers that will remain to reach full-term maturity and transient fibers that will ultimately disappear. Both stable and transitory fibers, however, as well as developing sensory cells, express, and probably release, their respective neurotransmitters, which could be involved in neuroplasticity. Cell culture experiments have added significant information; the in vitro administration of glutamate or GABA to isolated spiral ganglion neurons clearly modified neural development. Neuroplasticity has also been found in the adult. Nerve regeneration and neuroplasticity have been demonstrated in the adult auditory receptors as well as throughout the auditory pathway. Neuroplasticity studies could prove useful in the development of current or future therapeutic strategies (e.g., cochlear implants or stem cells), and also in understanding the pathogenesis of auditory or language diseases (e.g., deafness, tinnitus, dyslexia).
Auditory salience using natural soundscapes.
Huang, Nicholas; Elhilali, Mounya
2017-03-01
Salience describes the phenomenon by which an object stands out from a scene. While its underlying processes are extensively studied in vision, mechanisms of auditory salience remain largely unknown. Previous studies have used well-controlled auditory scenes to shed light on some of the acoustic attributes that drive the salience of sound events. Unfortunately, the use of constrained stimuli in addition to a lack of well-established benchmarks of salience judgments hampers the development of comprehensive theories of sensory-driven auditory attention. The present study explores auditory salience in a set of dynamic natural scenes. A behavioral measure of salience is collected by having human volunteers listen to two concurrent scenes and indicate continuously which one attracts their attention. By using natural scenes, the study takes a data-driven rather than experimenter-driven approach to exploring the parameters of auditory salience. The findings indicate that the space of auditory salience is multidimensional (spanning loudness, pitch, spectral shape, as well as other acoustic attributes), nonlinear and highly context-dependent. Importantly, the results indicate that contextual information about the entire scene over both short and long scales needs to be considered in order to properly account for perceptual judgments of salience.
Auditory hallucinations: nomenclature and classification.
Blom, Jan Dirk; Sommer, Iris E C
2010-03-01
The literature on the possible neurobiologic correlates of auditory hallucinations is expanding rapidly. For an adequate understanding and linking of this emerging knowledge, a clear and uniform nomenclature is a prerequisite. The primary purpose of the present article is to provide an overview of the nomenclature and classification of auditory hallucinations. Relevant data were obtained from books, PubMed, Embase, and the Cochrane Library. The results are presented in the form of several classificatory arrangements of auditory hallucinations, governed by the principles of content, perceived source, perceived vivacity, relation to the sleep-wake cycle, and association with suspected neurobiologic correlates. This overview underscores the necessity to reappraise the concepts of auditory hallucinations developed during the era of classic psychiatry, to incorporate them into our current nomenclature and classification of auditory hallucinations, and to test them empirically with the aid of the structural and functional imaging techniques currently available.
Phase stability analysis of chirp evoked auditory brainstem responses by Gabor frame operators.
Corona-Strauss, Farah I; Delb, Wolfgang; Schick, Bernhard; Strauss, Daniel J
2009-12-01
We have recently shown that click evoked auditory brainstem responses (ABRs) can be efficiently processed using a novelty detection paradigm. Here, ABRs, as a large-scale reflection of a stimulus-locked neuronal group synchronization at the brainstem level, are detected as novel instances, novel as compared with the spontaneous activity, which does not exhibit a regular stimulus-locked synchronization. In this paper we propose for the first time Gabor frame operators as an efficient feature extraction technique for ABR single sweep sequences that is in line with this paradigm. In particular, we use this decomposition technique to derive the Gabor frame phase stability (GFPS) of sweep sequences of click and chirp evoked ABRs. We show that the GFPS of chirp evoked ABRs provides a stable discrimination of the spontaneous activity from stimulations above the hearing threshold with a small number of sweeps, even at low stimulation intensities. It is concluded that the GFPS analysis represents a robust feature extraction method for ABR single sweep sequences. Further studies are necessary to evaluate the value of the presented approach for clinical applications.
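A rough sketch of a phase-stability measure across ABR single sweeps is given below. A short-time Fourier transform stands in for the Gabor frame operator of the paper (a Gabor analysis with a Gaussian window is closely related but not identical), and the sweeps, window lengths and frequencies are all invented.

```python
# Phase stability across simulated ABR single sweeps via an STFT decomposition.
import numpy as np
from scipy.signal import stft

fs = 20000.0
n_sweeps, n_samp = 200, 400                      # 200 sweeps of 20 ms each
rng = np.random.default_rng(2)
t = np.arange(n_samp) / fs
# toy ABR: a 500 Hz wavelet near 6 ms, identical across sweeps, buried in noise
abr = 0.2 * np.sin(2 * np.pi * 500 * (t - 0.006)) * np.exp(-((t - 0.006) / 0.002) ** 2)
sweeps = abr + rng.standard_normal((n_sweeps, n_samp))

f, seg_t, Z = stft(sweeps, fs=fs, nperseg=128, noverlap=96, axis=-1)
phases = np.angle(Z)                              # shape: (n_sweeps, n_freq, n_seg)
phase_stability = np.abs(np.mean(np.exp(1j * phases), axis=0))   # 0 = random, 1 = locked

fi = np.argmin(np.abs(f - 500.0))                 # frequency bin nearest 500 Hz
print(f"max phase stability near 500 Hz: {phase_stability[fi].max():.2f}")
print(f"noise floor for {n_sweeps} sweeps is roughly {1 / np.sqrt(n_sweeps):.2f}")
```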
Behavioral Indications of Auditory Processing Disorders.
ERIC Educational Resources Information Center
Hartman, Kerry McGoldrick
1988-01-01
Identifies disruptive behaviors of children that may indicate central auditory processing disorders (CAPDs), perceptual handicaps of auditory discrimination or auditory memory not related to hearing ability. Outlines steps to modify the communication environment for CAPD children at home and in the classroom. (SV)
Is Rest Really Rest? Resting State Functional Connectivity during Rest and Motor Task Paradigms.
Jurkiewicz, Michael T; Crawley, Adrian P; Mikulis, David J
2018-04-18
Numerous studies have identified the default mode network (DMN) within the brain of healthy individuals, a network attributed to the ongoing mental activity of the brain during the wakeful resting state. While the DMN is engaged during specific resting-state fMRI paradigms, it remains unclear whether traditional block-design simple-movement fMRI experiments significantly influence the default mode network or other areas. Using blood-oxygen level dependent (BOLD) fMRI, we characterized the pattern of functional connectivity in healthy subjects during a resting-state paradigm and compared this to the same resting-state analysis performed on motor task data residual time courses after regressing out the task paradigm. Using seed-voxel analysis to define the DMN, the executive control network (ECN), and the sensorimotor, auditory and visual networks, the resting-state analysis of the residual time courses demonstrated reduced functional connectivity in the motor network and reduced connectivity between the insula and the ECN compared with the standard resting-state datasets. Overall, performance of simple self-directed motor tasks does little to change resting-state functional connectivity across the brain, especially in non-motor areas. This suggests that previously acquired fMRI studies incorporating simple block-design motor tasks could be mined retrospectively for assessment of resting-state connectivity.
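The residual-time-course approach described above amounts to regressing the task design out of each voxel time series and then computing seed-based correlations on the residuals. The sketch below uses invented data and a bare block regressor; a real analysis would use HRF-convolved regressors, confound regressors and proper preprocessing.

```python
# Seed-based connectivity on task-regressed residuals (simulated data).
import numpy as np

rng = np.random.default_rng(3)
n_vol, n_vox = 240, 500
task = np.tile(np.r_[np.zeros(10), np.ones(10)], n_vol // 20)   # simple block design
X = np.column_stack([np.ones(n_vol), task])                     # intercept + task regressor
data = rng.standard_normal((n_vol, n_vox)) + np.outer(task, rng.uniform(0, 1, n_vox))

beta, *_ = np.linalg.lstsq(X, data, rcond=None)
residuals = data - X @ beta                                     # task effects regressed out

seed = residuals[:, 0]                                          # hypothetical seed voxel
seed_map = np.array([np.corrcoef(seed, residuals[:, v])[0, 1] for v in range(n_vox)])
print(f"mean |seed correlation| on residuals: {np.abs(seed_map).mean():.3f}")
```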
Auditory sequence analysis and phonological skill
Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E.; Turton, Stuart; Griffiths, Timothy D.
2012-01-01
This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739
Dlouha, Olga; Novak, Alexej; Vokral, Jan
2007-06-01
The aim of this project was to use central auditory tests for the diagnosis of central auditory processing disorder (CAPD) in children with specific language impairment (SLI), in order to confirm the relationship between speech-language impairment and central auditory processing. We attempted to establish special dichotic binaural tests in the Czech language, modified for younger children. The tests are based on behavioral audiometry using dichotic listening (different auditory stimuli presented to each ear simultaneously). The experimental tasks consisted of three auditory measures (tests 1-3): dichotic listening to two-syllable words, presented as binaural interaction tests. Children with SLI are unable to create simple sentences from two words that are heard separately but simultaneously. Results in our group of 90 pre-school children (6-7 years old) confirmed an integration deficit and problems with the quality of short-term memory. The average rate of success of children with specific language impairment was 56% in test 1, 64% in test 2 and 63% in test 3. Results of the control group were 92% in test 1, 93% in test 2 and 92% in test 3 (p<0.001). Our results indicate a relationship between disorders of speech-language perception and central auditory processing disorders.
40 Hz Auditory Steady-State Response Is a Pharmacodynamic Biomarker for Cortical NMDA Receptors.
Sivarao, Digavalli V; Chen, Ping; Senapati, Arun; Yang, Yili; Fernandes, Alda; Benitex, Yulia; Whiterock, Valerie; Li, Yu-Wen; Ahlijanian, Michael K
2016-08-01
Schizophrenia patients exhibit dysfunctional gamma oscillations in response to simple auditory stimuli or more complex cognitive tasks, a phenomenon explained by reduced NMDA transmission within inhibitory/excitatory cortical networks. Indeed, a simple steady-state auditory click stimulation paradigm at gamma frequency (~40 Hz) has been reproducibly shown to reduce entrainment as measured by electroencephalography (EEG) in patients. However, some investigators have reported increased phase locking factor (PLF) and power in response to 40 Hz auditory stimulus in patients. Interestingly, preclinical literature also reflects this contradiction. We investigated whether a graded deficiency in NMDA transmission can account for such disparate findings by administering subanesthetic ketamine (1-30 mg/kg, i.v.) or vehicle to conscious rats (n=12) and testing their EEG entrainment to 40 Hz click stimuli at various time points (~7-62 min after treatment). In separate cohorts, we examined in vivo NMDA channel occupancy and tissue exposure to contextualize ketamine effects. We report a robust inverse relationship between PLF and NMDA occupancy 7 min after dosing. Moreover, ketamine could produce inhibition or disinhibition of the 40 Hz response in a temporally dynamic manner. These results provide for the first time empirical data to understand how cortical NMDA transmission deficit may lead to opposite modulation of the auditory steady-state response (ASSR). Importantly, our findings posit that 40 Hz ASSR is a pharmacodynamic biomarker for cortical NMDA function that is also robustly translatable. Besides schizophrenia, such a functional biomarker may be of value to neuropsychiatric disorders like bipolar and autism spectrum where 40 Hz ASSR deficits have been documented.
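Phase locking factor (PLF, also called inter-trial phase coherence) and evoked power at 40 Hz are standard ASSR measures; a minimal sketch on simulated EEG trials is shown below. The exact windowing, electrode and baseline conventions of the study are not reproduced.

```python
# PLF and evoked power at the FFT bin nearest 40 Hz, across simulated trials.
import numpy as np

fs, dur, n_trials = 1000.0, 1.0, 100
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(4)
# toy entrained response: 40 Hz oscillation with trial-to-trial phase jitter plus noise
jitter = rng.normal(0.0, 0.6, n_trials)
trials = (np.sin(2 * np.pi * 40 * t[None, :] + jitter[:, None])
          + 2.0 * rng.standard_normal((n_trials, t.size)))

spec = np.fft.rfft(trials, axis=1)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmin(np.abs(freqs - 40.0))                       # bin nearest 40 Hz

plf = np.abs(np.mean(spec[:, k] / np.abs(spec[:, k])))    # 0 = random phase, 1 = perfect locking
evoked_power = np.abs(np.mean(spec[:, k])) ** 2           # power of the across-trial average
print(f"PLF at 40 Hz: {plf:.2f}, evoked power: {evoked_power:.1f}")
```

Increasing the simulated phase jitter lowers the PLF, which is the direction of change usually reported in patients; the dose- and time-resolved ketamine design above probes why the opposite change is sometimes observed.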
Lidestam, Björn; Moradi, Shahram; Pettersson, Rasmus; Ricklefs, Theodor
2014-08-01
The effects of audiovisual versus auditory training for speech-in-noise identification were examined in 60 young participants. The training conditions were audiovisual training, auditory-only training, and no training (n = 20 each). In the training groups, gated consonants and words were presented at 0 dB signal-to-noise ratio; stimuli were either audiovisual or auditory-only. The no-training group watched a movie clip without performing a speech identification task. Speech-in-noise identification was measured before and after the training (or control activity). Results showed that only audiovisual training improved speech-in-noise identification, demonstrating superiority over auditory-only training.
Calhoun, V. D.; Pearlson, G. D.
2011-01-01
Naturalistic paradigms such as movie watching or simulated driving that closely mimic real-world complex activities are becoming more widely used in functional magnetic resonance imaging (fMRI) studies, both because of their ability to robustly stimulate brain connectivity and because of the availability of analysis methods able to capitalize on connectivity within and among intrinsic brain networks identified both during a task and in resting fMRI data. In this paper we review over a decade of work from our group and others on the use of simulated driving paradigms to study both the healthy brain and the effects of acute alcohol administration on functional connectivity during such paradigms. We briefly review our initial work focused on the configuration of the driving simulator and the analysis strategies. We then describe in more detail several recent studies from our group, including a hybrid study examining distracted driving, and compare the resulting data with those from a separate visual oddball task. The analyses of these data were performed primarily using a combination of group independent component analysis (ICA) and the general linear model (GLM). In the various studies we highlight novel findings that result from an analysis of either 1) within-network connectivity, 2) inter-network connectivity (also called functional network connectivity), or 3) the degree to which the modulation of the various intrinsic networks was associated with the alcohol administration and the task context. Despite the fact that the behavioral effects of alcohol intoxication are relatively well known, there is still much to discover about how acute alcohol exposure modulates brain function in a selective manner, associated with behavioral alterations. Through the above studies, we have learned more regarding the impact of acute alcohol intoxication on the organization of the brain's intrinsic connectivity networks during performance of a complex, real-world cognitive operation.
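"Functional network connectivity" as used above is essentially the correlation structure among the time courses of networks recovered by group ICA. The sketch below runs scikit-learn's FastICA on simulated data purely for illustration; the studies themselves used dedicated group ICA toolboxes.

```python
# Inter-network (functional network) connectivity from ICA time courses (simulated).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(5)
n_vol, n_vox, n_comp = 300, 800, 5
sources = rng.standard_normal((n_vol, n_comp))            # hypothetical network time courses
mixing = rng.standard_normal((n_comp, n_vox))             # hypothetical spatial maps
fmri = sources @ mixing + 0.5 * rng.standard_normal((n_vol, n_vox))

ica = FastICA(n_components=n_comp, random_state=0)
timecourses = ica.fit_transform(fmri)                     # (n_vol, n_comp)

# On this toy data the off-diagonal values will be near zero because the simulated
# sources are independent; real network time courses show structured correlations.
fnc = np.corrcoef(timecourses.T)                          # functional network connectivity matrix
print(np.round(fnc, 2))
```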
The Perception of Auditory Motion
Leung, Johahn
2016-01-01
The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotational and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029
Current understanding of auditory neuropathy.
Boo, Nem-Yun
2008-12-01
Auditory neuropathy is defined by the presence of normal evoked otoacoustic emissions (OAE) and absent or abnormal auditory brainstem responses (ABR). The sites of lesion could be at the cochlear inner hair cells, the spiral ganglion cells of the cochlea, the synapse between the inner hair cells and the auditory nerve, or the auditory nerve itself. Genetic, infectious or neonatal/perinatal insults are the 3 most commonly identified underlying causes. Children usually present with delay in speech and language development, while adult patients present with hearing loss and disproportionately poor speech discrimination for the degree of hearing loss. Although cochlear implantation is the treatment of choice, current evidence shows that it benefits only those patients with endocochlear lesions, but not those with cochlear nerve deficiency or central nervous system disorders. As auditory neuropathy is a disorder with potential long-term impact on a child's development, early hearing screening using both OAE and ABR should be carried out on all newborns and infants to allow early detection and intervention.
Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G
2017-03-01
We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding fewer attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower (that is, auditory sensitivity was improved) for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality (cross-modal attention) can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.
Zupan, Barbra; Sussman, Joan E
2009-01-01
Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both experiments was to evaluate the role of familiarity in these preferences. Participants were exposed to randomized blocks of photographs and sounds of ten familiar and ten unfamiliar animals in auditory-only, visual-only and auditory-visual trials. Results indicated an overall auditory preference in children, regardless of hearing status, and a visual preference in adults. Familiarity only affected modality preferences in adults, who showed a strong visual preference for unfamiliar stimuli only. The similarity of auditory responses in children with hearing loss to those of children with normal hearing is an original finding and lends support to an auditory emphasis for habilitation. Readers will be able to (1) Describe the pattern of modality preferences reported in young children without hearing loss; (2) Recognize that differences in communication mode may affect modality preferences in young children with hearing loss; and (3) Understand the role of familiarity in modality preferences in children with and without hearing loss.
Auditory interfaces: The human perceiver
NASA Technical Reports Server (NTRS)
Colburn, H. Steven
1991-01-01
A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.
Rhesus monkeys (Macaca mulatta) detect rhythmic groups in music, but not the beat.
Honing, Henkjan; Merchant, Hugo; Háden, Gábor P; Prado, Luis; Bartolo, Ramón
2012-01-01
It was recently shown that rhythmic entrainment, long considered a human-specific mechanism, can be demonstrated in a selected group of bird species, and, somewhat surprisingly, not in more closely related species such as nonhuman primates. This observation supports the vocal learning hypothesis that suggests rhythmic entrainment to be a by-product of the vocal learning mechanisms that are shared by several bird and mammal species, including humans, but that are only weakly developed, or missing entirely, in nonhuman primates. To test this hypothesis we measured auditory event-related potentials (ERPs) in two rhesus monkeys (Macaca mulatta), probing a well-documented component in humans, the mismatch negativity (MMN) to study rhythmic expectation. We demonstrate for the first time in rhesus monkeys that, in response to infrequent deviants in pitch that were presented in a continuous sound stream using an oddball paradigm, a comparable ERP component can be detected with negative deflections in early latencies (Experiment 1). Subsequently we tested whether rhesus monkeys can detect gaps (omissions at random positions in the sound stream; Experiment 2) and, using more complex stimuli, also the beat (omissions at the first position of a musical unit, i.e. the 'downbeat'; Experiment 3). In contrast to what has been shown in human adults and newborns (using identical stimuli and experimental paradigm), the results suggest that rhesus monkeys are not able to detect the beat in music. These findings are in support of the hypothesis that beat induction (the cognitive mechanism that supports the perception of a regular pulse from a varying rhythm) is species-specific and absent in nonhuman primates. In addition, the findings support the auditory timing dissociation hypothesis, with rhesus monkeys being sensitive to rhythmic grouping (detecting the start of a rhythmic group), but not to the induced beat (detecting a regularity from a varying rhythm).
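For readers unfamiliar with the paradigm, the sketch below shows one way a pitch-oddball sequence of the kind used here could be generated; the deviant probability, the two frequencies, and the constraint that deviants never occur back to back are illustrative assumptions, not the study's exact parameters.

```python
import numpy as np

def make_oddball_sequence(n_trials=500, deviant_prob=0.1,
                          standard_hz=1000.0, deviant_hz=1200.0, seed=0):
    """Return a list of tone frequencies for a pitch-oddball block.

    Deviants are placed pseudo-randomly with the (assumed) constraint that
    two deviants never occur in direct succession, as is common in MMN work.
    """
    rng = np.random.default_rng(seed)
    seq, prev_was_deviant = [], True   # force the block to open with standards
    for _ in range(n_trials):
        is_deviant = (not prev_was_deviant) and (rng.random() < deviant_prob)
        seq.append(deviant_hz if is_deviant else standard_hz)
        prev_was_deviant = is_deviant
    return seq

sequence = make_oddball_sequence()
print("deviant proportion:", sequence.count(1200.0) / len(sequence))
```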
Gap prepulse inhibition of the auditory late response in healthy subjects.
Ku, Yunseo; Ahn, Joong Woo; Kwon, Chiheon; Suh, Myung-Whan; Lee, Jun Ho; Oh, Seung Ha; Kim, Hee Chan
2015-11-01
The gap-startle paradigm has been used as a behavioral method for tinnitus screening in animal studies. This study aimed to investigate gap prepulse inhibition (GPI) of the auditory late response (ALR) as the objective response of the gap-intense sound paradigm in humans. ALRs were recorded in response to gap-intense and no-gap-intense sound stimuli in 27 healthy subjects. The amplitudes of the baseline-to-peak (N1, P2, and N2) and the peak-to-peak (N1P2 and P2N2) were compared between two averaged ALRs. The variations in the inhibition ratios of N1P2 and P2N2 during the experiment were analyzed by increasing stimuli repetitions. The effect of stimulus parameter adjustments on GPI ratios was evaluated. No-gap-intense sound stimuli elicited greater peak amplitudes than gap-intense sound stimuli, and significant differences were found across all peaks. The overall mean inhibition ratios were significantly lower than 1.0, where the value 1.0 indicates that there were no differences between gap-intense and no-gap-intense sound responses. The initial decline in GPI ratios was shown in N1P2 and P2N2 complexes, and this reduction was nearly complete after 100 stimulus repetitions. Significant effects of gap length and interstimulus interval on GPI ratios were observed. We found significant inhibition of ALR peak amplitudes in performing the gap-intense sound paradigm in healthy subjects. The N1P2 complex represented GPI well in terms of suppression degree and test-retest reliability. Our findings offer practical information for the comparative study of healthy subjects and tinnitus patients using the gap-intense sound paradigm with the ALR. © 2015 Society for Psychophysiological Research.
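As a rough illustration of the inhibition measure described above, the sketch below computes an N1-P2 peak-to-peak amplitude in fixed latency windows and takes the gap/no-gap ratio, where values below 1.0 indicate inhibition; the latency windows and the synthetic waveforms are assumptions for demonstration only.

```python
import numpy as np

def n1p2_amplitude(erp, times, n1_win=(0.08, 0.15), p2_win=(0.15, 0.25)):
    """N1-P2 peak-to-peak amplitude of an averaged auditory late response.

    `erp` is a 1-D averaged waveform and `times` the matching time axis in
    seconds; the latency windows are illustrative assumptions.
    """
    n1 = erp[(times >= n1_win[0]) & (times < n1_win[1])].min()
    p2 = erp[(times >= p2_win[0]) & (times < p2_win[1])].max()
    return p2 - n1

def gpi_ratio(erp_gap, erp_nogap, times):
    """Gap prepulse inhibition ratio: values below 1.0 indicate inhibition."""
    return n1p2_amplitude(erp_gap, times) / n1p2_amplitude(erp_nogap, times)

# toy usage with synthetic averaged waveforms (microvolt-scale Gaussians)
times = np.linspace(0.0, 0.5, 500)
erp_nogap = (-2.0 * np.exp(-((times - 0.10) / 0.02) ** 2)
             + 3.0 * np.exp(-((times - 0.20) / 0.03) ** 2))
erp_gap = 0.7 * erp_nogap          # assume 30% suppression for illustration
print(round(gpi_ratio(erp_gap, erp_nogap, times), 2))   # -> 0.7
```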
Binaural Interaction Effects of 30-50 Hz Auditory Steady State Responses.
Gransier, Robin; van Wieringen, Astrid; Wouters, Jan
Auditory stimuli modulated by modulation frequencies within the 30 to 50 Hz region evoke auditory steady state responses (ASSRs) with high signal to noise ratios in adults, and can be used to determine the frequency-specific hearing thresholds of adults who are unable to give behavioral feedback reliably. To measure ASSRs as efficiently as possible, a multiple stimulus paradigm can be used, stimulating both ears simultaneously. The response strength of 30 to 50 Hz ASSRs is, however, affected when both ears are stimulated simultaneously. The aim of the present study is to gain insight into the measurement efficiency of 30 to 50 Hz ASSRs evoked with a 2-ear stimulation paradigm, by systematically investigating the binaural interaction effects of 30 to 50 Hz ASSRs in normal-hearing adults. ASSRs were obtained with a 64-channel EEG system in 23 normal-hearing adults. Each participant completed one diotic condition, multiple dichotic conditions, and multiple monaural conditions. Stimuli consisted of a modulated one-octave noise band, centered at 1 kHz, and presented at 70 dB SPL. The diotic condition contained 40 Hz modulated stimuli presented to both ears. In the dichotic conditions, the modulation frequency of the left ear stimulus was kept constant at 40 Hz, while the stimulus at the right ear was either the unmodulated or modulated carrier. In the case of the modulated carrier, the modulation frequency varied between 30 and 50 Hz in steps of 2 Hz across conditions. The monaural conditions consisted of all stimuli included in the diotic and dichotic conditions. Modulation frequencies ≥36 Hz resulted in prominent ASSRs in all participants for the monaural conditions. A significant enhancement effect was observed (average: ~3 dB) in the diotic condition, whereas a significant reduction effect was observed in the dichotic conditions. There was no distinct effect of the temporal characteristics of the stimuli on the amount of reduction. The attenuation was in 33% of the cases >3 dB for
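A minimal sketch of the stimulus class described above: a one-octave noise band centred at 1 kHz, 100% amplitude modulated at 40 Hz. The filter order, sampling rate, and modulation depth are assumptions, and calibration to 70 dB SPL is omitted.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def assr_stimulus(fm=40.0, fc=1000.0, dur=1.0, fs=32000, seed=0):
    """One-octave noise band centred at `fc`, 100% amplitude modulated at `fm`."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    noise = rng.standard_normal(t.size)
    lo, hi = fc / np.sqrt(2), fc * np.sqrt(2)             # one-octave band edges
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    carrier = sosfiltfilt(sos, noise)
    modulator = 0.5 * (1.0 + np.sin(2 * np.pi * fm * t))  # 100% AM
    return carrier * modulator

stim_left = assr_stimulus()                   # 40 Hz, as in the diotic condition
stim_right = assr_stimulus(fm=44.0, seed=1)   # e.g. one dichotic condition
```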
Auditory Processing Disorder in Children
... News & Events NIDCD News Inside NIDCD Newsletter Shareable Images ... Info » Hearing, Ear Infections, and Deafness Auditory Processing Disorder Auditory processing disorder (APD) describes a condition ...
Donohue, Sarah E.; Liotti, Mario; Perez, Rick; Woldorff, Marty G.
2011-01-01
The electrophysiological correlates of conflict processing and cognitive control have been well characterized for the visual modality in paradigms such as the Stroop task. Much less is known about corresponding processes in the auditory modality. Here, electroencephalographic recordings of brain activity were measured during an auditory Stroop task, using three different forms of behavioral response (Overt verbal, Covert verbal, and Manual), that closely paralleled our previous visual-Stroop study. As expected, behavioral responses were slower and less accurate for incongruent compared to congruent trials. Neurally, incongruent trials showed an enhanced fronto-central negative-polarity wave (Ninc), similar to the N450 in visual-Stroop tasks, with similar variations as a function of behavioral response mode, but peaking ~150 ms earlier, followed by an enhanced positive posterior wave. In addition, sequential behavioral and neural effects were observed that supported the conflict-monitoring and cognitive-adjustment hypothesis. Thus, while some aspects of the conflict detection processes, such as timing, may be modality-dependent, the general mechanisms would appear to be supramodal. PMID:21964643
Touch activates human auditory cortex.
Schürmann, Martin; Caetano, Gina; Hlushchuk, Yevhen; Jousmäki, Veikko; Hari, Riitta
2006-05-01
Vibrotactile stimuli can facilitate hearing, both in hearing-impaired and in normally hearing people. Accordingly, the sounds of hands exploring a surface contribute to the explorer's haptic percepts. As a possible brain basis of such phenomena, functional brain imaging has identified activations specific to audiotactile interaction in secondary somatosensory cortex, auditory belt area, and posterior parietal cortex, depending on the quality and relative salience of the stimuli. We studied 13 subjects with non-invasive functional magnetic resonance imaging (fMRI) to search for auditory brain areas that would be activated by touch. Vibration bursts of 200 Hz were delivered to the subjects' fingers and palm and tactile pressure pulses to their fingertips. Noise bursts served to identify auditory cortex. Vibrotactile-auditory co-activation, addressed with minimal smoothing to obtain a conservative estimate, was found in an 85-mm3 region in the posterior auditory belt area. This co-activation could be related to facilitated hearing at the behavioral level, reflecting the analysis of sound-like temporal patterns in vibration. However, even tactile pulses (without any vibration) activated parts of the posterior auditory belt area, which therefore might subserve processing of audiotactile events that arise during dynamic contact between hands and environment.
Denham, Susan; Bõhm, Tamás M; Bendixen, Alexandra; Szalárdy, Orsolya; Kocsis, Zsuzsanna; Mill, Robert; Winkler, István
2014-01-01
The ability of the auditory system to parse complex scenes into component objects in order to extract information from the environment is very robust, yet the processing principles underlying this ability are still not well understood. This study was designed to investigate the proposal that the auditory system constructs multiple interpretations of the acoustic scene in parallel, based on the finding that when listening to a long repetitive sequence listeners report switching between different perceptual organizations. Using the "ABA-" auditory streaming paradigm we trained listeners until they could reliably recognize all possible embedded patterns of length four which could in principle be extracted from the sequence, and in a series of test sessions investigated their spontaneous reports of those patterns. With the training allowing them to identify and mark a wider variety of possible patterns, participants spontaneously reported many more patterns than the ones traditionally assumed (Integrated vs. Segregated). Despite receiving consistent training and despite the apparent randomness of perceptual switching, we found individual switching patterns were idiosyncratic; i.e., the perceptual switching patterns of each participant were more similar to their own switching patterns in different sessions than to those of other participants. These individual differences were found to be preserved even between test sessions held a year after the initial experiment. Our results support the idea that the auditory system attempts to extract an exhaustive set of embedded patterns which can be used to generate expectations of future events and which by competing for dominance give rise to (changing) perceptual awareness, with the characteristics of pattern discovery and perceptual competition having a strong idiosyncratic component. Perceptual multistability thus provides a means for characterizing both general mechanisms and individual differences in human perception.
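To make the stimulus concrete, a minimal sketch of an 'ABA-' triplet sequence is given below; the tone duration, the roughly three-semitone frequency separation, and the 10-ms raised-cosine ramps are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def aba_sequence(f_a=400.0, f_b=476.0, tone_ms=75, n_triplets=40, fs=44100):
    """Audio for an 'ABA-' streaming sequence: A-B-A-silence, repeated."""
    n = int(fs * tone_ms / 1000)
    t = np.arange(n) / fs
    n_ramp = int(0.01 * fs)                             # 10-ms onset/offset ramps
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
    env = np.ones(n)
    env[:n_ramp] = ramp
    env[-n_ramp:] = ramp[::-1]
    tone_a = env * np.sin(2 * np.pi * f_a * t)
    tone_b = env * np.sin(2 * np.pi * f_b * t)
    silence = np.zeros(n)                               # the '-' slot
    triplet = np.concatenate([tone_a, tone_b, tone_a, silence])
    return np.tile(triplet, n_triplets)

audio = aba_sequence()   # listeners report this as one or two streams over time
```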
Information flow in the auditory cortical network
Hackett, Troy A.
2011-01-01
Auditory processing in the cerebral cortex is comprised of an interconnected network of auditory and auditory-related areas distributed throughout the forebrain. The nexus of auditory activity is located in temporal cortex among several specialized areas, or fields, that receive dense inputs from the medial geniculate complex. These areas are collectively referred to as auditory cortex. Auditory activity is extended beyond auditory cortex via connections with auditory-related areas elsewhere in the cortex. Within this network, information flows between areas to and from countless targets, but in a manner that is characterized by orderly regional, areal and laminar patterns. These patterns reflect some of the structural constraints that passively govern the flow of information at all levels of the network. In addition, the exchange of information within these circuits is dynamically regulated by intrinsic neurochemical properties of projecting neurons and their targets. This article begins with an overview of the principal circuits and how each is related to information flow along major axes of the network. The discussion then turns to a description of neurochemical gradients along these axes, highlighting recent work on glutamate transporters in the thalamocortical projections to auditory cortex. The article concludes with a brief discussion of relevant neurophysiological findings as they relate to structural gradients in the network. PMID:20116421
Foxp2 mutations impair auditory-motor association learning.
Kurt, Simone; Fisher, Simon E; Ehret, Günter
2012-01-01
Heterozygous mutations of the human FOXP2 transcription factor gene cause the best-described examples of monogenic speech and language disorders. Acquisition of proficient spoken language involves auditory-guided vocal learning, a specialized form of sensory-motor association learning. The impact of etiological Foxp2 mutations on learning of auditory-motor associations in mammals has not been determined yet. Here, we directly assess this type of learning using a newly developed conditioned avoidance paradigm in a shuttle-box for mice. We show striking deficits in mice heterozygous for either of two different Foxp2 mutations previously implicated in human speech disorders. Both mutations cause delays in acquiring new motor skills. The magnitude of impairments in association learning, however, depends on the nature of the mutation. Mice with a missense mutation in the DNA-binding domain are able to learn, but at a much slower rate than wild type animals, while mice carrying an early nonsense mutation learn very little. These results are consistent with expression of Foxp2 in distributed circuits of the cortex, striatum and cerebellum that are known to play key roles in acquisition of motor skills and sensory-motor association learning, and suggest differing in vivo effects for distinct variants of the Foxp2 protein. Given the importance of such networks for the acquisition of human spoken language, and the fact that similar mutations in human FOXP2 cause problems with speech development, this work opens up a new perspective on the use of mouse models for understanding pathways underlying speech and language disorders.
Preattentive binding of auditory and visual stimulus features.
Winkler, István; Czigler, István; Sussman, Elyse; Horváth, János; Balázs, Lászlo
2005-02-01
We investigated the role of attention in feature binding in the auditory and the visual modality. One auditory and one visual experiment used the mismatch negativity (MMN and vMMN, respectively) event-related potential to index the memory representations created from stimulus sequences, which were either task-relevant and, therefore, attended or task-irrelevant and ignored. In the latter case, the primary task was a continuous demanding within-modality task. The test sequences were composed of two frequently occurring stimuli, which differed from each other in two stimulus features (standard stimuli) and two infrequently occurring stimuli (deviants), which combined one feature from one standard stimulus with the other feature of the other standard stimulus. Deviant stimuli elicited MMN responses of similar parameters across the different attentional conditions. These results suggest that the memory representations involved in the MMN deviance detection response encoded the frequently occurring feature combinations whether or not the test sequences were attended. A possible alternative to the memory-based interpretation of the visual results, the elicitation of the McCollough color-contingent aftereffect, was ruled out by the results of our third experiment. The current results are compared with those supporting the attentive feature integration theory. We conclude that (1) with comparable stimulus paradigms, similar results have been obtained in the two modalities, (2) there exist preattentive processes of feature binding, however, (3) conjoining features within rich arrays of objects under time pressure and/or long-term retention of the feature-conjoined memory representations may require attentive processes.
Human auditory event-related potentials predict duration judgments.
Bendixen, Alexandra; Grimm, Sabine; Schröger, Erich
2005-08-05
Internal clock models postulate a pulse accumulation process underlying timing activities, with more accumulated pulses resulting in longer perceived durations. We investigated whether this accumulation is reflected in the amplitude of event-related brain potentials (ERPs) elicited by auditory stimuli with durations of 400-600 ms. In a duration discrimination paradigm, we found more negative amplitudes to physically identical stimuli when they were judged as longer than the memorized standard duration (500 ms) as compared to being classified as shorter. This sustained negativity was already developing during the first 100 ms after stimulus onset. It could not be explained as a bias to respond with a particular hand (lateralized readiness potential), but rather reflects a processing difference between the tones to be judged as shorter or longer. Our results are in line with models of time processing which assume that higher numbers of accumulated pulses of a temporal processor result in an increase in perceived duration.
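The pulse-accumulation idea referenced here can be made concrete with a toy pacemaker-accumulator model; the pulse rate, the Poisson accumulation, and the trial-to-trial rate jitter below are illustrative assumptions rather than any fitted model from the study.

```python
import numpy as np

def judge_duration(stim_dur_s, standard_count, pulse_rate_hz=100.0,
                   rate_jitter=0.1, seed=None):
    """Toy pacemaker-accumulator: judge a tone against a memorised standard.

    Pulses accumulate at a noisy rate during the stimulus; the tone is judged
    'longer' when the count exceeds the standard's stored count.
    """
    rng = np.random.default_rng(seed)
    rate = pulse_rate_hz * (1.0 + rate_jitter * rng.standard_normal())
    count = rng.poisson(max(rate, 1.0) * stim_dur_s)
    return ("longer" if count > standard_count else "shorter"), count

standard_count = 100.0 * 0.5    # expected pulses for the 500-ms standard
votes = [judge_duration(0.55, standard_count, seed=s)[0] for s in range(20)]
print(votes.count("longer"), "of 20 trials of a 550-ms tone judged longer")
```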
Gomes, Hilary; Barrett, Sophia; Duff, Martin; Barnhardt, Jack; Ritter, Walter
2008-03-01
We examined the impact of perceptual load by manipulating interstimulus interval (ISI) in two auditory selective attention studies that varied in the difficulty of the target discrimination. In the paradigm, channels were separated by frequency and target/deviant tones were softer in intensity. Three ISI conditions were presented: fast (300 ms), medium (600 ms) and slow (900 ms). Behavioral (accuracy and RT) and electrophysiological measures (Nd, P3b) were observed. In both studies, participants evidenced poorer accuracy during the fast ISI condition than the slow, suggesting that ISI impacted task difficulty. However, none of the three measures of processing examined, Nd amplitude, P3b amplitude elicited by unattended deviant stimuli, or false alarms to unattended deviants, were impacted by ISI in the manner predicted by perceptual load theory. The prediction based on perceptual load theory, that there would be more processing of irrelevant stimuli under conditions of low as compared to high perceptual load, was not supported in these auditory studies. Task difficulty/perceptual load impacts the processing of irrelevant stimuli in the auditory modality differently than predicted by perceptual load theory, and perhaps differently than in the visual modality.
Bell, Brittany A; Phan, Mimi L; Vicario, David S
2015-03-01
How do social interactions form and modulate the neural representations of specific complex signals? This question can be addressed in the songbird auditory system. Like humans, songbirds learn to vocalize by imitating tutors heard during development. These learned vocalizations are important in reproductive and social interactions and in individual recognition. As a model for the social reinforcement of particular songs, male zebra finches were trained to peck for a food reward in response to one song stimulus (GO) and to withhold responding for another (NoGO). After performance reached criterion, single and multiunit neural responses to both trained and novel stimuli were obtained from multiple electrodes inserted bilaterally into two songbird auditory processing areas [caudomedial mesopallium (CMM) and caudomedial nidopallium (NCM)] of awake, restrained birds. Neurons in these areas undergo stimulus-specific adaptation to repeated song stimuli, and responses to familiar stimuli adapt more slowly than to novel stimuli. The results show that auditory responses differed in NCM and CMM for trained (GO and NoGO) stimuli vs. novel song stimuli. When subjects were grouped by the number of training days required to reach criterion, fast learners showed larger neural responses and faster stimulus-specific adaptation to all stimuli than slow learners in both areas. Furthermore, responses in NCM of fast learners were more strongly left-lateralized than in slow learners. Thus auditory responses in these sensory areas not only encode stimulus familiarity, but also reflect behavioral reinforcement in our paradigm, and can potentially be modulated by social interactions. Copyright © 2015 the American Physiological Society.
Seydell-Greenwald, Anna; Raven, Erika P.; Leaver, Amber M.; Turesky, Ted K.; Rauschecker, Josef P.
2014-01-01
Subjective tinnitus, or “ringing in the ears,” is perceived by 10 to 15 percent of the adult population and causes significant suffering in a subset of patients. While it was originally thought of as a purely auditory phenomenon, there is increasing evidence that the limbic system influences whether and how tinnitus is perceived, far beyond merely determining the patient's emotional reaction to the phantom sound. Based on functional imaging and electrophysiological data, recent articles frame tinnitus as a “network problem” arising from abnormalities in auditory-limbic interactions. Diffusion-weighted magnetic resonance imaging is a noninvasive method for investigating anatomical connections in vivo. It thus has the potential to provide anatomical evidence for the proposed changes in auditory-limbic connectivity. However, the few diffusion imaging studies of tinnitus performed to date have inconsistent results. In the present paper, we briefly summarize the results of previous studies, aiming to reconcile their results. After detailing analysis methods, we then report findings from a new dataset. We conclude that while there is some evidence for tinnitus-related increases in auditory and auditory-limbic connectivity that counteract hearing-loss related decreases in auditory connectivity, these results should be considered preliminary until several technical challenges have been overcome. PMID:25050181
Auditory Task Irrelevance: A Basis for Inattentional Deafness
Scheer, Menja; Bülthoff, Heinrich H.; Chuang, Lewis L.
2018-01-01
Objective This study investigates the neural basis of inattentional deafness, which could result from task irrelevance in the auditory modality. Background Humans can fail to respond to auditory alarms under high workload situations. This failure, termed inattentional deafness, is often attributed to high workload in the visual modality, which reduces one’s capacity for information processing. Besides this, our capacity for processing auditory information could also be selectively diminished if there is no obvious task relevance in the auditory channel. This could be another contributing factor given the rarity of auditory warnings. Method Forty-eight participants performed a visuomotor tracking task while auditory stimuli were presented: a frequent pure tone, an infrequent pure tone, and infrequent environmental sounds. Participants were required either to respond to the presentation of the infrequent pure tone (auditory task-relevant) or not (auditory task-irrelevant). We recorded and compared the event-related potentials (ERPs) that were generated by environmental sounds, which were always task-irrelevant for both groups. These ERPs served as an index for our participants’ awareness of the task-irrelevant auditory scene. Results Manipulation of auditory task relevance influenced the brain’s response to task-irrelevant environmental sounds. Specifically, the late novelty-P3 to irrelevant environmental sounds, which underlies working memory updating, was found to be selectively enhanced by auditory task relevance independent of visuomotor workload. Conclusion Task irrelevance in the auditory modality selectively reduces our brain’s responses to unexpected and irrelevant sounds regardless of visuomotor workload. Application Presenting relevant auditory information more often could mitigate the risk of inattentional deafness. PMID:29578754
The effect of auditory verbal imagery on signal detection in hallucination-prone individuals
Moseley, Peter; Smailes, David; Ellison, Amanda; Fernyhough, Charles
2016-01-01
Cognitive models have suggested that auditory hallucinations occur when internal mental events, such as inner speech or auditory verbal imagery (AVI), are misattributed to an external source. This has been supported by numerous studies indicating that individuals who experience hallucinations tend to perform in a biased manner on tasks that require them to distinguish self-generated from non-self-generated perceptions. However, these tasks have typically been of limited relevance to inner speech models of hallucinations, because they have not manipulated the AVI that participants used during the task. Here, a new paradigm was employed to investigate the interaction between imagery and perception, in which a healthy, non-clinical sample of participants were instructed to use AVI whilst completing an auditory signal detection task. It was hypothesized that AVI-usage would cause participants to perform in a biased manner, therefore falsely detecting more voices in bursts of noise. In Experiment 1, when cued to generate AVI, highly hallucination-prone participants showed a lower response bias than when performing a standard signal detection task, being more willing to report the presence of a voice in the noise. Participants not prone to hallucinations performed no differently between the two conditions. In Experiment 2, participants were not specifically instructed to use AVI, but retrospectively reported how often they engaged in AVI during the task. Highly hallucination-prone participants who retrospectively reported using imagery showed a lower response bias than did participants with lower proneness who also reported using AVI. Results are discussed in relation to prominent inner speech models of hallucinations. PMID:26435050
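The "response bias" measure discussed here comes from signal detection theory. The sketch below shows a standard computation of sensitivity (d') and criterion (c) from trial counts; the log-linear correction for extreme rates is an assumption, not necessarily the analysis used by the authors. A lower (more negative) criterion corresponds to a more liberal tendency to report hearing a voice.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') and response criterion (c) from raw trial counts."""
    # log-linear correction avoids infinite z-scores when a rate is 0 or 1
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

d, c = sdt_measures(hits=30, misses=10, false_alarms=12, correct_rejections=28)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")   # lower c = more 'voice' reports
```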
Auditory Memory Distortion for Spoken Prose
Hutchison, Joanna L.; Hubbard, Timothy L.; Ferrandino, Blaise; Brigante, Ryan; Wright, Jamie M.; Rypma, Bart
2013-01-01
Observers often remember a scene as containing information that was not presented but that would have likely been located just beyond the observed boundaries of the scene. This effect is called boundary extension (BE; e.g., Intraub & Richardson, 1989). Previous studies have observed BE in memory for visual and haptic stimuli, and the present experiments examined whether BE occurred in memory for auditory stimuli (prose, music). Experiments 1 and 2 varied the amount of auditory content to be remembered. BE was not observed, but when auditory targets contained more content, boundary restriction (BR) occurred. Experiment 3 presented auditory stimuli with less content and BR also occurred. In Experiment 4, white noise was added to stimuli with less content to equalize the durations of auditory stimuli, and BR still occurred. Experiments 5 and 6 presented trained stories and popular music, and BR still occurred. This latter finding ruled out the hypothesis that the lack of BE in Experiments 1–4 reflected a lack of familiarity with the stimuli. Overall, memory for auditory content exhibited BR rather than BE, and this pattern was stronger if auditory stimuli contained more content. Implications for the understanding of general perceptual processing and directions for future research are discussed. PMID:22612172
NASA Astrophysics Data System (ADS)
Mulligan, B. E.; Goodman, L. S.; McBride, D. K.; Mitchell, T. M.; Crosby, T. N.
1984-08-01
This work reviews the areas of auditory attention, recognition, memory and auditory perception of patterns, pitch, and loudness. The review was written from the perspective of human engineering and focuses primarily on auditory processing of information contained in acoustic signals. The impetus for this effort was to establish a data base to be utilized in the design and evaluation of acoustic displays.
McHugh, Joanna E; Kearney, Gavin; Rice, Henry; Newell, Fiona N
2012-02-01
Although both auditory and visual information can influence the perceived emotion of an individual, how these modalities contribute to the perceived emotion of a crowd of characters was hitherto unknown. Here, we manipulated the ambiguity of the emotion of either a visual or auditory crowd of characters by varying the proportions of characters expressing one of two emotional states. Using an intersensory bias paradigm, unambiguous emotional information from an unattended modality was presented while participants determined the emotion of a crowd in an attended, but different, modality. We found that emotional information in an unattended modality can disambiguate the perceived emotion of a crowd. Moreover, the size of the crowd had little effect on these crossmodal influences. The role of audiovisual information appears to be similar in perceiving emotion from individuals or crowds. Our findings provide novel insights into the role of multisensory influences on the perception of social information from crowds of individuals. PsycINFO Database Record (c) 2012 APA, all rights reserved
Maess, Burkhard; Jacobsen, Thomas; Schröger, Erich; Friederici, Angela D
2007-08-15
Changes in the pitch of repetitive sounds elicit the mismatch negativity (MMN) of the event-related brain potential (ERP). There exist two alternative accounts for this index of automatic change detection: (1) A sensorial, non-comparator account according to which ERPs in oddball sequences are affected by differential refractory states of frequency-specific afferent cortical neurons. (2) A cognitive, comparator account stating that MMN reflects the outcome of a memory comparison of a neuronal model of the frequently presented standard sound with the sensory memory representation of the changed sound. Using a condition controlling for refractoriness effects, the two contributions to MMN can be disentangled. The present study used whole-head MEG to further elucidate the sensorial and cognitive contributions to frequency MMN. Results replicated ERP findings that MMN to pitch change is a compound of the activity of a sensorial, non-comparator mechanism and a cognitive, comparator mechanism, which could be separated in time. The sensorial part of frequency MMN consisting of spatially dipolar patterns was maximal in the late N1 range (105-125 ms), while the cognitive part peaked in the late MMN range (170-200 ms). Spatial principal component analyses revealed that the early part of the traditionally measured MMN (deviant minus standard) is mainly due to the sensorial mechanism, while the later part is mainly due to the cognitive mechanism. Inverse modeling revealed sources for both MMN contributions in the gyrus temporalis transversus, bilaterally. These MEG results suggest temporally distinct but spatially overlapping activities of non-comparator-based and comparator-based mechanisms of automatic frequency change detection in auditory cortex.
Wang, Tao; Huang, Jiang-hua; Lin, Lin; Zhan, Chang'an A
2013-01-01
To obtain reliable transient auditory evoked potentials (AEPs) from EEGs recorded using the high stimulus rate (HSR) paradigm, it is critical to design stimulus sequences with appropriate frequency properties. Traditionally, the individual stimulus events in a stimulus sequence occur only at discrete time points dependent on the sampling frequency of the recording system and the duration of the stimulus sequence. This dependency likely causes the implementation of suboptimal stimulus sequences, sacrificing the reliability of the resulting AEPs. In this paper, we explicate the use of continuous-time stimulus sequences for the HSR paradigm, which are independent of the discrete-time electroencephalogram (EEG) recording system. We employ simulation studies to examine the applicability of the continuous-time stimulus sequences and the impacts of sampling frequency on AEPs in traditional studies using discrete-time design. Results from these studies show that the continuous-time sequences can offer better frequency properties and improve the reliability of recovered AEPs. Furthermore, we find that the errors in the recovered AEPs depend critically on the sampling frequencies of experimental systems, and their relationship can be fitted using a reciprocal function. As such, our study contributes to the literature by demonstrating the applicability and advantages of continuous-time stimulus sequences for the HSR paradigm and by revealing the relationship between the reliability of AEPs and the sampling frequencies of the experimental systems when discrete-time stimulus sequences are used in the traditional manner for the HSR paradigm.
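The recovery step this line of work builds on can be sketched as a frequency-domain deconvolution of the overlapping responses (in the spirit of CLAD-type methods). The code below is a toy demonstration under simplifying assumptions: a discrete 0/1 onset train (the traditional discrete-time setting rather than the paper's continuous-time design), circular convolution, a single noisy sweep, and no regularisation of near-zero spectral bins; it is not the authors' algorithm.

```python
import numpy as np

def recover_aep(eeg, onset_train, n_samples_aep):
    """Frequency-domain deconvolution of overlapping responses (CLAD-style sketch).

    `eeg` is one recorded sweep and `onset_train` a same-length 0/1 vector
    marking stimulus onsets. The sequence is assumed to have been designed so
    that its spectrum has no near-zero bins; no regularisation is applied.
    """
    Y = np.fft.rfft(eeg)
    S = np.fft.rfft(onset_train)
    return np.fft.irfft(Y / S, n=len(eeg))[:n_samples_aep]

# Toy demonstration: a synthetic late response convolved with an irregular
# high-rate onset train, then recovered by deconvolution.
fs, sweep_len, aep_len = 1000, 8192, 300
rng = np.random.default_rng(1)
t = np.arange(aep_len) / fs
true_aep = -np.exp(-((t - 0.10) / 0.02) ** 2) + np.exp(-((t - 0.18) / 0.03) ** 2)
onsets = np.zeros(sweep_len)
onsets[np.cumsum(rng.integers(60, 140, size=60)) % sweep_len] = 1.0  # jittered SOAs
eeg = np.fft.irfft(np.fft.rfft(onsets) * np.fft.rfft(true_aep, sweep_len), n=sweep_len)
eeg += 0.05 * rng.standard_normal(sweep_len)
estimate = recover_aep(eeg, onsets, aep_len)
```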
A Brain System for Auditory Working Memory.
Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D
2016-04-20
The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.
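As a schematic of the multivoxel pattern analysis mentioned above, the sketch below decodes which of two tones was "maintained" from simulated voxel patterns; the trial and voxel counts, noise level, and classifier are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulate trial-wise voxel patterns for an auditory-cortex ROI: a weak spatial
# code for tone A vs tone B buried in measurement noise, then cross-validated
# decoding of the maintained tone.
rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
labels = rng.integers(0, 2, n_trials)                          # tone A vs tone B
tone_pattern = rng.standard_normal(n_voxels)                   # assumed spatial code
patterns = np.outer(labels - 0.5, tone_pattern)                # signal
patterns += 2.0 * rng.standard_normal((n_trials, n_voxels))    # noise
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")         # chance = 0.50
```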
Interactions between the nucleus accumbens and auditory cortices predict music reward value.
Salimpoor, Valorie N; van den Bosch, Iris; Kovacevic, Natasa; McIntosh, Anthony Randal; Dagher, Alain; Zatorre, Robert J
2013-04-12
We used functional magnetic resonance imaging to investigate neural processes when music gains reward value the first time it is heard. The degree of activity in the mesolimbic striatal regions, especially the nucleus accumbens, during music listening was the best predictor of the amount listeners were willing to spend on previously unheard music in an auction paradigm. Importantly, the auditory cortices, amygdala, and ventromedial prefrontal regions showed increased activity during listening conditions requiring valuation, but did not predict reward value, which was instead predicted by increasing functional connectivity of these regions with the nucleus accumbens as the reward value increased. Thus, aesthetic rewards arise from the interaction between mesolimbic reward circuitry and cortical networks involved in perceptual analysis and valuation.
ERP effects and perceived exclusion in the Cyberball paradigm: Correlates of expectancy violation?
Weschke, Sarah; Niedeggen, Michael
2015-10-22
A virtual ball-tossing game called Cyberball has allowed the identification of neural structures involved in the processing of social exclusion by using neurocognitive methods. However, there is still an ongoing debate as to whether the structures involved are pain- or exclusion-specific or part of a broader network. In electrophysiological Cyberball studies we have shown that the P3b component is sensitive to exclusion manipulations, possibly modulated by the probability of ball possession of the participant (event "self") or the presumed co-players (event "other"). Since it is known from oddball studies that the P3b is not only modulated by the objective probability of an event, but also by subjective expectancy, we independently manipulated the probability of the events "self" and "other" and the expectancy for these events. Questionnaire data indicate that social need threat is only induced when the expectancy for involvement in the ball-tossing game is violated. Similarly, the P3b amplitude of both "self" and "other" events was a correlate of expectancy violation. We conclude that both the subjective report of exclusion and the P3b effect induced in the Cyberball paradigm are primarily based on a cognitive process sensitive to expectancy violations, and that the P3b is not related to the activation of an exclusion-specific neural alarm system. Copyright © 2015 Elsevier B.V. All rights reserved.
Strait, Dana L.; Kraus, Nina
2013-01-01
Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians’ subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model by which to study mechanisms of experience-dependent changes in auditory function in humans. PMID:23988583
Stavrinos, Georgios; Iliadou, Vassiliki-Maria; Edwards, Lindsey; Sirimanna, Tony; Bamiou, Doris-Eva
2018-01-01
Measures of attention have been found to correlate with specific auditory processing tests in samples of children suspected of Auditory Processing Disorder (APD), but these relationships have not been adequately investigated. Despite evidence linking auditory attention and deficits/symptoms of APD, measures of attention are not routinely used in APD diagnostic protocols. The aim of the study was to examine the relationship between auditory and visual attention tests and auditory processing tests in children with APD and to assess whether a proposed diagnostic protocol for APD, including measures of attention, could provide useful information for APD management. A pilot study including 27 children, aged 7–11 years, referred for APD assessment was conducted. The validated test of everyday attention for children, with visual and auditory attention tasks, the listening in spatialized noise sentences test, the children's communication checklist questionnaire and tests from a standard APD diagnostic test battery were administered. Pearson's partial correlation analysis examining the relationship between these tests and Cochrane's Q test analysis comparing proportions of diagnosis under each proposed battery were conducted. Divided auditory and divided auditory-visual attention strongly correlated with the dichotic digits test, r = 0.68, p < 0.05, and r = 0.76, p = 0.01, respectively, in a sample of 20 children with APD diagnosis. The standard APD battery identified a larger proportion of participants as having APD, than an attention battery identified as having Attention Deficits (ADs). The proposed APD battery excluding AD cases did not have a significantly different diagnosis proportion than the standard APD battery. Finally, the newly proposed diagnostic battery, identifying an inattentive subtype of APD, identified five children who would have otherwise been considered not having ADs. The findings show that a subgroup of children with APD demonstrates underlying
Noto, M; Nishikawa, J; Tateno, T
2016-03-24
A sound interrupted by silence is perceived as discontinuous. However, when high-intensity noise is inserted during the silence, the missing sound may be perceptually restored and be heard as uninterrupted. This illusory phenomenon is called auditory induction. Recent electrophysiological studies have revealed that auditory induction is associated with the primary auditory cortex (A1). Although experimental evidence has been accumulating, the neural mechanisms underlying auditory induction in A1 neurons are poorly understood. To elucidate this, we used both experimental and computational approaches. First, using an optical imaging method, we characterized population responses across auditory cortical fields to sound and identified five subfields in rats. Next, we examined neural population activity related to auditory induction with high temporal and spatial resolution in the rat auditory cortex (AC), including the A1 and several other AC subfields. Our imaging results showed that tone-burst stimuli interrupted by a silent gap elicited early phasic responses to the first tone and similar or smaller responses to the second tone following the gap. In contrast, tone stimuli interrupted by broadband noise (BN), considered to cause auditory induction, considerably suppressed or eliminated responses to the tone following the noise. Additionally, tone-burst stimuli that were interrupted by notched noise centered at the tone frequency, which is considered to decrease the strength of auditory induction, partially restored the second responses from the suppression caused by BN. To phenomenologically mimic the neural population activity in the A1 and thus investigate the mechanisms underlying auditory induction, we constructed a computational model from the periphery through the AC, including a nonlinear dynamical system. The computational model successively reproduced some of the above-mentioned experimental results. Therefore, our results suggest that a nonlinear, self
Auditory-vocal mirroring in songbirds.
Mooney, Richard
2014-01-01
Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory-vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory-vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role in song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory-vocal mirroring in the songbird's brain.
Experience and information loss in auditory and visual memory.
Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K
2017-07-01
Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.
Auditory and motor imagery modulate learning in music performance
Brown, Rachel M.; Palmer, Caroline
2013-01-01
Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of
Lotfi, Yones; Mehrkian, Saiedeh; Moossavi, Abdollah; Zadeh, Soghrat Faghih; Sadjedi, Hamed
2016-03-01
This study assessed the relationship between working memory capacity and auditory stream segregation by using the concurrent minimum audible angle in children with a diagnosed auditory processing disorder (APD). The participants in this cross-sectional, comparative study were 20 typically developing children and 15 children with a diagnosed APD (age, 9-11 years) according to the subtests of multiple-processing auditory assessment. Auditory stream segregation was investigated using the concurrent minimum audible angle. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Nonparametric statistics were utilized to compare the between-group differences. The Pearson correlation was employed to measure the degree of association between working memory capacity and the localization tests between the 2 groups. The group with APD had significantly lower scores than did the typically developing subjects in auditory stream segregation and working memory capacity. There were significant negative correlations between working memory capacity and the concurrent minimum audible angle in the most frontal reference location (0° azimuth) and lower negative correlations in the most lateral reference location (60° azimuth) in the children with APD. The study revealed a relationship between working memory capacity and auditory stream segregation in children with APD. The research suggests that lower working memory capacity in children with APD may be the possible cause of the inability to segregate and group incoming information.
Auditory Training Effects on the Listening Skills of Children With Auditory Processing Disorder.
Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva
2016-01-01
Children with auditory processing disorder (APD) typically present with "listening difficulties,"' including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved the perception of speech-in-noise test performance, and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediate postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects. The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the listening in specialized noise test that assesses sentence perception in
Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; sajedi, Hamed
2014-11-01
Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time (ITD) differences and inter-aural intensity (IID) differences with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using the non-word repetition, and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and the localization tests between the two groups. Children in the APD group had consistently lower scores than typically developing subjects in lateralization and working memory capacity measures. The results showed that working memory capacity had a significant negative correlation with ITD errors, especially with the high-pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings of this research indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
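For illustration, the sketch below builds a stereo noise burst carrying an interaural time difference (ITD) and an interaural intensity difference (IID) of the kind used in such lateralization tests; the parameter values, the sample-shift implementation of the delay, and the omission of the high-/low-pass filtering used in the study are assumptions.

```python
import numpy as np

def lateralized_noise(itd_us=500.0, iid_db=0.0, dur=0.3, fs=48000, seed=0):
    """Stereo noise burst carrying an interaural time and intensity difference.

    Positive `itd_us` delays the left channel and positive `iid_db` attenuates
    it, so both cues push the perceived image toward the right ear.
    """
    rng = np.random.default_rng(seed)
    mono = rng.standard_normal(int(dur * fs))
    delay = int(round(itd_us * 1e-6 * fs))                # ITD as a sample shift
    left = np.roll(mono, delay) * 10 ** (-iid_db / 20.0)  # IID as a gain change
    right = mono
    return np.column_stack([left, right])

stim = lateralized_noise(itd_us=500.0, iid_db=10.0)   # strongly right-lateralized
```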
Repetition suppression and reactivation in auditory-verbal short-term recognition memory.
Buchsbaum, Bradley R; D'Esposito, Mark
2009-06-01
The neural response to stimulus repetition is not uniform across brain regions, stimulus modalities, or task contexts. For instance, it has been observed in many functional magnetic resonance imaging (fMRI) studies that sometimes stimulus repetition leads to a relative reduction in neural activity (repetition suppression), whereas in other cases repetition results in a relative increase in activity (repetition enhancement). In the present study, we hypothesized that in the context of a verbal short-term recognition memory task, repetition-related "increases" should be observed in the same posterior temporal regions that have been previously associated with "persistent activity" in working memory rehearsal paradigms. We used fMRI and a continuous recognition memory paradigm with short lags to examine repetition effects in the posterior and anterior regions of the superior temporal cortex. Results showed that, consistent with our hypothesis, the two posterior temporal regions consistently associated with working memory maintenance also showed repetition increases during short-term recognition memory. In contrast, a region in the anterior superior temporal lobe showed repetition suppression effects, consistent with previous work on perceptual adaptation in the auditory-verbal domain. We interpret these results in light of recent theories of functional specialization along the anterior and posterior axes of the superior temporal lobe.
[Auditory processing evaluation in children born preterm].
Gallo, Júlia; Dias, Karin Ziliotto; Pereira, Liliane Desgualdo; Azevedo, Marisa Frasson de; Sousa, Elaine Colombo
2011-01-01
To verify the performance of children born preterm on auditory processing evaluation, and to correlate the data with behavioral hearing assessment carried out at 12 months of age, comparing the results to those of auditory processing evaluation of children born full-term. Participants were 30 children with ages between 4 and 7 years, who were divided into two groups: Group 1 (children born preterm) and Group 2 (children born full-term). The auditory processing results of Group 1 were correlated with data obtained from the behavioral auditory evaluation carried out at 12 months of age. The results were compared between groups. Subjects in Group 1 presented at least one risk indicator for hearing loss at birth. In the behavioral auditory assessment carried out at 12 months of age, 38% of the children in Group 1 were at risk for central auditory processing deficits, and 93.75% presented auditory processing deficits on the evaluation. Significant differences were found between the groups for the temporal order test, the PSI test with ipsilateral competitive message, and the speech-in-noise test. The delay in sound localization ability was associated with temporal processing deficits. Children born preterm have worse performance on auditory processing evaluation than children born full-term. Delay in sound localization at 12 months is associated with deficits in the physiological mechanism of temporal processing on the auditory processing evaluation carried out between 4 and 7 years of age.
Neural circuits in auditory and audiovisual memory.
Plakke, B; Romanski, L M
2016-06-01
Working memory is the ability to employ recently seen or heard stimuli and apply them to a changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years there have been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integration, and retention of communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
Kaya, Emine Merve
2017-01-01
Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information—a phenomenon referred to as the ‘cocktail party problem’. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by ‘bottom-up’ sensory-driven factors, as well as ‘top-down’ task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape; with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044012
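As a toy illustration of the 'bottom-up', stimulus-driven component of attention discussed in this review (not any specific published model), the Python sketch below derives a crude salience trace from a spectrogram by summing positive spectral change over time, so that an abrupt loud event such as an explosion produces a salience peak. All parameter values are arbitrary assumptions.

import numpy as np
from scipy import signal

def bottom_up_salience(waveform, fs):
    """Toy bottom-up salience: frame-to-frame increase in spectral energy.
    Loud, abrupt events produce large salience peaks."""
    f, t, spec = signal.spectrogram(waveform, fs, nperseg=512, noverlap=256)
    log_spec = np.log(spec + 1e-12)
    # Positive temporal derivative of the log spectrogram, summed over frequency.
    onset_strength = np.diff(log_spec, axis=1).clip(min=0).sum(axis=0)
    return t[1:], onset_strength

# Example scene: a quiet tone interrupted by a loud burst halfway through.
fs = 16000
t = np.arange(0, 2.0, 1 / fs)
wave = 0.05 * np.sin(2 * np.pi * 440 * t)
wave[fs:fs + 800] += 0.8 * np.random.randn(800)      # abrupt noise burst at 1 s
times, salience = bottom_up_salience(wave, fs)
print("Most salient moment (s):", round(times[salience.argmax()], 2))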
Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D
2015-09-01
To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.
Auditory enhancement of visual perception at threshold depends on visual abilities.
Caclin, Anne; Bouchet, Patrick; Djoulah, Farida; Pirat, Elodie; Pernier, Jacques; Giard, Marie-Hélène
2011-06-17
Whether or not multisensory interactions can improve detection thresholds, and thus widen the range of perceptible events is a long-standing debate. Here we revisit this question, by testing the influence of auditory stimuli on visual detection threshold, in subjects exhibiting a wide range of visual-only performance. Above the perceptual threshold, crossmodal interactions have indeed been reported to depend on the subject's performance when the modalities are presented in isolation. We thus tested normal-seeing subjects and short-sighted subjects wearing their usual glasses. We used a paradigm limiting potential shortcomings of previous studies: we chose a criterion-free threshold measurement procedure and precluded exogenous cueing effects by systematically presenting a visual cue whenever a visual target (a faint Gabor patch) might occur. Using this carefully controlled procedure, we found that concurrent sounds only improved visual detection thresholds in the sub-group of subjects exhibiting the poorest performance in the visual-only conditions. In these subjects, for oblique orientations of the visual stimuli (but not for vertical or horizontal targets), the auditory improvement was still present when visual detection was already helped with flanking visual stimuli generating a collinear facilitation effect. These findings highlight that crossmodal interactions are most efficient to improve perceptual performance when an isolated modality is deficient. Copyright © 2011 Elsevier B.V. All rights reserved.
Auditory motion-specific mechanisms in the primate brain
Baumann, Simon; Dheerendra, Pradeep; Joly, Olivier; Hunter, David; Balezeau, Fabien; Sun, Li; Rees, Adrian; Petkov, Christopher I.; Thiele, Alexander; Griffiths, Timothy D.
2017-01-01
This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream. PMID:28472038
Tillmann, Julian; Swettenham, John
2017-02-01
Previous studies examining selective attention in individuals with autism spectrum disorder (ASD) have yielded conflicting results, some suggesting superior focused attention (e.g., on visual search tasks), others demonstrating greater distractibility. This pattern could be accounted for by the proposal (derived by applying the Load theory of attention, e.g., Lavie, 2005) that ASD is characterized by an increased perceptual capacity (Remington, Swettenham, Campbell, & Coleman, 2009). Recent studies in the visual domain support this proposal. Here we hypothesize that ASD involves an enhanced perceptual capacity that also operates across sensory modalities, and test this prediction, for the first time using a signal detection paradigm. Seventeen neurotypical (NT) and 15 ASD adolescents performed a visual search task under varying levels of visual perceptual load while simultaneously detecting presence/absence of an auditory tone embedded in noise. Detection sensitivity (d') for the auditory stimulus was similarly high for both groups in the low visual perceptual load condition (e.g., 2 items: p = .391, d = 0.31, 95% confidence interval [CI] [-0.39, 1.00]). However, at a higher level of visual load, auditory d' reduced for the NT group but not the ASD group, leading to a group difference (p = .002, d = 1.2, 95% CI [0.44, 1.96]). As predicted, when visual perceptual load was highest, both groups then showed a similarly low auditory d' (p = .9, d = 0.05, 95% CI [-0.65, 0.74]). These findings demonstrate that increased perceptual capacity in ASD operates across modalities. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
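For readers unfamiliar with the effect sizes quoted above, the Python sketch below computes Cohen's d with an approximate 95% confidence interval for two independent groups. The auditory d' values used are invented placeholders, and the large-sample variance formula is one common approximation, not necessarily the procedure used by the authors.

import numpy as np
from scipy import stats

def cohens_d_with_ci(group_a, group_b, confidence=0.95):
    """Cohen's d (pooled SD) for two independent groups, with an approximate
    confidence interval based on the large-sample variance of d."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    d = (a.mean() - b.mean()) / pooled_sd
    se = np.sqrt((na + nb) / (na * nb) + d ** 2 / (2 * (na + nb)))
    z = stats.norm.ppf(0.5 + confidence / 2)
    return d, (d - z * se, d + z * se)

# Invented auditory d' values for two hypothetical groups.
group_1 = [2.4, 2.6, 2.8, 2.2, 2.9, 2.5, 2.7, 2.3, 2.6, 3.0]
group_2 = [1.9, 2.1, 1.5, 1.2, 1.8, 2.3, 1.6, 1.4, 2.0, 1.7]
d, ci = cohens_d_with_ci(group_1, group_2)
print(f"Cohen's d = {d:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")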
1984-08-01
This work reviews the areas of monaural and binaural signal detection, auditory discrimination and localization, and reaction times to auditory stimuli, covering the major areas of auditory processing in humans relevant to auditory displays.
Aliakbaryhosseinabadi, Susan; Kamavuako, Ernest Nlandu; Jiang, Ning; Farina, Dario; Mrachacz-Kersting, Natalie
2017-11-01
Dual tasking is defined as performing two tasks concurrently and has been shown to have a significant effect on attention directed to the performance of the main task. In this study, an attention diversion task with two different levels was administered while participants had to complete a cue-based motor task consisting of foot dorsiflexion. An auditory oddball task with two levels of complexity was implemented to divert the user's attention. Electroencephalographic (EEG) recordings were made from nine single channels. Event-related potentials (ERPs) confirmed that the oddball task of counting a sequence of two tones decreased the auditory P300 amplitude more than the oddball task of counting one target tone among three different tones. Pre-movement features quantified from the movement-related cortical potential (MRCP) changed significantly between single- and dual-task conditions in motor and fronto-central channels. There was a significant delay in movement detection for the case of single-tone counting in two motor channels only (237.1-247.4 ms). For the task of sequence counting, motor cortex and frontal channels showed a significant delay in MRCP detection (232.1-250.5 ms). This study investigated the effect of attention diversion in dual-task conditions by analysing both ERPs and MRCPs in single channels. The higher attention diversion led to a significant reduction in specific MRCP features of the motor task. These results suggest that attention division in dual-tasking situations plays an important role in movement execution and detection. This has important implications for designing real-time brain-computer interface systems. Copyright © 2017 Elsevier B.V. All rights reserved.
Kostopoulos, Penelope; Petrides, Michael
2016-02-16
There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.
Fundamental deficits of auditory perception in Wernicke's aphasia.
Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen
2013-01-01
This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
Brain responses to tonal changes in the first two years of life
Jing, Hongkui; Benasich, April A.
2007-01-01
Maturation of auditory perceptual and discrimination processes within the first two years of life was investigated in healthy infants by examining event-related potentials (ERPs). High-density EEG signals were recorded from the scalp monthly between 3 and 24 months of age. Two types of stimuli (100 vs. 100 Hz for standard stimuli; 100 vs. 300 Hz for deviant stimuli; occurrence rate: 85:15%) were presented using an oddball paradigm. Latencies and amplitudes were compared across development. The results showed that latencies of the P150, N250, P350, and N450 components gradually decreased with increasing age. Amplitudes of the N250 and P350 components gradually increased, reached a maximum at 9 months, and then gradually decreased with increasing age. Mismatch negativity was not obvious at 3 months of age, but was seen at 4–5 months and became robust after 6 months. Robust late positivity was recorded at all ages. These mismatch responses were noticeable in the frontal, central, and parietal areas, and the maximal MMN amplitude distribution gradually moved from the parietal area to the frontal area across the age range. Two important periods, one around 6 months and the other around 9 months, are suggested in the maturation of the central auditory system. Dynamic changes in the underlying source strengths and orientations may be the principal contributors to ERP morphological changes in infants within the first 24 months. PMID:16373083
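As a hedged sketch of how mismatch responses such as those described here are usually quantified, the Python snippet below averages synthetic standard and deviant epochs and takes the deviant-minus-standard difference wave. The sampling rate, epoch window, and simulated data are assumptions for illustration, not the study's actual recording parameters.

import numpy as np

rng = np.random.default_rng(0)
fs = 250                                  # sampling rate (Hz), assumed
times = np.arange(-0.1, 0.5, 1 / fs)      # epoch window around stimulus onset (s)

def synth_epochs(n, extra_negativity=0.0):
    """Synthetic single-trial epochs: noise plus an optional negativity near 200 ms."""
    base = rng.normal(0, 2.0, size=(n, times.size))
    bump = extra_negativity * np.exp(-((times - 0.2) ** 2) / (2 * 0.03 ** 2))
    return base + bump

standard_epochs = synth_epochs(850)                        # ~85% of trials
deviant_epochs = synth_epochs(150, extra_negativity=-3.0)  # ~15% of trials

# ERP averages and the deviant-minus-standard difference wave (MMN).
erp_standard = standard_epochs.mean(axis=0)
erp_deviant = deviant_epochs.mean(axis=0)
mmn = erp_deviant - erp_standard

peak = times[mmn.argmin()]
print(f"MMN peak latency: {peak * 1000:.0f} ms, amplitude: {mmn.min():.2f} uV")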
Saletu, Michael; Anderer, Peter; Saletu-Zyhlarz, Gerda Maria; Mandl, Magdalena; Saletu, Bernd; Zeitlhofer, Josef
2009-09-01
Recent neuroimaging studies in narcolepsy discovered significant gray matter loss in the right prefrontal and frontomesial cortex, a critical region for executive processing. In the present study, event-related potential (ERP) low-resolution brain electromagnetic tomography (LORETA) was used to investigate cognition before and after modafinil as compared with placebo. In a double-blind, placebo-controlled cross-over design, 15 patients were treated with a 3-week fixed titration scheme of modafinil and placebo. The Epworth Sleepiness Scale (ESS), Maintenance of Wakefulness Test (MWT) and auditory ERPs (odd-ball paradigm) were obtained before and after the 3 weeks of therapy. Latencies, amplitudes and LORETA sources were determined for standard (N1 and P2) and target (N2 and P300) ERP components. The ESS score improved significantly from 15.4 (+/- 4.0) under placebo to 10.2 (+/- 4.1) under 400mg modafinil (p=0.004). In the MWT, latency to sleep increased nonsignificantly after modafinil treatment (11.9+/-6.9 versus 13.3+/-7.1 min). In the ERP, N2 and P300 latencies were shortened significantly. While ERP amplitudes showed only minor changes, LORETA revealed increased source strengths: for N1 in the left auditory cortex and for P300 in the medial and right dorsolateral prefrontal cortex. LORETA revealed that modafinil improved information processing speed and increased energetic resources in prefrontal cortical regions, which is in agreement with other neuroimaging studies.
Vavatzanidis, Niki Katerina; Mürbe, Dirk; Friederici, Angela; Hahne, Anja
2015-12-01
One main incentive for supplying hearing impaired children with a cochlear implant is the prospect of oral language acquisition. Only scarce knowledge exists, however, of what congenitally deaf children actually perceive when receiving their first auditory input, and specifically what speech-relevant features they are able to extract from the new modality. We therefore presented congenitally deaf infants and young children implanted before the age of 4 years with an oddball paradigm of long and short vowel variants of the syllable /ba/. We measured the EEG in regular intervals to study their discriminative ability starting with the first activation of the implant up to 8 months later. We were thus able to time-track the emerging ability to differentiate one of the most basic linguistic features that bears semantic differentiation and helps in word segmentation, namely, vowel length. Results show that already 2 months after the first auditory input, but not directly after implant activation, these early implanted children differentiate between long and short syllables. Surprisingly, after only 4 months of hearing experience, the ERPs have reached the same properties as those of the normal hearing control group, demonstrating the plasticity of the brain with respect to the new modality. We thus show that a simple but linguistically highly relevant feature such as vowel length reaches age-appropriate electrophysiological levels as fast as 4 months after the first acoustic stimulation, providing an important basis for further language acquisition.
MMN and novelty P3 in coma and other altered states of consciousness: a review.
Morlet, Dominique; Fischer, Catherine
2014-07-01
In recent decades, there has been a growing interest in the assessment of patients in altered states of consciousness. There is a need for accurate and early prediction of awakening and recovery from coma. Neurophysiological assessment of coma was once restricted to brainstem auditory and primary cortex somatosensory evoked potentials elicited in the 30 ms range, which have both shown good predictive value for poor coma outcome only. In this paper, we review how passive auditory oddball paradigms including deviant and novel sounds have proved their efficiency in assessing brain function at a higher level, without requiring the patient's active involvement, thus providing an enhanced tool for the prediction of coma outcome. The presence of an MMN in response to deviant stimuli highlights preserved automatic sensory memory processes. Recorded during coma, MMN has shown high specificity as a predictor of recovery of consciousness. The presence of a novelty P3 in response to the subject's own first name presented as a novel (rare) stimulus has shown a good correlation with coma awakening. There is now a growing interest in the search for markers of consciousness, if there are any, in unresponsive patients (chronic vegetative or minimally conscious states). We discuss the different ERP patterns observed in these patients. The presence of novelty P3, including parietal components and possibly followed by a late parietal positivity, raises the possibility that some awareness processes are at work in these unresponsive patients.
Chhabra, Harleen; Sowmya, Selvaraj; Sreeraj, Vanteemar S; Kalmady, Sunil V; Shivakumar, Venkataram; Amaresha, Anekal C; Narayanaswamy, Janardhanan C; Venkatasubramanian, Ganesan
2016-12-01
Auditory hallucinations constitute an important symptom component in 70-80% of schizophrenia patients. These hallucinations are proposed to occur due to an imbalance between perceptual expectation and external input, resulting in attachment of meaning to abstract noises; signal detection theory has been proposed to explain these phenomena. In this study, we describe the development of an auditory signal detection task using a carefully chosen set of English words that could be tested successfully in schizophrenia patients coming from varying linguistic, cultural and social backgrounds. Schizophrenia patients with significant auditory hallucinations (N=15) and healthy controls (N=15) performed the auditory signal detection task wherein they were instructed to differentiate between a 5-s burst of plain white noise and voiced-noise. The analysis showed that false alarms (p=0.02), discriminability index (p=0.001) and decision bias (p=0.004) were significantly different between the two groups. There was a significant negative correlation between false alarm rate and decision bias. These findings extend further support for impaired perceptual expectation system in schizophrenia patients. Copyright © 2016 Elsevier B.V. All rights reserved.
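The discriminability index and decision bias reported here are standard signal detection measures. The Python sketch below shows the conventional computation of d' and criterion c from hit and false-alarm counts, with a simple log-linear correction so that rates of 0 or 1 do not yield infinite z-scores; the trial counts are invented for illustration.

from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """d' (discriminability) and criterion c (decision bias) from trial counts,
    using a log-linear correction to keep the z-scores finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)   # negative c indicates a liberal ("yes"-prone) bias
    return d_prime, criterion

# Invented example counts for a voiced-noise vs. plain-noise detection task.
d, c = sdt_measures(hits=38, misses=12, false_alarms=20, correct_rejections=30)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")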
Diukova, Ana; Ware, Jennifer; Smith, Jessica E.; Evans, C. John; Murphy, Kevin; Rogers, Peter J.; Wise, Richard G.
2012-01-01
The effects of caffeine are mediated through its non-selective antagonistic effects on adenosine A1 and A2A receptors, resulting in increased neuronal activity but also vasoconstriction in the brain. Caffeine, therefore, can modify BOLD FMRI signal responses through both its neural and its vascular effects depending on receptor distributions in different brain regions. In this study we aim to distinguish neural and vascular influences of a single dose of caffeine in measurements of task-related brain activity using simultaneous EEG–FMRI. We chose to compare low-level visual and motor (paced finger tapping) tasks with a cognitive (auditory oddball) task, with the expectation that caffeine would differentially affect brain responses in relation to these tasks. To avoid the influence of chronic caffeine intake, we examined the effect of 250 mg of oral caffeine on 14 non- and infrequent caffeine consumers in a double-blind placebo-controlled cross-over study. Our results show that the task-related BOLD signal change in visual and primary motor cortex was significantly reduced by caffeine, while the amplitude and latency of visual evoked potentials over occipital cortex remained unaltered. However, during the auditory oddball task (target versus non-target stimuli) caffeine significantly increased the BOLD signal in frontal cortex. Correspondingly, there was also a significant effect of caffeine in reducing the target evoked response potential (P300) latency in the oddball task and this was associated with a positive potential over frontal cortex. Behavioural data showed that caffeine also improved performance in the oddball task with a significantly reduced number of missed responses. Our results are consistent with earlier studies demonstrating altered flow-metabolism coupling after caffeine administration in the context of our observation of a generalised caffeine-induced reduction in cerebral blood flow demonstrated by arterial spin labelling (19
Auditory perception modulated by word reading.
Cao, Liyu; Klepp, Anne; Schnitzler, Alfons; Gross, Joachim; Biermann-Ruben, Katja
2016-10-01
Theories of embodied cognition positing that sensorimotor areas are indispensable during language comprehension are supported by neuroimaging and behavioural studies. Among others, the auditory system has been suggested to be important for understanding sound-related words (visually presented) and the motor system for action-related words. In this behavioural study, using a sound detection task embedded in a lexical decision task, we show that in participants with high lexical decision performance sound verbs improve auditory perception. The amount of modulation was correlated with lexical decision performance. Our study provides convergent behavioural evidence of auditory cortex involvement in word processing, supporting the view of embodied language comprehension concerning the auditory domain.
Lotfi, Yones; Mehrkian, Saiedeh; Moossavi, Abdollah; Zadeh, Soghrat Faghih; Sadjedi, Hamed
2016-01-01
Background: This study assessed the relationship between working memory capacity and auditory stream segregation by using the concurrent minimum audible angle in children with a diagnosed auditory processing disorder (APD). Methods: The participants in this cross-sectional, comparative study were 20 typically developing children and 15 children with a diagnosed APD (age, 9–11 years) according to the subtests of multiple-processing auditory assessment. Auditory stream segregation was investigated using the concurrent minimum audible angle. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Nonparametric statistics were utilized to compare the between-group differences. The Pearson correlation was employed to measure the degree of association between working memory capacity and the localization tests between the 2 groups. Results: The group with APD had significantly lower scores than did the typically developing subjects in auditory stream segregation and working memory capacity. There were significant negative correlations between working memory capacity and the concurrent minimum audible angle in the most frontal reference location (0° azimuth) and lower negative correlations in the most lateral reference location (60° azimuth) in the children with APD. Conclusion: The study revealed a relationship between working memory capacity and auditory stream segregation in children with APD. The research suggests that lower working memory capacity in children with APD may be the possible cause of the inability to segregate and group incoming information. PMID:26989281
Looming auditory collision warnings for driving.
Gray, Rob
2011-02-01
A driving simulator was used to compare the effectiveness of increasing intensity (looming) auditory warning signals with other types of auditory warnings. Auditory warnings have been shown to speed driver reaction time in rear-end collision situations; however, it is not clear which type of signal is the most effective. Although verbal and symbolic (e.g., a car horn) warnings have faster response times than abstract warnings, they often lead to more response errors. Participants (N=20) experienced four nonlooming auditory warnings (constant intensity, pulsed, ramped, and car horn), three looming auditory warnings ("veridical," "early," and "late"), and a no-warning condition. In 80% of the trials, warnings were activated when a critical response was required, and in 20% of the trials, the warnings were false alarms. For the early (late) looming warnings, the rate of change of intensity signaled a time to collision (TTC) that was shorter (longer) than the actual TTC. Veridical looming and car horn warnings had significantly faster brake reaction times (BRT) compared with the other nonlooming warnings (by 80 to 160 ms). However, the number of braking responses in false alarm conditions was significantly greater for the car horn. BRT increased significantly and systematically as the TTC signaled by the looming warning was changed from early to veridical to late. Looming auditory warnings produce the best combination of response speed and accuracy. The results indicate that looming auditory warnings can be used to effectively warn a driver about an impending collision.
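A minimal sketch of how a looming warning of the kind compared above could be synthesized: for a source approaching at constant speed, amplitude grows roughly in inverse proportion to the remaining distance, so the steepness of the intensity ramp implicitly signals a time to collision (TTC). The inverse-distance law, carrier frequency, and durations in this Python sketch are illustrative assumptions, not the study's actual stimuli.

import numpy as np

def looming_warning(duration_s=1.0, signaled_ttc_s=3.0, fs=44100, freq_hz=1000.0):
    """Tone whose amplitude rises as 1 / remaining distance for a source that
    would arrive at t = signaled_ttc_s. A shorter signaled TTC gives a steeper,
    more urgent ramp ('early' warning); a longer one gives a shallower ramp ('late')."""
    t = np.arange(0, duration_s, 1 / fs)
    remaining = np.maximum(signaled_ttc_s - t, 0.05)   # clamp to avoid blow-up
    envelope = 1.0 / remaining
    envelope /= envelope.max()                         # normalize to a peak of 1
    return 0.5 * envelope * np.sin(2 * np.pi * freq_hz * t)

veridical = looming_warning(signaled_ttc_s=3.0)   # ramp matches the actual TTC
early = looming_warning(signaled_ttc_s=2.0)       # ramp signals an earlier collision
late = looming_warning(signaled_ttc_s=4.0)        # ramp signals a later collision
print("Peak/onset amplitude ratio (veridical):",
      round(abs(veridical).max() / abs(veridical[:100]).max(), 1))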
The what, where and how of auditory-object perception.
Bizley, Jennifer K; Cohen, Yale E
2013-10-01
The fundamental perceptual unit in hearing is the 'auditory object'. Similar to visual objects, auditory objects are the computational result of the auditory system's capacity to detect, extract, segregate and group spectrotemporal regularities in the acoustic environment; the multitude of acoustic stimuli around us together form the auditory scene. However, unlike the visual scene, resolving the component objects within the auditory scene crucially depends on their temporal structure. Neural correlates of auditory objects are found throughout the auditory system. However, neural responses do not become correlated with a listener's perceptual reports until the level of the cortex. The roles of different neural structures and the contribution of different cognitive states to the perception of auditory objects are not yet fully understood.
Tracking the Sensory Environment: An ERP Study of Probability and Context Updating in ASD
ERIC Educational Resources Information Center
Westerfield, Marissa A.; Zinni, Marla; Vo, Khang; Townsend, Jeanne
2015-01-01
We recorded visual event-related brain potentials from 32 adult male participants (16 high-functioning participants diagnosed with autism spectrum disorder (ASD) and 16 control participants, ranging in age from 18 to 53 years) during a three-stimulus oddball paradigm. Target and non-target stimulus probability was varied across three probability…