Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong
2013-10-11
Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulate auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following the VSC were faster and more accurate than those following the VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) was larger in the VSC condition than in the VTC condition, and the mean amplitude of the late positivity (300-420 ms) was larger in the VTC condition than in the VSC condition. These findings suggest that the modulations of auditory stimulus processing by visually induced spatial and temporal orienting of attention are different, but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Enhanced Generalization of Auditory Conditioned Fear in Juvenile Mice
ERIC Educational Resources Information Center
Ito, Wataru; Pan, Bing-Xing; Yang, Chao; Thakur, Siddarth; Morozov, Alexei
2009-01-01
Increased emotionality is a characteristic of human adolescence, but its animal models are limited. Here we report that generalization of auditory conditioned fear between a conditional stimulus (CS+) and a novel auditory stimulus is stronger in 4-5-wk-old mice (juveniles) than in their 9-10-wk-old counterparts (adults), whereas nonassociative…
Startle Auditory Stimuli Enhance the Performance of Fast Dynamic Contractions
Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M.
2014-01-01
Fast reaction times and the ability to develop a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, movement onset, movement duration and electromyography from the pectoralis and triceps muscles were recorded. The SS condition induced an increase in RFD and peak velocity and a reduction in movement onset and duration, in comparison with the VS and AS conditions. The onset of activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training. PMID:24489967
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
2015-01-01
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
Medial Auditory Thalamic Stimulation as a Conditioned Stimulus for Eyeblink Conditioning in Rats
ERIC Educational Resources Information Center
Campolattaro, Matthew M.; Halverson, Hunter E.; Freeman, John H.
2007-01-01
The neural pathways that convey conditioned stimulus (CS) information to the cerebellum during eyeblink conditioning have not been fully delineated. It is well established that pontine mossy fiber inputs to the cerebellum convey CS-related stimulation for different sensory modalities (e.g., auditory, visual, tactile). Less is known about the…
Modality-dependent effect of motion information in sensory-motor synchronised tapping.
Ono, Kentaro
2018-05-14
Synchronised action is important for everyday life. Generally, the auditory domain is more sensitive for coding temporal information, and previous studies have shown that auditory-motor synchronisation is much more precise than visuo-motor synchronisation. Interestingly, adding motion information improves synchronisation with visual stimuli, and the advantage of the auditory modality seems to diminish. However, whether adding motion information also improves auditory-motor synchronisation remains unknown. This study compared tapping accuracy with a stationary or moving stimulus in both auditory and visual modalities. Participants were instructed to tap in synchrony with the onset of a sound or flash in the stationary condition, while these stimuli were perceived as moving from side to side in the motion condition. The results demonstrated that synchronised tapping with a moving visual stimulus was significantly more accurate than tapping with a stationary visual stimulus, as previous studies have shown. However, tapping with a moving auditory stimulus was significantly poorer than tapping with a stationary auditory stimulus. Although motion information impaired audio-motor synchronisation, an advantage of the auditory modality over the visual modality remained. These findings likely result from the higher temporal resolution of the auditory domain, which in turn may reflect physiological and structural differences between the auditory and visual pathways in the brain. Copyright © 2018 Elsevier B.V. All rights reserved.
Sanju, Himanshu Kumar; Kumar, Prawin
2016-10-01
Introduction Mismatch negativity (MMN) is a negative component of the event-related potential (ERP) elicited by any discriminable change in auditory stimulation. Objective The present study aimed to assess pre-attentive auditory discrimination skill with fine and gross differences between auditory stimuli. Method Seventeen normal-hearing individuals participated in the study, with informed consent. To assess pre-attentive auditory discrimination skill with a fine difference between auditory stimuli, we recorded MMN with a pair of pure tones, using 1000 Hz as the frequent stimulus and 1010 Hz as the infrequent stimulus. Similarly, to assess pre-attentive auditory discrimination skill with a gross difference between auditory stimuli, we used 1000 Hz as the frequent stimulus and 1100 Hz as the infrequent stimulus. We analyzed the MMN for onset latency, offset latency, peak latency, peak amplitude, and area under the curve. Result Results revealed that MMN was present in only 64% of the individuals in both conditions. Further, multivariate analysis of variance (MANOVA) showed no significant difference in any measure of MMN (onset latency, offset latency, peak latency, peak amplitude, and area under the curve) between the two conditions. Conclusion The present study showed similar pre-attentive skills for both conditions, fine (1000 Hz vs. 1010 Hz) and gross (1000 Hz vs. 1100 Hz) differences in auditory stimuli, at a higher (endogenous) level of the auditory system.
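As context for the MMN measures reported above, the sketch below illustrates the standard difference-wave computation: the averaged ERP to the frequent stimulus is subtracted from the averaged ERP to the infrequent stimulus, and peak amplitude, peak latency, and area under the curve are read off within an analysis window. This is a minimal sketch with synthetic waveforms; the sampling rate, window, and function names are assumptions, not the study's pipeline.

```python
import numpy as np

fs = 500                                  # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)          # epoch from -100 to 500 ms

def mmn_measures(standard, deviant, t, win=(0.10, 0.25)):
    """Difference wave (deviant - standard) and basic MMN parameters."""
    diff = deviant - standard             # MMN lives in the difference wave
    m = (t >= win[0]) & (t <= win[1])     # restrict to the analysis window
    seg, ts = diff[m], t[m]
    i_peak = np.argmin(seg)               # MMN is a negative deflection
    peak_amp, peak_lat = seg[i_peak], ts[i_peak]
    area = np.sum(np.clip(-seg, 0, None)) / fs   # area under the negativity
    return peak_amp, peak_lat, area

# Synthetic example: a fake MMN peaking at ~170 ms against a flat standard
standard = np.zeros_like(t)
deviant = -2.0 * np.exp(-((t - 0.17) ** 2) / (2 * 0.02 ** 2))
print(mmn_measures(standard, deviant, t))
```

Onset and offset latencies can be estimated analogously, e.g. as the first and last samples in the window at which the difference wave crosses a criterion fraction of the peak amplitude.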
Encoding of Discriminative Fear Memory by Input-Specific LTP in the Amygdala.
Kim, Woong Bin; Cho, Jun-Hyeong
2017-08-30
In auditory fear conditioning, experimental subjects learn to associate an auditory conditioned stimulus (CS) with an aversive unconditioned stimulus. With sufficient training, animals fear conditioned to an auditory CS show a fear response to the CS, but not to irrelevant auditory stimuli. Although long-term potentiation (LTP) in the lateral amygdala (LA) plays an essential role in auditory fear conditioning, it is unknown whether LTP is induced selectively in the neural pathways conveying specific CS information to the LA in discriminative fear learning. Here, we show that postsynaptically expressed LTP is induced selectively in the CS-specific auditory pathways to the LA in a mouse model of auditory discriminative fear conditioning. Moreover, optogenetically induced depotentiation of the CS-specific auditory pathways to the LA suppressed conditioned fear responses to the CS. Our results suggest that input-specific LTP in the LA contributes to fear memory specificity, enabling adaptive fear responses only to the relevant sensory cue. Copyright © 2017 Elsevier Inc. All rights reserved.
Okuda, Yuji; Shikata, Hiroshi; Song, Wen-Jie
2011-09-01
As a step toward developing an auditory prosthesis based on cortical stimulation, we tested whether a single train of pulses applied to the primary auditory cortex (AI) could elicit classically conditioned behavior in guinea pigs. Animals were trained using a tone as the conditioned stimulus and an electrical shock to the right eyelid as the unconditioned stimulus. After conditioning, a train of 11 pulses applied to the left AI induced the conditioned eye-blink response. Cortical stimulation induced no response after extinction. Our results support the feasibility of an auditory prosthesis based on electrical stimulation of the cortex. Copyright © 2011 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
2013-07-02
Figure 1.3. Schematic model of the neural circuitry of Pavlovian auditory fear conditioning. The model shows how an auditory conditioned stimulus and a nociceptive unconditioned foot-shock stimulus converge in the lateral amygdala (LA) via the auditory thalamus and cortex and the somatosensory pathways.
Morgan, Simeon J; Paolini, Antonio G
2012-06-06
Acute animal preparations have been used in research prospectively investigating electrode designs and stimulation techniques for integration into neural auditory prostheses, such as auditory brainstem implants and auditory midbrain implants. While acute experiments can give initial insight into the effectiveness of an implant, testing chronically implanted, awake animals provides the advantage of examining the psychophysical properties of the sensations induced using implanted devices. Several techniques, such as reward-based operant conditioning, conditioned avoidance, or classical fear conditioning, have been used to provide behavioral confirmation of detection of a relevant stimulus attribute. Selection of a technique involves balancing aspects including time efficiency (often poor in reward-based approaches), the ability to test a plurality of stimulus attributes simultaneously (limited in conditioned avoidance), and the reliability of measures across repeated stimuli (a potential constraint when physiological measures are employed). Here, a classical fear conditioning behavioral method is presented which may be used to simultaneously test both detection of a stimulus and discrimination between two stimuli. Heart rate is used as the measure of fear response, which reduces or eliminates the requirement for time-consuming video coding of freezing behaviour or other such measures (although such measures could be included to provide convergent evidence). Animals were conditioned using these techniques in three 2-hour conditioning sessions, each providing 48 stimulus trials. Subsequent 48-trial testing sessions were then used to test for detection of each stimulus in presented pairs, and to test discrimination between the member stimuli of each pair. This behavioral method is presented in the context of its utilisation in auditory prosthetic research. The implantation of electrocardiogram telemetry devices is shown, followed by the implantation of brain electrodes into the cochlear nucleus, guided by the monitoring of neural responses to acoustic stimuli, and the fixation of the electrode in place for chronic use.
Medial Auditory Thalamus Inactivation Prevents Acquisition and Retention of Eyeblink Conditioning
ERIC Educational Resources Information Center
Halverson, Hunter E.; Poremba, Amy; Freeman, John H.
2008-01-01
The auditory conditioned stimulus (CS) pathway that is necessary for delay eyeblink conditioning was investigated using reversible inactivation of the medial auditory thalamic nuclei (MATN) consisting of the medial division of the medial geniculate (MGm), suprageniculate (SG), and posterior intralaminar nucleus (PIN). Rats were given saline or…
Facilitation of listening comprehension by visual information under noisy listening condition
NASA Astrophysics Data System (ADS)
Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi
2009-02-01
Comprehension of a sentence under a wide range of delay conditions between auditory and visual stimuli was measured in an environment with low auditory clarity (-10 dB and -15 dB relative to pink noise). Results showed that the image was helpful for comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (132 ms, i.e., 33 ms per frame) or less; that the image was not helpful when the delay was 8 frames (264 ms) or more; and that at the largest delay (32 frames), the video image in some cases interfered with comprehension.
Pre-attentive, context-specific representation of fear memory in the auditory cortex of rat.
Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu
2013-01-01
Neural representation in the auditory cortex is rapidly modulated by both top-down attention and bottom-up stimulus properties, in order to improve perception in a given context. Learning-induced, pre-attentive map plasticity has also been studied in the anesthetized cortex; however, little attention has been paid to rapid, context-dependent modulation. We hypothesize that context-specific learning leads to pre-attentively modulated, multiplex representation in the auditory cortex. Here, we investigate map plasticity in the auditory cortices of anesthetized rats conditioned in a context-dependent manner, such that a conditioned stimulus (CS) of a 20-kHz tone and an unconditioned stimulus (US) of a mild electrical shock were associated only under a noisy auditory context, but not in silence. After the conditioning, although no distinct plasticity was found in the tonotopic map, tone-evoked responses were more noise-resistant than before conditioning. The conditioned group also showed a reduced spread of activation to each tone in noise, but not in silence, associated with sharpened frequency tuning. The encoding accuracy index of neurons showed that conditioning degraded the accuracy of tone-frequency representations in the noisy condition at off-CS regions, but not at CS regions, suggesting that arbitrary tones around the frequency of the CS were more likely to be perceived as the CS in the specific context where the CS had been associated with the US. These results together demonstrate that learning-induced plasticity in the auditory cortex occurs in a context-dependent manner.
Bravi, Riccardo; Del Tongo, Claudia; Cohen, Erez James; Dalle Mura, Gabriele; Tognetti, Alessandro; Minciacchi, Diego
2014-06-01
The ability to perform isochronous movements while listening to a rhythmic auditory stimulus requires a flexible process that integrates timing information with movement. Here, we explored how non-temporal and temporal characteristics of an auditory stimulus (presence, interval occupancy, and tempo) affect motor performance. These characteristics were chosen on the basis of their ability to modulate the precision and accuracy of synchronized movements. Subjects participated in sessions in which they performed sets of repeated isochronous wrist flexion-extensions under various conditions chosen on the basis of the defined characteristics. Kinematic parameters were evaluated during each session, and temporal parameters were analyzed. In order to isolate the effects of the auditory stimulus, we minimized all other sensory information that could interfere with its perception or affect the performance of repeated isochronous movements. The present study shows that the distinct characteristics of an auditory stimulus significantly influence isochronous movements by altering their duration. The results provide evidence for an adaptable control of timing in the audio-motor coupling for isochronous movements. This flexibility makes plausible the use of different encoding strategies to adapt audio-motor coupling to specific tasks.
ERIC Educational Resources Information Center
Halverson, Hunter E.; Poremba, Amy; Freeman, John H.
2015-01-01
Associative learning tasks commonly involve an auditory stimulus, which must be projected through the auditory system to the sites of memory induction for learning to occur. The cochlear nucleus (CN) projection to the pontine nuclei has been posited as the necessary auditory pathway for cerebellar learning, including eyeblink conditioning.…
ERIC Educational Resources Information Center
Mullen, Stuart; Dixon, Mark R.; Belisle, Jordan; Stanley, Caleb
2017-01-01
The current study sought to evaluate the efficacy of a stimulus equivalence training procedure in establishing auditory-tactile-visual stimulus classes with 2 children with autism and developmental delays. Participants were exposed to vocal-tactile (A-B) and tactile-picture (B-C) conditional discrimination training and were tested for the…
Ansari, M S; Rangasayee, R; Ansari, M A H
2017-03-01
Poor auditory speech perception in older adults is attributable to neural desynchronisation caused by structural and degenerative changes in the ageing auditory pathways. The speech-evoked auditory brainstem response may be useful for detecting alterations that cause loss of speech discrimination. Therefore, this study aimed to compare the speech-evoked auditory brainstem response in adult and geriatric populations with normal hearing. The auditory brainstem responses to click sounds and to a 40 ms speech sound (the Hindi phoneme |da|) were compared in 25 young adults and 25 geriatric people with normal hearing. The latencies and amplitudes of transient peaks representing neural responses to the onset, offset and sustained portions of the speech stimulus in quiet and noisy conditions were recorded. The older group had significantly smaller amplitudes and longer latencies for the onset and offset responses to |da| in noisy conditions. Stimulus-to-response times were longer, and the spectral amplitude of the sustained portion of the stimulus was reduced. The overall stimulus level caused significant shifts in latency across the entire speech-evoked auditory brainstem response in the older group. The reduction in neural speech processing in older adults suggests diminished subcortical responsiveness to acoustically dynamic spectral cues. However, further investigation of how temporal cues are encoded at the brainstem level, and of their relationship to speech perception, is needed before this measure can become a routine tool for clinical decision-making.
Cecere, Roberto; Gross, Joachim; Thut, Gregor
2016-06-01
The ability to integrate auditory and visual information is critical for effective perception and interaction with the environment, and is thought to be abnormal in some clinical populations. Several studies have investigated the time window over which audiovisual events are integrated, also called the temporal binding window, and revealed asymmetries depending on the order of audiovisual input (i.e. the leading sense). When judging audiovisual simultaneity, the binding window appears narrower and non-malleable for auditory-leading stimulus pairs and wider and trainable for visual-leading pairs. Here we specifically examined the level of independence of binding mechanisms when auditory-before-visual vs. visual-before-auditory input is bound. Three groups of healthy participants practiced audiovisual simultaneity detection with feedback, selectively training on auditory-leading stimulus pairs (group 1), visual-leading stimulus pairs (group 2) or both (group 3). Subsequently, we tested for learning transfer (crossover) from trained stimulus pairs to non-trained pairs with opposite audiovisual input. Our data confirmed the known asymmetry in size and trainability for auditory-visual vs. visual-auditory binding windows. More importantly, practicing one type of audiovisual integration (e.g. auditory-visual) did not affect the other type (e.g. visual-auditory), even though the latter was trainable by within-condition practice. Together, these results provide crucial evidence that audiovisual temporal binding for auditory-leading vs. visual-leading stimulus pairs relies on independent mechanisms, possibly tapping into different circuits for audiovisual integration due to the engagement of different multisensory sampling mechanisms depending on the leading sense. Our results have implications for informing the study of multisensory interactions in healthy participants and clinical populations with dysfunctional multisensory integration. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Liao, Hsin-I; Yoneya, Makoto; Kidani, Shunsuke; Kashino, Makio; Furukawa, Shigeto
2016-01-01
A unique sound that deviates from a repetitive background sound induces signature neural responses, such as the mismatch negativity and novelty P3 response in electroencephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously, to prevent residual attention on to-be-ignored oddballs due to their concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent component of the PDR is independent of attention. PMID:26924959
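For illustration, pupillary dilation responses such as those above are commonly quantified by epoching the continuous pupil trace around each oddball onset, subtracting a pre-stimulus baseline, and averaging within condition. The sketch below shows this generic analysis on synthetic data; the sampling rate, window lengths, and variable names are assumptions, not the authors' pipeline.

```python
import numpy as np

def pdr_epochs(pupil, onsets, fs=60, pre=0.5, post=4.0):
    """Average baseline-corrected pupil response around oddball onsets."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for o in onsets:
        if o - n_pre < 0 or o + n_post > len(pupil):
            continue                              # skip epochs clipped at edges
        seg = pupil[o - n_pre:o + n_post]
        epochs.append(seg - seg[:n_pre].mean())   # subtract pre-stimulus baseline
    return np.mean(epochs, axis=0)                # mean PDR for this condition

fs = 60
pupil = 3.0 + 0.01 * np.random.randn(60 * fs)     # synthetic 60-s diameter trace (mm)
noise_onsets = [5 * fs, 20 * fs, 40 * fs]         # oddball onsets (sample indices)
mean_pdr = pdr_epochs(pupil, noise_onsets, fs)
```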
Stimulus change as a factor in response maintenance with free food available.
Osborne, S R; Shelby, M
1975-01-01
Rats bar pressed for food on a reinforcement schedule in which every response was reinforced, even though a dish of pellets was present. Initially, auditory and visual stimuli accompanied response-produced food presentation. With stimulus feedback as an added consequence of bar pressing, responding was maintained in the presence of free food; without stimulus feedback, responding decreased to a low level. Auditory feedback maintained slightly more responding than did visual feedback, and both together maintained more responding than did either separately. Almost no responding occurred when the only consequence of bar pressing was stimulus feedback. The data indicated conditioned and sensory reinforcement effects of response-produced stimulus feedback. PMID:1202121
Auditory proactive interference in monkeys: The role of stimulus set size and intertrial interval
Bigelow, James; Poremba, Amy
2013-01-01
We conducted two experiments to examine the influence of stimulus set size (the number of stimuli that are used throughout the session) and intertrial interval (ITI, the elapsed time between trials) on auditory short-term memory in monkeys. We used an auditory delayed matching-to-sample task wherein the animals had to indicate whether two sounds separated by a 5-s retention interval were the same (match trials) or different (non-match trials). In Experiment 1, we randomly assigned a stimulus set size of 2, 4, 8, 16, 32, 64, or 192 (trial-unique) for each session of 128 trials. Consistent with previous visual studies, overall accuracy was consistently lower when smaller stimulus set sizes were used. Further analyses revealed that these effects were primarily caused by an increase in incorrect "same" responses on non-match trials. In Experiment 2, we held the stimulus set size constant at four for each session and alternately set the ITI at 5, 10, or 20 s. Overall accuracy improved when the ITI was increased from 5 to 10 s, but was the same across the 10- and 20-s conditions. As in Experiment 1, the overall decrease in accuracy during the 5-s condition was caused by a greater number of false "match" responses on non-match trials. Taken together, Experiments 1 and 2 show that auditory short-term memory in monkeys is highly susceptible to proactive interference caused by stimulus repetition. Additional analyses from Experiment 1 suggest that monkeys may make same/different judgments based on a familiarity criterion that is adjusted by error-related feedback. PMID:23526232
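The closing interpretation, a familiarity criterion adjusted by feedback, can be made concrete with a toy model: with small stimulus sets, comparison sounds recur across trials and remain familiar, so a familiarity-based "same" rule produces false matches on non-match trials. The sketch below is our illustrative assumption, not the authors' model; the decay and criterion parameters are arbitrary (a faster decay would play the role of a longer ITI).

```python
import random

def run_session(set_size, n_trials=128, decay=0.5, criterion=0.5):
    """Proportion of false 'match' responses under a familiarity rule."""
    familiarity = {s: 0.0 for s in range(set_size)}
    false_matches = 0
    for _ in range(n_trials):
        sample = random.randrange(set_size)
        is_match = random.random() < 0.5
        if is_match:
            test = sample
        else:                                # non-match: a different test sound
            test = random.choice([s for s in range(set_size) if s != sample])
        familiarity[sample] = 1.0            # the sample sound is now maximally familiar
        said_match = familiarity[test] > criterion
        if not is_match and said_match:
            false_matches += 1               # proactive-interference error
        for s in familiarity:                # familiarity decays between trials
            familiarity[s] *= decay
    return false_matches / n_trials

for n in (2, 8, 192):                        # smaller sets yield more false matches
    print(n, round(run_session(n), 2))
```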
Compound Stimulus Extinction Reduces Spontaneous Recovery in Humans
ERIC Educational Resources Information Center
Coelho, Cesar A. O.; Dunsmoor, Joseph E.; Phelps, Elizabeth A.
2015-01-01
Fear-related behaviors are prone to relapse following extinction. We tested in humans a compound extinction design ("deepened extinction") shown in animal studies to reduce post-extinction fear recovery. Adult subjects underwent fear conditioning to a visual and an auditory conditioned stimulus (CSA and CSB, respectively) separately…
Further evidence of auditory extinction in aphasia.
Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim
2013-02-01
Preliminary research (Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Seventeen IWA (mean age = 53.19 years) and 17 neurologically intact controls (mean age = 55.18 years) participated. Auditory stimuli were spoken letters presented in a free-field listening environment. Stimuli were presented in single-stimulus stimulation (SSS) or double-simultaneous stimulation (DSS) trials across 5 conditions designed to determine whether extinction is related to binding, inefficient attention resource allocation, or overall deficits in attention. All participants completed all experimental conditions. Significant extinction was demonstrated only by IWA, and only when the two sounds were different, providing further evidence of auditory extinction. However, binding requirements did not appear to influence the IWA's performance. The results indicate that, for IWA, auditory extinction may not be attributable to a binding deficit or to inefficient attention resource allocation, because performance was equivalent across all 5 conditions. Rather, overall attentional resources may be influential. Future research in aphasia should explore the effect of stimulus presentation in addition to the continued study of attention treatment.
The Role of Temporal Disparity on Audiovisual Integration in Low-Vision Individuals.
Targher, Stefano; Micciolo, Rocco; Occelli, Valeria; Zampini, Massimiliano
2017-12-01
Recent findings have shown that sounds improve visual detection in low-vision individuals when the audiovisual pairs of stimuli are presented simultaneously and from the same spatial position. The present study investigates the temporal aspects of the audiovisual enhancement effect previously reported. Low-vision participants were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously with or before the visual stimulus (i.e., SOAs of 0, 100, 250, and 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant visual detection enhancement in all the conditions. In the second experiment, the sound was either synchronized with, or randomly preceded or lagged behind, the visual stimulus (i.e., SOAs of 0, ±250, and ±400 ms). The visual detection enhancement was reduced in magnitude and was limited to the synchronous condition and to the condition in which the sound was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study suggests that audiovisual interaction in low-vision individuals is highly modulated by top-down mechanisms.
Involvement of the human midbrain and thalamus in auditory deviance detection.
Cacciaglia, Raffaele; Escera, Carles; Slabu, Lavinia; Grimm, Sabine; Sanjuán, Ana; Ventura-Campos, Noelia; Ávila, César
2015-02-01
Prompt detection of unexpected changes in the sensory environment is critical for survival. In the auditory domain, the occurrence of a rare stimulus triggers a cascade of neurophysiological events spanning multiple time-scales. Besides the role of the mismatch negativity (MMN), whose cortical generators are located in supratemporal areas, cumulative evidence suggests that violations of auditory regularities can be detected earlier and lower in the auditory hierarchy. Recent human scalp recordings have shown signatures of auditory mismatch responses at shorter latencies than those of the MMN. Moreover, animal single-unit recordings have demonstrated that rare stimulus changes cause a release from stimulus-specific adaptation in neurons of the primary auditory cortex, the medial geniculate body (MGB), and the inferior colliculus (IC). Although these data suggest that change detection is a pervasive property of the auditory system that may arise upstream of cortical sites, direct evidence for the involvement of subcortical stages in the human auditory novelty system is lacking. Using event-related functional magnetic resonance imaging during a frequency oddball paradigm, we here report that auditory deviance detection occurs in the MGB and the IC of healthy human participants. By implementing a random condition controlling for neural refractoriness effects, we show that auditory change detection in these subcortical stations involves the encoding of statistical regularities from the acoustic input. These results provide the first direct evidence of the existence of multiple mismatch detectors nested at different levels along the human ascending auditory pathway. Copyright © 2015 Elsevier Ltd. All rights reserved.
Arrabito, G R; McFadden, S M; Crabtree, R B
2001-07-01
Auditory speech thresholds were measured in this study. Subjects were required to discriminate a female voice recording of three-digit numbers in the presence of diotic speech babble. The voice stimulus was spatialized at 11 static azimuth positions on the horizontal plane using three different head-related transfer functions (HRTFs) measured on individuals who did not participate in this study. The diotic presentation of the voice stimulus served as the control condition. The results showed that two of the HRTFs performed similarly and had significantly lower auditory speech thresholds than the third HRTF. All three HRTFs yielded significantly lower auditory speech thresholds compared with the diotic presentation of the voice stimulus, with the largest difference at 60 degrees azimuth. The practical implications of these results suggest that lower headphone levels of the communication system in military aircraft can be achieved without sacrificing intelligibility, thereby lessening the risk of hearing loss.
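For readers unfamiliar with HRTF-based spatialization, the generic rendering step is to convolve the mono voice stimulus with the left- and right-ear head-related impulse responses (HRIRs) measured at the desired azimuth; the diotic control simply feeds the same signal to both ears. The sketch below uses placeholder HRIRs and signal names; it is illustrative only, not the study's processing chain.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal at the azimuth encoded by an HRIR pair."""
    left = fftconvolve(mono, hrir_left)       # filter for the left ear
    right = fftconvolve(mono, hrir_right)     # filter for the right ear
    return np.stack([left, right])            # 2 x N binaural signal

fs = 44100
mono = np.random.randn(fs)                    # placeholder for the voice recording
hrir_left = np.zeros(256); hrir_left[0] = 1.0      # toy HRIRs: direct path plus a
hrir_right = np.zeros(256); hrir_right[10] = 0.7   # delayed, attenuated far ear
binaural = spatialize(mono, hrir_left, hrir_right)
diotic = np.stack([mono, mono])               # control: identical signal to both ears
```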
Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan
2015-01-01
In patients with visual hemifield defects, residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind, visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952
Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro
2016-10-01
Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond
2015-01-01
Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.
Smulders, Tom V; Jarvis, Erich D
2013-11-01
Repeated exposure to an auditory stimulus leads to habituation of the electrophysiological and immediate-early-gene (IEG) expression response in the auditory system. A novel auditory stimulus reinstates this response in a form of dishabituation. This has been interpreted as the start of new memory formation for this novel stimulus. Changes in the location of an otherwise identical auditory stimulus can also dishabituate the IEG expression response. This has been interpreted as an integration of stimulus identity and stimulus location into a single auditory object, encoded in the firing patterns of the auditory system. In this study, we further tested this hypothesis. Using chronic multi-electrode arrays to record multi-unit activity from the auditory system of awake and behaving zebra finches, we found that habituation occurs to repeated exposure to the same song and dishabituation with a novel song, similar to that described in head-fixed, restrained animals. A large proportion of recording sites also showed dishabituation when the same auditory stimulus was moved to a novel location. However, when the song was randomly moved among 8 interleaved locations, habituation occurred independently of the continuous changes in location. In contrast, when 8 different auditory stimuli were interleaved all from the same location, a separate habituation occurred to each stimulus. This result suggests that neuronal memories of the acoustic identity and spatial location are different, and that allocentric location of a stimulus is not encoded as part of the memory for an auditory object, while its acoustic properties are. We speculate that, instead, the dishabituation that occurs with a change from a stable location of a sound is due to the unexpectedness of the location change, and might be due to different underlying mechanisms than the dishabituation and separate habituations to different acoustic stimuli. Copyright © 2013 Elsevier Inc. All rights reserved.
Evaluative Conditioning Induces Changes in Sound Valence
Bolders, Anna C.; Band, Guido P. H.; Stallen, Pieter Jan
2012-01-01
Through evaluative conditioning (EC), a stimulus can acquire an affective value by pairing it with another affective stimulus. While many sounds we encounter daily have acquired an affective value over the course of life, EC has hardly been tested in the auditory domain. To gain a more complete understanding of affective processing in the auditory domain, we examined EC of sound. In Experiment 1 we investigated whether the affective evaluation of short environmental sounds can be changed using affective words as unconditioned stimuli (US). Congruency effects on an affective priming task for conditioned sounds demonstrated successful EC. Subjective ratings for sounds paired with negative words changed accordingly. In Experiment 2 we investigated whether extinction occurs, i.e., whether the acquired valence remains stable after repeated presentation of the conditioned sound without the US. The acquired affective value remained present, albeit weaker, even after 40 extinction trials. These results provide clear evidence for EC effects in the auditory domain. We argue that both associative and propositional processes are likely to underlie these effects. PMID:22514545
Hao, Qiao; Ora, Hiroki; Ogawa, Ken-Ichiro; Ogata, Taiki; Miyake, Yoshihiro
2016-09-13
The simultaneous perception of multimodal sensory information has a crucial role for effective reactions to the external environment. Voluntary movements are known to occasionally affect simultaneous perception of auditory and tactile stimuli presented to the moving body part. However, little is known about spatial limits on the effect of voluntary movements on simultaneous perception, especially when tactile stimuli are presented to a non-moving body part. We examined the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli presented to the non-moving body part. We considered the possible mechanism using a temporal order judgement task under three experimental conditions: voluntary movement, where participants voluntarily moved their right index finger and judged the temporal order of auditory and tactile stimuli presented to their non-moving left index finger; passive movement; and no movement. During voluntary movement, the auditory stimulus needed to be presented before the tactile stimulus so that they were perceived as occurring simultaneously. This subjective simultaneity differed significantly from the passive movement and no movement conditions. This finding indicates that the effect of voluntary movement on simultaneous perception of auditory and tactile stimuli extends to the non-moving body part.
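In temporal order judgement experiments of this kind, the shift in subjective simultaneity is typically quantified as the point of subjective simultaneity (PSS): the SOA at which the two orders are reported equally often, obtained by fitting a psychometric function to the response proportions. The sketch below fits a logistic function to synthetic data; the function form, SOA sign convention, and numbers are our assumptions, not the study's analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(soa, pss, slope):
    """P('tactile first') as a function of SOA (ms; positive = tactile leads)."""
    return 1.0 / (1.0 + np.exp(-(soa - pss) / slope))

soas = np.array([-150, -100, -50, 0, 50, 100, 150])            # ms
p_tactile_first = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.93, 0.97])

(pss, slope), _ = curve_fit(logistic, soas, p_tactile_first, p0=[0.0, 30.0])
print(f"PSS = {pss:.1f} ms")   # the SOA perceived as simultaneous
```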
Shrem, Talia; Murray, Micah M; Deouell, Leon Y
2017-11-01
Space is a dimension shared by different modalities, but at what stage spatial encoding is affected by multisensory processes is unclear. Early studies observed attenuation of N1/P2 auditory evoked responses following repetition of sounds from the same location. Here, we asked whether this effect is modulated by audiovisual interactions. In two experiments, using a repetition-suppression paradigm, we presented pairs of tones in free field, where the test stimulus was a tone presented at a fixed lateral location. Experiment 1 established a neural index of auditory spatial sensitivity, by comparing the degree of attenuation of the response to test stimuli when they were preceded by an adapter sound at the same location versus 30° or 60° away. We found that the degree of attenuation at the P2 latency was inversely related to the spatial distance between the test stimulus and the adapter stimulus. In Experiment 2, the adapter stimulus was a tone presented from the same location or a more medial location than the test stimulus. The adapter stimulus was accompanied by a simultaneous flash displayed orthogonally from one of the two locations. Sound-flash incongruence reduced accuracy in a same-different location discrimination task (i.e., the ventriloquism effect) and reduced the location-specific repetition-suppression at the P2 latency. Importantly, this multisensory effect included topographic modulations, indicative of changes in the relative contribution of underlying sources across conditions. Our findings suggest that the auditory response at the P2 latency is affected by spatially selective brain activity, which is affected crossmodally by visual information. © 2017 Society for Psychophysiological Research.
Rate change detection of frequency modulated signals: developmental trends.
Cohen-Mimran, Ravit; Sapir, Shimon
2011-08-26
The aim of this study was to examine developmental trends in rate change detection of auditory rhythmic signals (repetitive sinusoidally frequency-modulated tones). Two groups of children (9-10 years old and 11-12 years old) and one group of young adults performed a rate change detection (RCD) task using three types of stimuli. The rate of stimulus modulation was either constant (CR), raised by 1 Hz in the middle of the stimulus (RR1), or raised by 2 Hz in the middle of the stimulus (RR2). Performance on the RCD task significantly improved with age, and the different stimuli showed different developmental trajectories. When the RR2 stimulus was used, results showed adult-like performance by the age of 10 years, but when the RR1 stimulus was used, performance continued to improve beyond 12 years of age. Rate change detection of repetitive sinusoidally frequency-modulated tones thus shows protracted development beyond the age of 12 years. Given evidence for abnormal processing of auditory rhythmic signals in neurodevelopmental conditions, such as dyslexia, the present methodology might help delineate the nature of these conditions.
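To make the stimulus construction concrete, the sketch below generates a repetitive sinusoidally frequency-modulated (SFM) tone whose modulation rate steps up at the midpoint, mirroring the CR/RR1/RR2 logic; the carrier frequency, modulation depth, base rate, and duration are illustrative assumptions, not the paper's values.

```python
import numpy as np

def sfm_tone(dur=1.0, fs=44100, fc=1000.0, depth=50.0, rate=4.0, delta_rate=1.0):
    """SFM tone whose modulation rate rises by delta_rate Hz at the midpoint."""
    t = np.arange(int(dur * fs)) / fs
    rate_t = np.where(t < dur / 2, rate, rate + delta_rate)  # rate step at midpoint
    mod_phase = 2 * np.pi * np.cumsum(rate_t) / fs           # integrate rate for phase continuity
    inst_freq = fc + depth * np.sin(mod_phase)               # instantaneous frequency (Hz)
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs
    return np.sin(phase)

constant = sfm_tone(delta_rate=0.0)   # CR: constant modulation rate
rr1 = sfm_tone(delta_rate=1.0)        # RR1: +1 Hz at the midpoint
rr2 = sfm_tone(delta_rate=2.0)        # RR2: +2 Hz at the midpoint
```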
Cortical evoked responses associated with arousal from sleep.
Phillips, Derrick J; Schei, Jennifer L; Meighan, Peter C; Rector, David M
2011-01-01
To determine whether low-level intermittent auditory stimuli have the potential to disrupt sleep during 24-h recordings, we assessed arousal occurrence at varying stimulus intensities. Additionally, if stimulus-generated evoked response potential (ERP) components provide a metric of underlying cortical state, then a particular ERP structure may precede an arousal. Physiological electrodes measuring EEG, EKG, and EMG were implanted in 5 adult female Sprague-Dawley rats. We delivered auditory stimuli of varying intensities (50-75 dB(A) sound pressure level, SPL) at random intervals of 6-12 s over a 24-hour period. Recordings were divided into 2-s epochs and scored for sleep/wake state. Following each stimulus, we identified whether the animal stayed asleep or woke. We then sorted the stimuli depending on prior and post-stimulus state, and measured ERP components. Auditory stimuli did not produce a significant increase in the number of arousals compared to silent control periods. Overall, arousal from REM sleep occurred more often than from quiet sleep. ERPs preceding an arousal had a decreased mean area and a shorter N1 latency. Low-level auditory stimuli did not fragment animal sleep, since we observed no significant change in arousal occurrence. For arousals that occurred within 4 s of a stimulus, the ERP mean area and latency had features similar to those of ERPs generated during wake, indicating that the underlying cortical tissue state may contribute to the physiological conditions required for arousal.
fMRI during natural sleep as a method to study brain function during early childhood.
Redcay, Elizabeth; Kennedy, Daniel P; Courchesne, Eric
2007-12-01
Many techniques to study early functional brain development lack the whole-brain spatial resolution that is available with fMRI. We utilized a relatively novel method in which fMRI data were collected from children during natural sleep. Stimulus-evoked responses to auditory and visual stimuli as well as stimulus-independent functional networks were examined in typically developing 2-4-year-old children. Reliable fMRI data were collected from 13 children during presentation of auditory stimuli (tones, vocal sounds, and nonvocal sounds) in a block design. Twelve children were presented with visual flashing lights at 2.5 Hz. When analyses combined all three types of auditory stimulus conditions as compared to rest, activation included bilateral superior temporal gyri/sulci (STG/S) and right cerebellum. Direct comparisons between conditions revealed significantly greater responses to nonvocal sounds and tones than to vocal sounds in a number of brain regions including superior temporal gyrus/sulcus, medial frontal cortex and right lateral cerebellum. The response to visual stimuli was localized to occipital cortex. Furthermore, stimulus-independent functional connectivity MRI analyses (fcMRI) revealed functional connectivity between STG and other temporal regions (including contralateral STG) and medial and lateral prefrontal regions. Functional connectivity with an occipital seed was localized to occipital and parietal cortex. In sum, 2-4 year olds showed a differential fMRI response both between stimulus modalities and between stimuli in the auditory modality. Furthermore, superior temporal regions showed functional connectivity with numerous higher-order regions during sleep. We conclude that the use of sleep fMRI may be a valuable tool for examining functional brain organization in young children.
Integration of auditory and vibrotactile stimuli: Effects of frequency
Wilson, E. Courtenay; Reed, Charlotte M.; Braida, Louis D.
2010-01-01
Perceptual integration of vibrotactile and auditory sinusoidal tone pulses was studied in detection experiments as a function of stimulation frequency. Vibrotactile stimuli were delivered through a single channel vibrator to the left middle fingertip. Auditory stimuli were presented diotically through headphones in a background of 50 dB sound pressure level broadband noise. Detection performance for combined auditory-tactile presentations was measured using stimulus levels that yielded 63% to 77% correct unimodal performance. In Experiment 1, the vibrotactile stimulus was 250 Hz and the auditory stimulus varied between 125 and 2000 Hz. In Experiment 2, the auditory stimulus was 250 Hz and the tactile stimulus varied between 50 and 400 Hz. In Experiment 3, the auditory and tactile stimuli were always equal in frequency and ranged from 50 to 400 Hz. The highest rates of detection for the combined-modality stimulus were obtained when stimulating frequencies in the two modalities were equal or closely spaced (and within the Pacinian range). Combined-modality detection for closely spaced frequencies was generally consistent with an algebraic sum model of perceptual integration; wider-frequency spacings were generally better fit by a Pythagorean sum model. Thus, perceptual integration of auditory and tactile stimuli at near-threshold levels appears to depend both on absolute frequency and relative frequency of stimulation within each modality. PMID:21117754
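The two integration models named above have standard detection-theoretic forms. Writing $d'_A$ and $d'_T$ for the unimodal auditory and tactile sensitivities, a common formulation (the paper's exact parameterization may differ) is:

```latex
d'_{AT} = d'_{A} + d'_{T}
  \quad \text{(algebraic sum)}
\qquad
d'_{AT} = \sqrt{(d'_{A})^{2} + (d'_{T})^{2}}
  \quad \text{(Pythagorean sum)}
```

Since the algebraic sum always exceeds the Pythagorean sum for positive sensitivities, the finding that closely spaced frequencies follow the algebraic model implies stronger integration gains when the two modalities are stimulated at matched frequencies.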
Auditory frequency generalization in the goldfish (Carassius auratus)
Fay, Richard R.
1970-01-01
Auditory frequency generalization in the goldfish was studied at five points within the best hearing range through the use of classical respiratory conditioning. Each experimental group received single-stimulus conditioning sessions at one of five stimulus frequencies (100, 200, 400, 800, and 1600 Hz), and was subsequently tested for generalization at eight neighboring frequencies. All stimuli were presented 30 dB above absolute threshold. Significant generalization decrements were found for all subjects. For the subjects conditioned in the range between 100 and 800 Hz, a nearly complete failure to generalize was found at one octave above and below the training frequency. The subjects conditioned at 1600 Hz produced relatively flatter gradients between 900 and 2000 Hz. The widths of the generalization gradients, expressed in Hz, increased as a power function of frequency with a slope greater than one. PMID:16811481
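The final sentence can be written compactly: with $W(f)$ the generalization-gradient width in Hz at training frequency $f$, a power function with slope (exponent) greater than one means (here $k$ and $b$ are fitted constants; the paper reports only $b > 1$):

```latex
W(f) = k\, f^{\,b}, \quad b > 1
\qquad\Longleftrightarrow\qquad
\log W = \log k + b \log f
```

Because $W/f = k f^{\,b-1}$ grows with $f$ when $b > 1$, gradient width increases faster than proportionally with frequency, i.e., relative frequency tuning broadens toward the high end of the goldfish's hearing range.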
Seki, Yoshimasa; Okanoya, Kazuo
2008-02-01
Both visual and auditory information are important for songbirds, especially in developmental and sexual contexts. To investigate bimodal cognition in songbirds, the authors conducted audiovisual discrimination training in Bengalese finches. The authors used two types of stimulus: an "artificial stimulus," which is a combination of simple figures and sound, and a "biological stimulus," consisting of video images of singing males along with their songs. The authors found that while both sexes predominantly used visual cues in the discrimination tasks, males tended to be more dependent on auditory information for the biological stimulus. Female responses were always dependent on the visual stimulus for both stimulus types. Only males changed their discrimination strategy according to stimulus type. Although males used both visual and auditory cues for the biological stimulus, they responded to the artificial stimulus depending only on visual information, as the females did. These findings suggest a sex difference in innate auditory sensitivity. (c) 2008 APA.
Using complex auditory-visual samples to produce emergent relations in children with autism.
Groskreutz, Nicole C; Karsina, Allen; Miguel, Caio F; Groskreutz, Mark P
2010-03-01
Six participants with autism learned conditional relations between complex auditory-visual sample stimuli (dictated words and pictures) and simple visual comparisons (printed words) using matching-to-sample training procedures. Pre- and posttests examined potential stimulus control by each element of the complex sample when presented individually and emergence of additional conditional relations and oral labeling. Tests revealed class-consistent performance for all participants following training.
NASA Astrophysics Data System (ADS)
Bardy, Fabrice; Van Dun, Bram; Dillon, Harvey; Cowan, Robert
2014-08-01
Objective. To evaluate the viability of disentangling a series of overlapping ‘cortical auditory evoked potentials’ (CAEPs) elicited by different stimuli using least-squares (LS) deconvolution, and to assess the adaptation of CAEPs at different stimulus onset asynchronies (SOAs). Approach. Optimal aperiodic stimulus sequences were designed by controlling the condition number of the matrices associated with the LS deconvolution technique. First, theoretical considerations of LS deconvolution were assessed in simulations in which multiple artificial overlapping responses were recovered. Second, biological CAEPs were recorded in response to continuously repeated stimulus trains containing six different tone bursts with frequencies of 8, 4, 2, 1, 0.5 and 0.25 kHz, separated by SOAs jittered around 150 (120-185), 250 (220-285) and 650 (620-685) ms. The control condition had a fixed SOA of 1175 ms. In a second condition, using the same SOAs, trains of six stimuli were separated by a silence gap of 1600 ms. Twenty-four adults with normal hearing (<20 dB HL) were assessed. Main results. Results showed successful disentangling of a series of overlapping responses using LS deconvolution, on simulated waveforms as well as on real EEG data. The use of rapid presentation and LS deconvolution did not, however, allow the recovered CAEPs to have a higher signal-to-noise ratio than slowly presented stimuli. The LS deconvolution technique enables the analysis of a series of overlapping responses in EEG. Significance. LS deconvolution is a useful technique for the study of adaptation mechanisms of CAEPs for closely spaced stimuli whose characteristics change from stimulus to stimulus. High-rate presentation is necessary to develop an understanding of how the auditory system encodes natural speech or other intrinsically high-rate stimuli.
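The core of the LS deconvolution approach can be sketched briefly: a binary design matrix places one response-epoch block at each stimulus onset, and solving the least-squares problem recovers one waveform per stimulus type from the continuous EEG; jittering the SOAs keeps the matrix condition number low so the inversion is well posed. The code below is a minimal illustration on synthetic data, not the authors' implementation; all parameters are assumptions.

```python
import numpy as np

def ls_deconvolve(eeg, onsets_by_type, epoch_len):
    """Recover one overlapping response per stimulus type by least squares."""
    n, k = len(eeg), len(onsets_by_type)
    X = np.zeros((n, k * epoch_len))
    for j, onsets in enumerate(onsets_by_type):
        for o in onsets:
            span = min(epoch_len, n - o)          # clip epochs at the record edge
            X[o:o + span, j * epoch_len:j * epoch_len + span] += np.eye(span)
    beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)  # well posed if cond(X) is low
    return beta.reshape(k, epoch_len)               # one recovered waveform per type

# Synthetic check: two overlapping responses with jittered onsets
fs, epoch = 250, 125                                # 500-ms epochs at 250 Hz
rng = np.random.default_rng(0)
true = [np.sin(2 * np.pi * 3 * np.arange(epoch) / fs),
        np.cos(2 * np.pi * 5 * np.arange(epoch) / fs)]
eeg = np.zeros(5000)
onsets = [rng.choice(np.arange(0, 4800, 60), 40, replace=False) for _ in true]
for wave, ons in zip(true, onsets):
    for o in ons:
        eeg[o:o + epoch] += wave                    # overlapping evoked responses
recovered = ls_deconvolve(eeg + 0.1 * rng.standard_normal(5000), onsets, epoch)
```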
Theoretical Limitations on Functional Imaging Resolution in Auditory Cortex
Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.
2010-01-01
Functional imaging can reveal detailed organizational structure in cerebral cortical areas, but neuronal response features and local neural interconnectivity can influence the resulting images, possibly limiting the inferences that can be drawn about neural function. Discerning the fundamental principles of organizational structure in the auditory cortex of multiple species has been somewhat challenging historically both with functional imaging and with electrophysiology. A possible limitation affecting any methodology using pooled neuronal measures may be the relative distribution of response selectivity throughout the population of auditory cortex neurons. One neuronal response type inherited from the cochlea, for example, exhibits a receptive field that increases in size (i.e., decreases in selectivity) at higher stimulus intensities. Even though these neurons appear to represent a minority of auditory cortex neurons, they are likely to contribute disproportionately to the activity detected in functional images, especially if intense sounds are used for stimulation. To evaluate the potential influence of neuronal subpopulations upon functional images of primary auditory cortex, a model array representing cortical neurons was probed with virtual imaging experiments under various assumptions about the local circuit organization. As expected, different neuronal subpopulations were activated preferentially under different stimulus conditions. In fact, stimulus protocols that can preferentially excite selective neurons, resulting in a relatively sparse activation map, have the potential to improve the effective resolution of functional auditory cortical images. These experimental results also make predictions about auditory cortex organization that can be tested with refined functional imaging experiments. PMID:20079343
Discrepant visual speech facilitates covert selective listening in "cocktail party" conditions.
Williams, Jason A
2012-06-01
The presence of congruent visual speech information facilitates the identification of auditory speech, while the addition of incongruent visual speech information often impairs accuracy. This latter arrangement occurs naturally when one is being directly addressed in conversation but listens to a different speaker. Under these conditions, performance may diminish since: (a) one is bereft of the facilitative effects of the corresponding lip motion and (b) one becomes subject to visual distortion by incongruent visual speech; by contrast, speech intelligibility may be improved due to (c) bimodal localization of the central unattended stimulus. Participants were exposed to centrally presented visual and auditory speech while attending to a peripheral speech stream. In some trials, the lip movements of the central visual stimulus matched the unattended speech stream; in others, the lip movements matched the attended peripheral speech. Accuracy for the peripheral stimulus was nearly one standard deviation greater with incongruent visual information, compared to the congruent condition, which provided bimodal pattern-recognition cues. Likely, the bimodal localization of the central stimulus further differentiated the stimuli and thus facilitated intelligibility. Results are discussed with regard to similar findings in an investigation of the ventriloquist effect, and the relative strength of localization and speech cues in covert listening.
Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath
2018-05-24
Increasing visual perceptual load can reduce pre-attentive auditory cortical activity to sounds, a reflection of the limited and shared attentional resources for sensory processing across modalities. Here, we demonstrate that modulating visual perceptual load can impact the early sensory encoding of speech sounds, and that the impact of visual load is highly dependent on the predictability of the incoming speech stream. Participants (n = 20, 9 females) performed a visual search task of high (target similar to distractors) and low (target dissimilar to distractors) perceptual load, while early auditory electrophysiological responses were recorded to native speech sounds. Speech sounds were presented either in a 'repetitive context', or a less predictable 'variable context'. Independent of auditory stimulus context, pre-attentive auditory cortical activity was reduced during high visual load, relative to low visual load. We applied a data-driven machine learning approach to decode speech sounds from the early auditory electrophysiological responses. Decoding performance was found to be poorer under conditions of high (relative to low) visual load, when the incoming acoustic stream was predictable. When the auditory stimulus context was less predictable, decoding performance was substantially greater for the high (relative to low) visual load conditions. Our results provide support for shared attentional resources between visual and auditory modalities that substantially influence the early sensory encoding of speech signals in a context-dependent manner. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
Visual and auditory accessory stimulus offset and the Simon effect.
Nishimura, Akio; Yokosawa, Kazuhiko
2010-10-01
We investigated the effect on the right and left responses of the disappearance of a task-irrelevant stimulus located on the right or left side. Participants pressed a right or left response key on the basis of the color of a centrally located visual target. Visual (Experiment 1) or auditory (Experiment 2) task-irrelevant accessory stimuli appeared or disappeared at locations to the right or left of the central target. In Experiment 1, responses were faster when onset or offset of the visual accessory stimulus was spatially congruent with the response. In Experiment 2, responses were again faster when onset of the auditory accessory stimulus and the response were on the same side. However, responses were slightly slower when offset of the auditory accessory stimulus and the response were on the same side than when they were on opposite sides. These findings indicate that transient change information is crucial for a visual Simon effect, whereas sustained stimulation from an ongoing stimulus also contributes to an auditory Simon effect.
Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence
2017-09-25
At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Aghamolaei, Maryam; Zarnowiec, Katarzyna; Grimm, Sabine; Escera, Carles
2016-02-01
Auditory deviance detection based on regularity encoding appears as one of the basic functional properties of the auditory system. It has traditionally been assessed with the mismatch negativity (MMN) long-latency component of the auditory evoked potential (AEP). Recent studies have found earlier correlates of deviance detection based on regularity encoding. They occur in humans in the first 50 ms after sound onset, at the level of the middle-latency response of the AEP, and parallel findings of stimulus-specific adaptation observed in animal studies. However, the functional relationship between these different levels of regularity encoding and deviance detection along the auditory hierarchy has not yet been clarified. Here we addressed this issue by examining deviant-related responses at different levels of the auditory hierarchy to stimulus changes varying in their degree of deviation regarding the spatial location of a repeated standard stimulus. Auditory stimuli were presented randomly from five loudspeakers at azimuthal angles of 0°, 12°, 24°, 36° and 48° during oddball and reversed-oddball conditions. Middle-latency responses and MMN were measured. Our results revealed that middle-latency responses were sensitive to deviance but not the degree of deviation, whereas the MMN amplitude increased as a function of deviance magnitude. These findings indicated that acoustic regularity can be encoded at the level of the middle-latency response but that it takes a higher step in the auditory hierarchy for deviance magnitude to be encoded, thus providing a functional dissociation between regularity encoding and deviance detection along the auditory hierarchy. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Acquisition of Conditioning between Methamphetamine and Cues in Healthy Humans
Mayo, Leah M.; de Wit, Harriet
2016-01-01
Environmental stimuli repeatedly paired with drugs of abuse can elicit conditioned responses that are thought to promote future drug seeking. We recently showed that healthy volunteers acquired conditioned responses to auditory and visual stimuli after just two pairings with methamphetamine (MA, 20 mg, oral). This study extended these findings by systematically varying the number of drug-stimuli pairings. We expected that more pairings would result in stronger conditioning. Three groups of healthy adults were randomly assigned to receive 1, 2 or 4 pairings (Groups P1, P2 and P4, Ns = 13, 16, 16, respectively) of an auditory-visual stimulus with MA, and another stimulus with placebo (PBO). Drug-cue pairings were administered in an alternating, counterbalanced order, under double-blind conditions, during 4 hr sessions. MA produced prototypic subjective effects (mood, ratings of drug effects) and alterations in physiology (heart rate, blood pressure). Although subjects did not exhibit increased behavioral preference for, or emotional reactivity to, the MA-paired cue after conditioning, they did exhibit an increase in attentional bias (initial gaze) toward the drug-paired stimulus. Further, subjects who had four pairings reported “liking” the MA-paired cue more than the PBO cue after conditioning. Thus, the number of drug-stimulus pairings, varying from one to four, had only modest effects on the strength of conditioned responses. Further studies investigating the parameters under which drug conditioning occurs will help to identify risk factors for developing drug abuse, and provide new treatment strategies. PMID:27548681
Human auditory evoked potentials. I - Evaluation of components
NASA Technical Reports Server (NTRS)
Picton, T. W.; Hillyard, S. A.; Krausz, H. I.; Galambos, R.
1974-01-01
Fifteen distinct components can be identified in the scalp recorded average evoked potential to an abrupt auditory stimulus. The early components occurring in the first 8 msec after a stimulus represent the activation of the cochlea and the auditory nuclei of the brainstem. The middle latency components occurring between 8 and 50 msec after the stimulus probably represent activation of both auditory thalamus and cortex but can be seriously contaminated by concurrent scalp muscle reflex potentials. The longer latency components occurring between 50 and 300 msec after the stimulus are maximally recorded over fronto-central scalp regions and seem to represent widespread activation of frontal cortex.
Hecht, Marcus; Thiemann, Ulf; Freitag, Christine M; Bender, Stephan
2016-01-15
Post-perceptual cues can enhance visual short-term memory encoding even after the offset of the visual stimulus. However, both the mechanisms by which the sensory stimulus characteristics are buffered and the mechanisms by which post-perceptual selective attention enhances short-term memory encoding remain unclear. We analyzed late post-perceptual event-related potentials (ERPs) in visual change detection tasks (100 ms stimulus duration) by high-resolution ERP analysis to elucidate these mechanisms. The effects of early and late auditory post-cues (300 ms or 850 ms after visual stimulus onset) as well as the effects of a visual interference stimulus were examined in 27 healthy right-handed adults. Focusing attention with post-perceptual cues at both latencies significantly improved memory performance, i.e., sensory stimulus characteristics were available for up to 850 ms after stimulus presentation. Passive watching of the visual stimuli without auditory cue presentation evoked a slow negative wave (N700) over occipito-temporal visual areas. N700 was strongly reduced by a visual interference stimulus which impeded memory maintenance. In contrast, contralateral delay activity (CDA) still developed in this condition after the application of auditory post-cues and was thereby dissociated from N700. CDA and N700 seem to represent two different processes involved in short-term memory encoding. While N700 could reflect visual post-processing by automatic attention attraction, CDA may reflect the top-down process of searching selectively for the required information through post-perceptual attention. Copyright © 2015 Elsevier Inc. All rights reserved.
Tanaka, T; Kojima, S; Takeda, H; Ino, S; Ifukube, T
2001-12-15
The maintenance of postural balance depends on effective and efficient feedback from various sensory inputs. The importance of auditory inputs in this respect is not, as yet, fully understood. The purpose of this study was to analyse how moving auditory stimuli affect standing balance in healthy adults of different ages. The participants were 12 healthy volunteers, divided into two age categories: a young group (mean = 21.9 years) and an elderly group (mean = 68.9 years). Standing balance was evaluated with a force plate measuring body sway parameters, and toe pressure was measured using the F-scan Tactile Sensor System. The moving auditory stimulus was a white-noise sound with binaural cues generated by the Beachtron Affordable 3D Audio system, moving from right to left or vice versa at the height of the participant's ears. Participants were asked to stand on the force plate in the Romberg position for 20 s with eyes either open or closed, to assess the effect of visual input, and to maintain the standing position with and without the auditory stimulation heard through headphones. In addition, body sway variables were measured under four conditions to assess the effect of decreased tactile sensation of the toes and soles: standing on a normal surface (NS) or a soft surface (SS), with and without auditory stimulation. In total, participants stood under eight conditions. The results showed that lateral body sway was more influenced by the laterally moving auditory stimulation in the elderly group than in the young group. The analysis of toe pressure indicated that all participants used their left feet more than their right feet to maintain balance. Moreover, the elderly tended to stabilize mainly with their heels, whereas the young group stabilized mainly with their toes. The results suggest that the elderly may need more appropriate tactile and auditory feedback than the young to maintain and control their standing posture.
Verbal Recall of Auditory and Visual Signals by Normal and Deficient Reading Children.
ERIC Educational Resources Information Center
Levine, Maureen Julianne
Verbal recall of bisensory memory tasks was compared among 48 9- to 12-year-old boys in three groups: normal readers, primary deficit readers, and secondary deficit readers. Auditory and visual stimulus pairs composed of digits, which incorporated variations of intersensory and intrasensory conditions, were administered to Ss through a Bell and…
The effects of context and musical training on auditory temporal-interval discrimination.
Banai, Karen; Fisher, Shirley; Ganot, Ron
2012-02-01
Non-sensory factors such as stimulus context and musical experience are known to influence auditory frequency discrimination, but whether the context effect extends to auditory temporal processing remains unknown. Whether individual experiences such as musical training alter the context effect is also unknown. The goal of the present study was therefore to investigate the effects of stimulus context and musical experience on auditory temporal-interval discrimination. In experiment 1, temporal-interval discrimination was compared between fixed context conditions, in which a single base temporal interval was presented repeatedly across all trials, and variable context conditions, in which one of two base intervals was randomly presented on each trial. Discrimination was significantly better in the fixed than in the variable context conditions. In experiment 2, temporal discrimination thresholds of musicians and non-musicians were compared across 3 conditions: a fixed context condition in which the target interval was presented repeatedly across trials, and two variable context conditions differing in the frequencies of the tones marking the temporal intervals. Musicians outperformed non-musicians on all 3 conditions, but the effects of context were similar for the two groups. Overall, it appears that, like frequency discrimination, temporal-interval discrimination benefits from having a fixed reference. Musical experience, while improving performance, did not alter the context effect, suggesting that improved discrimination skills among musicians are probably not an outcome of more sensitive contextual facilitation or predictive coding mechanisms. Copyright © 2011 Elsevier B.V. All rights reserved.
The effect of changes in stimulus level on electrically evoked cortical auditory potentials.
Kim, Jae-Ryong; Brown, Carolyn J; Abbas, Paul J; Etler, Christine P; O'Brien, Sara
2009-06-01
The purpose of this study was to determine whether the electrically evoked acoustic change complex (EACC) could be used to assess sensitivity to changes in stimulus level in cochlear implant (CI) recipients and to investigate the relationship between EACC amplitude and rate of growth of the N1-P2 onset response with increases in stimulus level. Twelve postlingually deafened adults using Nucleus CI24 CIs participated in this study. Nucleus Implant Communicator (NIC) routines were used to bypass the speech processor and to control the stimulation of the implant directly. The stimulus consisted of an 800 msec burst of a 1000 pps biphasic pulse train. A change in the stimulus level was introduced 400 msec after stimulus onset. Band-pass filtering (1 to 100 Hz) was used to minimize stimulus artifact. Four to six recordings of 50 sweeps were obtained for each condition, and averaged responses were analyzed in the time domain using standard peak picking procedures. Cortical auditory change potentials were recorded from CI users in response to both increases and decreases in stimulation level. The amplitude of the EACC was found to be dependent on the magnitude of the stimulus change. Increases in stimulus level elicited more robust EACC responses than decreases in stimulus level. Also, EACC amplitudes were significantly correlated with the slope of the growth of the onset response. This work describes the effect of change in stimulus level on electrically evoked auditory change potentials in CI users. The amplitude of the EACC was found to be related both to the magnitude of the stimulus change introduced and to the rate of growth of the N1-P2 onset response. To the extent that the EACC reflects processing of stimulus change, it could potentially be a valuable tool for assessing neural processing of the kinds of stimulation patterns produced by a CI. Further studies are needed, however, to determine the relationships between the EACC and psychophysical measures of intensity discrimination in CI recipients.
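The recording pipeline described above (band-pass filtering between 1 and 100 Hz, averaging recordings of 50 sweeps, then picking peaks in the time domain) is a generic evoked-potential recipe that can be sketched in a few lines. The filter order, toy data, and search-window boundaries below are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                  # sampling rate in Hz (assumed)
b, a = butter(4, [1.0, 100.0], btype="bandpass", fs=fs)

sweeps = np.random.randn(50, int(1.2 * fs))  # 50 toy sweeps of 1200 ms
filtered = filtfilt(b, a, sweeps, axis=1)    # zero-phase band-pass per sweep
average = filtered.mean(axis=0)              # averaged evoked response

# Peak picking: the change response is sought after the level change,
# which the study introduced 400 ms into the 800 ms pulse train.
change_at = int(0.4 * fs)
window = average[change_at + 50 : change_at + 300]   # assumed search window
n1_latency_ms = (window.argmin() + change_at + 50) / fs * 1000.0
print("candidate N1 latency: %.0f ms" % n1_latency_ms)
```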
Auditory Cortex Is Required for Fear Potentiation of Gap Detection
Weible, Aldis P.; Liu, Christine; Niell, Cristopher M.
2014-01-01
Auditory cortex is necessary for the perceptual detection of brief gaps in noise, but is not necessary for many other auditory tasks such as frequency discrimination, prepulse inhibition of startle responses, or fear conditioning with pure tones. It remains unclear why auditory cortex should be necessary for some auditory tasks but not others. One possibility is that auditory cortex is causally involved in gap detection and other forms of temporal processing in order to associate meaning with temporally structured sounds. This predicts that auditory cortex should be necessary for associating meaning with gaps. To test this prediction, we developed a fear conditioning paradigm for mice based on gap detection. We found that pairing a 10 or 100 ms gap with an aversive stimulus caused a robust enhancement of gap detection measured 6 h later, which we refer to as fear potentiation of gap detection. Optogenetic suppression of auditory cortex during pairing abolished this fear potentiation, indicating that auditory cortex is critically involved in associating temporally structured sounds with emotionally salient events. PMID:25392510
ERIC Educational Resources Information Center
Singer, Bryan F.; Bryan, Myranda A.; Popov, Pavlo; Scarff, Raymond; Carter, Cody; Wright, Erin; Aragona, Brandon J.; Robinson, Terry E.
2016-01-01
The sensory properties of a reward-paired cue (a conditioned stimulus; CS) may impact the motivational value attributed to the cue, and in turn influence the form of the conditioned response (CR) that develops. A cue with multiple sensory qualities, such as a moving lever-CS, may activate numerous neural pathways that process auditory and visual…
Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.
Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M
2013-11-01
Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.
Deviance sensitivity in the auditory cortex of freely moving rats
2018-01-01
Deviance sensitivity is the specific response to a surprising stimulus, one that violates expectations set by the past stimulation stream. In audition, deviance sensitivity is often conflated with stimulus-specific adaptation (SSA), the decrease in responses to a common stimulus that only partially generalizes to other, rare stimuli. SSA is usually measured using oddball sequences, in which a common (standard) tone and a rare (deviant) tone are randomly intermixed. However, the larger responses to a tone when deviant do not necessarily represent deviance sensitivity. Deviance sensitivity is commonly tested using a control sequence in which many different tones serve as the standard, eliminating the expectations set by the standard ('deviant among many standards'). When the response to a tone when deviant (against a single standard) is larger than the responses to the same tone in the control sequence, it is concluded that true deviance sensitivity occurs. In primary auditory cortex of anesthetized rats, responses to deviants and to the same tones in the control condition are comparable in size. We recorded local field potentials and multiunit activity from the auditory cortex of awake, freely moving rats implanted with 32-channel drivable microelectrode arrays, using telemetry. We observed highly significant SSA in the awake state. Moreover, the responses to a tone when deviant were significantly larger than the responses to the same tone in the control condition. These results establish the presence of true deviance sensitivity in primary auditory cortex in awake rats. PMID:29874246
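Stimulus-specific adaptation in this literature is typically summarized by a normalized index contrasting responses to the same tone when it is deviant versus standard; values near 1 indicate strong adaptation and values near 0 indicate none. A minimal sketch with illustrative numbers (the study's own statistics are not reproduced here):

```python
def ssa_index(deviant_resp, standard_resp):
    """Common SSA index: positive when the tone evokes a larger
    response as a deviant than as a standard."""
    return (deviant_resp - standard_resp) / (deviant_resp + standard_resp)

# Toy firing rates (spikes/s) for one tone in its two roles:
print(ssa_index(deviant_resp=12.0, standard_resp=6.0))  # 0.33
```

The deviance-sensitivity test described above goes one step further: it compares the deviant response not to the standard response but to the response to the same tone in the many-standards control.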
Kinesthetic information facilitates saccades towards proprioceptive-tactile targets.
Voudouris, Dimitris; Goettker, Alexander; Mueller, Stefanie; Fiehler, Katja
2016-05-01
Saccades to somatosensory targets have longer latencies and are less accurate and precise than saccades to visual targets. Here we examined how different somatosensory information influences the planning and control of saccadic eye movements. Participants fixated a central cross and initiated a saccade as fast as possible in response to a tactile stimulus that was presented to either the index or the middle fingertip of their unseen left hand. In a static condition, the hand remained at a target location for the entire block of trials and the stimulus was presented at a fixed time after an auditory tone. Therefore, the target location was derived only from proprioceptive and tactile information. In a moving condition, the hand was first actively moved to the same target location and the stimulus was then presented immediately. Thus, in the moving condition additional kinesthetic information about the target location was available. We found shorter saccade latencies in the moving compared to the static condition, but no differences in accuracy or precision of saccadic endpoints. In a second experiment, we introduced variable delays after the auditory tone (static condition) or after the end of the hand movement (moving condition) in order to reduce the predictability of the moment of the stimulation and to allow more time to process the kinesthetic information. Again, we found shorter latencies in the moving compared to the static condition but no improvement in saccade accuracy or precision. In a third experiment, we showed that the shorter saccade latencies in the moving condition cannot be explained by the temporal proximity between the relevant event (auditory tone or end of hand movement) and the moment of the stimulation. Our findings suggest that kinesthetic information facilitates planning, but not control, of saccadic eye movements to proprioceptive-tactile targets. Copyright © 2016 Elsevier Ltd. All rights reserved.
Control of Auditory Attention in Children With Specific Language Impairment.
Victorino, Kristen R; Schwartz, Richard G
2015-08-01
Children with specific language impairment (SLI) appear to demonstrate deficits in attention and its control. Selective attention involves the cognitive control of attention directed toward a relevant stimulus and simultaneous inhibition of attention toward irrelevant stimuli. The current study examined attention control during a cross-modal word recognition task. Twenty participants with SLI (ages 9-12 years) and 20 age-matched peers with typical language development (TLD) listened to words through headphones and were instructed to attend to the words in 1 ear while ignoring the words in the other ear. They were simultaneously presented with pictures and asked to make a lexical decision about whether the pictures and auditory words were the same or different. Accuracy and reaction time were measured in 5 conditions, in which the stimulus in the unattended channel was manipulated. The groups performed with similar accuracy. Compared with their peers with TLD, children with SLI had slower reaction times overall and different within-group patterns of performance by condition. Children with TLD showed efficient inhibitory control in conditions that required active suppression of competing stimuli. Participants with SLI had difficulty exerting control over their auditory attention in all conditions, with particular difficulty inhibiting distractors of all types.
Applications of psychophysical models to the study of auditory development
NASA Astrophysics Data System (ADS)
Werner, Lynne
2003-04-01
Psychophysical models of listening, such as the energy detector model, have provided a framework from which to characterize the function of the mature auditory system and to explore how mature listeners make use of auditory information in sound identification. The application of such models to the study of auditory development has similarly provided insight into the characteristics of infant hearing and listening. Infants' intensity, frequency, temporal, and spatial resolution have been described at least grossly, and some contributions of immature listening strategies to infant hearing have been identified. Infants' psychoacoustic performance is typically poorer than adults' under identical stimulus conditions. However, the infant's performance typically varies with stimulus condition in a way that is qualitatively similar to the adult's performance. In some cases, though, infants perform in a qualitatively different way from adults in psychoacoustic experiments. Further, recent psychoacoustic studies of children suggest that the classic models of listening may be inadequate to describe children's performance. The characteristics of a model that might be appropriate for the immature listener will be outlined and the implications for models of mature listening will be discussed. [Work supported by NIH grants DC00396 and DC04661.]
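As a concrete instance of the kind of model mentioned above, a textbook energy detector integrates the squared stimulus over an observation window and reports 'signal present' when that energy exceeds a criterion. The sketch below is a toy illustration; the tone level, window, and criterion are assumptions, and a real application would set the criterion from the noise-alone distribution.

```python
import numpy as np

fs = 10000                                   # sampling rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)                # 100 ms observation window
noise = np.random.randn(t.size)              # unit-variance noise
signal = noise + 0.3 * np.sin(2 * np.pi * 1000 * t)   # tone in noise

def energy(x):
    return np.mean(x ** 2)                   # average energy in the window

criterion = 1.1                              # assumed decision criterion
print("respond 'signal present':", energy(signal) > criterion)
```

Developmental applications of such a model ask, for example, whether infants' poorer thresholds reflect a noisier decision variable, a longer effective window, or a more variable criterion.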
Preattentive binding of auditory and visual stimulus features.
Winkler, István; Czigler, István; Sussman, Elyse; Horváth, János; Balázs, Lászlo
2005-02-01
We investigated the role of attention in feature binding in the auditory and the visual modality. One auditory and one visual experiment used the mismatch negativity (MMN and vMMN, respectively) event-related potential to index the memory representations created from stimulus sequences, which were either task-relevant and, therefore, attended or task-irrelevant and ignored. In the latter case, the primary task was a continuous, demanding within-modality task. The test sequences were composed of two frequently occurring stimuli, which differed from each other in two stimulus features (standard stimuli), and two infrequently occurring stimuli (deviants), which combined one feature from one standard stimulus with the other feature of the other standard stimulus. Deviant stimuli elicited MMN responses of similar parameters across the different attentional conditions. These results suggest that the memory representations involved in the MMN deviance detection response encoded the frequently occurring feature combinations whether or not the test sequences were attended. A possible alternative to the memory-based interpretation of the visual results, the elicitation of the McCollough color-contingent aftereffect, was ruled out by the results of our third experiment. The current results are compared with those supporting the attentive feature integration theory. We conclude that (1) with comparable stimulus paradigms, similar results have been obtained in the two modalities; (2) there exist preattentive processes of feature binding; however, (3) conjoining features within rich arrays of objects under time pressure and/or long-term retention of the feature-conjoined memory representations may require attentive processes.
NASA Astrophysics Data System (ADS)
Bechara, Antoine; Tranel, Daniel; Damasio, Hanna; Adolphs, Ralph; Rockland, Charles; Damasio, Antonio R.
1995-08-01
A patient with selective bilateral damage to the amygdala did not acquire conditioned autonomic responses to visual or auditory stimuli but did acquire the declarative facts about which visual or auditory stimuli were paired with the unconditioned stimulus. By contrast, a patient with selective bilateral damage to the hippocampus failed to acquire the facts but did acquire the conditioning. Finally, a patient with bilateral damage to both amygdala and hippocampal formation acquired neither the conditioning nor the facts. These findings demonstrate a double dissociation of conditioning and declarative knowledge relative to the human amygdala and hippocampus.
Analysis of MEG Auditory 40-Hz Response by Event-Related Coherence
NASA Astrophysics Data System (ADS)
Tanaka, Keita; Kawakatsu, Masaki; Yunokuchi, Kazutomo
We examined the event-related coherence of magnetoencephalography (the auditory 40-Hz response) while subjects were presented with click acoustic stimuli at a repetition rate of 40 Hz under 'Attend' and 'Reading' conditions. MEG signals were recorded from 5 healthy males using a whole-head SQUID system. Event-related coherence was used to measure the short-lived synchronization that occurs in response to a stimulus. The results showed that the peak coherence of the auditory 40-Hz response between the right and left temporal regions was significantly larger when subjects paid attention to the stimuli ('Attend' condition) than when they ignored them ('Reading' condition). Moreover, the latency of the coherence peak in the auditory 40-Hz response was significantly shorter when subjects paid attention to the stimuli ('Attend' condition). These results suggest that phase synchronization between the right and left temporal regions in the auditory 40-Hz response correlates closely with selective attention.
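For readers unfamiliar with the measure, magnitude-squared coherence quantifies the frequency-specific linear coupling between two channels; the event-related coherence used here is its trial-based, time-resolved variant. Below is a minimal stationary sketch with synthetic data, so the channel construction, sampling rate, and parameters are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import coherence

fs = 500.0                                 # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
drive = np.sin(2 * np.pi * 40 * t)         # shared 40-Hz component (toy)
left = drive + 0.5 * np.random.randn(t.size)
right = drive + 0.5 * np.random.randn(t.size)

f, Cxy = coherence(left, right, fs=fs, nperseg=256)
print("coherence near 40 Hz: %.2f" % Cxy[np.argmin(np.abs(f - 40))])
```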
ERIC Educational Resources Information Center
Panlilio, Leigh V.; Weiss, Stanley J.
2005-01-01
In earlier studies with rats, the effectiveness of the auditory element of a tone--light discriminative stimulus was enhanced when the conditioned incentive value of the compound was negative rather than positive. The present experiment systematically replicated these results in pigeons trained to press a treadle in the presence of a tone--light…
Ranaweera, Ruwan D; Kwon, Minseok; Hu, Shuowen; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M
2016-01-01
This study investigated the hemisphere-specific effects of the temporal pattern of imaging related acoustic noise on auditory cortex activation. Hemodynamic responses (HDRs) to five temporal patterns of imaging noise corresponding to noise generated by unique combinations of imaging volume and effective repetition time (TR), were obtained using a stroboscopic event-related paradigm with extra-long (≥27.5 s) TR to minimize inter-acquisition effects. In addition to confirmation that fMRI responses in auditory cortex do not behave in a linear manner, temporal patterns of imaging noise were found to modulate both the shape and spatial extent of hemodynamic responses, with classically non-auditory areas exhibiting responses to longer duration noise conditions. Hemispheric analysis revealed the right primary auditory cortex to be more sensitive than the left to the presence of imaging related acoustic noise. Right primary auditory cortex responses were significantly larger during all the conditions. This asymmetry of response to imaging related acoustic noise could lead to different baseline activation levels during acquisition schemes using short TR, inducing an observed asymmetry in the responses to an intended acoustic stimulus through limitations of dynamic range, rather than due to differences in neuronal processing of the stimulus. These results emphasize the importance of accounting for the temporal pattern of the acoustic noise when comparing findings across different fMRI studies, especially those involving acoustic stimulation. Copyright © 2015 Elsevier B.V. All rights reserved.
Phonological Processing in Human Auditory Cortical Fields
Woods, David L.; Herron, Timothy J.; Cate, Anthony D.; Kang, Xiaojian; Yund, E. W.
2011-01-01
We used population-based cortical-surface analysis of functional magnetic resonance imaging data to characterize the processing of consonant–vowel–consonant syllables (CVCs) and spectrally matched amplitude-modulated noise bursts (AMNBs) in human auditory cortex as subjects attended to auditory or visual stimuli in an intermodal selective attention paradigm. Average auditory cortical field (ACF) locations were defined using tonotopic mapping in a previous study. Activations in auditory cortex were defined by two stimulus-preference gradients: (1) medial belt ACFs preferred AMNBs and lateral belt and parabelt fields preferred CVCs. This preference extended into core ACFs, with medial regions of primary auditory cortex (A1) and the rostral field preferring AMNBs and lateral regions preferring CVCs. (2) Anterior ACFs showed smaller activations but more clearly defined stimulus preferences than did posterior ACFs. Stimulus preference gradients were unaffected by auditory attention, suggesting that ACF preferences reflect the automatic processing of different spectrotemporal sound features. PMID:21541252
Sela, Itamar
2014-01-01
Visual and auditory temporal processing and crossmodal integration are crucial factors in the word decoding process. The speed of processing (SOP) gap (asynchrony) between these two modalities, which has been suggested as related to the dyslexia phenomenon, is the focus of the current study. Nineteen dyslexic and 17 non-impaired university adult readers were given stimuli in a reaction time (RT) procedure in which participants were asked to identify whether the stimulus was visual only, auditory only, or crossmodally integrated. Accuracy, RT, and event-related potential (ERP) measures were obtained for each of the three conditions. An algorithm was developed to measure the contribution of the temporal SOP of each modality to crossmodal integration in each group of participants. Results obtained using this model for the analysis of the current study's data indicated that, in the crossmodal integration condition, the presence of the auditory modality in the pre-response time frame (between 170 and 240 ms after stimulus presentation) increased processing speed in the visual modality among the non-impaired readers, but not in the dyslexic group. The differences between the temporal SOP of the modalities among the dyslexics and the non-impaired readers give additional support to the theory that an asynchrony between the visual and auditory modalities is a cause of dyslexia. PMID:24959125
Selective impairment of auditory selective attention under concurrent cognitive load.
Dittrich, Kerstin; Stahl, Christoph
2012-06-01
Load theory predicts that concurrent cognitive load impairs selective attention. For visual stimuli, it has been shown that this impairment can be selective: Distraction was specifically increased when the stimulus material used in the cognitive load task matches that of the selective attention task. Here, we report four experiments that demonstrate such selective load effects for auditory selective attention. The effect of two different cognitive load tasks on two different auditory Stroop tasks was examined, and selective load effects were observed: Interference in a nonverbal-auditory Stroop task was increased under concurrent nonverbal-auditory cognitive load (compared with a no-load condition), but not under concurrent verbal-auditory cognitive load. By contrast, interference in a verbal-auditory Stroop task was increased under concurrent verbal-auditory cognitive load but not under nonverbal-auditory cognitive load. This double-dissociation pattern suggests the existence of different and separable verbal and nonverbal processing resources in the auditory domain.
Order of stimulus presentation influences children's acquisition in receptive identification tasks.
Petursdottir, Anna Ingeborg; Aguilar, Gabriella
2016-03-01
Receptive identification is usually taught in matching-to-sample format, which entails the presentation of an auditory sample stimulus and several visual comparison stimuli in each trial. Conflicting recommendations exist regarding the order of stimulus presentation in matching-to-sample trials. The purpose of this study was to compare acquisition in receptive identification tasks under 2 conditions: when the sample was presented before the comparisons (sample first) and when the comparisons were presented before the sample (comparison first). Participants included 4 typically developing kindergarten-age boys. Stimuli, which included birds and flags, were presented on a computer screen. Acquisition in the 2 conditions was compared in an adapted alternating-treatments design combined with a multiple baseline design across stimulus sets. All participants took fewer trials to meet the mastery criterion in the sample-first condition than in the comparison-first condition. © 2015 Society for the Experimental Analysis of Behavior.
Analysis of the influence of memory content of auditory stimuli on the memory content of EEG signal.
Namazi, Hamidreza; Khosrowabadi, Reza; Hussaini, Jamal; Habibi, Shaghayegh; Farid, Ali Akhavan; Kulish, Vladimir V
2016-08-30
One of the major challenges in brain research is to relate the structural features of an auditory stimulus to structural features of the Electroencephalogram (EEG) signal. Memory content is an important feature of the EEG signal and, accordingly, of the brain. On the other hand, memory content can also be defined for the stimulus itself. Despite the work done on the effects of stimuli on the human EEG and brain memory, no study has examined the memory content of the stimulus, or the relationship that may exist between the memory content of the stimulus and the memory content of the EEG signal. For this purpose, we consider the Hurst exponent as the measure of memory. This study reveals the plasticity of human EEG signals in relation to auditory stimuli. For the first time, we demonstrate that the memory content of an EEG signal shifts towards the memory content of the auditory stimulus used. The results of this analysis showed that an auditory stimulus with higher memory content causes a larger increment in the memory content of the EEG signal. To verify this result, we use approximate entropy as an indicator of time-series randomness. The capability observed in this research can be further investigated in relation to human memory.
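The Hurst exponent used above as the memory measure can be estimated in several ways; the classic one is rescaled-range (R/S) analysis, where H is the slope of log(R/S) against log(window size). The abstract does not state which estimator the authors used, so the sketch below, including its window sizes and toy input, is an illustrative assumption.

```python
import numpy as np

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    """Rescaled-range estimate of the Hurst exponent of series x."""
    rs = []
    for w in window_sizes:
        segs = x[: len(x) // w * w].reshape(-1, w)          # non-overlapping windows
        dev = np.cumsum(segs - segs.mean(axis=1, keepdims=True), axis=1)
        R = dev.max(axis=1) - dev.min(axis=1)               # range of cumulative deviations
        S = segs.std(axis=1)                                # per-window standard deviation
        rs.append(np.mean(R / S))
    H, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)  # slope = H
    return H

print("H for white noise (expect ~0.5):", hurst_rs(np.random.randn(4096)))
```

H above 0.5 indicates persistent, long-memory fluctuations, which is the sense in which a signal, or a stimulus, 'has' memory content here.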
[Functional anatomy of the cochlear nerve and the central auditory system].
Simon, E; Perrot, X; Mertens, P
2009-04-01
The auditory pathways are a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve) that is not limited to simply transmitting information but achieves a true integration of the sound stimulus at different levels, by analyzing its three fundamental elements: frequency (pitch), intensity, and spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically according to the characteristic frequency of the sound signal they transmit (tonotopy). Coding of the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited around the one tuned to the stimulus frequency). Because of binaural hearing, commissural pathways at each level of the auditory system, and integration of the phase shift and intensity difference between signals coming from the two ears, spatial localization of the sound source is possible. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea and adjust the peripheral hearing organ to external sound conditions, thus protecting the auditory system or increasing sensitivity through the attention given to the signal.
Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.
Vercillo, Tiziana; Gori, Monica
2015-01-01
The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli (conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis) were presented sequentially. In the primary task, participants had to judge, in a space bisection task, the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task, they had to report occasional changes in the duration of the second auditory stimulus. In the non-attentional condition, participants performed only the primary task (space bisection). Our results showed enhanced auditory precision (and auditory weights) in the attentional auditory condition with respect to the non-attentional control condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
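The MLE model named above has a standard closed form: each cue is weighted by its inverse variance (its reliability), and the combined estimate is more precise than either cue alone. A minimal worked example with illustrative numbers, not the study's data; on this account, attention to sound would act by shrinking the effective auditory variance:

```python
# Unimodal localization noise (standard deviations, arbitrary units):
sigma_a, sigma_t = 8.0, 4.0                  # auditory, tactile (assumed)

# Reliability-based weights:
w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_t**2)
w_t = 1 - w_a

# Unimodal position estimates and the optimal bimodal combination:
s_a, s_t = 10.0, 6.0                         # assumed unimodal estimates
s_hat = w_a * s_a + w_t * s_t
sigma_hat = (1 / (1 / sigma_a**2 + 1 / sigma_t**2)) ** 0.5

print(f"weights: {w_a:.2f}/{w_t:.2f}, estimate: {s_hat:.2f}, SD: {sigma_hat:.2f}")
# sigma_hat is always below min(sigma_a, sigma_t): integration helps.
```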
Inter-subject synchronization of brain responses during natural music listening
Abrams, Daniel A.; Ryali, Srikanth; Chen, Tianwen; Chordia, Parag; Khouzam, Amirah; Levitin, Daniel J.; Menon, Vinod
2015-01-01
Music is a cultural universal and a rich part of the human experience. However, little is known about common brain systems that support the processing and integration of extended, naturalistic ‘real-world’ music stimuli. We examined this question by presenting extended excerpts of symphonic music, and two pseudo-musical stimuli in which the temporal and spectral structure of the Natural Music condition was disrupted, to non-musician participants undergoing functional brain imaging and analysing synchronized spatiotemporal activity patterns between listeners. We found that music synchronizes brain responses across listeners in bilateral auditory midbrain and thalamus, primary auditory and auditory association cortex, right-lateralized structures in frontal and parietal cortex, and motor planning regions of the brain. These effects were greater for natural music compared to the pseudo-musical control conditions. Remarkably, inter-subject synchronization in the inferior colliculus and medial geniculate nucleus was also greater for the natural music condition, indicating that synchronization at these early stages of auditory processing is not simply driven by spectro-temporal features of the stimulus. Increased synchronization during music listening was also evident in a right-hemisphere fronto-parietal attention network and bilateral cortical regions involved in motor planning. While these brain structures have previously been implicated in various aspects of musical processing, our results are the first to show that these regions track structural elements of a musical stimulus over extended time periods lasting minutes. Our results show that a hierarchical distributed network is synchronized between individuals during the processing of extended musical sequences, and provide new insight into the temporal integration of complex and biologically salient auditory sequences. PMID:23578016
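Inter-subject synchronization of the kind reported here is commonly quantified by a leave-one-out correlation: each listener's regional time course is correlated with the average time course of all other listeners. Whether this exact form matches the study's analysis is an assumption; the sketch below runs on toy data for a single region.

```python
import numpy as np

data = np.random.randn(10, 300)   # 10 subjects x 300 time points (toy)

isc = []
for s in range(data.shape[0]):
    others = np.delete(data, s, axis=0).mean(axis=0)   # average of the rest
    isc.append(np.corrcoef(data[s], others)[0, 1])     # leave-one-out r

print("mean inter-subject correlation:", np.mean(isc))
```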
Interactions across Multiple Stimulus Dimensions in Primary Auditory Cortex.
Sloas, David C; Zhuo, Ran; Xue, Hongbo; Chambers, Anna R; Kolaczyk, Eric; Polley, Daniel B; Sen, Kamal
2016-01-01
Although sensory cortex is thought to be important for the perception of complex objects, its specific role in representing complex stimuli remains unknown. Complex objects are rich in information along multiple stimulus dimensions. The position of cortex in the sensory hierarchy suggests that cortical neurons may integrate across these dimensions to form a more gestalt representation of auditory objects. Yet, studies of cortical neurons typically explore single or few dimensions due to the difficulty of determining optimal stimuli in a high dimensional stimulus space. Evolutionary algorithms (EAs) provide a potentially powerful approach for exploring multidimensional stimulus spaces based on real-time spike feedback, but two important issues arise in their application. First, it is unclear whether it is necessary to characterize cortical responses to multidimensional stimuli or whether it suffices to characterize cortical responses to a single dimension at a time. Second, quantitative methods for analyzing complex multidimensional data from an EA are lacking. Here, we apply a statistical method for nonlinear regression, the generalized additive model (GAM), to address these issues. The GAM quantitatively describes the dependence between neural response and all stimulus dimensions. We find that auditory cortical neurons in mice are sensitive to interactions across dimensions. These interactions are diverse across the population, indicating significant integration across stimulus dimensions in auditory cortex. This result strongly motivates using multidimensional stimuli in auditory cortex. Together, the EA and the GAM provide a novel quantitative paradigm for investigating neural coding of complex multidimensional stimuli in auditory and other sensory cortices.
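A generalized additive model expresses the response as a sum of smooth functions of each stimulus dimension, plus optional interaction terms whose fitted weights test for integration across dimensions. The sketch below stands in for a full GAM with a toy polynomial-basis additive model fit by least squares; the data, basis choice, and variable names are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
freq = rng.uniform(-1, 1, 500)    # stimulus dimension 1 (e.g., frequency)
level = rng.uniform(-1, 1, 500)   # stimulus dimension 2 (e.g., level)
rate = (np.sin(2 * freq) + 0.5 * level**2        # additive parts
        + 0.8 * freq * level                     # interaction across dimensions
        + 0.2 * rng.standard_normal(500))        # response noise

def smooth_basis(x):
    """Low-order polynomial stand-in for a GAM's spline basis."""
    return np.column_stack([x, x**2, x**3])

X = np.column_stack([np.ones_like(freq),
                     smooth_basis(freq),
                     smooth_basis(level),
                     (freq * level)[:, None]])    # interaction regressor
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
print("fitted interaction weight:", coef[-1])    # ~0.8 when integration exists
```

A significantly nonzero interaction weight is the additive-model analogue of the cross-dimension sensitivity the study reports in auditory cortical neurons.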
Auditory Gap-in-Noise Detection Behavior in Ferrets and Humans
2015-01-01
The precise encoding of temporal features of auditory stimuli by the mammalian auditory system is critical to the perception of biologically important sounds, including vocalizations, speech, and music. In this study, auditory gap-detection behavior was evaluated in adult pigmented ferrets (Mustela putorius furo) using bandpassed stimuli designed to widely sample the ferret’s behavioral and physiological audiogram. Animals were tested under positive operant conditioning, with psychometric functions constructed in response to gap-in-noise lengths ranging from 3 to 270 ms. Using a modified version of this gap-detection task, with the same stimulus frequency parameters, we also tested a cohort of normal-hearing human subjects. Gap-detection thresholds were computed from psychometric curves transformed according to signal detection theory, revealing that for both ferrets and humans, detection sensitivity was worse for silent gaps embedded within low-frequency noise compared with high-frequency or broadband stimuli. Additional psychometric function analysis of ferret behavior indicated effects of stimulus spectral content on aspects of behavioral performance related to decision-making processes, with animals displaying improved sensitivity for broadband gap-in-noise detection. Reaction times derived from unconditioned head-orienting data and the time from stimulus onset to reward spout activation varied with the stimulus frequency content and gap length, as well as the approach-to-target choice and reward location. The present study represents a comprehensive evaluation of gap-detection behavior in ferrets, while similarities in performance with our human subjects confirm the use of the ferret as an appropriate model of temporal processing. PMID:26052794
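As a concrete illustration of turning psychometric data into a gap threshold under signal detection theory, the sketch below converts hit rates to d' against a false-alarm rate, fits a cumulative-Gaussian curve over log gap length, and reads off the gap where d' crosses 1. All data points, the false-alarm rate, and the threshold criterion are illustrative assumptions, not the study's values.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

gaps = np.array([3, 5, 10, 20, 50, 100, 270], dtype=float)    # gap lengths (ms)
p_hit = np.array([0.10, 0.20, 0.45, 0.70, 0.90, 0.95, 0.98])  # toy detection rates
p_fa = 0.08                                                   # assumed false-alarm rate

dprime = norm.ppf(np.clip(p_hit, 0.01, 0.99)) - norm.ppf(p_fa)

def psychometric(log_gap, mu, sigma, dmax):
    """Cumulative-Gaussian d' curve saturating at dmax."""
    return dmax * norm.cdf(log_gap, mu, sigma)

(mu, sigma, dmax), _ = curve_fit(psychometric, np.log(gaps), dprime,
                                 p0=[2.5, 1.0, 3.0])

log_thr = mu + sigma * norm.ppf(1.0 / dmax)   # gap where d' = 1
print("gap-detection threshold: %.1f ms" % np.exp(log_thr))
```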
Where the imaginal appears real: A positron emission tomography study of auditory hallucinations
Szechtman, Henry; Woody, Erik; Bowers, Kenneth S.; Nahmias, Claude
1998-01-01
An auditory hallucination shares with imaginal hearing the property of being self-generated and with real hearing the experience of the stimulus being an external one. To investigate where in the brain an auditory event is “tagged” as originating from the external world, we used positron emission tomography to identify neural sites activated by both real hearing and hallucinations but not by imaginal hearing. Regional cerebral blood flow was measured during hearing, imagining, and hallucinating in eight healthy, highly hypnotizable male subjects prescreened for their ability to hallucinate under hypnosis (hallucinators). Control subjects were six highly hypnotizable male volunteers who lacked the ability to hallucinate under hypnosis (nonhallucinators). A region in the right anterior cingulate (Brodmann area 32) was activated in the group of hallucinators when they heard an auditory stimulus and when they hallucinated hearing it but not when they merely imagined hearing it. The same experimental conditions did not yield this activation in the group of nonhallucinators. Inappropriate activation of the right anterior cingulate may lead self-generated thoughts to be experienced as external, producing spontaneous auditory hallucinations. PMID:9465124
Deconvolution of magnetic acoustic change complex (mACC).
Bardy, Fabrice; McMahon, Catherine M; Yau, Shu Hui; Johnson, Blake W
2014-11-01
The aim of this study was to design a novel experimental approach to investigate the morphological characteristics of auditory cortical responses elicited by rapidly changing synthesized speech sounds. Six sound-evoked magnetoencephalographic (MEG) responses were measured to a synthesized train of speech sounds using the vowels /e/ and /u/ in 17 normal-hearing young adults. Responses were measured to: (i) the onset of the speech train; (ii) an F0 increment; (iii) an F0 decrement; (iv) an F2 decrement; (v) an F2 increment; and (vi) the offset of the speech train, using short (jittered around 135 ms) and long (1500 ms) stimulus onset asynchronies (SOAs). The least squares (LS) deconvolution technique was used to disentangle the overlapping MEG responses in the short SOA condition only. Comparison between the morphology of the recovered cortical responses in the short and long SOA conditions showed high similarity, suggesting that the LS deconvolution technique was successful in disentangling the MEG waveforms. Waveform latencies and amplitudes were different for the two SOA conditions and were influenced by the spectro-temporal properties of the sound sequence. The magnetic acoustic change complex (mACC) for the short SOA condition showed significantly lower amplitudes and shorter latencies compared to the long SOA condition. The F0 transition showed a larger reduction in amplitude from long to short SOA compared to the F2 transition. Lateralization of the cortical responses was observed under some stimulus conditions and appeared to be associated with the spectro-temporal properties of the acoustic stimulus. The LS deconvolution technique provides a new tool to study the properties of the auditory cortical response to rapidly changing sound stimuli. The presence of the cortical auditory evoked responses for rapid transitions of synthesized speech stimuli suggests that the temporal code is preserved at the level of the auditory cortex. Further, the reduced amplitudes and shorter latencies might reflect intrinsic response properties of cortical neurons to rapidly presented sounds. This is the first demonstration of the separation of overlapping cortical responses to rapidly changing speech sounds and offers a potential new biomarker of discrimination of rapid sound transitions. Crown Copyright © 2014. Published by Elsevier Ireland Ltd. All rights reserved.
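A minimal sketch of the least-squares deconvolution idea: responses to rapidly presented events overlap in time, but solving for all response kernels jointly in one linear system disentangles them. The single simulated channel, sampling rate, kernel length, and event counts below are illustrative assumptions, not values from the study.

```python
# Least-squares deconvolution in miniature: build a design matrix from
# event onsets and solve for the per-event-type kernels jointly.
import numpy as np

fs = 250                                   # Hz, hypothetical sampling rate
klen = int(0.5 * fs)                       # recover 500 ms per event type
n = 20 * fs
rng = np.random.default_rng(1)

# Jittered, short-SOA onsets for two event types (e.g. F0 vs F2 transitions).
onsets = {0: np.sort(rng.choice(n - klen, 40, replace=False)),
          1: np.sort(rng.choice(n - klen, 40, replace=False))}

X = np.zeros((n, 2 * klen))
for etype, ons in onsets.items():
    for t0 in ons:
        for k in range(klen):
            X[t0 + k, etype * klen + k] += 1.0

t = np.arange(klen) / fs
true = [np.sin(2 * np.pi * 4 * t) * np.exp(-t / 0.10),
        -np.sin(2 * np.pi * 6 * t) * np.exp(-t / 0.08)]
y = X @ np.concatenate(true) + 0.5 * rng.standard_normal(n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
kernels = beta.reshape(2, klen)            # recovered response per event type
print("recovery r:", [round(np.corrcoef(k, tr)[0, 1], 3)
                      for k, tr in zip(kernels, true)])
```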
Galbraith, G C; Jhaveri, S P; Kuo, J
1997-01-01
Speech-evoked brainstem frequency-following responses (FFRs) were recorded to repeated presentations of the same stimulus word. Word repetition results in illusory verbal transformations (VTs) in which word perceptions can differ markedly from the actual stimulus. Previous behavioral studies support an explanation of VTs based on changes in arousal or attention. Horizontal and vertical dipole FFRs were recorded to assess responses with putative origins in the auditory nerve and central brainstem, respectively. FFRs were recorded from 18 subjects when they correctly heard the stimulus and when they reported VTs. Although horizontal and vertical dipole FFRs showed different frequency response patterns, dipoles did not differentiate between perceptual conditions. However, when subjects were divided into low- and high-VT groups (based on percentage of VT trials), a significant Condition x Group interaction resulted. This interaction showed the largest difference in FFR amplitudes during VT trials, with the low-VT group showing increased amplitudes, and the high-VT group showing decreased amplitudes, relative to trials in which the stimulus was correctly perceived. These results demonstrate measurable subject differences in the early processing of complex signals, due to possible effects of attention on the brainstem FFR. The present research shows that the FFR is useful in understanding human language as it is coded and processed in the brainstem auditory pathway.
Talker and lexical effects on audiovisual word recognition by adults with cochlear implants.
Kaiser, Adam R; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B
2003-04-01
The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, R(a), was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
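For reference, the visual enhancement measure described verbally above, gain relative to the headroom above auditory-only performance, is conventionally computed as Ra = (AV - A)/(1 - A) in this literature. Treat the formula below as the field's standard definition rather than a quotation from this paper.

```python
# Sketch: visual enhancement Ra, i.e. audiovisual gain relative to the
# headroom left by auditory-only performance. Conventional formula, assumed
# rather than quoted from the paper; inputs are proportions correct.
def visual_enhancement(av: float, a: float) -> float:
    """av, a: proportions correct in audiovisual and auditory-only formats."""
    if a >= 1.0:
        return 0.0  # no headroom left to improve on
    return (av - a) / (1.0 - a)

print(visual_enhancement(av=0.80, a=0.50))  # 0.6: 60% of possible gain realized
```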
Investigation of the neurological correlates of information reception
NASA Technical Reports Server (NTRS)
1971-01-01
Animals trained to respond to a given pattern of electrical stimuli applied to pathways or centers of the auditory nervous system respond also to certain patterns of acoustic stimuli without additional training. Likewise, only certain electrical stimuli elicit responses after training to a given acoustic signal. In most instances, if a response has been learned to a given electrical stimulus applied to one center of the auditory nervous system, the same stimulus applied to another auditory center at either a higher or lower level will also elicit the response. This kind of transfer of response does not take place when a stimulus is applied through electrodes implanted in neural tissue outside of the auditory system.
Stekelenburg, Jeroen J; Keetels, Mirjam
2016-05-01
The Colavita effect refers to the phenomenon that when confronted with an audiovisual stimulus, observers more often report having perceived the visual than the auditory component. The Colavita effect depends on low-level stimulus factors such as spatial and temporal proximity between the unimodal signals. Here, we examined whether the Colavita effect is modulated by synesthetic congruency between visual size and auditory pitch. If the Colavita effect depends on synesthetic congruency, we expect a larger Colavita effect for synesthetically congruent size/pitch combinations (large visual stimulus/low-pitched tone; small visual stimulus/high-pitched tone) than for synesthetically incongruent ones (large visual stimulus/high-pitched tone; small visual stimulus/low-pitched tone). Participants had to identify stimulus type (visual, auditory or audiovisual). The study replicated the Colavita effect: participants reported the visual component of the audiovisual stimuli more often than the auditory component. Synesthetic congruency had, however, no effect on the magnitude of the Colavita effect. EEG recordings to congruent and incongruent audiovisual pairings showed a late frontal congruency effect at 400-550 ms and an occipitoparietal effect at 690-800 ms, with neural sources in the anterior cingulate and premotor cortex for the 400- to 550-ms window, and in the premotor cortex, inferior parietal lobule and the posterior middle temporal gyrus for the 690- to 800-ms window. The electrophysiological data show that synesthetic congruency was probably detected in a processing stage subsequent to the Colavita effect. We conclude that, in a modality detection task, the Colavita effect can be modulated by low-level structural factors but not by higher-order associations between auditory and visual inputs.
Daikhin, Luba; Ahissar, Merav
2015-07-01
Introducing simple stimulus regularities facilitates learning of both simple and complex tasks. This facilitation may reflect an implicit change in the strategies used to solve the task when successful predictions regarding incoming stimuli can be formed. We studied the modifications in brain activity associated with fast perceptual learning based on regularity detection. We administered a two-tone frequency discrimination task and measured brain activation (fMRI) under two conditions: with and without a repeated reference tone. Although participants could not explicitly tell the difference between these two conditions, the introduced regularity affected both performance and the pattern of brain activation. The "No-Reference" condition induced a larger activation in frontoparietal areas known to be part of the working memory network. However, only the condition with a reference showed fast learning, which was accompanied by a reduction of activity in two regions: the left intraparietal area, involved in stimulus retention, and the posterior superior-temporal area, involved in representing auditory regularities. We propose that this joint reduction reflects a reduction in the need for online storage of the compared tones. We further suggest that this change reflects an implicit strategic shift "backwards" from reliance mainly on working memory networks in the "No-Reference" condition to increased reliance on detected regularities stored in high-level auditory networks.
Effects of auditory and visual modalities in recall of words.
Gadzella, B M; Whitehead, D A
1975-02-01
Ten experimental conditions were used to study the effects of auditory and visual (printed words, uncolored and colored pictures) modalities and their various combinations with college students. A recall paradigm was employed in which subjects responded in a written test. Analysis of the data showed that the auditory modality was superior to the visual (picture) modalities but not significantly different from the visual (printed words) modality. Among the visual modalities, printed words were superior to colored pictures. Generally, recall for conditions with multiple modes of stimulus representation was significantly higher than for conditions with single modes. Multiple modalities, consisting of two or three modes, did not differ significantly from each other. It was concluded that any two modalities of the stimuli presented simultaneously were just as effective as three in recall of stimulus words.
What is extinguished in auditory extinction?
Deouell, L Y; Soroker, N
2000-09-11
Extinction is a frequent sequel of brain damage in which patients disregard (extinguish) a contralesional stimulus and report only the more ipsilesional stimulus of a pair presented simultaneously. We investigated the possibility of a dissociation between the detection and the identification of extinguished phonemes. Fourteen right-hemisphere-damaged patients with severe auditory extinction were examined using a paradigm that separated the localization of stimuli from the identification of their phonetic content. Patients reported the identity of left-sided phonemes while extinguishing them at the same time, in the traditional sense of the term. This dissociation suggests that auditory extinction is more about acknowledging the existence of a stimulus in the contralesional hemispace than about the actual processing of the stimulus.
Contingent negative variation (CNV) associated with sensorimotor timing error correction.
Jang, Joonyong; Jones, Myles; Milne, Elizabeth; Wilson, Daniel; Lee, Kwang-Hyuk
2016-02-15
Detection and subsequent correction of sensorimotor timing errors are fundamental to adaptive behavior. Using scalp-recorded event-related potentials (ERPs), we sought to identify ERP components that are predictive of error correction performance during rhythmic movements. Healthy right-handed participants were asked to synchronize their finger taps to a regular tone sequence (every 600 ms) while EEG data were continuously recorded. Data from 15 participants were analyzed. Occasional irregularities were built into stimulus presentation timing: 90 ms before (advances: negative shift) or after (delays: positive shift) the expected time point. A tapping condition alternated with a listening condition in which an identical stimulus sequence was presented but participants did not tap. Behavioral error correction was observed immediately following a shift, with a degree of overcorrection for positive shifts. Our stimulus-locked ERP analysis revealed (1) increased auditory N1 amplitude for the positive shift condition and decreased auditory N1 modulation for the negative shift condition, and (2) a second enhanced negativity (N2) in the tapping positive condition compared with the tapping negative condition. In response-locked epochs, we observed a CNV (contingent negative variation)-like negativity with earlier latency in the tapping negative condition than in the tapping positive condition. This negativity peaked around the onset of the subsequent tap; the earlier the peak, the better the error correction for negative shifts, and the later the peak, the better the error correction for positive shifts. The CNV-like negativity was thus associated with error correction performance during sensorimotor synchronization. Auditory N1 and N2 were differentially involved in negative versus positive shifts, but we did not find evidence for their involvement in behavioral error correction. Overall, our study provides a basis from which further research on the role of the CNV in perceptual and motor timing can be developed. Copyright © 2015 Elsevier Inc. All rights reserved.
Brain bases for auditory stimulus-driven figure-ground segregation.
Teki, Sundeep; Chait, Maria; Kumar, Sukhbinder; von Kriegstein, Katharina; Griffiths, Timothy D
2011-01-05
Auditory figure-ground segregation, listeners' ability to selectively hear out a sound of interest from a background of competing sounds, is a fundamental aspect of scene analysis. In contrast to the disordered acoustic environment we experience during everyday listening, most studies of auditory segregation have used relatively simple, temporally regular signals. We developed a new figure-ground stimulus that incorporates stochastic variation of the figure and background that captures the rich spectrotemporal complexity of natural acoustic scenes. Figure and background signals overlap in spectrotemporal space, but vary in the statistics of fluctuation, such that the only way to extract the figure is by integrating the patterns over time and frequency. Our behavioral results demonstrate that human listeners are remarkably sensitive to the appearance of such figures. In a functional magnetic resonance imaging experiment, aimed at investigating preattentive, stimulus-driven, auditory segregation mechanisms, naive subjects listened to these stimuli while performing an irrelevant task. Results demonstrate significant activations in the intraparietal sulcus (IPS) and the superior temporal sulcus related to bottom-up, stimulus-driven figure-ground decomposition. We did not observe any significant activation in the primary auditory cortex. Our results support a role for automatic, bottom-up mechanisms in the IPS in mediating stimulus-driven, auditory figure-ground segregation, which is consistent with accumulating evidence implicating the IPS in structuring sensory input and perceptual organization.
ERIC Educational Resources Information Center
Osnes, Berge; Hugdahl, Kenneth; Hjelmervik, Helene; Specht, Karsten
2012-01-01
In studies on auditory speech perception, participants are often asked to perform active tasks, e.g. decide whether the perceived sound is a speech sound or not. However, information about the stimulus, inherent in such tasks, may induce expectations that cause altered activations not only in the auditory cortex, but also in frontal areas such as…
Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields.
Yildiz, Izzet B; Mesgarani, Nima; Deneve, Sophie
2016-12-07
A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by "explaining away," a divisive competition between alternative interpretations of the auditory scene. Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets. Whereas the linear encoding model poorly reflects the selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data. Copyright © 2016 Yildiz et al.
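The Bayesian decoding model is too involved for a short example, but the linear encoding baseline it is compared against can be sketched as a lagged ridge regression from spectrogram features to neural response (an STRF-style model). Everything below (dimensions, regularization, data) is simulated for illustration and is not the authors' pipeline.

```python
# Sketch: a linear (STRF-style) encoding baseline, predicting neural response
# from lagged spectrogram features via ridge regression. Simulated data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
n_t, n_freq, n_lag = 3000, 16, 20
spec = rng.standard_normal((n_t, n_freq))           # stand-in spectrogram

# Lagged design matrix: response at t may depend on spectrogram at t-1..t-n_lag.
X = np.zeros((n_t, n_freq * n_lag))
for lag in range(1, n_lag + 1):
    X[lag:, (lag - 1) * n_freq:lag * n_freq] = spec[:-lag]

true_strf = rng.standard_normal(n_freq * n_lag) * (rng.random(n_freq * n_lag) < 0.05)
y = X @ true_strf + rng.standard_normal(n_t)        # simulated neural response

model = Ridge(alpha=10.0).fit(X[:2000], y[:2000])
print("held-out R^2:", model.score(X[2000:], y[2000:]))
```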
The Effect of Lexical Content on Dichotic Speech Recognition in Older Adults.
Findlen, Ursula M; Roup, Christina M
2016-01-01
Age-related auditory processing deficits have been shown to negatively affect speech recognition for older adult listeners. In contrast, older adults gain benefit from their ability to make use of semantic and lexical content of the speech signal (i.e., top-down processing), particularly in complex listening situations. Assessment of auditory processing abilities among aging adults should take into consideration semantic and lexical content of the speech signal. The purpose of this study was to examine the effects of lexical and attentional factors on dichotic speech recognition performance characteristics for older adult listeners. A repeated measures design was used to examine differences in dichotic word recognition as a function of lexical and attentional factors. Thirty-five older adults (61-85 yr) with sensorineural hearing loss participated in this study. Dichotic speech recognition was evaluated using consonant-vowel-consonant (CVC) word and nonsense CVC syllable stimuli administered in the free recall, directed recall right, and directed recall left response conditions. Dichotic speech recognition performance for nonsense CVC syllables was significantly poorer than performance for CVC words. Dichotic recognition performance varied across response condition for both stimulus types, which is consistent with previous studies on dichotic speech recognition. Inspection of individual results revealed that five listeners demonstrated an auditory-based left ear deficit for one or both stimulus types. Lexical content of stimulus materials affects performance characteristics for dichotic speech recognition tasks in the older adult population. The use of nonsense CVC syllable material may provide a way to assess dichotic speech recognition performance while potentially lessening the effects of lexical content on performance (i.e., measuring bottom-up auditory function both with and without top-down processing). American Academy of Audiology.
Cortical evoked potentials to an auditory illusion: binaural beats.
Pratt, Hillel; Starr, Arnold; Michalewski, Henry J; Dimitrijevic, Andrew; Bleich, Naomi; Mittelman, Nomi
2009-08-01
To define brain activity corresponding to an auditory illusion of 3 and 6 Hz binaural beats in 250 Hz or 1000 Hz base frequencies, and compare it to the sound onset response. Event-Related Potentials (ERPs) were recorded in response to unmodulated tones of 250 or 1000 Hz to one ear and 3 or 6 Hz higher to the other, creating an illusion of amplitude modulations (beats) of 3 Hz and 6 Hz, in base frequencies of 250 Hz and 1000 Hz. Tones were 2000 ms in duration and presented with approximately 1 s intervals. Latency, amplitude and source current density estimates of ERP components to tone onset and subsequent beats-evoked oscillations were determined and compared across beat frequencies with both base frequencies. All stimuli evoked tone-onset P50, N100 and P200 components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude with the low base frequency and with the low beat frequency. Sources of the beats-evoked oscillations located mostly to left lateral and inferior temporal lobe areas in all stimulus conditions. Onset-evoked components were not different across stimulus conditions; P50 had significantly different sources than the beats-evoked oscillations; and N100 and P200 sources located to the same temporal lobe regions as beats-evoked oscillations, but were bilateral and also included frontal and parietal contributions. Neural activity with slightly different volley frequencies from the left and right ear converges and interacts in the central auditory brainstem pathways to generate beats of neural activity that modulate activities in the left temporal lobe, giving rise to the illusion of binaural beats. Cortical potentials recorded to binaural beats are distinct from onset responses. Brain activity corresponding to an auditory illusion of low-frequency beats can be recorded from the scalp.
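A short sketch of how the dichotic stimulus described above can be generated: an unmodulated tone in one ear and a tone offset by the beat frequency in the other, so the beat exists only as a central interaction. The output filename and amplitude scaling are arbitrary choices.

```python
# Sketch: generating one binaural-beat condition (250 Hz base, 3 Hz beat).
# Neither ear's waveform contains a 3 Hz modulation; the beat arises centrally.
import numpy as np
from scipy.io import wavfile

fs, dur = 44100, 2.0                       # 2000 ms tones, as in the study
t = np.arange(int(fs * dur)) / fs
base, beat = 250.0, 3.0                    # one of the four conditions
left = np.sin(2 * np.pi * base * t)
right = np.sin(2 * np.pi * (base + beat) * t)

stereo = np.stack([left, right], axis=1)
wavfile.write("binaural_beat_250_3.wav", fs,
              (stereo * 32767 * 0.3).astype(np.int16))
```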
2010-01-01
Background We investigated the processing of task-irrelevant and unexpected novel sounds and its modulation by working-memory load in children aged 9-10 and in adults. Environmental sounds (novels) were embedded amongst frequently presented standard sounds in an auditory-visual distraction paradigm. Each sound was followed by a visual target. In two conditions, participants evaluated the position of a visual stimulus (0-back, low load) or compared the position of the current stimulus with the one two trials before (2-back, high load). Processing of novel sounds was measured with reaction times, hit rates, and the auditory event-related brain potentials (ERPs) Mismatch Negativity (MMN), P3a and Reorienting Negativity (RON), as well as the visual P3b. Results In both memory-load conditions, novels impaired task performance in adults whereas they improved performance in children. Auditory ERPs reflected age-related differences in the time window of the MMN: children showed a positive ERP deflection to novels, whereas adults lacked an MMN. The attention switch towards the task-irrelevant novel (reflected by the P3a) was comparable between the age groups. Adults showed more efficient reallocation of attention (reflected by the RON) under the load condition than children. Finally, the P3b elicited by the visual target stimuli was reduced in both age groups when the preceding sound was a novel. Conclusion Our results give new insights into the development of novelty processing as they (1) reveal that task-irrelevant novel sounds can have contrary effects on performance in a visual primary task in children and adults, (2) show a positive ERP deflection to novels rather than an MMN in children, and (3) reveal effects of auditory novels on visual target processing. PMID:20929535
Inter-subject synchronization of brain responses during natural music listening.
Abrams, Daniel A; Ryali, Srikanth; Chen, Tianwen; Chordia, Parag; Khouzam, Amirah; Levitin, Daniel J; Menon, Vinod
2013-05-01
Music is a cultural universal and a rich part of the human experience. However, little is known about common brain systems that support the processing and integration of extended, naturalistic 'real-world' music stimuli. We examined this question by presenting extended excerpts of symphonic music, and two pseudomusical stimuli in which the temporal and spectral structure of the Natural Music condition were disrupted, to non-musician participants undergoing functional brain imaging and analysing synchronized spatiotemporal activity patterns between listeners. We found that music synchronizes brain responses across listeners in bilateral auditory midbrain and thalamus, primary auditory and auditory association cortex, right-lateralized structures in frontal and parietal cortex, and motor planning regions of the brain. These effects were greater for natural music compared to the pseudo-musical control conditions. Remarkably, inter-subject synchronization in the inferior colliculus and medial geniculate nucleus was also greater for the natural music condition, indicating that synchronization at these early stages of auditory processing is not simply driven by spectro-temporal features of the stimulus. Increased synchronization during music listening was also evident in a right-hemisphere fronto-parietal attention network and bilateral cortical regions involved in motor planning. While these brain structures have previously been implicated in various aspects of musical processing, our results are the first to show that these regions track structural elements of a musical stimulus over extended time periods lasting minutes. Our results show that a hierarchical distributed network is synchronized between individuals during the processing of extended musical sequences, and provide new insight into the temporal integration of complex and biologically salient auditory sequences. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Birkett, Emma E; Talcott, Joel B
2012-01-01
Motor timing tasks have been employed in studies of neurodevelopmental disorders such as developmental dyslexia and ADHD, where they provide an index of temporal processing ability. Investigations of these disorders have used different stimulus parameters within the motor timing tasks that are likely to affect performance measures. Here we assessed the effect of auditory and visual pacing stimuli on synchronised motor timing performance and its relationship with cognitive and behavioural predictors that are commonly used in the diagnosis of these highly prevalent developmental disorders. Twenty-one children (mean age 9.6 years) completed a finger tapping task in two stimulus conditions, together with additional psychometric measures. As anticipated, synchronisation to the beat (ISI 329 ms) was less accurate in the visually paced condition. Decomposition of timing variance indicated that this effect resulted from differences in the way that visual and auditory paced tasks are processed by central timekeeping and associated peripheral implementation systems. The ability to utilise an efficient processing strategy on the visual task correlated with both reading and sustained attention skills. Dissociations between these patterns of relationship across task modality suggest that not all timing tasks are equivalent.
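The abstract does not name its variance decomposition method; the classical option for separating central timekeeper variance from peripheral motor implementation variance is the Wing-Kristofferson model, sketched below on simulated inter-tap intervals. Treat this as one plausible reading, not the authors' exact procedure.

```python
# Sketch: Wing & Kristofferson (1973) decomposition of tapping variance into
# central timekeeper and peripheral motor components, from the lag-1
# autocovariance of inter-tap intervals. Simulated data.
import numpy as np

def wing_kristofferson(intervals):
    """intervals: inter-tap intervals in ms. Returns (clock_var, motor_var)."""
    x = np.asarray(intervals, dtype=float)
    d = x - x.mean()
    gamma0 = np.mean(d * d)                  # total variance
    gamma1 = np.mean(d[:-1] * d[1:])         # lag-1 autocovariance
    motor_var = max(-gamma1, 0.0)            # the model predicts gamma1 <= 0
    clock_var = max(gamma0 - 2.0 * motor_var, 0.0)
    return clock_var, motor_var

# Example: simulated intervals around a 329 ms inter-stimulus interval,
# generated in the model's own form I_n = C_n + M_{n+1} - M_n.
rng = np.random.default_rng(3)
clock = rng.normal(329, 12, 200)
motor = rng.normal(0, 6, 201)
intervals = clock + np.diff(motor)
print(wing_kristofferson(intervals))
```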
Schlund, M W
2000-10-01
Bedside hearing screenings are routinely conducted by speech and language pathologists for brain-injury survivors during rehabilitation. Cognitive deficits resulting from brain injury, however, may interfere with obtaining estimates of auditory thresholds. Poor comprehension or attention deficits often compromise patients' abilities to follow procedural instructions. This article describes the effects of jointly applying behavioral and psychophysical methods to improve two severely brain-injured survivors' attending to, and reporting of, auditory test stimuli. Treatment consisted of stimulus control training that involved differentially reinforcing responding in the presence and absence of an auditory test tone. Subsequent hearing screenings were conducted with novel auditory test tones and a common titration procedure. Results showed that prior stimulus control training improved attending and reporting such that hearing screenings could be conducted and estimates of auditory thresholds obtained.
Neural Dynamics of Audiovisual Synchrony and Asynchrony Perception in 6-Month-Old Infants
Kopp, Franziska; Dietrich, Claudia
2013-01-01
Young infants are sensitive to multisensory temporal synchrony relations, but the neural dynamics of temporal interactions between vision and audition in infancy are not well understood. We investigated audiovisual synchrony and asynchrony perception in 6-month-old infants using event-related brain potentials (ERP). In a prior behavioral experiment (n = 45), infants were habituated to an audiovisual synchronous stimulus and tested for recovery of interest by presenting an asynchronous test stimulus in which the visual stream was delayed with respect to the auditory stream by 400 ms. Infants who behaviorally discriminated the change in temporal alignment were included in further analyses. In the EEG experiment (final sample: n = 15), synchronous and asynchronous stimuli (visual delay of 400 ms) were presented in random order. Results show latency shifts in the auditory ERP components N1 and P2 as well as the infant ERP component Nc. Latencies in the asynchronous condition were significantly longer than in the synchronous condition. After video onset but preceding the auditory onset, amplitude modulations propagating from posterior to anterior sites and related to the Pb component of infants’ ERP were observed. Results suggest temporal interactions between the two modalities. Specifically, they point to the significance of anticipatory visual motion for auditory processing, and indicate young infants’ predictive capacities for audiovisual temporal synchrony relations. PMID:23346071
Missing a trick: Auditory load modulates conscious awareness in audition.
Fairnie, Jake; Moore, Brian C J; Remington, Anna
2016-07-01
In the visual domain there is considerable evidence supporting the Load Theory of Attention and Cognitive Control, which holds that conscious perception of background stimuli depends on the level of perceptual load involved in a primary task. However, literature on the applicability of this theory to the auditory domain is limited and, in many cases, inconsistent. Here we present a novel "auditory search task" that allows systematic investigation of the impact of auditory load on auditory conscious perception. An array of simultaneous, spatially separated sounds was presented to participants. On half the trials, a critical stimulus was presented concurrently with the array. Participants were asked to detect which of 2 possible targets was present in the array (primary task), and whether the critical stimulus was present or absent (secondary task). Increasing the auditory load of the primary task (raising the number of sounds in the array) consistently reduced the ability to detect the critical stimulus. This indicates that, at least in certain situations, load theory applies in the auditory domain. The implications of this finding are discussed both with respect to our understanding of typical audition and for populations with altered auditory processing. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Constructing Noise-Invariant Representations of Sound in the Auditory Pathway
Rabinowitz, Neil C.; Willmore, Ben D. B.; King, Andrew J.; Schnupp, Jan W. H.
2013-01-01
Identifying behaviorally relevant sounds in the presence of background noise is one of the most important and poorly understood challenges faced by the auditory system. An elegant solution to this problem would be for the auditory system to represent sounds in a noise-invariant fashion. Since a major effect of background noise is to alter the statistics of the sounds reaching the ear, noise-invariant representations could be promoted by neurons adapting to stimulus statistics. Here we investigated the extent of neuronal adaptation to the mean and contrast of auditory stimulation as one ascends the auditory pathway. We measured these forms of adaptation by presenting complex synthetic and natural sounds, recording neuronal responses in the inferior colliculus and primary fields of the auditory cortex of anaesthetized ferrets, and comparing these responses with a sophisticated model of the auditory nerve. We find that the strength of both forms of adaptation increases as one ascends the auditory pathway. To investigate whether this adaptation to stimulus statistics contributes to the construction of noise-invariant sound representations, we also presented complex, natural sounds embedded in stationary noise, and used a decoding approach to assess the noise tolerance of the neuronal population code. We find that the code for complex sounds in the periphery is affected more by the addition of noise than the cortical code. We also find that noise tolerance is correlated with adaptation to stimulus statistics, so that populations that show the strongest adaptation to stimulus statistics are also the most noise-tolerant. This suggests that the increase in adaptation to sound statistics from auditory nerve to midbrain to cortex is an important stage in the construction of noise-invariant sound representations in the higher auditory brain. PMID:24265596
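A toy sketch of the core idea above, assuming a unit that adapts to the local mean and contrast of its input: such gain control cancels the shifts in input statistics that background noise introduces. This is an illustration of the principle only, not the authors' model or analysis.

```python
# Sketch: adaptation to stimulus statistics as simple gain control. A unit
# that subtracts a running mean and divides by a running contrast estimate
# keeps its output in a matched operating range even when added noise shifts
# the raw input's mean and contrast. Toy model with simulated signals.
import numpy as np

rng = np.random.default_rng(4)
sound = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 500)) * rng.random(500)
noisy = sound + 2.0 + 0.8 * rng.standard_normal(500)   # noise adds mean + contrast

def adapted_response(x, win=100):
    out = np.empty_like(x)
    for i in range(len(x)):
        seg = x[max(0, i - win):i + 1]
        out[i] = (x[i] - seg.mean()) / (seg.std() + 1e-6)
    return out

r_clean, r_noisy = adapted_response(sound), adapted_response(noisy)
# Despite the noise, the adapted responses occupy a matched operating range.
print("clean mean/std:", r_clean.mean().round(2), r_clean.std().round(2))
print("noisy mean/std:", r_noisy.mean().round(2), r_noisy.std().round(2))
```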
Effects of auditory selective attention on chirp evoked auditory steady state responses.
Bohr, Andreas; Bernarding, Corinna; Strauss, Daniel J; Corona-Strauss, Farah I
2011-01-01
Auditory steady state responses (ASSRs) are frequently used to assess auditory function. Recently, the interest in effects of attention on ASSRs has increased. In this paper, we investigated for the first time possible effects of attention on AS-SRs evoked by amplitude modulated and frequency modulated chirps paradigms. Different paradigms were designed using chirps with low and high frequency content, and the stimulation was presented in a monaural and dichotic modality. A total of 10 young subjects participated in the study, they were instructed to ignore the stimuli and after a second repetition they had to detect a deviant stimulus. In the time domain analysis, we found enhanced amplitudes for the attended conditions. Furthermore, we noticed higher amplitudes values for the condition using frequency modulated low frequency chirps evoked by a monaural stimulation. The most difference between attended and unattended modality was exhibited at the dichotic case of the amplitude modulated condition using chirps with low frequency content.
Adaptation to stimulus statistics in the perception and neural representation of auditory space.
Dahmen, Johannes C; Keating, Peter; Nodal, Fernando R; Schulz, Andreas L; King, Andrew J
2010-06-24
Sensory systems are known to adapt their coding strategies to the statistics of their environment, but little is still known about the perceptual implications of such adjustments. We investigated how auditory spatial processing adapts to stimulus statistics by presenting human listeners and anesthetized ferrets with noise sequences in which interaural level differences (ILD) rapidly fluctuated according to a Gaussian distribution. The mean of the distribution biased the perceived laterality of a subsequent stimulus, whereas the distribution's variance changed the listeners' spatial sensitivity. The responses of neurons in the inferior colliculus changed in line with these perceptual phenomena. Their ILD preference adjusted to match the stimulus distribution mean, resulting in large shifts in rate-ILD functions, while their gain adapted to the stimulus variance, producing pronounced changes in neural sensitivity. Our findings suggest that processing of auditory space is geared toward emphasizing relative spatial differences rather than the accurate representation of absolute position.
Tyndall, Ian; Ragless, Liam; O'Hora, Denis
2018-04-01
The present study examined whether increasing visual perceptual load differentially affected both Socially Meaningful and Non-socially Meaningful auditory stimulus awareness in neurotypical (NT, n = 59) adults and Autism Spectrum Disorder (ASD, n = 57) adults. On a target trial, an unexpected critical auditory stimulus (CAS), either a Non-socially Meaningful ('beep' sound) or Socially Meaningful ('hi') stimulus, was played concurrently with the presentation of the visual task. Under conditions of low visual perceptual load both NT and ASD samples reliably noticed the CAS at similar rates (77-81%), whether the CAS was Socially Meaningful or Non-socially Meaningful. However, during high visual perceptual load NT and ASD participants reliably noticed the meaningful CAS (NT = 71%, ASD = 67%), but NT participants were unlikely to notice the Non-meaningful CAS (20%), whereas ASD participants reliably noticed it (80%), suggesting an inability to engage selective attention to ignore non-salient irrelevant distractor stimuli in ASD. Copyright © 2018 Elsevier Inc. All rights reserved.
Colin, C; Radeau, M; Soquet, A; Demolin, D; Colin, F; Deltenre, P
2002-04-01
The McGurk-MacDonald illusory percept is obtained by dubbing an incongruent articulatory movement on an auditory phoneme. This type of audiovisual speech perception contributes to the assessment of theories of speech perception. The mismatch negativity (MMN) reflects the detection of a deviant stimulus within the auditory short-term memory and besides an acoustic component, possesses, under certain conditions, a phonetic one. The present study assessed the existence of an MMN evoked by McGurk-MacDonald percepts elicited by audiovisual stimuli with constant auditory components. Cortical evoked potentials were recorded using the oddball paradigm on 8 adults in 3 experimental conditions: auditory alone, visual alone and audiovisual stimulation. Obtaining illusory percepts was confirmed in an additional psychophysical condition. The auditory deviant syllables and the audiovisual incongruent syllables elicited a significant MMN at Fz. In the visual condition, no negativity was observed either at Fz, or at O(z). An MMN can be evoked by visual articulatory deviants, provided they are presented in a suitable auditory context leading to a phonetically significant interaction. The recording of an MMN elicited by illusory McGurk percepts suggests that audiovisual integration mechanisms in speech take place rather early during the perceptual processes.
A comparison of methods for teaching receptive labeling to children with autism spectrum disorders.
Grow, Laura L; Carr, James E; Kodak, Tiffany M; Jostad, Candice M; Kisamore, April N
2011-01-01
Many early intervention curricular manuals recommend teaching auditory-visual conditional discriminations (i.e., receptive labeling) using the simple-conditional method in which component simple discriminations are taught in isolation and in the presence of a distracter stimulus before the learner is required to respond conditionally. Some have argued that this procedure might be susceptible to faulty stimulus control such as stimulus overselectivity (Green, 2001). Consequently, there has been a call for the use of alternative teaching procedures such as the conditional-only method, which involves conditional discrimination training from the onset of intervention. The purpose of the present study was to compare the simple-conditional and conditional-only methods for teaching receptive labeling to 3 young children diagnosed with autism spectrum disorders. The data indicated that the conditional-only method was a more reliable and efficient teaching procedure. In addition, several error patterns emerged during training using the simple-conditional method. The implications of the results with respect to current teaching practices in early intervention programs are discussed.
Akhoun, Idrick; Moulin, Annie; Jeanvoine, Arnaud; Ménard, Mikael; Buret, François; Vollaire, Christian; Scorretti, Riccardo; Veuillet, Evelyne; Berger-Vachon, Christian; Collet, Lionel; Thai-Van, Hung
2008-11-15
Speech elicited auditory brainstem responses (Speech ABR) have been shown to be an objective measurement of speech processing in the brainstem. Given the simultaneous stimulation and recording, and the similarities between the recording and the speech stimulus envelope, there is a great risk of artefactual recordings. This study sought to systematically investigate the source of artefactual contamination in Speech ABR response. In a first part, we measured the sound level thresholds over which artefactual responses were obtained, for different types of transducers and experimental setup parameters. A watermelon model was used to model the human head susceptibility to electromagnetic artefact. It was found that impedances between the electrodes had a great effect on electromagnetic susceptibility and that the most prominent artefact is due to the transducer's electromagnetic leakage. The only artefact-free condition was obtained with insert-earphones shielded in a Faraday cage linked to common ground. In a second part of the study, using the previously defined artefact-free condition, we recorded speech ABR in unilateral deaf subjects and bilateral normal hearing subjects. In an additional control condition, Speech ABR was recorded with the insert-earphones used to deliver the stimulation, unplugged from the ears, so that the subjects did not perceive the stimulus. No responses were obtained from the deaf ear of unilaterally hearing impaired subjects, nor in the insert-out-of-the-ear condition in all the subjects, showing that Speech ABR reflects the functioning of the auditory pathways.
Martiniano, Eli Carlos; Santana, Milana Drumond Ramos; Barros, Érico Luiz Damasceno; do Socorro da Silva, Maria; Garner, David Matthew; de Abreu, Luiz Carlos; Valenti, Vitor E
2018-01-17
Music can improve the efficiency of medical treatment when correctly associated with drug action, reducing risk factors involving deteriorating cardiac function. We evaluated the effect of musical auditory stimulus associated with anti-hypertensive medication on heart rate (HR) autonomic control in hypertensive subjects. We evaluated 37 well-controlled hypertensive patients designated for anti-hypertensive medication. Heart rate variability (HRV) was calculated from HR monitor recordings of two different, randomly ordered protocols (control and music) on two separate days. Patients were examined in a resting condition 10 minutes before medication and 20, 40 and 60 minutes after oral medication. In the music protocol, music was played throughout the 60 minutes after medication at the same intensity for all subjects. We noted analogous responses of systolic and diastolic arterial pressure in both protocols. HR decreased 60 minutes after medication in the music protocol while it remained unchanged in the control protocol. The effects of anti-hypertensive medication on SDNN (standard deviation of all normal RR intervals), LF (low frequency, nu), HF (high frequency, nu) and the alpha-1 scale were more intense in the music protocol. In conclusion, musical auditory stimulus increased HR autonomic responses to anti-hypertensive medication in well-controlled hypertensive subjects.
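For readers unfamiliar with the HRV indices named above, the sketch below computes SDNN and normalized LF/HF power from simulated RR intervals using standard definitions; the resampling and windowing choices are assumptions, not the authors' pipeline.

```python
# Sketch: SDNN and normalized LF/HF power from RR intervals (standard
# definitions: LF 0.04-0.15 Hz, HF 0.15-0.40 Hz). Simulated data.
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d

rng = np.random.default_rng(5)
rr = (800 + 50 * np.sin(2 * np.pi * 0.1 * np.arange(300))
      + 20 * rng.standard_normal(300))                 # RR intervals, ms

sdnn = rr.std(ddof=1)                                  # SDNN, ms

# Evenly resample the RR tachogram at 4 Hz for spectral analysis.
t = np.cumsum(rr) / 1000.0
ts = np.arange(t[0], t[-1], 0.25)
rr_even = interp1d(t, rr)(ts)
f, pxx = welch(rr_even - rr_even.mean(), fs=4.0, nperseg=256)

lf = np.trapz(pxx[(f >= 0.04) & (f < 0.15)], f[(f >= 0.04) & (f < 0.15)])
hf = np.trapz(pxx[(f >= 0.15) & (f < 0.40)], f[(f >= 0.15) & (f < 0.40)])
lf_nu, hf_nu = 100 * lf / (lf + hf), 100 * hf / (lf + hf)
print(f"SDNN={sdnn:.1f} ms  LF={lf_nu:.1f} nu  HF={hf_nu:.1f} nu")
```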
Adaptation in the auditory midbrain of the barn owl (Tyto alba) induced by tonal double stimulation.
Singheiser, Martin; Ferger, Roland; von Campenhausen, Mark; Wagner, Hermann
2012-02-01
During hunting, the barn owl typically listens to several successive sounds as generated, for example, by rustling mice. As auditory cells exhibit adaptive coding, the earlier stimuli may influence the detection of the later stimuli. This situation was mimicked with two double-stimulus paradigms, and adaptation was investigated in neurons of the barn owl's central nucleus of the inferior colliculus. Each double-stimulus paradigm consisted of a first or reference stimulus and a second stimulus (probe). In one paradigm (second level tuning), the probe level was varied, whereas in the other paradigm (inter-stimulus interval tuning), the stimulus interval between the first and second stimulus was changed systematically. Neurons were stimulated with monaural pure tones at the best frequency, while the response was recorded extracellularly. The responses to the probe were significantly reduced when the reference stimulus and probe had the same level and the inter-stimulus interval was short. This indicated response adaptation, which could be compensated for by an increase of the probe level of 5-7 dB over the reference level, if the latter was in the lower half of the dynamic range of a neuron's rate-level function. Recovery from adaptation could be best fitted with a double exponential showing a fast (1.25 ms) and a slow (800 ms) component. These results suggest that neurons in the auditory system show dynamic coding properties to tonal double stimulation that might be relevant for faithful upstream signal propagation. Furthermore, the overall stimulus level of the masker also seems to affect the recovery capabilities of auditory neurons. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
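A sketch of fitting the double-exponential recovery described above, with the reported fast and slow time constants used to simulate data; the exact parameterization is an assumption, not taken from the paper.

```python
# Sketch: fitting recovery from adaptation with a double exponential
# (fast and slow components), on simulated data.
import numpy as np
from scipy.optimize import curve_fit

def recovery(isi, a_fast, tau_fast, a_slow, tau_slow):
    """Fraction of the unadapted response recovered at inter-stimulus interval isi (ms)."""
    return 1.0 - a_fast * np.exp(-isi / tau_fast) - a_slow * np.exp(-isi / tau_slow)

isi = np.array([0.5, 1, 2, 5, 10, 50, 100, 400, 800, 1600, 3200], float)
rng = np.random.default_rng(6)
y = recovery(isi, 0.4, 1.25, 0.4, 800.0) + 0.02 * rng.standard_normal(isi.size)

p, _ = curve_fit(recovery, isi, y, p0=[0.5, 1.0, 0.5, 500.0],
                 bounds=([0, 0.1, 0, 10], [1, 10, 1, 5000]))
print(f"tau_fast={p[1]:.2f} ms  tau_slow={p[3]:.0f} ms")
```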
Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E
2008-10-21
Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature-integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition, when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not auditory stimulus was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e. conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically, with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.
Pre-stimulus EEG oscillations correlate with perceptual alternation of speech forms.
Barraza, Paulo; Jaume-Guazzini, Francisco; Rodríguez, Eugenio
2016-05-27
Speech perception is often seen as a passive process guided by physical stimulus properties. However, ongoing brain dynamics could influence the subsequent perceptual organization of speech, to an as yet unknown extent. To elucidate this issue, we analyzed EEG oscillatory activity before and immediately after the repetitive auditory presentation of words inducing the so-called verbal transformation effect (VTE), or spontaneous alternation of meanings due to rapid repetition. Subjects indicated whether the meaning of the bistable word changed or not. For the Reversal more than for the Stable condition, results show a pre-stimulus local alpha desynchronization (300-50 ms), followed by an early post-stimulus increase of local beta synchrony (0-80 ms), and then a late increase and decrease of local alpha (200-340 ms) and beta (360-440 ms) synchrony, respectively. Additionally, the ERPs showed that the reversal positivity (RP) and reversal negativity (RN) components, along with a late positivity complex (LPC), correlate with switching between verbal forms. Our results show how ongoing brain dynamics are actively involved in the perceptual organization of speech, destabilizing verbal perceptual states and facilitating the perceptual regrouping of the elements composing the linguistic auditory stimulus. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Memorable Audiovisual Narratives Synchronize Sensory and Supramodal Neural Responses
2016-01-01
Abstract Our brains integrate information across sensory modalities to generate perceptual experiences and form memories. However, it is difficult to determine the conditions under which multisensory stimulation will benefit or hinder the retrieval of everyday experiences. We hypothesized that the determining factor is the reliability of information processing during stimulus presentation, which can be measured through intersubject correlation of stimulus-evoked activity. We therefore presented biographical auditory narratives and visual animations to 72 human subjects visually, auditorily, or combined, while neural activity was recorded using electroencephalography. Memory for the narrated information, contained in the auditory stream, was tested 3 weeks later. While the visual stimulus alone led to no meaningful retrieval, this related stimulus improved memory when it was combined with the story, even when it was temporally incongruent with the audio. Further, individuals with better subsequent memory elicited neural responses during encoding that were more correlated with their peers. Surprisingly, portions of this predictive synchronized activity were present regardless of the sensory modality of the stimulus. These data suggest that the strength of sensory and supramodal activity is predictive of memory performance after 3 weeks, and that neural synchrony may explain the mnemonic benefit of the functionally uninformative visual context observed for these real-world stimuli. PMID:27844062
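A minimal sketch of leave-one-out intersubject correlation (ISC), the reliability measure this abstract relies on: each subject's response is correlated with the average of everyone else's. Data shapes below are illustrative only.

```python
# Sketch: leave-one-out intersubject correlation of stimulus-evoked activity.
# Each subject's time course is correlated with the mean of the other
# subjects'; higher ISC indicates more reliable (synchronized) processing.
import numpy as np

rng = np.random.default_rng(7)
n_subj, n_time = 72, 5000
shared = rng.standard_normal(n_time)                          # stimulus-driven
data = 0.4 * shared + rng.standard_normal((n_subj, n_time))   # + subject noise

def isc(data):
    scores = []
    for s in range(data.shape[0]):
        others = np.delete(data, s, axis=0).mean(axis=0)
        scores.append(np.corrcoef(data[s], others)[0, 1])
    return np.array(scores)

print("mean ISC:", isc(data).mean())
```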
Affective Priming with Auditory Speech Stimuli
ERIC Educational Resources Information Center
Degner, Juliane
2011-01-01
Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In Experiment 2, stimulus onset asynchrony (SOA) was…
Region-specific reduction of auditory sensory gating in older adults.
Cheng, Chia-Hsiung; Baillet, Sylvain; Lin, Yung-Yang
2015-12-01
Aging has been associated with declines in sensory-perceptual processes. Sensory gating (SG), or repetition suppression, refers to the attenuation of neural activity in response to a second stimulus and is considered to be an automatic process that inhibits redundant sensory inputs. It is controversial whether SG deficits, as tested with an auditory paired-stimulus protocol, accompany normal aging in humans. To reconcile the debates arising from event-related potential studies, we recorded auditory neuromagnetic reactivity in 20 young and 19 elderly adult men and determined the neural activation by using minimum-norm estimate (MNE) source modeling. SG of the M100 was calculated as the ratio of the response to the second stimulus over the response to the first stimulus. MNE results revealed that fronto-temporo-parietal networks were implicated in M100 SG. Compared to the younger participants, the elderly showed selectively increased SG ratios in the anterior superior temporal gyrus, anterior middle temporal gyrus, temporal pole and orbitofrontal cortex, suggesting age-related insufficiency in gating repetitive auditory stimulation. These findings also highlight the loss of frontal inhibition of the auditory cortex in normal aging. Copyright © 2015 Elsevier Inc. All rights reserved.
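The gating ratio described above reduces to a single division. The sketch below makes the computation explicit; it is a minimal illustration assuming the M100 peak amplitudes have already been extracted per trial, and the amplitude values and the `m100_gating_ratio` helper are hypothetical, not taken from the study.

```python
import numpy as np

def m100_gating_ratio(amp_s1, amp_s2):
    """Sensory gating (SG) ratio as described above: response to the
    second stimulus of a pair divided by the response to the first.
    Ratios near 0 indicate strong suppression; ratios near (or above) 1
    indicate weak or absent gating."""
    return amp_s2 / amp_s1

# Hypothetical M100 source amplitudes (arbitrary units) for one subject:
s1_trials = np.array([21.0, 19.5, 22.3, 20.1])  # first click of each pair
s2_trials = np.array([9.8, 11.2, 10.5, 12.0])   # second click of each pair
print(m100_gating_ratio(s1_trials.mean(), s2_trials.mean()))  # ~0.52
```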
Heine, Lizette; Castro, Maïté; Martial, Charlotte; Tillmann, Barbara; Laureys, Steven; Perrin, Fabien
2015-01-01
Preferred music is a highly emotional and salient stimulus, which has previously been shown to increase the probability of auditory cognitive event-related responses in patients with disorders of consciousness (DOC). To further investigate whether and how music modifies the functional connectivity of the brain in DOC, five patients were assessed with both a classical functional connectivity scan (control condition), and a scan while they were exposed to their preferred music (music condition). Seed-based functional connectivity (left or right primary auditory cortex), and mean network connectivity of three networks linked to conscious sound perception were assessed. The auditory network showed stronger functional connectivity with the left precentral gyrus and the left dorsolateral prefrontal cortex during music as compared to the control condition. Furthermore, functional connectivity of the external network was enhanced during the music condition in the temporo-parietal junction. Although caution should be taken due to small sample size, these results suggest that preferred music exposure might have effects on patients auditory network (implied in rhythm and music perception) and on cerebral regions linked to autobiographical memory. PMID:26617542
Cognitive effects of rhythmic auditory stimulation in Parkinson's disease: A P300 study.
Lei, Juan; Conradi, Nadine; Abel, Cornelius; Frisch, Stefan; Brodski-Guerniero, Alla; Hildner, Marcel; Kell, Christian A; Kaiser, Jochen; Schmidt-Kassow, Maren
2018-05-16
Rhythmic auditory stimulation (RAS) may compensate for dysfunctions of the basal ganglia (BG), which are involved in the intrinsic evaluation of temporal intervals and in action initiation or continuation. In the cognitive domain, RAS containing periodically presented tones facilitates young healthy participants' attention allocation to anticipated time points, indicated by better performance and larger P300 amplitudes to periodic compared to random stimuli. Additionally, active auditory-motor synchronization (AMS) leads to a more precise temporal encoding of stimuli, via embodied timing encoding, than stimulus presentation adapted to the participants' actual movements. Here we investigated the effect of RAS and AMS in Parkinson's disease (PD). 23 PD patients and 23 healthy age-matched controls underwent an auditory oddball task. We manipulated the timing (periodic/random/adaptive) and setting (pedaling/sitting still) of stimulation. While patients showed a general timing effect, i.e., larger P300 amplitudes for periodic versus random tones in both the sitting and pedaling conditions, controls showed a timing effect only for the sitting but not the pedaling condition. However, a correlation between P300 amplitudes and motor variability in the periodic pedaling condition was obtained in control participants only. We conclude that RAS facilitates attentional processing of temporally predictable external events in PD patients as well as healthy controls, but embodied timing encoding via body movement does not affect stimulus processing in patients due to BG impairment. Moreover, even with intact embodied timing encoding, as in the healthy elderly, the effect of AMS depends on the degree of movement synchronization performance, which was very low in the current study. Copyright © 2018 Elsevier B.V. All rights reserved.
Hierarchical auditory processing directed rostrally along the monkey's supratemporal plane.
Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer
2010-09-29
Connectional anatomical evidence suggests that the auditory core, containing the tonotopic areas A1, R, and RT, constitutes the first stage of auditory cortical processing, with feedforward projections from core outward, first to the surrounding auditory belt and then to the parabelt. Connectional evidence also raises the possibility that the core itself is serially organized, with feedforward projections from A1 to R and with additional projections, although of unknown feed direction, from R to RT. We hypothesized that area RT together with more rostral parts of the supratemporal plane (rSTP) form the anterior extension of a rostrally directed stimulus quality processing stream originating in the auditory core area A1. Here, we analyzed auditory responses of single neurons in three different sectors distributed caudorostrally along the supratemporal plane (STP): sector I, mainly area A1; sector II, mainly area RT; and sector III, principally RTp (the rostrotemporal polar area), including cortex located 3 mm from the temporal tip. Mean onset latency of excitation responses and stimulus selectivity to monkey calls and other sounds, both simple and complex, increased progressively from sector I to III. Also, whereas cells in sector I responded with significantly higher firing rates to the "other" sounds than to monkey calls, those in sectors II and III responded at the same rate to both stimulus types. The pattern of results supports the proposal that the STP contains a rostrally directed, hierarchically organized auditory processing stream, with gradually increasing stimulus selectivity, and that this stream extends from the primary auditory area to the temporal pole.
Exploring the perceptual biases associated with believing and disbelieving in paranormal phenomena.
Simmonds-Moore, Christine
2014-08-01
Ninety-five participants (32 believers, 30 disbelievers and 33 neutral believers in the paranormal) participated in an experiment comprising one visual and one auditory block of trials. Each block included one ESP trial, two degraded-stimulus trials and one random trial. Each trial included 8 screens or epochs of "random" noise. Participants entered a guess if they perceived a stimulus or changed their mind about stimulus identity, rated guesses for confidence and made notes during each trial. Believers and disbelievers did not differ in the number of guesses made, or in their ability to detect degraded stimuli. Believers showed a trend toward faster guesses in some conditions, and displayed significantly higher confidence in their guesses and more misidentifications than disbelievers. Guesses, misidentifications and faster response latencies were generally more likely in the visual than the auditory conditions. ESP performance was no different from chance. ESP performance did not differ between belief groups or sensory modalities. Copyright © 2014 Elsevier Inc. All rights reserved.
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are processed with differing effectiveness depending on the task dimension, such that processing of visual stimuli is favored in the dimension of space, whereas processing of auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., the effect of temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for the visual modality, indicating a dissociation between the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
Lane, S D; Clow, J K; Innis, A; Critchfield, T S
1998-01-01
This study employed a stimulus-class rating procedure to explore whether stimulus equivalence and stimulus generalization can combine to promote the formation of open-ended categories incorporating cross-modal stimuli. A pretest of simple auditory discrimination indicated that subjects (college students) could discriminate among a range of tones used in the main study. Before beginning the main study, 10 subjects learned to use a rating procedure for categorizing sets of stimuli as class consistent or class inconsistent. After completing conditional discrimination training with new stimuli (shapes and tones), the subjects demonstrated the formation of cross-modal equivalence classes. Subsequently, the class-inclusion rating procedure was reinstituted, this time with cross-modal sets of stimuli drawn from the equivalence classes. On some occasions, the tones of the equivalence classes were replaced by novel tones. The probability that these novel sets would be rated as class consistent was generally a function of the auditory distance between the novel tone and the tone that was explicitly included in the equivalence class. These data extend prior work on generalization of equivalence classes, and support the role of operant processes in human category formation. PMID:9821680
ERIC Educational Resources Information Center
Horrocks, Erin; Higbee, Thomas S.
2008-01-01
Previous researchers have used stimulus preference assessment (SPA) methods to identify salient reinforcers for individuals with developmental disabilities including tangible, leisure, edible and olfactory stimuli. In the present study, SPA procedures were used to identify potential auditory reinforcers and determine the reinforcement value of…
Functional neuroanatomy of auditory scene analysis in Alzheimer's disease
Golden, Hannah L.; Agustus, Jennifer L.; Goll, Johanna C.; Downey, Laura E.; Mummery, Catherine J.; Schott, Jonathan M.; Crutch, Sebastian J.; Warren, Jason D.
2015-01-01
Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but is vulnerable to the neurodegenerative pathology of Alzheimer's disease (AD). Here we assessed the functional neuroanatomy of auditory scene analysis in AD using the well-known ‘cocktail party effect’ as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory ‘foreground’ and ‘background’. Patients with typical amnestic AD (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (the interaction of own-name identification with auditory object segregation) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology. PMID:26029629
Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness
Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.
2014-01-01
Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increased vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752
Auditory brainstem responses in the Eastern Screech Owl: An estimate of auditory thresholds
Brittan-Powell, E.F.; Lohr, B.; Hahn, D.C.; Dooling, R.J.
2005-01-01
The auditory brainstem response (ABR), a measure of neural synchrony, was used to estimate auditory sensitivity in the eastern screech owl (Megascops asio). The typical screech owl ABR waveform showed two to three prominent peaks occurring within 5 ms of stimulus onset. As sound pressure levels increased, the ABR peak amplitude increased and latency decreased. With an increasing stimulus presentation rate, ABR peak amplitude decreased and latency increased. Generally, changes in the ABR waveform to stimulus intensity and repetition rate are consistent with the pattern found in several avian families. The ABR audiogram shows that screech owls hear best between 1.5 and 6.4 kHz, with the most acute sensitivity between 4 and 5.7 kHz. The shape of the average screech owl ABR audiogram is similar to the shape of the behaviorally measured audiogram of the barn owl, except at the highest frequencies. Our data also show differences in overall auditory sensitivity between the color morphs of screech owls.
NASA Technical Reports Server (NTRS)
Gilinskiy, M. A.; Korsakov, I. A.
1979-01-01
Averaged evoked potentials in the auditory, somatosensory, and motor cortical zones, as well as in the mesencephalic reticular formation, were recorded in acute experiments on nonanesthetized, immobilized cats. Omission of the painful stimulus after a number of pairings resulted in the appearance of a delayed evoked potential, often resembling the late phases of the response to the painful stimulus. The characteristics of this response are discussed in comparison with conditioned changes of the sensory potential amplitudes.
Phillips, Derrick J; Schei, Jennifer L; Meighan, Peter C; Rector, David M
2011-11-01
Auditory evoked potential (AEP) components correspond to sequential activation of brain structures within the auditory pathway and reveal neural activity during sensory processing. To investigate state-dependent modulation of stimulus intensity response profiles within different brain structures, we assessed AEP components across both stimulus intensity and state. We implanted adult female Sprague-Dawley rats (N = 6) with electrodes to measure EEG, EKG, and EMG. Intermittent auditory stimuli (every 6-12 s) varying from 50 to 75 dBA were delivered over a 24-h period. Data were parsed into 2-s epochs and scored for wake/sleep state. All AEP components increased in amplitude with increased stimulus intensity during wake. During quiet sleep, however, only the early latency response (ELR) showed this relationship, while the middle latency response (MLR) increased only at the highest 75 dBA intensity, and the late latency response (LLR) showed no significant change across the stimulus intensities tested. During rapid eye movement sleep (REM), both ELR and LLR increased, similar to wake, but MLR was severely attenuated. The stimulus intensity and corresponding AEP response profile were dependent on both brain structure and sleep state. Lower brain structures maintained the stimulus intensity-neural response relationship during sleep. This relationship was not observed in the cortex, implying state-dependent modification of stimulus intensity coding. Since cortical AEP amplitude is not modulated by stimulus intensity during sleep, differences between responses to paired 75/50 dBA stimuli could be used to determine state better than individual intensities.
Synchronization to auditory and visual rhythms in hearing and deaf individuals
Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen
2014-01-01
A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395
Auditory steady-state response in cochlear implant patients.
Torres-Fortuny, Alejandro; Arnaiz-Marquez, Isabel; Hernández-Pérez, Heivet; Eimil-Suárez, Eduardo
2018-03-19
Auditory steady-state responses to continuous amplitude-modulated tones at rates between 70 and 110 Hz have been proposed as a feasible alternative for objective frequency-specific audiometry in cochlear implant subjects. The aim of the present study was to obtain physiological thresholds by means of the auditory steady-state response in cochlear implant patients (Clarion HiRes 90K), with acoustic stimulation under free-field conditions, and to verify its biological origin. Eleven subjects comprised the sample. Four amplitude-modulated tones of 500, 1000, 2000 and 4000 Hz were used as stimuli, using the multiple-frequency technique. The auditory steady-state response was also recorded at an intensity of 0 dB HL, with a non-specific stimulus, and using a masking technique. The study enabled the electrophysiological thresholds to be obtained for each subject in the sample. There were no auditory steady-state responses in either the 0 dB HL or the non-specific stimulus recordings. It was possible to obtain the masking thresholds. Differences between behavioral and electrophysiological thresholds were -6±16, -2±13, 0±22 and -8±18 dB at 500, 1000, 2000 and 4000 Hz, respectively. The auditory steady-state response seems to be a suitable technique to evaluate the hearing threshold in cochlear implant subjects. Copyright © 2018 Sociedad Española de Otorrinolaringología y Cirugía de Cabeza y Cuello. Publicado por Elsevier España, S.L.U. All rights reserved.
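A behavioral-minus-electrophysiological difference of the kind reported above (e.g., -6±16 dB at 500 Hz) is just the mean and standard deviation of per-subject threshold differences. The sketch below shows that summary on made-up paired thresholds; the numbers are illustrative, not the study's data.

```python
import numpy as np

# Hypothetical paired thresholds (dB HL) at one frequency, one per subject:
behavioral = np.array([25, 30, 40, 35, 45])
electrophysiological = np.array([30, 28, 42, 40, 50])

# Per-subject difference, then the mean +/- SD summary used above:
diff = behavioral - electrophysiological
print(f"{diff.mean():+.0f} ± {diff.std(ddof=1):.0f} dB")  # -3 ± 3 dB here
```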
Predictive cues for auditory stream formation in humans and monkeys.
Aggelopoulos, Nikolaos C; Deike, Susann; Selezneva, Elena; Scheich, Henning; Brechmann, André; Brosch, Michael
2017-12-18
Auditory perception is improved when stimuli are predictable, and this effect is evident in a modulation of the activity of neurons in the auditory cortex, as shown previously. Human listeners can better predict the presence of duration deviants embedded in stimulus streams with fixed interonset interval (isochrony) and repeated duration pattern (regularity), and neurons in the auditory cortex of macaque monkeys have stronger sustained responses in the 60-140 ms post-stimulus time window under these conditions. Subsequently, the question has arisen whether isochrony or regularity in the sensory input contributed to the enhancement of the neuronal and behavioural responses. Therefore, we varied the two factors, isochrony and regularity, independently, measuring the ability of human subjects to detect deviants embedded in these sequences as well as the responses of neurons in the primary auditory cortex of macaque monkeys during presentations of the sequences. The performance of humans in detecting deviants was significantly increased by regularity. Isochrony enhanced detection only in the presence of the regularity cue. In monkeys, regularity increased the sustained component of neuronal tone responses in auditory cortex while isochrony had no consistent effect. Although both regularity and isochrony can be considered as parameters that would make a sequence of sounds more predictable, our results from the human and monkey experiments converge in that regularity has a greater influence on behavioural performance and neuronal responses. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Information processing capacity while wearing personal protective eyewear.
Wade, Chip; Davis, Jerry; Marzilli, Thomas S; Weimar, Wendi H
2006-08-15
It is difficult to overemphasize the role vision plays in information processing, specifically in maintaining postural control. Vision appears to be an immediate, effortless event, suggesting that the eyes need only be open to use the visual information provided by the environment. This study investigated the effect of Occupational Safety and Health Administration regulated personal protective eyewear (29 CFR 1910.133) on physiological and cognitive factors associated with information processing capabilities. Twenty-one college students between the ages of 19 and 25 years were randomly tested in each of three eyewear conditions (control, new and artificially aged) on an inclined and a horizontal support surface for auditory and visual stimulus reaction time. Each 10-min surface-eyewear condition consisted of 50 randomly selected stimuli (25 auditory, 25 visual). Auditory stimulus reaction time was significantly affected by the surface by eyewear interaction (F(2,40) = 7.4; p < 0.05). Similarly, analysis revealed a significant surface by eyewear interaction in reaction time following the visual stimulus (F(2,40) = 21.7; p < 0.05). The current findings do not trivialize the importance of personal protective eyewear usage in occupational settings; rather, they suggest the value of future research on the effect that personal protective eyewear has on the physiological, cognitive and biomechanical contributions to postural control. These findings suggest that while personal protective eyewear may protect an individual from eye injury, its use may have deleterious effects on sensory information associated with information processing and postural control.
Ceponiene, R; Westerfield, M; Torki, M; Townsend, J
2008-06-18
Major accounts of aging implicate changes in the processing of external stimulus information. Little is known about the differential effects of aging on auditory and visual sensory processing, and the mechanisms of sensory aging are still poorly understood. Using event-related potentials (ERPs) elicited by unattended stimuli in younger (M=25.5 yrs) and older (M=71.3 yrs) subjects, this study examined mechanisms of sensory aging under minimized attention conditions. Auditory and visual modalities were examined to address the modality-specificity vs. generality of sensory aging. Between-modality differences were robust. The earlier-latency responses (P1, N1) were unaffected in the auditory modality but were diminished in the visual modality. The auditory N2 and the early visual N2 were diminished. Two similarities between the modalities were an age-related enhancement in the late P2 range and a positive correlation between behavior and the early N2, the latter suggesting that the N2 may reflect long-latency inhibition of irrelevant stimuli. Since there is no evidence for salient differences in neuro-biological aging between the two sensory regions, the observed between-modality differences are best explained by the differential reliance of the auditory and visual systems on attention. Visual sensory processing relies on facilitation by visuo-spatial attention, withdrawal of which appears to be more disadvantageous in older populations. In contrast, auditory processing is equipped with powerful inhibitory capacities. However, when the whole auditory modality is unattended, thalamo-cortical gating deficits may not manifest in the elderly. In contrast, ERP indices of longer-latency, stimulus-level inhibitory modulation appear to diminish with age.
Comparison on driving fatigue related hemodynamics activated by auditory and visual stimulus
NASA Astrophysics Data System (ADS)
Deng, Zishan; Gao, Yuan; Li, Ting
2018-02-01
As one of the main causes of traffic accidents, driving fatigue deserves researchers' attention, and its detection and monitoring during long-term driving require new techniques. Since functional near-infrared spectroscopy (fNIRS) can detect cerebral hemodynamic responses, it is a promising candidate for fatigue-level detection. Here, we performed three different kinds of experiments on a driver and recorded his cerebral hemodynamic responses during long hours of driving, utilizing our fNIRS-based device. Each experiment lasted for 7 hours, and one of three specific tests, probing the driver's response to sounds, traffic lights, and direction signs, respectively, was administered every hour. The results showed that, in the first few hours, visual stimuli induced fatigue more readily than auditory stimuli, and that visual stimuli from traffic-light scenes induced fatigue more readily than visual stimuli from direction signs. We also found that fatigue-related hemodynamic responses increased fastest for auditory stimuli, next fastest for traffic-light scenes, and slowest for direction-sign scenes. Our study compared auditory, visual color, and visual character stimuli in their propensity to cause driving fatigue, which is meaningful for driving safety management.
Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei
2015-02-01
Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition while functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a modulatory role in audiovisual integration. © The Author 2013. Published by Oxford University Press. All rights reserved.
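The decoding-accuracy index above comes from multivariate pattern analysis: a classifier is trained to predict the stimulus category from distributed fMRI activity patterns, and its cross-validated accuracy indexes how well the feature is represented. The sketch below is a generic version of that idea, not the authors' exact pipeline; the region of interest, trial counts, and data are hypothetical.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical single-subject data: one activity pattern per trial from a
# region of interest (e.g., pSTS/MTG), labels = gender category.
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 200))          # 80 trials x 200 voxels (made up)
y = np.repeat([0, 1], 40)               # 0 = male, 1 = female

clf = make_pipeline(StandardScaler(), LinearSVC())
acc = cross_val_score(clf, X, y, cv=5)  # decoding accuracy per fold
print(acc.mean())                       # ~0.5 (chance) for this random data
```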
Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis
2014-01-01
In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring, or not requiring, selective auditory attention. Appended to each stimulus presentation, and included in the calculation of each nSFOAE response, was a 30-ms silent period that was used to estimate the level of the inherent physiological noise in the ear canals of our subjects during each behavioral condition. Physiological-noise magnitudes were higher (noisier) for all subjects in the inattention task, and lower (quieter) in the selective auditory-attention tasks. These noise measures initially were made at the frequency of our nSFOAE probe tone (4.0 kHz), but the same attention effects also were observed across a wide range of frequencies. We attribute the observed differences in physiological-noise magnitudes between the inattention and attention conditions to different levels of efferent activation associated with the differing attentional demands of the behavioral tasks. One hypothesis is that when the attentional demand is relatively great, efferent activation is relatively high, and a decrease in the gain of the cochlear amplifier leads to lower-amplitude cochlear activity, and thus a smaller measure of noise from the ear. PMID:24732069
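As a rough illustration of the noise estimate described above, the sketch below computes the RMS magnitude of the ear-canal signal in the appended silent window. It is a simplification under assumed parameters (sampling rate, window length); the study's actual measure was frequency-specific and averaged over many presentations.

```python
import numpy as np

def noise_magnitude(recording, fs, silent_ms=30):
    """Estimate physiological noise from the silent period appended to each
    stimulus presentation: RMS of the final `silent_ms` milliseconds of the
    ear-canal recording (a simplification of the band-limited measure
    described above)."""
    n = int(fs * silent_ms / 1000)
    silent = recording[-n:]
    return np.sqrt(np.mean(silent ** 2))

fs = 44100                                                # assumed rate
rec = np.random.default_rng(2).normal(0, 1e-3, size=fs)   # 1 s fake signal
print(noise_magnitude(rec, fs))
```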
Brain stem auditory-evoked response of the nonanesthetized dog.
Marshall, A E
1985-04-01
The brain-stem auditory evoked response was measured in a group of 24 healthy dogs under conditions suitable for clinical diagnostic use. The waveforms were identified, and analyses of amplitude ratios, latencies, and interpeak latencies were done. The group was subdivided into subgroups based on tranquilization, nontranquilization, sex, and weight. Differences were not observed among any of these subgroups. All dogs responded to the click stimulus from 30 dB to 90 dB, but only 62.5% of the dogs responded at 5 dB. The total number of peaks averaged 1.6 at 5 dB, increased linearly to 6.5 at 50 dB, and remained at 6.5 up to 90 dB. The frequency of recognizability of each wave was tabulated for each stimulus intensity tested; recognizability increased with increased stimulus intensity. Amplitudes of waves increased with increasing stimulus intensity, but were highly variable. The 4th wave had the greatest amplitude at the lower stimulus intensities, and the 1st wave had the greatest amplitude at the higher stimulus intensities. The amplitude ratio of the 1st to the 5th wave was greater than 1 at stimulus intensities of 50 dB or less, and was 1 at stimulus intensities greater than 50 dB. Interpeak latencies did not change with stimulus intensity. Peak latencies of the 1st to 6th waves at a 5-dB hearing level averaged 2.03, 2.72, 3.23, 4.14, 4.41, and 6.05 ms, respectively; latencies of these 6 waves at 90 dB were 0.92, 1.79, 2.46, 3.03, 3.47, and 4.86 ms, respectively. Latency decreased by 0.009 to 0.014 ms/dB across waves.
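The reported latency shift per decibel can be checked directly from the wave 1 endpoints above; the short computation below does so, reusing only numbers stated in the abstract.

```python
# Latency-intensity slope for wave 1, using only values reported above:
lat_5dB, lat_90dB = 2.03, 0.92           # ms at 5 and 90 dB hearing level
slope = (lat_5dB - lat_90dB) / (90 - 5)  # ms of latency lost per dB gained
print(f"{slope:.4f} ms/dB")              # 0.0131 -- inside 0.009-0.014
```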
ERIC Educational Resources Information Center
Díaz-Mataix, Lorenzo; Piper, Walter T.; Schiff, Hillary C.; Roberts, Clark H.; Campese, Vincent D.; Sears, Robert M.; LeDoux, Joseph E.
2017-01-01
The creation of auditory threat Pavlovian memory requires an initial learning stage in which a neutral conditioned stimulus (CS), such as a tone, is paired with an aversive one (US), such as a shock. In this phase, the CS acquires the capacity of predicting the occurrence of the US and therefore elicits conditioned defense responses.…
Deal, Alex L.; Erickson, Kristen J.; Shiers, Stephanie I.; Burman, Michael A.
2016-01-01
Classical fear conditioning creates an association between an aversive stimulus and a neutral stimulus. Although the requisite neural circuitry is well understood in mature organisms, the development of these circuits is less well studied. The current experiments examine the ontogeny of fear conditioning and relate it to neuronal activation assessed through immediate early gene (IEG) expression in the amygdala, hippocampus, perirhinal cortex, and hypothalamus of periweanling rats. Rat pups were fear conditioned, or not, during the 3rd or 4th weeks of life. Neuronal activation was assessed by quantifying expression of FBJ osteosarcoma oncogene (FOS) using immunohistochemistry (IHC) in Experiment 1. Fos and early growth response gene-1 (EGR1) expression was assessed using qRT-PCR in Experiment 2. Behavioral data confirm that both auditory and contextual fear continue to emerge between PD 17 and 24. The IEG expression data are highly consistent with these behavioral results. IHC results demonstrate significantly more FOS protein expression in the basal amygdala of fear conditioned PD 23 subjects compared to control subjects, but no significant difference at PD 17. qRT-PCR results suggest specific activation of the amygdala only in older subjects during auditory fear expression. A similar effect of age and conditioning status was also observed in the perirhinal cortex during both contextual and auditory fear expression. Overall, the development of fear conditioning occurring between the 3rd and 4th weeks of life appears to be at least partly attributable to changes in activation of the amygdala and perirhinal cortex during fear conditioning or expression. PMID:26820587
Selective attention and the auditory vertex potential. 1: Effects of stimulus delivery rate
NASA Technical Reports Server (NTRS)
Schwent, V. L.; Hillyard, S. A.; Galambos, R.
1975-01-01
Enhancement of the auditory vertex potential with selective attention to dichotically presented tone pips was found to be critically sensitive to the range of inter-stimulus intervals in use. Only at the shortest intervals was a clear-cut enhancement of the vertex potential to stimuli at the attended ear observed.
Interactions of cognitive and auditory abilities in congenitally blind individuals.
Rokem, Ariel; Ahissar, Merav
2009-02-01
Congenitally blind individuals have been found to show superior performance in perceptual and memory tasks. In the present study, we asked whether superior stimulus encoding could account for performance in memory tasks. We characterized the performance of a group of congenitally blind individuals on a series of auditory, memory and executive cognitive tasks and compared their performance to that of sighted controls matched for age, education and musical training. As expected, we found superior verbal spans among congenitally blind individuals. Moreover, we found superior speech perception, measured by resilience to noise, and superior auditory frequency discrimination. However, when memory span was measured under conditions of equivalent speech perception, by adjusting the signal to noise ratio for each individual to the same level of perceptual difficulty (80% correct), the advantage in memory span was completely eliminated. Moreover, blind individuals did not possess any advantage in cognitive executive functions, such as manipulation of items in memory and math abilities. We propose that the short-term memory advantage of blind individuals results from better stimulus encoding, rather than from superiority at subsequent processing stages.
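The equal-perception manipulation above requires titrating each listener's signal-to-noise ratio to a fixed accuracy (80% correct). One standard way to do this, offered here only as a plausible sketch rather than the authors' procedure, is an adaptive staircase; the rule, step size, and simulated listener below are all assumptions.

```python
import numpy as np

def track_snr(respond, start_snr=0.0, step=1.0, n_trials=60):
    """A 4-down/1-up adaptive staircase, which converges near ~84% correct;
    a weighted variant would target 80%. `respond(snr)` returns True on a
    correct trial. Returns the threshold estimate (mean of late trials)."""
    snr, correct_run, history = start_snr, 0, []
    for _ in range(n_trials):
        history.append(snr)
        if respond(snr):
            correct_run += 1
            if correct_run == 4:        # four correct in a row -> harder
                snr -= step
                correct_run = 0
        else:                           # any error -> easier
            snr += step
            correct_run = 0
    return np.mean(history[-20:])

# Fake listener whose accuracy rises with SNR (purely for illustration):
rng = np.random.default_rng(3)
fake = lambda snr: rng.random() < 1 / (1 + np.exp(-(snr + 2)))
print(track_snr(fake))
```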
Olulade, O; Hu, S; Gonzalez-Castillo, J; Tamer, G G; Luh, W-M; Ulmer, J L; Talavage, T M
2011-07-01
A confounding factor in auditory functional magnetic resonance imaging (fMRI) experiments is the acoustic noise inherently associated with the echo-planar imaging acquisition technique. Previous studies have demonstrated that this noise can induce unwanted neuronal responses that can mask stimulus-induced responses. Similarly, activation accumulated over multiple stimuli has been demonstrated to elevate the baseline, thus reducing the dynamic range available for subsequent responses. To best evaluate responses to auditory stimuli, it is necessary to account for the presence of all recent acoustic stimulation, beginning with an understanding of the attenuating effects brought about by interactions between and among induced unwanted neuronal responses and responses to desired auditory stimuli. This study focuses on characterizing the duration of this temporal memory and qualitatively assessing the associated response attenuation. Two experimental parameters, inter-stimulus interval (ISI) and repetition time (TR), were varied during an fMRI experiment in which participants were asked to passively attend to an auditory stimulus. Results present evidence of a state-dependent interaction between induced responses. As expected, the attenuating effects of these interactions become less significant as TR and ISI increase and, in contrast to previous work, persist up to 18 s after a stimulus presentation. Copyright © 2011 Elsevier B.V. All rights reserved.
Ponnath, Abhilash; Farris, Hamilton E.
2014-01-01
Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3–10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation lasted <2 s and, in different cells, excitability either decreased, increased or shifted in latency. Within cells, the modulatory effect of sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene. PMID:25120437
Working memory capacity affects the interference control of distractors at auditory gating.
Tsuchida, Yukio; Katayama, Jun'ichi; Murohashi, Harumitsu
2012-05-10
It is important to understand the role of individual differences in working memory capacity (WMC). We investigated the relation between differences in WMC and N1 in event-related brain potentials as a measure of early selective attention for an auditory distractor in three-stimulus oddball tasks that required minimum memory. A high-WMC group (n=13) showed a smaller N1 in response to a distractor and target than did a low-WMC group (n=13) in the novel condition with high distraction. However, in the simple condition with low distraction, there was no difference in N1 between the groups. For all participants (n=52), the correlation between the scores for WMC and N1 peak amplitude was strong for distractors in the novel condition, whereas there was no relation in the simple condition. These results suggest that WMC can predict the interference control for a salient distractor at auditory gating even during a selective attention task. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
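The reported WMC-N1 relation is a simple bivariate correlation across participants. The following sketch shows how such a correlation would be computed; the scores and amplitudes are fabricated for illustration (the study used n = 52).

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-participant values:
wmc_scores = np.array([35, 42, 28, 50, 31, 45, 38, 27, 49, 33])
n1_peak_uv = np.array([-4.2, -2.9, -5.1, -2.1, -4.8,
                       -2.5, -3.6, -5.4, -2.3, -4.5])

r, p = pearsonr(wmc_scores, n1_peak_uv)
print(f"r = {r:.2f}, p = {p:.3f}")  # higher WMC -> smaller (less negative) N1
```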
Putative mechanisms mediating tolerance for audiovisual stimulus onset asynchrony.
Bhat, Jyoti; Miller, Lee M; Pitt, Mark A; Shahin, Antoine J
2015-03-01
Audiovisual (AV) speech perception is robust to temporal asynchronies between visual and auditory stimuli. We investigated the neural mechanisms that facilitate tolerance for audiovisual stimulus onset asynchrony (AVOA) with EEG. Individuals were presented with AV words that were asynchronous in onsets of voice and mouth movement and judged whether they were synchronous or not. Behaviorally, individuals tolerated (perceived as synchronous) longer AVOAs when mouth movement preceded the speech (V-A) stimuli than when the speech preceded mouth movement (A-V). Neurophysiologically, the P1-N1-P2 auditory evoked potentials (AEPs), time-locked to sound onsets and known to arise in and surrounding the primary auditory cortex (PAC), were smaller for the in-sync than the out-of-sync percepts. Spectral power of oscillatory activity in the beta band (14-30 Hz) following the AEPs was larger during the in-sync than out-of-sync perception for both A-V and V-A conditions. However, alpha power (8-14 Hz), also following AEPs, was larger for the in-sync than out-of-sync percepts only in the V-A condition. These results demonstrate that AVOA tolerance is enhanced by inhibiting low-level auditory activity (e.g., AEPs representing generators in and surrounding PAC) that code for acoustic onsets. By reducing sensitivity to acoustic onsets, visual-to-auditory onset mapping is weakened, allowing for greater AVOA tolerance. In contrast, beta and alpha results suggest the involvement of higher-level neural processes that may code for language cues (phonetic, lexical), selective attention, and binding of AV percepts, allowing for wider neural windows of temporal integration, i.e., greater AVOA tolerance. Copyright © 2015 the American Physiological Society.
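Beta-band (14-30 Hz) and alpha-band (8-14 Hz) power of the kind analyzed above can be estimated in several ways; the sketch below uses one common recipe, band-pass filtering followed by the Hilbert envelope, which is an assumption rather than the authors' exact time-frequency method.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power(eeg, fs, lo, hi):
    """Instantaneous power in a frequency band: 4th-order Butterworth
    band-pass, then the squared magnitude of the Hilbert envelope."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    analytic = hilbert(filtfilt(b, a, eeg))
    return np.abs(analytic) ** 2

fs = 500                                              # assumed sampling rate
eeg = np.random.default_rng(4).normal(size=2 * fs)    # 2 s of fake EEG
beta_power = band_power(eeg, fs, 14, 30)              # beta, as above
alpha_power = band_power(eeg, fs, 8, 14)              # alpha, as above
print(beta_power.mean(), alpha_power.mean())
```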
Aging Affects Adaptation to Sound-Level Statistics in Human Auditory Cortex.
Herrmann, Björn; Maess, Burkhard; Johnsrude, Ingrid S
2018-02-21
Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information. SIGNIFICANCE STATEMENT Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics. Listeners were presented with sounds drawn from sound-level distributions with different modes (15 vs 45 dB). Auditory cortex neurons adapted to sound-level statistics in younger and older adults, but adaptation was incomplete in older people. The data suggest that the aging auditory system does not fully capitalize on the statistics available in sound environments to tune the perceptual system dynamically. Copyright © 2018 the authors.
Statistical context shapes stimulus-specific adaptation in human auditory cortex.
Herrmann, Björn; Henry, Molly J; Fromboluti, Elisa Kim; McAuley, J Devin; Obleser, Jonas
2015-04-01
Stimulus-specific adaptation is the phenomenon whereby neural response magnitude decreases with repeated stimulation. Inconsistencies between recent nonhuman animal recordings and computational modeling suggest dynamic influences on stimulus-specific adaptation. The present human electroencephalography (EEG) study investigates the potential role of statistical context in dynamically modulating stimulus-specific adaptation by examining the auditory cortex-generated N1 and P2 components. As in previous studies of stimulus-specific adaptation, listeners were presented with oddball sequences in which the presentation of a repeated tone was infrequently interrupted by rare spectral changes taking on three different magnitudes. Critically, the statistical context varied with respect to the probability of small versus large spectral changes within oddball sequences (half of the time a small change was most probable; in the other half a large change was most probable). We observed larger N1 and P2 amplitudes (i.e., release from adaptation) for all spectral changes in the small-change compared with the large-change statistical context. The increase in response magnitude also held for responses to tones presented with high probability, indicating that statistical adaptation can overrule stimulus probability per se in its influence on neural responses. Computational modeling showed that the degree of coadaptation in auditory cortex changed depending on the statistical context, which in turn affected stimulus-specific adaptation. Thus the present data demonstrate that stimulus-specific adaptation in human auditory cortex critically depends on statistical context. Finally, the present results challenge the implicit assumption of stationarity of neural response magnitudes that governs the practice of isolating established deviant-detection responses such as the mismatch negativity. Copyright © 2015 the American Physiological Society.
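The design above hinges on two nested probabilities: how often any deviant occurs, and, given a deviant, how its spectral-change magnitude is distributed in the current statistical context. The sketch below generates such sequences; the specific probabilities and the `oddball_sequence` helper are illustrative placeholders, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(0)

def oddball_sequence(n_tones, context, p_deviant=0.15):
    """Build one oddball sequence: a repeated standard tone, rarely
    interrupted by a spectral change of small, medium, or large magnitude.
    `context` biases which change magnitude is most probable, mirroring the
    small-change vs. large-change statistical contexts described above."""
    if context == "small-change":
        magnitude_probs = {"small": 0.70, "medium": 0.20, "large": 0.10}
    else:  # "large-change" context
        magnitude_probs = {"small": 0.10, "medium": 0.20, "large": 0.70}
    seq = []
    for _ in range(n_tones):
        if rng.random() < p_deviant:
            seq.append(rng.choice(list(magnitude_probs),
                                  p=list(magnitude_probs.values())))
        else:
            seq.append("standard")
    return seq

print(oddball_sequence(20, "small-change"))
```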
Event-Related Potential Measures of a Violation of an Expected Increase and Decrease in Intensity
Macdonald, Margaret; Campbell, Kenneth
2013-01-01
Unexpected physical increases in the intensity of a frequently occurring “standard” auditory stimulus are experienced as obtrusive. This could either be because of a physical change, the increase in intensity of the “deviant” stimulus, or a psychological change, the violation of the expectancy for the occurrence of the lower-intensity standard stimulus. Two experiments were run in which event-related potentials (ERPs) were recorded to determine whether “psychological” increments (violation of an expectancy for a lower intensity) would be processed differently than psychological decrements (violation of an expectancy for a higher intensity). ERPs were recorded while subjects were presented with auditory tones that alternated between low and high intensity. The subjects ignored the auditory stimuli while watching a video. Deviants were created by repeating the same stimulus. In the first experiment, pairs of stimuli alternating in intensity were presented in separate increment (H-L…H-L…H-H…H-L, in which H = 80 dB SPL and L = 60 dB SPL) and decrement conditions (L-H…L-H…L-L…L-H, in which H = 90 dB SPL and L = 80 dB SPL). The paradigm employed in the second experiment consisted of an alternating intensity pattern (H-L-H-L-H-H-H-L) or (H-L-H-L-L-L-H-L). Importantly, the stimulus prior to the deviant (the standard) and the actual deviants in both increment and decrement conditions in both experiments were physically identical (80 dB SPL tones). The repetition of the lower-intensity tone therefore acted as a psychological rather than a physical decrement (a higher-intensity tone was expected), while the repetition of the higher-intensity tone acted as a psychological increment (a lower-intensity tone was expected). The psychological increments in both experiments elicited a larger-amplitude mismatch negativity (MMN) than the decrements. Thus, regardless of whether an acoustic change signals a physical increase in intensity or violates an expected decrease in intensity, a large MMN will be elicited. PMID:24143195
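Computationally, the MMN reported above is a difference wave between trial-averaged ERPs to physically identical deviant and standard tones. The sketch below computes such a difference wave on fabricated waveforms; the Gaussian shapes, latencies, and amplitudes are purely illustrative.

```python
import numpy as np

def mismatch_negativity(erp_deviant, erp_standard):
    """MMN as a difference wave: ERP to the (physically identical) deviant
    minus ERP to the standard, as in the paradigm above. Inputs are
    trial-averaged waveforms."""
    return erp_deviant - erp_standard

# Fake averaged waveforms at one electrode (µV), 0-400 ms in 100 samples:
t = np.linspace(0, 0.4, 100)
standard = -2.0 * np.exp(-((t - 0.10) ** 2) / 0.001)   # N1-like deflection
deviant = standard - 1.5 * np.exp(-((t - 0.15) ** 2) / 0.002)  # extra negativity
mmn = mismatch_negativity(deviant, standard)
print(t[np.argmin(mmn)])   # MMN peak latency, ~0.15 s for these fake data
```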
A COMPARISON OF METHODS FOR TEACHING RECEPTIVE LABELING TO CHILDREN WITH AUTISM SPECTRUM DISORDERS
Grow, Laura L; Carr, James E; Kodak, Tiffany M; Jostad, Candice M; Kisamore, April N
2011-01-01
Many early intervention curricular manuals recommend teaching auditory-visual conditional discriminations (i.e., receptive labeling) using the simple-conditional method in which component simple discriminations are taught in isolation and in the presence of a distracter stimulus before the learner is required to respond conditionally. Some have argued that this procedure might be susceptible to faulty stimulus control such as stimulus overselectivity (Green, 2001). Consequently, there has been a call for the use of alternative teaching procedures such as the conditional-only method, which involves conditional discrimination training from the onset of intervention. The purpose of the present study was to compare the simple-conditional and conditional-only methods for teaching receptive labeling to 3 young children diagnosed with autism spectrum disorders. The data indicated that the conditional-only method was a more reliable and efficient teaching procedure. In addition, several error patterns emerged during training using the simple-conditional method. The implications of the results with respect to current teaching practices in early intervention programs are discussed. PMID:21941380
Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel
2017-04-01
Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we used a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
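The τ cue described above is simply the ratio of an instantaneous quantity to its rate of change; a minimal worked example for the optical case (the numbers are illustrative):

```python
def tau_estimate(size, size_rate):
    """First-order TTC estimate: instantaneous extent over its rate of change."""
    return size / size_rate

# An approaching object whose optical size grows from 2.0 to 2.2 deg in 0.1 s:
theta = 2.1                       # instantaneous optical size (deg)
dtheta = (2.2 - 2.0) / 0.1        # rate of change (deg/s)
print(f"tau = {tau_estimate(theta, dtheta):.2f} s")   # -> 1.05 s to contact
```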
Primary and multisensory cortical activity is correlated with audiovisual percepts.
Benoit, Margo McKenna; Raij, Tommi; Lin, Fa-Hsuan; Jääskeläinen, Iiro P; Stufflebeam, Steven
2010-04-01
Incongruent auditory and visual stimuli can elicit audiovisual illusions such as the McGurk effect where visual /ka/ and auditory /pa/ fuse into another percept such as /ta/. In the present study, human brain activity was measured with adaptation functional magnetic resonance imaging to investigate which brain areas support such audiovisual illusions. Subjects viewed trains of four movies beginning with three congruent /pa/ stimuli to induce adaptation. The fourth stimulus could be (i) another congruent /pa/, (ii) a congruent /ka/, (iii) an incongruent stimulus that evokes the McGurk effect in susceptible individuals (lips /ka/ voice /pa/), or (iv) the converse combination that does not cause the McGurk effect (lips /pa/ voice /ka/). This paradigm was predicted to show increased release from adaptation (i.e. stronger brain activation) when the fourth movie and the related percept was increasingly different from the three previous movies. A stimulus change in either the auditory or the visual stimulus from /pa/ to /ka/ (iii, iv) produced within-modality and cross-modal responses in primary auditory and visual areas. A greater release from adaptation was observed for incongruent non-McGurk (iv) compared to incongruent McGurk (iii) trials. A network including the primary auditory and visual cortices, nonprimary auditory cortex, and several multisensory areas (superior temporal sulcus, intraparietal sulcus, insula, and pre-central cortex) showed a correlation between perceiving the McGurk effect and the fMRI signal, suggesting that these areas support the audiovisual illusion. Copyright 2009 Wiley-Liss, Inc.
Lanuza, E; Moncho-Bogani, J; Ledoux, J E
2008-08-26
The lateral nucleus of the amygdala (LA) is a site of convergence for auditory (conditioned stimulus) and foot-shock (unconditioned stimulus) inputs during fear conditioning. The auditory pathways to LA are well characterized, but less is known about the pathways through which foot shock is transmitted. Anatomical tracing and physiological recording studies suggest that the posterior intralaminar thalamic nucleus, which projects to LA, receives both auditory and somatosensory inputs. In the present study we examined the expression of the immediate-early gene c-fos in the LA in rats in response to foot-shock stimulation. We then determined the effects of posterior intralaminar thalamic lesions on foot-shock-induced c-Fos expression in the LA. Foot-shock stimulation led to an increase in the density of c-Fos-positive cells in all LA subnuclei in comparison to controls exposed to the conditioning box but not shocked. However, some differences among the dorsolateral, ventrolateral and ventromedial subnuclei were observed. The ventrolateral subnucleus showed a homogeneous activation throughout its antero-posterior extension. In contrast, only the rostral aspect of the ventromedial subnucleus and the central aspect of the dorsolateral subnucleus showed a significant increment in c-Fos expression. The density of c-Fos-labeled cells in all LA subnuclei was also increased in animals placed in the box in comparison to untreated animals. Unilateral electrolytic lesions of the posterior intralaminar thalamic nucleus and the medial division of the medial geniculate body reduced foot-shock-induced c-Fos activation in the LA ipsilateral to the lesion. The number of c-Fos labeled cells on the lesioned side was reduced to the levels observed in the animals exposed only to the box. These results indicate that the LA is involved in processing information about the foot-shock unconditioned stimulus and receives this kind of somatosensory information from the posterior intralaminar thalamic nucleus and the medial division of the medial geniculate body.
Role of the right inferior parietal cortex in auditory selective attention: An rTMS study.
Bareham, Corinne A; Georgieva, Stanimira D; Kamke, Marc R; Lloyd, David; Bekinschtein, Tristan A; Mattingley, Jason B
2018-02-01
Selective attention is the process of directing limited capacity resources to behaviourally relevant stimuli while ignoring competing stimuli that are currently irrelevant. Studies in healthy human participants and in individuals with focal brain lesions have suggested that the right parietal cortex is crucial for resolving competition for attention. Following right-hemisphere damage, for example, patients may have difficulty reporting a brief, left-sided stimulus if it occurs with a competitor on the right, even though the same left stimulus is reported normally when it occurs alone. Such "extinction" of contralesional stimuli has been documented for all the major sense modalities, but it remains unclear whether its occurrence reflects involvement of one or more specific subregions of the temporo-parietal cortex. Here we employed repetitive transcranial magnetic stimulation (rTMS) over the right hemisphere to examine the effect of disruption of two candidate regions - the supramarginal gyrus (SMG) and the superior temporal gyrus (STG) - on auditory selective attention. Eighteen neurologically normal, right-handed participants performed an auditory task, in which they had to detect target digits presented within simultaneous dichotic streams of spoken distractor letters in the left and right channels, both before and after 20 min of 1 Hz rTMS over the SMG, STG or a somatosensory control site (S1). Across blocks, participants were asked to report on auditory streams in the left, right, or both channels, which yielded focused and divided attention conditions. Performance was unchanged for the two focused attention conditions, regardless of stimulation site, but was selectively impaired for contralateral left-sided targets in the divided attention condition following stimulation of the right SMG, but not the STG or S1. Our findings suggest a causal role for the right inferior parietal cortex in auditory selective attention. Copyright © 2017 Elsevier Ltd. All rights reserved.
Food and water deprivation disrupts latent inhibition with an auditory fear conditioning procedure.
De la Casa, Luis G
2013-11-01
Latent inhibition (LI), operationally defined as the reduced conditioned response to a stimulus that has been preexposed before conditioning, seems to be determined by the interaction of different processes that include attentional, associative, memory, motivational, and emotional factors. In this paper we focused on the role of deprivation level on LI intensity using an auditory fear conditioning procedure with rats. LI was observed when the animals were non-deprived, but it was disrupted when the rats were water- or food-deprived. We propose that deprivation induced an increase in attention to the to-be-CS, and, as a result, LI was disrupted in deprived animals. The implications of the results for the current interpretations of LI are also discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
Pre-Attentive Auditory Processing of Lexicality
ERIC Educational Resources Information Center
Jacobsen, Thomas; Horvath, Janos; Schroger, Erich; Lattner, Sonja; Widmann, Andreas; Winkler, Istvan
2004-01-01
The effects of lexicality on auditory change detection based on auditory sensory memory representations were investigated by presenting oddball sequences of repeatedly presented stimuli, while participants ignored the auditory stimuli. In a cross-linguistic study of Hungarian and German participants, stimulus sequences were composed of words that…
Kurt, Simone; Sausbier, Matthias; Rüttiger, Lukas; Brandt, Niels; Moeller, Christoph K.; Kindler, Jennifer; Sausbier, Ulrike; Zimmermann, Ulrike; van Straaten, Harald; Neuhuber, Winfried; Engel, Jutta; Knipper, Marlies; Ruth, Peter; Schulze, Holger
2012-01-01
Large conductance, voltage- and Ca2+-activated K+ (BK) channels in inner hair cells (IHCs) of the cochlea are essential for hearing. However, germline deletion of BKα, the pore-forming subunit KCNMA1 of the BK channel, surprisingly did not affect hearing thresholds in the first postnatal weeks, even though altered IHC membrane time constants, decreased IHC receptor potential alternating current/direct current ratio, and impaired spike timing of auditory fibers were reported in these mice. To investigate the role of IHC BK channels for central auditory processing, we generated a conditional mouse model with hair cell-specific deletion of BKα from postnatal day 10 onward. This had an unexpected effect on temporal coding in the central auditory system: neuronal single and multiunit responses in the inferior colliculus showed higher excitability and greater precision of temporal coding that may be linked to the improved discrimination of temporally modulated sounds observed in behavioral training. The higher precision of temporal coding, however, was restricted to slower modulations of sound and reduced stimulus-driven activity. This suggests a diminished dynamic range of stimulus coding that is expected to impair signal detection in noise. Thus, BK channels in IHCs are crucial for central coding of the temporal fine structure of sound and for detection of signals in a noisy environment.—Kurt, S., Sausbier, M., Rüttiger, L., Brandt, N., Moeller, C. K., Kindler, J., Sausbier, U., Zimmermann, U., van Straaten, H., Neuhuber, W., Engel, J., Knipper, M., Ruth, P., Schulze, H. Critical role for cochlear hair cell BK channels for coding the temporal structure and dynamic range of auditory information for central auditory processing. PMID:22691916
Analysis of stimulus-related activity in rat auditory cortex using complex spectral coefficients
Krause, Bryan M.
2013-01-01
The neural mechanisms of sensory responses recorded from the scalp or cortical surface remain controversial. Evoked vs. induced response components (i.e., changes in mean vs. variance) are associated with bottom-up vs. top-down processing, but trial-by-trial response variability can confound this interpretation. Phase reset of ongoing oscillations has also been postulated to contribute to sensory responses. In this article, we present evidence that responses under passive listening conditions are dominated by variable evoked response components. We measured the mean, variance, and phase of complex time-frequency coefficients of epidurally recorded responses to acoustic stimuli in rats. During the stimulus, changes in mean, variance, and phase tended to co-occur. After the stimulus, there was a small, low-frequency offset response in the mean and modest, prolonged desynchronization in the alpha band. Simulations showed that trial-by-trial variability in the mean can account for most of the variance and phase changes observed during the stimulus. This variability was state dependent, with smallest variability during periods of greatest arousal. Our data suggest that cortical responses to auditory stimuli reflect variable inputs to the cortical network. These analyses suggest that caution should be exercised when interpreting variance and phase changes in terms of top-down cortical processing. PMID:23657279
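The mean/variance/phase decomposition of complex time-frequency coefficients used here can be sketched in a few lines. The simulated coefficients below are hypothetical and serve only to illustrate the article's central simulation point: trial-to-trial variability in a fixed-phase evoked component, by itself, produces variance and phase-coherence changes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical complex time-frequency coefficients (trials x timepoints) at one
# frequency: a fixed-phase ("evoked") response whose amplitude varies from
# trial to trial, plus complex noise.
n_trials, n_time = 200, 100
evoked = 0.5 * np.hanning(n_time) * np.exp(1j * np.pi / 4)
gain = 1.0 + 0.5 * rng.standard_normal((n_trials, 1))    # trial-by-trial variability
noise = 0.3 * (rng.standard_normal((n_trials, n_time))
               + 1j * rng.standard_normal((n_trials, n_time)))
coeffs = gain * evoked + noise

mean_resp = np.abs(coeffs.mean(axis=0))                  # "evoked": change in the mean
var_resp = coeffs.var(axis=0)                            # "induced"-like: change in variance
itpc = np.abs((coeffs / np.abs(coeffs)).mean(axis=0))    # inter-trial phase coherence

# A variable evoked component alone inflates variance and phase coherence:
print(f"peak |mean| = {mean_resp.max():.2f}, peak variance = {var_resp.max():.2f}, "
      f"peak phase coherence = {itpc.max():.2f}")
```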
Dual Functions of Perirhinal Cortex in Fear Conditioning
Kent, Brianne A.; Brown, Thomas H.
2012-01-01
The present review examines the role of perirhinal cortex (PRC) in Pavlovian fear conditioning. The focus is on rats, partly because so much is known, behaviorally and neurobiologically, about fear conditioning in these animals. In addition, the neuroanatomy and neurophysiology of rat PRC have been described in considerable detail at the cellular and systems levels. The evidence suggests that PRC can serve at least two types of mnemonic functions in Pavlovian fear conditioning. The first function, termed "stimulus unitization," refers to the ability to treat two or more separate items or stimulus elements as a single entity. Supporting evidence for this perceptual function comes from studies of context conditioning as well as delay conditioning to discontinuous auditory cues. In a delay paradigm, the conditional stimulus (CS) and unconditional stimulus (US) overlap temporally and co-terminate. The second PRC function entails a type of "transient memory." Supporting evidence comes from studies of trace cue conditioning, where there is a temporal gap or trace interval between the CS offset and the US onset. For learning to occur, there must be a transient CS representation during the trace interval. We advance a novel neurophysiological mechanism for this transient representation. These two hypothesized functions of PRC are consistent with inferences based on non-aversive forms of learning. PMID:22903623
Auditory modulation of wind-elicited walking behavior in the cricket Gryllus bimaculatus.
Fukutomi, Matasaburo; Someya, Makoto; Ogawa, Hiroto
2015-12-01
Animals flexibly change their locomotion triggered by an identical stimulus depending on the environmental context and behavioral state. This indicates that additional sensory inputs in different modality from the stimulus triggering the escape response affect the neuronal circuit governing that behavior. However, how the spatio-temporal relationships between these two stimuli effect a behavioral change remains unknown. We studied this question, using crickets, which respond to a short air-puff by oriented walking activity mediated by the cercal sensory system. In addition, an acoustic stimulus, such as conspecific 'song' received by the tympanal organ, elicits a distinct oriented locomotion termed phonotaxis. In this study, we examined the cross-modal effects on wind-elicited walking when an acoustic stimulus was preceded by an air-puff and tested whether the auditory modulation depends on the coincidence of the direction of both stimuli. A preceding 10 kHz pure tone biased the wind-elicited walking in a backward direction and elevated a threshold of the wind-elicited response, whereas other movement parameters, including turn angle, reaction time, walking speed and distance were unaffected. The auditory modulations, however, did not depend on the coincidence of the stimulus directions. A preceding sound consistently altered the wind-elicited walking direction and response probability throughout the experimental sessions, meaning that the auditory modulation did not result from previous experience or associative learning. These results suggest that the cricket nervous system is able to integrate auditory and air-puff stimuli, and modulate the wind-elicited escape behavior depending on the acoustic context. © 2015. Published by The Company of Biologists Ltd.
Bell, Brittany A; Phan, Mimi L; Vicario, David S
2015-03-01
How do social interactions form and modulate the neural representations of specific complex signals? This question can be addressed in the songbird auditory system. Like humans, songbirds learn to vocalize by imitating tutors heard during development. These learned vocalizations are important in reproductive and social interactions and in individual recognition. As a model for the social reinforcement of particular songs, male zebra finches were trained to peck for a food reward in response to one song stimulus (GO) and to withhold responding for another (NoGO). After performance reached criterion, single and multiunit neural responses to both trained and novel stimuli were obtained from multiple electrodes inserted bilaterally into two songbird auditory processing areas [caudomedial mesopallium (CMM) and caudomedial nidopallium (NCM)] of awake, restrained birds. Neurons in these areas undergo stimulus-specific adaptation to repeated song stimuli, and responses to familiar stimuli adapt more slowly than to novel stimuli. The results show that auditory responses differed in NCM and CMM for trained (GO and NoGO) stimuli vs. novel song stimuli. When subjects were grouped by the number of training days required to reach criterion, fast learners showed larger neural responses and faster stimulus-specific adaptation to all stimuli than slow learners in both areas. Furthermore, responses in NCM of fast learners were more strongly left-lateralized than in slow learners. Thus auditory responses in these sensory areas not only encode stimulus familiarity, but also reflect behavioral reinforcement in our paradigm, and can potentially be modulated by social interactions. Copyright © 2015 the American Physiological Society.
Salisbury, Dean F
2011-01-01
Deviations from repetitive auditory stimuli evoke a mismatch negativity (MMN). Counter-intuitively, omissions of repetitive stimuli do not. Violations of patterns reflecting complex rules also evoke MMN. To detect an MMN to missing stimuli, we developed an auditory gestalt task using one stimulus. Groups of 6 pips (50 msec duration, 330 msec stimulus onset asynchrony (SOA), 400 trials) were presented with an inter-trial interval (ITI) of 750 msec while subjects (n=16) watched a silent video. Occasional deviant groups had missing 4th or 6th tones (50 trials each). Missing stimuli evoked an MMN (p<.05). The missing 4th (−0.8 uV, p<.01) and the missing 6th stimuli (−1.1 uV, p<.05) were more negative than standard 6th stimuli (0.3 uV). MMN can be elicited by a missing stimulus at long SOAs by violation of a gestalt grouping rule. Homogeneous stimulus streams appear to differ from strongly patterned streams in the relative weighting given to omissions. PMID:22221004
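For concreteness, a minimal sketch of the stimulus timing described above (where exactly the 750 msec ITI begins is an assumption; here it is referenced to the nominal end of the six-pip group):

```python
import random

random.seed(0)
SOA, ITI, PIPS = 0.330, 0.750, 6       # seconds; values from the abstract

def group_onsets(start, omit=None):
    """Onset times (s) of the pips in one six-pip group; omit is a 1-based
    pip position to drop (the deviant "missing stimulus")."""
    return [start + i * SOA for i in range(PIPS) if i + 1 != omit]

trials = ["std"] * 400 + ["omit4"] * 50 + ["omit6"] * 50
random.shuffle(trials)

onsets, t = [], 0.0
for kind in trials:
    onsets.extend(group_onsets(t, {"std": None, "omit4": 4, "omit6": 6}[kind]))
    t += PIPS * SOA + ITI              # next group after the nominal group end + ITI

print(f"{len(trials)} groups, {len(onsets)} pip onsets, ~{t / 60:.1f} min total")
```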
Sussman, Elyse; Winkler, István; Kreuzer, Judith; Saher, Marieke; Näätänen, Risto; Ritter, Walter
2002-12-01
Our previous study showed that the auditory context could influence whether two successive acoustic changes occurring within the temporal integration window (approximately 200 ms) were pre-attentively encoded as a single auditory event or as two discrete events (Cogn Brain Res 12 (2001) 431). The aim of the current study was to assess whether top-down processes could influence the stimulus-driven processes in determining what constitutes an auditory event. The electroencephalogram (EEG) was recorded from 11 scalp electrodes to frequently occurring standard and infrequently occurring deviant sounds. Within the stimulus blocks, deviants either occurred only in pairs (successive feature changes) or both singly and in pairs. Event-related potential indices of change and target detection, the mismatch negativity (MMN) and the N2b component, respectively, were compared with the simultaneously measured performance in discriminating the deviants. Even though subjects could voluntarily distinguish the two successive auditory feature changes from each other, which was also indicated by the elicitation of the N2b target-detection response, top-down processes did not modify the event organization reflected by the MMN response. Top-down processes can extract elemental auditory information from a single integrated acoustic event, but the extraction occurs at a later processing stage than the one whose outcome is indexed by MMN. Initial processes of auditory event-formation are fully governed by the context within which the sounds occur: perceiving the deviants as two separate sound events (the top-down effect) did not change the initial neural representation of those deviants as one event (indexed by the MMN) in the absence of a corresponding change in the stimulus-driven sound organization.
ERIC Educational Resources Information Center
Vause, Tricia; Martin, Garry L.; Yu, C.T.; Marion, Carole; Sakko, Gina
2005-01-01
The relationship between language, performance on the Assessment of Basic Learning Abilities (ABLA) test, and stimulus equivalence was examined. Five participants with minimal verbal repertoires were studied; 3 who passed up to ABLA Level 4, a visual quasi-identity discrimination and 2 who passed ABLA Level 6, an auditory-visual nonidentity…
Shared Processing of Language and Music.
Atherton, Ryan P; Chrobak, Quin M; Rauscher, Frances H; Karst, Aaron T; Hanson, Matt D; Steinert, Steven W; Bowe, Kyra L
2018-01-01
The present study sought to explore whether musical information is processed by the phonological loop component of the working memory model of immediate memory. Original instantiations of this model primarily focused on the processing of linguistic information. However, the model was less clear about how acoustic information lacking phonological qualities is actively processed. Although previous research has generally supported shared processing of phonological and musical information, these studies were limited as a result of a number of methodological concerns (e.g., the use of simple tones as musical stimuli). In order to further investigate this issue, an auditory interference task was employed. Specifically, participants heard an initial stimulus (musical or linguistic) followed by an intervening stimulus (musical, linguistic, or silence) and were then asked to indicate whether a final test stimulus was the same as or different from the initial stimulus. Results indicated that mismatched interference conditions (i.e., musical - linguistic; linguistic - musical) resulted in greater interference than silence conditions, with matched interference conditions producing the greatest interference. Overall, these results suggest that processing of linguistic and musical information draws on at least some of the same cognitive resources.
Double dissociation of 'what' and 'where' processing in auditory cortex.
Lomber, Stephen G; Malhotra, Shveta
2008-05-01
Studies of cortical connections or neuronal function in different cerebral areas support the hypothesis that parallel cortical processing streams, similar to those identified in visual cortex, may exist in the auditory system. However, this model has not yet been behaviorally tested. We used reversible cooling deactivation to investigate whether the individual regions in cat nonprimary auditory cortex that are responsible for processing the pattern of an acoustic stimulus or localizing a sound in space could be doubly dissociated in the same animal. We found that bilateral deactivation of the posterior auditory field resulted in deficits in a sound-localization task, whereas bilateral deactivation of the anterior auditory field resulted in deficits in a pattern-discrimination task, but not vice versa. These findings support a model of cortical organization that proposes that identifying an acoustic stimulus ('what') and its spatial location ('where') are processed in separate streams in auditory cortex.
Optimal resource allocation for novelty detection in a human auditory memory.
Sinkkonen, J; Kaski, S; Huotilainen, M; Ilmoniemi, R J; Näätänen, R; Kaila, K
1996-11-04
A theory of resource allocation for neuronal low-level filtering is presented, based on an analysis of optimal resource allocation in simple environments. A quantitative prediction of the theory was verified in measurements of the magnetic mismatch response (MMR), an auditory event-related magnetic response of the human brain. The amplitude of the MMR was found to be directly proportional to the information conveyed by the stimulus. To the extent that the amplitude of the MMR can be used to measure resource usage by the auditory cortex, this finding supports our theory that, at least for early auditory processing, energy resources are used in proportion to the information content of incoming stimulus flow.
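Under the standard surprisal definition, a stimulus occurring with probability p conveys -log2(p) bits of information, so the proportionality reported above predicts larger MMR amplitudes for rarer deviants; a minimal illustration:

```python
from math import log2

# Surprisal: the information conveyed by a stimulus of occurrence probability p.
for p in (0.5, 0.1, 0.01):
    print(f"p = {p:>4}: {-log2(p):.1f} bits")
```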
Stevens, Courtney; Paulsen, David; Yasen, Alia; Neville, Helen
2015-02-01
Previous neuroimaging studies indicate that lower socio-economic status (SES) is associated with reduced effects of selective attention on auditory processing. Here, we investigated whether lower SES is also associated with differences in a stimulus-driven aspect of auditory processing: the neural refractory period, or reduced amplitude response at faster rates of stimulus presentation. Thirty-two children aged 3 to 8 years participated, and were divided into two SES groups based on maternal education. Event-related brain potentials were recorded to probe stimuli presented at interstimulus intervals (ISIs) of 200, 500, or 1000 ms. These probes were superimposed on story narratives when attended and ignored, permitting a simultaneous experimental manipulation of selective attention. Results indicated that group differences in refractory periods differed as a function of attention condition. Children from higher SES backgrounds showed full neural recovery by 500 ms for attended stimuli, but required at least 1000 ms for unattended stimuli. In contrast, children from lower SES backgrounds showed similar refractory effects to attended and unattended stimuli, with full neural recovery by 500 ms. Thus, in higher SES children only, one functional consequence of selective attention is attenuation of the response to unattended stimuli, particularly at rapid ISIs, altering basic properties of the auditory refractory period. Together, these data indicate that differences in selective attention impact basic aspects of auditory processing in children from lower SES backgrounds. Copyright © 2013 Elsevier B.V. All rights reserved.
Xu, Yifang; Collins, Leslie M
2004-04-01
The incorporation of low levels of noise into an electrical stimulus has been shown to improve auditory thresholds in some human subjects (Zeng et al., 2000). In this paper, thresholds for noise-modulated pulse-train stimuli are predicted utilizing a stochastic neural-behavioral model of ensemble fiber responses to biphasic stimuli. The neural refractory effect is described using a Markov model for a noise-free pulse-train stimulus, and a closed-form solution for the steady-state neural response is provided. For noise-modulated pulse-train stimuli, a recursive method using the conditional probability is utilized to track the neural responses to each successive pulse. A neural spike count rule has been presented for both threshold and intensity discrimination under the assumption that auditory perception occurs via integration over a relatively long time period (Bruce et al., 1999). An alternative approach originates from the hypothesis of the multilook model (Viemeister and Wakefield, 1991), which argues that auditory perception is based on several shorter time integrations and may suggest an N-of-M model for prediction of pulse-train threshold. This motivates analyzing the neural response to each individual pulse within a pulse train, each pulse being treated as a brief look. A logarithmic rule is hypothesized for pulse-train threshold. Predictions from the multilook model are shown to match trends in psychophysical data for noise-free stimuli that are not always matched by the long-time integration rule. Theoretical predictions indicate that threshold decreases as noise variance increases. Theoretical models of the neural response to pulse-train stimuli not only reduce computational overhead but also facilitate utilization of signal detection theory and are easily extended to multichannel psychophysical tasks.
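One simple way to formalize the multilook idea is to treat each pulse as an independent brief look and combine looks with an N-of-M decision rule; the following is a sketch under that assumption, not the authors' exact model:

```python
from math import comb

def detect_prob_n_of_m(p_pulse, m, n):
    """Probability of detecting an m-pulse train when each pulse is an
    independent "look" detected with probability p_pulse and the stimulus
    counts as detected if at least n looks succeed."""
    return sum(comb(m, k) * p_pulse**k * (1 - p_pulse)**(m - k)
               for k in range(n, m + 1))

# More pulses supply more looks, so the same per-pulse detectability yields
# higher overall detection -- the qualitative reason pulse-train thresholds
# fall with train length.
for m in (4, 16, 64):
    print(f"m = {m:2d}: P(detect) = {detect_prob_n_of_m(0.1, m, 2):.3f}")
```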
NASA Astrophysics Data System (ADS)
Mills, David M.
2003-02-01
Characteristics of distortion product otoacoustic emissions (DPOAEs) and auditory brainstem responses (ABRs) were measured in Mongolian gerbil before and after the introduction of two different auditory dysfunctions: (1) acoustic damage with a high-intensity tone, or (2) furosemide intoxication. The goal was to find emission parameters and measures that best differentiated between the two dysfunctions, e.g., at a given ABR threshold elevation. Emission input-output or "growth" functions were used (frequencies f1 and f2, f2/f1=1.21) with equal levels, L1=L2, and unequal levels, with L1=L2+20 dB. The best parametric choice was found to be unequal stimulus levels, and the best measure was found to be the change in the emission threshold level, Δx. The emission threshold was defined as the stimulus level required to reach a criterion emission amplitude, in this case -10 dB SPL. (The next best measure was the change in emission amplitude at high stimulus levels, specifically that measured at L1×L2=90×70 dB SPL.) For an ABR threshold shift of 20 dB or more, there was essentially no overlap in the emission threshold measures for the two conditions, sound damage or furosemide. The dividing line between the two distributions increased slowly with the change in ABR threshold, ΔABR, and was given by Δxt = 0.6 ΔABR + 8 dB. For a given ΔABR, if the shift in emission threshold was more than the calculated dividing line value, Δxt, the auditory dysfunction was due to acoustic damage; if less, it was due to furosemide.
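The reported dividing line translates directly into a simple classifier; the numeric inputs below are hypothetical:

```python
def classify_dysfunction(delta_abr_db, delta_emission_db):
    """Dividing-line rule from the study: Δxt = 0.6·ΔABR + 8 dB. An
    emission-threshold shift above the line indicates acoustic damage;
    a shift below it indicates furosemide intoxication."""
    dividing_line = 0.6 * delta_abr_db + 8.0
    return "acoustic damage" if delta_emission_db > dividing_line else "furosemide"

print(classify_dysfunction(30, 35))   # line at 26 dB -> acoustic damage
print(classify_dysfunction(30, 20))   # below the line -> furosemide
```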
Ponnath, Abhilash; Hoke, Kim L; Farris, Hamilton E
2013-04-01
Neural adaptation, a reduction in the response to a maintained stimulus, is an important mechanism for detecting stimulus change. Contributing to change detection is the fact that adaptation is often stimulus specific: adaptation to a particular stimulus reduces excitability to a specific subset of stimuli, while the ability to respond to other stimuli is unaffected. Phasic cells (e.g., cells responding to stimulus onset) are good candidates for detecting the most rapid changes in natural auditory scenes, as they exhibit fast and complete adaptation to an initial stimulus presentation. We made recordings of single phasic auditory units in the frog midbrain to determine if adaptation was specific to stimulus frequency and ear of input. In response to an instantaneous frequency step in a tone, 28% of phasic cells exhibited frequency specific adaptation based on a relative frequency change (delta-f=±16%). Frequency specific adaptation was not limited to frequency steps, however, as adaptation was also overcome during continuous frequency modulated stimuli and in response to spectral transients interrupting tones. The results suggest that adaptation is separated for peripheral (e.g., frequency) channels. This was tested directly using dichotic stimuli. In 45% of binaural phasic units, adaptation was ear specific: adaptation to stimulation of one ear did not affect responses to stimulation of the other ear. Thus, adaptation exhibited specificity for stimulus frequency and lateralization at the level of the midbrain. This mechanism could be employed to detect rapid stimulus change within and between sound sources in complex acoustic environments.
The Rhythm of Perception: Entrainment to Acoustic Rhythms Induces Subsequent Perceptual Oscillation.
Hickok, Gregory; Farahbod, Haleh; Saberi, Kourosh
2015-07-01
Acoustic rhythms are pervasive in speech, music, and environmental sounds. Recent evidence for neural codes representing periodic information suggests that they may be a neural basis for the ability to detect rhythm. Further, rhythmic information has been found to modulate auditory-system excitability, which provides a potential mechanism for parsing the acoustic stream. Here, we explored the effects of a rhythmic stimulus on subsequent auditory perception. We found that a low-frequency (3 Hz), amplitude-modulated signal induces a subsequent oscillation of the perceptual detectability of a brief nonperiodic acoustic stimulus (1-kHz tone); the frequency but not the phase of the perceptual oscillation matches the entrained stimulus-driven rhythmic oscillation. This provides evidence that rhythmic contexts have a direct influence on subsequent auditory perception of discrete acoustic events. Rhythm coding is likely a fundamental feature of auditory-system design that predates the development of explicit human enjoyment of rhythm in music or poetry. © The Author(s) 2015.
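The core analysis, fitting a sinusoid fixed at the entrained 3-Hz rate (with free amplitude and phase) to detection performance as a function of probe delay, can be sketched as follows; the accuracies are simulated for illustration and are not the published data:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

# Hypothetical detection accuracy of the brief 1-kHz probe as a function of
# its delay after the 3-Hz entrainer.
delays = np.linspace(0.0, 1.0, 25)
acc = (0.7 + 0.1 * np.sin(2 * np.pi * 3.0 * delays + 1.2)
       + 0.02 * rng.standard_normal(delays.size))

def osc(d, amp, phase, base):
    """Sinusoid fixed at the entrained 3-Hz rate; amplitude and phase are free."""
    return base + amp * np.sin(2 * np.pi * 3.0 * d + phase)

(amp, phase, base), _ = curve_fit(osc, delays, acc, p0=[0.1, 0.0, 0.7])
print(f"fitted 3-Hz oscillation: amplitude {amp:.3f}, "
      f"phase {phase:.2f} rad, baseline {base:.2f}")
```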
Emotion modulates activity in the 'what' but not 'where' auditory processing pathway.
Kryklywy, James H; Macpherson, Ewan A; Greening, Steven G; Mitchell, Derek G V
2013-11-15
Auditory cortices can be separated into dissociable processing pathways similar to those observed in the visual domain. Emotional stimuli elicit enhanced neural activation within sensory cortices when compared to neutral stimuli. This effect is particularly notable in the ventral visual stream. Little is known, however, about how emotion interacts with dorsal processing streams, and essentially nothing is known about the impact of emotion on auditory stimulus localization. In the current study, we used fMRI in concert with individualized auditory virtual environments to investigate the effect of emotion during an auditory stimulus localization task. Surprisingly, participants were significantly slower to localize emotional relative to neutral sounds. A separate localizer scan was performed to isolate neural regions sensitive to stimulus location independent of emotion. When applied to the main experimental task, a significant main effect of location, but not emotion, was found in this ROI. A whole-brain analysis of the data revealed that posterior-medial regions of auditory cortex were modulated by sound location; however, additional anterior-lateral areas of auditory cortex demonstrated enhanced neural activity to emotional compared to neutral stimuli. The latter region resembled areas described in dual pathway models of auditory processing as the 'what' processing stream, prompting a follow-up task to generate an identity-sensitive ROI (the 'what' pathway) independent of location and emotion. Within this region, significant main effects of location and emotion were identified, as well as a significant interaction. These results suggest that emotion modulates activity in the 'what,' but not the 'where,' auditory processing pathway. Copyright © 2013 Elsevier Inc. All rights reserved.
Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong
2017-02-01
Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimuli discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A±50V) to a depression (A±150V). In older adults, the pattern of alteration was similar to that for younger adults with the expansion of SOA; however, older adults showed significantly delayed onset for the time-window-of-integration and peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely with the expansion of SOA, especially in the peak latency for V-preceded-A conditions in older adults. Our study suggested that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that the response for older adults was slowed and provided empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.
Gardner-Berry, Kirsty; Chang, Hsiuwen; Ching, Teresa Y. C.; Hou, Sanna
2016-01-01
With the introduction of newborn hearing screening, infants are being diagnosed with hearing loss during the first few months of life. For infants with a sensory/neural hearing loss (SNHL), the audiogram can be estimated objectively using auditory brainstem response (ABR) testing and hearing aids prescribed accordingly. However, for infants with auditory neuropathy spectrum disorder (ANSD) due to the abnormal/absent ABR waveforms, alternative measures of auditory function are needed to assess the need for amplification and evaluate whether aided benefit has been achieved. Cortical auditory evoked potentials (CAEPs) are used to assess aided benefit in infants with hearing loss; however, there is insufficient information regarding the relationship between stimulus audibility and CAEP detection rates. It is also not clear whether CAEP detection rates differ between infants with SNHL and infants with ANSD. This study involved retrospective collection of CAEP, hearing threshold, and hearing aid gain data to investigate the relationship between stimulus audibility and CAEP detection rates. The results demonstrate that increases in stimulus audibility result in an increase in detection rate. For the same range of sensation levels, there was no difference in the detection rates between infants with SNHL and ANSD. PMID:27587922
Bendor, Daniel
2015-01-01
In auditory cortex, temporal information within a sound is represented by two complementary neural codes: a temporal representation based on stimulus-locked firing and a rate representation, where discharge rate co-varies with the timing between acoustic events but lacks a stimulus-synchronized response. Using a computational neuronal model, we find that stimulus-locked responses are generated when sound-evoked excitation is combined with strong, delayed inhibition. In contrast to this, a non-synchronized rate representation is generated when the net excitation evoked by the sound is weak, which occurs when excitation is coincident and balanced with inhibition. Using single-unit recordings from awake marmosets (Callithrix jacchus), we validate several model predictions, including differences in the temporal fidelity, discharge rates and temporal dynamics of stimulus-evoked responses between neurons with rate and temporal representations. Together these data suggest that feedforward inhibition provides a parsimonious explanation of the neural coding dichotomy observed in auditory cortex. PMID:25879843
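A minimal sketch in the spirit of that model (kernel shapes, gains, and delays are illustrative assumptions): strong inhibition arriving a few milliseconds after excitation leaves a stimulus-locked onset transient after each pulse, whereas coincident, balanced inhibition leaves only weak net drive, the regime associated with the non-synchronized rate representation:

```python
import numpy as np

dt = 0.001                                      # 1-ms resolution
t = np.arange(0.0, 0.5, dt)                     # 0.5 s of a 20-Hz pulse train
pulse_idx = np.round(np.arange(0.0, 0.5, 0.050) / dt).astype(int)

def alpha_kernel(tau, length=0.2):
    k = np.arange(0.0, length, dt)
    return (k / tau) * np.exp(1.0 - k / tau)    # peaks at 1.0 when k == tau

def net_drive(inh_gain, inh_delay):
    """Rectified excitation minus (possibly delayed) inhibition to a pulse train."""
    stim = np.zeros_like(t)
    stim[pulse_idx] = 1.0
    exc = np.convolve(stim, alpha_kernel(0.005))[:t.size]            # fast excitation
    inh = inh_gain * np.convolve(stim, alpha_kernel(0.010))[:t.size] # slower inhibition
    d = int(round(inh_delay / dt))
    inh = np.concatenate([np.zeros(d), inh[:t.size - d]])
    return np.maximum(exc - inh, 0.0)

locked = net_drive(inh_gain=1.5, inh_delay=0.005)  # strong, delayed inhibition
rate = net_drive(inh_gain=1.0, inh_delay=0.0)      # balanced, coincident inhibition
print(f"delayed inhibition: peak net drive {locked.max():.2f} (stimulus-locked onsets)")
print(f"coincident inhibition: peak net drive {rate.max():.2f} (weak synchronized drive)")
```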
Yang, Weiping; Li, Qi; Ochi, Tatsuya; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Takahashi, Satoshi; Wu, Jinglong
2013-01-01
This article aims to investigate whether auditory stimuli in the horizontal plane, particularly originating from behind the participant, affect audiovisual integration by using behavioral and event-related potential (ERP) measurements. In this study, visual stimuli were presented directly in front of the participants, auditory stimuli were presented at one location in an equidistant horizontal plane at the front (0°, the fixation point), right (90°), back (180°), or left (270°) of the participants, and audiovisual stimuli that include both visual stimuli and auditory stimuli originating from one of the four locations were simultaneously presented. These stimuli were presented randomly with equal probability; during this time, participants were asked to attend to the visual stimulus and respond promptly only to visual target stimuli (a unimodal visual target stimulus and the visual target of the audiovisual stimulus). A significant facilitation of reaction times and hit rates was obtained following audiovisual stimulation, irrespective of whether the auditory stimuli were presented in the front or back of the participant. However, no significant interactions were found between visual stimuli and auditory stimuli from the right or left. Two main ERP components related to audiovisual integration were found: first, auditory stimuli from the front location produced an ERP reaction over the right temporal area and right occipital area at approximately 160-200 milliseconds; second, auditory stimuli from the back produced a reaction over the parietal and occipital areas at approximately 360-400 milliseconds. Our results confirmed that audiovisual integration was elicited even when auditory stimuli were presented behind the participant, but that no integration occurred when auditory stimuli were presented in the right or left spaces, suggesting that the human brain might be more sensitive to information received from behind than from either side.
Aging effects on functional auditory and visual processing using fMRI with variable sensory loading.
Cliff, Michael; Joyce, Dan W; Lamar, Melissa; Dannhauser, Thomas; Tracy, Derek K; Shergill, Sukhwinder S
2013-05-01
Traditionally, studies investigating the functional implications of age-related structural brain alterations have focused on higher cognitive processes; by increasing stimulus load, these studies assess behavioral and neurophysiological performance. In order to understand age-related changes in these higher cognitive processes, it is crucial to examine changes in visual and auditory processes that are the gateways to higher cognitive functions. This study provides evidence for age-related functional decline in visual and auditory processing, and regional alterations in functional brain processing, using non-invasive neuroimaging. Using functional magnetic resonance imaging (fMRI), younger (n=11; mean age=31) and older (n=10; mean age=68) adults were imaged while observing flashing checkerboard images (passive visual stimuli) and hearing word lists (passive auditory stimuli) across varying stimuli presentation rates. Younger adults showed greater overall levels of temporal and occipital cortical activation than older adults for both auditory and visual stimuli. The relative change in activity as a function of stimulus presentation rate showed differences between young and older participants. In visual cortex, the older group showed a decrease in fMRI blood oxygen level dependent (BOLD) signal magnitude as stimulus frequency increased, whereas the younger group showed a linear increase. In auditory cortex, the younger group showed a relative increase as a function of word presentation rate, while older participants showed a relatively stable magnitude of fMRI BOLD response across all rates. When analyzing participants across all ages, only the auditory cortical activation showed a continuous, monotonically decreasing BOLD signal magnitude as a function of age. Our preliminary findings show an age-related decline in demand-related, passive early sensory processing. As stimulus demand increases, visual and auditory cortex do not show increases in activity in older compared to younger people. This may negatively impact on the fidelity of information available to higher cognitive processing. Such evidence may inform future studies focused on cognitive decline in aging. Copyright © 2012 Elsevier Ltd. All rights reserved.
Covic, Amra; Keitel, Christian; Porcu, Emanuele; Schröger, Erich; Müller, Matthias M
2017-11-01
The neural processing of a visual stimulus can be facilitated by attending to its position or by a co-occurring auditory tone. Using frequency-tagging, we investigated whether facilitation by spatial attention and audio-visual synchrony rely on similar neural processes. Participants attended to one of two flickering Gabor patches (14.17 and 17 Hz) located in opposite lower visual fields. Gabor patches further "pulsed" (i.e. showed smooth spatial frequency variations) at distinct rates (3.14 and 3.63 Hz). Frequency-modulating an auditory stimulus at the pulse-rate of one of the visual stimuli established audio-visual synchrony. Flicker and pulsed stimulation elicited stimulus-locked rhythmic electrophysiological brain responses that allowed tracking the neural processing of simultaneously presented Gabor patches. These steady-state responses (SSRs) were quantified in the spectral domain to examine visual stimulus processing under conditions of synchronous vs. asynchronous tone presentation and when respective stimulus positions were attended vs. unattended. Strikingly, unique patterns of effects on pulse- and flicker driven SSRs indicated that spatial attention and audiovisual synchrony facilitated early visual processing in parallel and via different cortical processes. We found attention effects to resemble the classical top-down gain effect facilitating both, flicker and pulse-driven SSRs. Audio-visual synchrony, in turn, only amplified synchrony-producing stimulus aspects (i.e. pulse-driven SSRs) possibly highlighting the role of temporally co-occurring sights and sounds in bottom-up multisensory integration. Copyright © 2017 Elsevier Inc. All rights reserved.
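Quantifying SSRs in the spectral domain amounts to reading FFT amplitudes at the tagged frequencies. A self-contained sketch with simulated data (the sampling rate, trial length, and component amplitudes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 512                                  # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)              # one 60-s stimulation period (assumed)
tags = {"flicker 14.17 Hz": (14.17, 1.0), "flicker 17 Hz": (17.00, 0.8),
        "pulse 3.14 Hz": (3.14, 0.6), "pulse 3.63 Hz": (3.63, 0.5)}

# Simulated EEG: one steady-state response per tagging frequency plus noise.
eeg = sum(a * np.sin(2 * np.pi * f * t) for f, a in tags.values())
eeg = eeg + rng.standard_normal(t.size)

# Spectral quantification: read the FFT amplitude at the bin nearest each
# tagged frequency (off-grid frequencies lose a little to scalloping).
spectrum = np.abs(np.fft.rfft(eeg)) / (t.size / 2)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for name, (f, _) in tags.items():
    amp = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"{name}: SSR amplitude ~ {amp:.2f}")
```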
Mühler, Roland; Rahne, Torsten; Verhey, Jesko L
2013-01-01
Recently an optimized broad-band chirp stimulus has been proposed for the objective estimation of hearing thresholds with auditory brainstem responses (ABRs). Several studies have demonstrated that this stimulus, compensating for the travelling wave delay of the frequency components of a click stimulus at the basilar membrane, evokes larger ABR amplitudes in adults. This study analyses the amplitude of chirp-evoked ABRs recorded in infants below 48 months of age under clinical conditions and compares these results with literature data. Chirp-evoked ABR recordings in 46 infants under chloral hydrate sedation or general anaesthesia were analysed retrospectively. The amplitude of the wave V was measured as a function of the stimulus intensity. To compare ABR amplitudes across infants with different hearing losses, the stimulus intensity was readjusted to the subjects' individual physiological threshold in dB SL (sensation level). Individual wave V amplitudes were plotted against stimulus intensity and individual amplitude growth functions were calculated. To investigate the maturation of chirp-evoked ABR, data from infants below and above 18 months of age were analysed separately. Chirp-evoked ABR amplitudes in both age groups were larger than the click-evoked ABR amplitudes in young infants from the literature. Amplitudes of chirp-evoked ABR in infants above 18 months of age were not substantially smaller than those reported for normal-hearing adults. Amplitudes recorded in infants below 18 months were significantly smaller than those in infants above 18 months. A significant difference between chirp-evoked ABR amplitudes recorded in sedation or under general anaesthesia was not found. The higher amplitudes of ABR elicited by a broadband chirp stimulus allow for a reduction of the recording time in young infants. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Neural Correlates of the Binaural Masking Level Difference in Human Frequency-Following Responses.
Clinard, Christopher G; Hodgson, Sarah L; Scherer, Mary Ellen
2017-04-01
The binaural masking level difference (BMLD) is an auditory phenomenon where binaural tone-in-noise detection is improved when the phase of either signal or noise is inverted in one of the ears (SπNo or SoNπ, respectively), relative to detection when signal and noise are in identical phase at each ear (SoNo). Processing related to BMLDs and interaural time differences has been confirmed in the auditory brainstem of non-human mammals; in the human auditory brainstem, phase-locked neural responses elicited by BMLD stimuli have not been systematically examined across signal-to-noise ratio. Behavioral and physiological testing was performed in three binaural stimulus conditions: SoNo, SπNo, and SoNπ. BMLDs at 500 Hz were obtained from 14 young, normal-hearing adults (ages 21-26). Physiological BMLDs used the frequency-following response (FFR), a scalp-recorded auditory evoked potential dependent on sustained phase-locked neural activity; FFR tone-in-noise detection thresholds were used to calculate physiological BMLDs. FFR BMLDs were significantly smaller (poorer) than behavioral BMLDs, and FFR BMLDs did not reflect a physiological release from masking, on average. Raw FFR amplitude showed substantial reductions in the SπNo condition relative to the SoNo and SoNπ conditions, consistent with negative effects of phase summation from left- and right-ear FFRs. FFR amplitude differences between stimulus conditions (e.g., SoNo amplitude minus SπNo amplitude) were significantly predictive of behavioral SπNo BMLDs; individuals with larger amplitude differences had larger (better) behavioral BMLDs and individuals with smaller amplitude differences had smaller (poorer) behavioral BMLDs. These data indicate a role for sustained phase-locked neural activity in BMLDs of humans and are the first to show predictive relationships between behavioral BMLDs and human brainstem responses.
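Computing a BMLD from measured thresholds is a simple difference against the diotic baseline; the threshold values below are hypothetical but of typical magnitude:

```python
# Hypothetical 500-Hz tone-in-noise detection thresholds (dB SNR); the BMLD is
# the improvement of a phase-inverted condition over the diotic SoNo baseline.
thresholds = {"SoNo": -8.0, "SpiNo": -20.0, "SoNpi": -14.0}   # illustrative values

for cond in ("SpiNo", "SoNpi"):
    bmld = thresholds["SoNo"] - thresholds[cond]
    print(f"{cond} BMLD = {bmld:.0f} dB")
```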
Nakajima, S
2000-03-14
Pigeons were trained with the A+, AB-, ABC+, AD- and ADE+ task where each of stimulus A and stimulus compounds ABC and ADE signalled food (positive trials), and each of stimulus compounds AB and AD signalled no food (negative trials). Stimuli A, B, C and E were small visual figures localised on a response key, and stimulus D was a white noise. Stimulus B was more effective than D as an inhibitor of responding to A during the training. After the birds learned to respond exclusively on the positive trials, effects of B and D on responding to C and E, respectively, were tested by comparing C, BC, E and DE trials. Stimulus B continuously facilitated responding to C on the BC test trials, but D's facilitative effect was observed only on the first DE test trial. Stimulus B also facilitated responding to E on BE test trials. Implications for the Rescorla-Wagner elemental model and the Pearce configural model of Pavlovian conditioning were discussed.
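The Rescorla-Wagner elemental model discussed above updates every stimulus present on a trial by a common prediction error. A minimal simulation of this A+, AB-, ABC+, AD-, ADE+ design follows; all parameter values (saliences, learning rate, asymptote, trial counts) are illustrative assumptions, not fits to the pigeon data, and the lower salience assigned to the noise stimulus D is just one way to sketch its weaker inhibitory effect:

```python
# Minimal Rescorla-Wagner simulation of the A+, AB-, ABC+, AD-, ADE+ design.
import random

alpha = {"A": 0.3, "B": 0.3, "C": 0.3, "D": 0.15, "E": 0.3}  # assumed saliences
beta, lam = 1.0, 1.0             # learning rate; asymptote on reinforced trials
V = {s: 0.0 for s in "ABCDE"}    # associative strengths

trials = [("A", 1), ("AB", 0), ("ABC", 1), ("AD", 0), ("ADE", 1)]
random.seed(0)
for _ in range(400):
    compound, reinforced = random.choice(trials)
    v_sum = sum(V[s] for s in compound)            # summed strength of present cues
    error = (lam if reinforced else 0.0) - v_sum   # common prediction error
    for s in compound:
        V[s] += alpha[s] * beta * error            # elemental update

# B and D should end up negative (conditioned inhibitors), A, C, E positive.
print({s: round(v, 2) for s, v in V.items()})
```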
Stimulus uncertainty enhances long-term potentiation-like plasticity in human motor cortex.
Sale, Martin V; Nydam, Abbey S; Mattingley, Jason B
2017-03-01
Plasticity can be induced in human cortex using paired associative stimulation (PAS), which repeatedly and predictably pairs a peripheral electrical stimulus with transcranial magnetic stimulation (TMS) to the contralateral motor region. Many studies have reported small or inconsistent effects of PAS. Given that uncertain stimuli can promote learning, the predictable nature of the stimulation in conventional PAS paradigms might serve to attenuate plasticity induction. Here, we introduced stimulus uncertainty into the PAS paradigm to investigate whether it can boost plasticity induction. Across two experimental sessions, participants (n = 28) received a modified PAS paradigm consisting of a random combination of 90 paired stimuli and 90 unpaired (TMS-only) stimuli. Prior to each of these stimuli, participants also received an auditory cue that either reliably predicted whether the upcoming stimulus was paired or unpaired (no uncertainty condition) or did not predict the upcoming stimulus (maximum uncertainty condition). Motor evoked potentials (MEPs) recorded from the abductor pollicis brevis (APB) muscle quantified cortical excitability before and after PAS. MEP amplitude increased significantly 15 min following PAS in the maximum uncertainty condition. There was no reliable change in MEP amplitude in the no uncertainty condition, nor any reliable difference in post-PAS MEP amplitudes between the two conditions. These results suggest that stimulus uncertainty may provide a novel means to enhance plasticity induction with the PAS paradigm in human motor cortex. To provide further support for the notion that stimulus uncertainty and prediction error promote plasticity, future studies should explore the time course of these changes and investigate which aspects of stimulus uncertainty are critical for boosting plasticity. Copyright © 2016 Elsevier Ltd. All rights reserved.
Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya
2013-09-01
Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can implicitly strengthen automatic change detection from an early stage in a cross-sensory manner, at least in the vision to audition direction.
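The summation comparison described above is the standard additive-model test: the response to an audiovisual deviant is contrasted with the sum of the unisensory deviant responses, and any excess is taken as multisensory enhancement. A schematic sketch, with simulated waveforms standing in for the measured mismatch fields:

```python
import numpy as np

# Superadditivity test: response to audiovisual deviants vs. the sum of
# auditory-only and visual-only deviant responses. Simulated placeholders.
t = np.arange(0.0, 0.4, 0.002)               # 0-400 ms post-stimulus
rng = np.random.default_rng(2)
mmf_av = rng.standard_normal(t.size) * 1e-14  # audiovisual deviant response
mmf_a  = rng.standard_normal(t.size) * 1e-14  # auditory deviant response
mmf_v  = rng.standard_normal(t.size) * 1e-14  # visual deviant response

superadditive = mmf_av - (mmf_a + mmf_v)      # > 0 where AV exceeds the summation model
late = t >= 0.1                               # enhancement reported from ~100 ms onward
print(superadditive[late].mean())
```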
The Use of Picture Prompts and Prompt Delay to Teach Receptive Labeling
ERIC Educational Resources Information Center
Vedora, Joseph; Barry, Tiffany
2016-01-01
The current study extended research on picture prompts by using them with a progressive prompt delay to teach receptive labeling of pictures to 2 teenagers with autism. The procedure differed from prior research because the auditory stimulus was not presented or was presented only once during the picture-prompt condition. The results indicated…
Chen, Yi-Chuan; Spence, Charles
2013-01-01
The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.
Stimulus-specific adaptation and deviance detection in the inferior colliculus
Ayala, Yaneri A.; Malmierca, Manuel S.
2013-01-01
Deviancy detection in the continuous flow of sensory information into the central nervous system is of vital importance for animals. The task requires neuronal mechanisms that allow for an efficient representation of the environment by removing statistically redundant signals. Recently, the neuronal principles of auditory deviance detection have been approached by studying the phenomenon of stimulus-specific adaptation (SSA). SSA is a reduction in the responsiveness of a neuron to a common or repetitive sound while the neuron remains highly sensitive to rare sounds (Ulanovsky et al., 2003). This phenomenon could enhance the saliency of unexpected, deviant stimuli against a background of repetitive signals. SSA shares many similarities with the evoked potential known as the "mismatch negativity" (MMN), and it has been linked to cognitive processes such as auditory memory and scene analysis (Winkler et al., 2009) as well as to behavioral habituation (Netser et al., 2011). Neurons exhibiting SSA can be found at several levels of the auditory pathway, from the inferior colliculus (IC) up to the auditory cortex (AC). In this review, we offer an account of the state of the art of SSA studies in the IC with the aim of contributing to the growing interest in the single-neuron electrophysiology of auditory deviance detection. The dependence of neuronal SSA on various stimulus features, e.g., the probability of the deviant stimulus and the repetition rate, and the roles of the AC and inhibition in shaping SSA at the level of the IC are addressed. PMID:23335883
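SSA of this kind is conventionally quantified with the common SSA index introduced in the line of work cited above (Ulanovsky et al., 2003). A minimal implementation; the spike counts in the example are hypothetical:

```python
def ssa_index(d_f1, s_f1, d_f2, s_f2):
    """Common SSA index (CSI): d(f) and s(f) are a neuron's responses
    (e.g., spike counts) to frequency f when presented as deviant vs.
    standard. CSI ranges from -1 to 1; positive values indicate stronger
    responses to deviants, i.e., stimulus-specific adaptation."""
    num = (d_f1 + d_f2) - (s_f1 + s_f2)
    den = (d_f1 + d_f2) + (s_f1 + s_f2)
    return num / den

print(ssa_index(d_f1=40, s_f1=15, d_f2=35, s_f2=10))  # hypothetical counts
```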
van Laarhoven, Thijs; Stekelenburg, Jeroen J; Vroomen, Jean
2017-04-15
A rare omission of a sound that is predictable by anticipatory visual information induces an early negative omission response (oN1) in the EEG during the period of silence where the sound was expected. It was previously suggested that the oN1 was primarily driven by the identity of the anticipated sound. Here, we examined the role of temporal prediction in conjunction with identity prediction of the anticipated sound in the evocation of the auditory oN1. With incongruent audiovisual stimuli (a video of a handclap that is consistently combined with the sound of a car horn) we demonstrate in Experiment 1 that a natural match in identity between the visual and auditory stimulus is not required for inducing the oN1, and that the perceptual system can adapt predictions to unnatural stimulus events. In Experiment 2 we varied either the auditory onset (relative to the visual onset) or the identity of the sound across trials in order to hamper temporal and identity predictions. Relative to the natural stimulus with correct auditory timing and matching audiovisual identity, the oN1 was abolished when either the timing or the identity of the sound could not be predicted reliably from the video. Our study demonstrates the flexibility of the perceptual system in predictive processing (Experiment 1) and also shows that precise predictions of timing and content are both essential elements for inducing an oN1 (Experiment 2). Copyright © 2017 Elsevier B.V. All rights reserved.
Mismatch and conflict: neurophysiological and behavioral evidence for conflict priming.
Mager, Ralph; Meuth, Sven G; Kräuchi, Kurt; Schmidlin, Maria; Müller-Spahn, Franz; Falkenstein, Michael
2009-11-01
Conflict-related cognitive processes are critical for adapting to sudden environmental changes that confront the individual with inconsistent or ambiguous information; thus, these processes play a crucial role in coping with daily life. Conflicts tend to accumulate especially in complex and threatening situations, which raises the question of how conflict-related cognitive processes are modulated by the close succession of conflicts. In the present study, we investigated the effect of interactions between different types of conflict on performance as well as on electrophysiological parameters. A task-irrelevant auditory stimulus and a task-relevant visual stimulus were presented successively. The auditory stimulus consisted of a standard or deviant tone, followed by a congruent or incongruent Stroop stimulus. After standard prestimuli, performance deteriorated for incongruent compared to congruent Stroop stimuli, accompanied by a widespread negativity for incongruent versus congruent stimuli in the event-related potentials (ERPs). However, after deviant prestimuli, performance was better for incongruent than for congruent Stroop stimuli, and an additional early negativity with a fronto-central maximum emerged in the ERP. Our data show that deviant auditory prestimuli specifically facilitate the processing of stimulus-related conflict, providing evidence for a conflict-priming effect.
Direct Recordings of Pitch Responses from Human Auditory Cortex
Griffiths, Timothy D.; Kumar, Sukhbinder; Sedley, William; Nourski, Kirill V.; Kawasaki, Hiroto; Oya, Hiroyuki; Patterson, Roy D.; Brugge, John F.; Howard, Matthew A.
2010-01-01
Summary Pitch is a fundamental percept with a complex relationship to the associated sound structure [1]. Pitch perception requires brain representation of both the structure of the stimulus and the pitch that is perceived. We describe direct recordings of local field potentials from human auditory cortex made while subjects perceived the transition between noise and a noise with a regular repetitive structure in the time domain at the millisecond level called regular-interval noise (RIN) [2]. RIN is perceived to have a pitch when the rate is above the lower limit of pitch [3], at approximately 30 Hz. Sustained time-locked responses are observed to be related to the temporal regularity of the stimulus, commonly emphasized as a relevant stimulus feature in models of pitch perception (e.g., [1]). Sustained oscillatory responses are also demonstrated in the high gamma range (80–120 Hz). The regularity responses occur irrespective of whether the response is associated with pitch perception. In contrast, the oscillatory responses only occur for pitch. Both responses occur in primary auditory cortex and adjacent nonprimary areas. The research suggests that two types of pitch-related activity occur in humans in early auditory cortex: time-locked neural correlates of stimulus regularity and an oscillatory response related to the pitch percept. PMID:20605456
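The regular-interval noise used in these recordings is classically built by iterated delay-and-add: a noise is delayed by a fixed interval and summed with itself, and repeating this strengthens the temporal regularity until a pitch near 1/delay emerges. A rough sketch of that construction, with illustrative parameter choices:

```python
import numpy as np

def regular_interval_noise(fs=44100, dur=1.0, delay_s=1 / 125, n_iter=16, seed=0):
    """Delay-and-add construction of regular-interval noise (RIN).
    With enough iterations the sound acquires a pitch near 1/delay
    (125 Hz here, well above the ~30 Hz lower limit of pitch);
    all parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(int(fs * dur))
    d = int(round(fs * delay_s))
    for _ in range(n_iter):
        delayed = np.concatenate([np.zeros(d), x[:-d]])
        x = x + delayed                  # delay-and-add
    return x / np.abs(x).max()           # normalize to +/- 1

rin = regular_interval_noise()
```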
Auditory Discrimination Learning: Role of Working Memory
Zhang, Yu-Xuan; Moore, David R.; Guiraud, Jeanne; Molloy, Katharine; Yan, Ting-Ting; Amitay, Sygal
2016-01-01
Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretic framework for interactions between cognitive and sensory plasticity during perceptual experience. PMID:26799068
Cecere, Roberto; Gross, Joachim; Willis, Ashleigh; Thut, Gregor
2017-05-24
In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory-visual (AV) or visual-auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV-VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV-VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV-VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AV maps = VA maps versus AV maps ≠ VA maps. The tRSA results favored the AV maps ≠ VA maps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such a dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems. SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA). Copyright © 2017 Cecere et al.
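A stripped-down version of the map comparison at the heart of the tRSA is a per-time-point correlation of the AV and VA scalp topographies; low correlations favor the "AV maps ≠ VA maps" model at that latency. A sketch with simulated maps standing in for the isolated multisensory components:

```python
import numpy as np

# Simulated stand-ins for the isolated multisensory ERP components
# (channels x time; e.g., 64 channels, 500 ms at 500 Hz sampling).
n_chan, n_times = 64, 250
rng = np.random.default_rng(1)
av_maps = rng.standard_normal((n_chan, n_times))
va_maps = rng.standard_normal((n_chan, n_times))

# Correlate the AV and VA topographies at each time point.
r = np.array([np.corrcoef(av_maps[:, t], va_maps[:, t])[0, 1]
              for t in range(n_times)])
print(r.mean())  # near 0 for random maps; low r favors AV maps != VA maps
```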
Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study.
Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong
2015-01-01
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190-210 ms, for 1 kHz stimuli from 170-200 ms, for 2.5 kHz stimuli from 140-200 ms, and for 5 kHz stimuli from 100-200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300-340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirmed that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.
Ji, Jinzhao; Maren, Stephen
2008-12-12
Recent studies have shown that the hippocampus is critical for the context-dependent expression of extinguished fear memories. Here we used Pavlovian fear conditioning in rats to explore whether the entorhinal cortex and fornix, which are the major cortical and subcortical interfaces of the hippocampus, are also involved in the context-dependence of extinction. After pairing an auditory conditional stimulus (CS) with an aversive footshock (unconditional stimulus or US) in one context, rats received an extinction session in which the CS was presented without the US in another context. Conditional fear to the CS was then tested in either the extinction context or a third familiar context; freezing behavior served as the index of fear. Sham-operated rats exhibited little conditional freezing to the CS in the extinction context, but showed a robust renewal of fear when tested outside of the extinction context. In contrast, rats with neurotoxic lesions in the entorhinal cortex or electrolytic lesions in the fornix did not exhibit a renewal of fear when tested outside the extinction context. Impairments in freezing behavior to the auditory CS were not able to account for the observed results, insofar as rats with either entorhinal cortex or fornix lesions exhibited normal freezing behavior during the conditioning session. Thus, contextual memory retrieval requires not only the hippocampus proper, but also its cortical and subcortical interfaces.
Audio–visual interactions for motion perception in depth modulate activity in visual area V3A
Ogawa, Akitoshi; Macaluso, Emiliano
2013-01-01
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414
Oscillatory support for rapid frequency change processing in infants.
Musacchia, Gabriella; Choudhury, Naseem A; Ortiz-Mantilla, Silvia; Realpe-Bonilla, Teresa; Roesler, Cynthia P; Benasich, April A
2013-11-01
Rapid auditory processing and auditory change detection abilities are crucial aspects of speech and language development, particularly in the first year of life. Animal models and adult studies suggest that oscillatory synchrony, and in particular low-frequency oscillations play key roles in this process. We hypothesize that infant perception of rapid pitch and timing changes is mediated, at least in part, by oscillatory mechanisms. Using event-related potentials (ERPs), source localization and time-frequency analysis of event-related oscillations (EROs), we examined the neural substrates of rapid auditory processing in 4-month-olds. During a standard oddball paradigm, infants listened to tone pairs with invariant standard (STD, 800-800 Hz) and variant deviant (DEV, 800-1200 Hz) pitch. STD and DEV tone pairs were first presented in a block with a short inter-stimulus interval (ISI) (Rapid Rate: 70 ms ISI), followed by a block of stimuli with a longer ISI (Control Rate: 300 ms ISI). Results showed greater ERP peak amplitude in response to the DEV tone in both conditions and later and larger peaks during Rapid Rate presentation, compared to the Control condition. Sources of neural activity, localized to right and left auditory regions, showed larger and faster activation in the right hemisphere for both rate conditions. Time-frequency analysis of the source activity revealed clusters of theta band enhancement to the DEV tone in right auditory cortex for both conditions. Left auditory activity was enhanced only during Rapid Rate presentation. These data suggest that local low-frequency oscillatory synchrony underlies rapid processing and can robustly index auditory perception in young infants. Furthermore, left hemisphere recruitment during rapid frequency change discrimination suggests a difference in the spectral and temporal resolution of right and left hemispheres at a very young age. © 2013 Elsevier Ltd. All rights reserved.
Spatio-temporal Dynamics of Audiovisual Speech Processing
Bernstein, Lynne E.; Auer, Edward T.; Wagner, Michael; Ponton, Curtis W.
2007-01-01
The cortical processing of auditory-alone, visual-alone, and audiovisual speech information is temporally and spatially distributed, and functional magnetic resonance imaging (fMRI) cannot adequately resolve its temporal dynamics. In order to investigate a hypothesized spatio-temporal organization for audiovisual speech processing circuits, event-related potentials (ERPs) were recorded using electroencephalography (EEG). Stimuli were congruent audiovisual /bα/, incongruent auditory /bα/ synchronized with visual /gα/, auditory-only /bα/, and visual-only /bα/ and /gα/. Current density reconstructions (CDRs) of the ERP data were computed across the latency interval of 50-250 milliseconds. The CDRs demonstrated complex spatio-temporal activation patterns that differed across stimulus conditions. The hypothesized circuit that was investigated here comprised initial integration of audiovisual speech by the middle superior temporal sulcus (STS), followed by recruitment of the intraparietal sulcus (IPS), followed by activation of Broca's area (Miller and d'Esposito, 2005). The importance of spatio-temporally sensitive measures in evaluating processing pathways was demonstrated. Results showed, strikingly, early (< 100 msec) and simultaneous activations in areas of the supramarginal and angular gyrus (SMG/AG), the IPS, the inferior frontal gyrus, and the dorsolateral prefrontal cortex. Also, emergent left hemisphere SMG/AG activation, not predicted based on the unisensory stimulus conditions was observed at approximately 160 to 220 msec. The STS was neither the earliest nor most prominent activation site, although it is frequently considered the sine qua non of audiovisual speech integration. As discussed here, the relatively late activity of the SMG/AG solely under audiovisual conditions is a possible candidate audiovisual speech integration response. PMID:17920933
Boosting pitch encoding with audiovisual interactions in congenital amusia.
Albouy, Philippe; Lévêque, Yohana; Hyde, Krista L; Bouchet, Patrick; Tillmann, Barbara; Caclin, Anne
2015-01-01
The combination of information across senses can enhance perception, as revealed for example by decreased reaction times or improved stimulus detection. Interestingly, these facilitatory effects have been shown to be maximal when responses to unisensory modalities are weak. The present study investigated whether audiovisual facilitation can be observed in congenital amusia, a music-specific disorder primarily ascribed to impairments of pitch processing. Amusic individuals and their matched controls performed two tasks. In Task 1, they were required to detect auditory, visual, or audiovisual stimuli as rapidly as possible. In Task 2, they were required to detect as accurately and as rapidly as possible a pitch change within an otherwise monotonic 5-tone sequence that was presented either only auditorily (A condition), or simultaneously with a temporally congruent, but otherwise uninformative visual stimulus (AV condition). Results of Task 1 showed that amusics exhibit typical auditory and visual detection, and typical audiovisual integration capacities: both amusics and controls exhibited shorter response times for audiovisual stimuli than for either auditory stimuli or visual stimuli. Results of Task 2 revealed that both groups benefited from simultaneous uninformative visual stimuli to detect pitch changes: accuracy was higher and response times shorter in the AV condition than in the A condition. The audiovisual improvements of response times were observed for different pitch interval sizes depending on the group. These results suggest that both typical listeners and amusic individuals can benefit from multisensory integration to improve their pitch processing abilities and that this benefit varies as a function of task difficulty. These findings constitute a first step toward exploiting multisensory paradigms to reduce pitch-related deficits in congenital amusia, notably by suggesting that audiovisual paradigms are effective within an appropriate range of unimodal performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
Sugi, Miho; Hagimoto, Yutaka; Nambu, Isao; Gonzalez, Alejandro; Takei, Yoshinori; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro
2018-01-01
Recently, a brain-computer interface (BCI) using virtual sound sources has been proposed for estimating user intention via electroencephalogram (EEG) in an oddball task. However, its performance is still insufficient for practical use. In this study, we examine the impact that shortening the stimulus onset asynchrony (SOA) has on this auditory BCI. While a very short SOA might improve its performance, sound perception and task performance become difficult, and event-related potentials (ERPs) may not be induced if the SOA is too short. Therefore, we carried out behavioral and EEG experiments to determine the optimal SOA. In the experiments, participants were instructed to direct attention to one of six virtual sounds (target direction). We used eight different SOA conditions: 200, 300, 400, 500, 600, 700, 800, and 1,100 ms. In the behavioral experiment, we recorded participant behavioral responses to target direction and evaluated recognition performance of the stimuli. In all SOA conditions, recognition accuracy was over 85%, indicating that participants could recognize the target stimuli correctly. Next, using a silent counting task in the EEG experiment, we found significant differences between target and non-target sound directions in all but the 200-ms SOA condition. When we calculated identification accuracy using Fisher discriminant analysis (FDA), the SOA could be shortened to 400 ms without decreasing the identification accuracies. Thus, improvements in performance (evaluated by BCI utility) could be achieved. On average, higher BCI utilities were obtained in the 400 and 500-ms SOA conditions. Thus, auditory BCI performance can be optimized for both behavioral and neurophysiological responses by shortening the SOA. PMID:29535602
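The Fisher discriminant step amounts to training a linear classifier to separate target from non-target ERP epochs. A minimal sketch using scikit-learn's shrinkage LDA, a common choice for high-dimensional ERP features, though the study's exact pipeline may differ; the epochs here are simulated:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Simulated epochs flattened to feature vectors (e.g., 64 channels x 50 samples).
rng = np.random.default_rng(3)
n_epochs, n_features = 120, 64 * 50
X = rng.standard_normal((n_epochs, n_features))
y = rng.integers(0, 2, n_epochs)   # 1 = attended (target) direction, 0 = non-target

# Shrinkage LDA handles many features with few trials.
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
print(cross_val_score(clf, X, y, cv=5).mean())  # ~0.5 (chance) on random data
```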
Neurofeedback in Learning Disabled Children: Visual versus Auditory Reinforcement.
Fernández, Thalía; Bosch-Bayard, Jorge; Harmony, Thalía; Caballero, María I; Díaz-Comas, Lourdes; Galán, Lídice; Ricardo-Garcell, Josefina; Aubert, Eduardo; Otero-Ojeda, Gloria
2016-03-01
Children with learning disabilities (LD) frequently have an EEG characterized by an excess of theta and a deficit of alpha activities. Neurofeedback (NFB) using an auditory stimulus as a reinforcer has proven to be a useful tool for treating LD children by positively reinforcing decreases in the theta/alpha ratio. The aim of the present study was to optimize the NFB procedure by comparing the efficacy of visual (with eyes open) versus auditory (with eyes closed) reinforcers. Twenty LD children with an abnormally high theta/alpha ratio were randomly assigned to the Auditory or the Visual group, where a 500 Hz tone or a visual stimulus (a white square), respectively, was used as a positive reinforcer when the value of the theta/alpha ratio was reduced. Both groups showed signs consistent with EEG maturation, but only the Auditory group showed behavioral/cognitive improvements. In conclusion, the auditory reinforcer was more efficacious in reducing the theta/alpha ratio, and it improved cognitive abilities more than the visual reinforcer.
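The feedback signal in such protocols is the theta/alpha band-power ratio, reinforced when it falls. A sketch of one plausible computation; the band edges, window length, and reward criterion are illustrative assumptions:

```python
import numpy as np
from scipy.signal import welch

def theta_alpha_ratio(eeg, fs=256):
    """Theta (4-8 Hz) power divided by alpha (8-12 Hz) power,
    estimated from a Welch spectrum. Band edges and window length
    are illustrative choices."""
    f, pxx = welch(eeg, fs=fs, nperseg=2 * fs)
    theta = pxx[(f >= 4) & (f < 8)].mean()
    alpha = pxx[(f >= 8) & (f < 12)].mean()
    return theta / alpha

fs = 256
eeg = np.random.default_rng(4).standard_normal(10 * fs)  # 10 s of fake EEG
ratio = theta_alpha_ratio(eeg, fs)
reward = ratio < 1.0   # deliver the tone/white-square reinforcer below criterion
print(round(ratio, 2), reward)
```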
Hearing in noisy environments: noise invariance and contrast gain control
Willmore, Ben D B; Cooke, James E; King, Andrew J
2014-01-01
Contrast gain control has recently been identified as a fundamental property of the auditory system. Electrophysiological recordings in ferrets have shown that neurons continuously adjust their gain (their sensitivity to change in sound level) in response to the contrast of sounds that are heard. At the level of the auditory cortex, these gain changes partly compensate for changes in sound contrast. This means that sounds which are structurally similar, but have different contrasts, have similar neuronal representations in the auditory cortex. As a result, the cortical representation is relatively invariant to stimulus contrast and robust to the presence of noise in the stimulus. In the inferior colliculus (an important subcortical auditory structure), gain changes are less reliably compensatory, suggesting that contrast- and noise-invariant representations are constructed gradually as one ascends the auditory pathway. In addition to noise invariance, contrast gain control provides a variety of computational advantages over static neuronal representations; it makes efficient use of neuronal dynamic range, may contribute to redundancy-reducing, sparse codes for sound and allows for simpler decoding of population responses. The circuits underlying auditory contrast gain control are still under investigation. As in the visual system, these circuits may be modulated by factors other than stimulus contrast, forming a potential neural substrate for mediating the effects of attention as well as interactions between the senses. PMID:24907308
Contralateral Inhibition of Click- and Chirp-Evoked Human Compound Action Potentials
Smith, Spencer B.; Lichtenhan, Jeffery T.; Cone, Barbara K.
2017-01-01
Cochlear outer hair cells (OHC) receive direct efferent feedback from the caudal auditory brainstem via the medial olivocochlear (MOC) bundle. This circuit provides the neural substrate for the MOC reflex, which inhibits cochlear amplifier gain and is believed to play a role in listening in noise and protection from acoustic overexposure. The human MOC reflex has been studied extensively using otoacoustic emissions (OAE) paradigms; however, these measurements are insensitive to subsequent “downstream” efferent effects on the neural ensembles that mediate hearing. In this experiment, click- and chirp-evoked auditory nerve compound action potential (CAP) amplitudes were measured electrocochleographically from the human eardrum without and with MOC reflex activation elicited by contralateral broadband noise. We hypothesized that the chirp would be a more optimal stimulus for measuring neural MOC effects because it synchronizes excitation along the entire length of the basilar membrane and thus evokes a more robust CAP than a click at low to moderate stimulus levels. Chirps produced larger CAPs than clicks at all stimulus intensities (50–80 dB ppeSPL). MOC reflex inhibition of CAPs was larger for chirps than clicks at low stimulus levels when quantified both in terms of amplitude reduction and effective attenuation. Effective attenuation was larger for chirp- and click-evoked CAPs than for click-evoked OAEs measured from the same subjects. Our results suggest that the chirp is an optimal stimulus for evoking CAPs at low stimulus intensities and for assessing MOC reflex effects on the auditory nerve. Further, our work supports previous findings that MOC reflex effects at the level of the auditory nerve are underestimated by measures of OAE inhibition. PMID:28420960
Stevenson, Ryan A; Fister, Juliane Krueger; Barnett, Zachary P; Nidiffer, Aaron R; Wallace, Mark T
2012-05-01
In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.
Visual adaptation enhances action sound discrimination.
Barraclough, Nick E; Page, Steve A; Keefe, Bruce D
2017-01-01
Prolonged exposure, or adaptation, to a stimulus in 1 modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in 1 modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between 2 subsequently presented hand action sounds. Discrimination was most enhanced when the visual action "matched" the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.
NASA Technical Reports Server (NTRS)
Wightman, Frederic L.; Jenison, Rick
1995-01-01
All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.
Berger, Christopher C; Gonzalez-Franco, Mar; Tajadura-Jiménez, Ana; Florencio, Dinei; Zhang, Zhengyou
2018-01-01
Auditory spatial localization in humans is performed using a combination of interaural time differences, interaural level differences, as well as spectral cues provided by the geometry of the ear. To render spatialized sounds within a virtual reality (VR) headset, either individualized or generic Head Related Transfer Functions (HRTFs) are usually employed. The former require arduous calibrations, but enable accurate auditory source localization, which may lead to a heightened sense of presence within VR. The latter obviate the need for individualized calibrations, but result in less accurate auditory source localization. Previous research on auditory source localization in the real world suggests that our representation of acoustic space is highly plastic. In light of these findings, we investigated whether auditory source localization could be improved for users of generic HRTFs via cross-modal learning. The results show that pairing a dynamic auditory stimulus, with a spatio-temporally aligned visual counterpart, enabled users of generic HRTFs to improve subsequent auditory source localization. Exposure to the auditory stimulus alone or to asynchronous audiovisual stimuli did not improve auditory source localization. These findings have important implications for human perception as well as the development of VR systems as they indicate that generic HRTFs may be enough to enable good auditory source localization in VR.
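Rendering a source through an HRTF, generic or individualized, reduces to convolving the mono signal with a left/right pair of head-related impulse responses for the target direction. A schematic sketch; the impulse responses below are random placeholders for a measured pair:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 44100
mono = np.random.default_rng(5).standard_normal(fs)        # 1 s source signal
hrir_left = np.random.default_rng(6).standard_normal(256)  # placeholder HRIRs
hrir_right = np.random.default_rng(7).standard_normal(256)

# Convolve the source with each ear's impulse response to spatialize it.
binaural = np.stack([fftconvolve(mono, hrir_left),
                     fftconvolve(mono, hrir_right)], axis=1)  # samples x 2
```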
Lifespan differences in nonlinear dynamics during rest and auditory oddball performance.
Müller, Viktor; Lindenberger, Ulman
2012-07-01
Electroencephalographic recordings (EEG) were used to assess age-associated differences in nonlinear brain dynamics during both rest and auditory oddball performance in children aged 9.0-12.8 years, younger adults, and older adults. We computed nonlinear coupling dynamics and dimensional complexity, and also determined spectral alpha power as an indicator of cortical reactivity. During rest, both nonlinear coupling and spectral alpha power decreased with age, whereas dimensional complexity increased. In contrast, when attending to the deviant stimulus, nonlinear coupling increased with age, and complexity decreased. Correlational analyses showed that nonlinear measures assessed during auditory oddball performance were reliably related to an independently assessed measure of perceptual speed. We conclude that cortical dynamics during rest and stimulus processing undergo substantial reorganization from childhood to old age, and propose that lifespan age differences in nonlinear dynamics during stimulus processing reflect lifespan changes in the functional organization of neuronal cell assemblies. © 2012 Blackwell Publishing Ltd.
Electroencephalographic and behavioral effects of nocturnally occurring jet aircraft sounds.
NASA Technical Reports Server (NTRS)
Levere, T. E.; Bartus, R. T.; Hart, F. D.
1972-01-01
The present research presents data relative to the objective evaluation of the effects of a specific complex auditory stimulus presented during sleep. The auditory stimulus was a jet aircraft flyover of approximately 20-sec duration and a peak intensity level of approximately 80 dB (A). Our specific interests were in terms of how this stimulus would interact with the frequency pattern of the sleeping EEG and whether there would be any carry-over effects of the nocturnally presented stimuli to the waking state. The results indicated that the physiological effects (changes in electroencephalographic activity) produced by the jet aircraft stimuli outlasted the physical presence of the auditory stimuli by a considerable degree. Further, it was possible to note both behavioral and electroencephalographic changes during waking performances subsequent to nights disturbed by the jet aircraft flyovers which were not apparent during performances subsequent to undisturbed nights.
Scholes, Kirsty E; Martin-Iverson, Mathew T
2010-03-01
Controversy exists as to the cause of disturbed prepulse inhibition (PPI) in patients with schizophrenia. This study aimed to clarify the nature of PPI in schizophrenia using improved methodology. Startle and PPI were measured in 44 patients with schizophrenia and 32 controls across a range of startling stimulus intensities under two conditions, one while participants were attending to the auditory stimuli (ATTEND condition) and one while participants completed a visual task to ensure they were ignoring the auditory stimuli (IGNORE condition). Patients showed reduced PPI of R(MAX) (reflex capacity) and increased PPI of Hillslope (reflex efficacy) only under the IGNORE condition, and failed to show the same pattern of attentional modulation of the reflex parameters as controls. In conclusion, disturbed PPI in schizophrenia appears to result from deficits in selective attention rather than from preattentive dysfunction.
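Percent PPI is conventionally computed as the relative reduction of the startle response on prepulse trials; the study above applies the same logic to fitted reflex parameters (R(MAX), Hillslope) rather than raw single-trial magnitudes. A minimal illustration with hypothetical values:

```python
def percent_ppi(startle_alone, prepulse_startle):
    """Conventional prepulse inhibition measure: percentage reduction of
    the startle response when the startling stimulus is preceded by a
    prepulse. Inputs may be raw magnitudes or fitted reflex parameters."""
    return 100.0 * (startle_alone - prepulse_startle) / startle_alone

print(percent_ppi(startle_alone=250.0, prepulse_startle=150.0))  # 40% PPI, hypothetical units
```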
Visual Presentation Effects on Identification of Multiple Environmental Sounds
Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio
2016-01-01
This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478
Sanfratello, Lori; Aine, Cheryl; Stephen, Julia
2018-05-25
Impairments in auditory and visual processing are common in schizophrenia (SP). In the unisensory realm, visual deficits are primarily noted for the dorsal visual stream. In addition, insensitivity to timing offsets between stimuli is widely reported for SP. The aim of the present study was to test, at the physiological level, differences in dorsal/ventral stream visual processing and timing sensitivity between SP and healthy controls (HC) using MEG and a simple auditory/visual task utilizing a variety of multisensory conditions. The paradigm included all combinations of synchronous/asynchronous and central/peripheral stimuli, yielding 4 task conditions. Both HC and SP groups showed activation in parietal areas (dorsal visual stream) during all multisensory conditions, with parietal areas showing decreased activation for SP relative to HC, and a significantly delayed peak of activation for SP in the intraparietal sulcus (IPS). We also observed a differential effect of stimulus synchrony on HC and SP parietal responses. Furthermore, a negative correlation was found between SP positive symptoms and activity in IPS. Taken together, our results provide evidence of impairment of the dorsal visual stream in SP during a multisensory task, along with an altered response to timing offsets between presented multisensory stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.
Auditory Detection of the Human Brainstem Auditory Evoked Response.
ERIC Educational Resources Information Center
Kidd, Gerald, Jr.; And Others
1993-01-01
This study evaluated whether listeners can distinguish human brainstem auditory evoked responses elicited by acoustic clicks from control waveforms obtained with no acoustic stimulus when the waveforms are presented auditorily. Detection performance for stimuli presented visually was slightly, but consistently, superior to that which occurred for…
Primary Auditory Cortex Regulates Threat Memory Specificity
ERIC Educational Resources Information Center
Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.
2017-01-01
Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…
Effects of musical training on sound pattern processing in high-school students.
Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse
2009-05-01
Recognizing melody in music involves detection of both the pitch intervals and the silences between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns, compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed-order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited at longer stimulus onset asynchronies (SOAs) in musicians than in non-musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than their age-matched peers. Musical training facilitates detection of auditory patterns, enabling sequential sound patterns to be recognized automatically over longer time periods than in non-musical counterparts.
Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L
2012-04-01
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. © 2012 New York Academy of Sciences.
ERIC Educational Resources Information Center
Kerfoot, Erin C.; Agarwal, Isha; Lee, Hongjoo J.; Holland, Peter C.
2007-01-01
Through associative learning, cues for biologically significant reinforcers such as food may gain access to mental representations of those reinforcers. Here, we used devaluation procedures, behavioral assessment of hedonic taste-reactivity responses, and measurement of immediate-early gene (IEG) expression to show that a cue for food engages…
ERIC Educational Resources Information Center
Shucard, Janet L.; Shucard, David W.
1990-01-01
Verbal and musical stimuli were presented to infants in a study of the relations of evoked potential left-right amplitude asymmetries to gender and hand preference. There was a relation between asymmetry and hand preference, and for girls, between asymmetry and stimulus condition. Results suggest a gender difference in cerebral hemisphere…
Perceptual Bias and Loudness Change: An Investigation of Memory, Masking, and Psychophysiology
NASA Astrophysics Data System (ADS)
Olsen, Kirk N.
Loudness is a fundamental aspect of human auditory perception that is closely associated with a sound's physical acoustic intensity. The dynamic quality of intensity change is an inherent acoustic feature in real-world listening domains such as speech and music. However, perception of loudness change in response to continuous intensity increases (up-ramps) and decreases (down-ramps) has received relatively little empirical investigation. Overestimation of loudness change in response to up-ramps is said to be linked to an adaptive survival response associated with looming (or approaching) motion in the environment. The hypothesised 'perceptual bias' to looming auditory motion suggests why perceptual overestimation of up-ramps may occur; however it does not offer a causal explanation. It is concluded that post-stimulus judgements of perceived loudness change are significantly affected by a cognitive recency response bias that, until now, has been an artefact of experimental procedure. Perceptual end-level differences caused by duration specific sensory adaptation at peripheral and/or central stages of auditory processing may explain differences in post-stimulus judgements of loudness change. Experiments that investigate human responses to acoustic intensity dynamics, encompassing topics from basic auditory psychophysics (e.g., sensory adaptation) to cognitive-emotional appraisal of increasingly complex stimulus events such as music and auditory warnings, are proposed for future research.
Gamma-band activation predicts both associative memory and cortical plasticity
Headley, Drew B.; Weinberger, Norman M.
2011-01-01
Gamma-band oscillations are a ubiquitous phenomenon in the nervous system and have been implicated in multiple aspects of cognition. In particular, the strength of gamma oscillations at the time a stimulus is encoded predicts its subsequent retrieval, suggesting that gamma may reflect enhanced mnemonic processing. Likewise, activity in the gamma-band can modulate plasticity in vitro. However, it is unclear whether experience-dependent plasticity in vivo is also related to gamma-band activation. The aim of the present study is to determine whether gamma activation in primary auditory cortex modulates both the associative memory for an auditory stimulus during classical conditioning and its accompanying specific receptive field plasticity. Rats received multiple daily sessions of single tone/shock trace and two-tone discrimination conditioning, during which local field potentials and multiunit discharges were recorded from chronically implanted electrodes. We found that the strength of tone-induced gamma predicted the acquisition of associative memory 24 h later, and ceased to predict subsequent performance once asymptote was reached. Gamma activation also predicted receptive field plasticity that specifically enhanced representation of the signal tone. This concordance provides a long-sought link between gamma oscillations, cortical plasticity and the formation of new memories. PMID:21900554
Deacon, D; Nousak, J M; Pilotti, M; Ritter, W; Yang, C M
1998-07-01
The global and feature-specific probabilities of auditory stimuli were manipulated to determine their effects on the mismatch negativity (MMN) of the human event-related potential. The question of interest was whether the automatic comparison of stimuli indexed by the MMN was performed on representations of individual stimulus features or on gestalt representations of their combined attributes. The design of the study was such that both feature and gestalt representations could have been available to the comparator mechanism generating the MMN. The data were consistent with the interpretation that the MMN was generated following an analysis of stimulus features.
A Decline in Response Variability Improves Neural Signal Detection during Auditory Task Performance.
von Trapp, Gardiner; Buran, Bradley N; Sen, Kamal; Semple, Malcolm N; Sanes, Dan H
2016-10-26
The detection of a sensory stimulus arises from a significant change in neural activity, but a sensory neuron's response is rarely identical to successive presentations of the same stimulus. Large trial-to-trial variability would limit the central nervous system's ability to reliably detect a stimulus, presumably affecting perceptual performance. However, if response variability were to decrease while firing rate remained constant, then neural sensitivity could improve. Here, we asked whether engagement in an auditory detection task can modulate response variability, thereby increasing neural sensitivity. We recorded telemetrically from the core auditory cortex of gerbils, both while they engaged in an amplitude-modulation detection task and while they sat quietly listening to the identical stimuli. Using a signal detection theory framework, we found that neural sensitivity was improved during task performance, and this improvement was closely associated with a decrease in response variability. Moreover, units with the greatest change in response variability had absolute neural thresholds most closely aligned with simultaneously measured perceptual thresholds. Our findings suggest that the limitations imposed by response variability diminish during task performance, thereby improving the sensitivity of neural encoding and potentially leading to better perceptual sensitivity. The detection of a sensory stimulus arises from a significant change in neural activity. However, trial-to-trial variability of the neural response may limit perceptual performance. If the neural response to a stimulus is quite variable, then the response on a given trial could be confused with the pattern of neural activity generated when the stimulus is absent. Therefore, a neural mechanism that served to reduce response variability would allow for better stimulus detection. By recording from the cortex of freely moving animals engaged in an auditory detection task, we found that variability of the neural response becomes smaller during task performance, thereby improving neural detection thresholds. Copyright © 2016 the authors 0270-6474/16/3611097-10$15.00/0.
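As a rough illustration of the signal detection theory framework used here, the sketch below (hypothetical spike counts, not the authors' data or code) shows how shrinking trial-to-trial variability alone, with mean rates held constant, raises neural sensitivity (d').

```python
import numpy as np

rng = np.random.default_rng(0)

def neural_dprime(signal_counts, noise_counts):
    """d' for discriminating stimulus-present from stimulus-absent trials."""
    mu_s, mu_n = signal_counts.mean(), noise_counts.mean()
    pooled_sd = np.sqrt(0.5 * (signal_counts.var() + noise_counts.var()))
    return (mu_s - mu_n) / pooled_sd

# Hypothetical spike counts: same means, but variability shrinks during task.
passive_signal = rng.normal(12.0, 4.0, 200)   # mean 12 spikes, SD 4
passive_noise  = rng.normal(10.0, 4.0, 200)
engaged_signal = rng.normal(12.0, 2.0, 200)   # same means, SD halved
engaged_noise  = rng.normal(10.0, 2.0, 200)

print(f"passive d' = {neural_dprime(passive_signal, passive_noise):.2f}")
print(f"engaged d' = {neural_dprime(engaged_signal, engaged_noise):.2f}")
```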
Brainstem Correlates of Temporal Auditory Processing in Children with Specific Language Impairment
ERIC Educational Resources Information Center
Basu, Madhavi; Krishnan, Ananthanarayan; Weber-Fox, Christine
2010-01-01
Deficits in identification and discrimination of sounds with short inter-stimulus intervals or short formant transitions in children with specific language impairment (SLI) have been taken to reflect an underlying temporal auditory processing deficit. Using the sustained frequency following response (FFR) and the onset auditory brainstem responses…
Grandin, Cécile B.; Dricot, Laurence; Plaza, Paula; Lerens, Elodie; Rombaux, Philippe; De Volder, Anne G.
2013-01-01
Using functional magnetic resonance imaging (fMRI) in ten early blind humans, we found robust occipital activation during two odor-processing tasks (discrimination or categorization of fruit and flower odors), as well as during control auditory-verbal conditions (discrimination or categorization of fruit and flower names). We also found evidence for reorganization and specialization of the ventral part of the occipital cortex, with dissociation according to stimulus modality: the right fusiform gyrus was most activated during olfactory conditions while part of the left ventral lateral occipital complex showed a preference for auditory-verbal processing. Little occipital activation was found in sighted subjects, but the same right-olfactory/left-auditory-verbal hemispheric lateralization was found overall in their brains. This difference between the groups was mirrored by superior performance of the blind in various odor-processing tasks. Moreover, the level of right fusiform gyrus activation during the olfactory conditions was highly correlated with individual scores in a variety of odor recognition tests, indicating that the additional occipital activation may play a functional role in odor processing. PMID:23967263
Nambu, Isao; Ebisawa, Masashi; Kogure, Masumi; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro
2013-01-01
The auditory brain-computer interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As cues, auditory BCIs can use many characteristics of stimuli, such as tone, pitch, and voice. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify from EEG signals whether the subject attended to the direction of a presented stimulus. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables a high-performance, loudspeaker-less, portable BCI system. PMID:23437338
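A minimal sketch of the offline classification idea described above, assuming hypothetical epoch data and labels; it illustrates single-trial versus k-trial-averaged SVM classification, not the authors' actual preprocessing.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Hypothetical epochs: (n_trials, n_channels * n_timepoints), labels 0/1
# (attended vs. unattended direction); real inputs would come from the EEG.
X = rng.normal(size=(600, 64 * 50))
y = rng.integers(0, 2, size=600)
X[y == 1] += 0.05  # small class difference stands in for the P300

def average_trials(X, y, k):
    """Average groups of k same-label trials, as in k-trial averaging."""
    Xa, ya = [], []
    for label in np.unique(y):
        Xc = X[y == label]
        n = (len(Xc) // k) * k
        Xa.append(Xc[:n].reshape(-1, k, X.shape[1]).mean(axis=1))
        ya.append(np.full(n // k, label))
    return np.vstack(Xa), np.concatenate(ya)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print("single trial:", cross_val_score(clf, X, y, cv=5).mean())
X10, y10 = average_trials(X, y, 10)
print("10-trial avg:", cross_val_score(clf, X10, y10, cv=5).mean())
```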
Influence of auditory and audiovisual stimuli on the right-left prevalence effect.
Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim
2014-01-01
When auditory stimuli are used in two-dimensional spatial compatibility tasks, where the stimulus and response configurations vary along the horizontal and vertical dimensions simultaneously, a right-left prevalence effect occurs in which horizontal compatibility dominates over vertical compatibility. The right-left prevalence effects obtained with auditory stimuli are typically larger than that obtained with visual stimuli even though less attention should be demanded from the horizontal dimension in auditory processing. In the present study, we examined whether auditory or visual dominance occurs when the two-dimensional stimuli are audiovisual, as well as whether there will be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension and vision tends to dominate audition in this two-dimensional spatial stimulus-response task.
A neural network model of ventriloquism effect and aftereffect.
Magosso, Elisa; Cuppini, Cristiano; Ursino, Mauro
2012-01-01
Presenting simultaneous but spatially discrepant visual and auditory stimuli induces a perceptual translocation of the sound towards the visual input, the ventriloquism effect. The general explanation is that vision tends to dominate over audition because of its higher spatial reliability. The underlying neural mechanisms remain unclear. We address this question via a biologically inspired neural network. The model contains two layers of unimodal visual and auditory neurons, with visual neurons having higher spatial resolution than auditory ones. Neurons within each layer communicate via lateral intra-layer synapses; neurons across layers are connected via inter-layer connections. The network accounts for the ventriloquism effect, ascribing it to a positive feedback between the visual and auditory neurons, triggered by residual auditory activity at the position of the visual stimulus. The main results are: i) the less localized stimulus is strongly biased toward the more localized stimulus, and not vice versa; ii) the amount of ventriloquism changes with visual-auditory spatial disparity; iii) ventriloquism is a robust behavior of the network with respect to changes in parameter values. Moreover, the model implements Hebbian rules for potentiation and depression of lateral synapses to explain the ventriloquism aftereffect (that is, the enduring sound shift after exposure to spatially disparate audio-visual stimuli). By adaptively changing the weights of lateral synapses during cross-modal stimulation, the model produces post-adaptive shifts of auditory localization that agree with in-vivo observations. The model demonstrates that two reciprocally interconnected unimodal layers may explain the ventriloquism effect and aftereffect, even without the presence of any convergent multimodal area. The proposed study may provide advancement in understanding the neural architecture and mechanisms at the basis of visual-auditory integration in the spatial realm.
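The sketch below is a drastically simplified illustration of the two-layer idea (Gaussian population activity and a single multiplicative feedback step), not the authors' dynamical model; stimulus positions, tuning widths, and the feedback gain are invented for the example.

```python
import numpy as np

space = np.arange(0.0, 180.0, 1.0)           # preferred positions (deg)

def gaussian_pop(center, width):
    """Population activity profile over space for one stimulus."""
    return np.exp(-0.5 * ((space - center) / width) ** 2)

aud_stim, vis_stim = 80.0, 95.0              # spatially discrepant inputs
aud = gaussian_pop(aud_stim, width=15.0)     # broad auditory tuning
vis = gaussian_pop(vis_stim, width=3.0)      # sharp visual tuning

# Cross-modal feedback: visual activity boosts auditory neurons at the
# visual location, where residual auditory activity already exists.
gain = 1.5
aud_biased = aud + gain * vis * aud

decode = lambda act: np.sum(space * act) / np.sum(act)  # center of mass
print(f"auditory alone : {decode(aud):.1f} deg")
print(f"with visual    : {decode(aud_biased):.1f} deg (shifted toward visual)")
```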
A further assessment of the Hall-Rodriguez theory of latent inhibition.
Leung, Hiu Tin; Killcross, A S; Westbrook, R Frederick
2013-04-01
The Hall-Rodriguez (G. Hall & G. Rodriguez, 2010, Associative and nonassociative processes in latent inhibition: An elaboration of the Pearce-Hall model, in R. E. Lubow & I. Weiner, Eds., Latent inhibition: Data, theories, and applications to schizophrenia, pp. 114-136, Cambridge, England: Cambridge University Press) theory of latent inhibition predicts that it will be deepened when a preexposed target stimulus is given additional preexposures in compound with (a) a novel stimulus or (b) another preexposed stimulus, and (c) that deepening will be greater when the compound contains a novel rather than another preexposed stimulus. A series of experiments studied these predictions using a fear conditioning procedure with rats. In each experiment, rats were preexposed to 3 stimuli, 1 (A) taken from 1 modality (visual or auditory) and the remaining 2 (X and Y) taken from another modality (auditory or visual). Then A was compounded with X, and Y was compounded with a novel stimulus (B) taken from the same modality as A. A previous series of experiments (H. T. Leung, A. S. Killcross, & R. F. Westbrook, 2011, Additional exposures to a compound of two preexposed stimuli deepen latent inhibition, Journal of Experimental Psychology: Animal Behavior Processes, Vol. 37, pp. 394-406) compared A with Y, finding that A was more latently inhibited than Y, the opposite of what was predicted. The present experiments confirmed that A was more latently inhibited than Y, showed that this was due to A entering the compound more latently inhibited than Y, and finally, that a comparison of X and Y confirmed the 3 predictions made by the theory.
Binaural Interaction Effects of 30-50 Hz Auditory Steady State Responses.
Gransier, Robin; van Wieringen, Astrid; Wouters, Jan
Auditory stimuli modulated at modulation frequencies within the 30 to 50 Hz region evoke auditory steady state responses (ASSRs) with high signal-to-noise ratios in adults, and can be used to determine the frequency-specific hearing thresholds of adults who are unable to give behavioral feedback reliably. To measure ASSRs as efficiently as possible, a multiple-stimulus paradigm can be used, stimulating both ears simultaneously. The response strength of 30 to 50 Hz ASSRs is, however, affected when both ears are stimulated simultaneously. The aim of the present study is to gain insight into the measurement efficiency of 30 to 50 Hz ASSRs evoked with a 2-ear stimulation paradigm, by systematically investigating the binaural interaction effects of 30 to 50 Hz ASSRs in normal-hearing adults. ASSRs were obtained with a 64-channel EEG system in 23 normal-hearing adults. All participants completed one diotic condition, multiple dichotic conditions, and multiple monaural conditions. Stimuli consisted of a modulated one-octave noise band, centered at 1 kHz, and presented at 70 dB SPL. The diotic condition contained 40 Hz modulated stimuli presented to both ears. In the dichotic conditions, the modulation frequency of the left-ear stimulus was kept constant at 40 Hz, while the stimulus at the right ear was either the unmodulated or the modulated carrier. In the case of the modulated carrier, the modulation frequency varied between 30 and 50 Hz in steps of 2 Hz across conditions. The monaural conditions consisted of all stimuli included in the diotic and dichotic conditions. Modulation frequencies ≥36 Hz resulted in prominent ASSRs in all participants for the monaural conditions. A significant enhancement effect was observed (average: ~3 dB) in the diotic condition, whereas a significant reduction effect was observed in the dichotic conditions. There was no distinct effect of the temporal characteristics of the stimuli on the amount of reduction. The attenuation was >3 dB in 33% of cases for ASSRs evoked with modulation frequencies ≥40 Hz and in 50% of cases for ASSRs evoked with modulation frequencies ≤36 Hz. Binaural interaction effects as observed in the diotic condition are similar to the binaural interaction effects of middle latency responses as reported in the literature, suggesting that these responses share the same underlying mechanism. Our data also indicated that 30 to 50 Hz ASSRs are attenuated when presented dichotically and that this attenuation is independent of the stimulus characteristics used in the present study. These findings are important as they give insight into how binaural interaction affects measurement efficiency. The 2-ear stimulation paradigm of the present study was, for the optimal modulation frequencies (i.e., ≥40 Hz), more efficient than a 1-ear sequential stimulation paradigm in 66% of the cases.
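The response strength of an ASSR is typically quantified as the spectral signal-to-noise ratio at the modulation frequency. A minimal sketch, with an assumed sampling rate and a synthetic 40 Hz response, is given below; it is not the authors' analysis pipeline.

```python
import numpy as np

def assr_snr_db(eeg, fs, f_mod, n_neighbors=10):
    """SNR (dB) of the ASSR: spectral amplitude at the modulation
    frequency relative to the mean amplitude of neighboring bins."""
    spec = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f_mod))
    neighbors = np.r_[spec[k - n_neighbors:k], spec[k + 1:k + 1 + n_neighbors]]
    return 20 * np.log10(spec[k] / neighbors.mean())

# Hypothetical 40 Hz ASSR buried in noise (fs and amplitudes are assumptions).
fs, dur = 1000, 60.0
t = np.arange(0, dur, 1.0 / fs)
rng = np.random.default_rng(2)
eeg = 0.3 * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 5, t.size)
print(f"SNR at 40 Hz: {assr_snr_db(eeg, fs, 40):.1f} dB")
```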
Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G
2017-03-01
We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower (that is, auditory sensitivity was improved) for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.
Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study
Yang, Weiping; Yang, Jingjing; Gao, Yulin; Tang, Xiaoyu; Ren, Yanna; Takahashi, Satoshi; Wu, Jinglong
2015-01-01
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher-frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300–340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirmed that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies. PMID:26384256
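Audiovisual integration in ERP studies of this kind is commonly assessed with the additive criterion, comparing the bimodal response against the sum of the unimodal ones. The sketch below illustrates the AV - (A + V) difference wave on hypothetical grand-average arrays (the sampling rate and window are assumptions, not the authors' data).

```python
import numpy as np

fs = 500                                   # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1.0 / fs)         # epoch time axis (s)
n = t.size
rng = np.random.default_rng(3)
erp_av = rng.normal(size=n)                # stand-ins for grand-average ERPs
erp_a, erp_v = rng.normal(size=n), rng.normal(size=n)

diff = erp_av - (erp_a + erp_v)            # AV - (A + V) difference wave

# e.g., mean difference amplitude in a 190-210 ms window (0.5 kHz tones)
win = (t >= 0.19) & (t <= 0.21)
print(f"mean AV-(A+V) amplitude, 190-210 ms: {diff[win].mean():.3f} uV")
```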
Do former preterm infants remember and respond to neonatal intensive care unit noise?
Barreto, Edwin D; Morris, Brenda H; Philbin, M Kathleen; Gray, Lincoln C; Lasky, Robert E
2006-11-01
Previous studies have shown that 4-month-old infants have a decrease in heart rate, a component of the orienting reflex, in response to interesting auditory stimuli and an increase in heart rate in response to aversive auditory stimuli. This study aimed to compare the heart rate responses of former preterm and term infants at 4-5 months corrected age to a recording of NICU noises. Thirteen former preterm infants and 17 full-term infants were presented with NICU noise and another noise of similar level and frequency content, in random order. Heart rate 10 s prior to the stimulus and for 20 s during the stimulus was analyzed. Group differences in second-by-second heart rate changes in response to the two noise stimuli were compared by analysis of covariance. Both the preterm and the term infants responded similarly to the NICU noise and the control noise. The preterm infants did not alter their heart rate in response to either stimulus. In contrast, the term infants displayed an orienting response to the second stimulus presented, regardless of whether it was the NICU or control noise. Former preterm infants at 4-5 months corrected age have reduced responsiveness to auditory stimulation in comparison with 4- to 5-month-old term infants. Furthermore, they did not respond to the NICU noise as an aversive stimulus.
Vicario, David S.
2017-01-01
Sensory and motor brain structures work in collaboration during perception. To evaluate their respective contributions, the present study recorded neural responses to auditory stimulation at multiple sites simultaneously in both the higher-order auditory area NCM and the premotor area HVC of the songbird brain in awake zebra finches (Taeniopygia guttata). Bird’s own song (BOS) and various conspecific songs (CON) were presented in both blocked and shuffled sequences. Neural responses showed plasticity in the form of stimulus-specific adaptation, with markedly different dynamics between the two structures. In NCM, the response decrease with repetition of each stimulus was gradual and long-lasting and did not differ between the stimuli or the stimulus presentation sequences. In contrast, HVC responses to CON stimuli decreased much more rapidly in the blocked than in the shuffled sequence. Furthermore, this decrease was more transient in HVC than in NCM, as shown by differential dynamics in the shuffled sequence. Responses to BOS in HVC decreased more gradually than to CON stimuli. The quality of neural representations, computed as the mutual information between stimuli and neural activity, was higher in NCM than in HVC. Conversely, internal functional correlations, estimated as the coherence between recording sites, were greater in HVC than in NCM. The cross-coherence between the two structures was weak and limited to low frequencies. These findings suggest that auditory communication signals are processed according to very different but complementary principles in NCM and HVC, a contrast that may inform study of the auditory and motor pathways for human speech processing. NEW & NOTEWORTHY Neural responses to auditory stimulation in sensory area NCM and premotor area HVC of the songbird forebrain show plasticity in the form of stimulus-specific adaptation with markedly different dynamics. These two structures also differ in stimulus representations and internal functional correlations. Accordingly, NCM seems to process the individually specific complex vocalizations of others based on prior familiarity, while HVC responses appear to be modulated by transitions and/or timing in the ongoing sequence of sounds. PMID:28031398
Alderson, R Matt; Kasper, Lisa J; Patros, Connor H G; Hudec, Kristen L; Tarle, Stephanie J; Lea, Sarah E
2015-01-01
The episodic buffer component of working memory was examined in children with attention deficit/hyperactivity disorder (ADHD) and typically developing peers (TD). Thirty-two children (ADHD = 16, TD = 16) completed three versions of a phonological working memory task that varied with regard to stimulus presentation modality (auditory, visual, or dual auditory and visual), as well as a visuospatial task. Children with ADHD experienced the largest magnitude working memory deficits when phonological stimuli were presented via a unimodal, auditory format. Their performance improved during visual and dual modality conditions but remained significantly below the performance of children in the TD group. In contrast, the TD group did not exhibit performance differences between the auditory- and visual-phonological conditions but recalled significantly more stimuli during the dual-phonological condition. Furthermore, relative to TD children, children with ADHD recalled disproportionately fewer phonological stimuli as set sizes increased, regardless of presentation modality. Finally, an examination of working memory components indicated that the largest magnitude between-group difference was associated with the central executive. Collectively, these findings suggest that ADHD-related working memory deficits reflect a combination of impaired central executive and phonological storage/rehearsal processes, as well as an impaired ability to benefit from bound multimodal information processed by the episodic buffer.
MOLECULAR MECHANISMS OF FEAR LEARNING AND MEMORY
Johansen, Joshua P.; Cain, Christopher K.; Ostroff, Linnaea E.; LeDoux, Joseph E.
2011-01-01
Pavlovian fear conditioning is a useful behavioral paradigm for exploring the molecular mechanisms of learning and memory because a well-defined response to a specific environmental stimulus is produced through associative learning processes. Synaptic plasticity in the lateral nucleus of the amygdala (LA) underlies this form of associative learning. Here we summarize the molecular mechanisms that contribute to this synaptic plasticity in the context of auditory fear conditioning, the form of fear conditioning best understood at the molecular level. We discuss the neurotransmitter systems and signaling cascades that contribute to three phases of auditory fear conditioning: acquisition, consolidation, and reconsolidation. These studies suggest that multiple intracellular signaling pathways, including those triggered by activation of Hebbian processes and neuromodulatory receptors, interact to produce neural plasticity in the LA and behavioral fear conditioning. Together, this research illustrates the power of fear conditioning as a model system for characterizing the mechanisms of learning and memory in mammals, and potentially for understanding fear related disorders, such as PTSD and phobias. PMID:22036561
Constantinidou, Fofi; Evripidou, Christiana
2012-01-01
This study investigated the effects of stimulus presentation modality on working memory performance in children with reading disabilities (RD) and in typically developing children (TDC), all native speakers of Greek. It was hypothesized that the visual presentation of common objects would result in improved learning and recall performance as compared to the auditory presentation of stimuli. Twenty children, ages 10-12, diagnosed with RD were matched to 20 TDC age peers. The experimental tasks implemented a multitrial verbal learning paradigm incorporating three modalities: auditory, visual, and auditory plus visual. Significant group differences were noted on language, verbal and nonverbal memory, and measures of executive abilities. A mixed-model MANOVA indicated that children with RD had a slower learning curve and recalled fewer words than TDC across experimental modalities. Both groups of participants benefited from the visual presentation of objects; however, children with RD showed the greatest gains during this condition. In conclusion, working memory for common verbal items is impaired in children with RD; however, performance can be facilitated, and learning efficiency maximized, when information is presented visually. The results provide further evidence for the pictorial superiority hypothesis and the theory that pictorial presentation of verbal stimuli is adequate for dual coding.
Electrophysiological measurement of human auditory function
NASA Technical Reports Server (NTRS)
Galambos, R.
1975-01-01
Knowledge of the human auditory evoked response is reviewed, including methods of determining this response, the way particular changes in the stimulus are coupled to specific changes in the response, and how the state of mind of the listener will influence the response. Important practical applications of this basic knowledge are discussed. Measurement of the brainstem evoked response, for instance, can state unequivocally how well the peripheral auditory apparatus functions. It might then be developed into a useful hearing test, especially for infants and preverbal or nonverbal children. Clinical applications of measuring the brain waves evoked 100 msec and later after the auditory stimulus are undetermined. These waves are clearly related to brain events associated with cognitive processing of acoustic signals, since their properties depend upon where the listener directs his attention and whether, and for how long, he expects the signal.
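The evoked responses discussed here are extracted by stimulus-locked averaging: background EEG that is uncorrelated with the stimulus averages out roughly as 1/sqrt(n_trials). A toy sketch on synthetic data follows (the sampling rate, click rate, and waveform are assumptions for illustration).

```python
import numpy as np

def evoked_average(eeg, onsets, fs, window_s=0.010):
    """Average stimulus-locked epochs; uncorrelated background EEG shrinks
    as 1/sqrt(n_trials), leaving the evoked response."""
    n = int(window_s * fs)
    epochs = np.stack([eeg[i:i + n] for i in onsets])
    return epochs.mean(axis=0)

# Hypothetical ABR-style recording: a tiny evoked wave repeated 3000 times.
fs = 20000
rng = np.random.default_rng(4)
wave = 0.1 * np.sin(2 * np.pi * 500 * np.arange(0, 0.01, 1 / fs))
eeg = rng.normal(0, 1.0, fs * 120)          # 120 s of noisy EEG
onsets = np.arange(0, 3000) * (fs // 30)    # clicks every ~33 ms
for i in onsets:
    eeg[i:i + wave.size] += wave

avg = evoked_average(eeg, onsets, fs)
residual = avg - wave
print(f"noise SD 1.0 -> residual SD {residual.std():.3f} after 3000 sweeps")
```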
Intensity-invariant coding in the auditory system.
Barbour, Dennis L
2011-11-01
The auditory system faithfully represents sufficient details from sound sources such that downstream cognitive processes are capable of acting upon this information effectively even in the face of signal uncertainty, degradation or interference. This robust sound source representation leads to an invariance in perception vital for animals to interact effectively with their environment. Due to unique nonlinearities in the cochlea, sound representations early in the auditory system exhibit a large amount of variability as a function of stimulus intensity. In other words, changes in stimulus intensity, such as for sound sources at differing distances, create a unique challenge for the auditory system to encode sounds invariantly across the intensity dimension. This challenge and some strategies available to sensory systems to eliminate intensity as an encoding variable are discussed, with a special emphasis upon sound encoding. Copyright © 2011 Elsevier Ltd. All rights reserved.
Beckers, Gabriël J L; Gahr, Manfred
2012-08-01
Auditory systems bias responses to sounds that are unexpected on the basis of recent stimulus history, a phenomenon that has been widely studied using sequences of unmodulated tones (mismatch negativity; stimulus-specific adaptation). Such a paradigm, however, does not directly reflect problems that neural systems normally solve for adaptive behavior. We recorded multiunit responses in the caudomedial auditory forebrain of anesthetized zebra finches (Taeniopygia guttata) at 32 sites simultaneously, to contact calls that recur probabilistically at a rate that is used in communication. Neurons in secondary, but not primary, auditory areas respond preferentially to calls when they are unexpected (deviant) compared with the same calls when they are expected (standard). This response bias is predominantly due to sites more often not responding to standard events than to deviant events. When two call stimuli alternate between standard and deviant roles, most sites exhibit a response bias to deviant events of both stimuli. This suggests that biases are not based on a use-dependent decrease in response strength but involve a more complex mechanism that is sensitive to auditory deviance per se. Furthermore, between many secondary sites, responses are tightly synchronized, a phenomenon that is driven by internal neuronal interactions rather than by the timing of stimulus acoustic features. We hypothesize that this deviance-sensitive, internally synchronized network of neurons is involved in the involuntary capturing of attention by unexpected and behaviorally potentially relevant events in natural auditory scenes.
Short-term memory for event duration: modality specificity and goal dependency.
Takahashi, Kohske; Watanabe, Katsumi
2012-11-01
Time perception is involved in various cognitive functions. This study investigated the characteristics of short-term memory for event duration by examining how the length of the retention period affects inter- and intramodal duration judgment. On each trial, a sample stimulus was followed by a comparison stimulus, after a variable delay period (0.5-5 s). The sample and comparison stimuli were presented in the visual or auditory modality. The participants determined whether the comparison stimulus was longer or shorter than the sample stimulus. The distortion pattern of subjective duration during the delay period depended on the sensory modality of the comparison stimulus but was not affected by that of the sample stimulus. When the comparison stimulus was visually presented, the retained duration of the sample stimulus was shortened as the delay period increased. In contrast, when the comparison stimulus was presented in the auditory modality, the delay period had little to no effect on the retained duration. Furthermore, whenever the participants did not know the sensory modality of the comparison stimulus beforehand, the effect of the delay period disappeared. These results suggest that the memory process for event duration is specific to sensory modality and that its performance depends on the sensory modality in which the retained duration will subsequently be used.
Primary auditory cortex regulates threat memory specificity.
Wigestrand, Mattis B; Schiff, Hillary C; Fyhn, Marianne; LeDoux, Joseph E; Sears, Robert M
2017-01-01
Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used muscimol infusions in rats to show that discriminatory threat learning requires Au1 activity specifically during memory acquisition and retrieval, but not during consolidation. Memory specificity was similarly disrupted by infusion of PKMζ inhibitor peptide (ZIP) during memory storage. Our findings show that Au1 is required at critical memory phases and suggest that Au1 plasticity enables stimulus discrimination. © 2016 Wigestrand et al.; Published by Cold Spring Harbor Laboratory Press.
Nishimura, Akio; Yokosawa, Kazuhiko
2009-08-01
In the present article, we investigated the effects of pitch height and the presented ear (laterality) of an auditory stimulus, irrelevant to the ongoing visual task, on horizontal response selection. Performance was better when the response and the stimulated ear spatially corresponded (Simon effect), and when the spatial-musical association of response codes (SMARC) correspondence was maintained; that is, a right (left) response with a high-pitched (low-pitched) tone. These findings reveal an automatic activation of spatially and musically associated responses by task-irrelevant auditory accessory stimuli. Pitch height was strong enough to influence horizontal responses despite the modality difference with the task target.
ERIC Educational Resources Information Center
Hernandez, Oscar H.; Vogel-Sprott, Muriel
2009-01-01
This within-subjects experiment tested the relationship between the premotor (cognitive) component of reaction time (RT) to a missing stimulus and parameters of the omitted stimulus potential (OSP) brain wave. Healthy young men (N = 28) completed trials with an auditory stimulus that recurred at 2 s intervals and ceased unpredictably. Premotor RT…
Does the Auditory Saltation Stimulus Distinguish Dyslexic from Competently Reading Adults?
ERIC Educational Resources Information Center
Kidd, Joanna C.; Hogben, John H.
2007-01-01
Purpose: Where the auditory saltation illusion has been used as a measure of auditory temporal processing (ATP) in dyslexia, conflicting results have been apparent (cf. R. Hari & P. Kiesila, 1996; M. Kronbichler, F. Hutzler, & H. Wimmer, 2002). This study sought to re-examine these findings by investigating whether dyslexia is characterized by…
2017-01-01
In multisensory integration, processing in one sensory modality is enhanced by complementary information from other modalities. Intersensory timing is crucial in this process because only inputs reaching the brain within a restricted temporal window are perceptually bound. Previous research in the audiovisual field has investigated various features of the temporal binding window, revealing asymmetries in its size and plasticity depending on the leading input: auditory–visual (AV) or visual–auditory (VA). Here, we tested whether separate neuronal mechanisms underlie this AV–VA dichotomy in humans. We recorded high-density EEG while participants performed an audiovisual simultaneity judgment task including various AV–VA asynchronies and unisensory control conditions (visual-only, auditory-only) and tested whether AV and VA processing generate different patterns of brain activity. After isolating the multisensory components of AV–VA event-related potentials (ERPs) from the sum of their unisensory constituents, we ran a time-resolved topographical representational similarity analysis (tRSA) comparing the AV and VA ERP maps. Spatial cross-correlation matrices were built from real data to index the similarity between the AV and VA maps at each time point (500 ms window after stimulus) and then correlated with two alternative similarity model matrices: AVmaps = VAmaps versus AVmaps ≠ VAmaps. The tRSA results favored the AVmaps ≠ VAmaps model across all time points, suggesting that audiovisual temporal binding (indexed by synchrony perception) engages different neural pathways depending on the leading sense. The existence of such dual route supports recent theoretical accounts proposing that multiple binding mechanisms are implemented in the brain to accommodate different information parsing strategies in auditory and visual sensory systems. SIGNIFICANCE STATEMENT Intersensory timing is a crucial aspect of multisensory integration, determining whether and how inputs in one modality enhance stimulus processing in another modality. Our research demonstrates that evaluating synchrony of auditory-leading (AV) versus visual-leading (VA) audiovisual stimulus pairs is characterized by two distinct patterns of brain activity. This suggests that audiovisual integration is not a unitary process and that different binding mechanisms are recruited in the brain based on the leading sense. These mechanisms may be relevant for supporting different classes of multisensory operations, for example, auditory enhancement of visual attention (AV) and visual enhancement of auditory speech (VA). PMID:28450537
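A simplified sketch of the topographic comparison at the heart of this analysis, assuming hypothetical channels-by-time ERP arrays: the spatial correlation between the AV and VA maps is computed at each time point, which is the quantity the two model matrices are tested against. This is an illustration, not the authors' tRSA code.

```python
import numpy as np

rng = np.random.default_rng(5)
n_ch, n_t = 128, 250                        # 500 ms at an assumed 500 Hz
av_maps = rng.normal(size=(n_ch, n_t))      # AV minus (A + V) component
va_maps = rng.normal(size=(n_ch, n_t))      # VA minus (V + A) component

def spatial_corr(a, b):
    """Pearson correlation between two scalp maps (one time point each)."""
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

# Similarity of AV and VA topographies at every time point; values near 1
# favor a shared-map model, values near 0 favor distinct AV/VA maps.
r_t = np.array([spatial_corr(av_maps[:, t], va_maps[:, t])
                for t in range(n_t)])
print(f"median AV-VA map similarity: {np.median(r_t):.2f}")
```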
Staib, Jennifer M; Della Valle, Rebecca; Knox, Dayan K
2018-07-01
In classical fear conditioning, a neutral conditioned stimulus (CS) is paired with an aversive unconditioned stimulus (US), which leads to a fear memory. If the CS is repeatedly presented without the US after fear conditioning, the formation of an extinction memory occurs, which inhibits fear memory expression. A previous study has demonstrated that selective cholinergic lesions in the medial septum and vertical limb of the diagonal band of Broca (MS/vDBB) prior to fear and extinction learning disrupt contextual fear memory discrimination and acquisition of extinction memory. MS/vDBB cholinergic neurons project to a number of substrates that are critical for fear and extinction memory. However, it is currently unknown which of these efferent projections are critical for contextual fear memory discrimination and extinction memory. To address this, we induced cholinergic lesions in efferent targets of MS/vDBB cholinergic neurons. These included the dorsal hippocampus (dHipp), ventral hippocampus (vHipp), medial prefrontal cortex (mPFC), and the mPFC and dHipp combined. None of these lesion groups exhibited deficits in contextual fear memory discrimination or extinction memory. However, vHipp cholinergic lesions disrupted auditory fear memory. Because MS/vDBB cholinergic neurons are the sole source of acetylcholine in the vHipp, these results suggest that MS/vDBB cholinergic input to the vHipp is critical for auditory fear memory. Taken together with previous findings, the results of this study suggest that MS/vDBB cholinergic neurons are critical for fear and extinction memory, though further research is needed to elucidate the role of MS/vDBB cholinergic neurons in these types of emotional memory. Copyright © 2018 Elsevier Inc. All rights reserved.
Characterizing the roles of alpha and theta oscillations in multisensory attention.
Keller, Arielle S; Payne, Lisa; Sekuler, Robert
2017-05-01
Cortical alpha oscillations (8-13 Hz) appear to play a role in suppressing distractions when just one sensory modality is being attended, but do they also contribute when attention is distributed over multiple sensory modalities? For an answer, we examined cortical oscillations in human subjects who were dividing attention between auditory and visual sequences. In Experiment 1, subjects performed an oddball task with auditory, visual, or simultaneous audiovisual sequences in separate blocks, while the electroencephalogram was recorded using high-density scalp electrodes. Alpha oscillations were present continuously over posterior regions while subjects were attending to auditory sequences. This supports the idea that the brain suppresses processing of visual input in order to advantage auditory processing. During a divided-attention audiovisual condition, an oddball (a rare, unusual stimulus) occurred in either the auditory or the visual domain, requiring that attention be divided between the two modalities. Fronto-central theta band (4-7 Hz) activity was strongest in this audiovisual condition, when subjects monitored auditory and visual sequences simultaneously. Theta oscillations have been associated with both attention and with short-term memory. Experiment 2 sought to distinguish these possible roles of fronto-central theta activity during multisensory divided attention. Using a modified version of the oddball task from Experiment 1, Experiment 2 showed that differences in theta power among conditions were independent of short-term memory load. Ruling out theta's association with short-term memory, we conclude that fronto-central theta activity is likely a marker of multisensory divided attention. Copyright © 2017 Elsevier Ltd. All rights reserved.
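Band-limited power of the kind reported here can be estimated with Welch's method; a minimal sketch on synthetic single-channel EEG follows (the sampling rate and duration are assumptions; the band edges follow the abstract).

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, lo, hi):
    """Mean PSD in a frequency band, via Welch's method."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

# Hypothetical single-channel EEG (60 s at an assumed 250 Hz).
fs = 250
rng = np.random.default_rng(6)
eeg = rng.normal(size=fs * 60)
alpha = band_power(eeg, fs, 8, 13)    # posterior alpha (8-13 Hz)
theta = band_power(eeg, fs, 4, 7)     # fronto-central theta (4-7 Hz)
print(f"alpha power {alpha:.4f}, theta power {theta:.4f}")
```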
Discrimination of sound source velocity in human listeners
NASA Astrophysics Data System (ADS)
Carlile, Simon; Best, Virginia
2002-02-01
The ability of six human subjects to discriminate the velocity of moving sound sources was examined using broadband stimuli presented in virtual auditory space. Subjects were presented with two successive stimuli moving in the frontal horizontal plane level with the ears, and were required to judge which moved the fastest. Discrimination thresholds were calculated for reference velocities of 15, 30, and 60 degrees/s under three stimulus conditions. In one condition, stimuli were centered on 0° azimuth and their duration varied randomly to prevent subjects from using displacement as an indicator of velocity. Performance varied between subjects giving median thresholds of 5.5, 9.1, and 14.8 degrees/s for the three reference velocities, respectively. In a second condition, pairs of stimuli were presented for a constant duration and subjects would have been able to use displacement to assist their judgment as faster stimuli traveled further. It was found that thresholds decreased significantly for all velocities (3.8, 7.1, and 9.8 degrees/s), suggesting that the subjects were using the additional displacement cue. The third condition differed from the second in that the stimuli were "anchored" on the same starting location rather than centered on the midline, thus doubling the spatial offset between stimulus endpoints. Subjects showed the lowest thresholds in this condition (2.9, 4.0, and 7.0 degrees/s). The results suggested that the auditory system is sensitive to velocity per se, but velocity comparisons are greatly aided if displacement cues are present.
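Discrimination thresholds like these are typically read off a fitted psychometric function; the sketch below fits a cumulative Gaussian to hypothetical two-interval data and takes the 75%-correct point (which, under this parameterization, is the fitted mean). The values are invented, not the authors' data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

ref = 30.0                                             # deg/s reference
deltas = np.array([1, 2, 4, 8, 16], dtype=float)       # velocity increments
p_faster = np.array([0.55, 0.62, 0.78, 0.92, 0.99])    # proportion correct

def psychometric(d, mu, sigma):
    """2-interval forced choice: chance = 0.5, rising to 1.0."""
    return 0.5 + 0.5 * norm.cdf(d, loc=mu, scale=sigma)

popt, _ = curve_fit(psychometric, deltas, p_faster, p0=(4.0, 4.0))
mu, sigma = popt
# p = 0.75 when the cumulative Gaussian is at 0.5, i.e. at d = mu.
print(f"75%-correct threshold at {ref:.0f} deg/s: {mu:.1f} deg/s "
      f"(spread {sigma:.1f})")
```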
Nieto-Diego, Javier; Malmierca, Manuel S.
2016-01-01
Stimulus-specific adaptation (SSA) in single neurons of the auditory cortex was suggested to be a potential neural correlate of the mismatch negativity (MMN), a widely studied component of the auditory event-related potentials (ERP) that is elicited by changes in the auditory environment. However, several aspects on this SSA/MMN relation remain unresolved. SSA occurs in the primary auditory cortex (A1), but detailed studies on SSA beyond A1 are lacking. To study the topographic organization of SSA, we mapped the whole rat auditory cortex with multiunit activity recordings, using an oddball paradigm. We demonstrate that SSA occurs outside A1 and differs between primary and nonprimary cortical fields. In particular, SSA is much stronger and develops faster in the nonprimary than in the primary fields, paralleling the organization of subcortical SSA. Importantly, strong SSA is present in the nonprimary auditory cortex within the latency range of the MMN in the rat and correlates with an MMN-like difference wave in the simultaneously recorded local field potentials (LFP). We present new and strong evidence linking SSA at the cellular level to the MMN, a central tool in cognitive and clinical neuroscience. PMID:26950883
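SSA of this kind is commonly summarized with a normalized index contrasting deviant and standard responses; a minimal sketch with invented firing rates (not the study's measurements):

```python
import numpy as np

def ssa_index(deviant_rate, standard_rate):
    """Common SSA index: (D - S) / (D + S); 0 = no adaptation,
    values approaching 1 = responses almost exclusively to deviants."""
    return (deviant_rate - standard_rate) / (deviant_rate + standard_rate)

# Hypothetical mean firing rates (spikes/s) under an oddball paradigm.
print(f"primary field    SI = {ssa_index(18.0, 14.0):.2f}")   # weak SSA
print(f"nonprimary field SI = {ssa_index(18.0, 4.0):.2f}")    # strong SSA
```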
Examining age-related differences in auditory attention control using a task-switching procedure.
Lawo, Vera; Koch, Iring
2014-03-01
Using a novel task-switching variant of dichotic selective listening, we examined age-related differences in the ability to intentionally switch auditory attention between 2 speakers defined by their sex. In our task, young (M age = 23.2 years) and older adults (M age = 66.6 years) performed a numerical size categorization on spoken number words. The task-relevant speaker was indicated by a cue prior to auditory stimulus onset. The cuing interval was either short or long and varied randomly trial by trial. We found clear performance costs with instructed attention switches. These auditory attention switch costs decreased with prolonged cue-stimulus interval. Older adults were generally much slower (but not more error prone) than young adults, but switching-related effects did not differ across age groups. These data suggest that the ability to intentionally switch auditory attention in a selective listening task is not compromised in healthy aging. We discuss the role of modality-specific factors in age-related differences.
Sturza, Julie; Silver, Monica K; Xu, Lin; Li, Mingyan; Mai, Xiaoqin; Xia, Yankai; Shao, Jie; Lozoff, Betsy; Meeker, John
2016-01-01
Pesticides are associated with poorer neurodevelopmental outcomes, but little is known about their effects on sensory functioning. Auditory brainstem response (ABR) and pesticide data were available for 27 healthy, full-term 9-month-old infants participating in a larger study of early iron deficiency and neurodevelopment. Cord blood was analyzed by gas chromatography-mass spectrometry for levels of 20 common pesticides. The ABR forward-masking condition consisted of a click stimulus (masker) delivered via ear canal transducers, followed by an identical stimulus delayed by 8, 16, or 64 milliseconds (ms). ABR peak latencies were evaluated as a function of masker-stimulus time interval. Shorter wave latencies reflect faster neural conduction, more mature auditory pathways, and a greater degree of myelination. Linear regression models were used to evaluate associations between the total number of pesticides detected and ABR outcomes. We considered an additive or synergistic effect of poor iron status by stratifying the analysis by newborn ferritin (based on a median split). Infants in the sample were highly exposed to pesticides; a mean of 4.1 pesticides were detected (range 0-9). ABR Wave V latency and central conduction time (CCT) were associated with the number of pesticides detected in cord blood for the 64 ms and non-masker conditions. A similar pattern was seen for CCT in the 8 ms and 16 ms conditions, although statistical significance was not reached. Increased pesticide exposure was associated with longer latency. The relation between the number of pesticides detected in cord blood and CCT depended on the infant's cord blood ferritin level. Specifically, the relation was present in the lower cord blood ferritin group but not the higher cord blood ferritin group. ABR processing was slower in infants with greater prenatal pesticide exposure, indicating impaired neuromaturation. Infants with lower cord blood ferritin appeared to be more sensitive to the effects of prenatal pesticide exposure on ABR latency delay, suggesting an additive or multiplicative effect. Copyright © 2016 Elsevier Ltd. All rights reserved.
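A sketch of the stratified regression approach described above, on invented data (the study's actual measurements are not reproduced): latency is regressed on pesticide count separately within each ferritin group.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(7)
n = 27
pesticides = rng.integers(0, 10, size=n)          # count detected, 0-9
ferritin_low = rng.random(n) < 0.5                # median split (invented)
# Hypothetical CCT (ms): exposure lengthens latency only in the low group.
cct = 5.0 + 0.05 * pesticides * ferritin_low + rng.normal(0, 0.1, n)

for group, mask in [("low ferritin", ferritin_low),
                    ("high ferritin", ~ferritin_low)]:
    fit = linregress(pesticides[mask], cct[mask])
    print(f"{group}: slope {fit.slope:.3f} ms/pesticide, p = {fit.pvalue:.3f}")
```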
Dissociating verbal and nonverbal audiovisual object processing.
Hocking, Julia; Price, Cathy J
2009-02-01
This fMRI study investigates how audiovisual integration differs for verbal stimuli that can be matched at a phonological level and nonverbal stimuli that can be matched at a semantic level. Subjects were presented simultaneously with one visual and one auditory stimulus and were instructed to decide whether these stimuli referred to the same object or not. Verbal stimuli were simultaneously presented spoken and written object names, and nonverbal stimuli were photographs of objects simultaneously presented with naturally occurring object sounds. Stimulus differences were controlled by including two further conditions that paired photographs of objects with spoken words and object sounds with written words. Verbal matching, relative to all other conditions, increased activation in a region of the left superior temporal sulcus that has previously been associated with phonological processing. Nonverbal matching, relative to all other conditions, increased activation in a right fusiform region that has previously been associated with structural and conceptual object processing. Thus, we demonstrate how brain activation for audiovisual integration depends on the verbal content of the stimuli, even when stimulus and task processing differences are controlled.
He, Shuman; Grose, John H; Teagle, Holly F B; Woodard, Jennifer; Park, Lisa R; Hatch, Debora R; Buchman, Craig A
2013-01-01
This study aimed (1) to investigate the feasibility of recording the electrically evoked auditory event-related potential (eERP), including the onset P1-N1-P2 complex and the electrically evoked auditory change complex (EACC) in response to temporal gaps, in children with auditory neuropathy spectrum disorder (ANSD); and (2) to evaluate the relationship between these measures and speech-perception abilities in these subjects. Fifteen ANSD children who are Cochlear Nucleus device users participated in this study. For each subject, the speech-processor microphone was bypassed and the eERPs were elicited by direct stimulation of one mid-array electrode (electrode 12). The stimulus was a train of biphasic current pulses 800 msec in duration. Two basic stimulation conditions were used to elicit the eERP. In the no-gap condition, the entire pulse train was delivered uninterrupted to electrode 12, and the onset P1-N1-P2 complex was measured relative to the stimulus onset. In the gapped condition, the stimulus consisted of two pulse train bursts, each being 400 msec in duration, presented sequentially on the same electrode and separated by one of five gaps (i.e., 5, 10, 20, 50, and 100 msec). Open-set speech-perception ability of these subjects with ANSD was assessed using the phonetically balanced kindergarten (PBK) word lists presented at 60 dB SPL, using monitored live voice in a sound booth. The eERPs were recorded from all subjects with ANSD who participated in this study. There were no significant differences in test-retest reliability, root mean square amplitude or P1 latency for the onset P1-N1-P2 complex between subjects with good (>70% correct on PBK words) and poorer speech-perception performance. In general, the EACC showed less mature morphological characteristics than the onset P1-N1-P2 response recorded from the same subject. There was a robust correlation between the PBK word scores and the EACC thresholds for gap detection. Subjects with poorer speech-perception performance showed larger EACC thresholds in this study. These results demonstrate the feasibility of recording eERPs from implanted children with ANSD, using direct electrical stimulation. Temporal-processing deficits, as demonstrated by large EACC thresholds for gap detection, might account in part for the poor speech-perception performances observed in a subgroup of implanted subjects with ANSD. This finding suggests that the EACC elicited by changes in temporal continuity (i.e., gap) holds promise as a predictor of speech-perception ability among implanted children with ANSD.
Brain stem auditory-evoked response in the nonanesthetized horse and pony.
Marshall, A E
1985-07-01
The brain stem auditory-evoked response (BAER) was measured in 10 horses and 7 ponies under conditions suitable for clinical diagnostic testing. Latencies of 5 vertex-positive peaks, as well as the interpeak latency and amplitude ratio between the 1st and 4th peaks, were determined. Data from horses and ponies were analyzed separately and were compared. The stimulus was a click (n = 3,000) ranging from 10- to 90-dB hearing level (HL). Neither horses nor ponies responded with a BAER at 10 dB, nor did they give reliable responses at less than 50 dB. The 2nd of the BAER waves appeared in the record at lower stimulus intensities than did the 1st wave for the horse and pony. Horses and ponies had decreasing latencies for all waves as stimulus intensity increased. Latencies were shorter for the ponies than for the horses at all stimulus intensities for the 1st, 2nd, 3rd, and 4th waves, but not the 5th wave. At 60-dB HL, the mean latencies for the 1st through 5th wave, respectively, for the horse were 1.73, 3.08, 3.93, 4.98, and 6.00 ms and for the pony 1.48, 2.73, 3.50, 4.56, and 6.58 ms. Interpeak latencies, 1st to 4th wave, averaged 3.22 ms (horse) and 3.11 ms (pony) for all stimulus intensities from 50- to 90-dB HL and had a tendency to decrease slightly as stimulus intensity increased. Amplitude ratios (4th wave/1st wave) were less than 1 for all stimulus intensities in the horse. In the pony, the ratio was less than 1 at greater than or equal to 70-dB HL and greater than 1 at less than or equal to 60-dB HL.
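For illustration, the interpeak latency and amplitude-ratio measures reported above are simple arithmetic on per-wave peak values. A sketch using the horse's 60-dB HL latencies from the abstract; the amplitudes are made-up placeholders:

```python
import numpy as np

# Waves 1-5 for the horse at 60-dB HL (latencies from the abstract, ms);
# amplitudes are illustrative placeholders (uV).
latencies = np.array([1.73, 3.08, 3.93, 4.98, 6.00])
amplitudes = np.array([0.9, 0.4, 0.6, 0.7, 0.5])

interpeak_1_4 = latencies[3] - latencies[0]    # 1st-to-4th interpeak latency
amp_ratio_4_1 = amplitudes[3] / amplitudes[0]  # expected < 1 in the horse

print(f"interpeak I-IV: {interpeak_1_4:.2f} ms, "
      f"IV/I amplitude ratio: {amp_ratio_4_1:.2f}")
```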
Cortical response variability as a developmental index of selective auditory attention
Strait, Dana L.; Slater, Jessica; Abecassis, Victor; Kraus, Nina
2014-01-01
Attention induces synchronicity in neuronal firing for the encoding of a given stimulus to the exclusion of others. Recently, we reported decreased variability in scalp-recorded cortical evoked potentials to attended compared with ignored speech in adults. Here we aimed to determine the developmental time course for this neural index of auditory attention. We compared cortical auditory-evoked variability with attention across three age groups: preschoolers, school-aged children and young adults. Results reveal an increased impact of selective auditory attention on cortical response variability with development. Although all three age groups have equivalent response variability to attended speech, only school-aged children and adults have a distinction between attend and ignore conditions. Preschoolers, on the other hand, demonstrate no impact of attention on cortical responses, which we argue reflects the gradual emergence of attention within this age range. Outcomes are interpreted in the context of the behavioral relevance of cortical response variability and its potential to serve as a developmental index of cognitive skill. PMID:24267508
Burnham, Denis; Dodd, Barbara
2004-12-01
The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as "da" or "tha," was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4 1/2-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. Copyright 2004 Wiley Periodicals, Inc.
Happel, Max F. K.; Ohl, Frank W.
2017-01-01
Robust perception of auditory objects over a large range of sound intensities is a fundamental feature of the auditory system. However, firing characteristics of single neurons across the entire auditory system, like the frequency tuning, can change significantly with stimulus intensity. Physiological correlates of level-constancy of auditory representations hence should be manifested on the level of larger neuronal assemblies or population patterns. In this study we have investigated how information of frequency and sound level is integrated on the circuit-level in the primary auditory cortex (AI) of the Mongolian gerbil. We used a combination of pharmacological silencing of corticocortically relayed activity and laminar current source density (CSD) analysis. Our data demonstrate that with increasing stimulus intensities progressively lower frequencies lead to the maximal impulse response within cortical input layers at a given cortical site inherited from thalamocortical synaptic inputs. We further identified a temporally precise intercolumnar synaptic convergence of early thalamocortical and horizontal corticocortical inputs. Later tone-evoked activity in upper layers showed a preservation of broad tonotopic tuning across sound levels without shifts towards lower frequencies. Synaptic integration within corticocortical circuits may hence contribute to a level-robust representation of auditory information on a neuronal population level in the auditory cortex. PMID:28046062
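The laminar current source density (CSD) analysis used above is, at its core, a second spatial derivative of the field potential along the electrode depth axis. A minimal sketch of the standard finite-difference estimator (arbitrary conductivity scaling; not the authors' exact pipeline):

```python
import numpy as np

def csd(lfp, spacing_mm=0.1):
    """Second-order finite-difference CSD along the depth axis.

    lfp: array (n_channels, n_samples), channels ordered superficial->deep.
    Returns CSD for the interior channels; sinks negative, sources positive.
    """
    return -(lfp[2:] - 2.0 * lfp[1:-1] + lfp[:-2]) / spacing_mm**2

lfp = np.random.randn(16, 2000)  # placeholder laminar recording
print(csd(lfp).shape)            # (14, 2000): interior channels only
```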
Cholecystokinin from the entorhinal cortex enables neural plasticity in the auditory cortex
Li, Xiao; Yu, Kai; Zhang, Zicong; Sun, Wenjian; Yang, Zhou; Feng, Jingyu; Chen, Xi; Liu, Chun-Hua; Wang, Haitao; Guo, Yi Ping; He, Jufang
2014-01-01
Patients with damage to the medial temporal lobe show deficits in forming new declarative memories but can still recall older memories, suggesting that the medial temporal lobe is necessary for encoding memories in the neocortex. Here, we found that cortical projection neurons in the perirhinal and entorhinal cortices were mostly immunopositive for cholecystokinin (CCK). Local infusion of CCK in the auditory cortex of anesthetized rats induced plastic changes that enabled cortical neurons to potentiate their responses or to start responding to an auditory stimulus that was paired with a tone that robustly triggered action potentials. CCK infusion also enabled auditory neurons to start responding to a light stimulus that was paired with a noise burst. In vivo intracellular recordings in the auditory cortex showed that synaptic strength was potentiated after two pairings of presynaptic and postsynaptic activity in the presence of CCK. Infusion of a CCKB antagonist in the auditory cortex prevented the formation of a visuo-auditory association in awake rats. Finally, activation of the entorhinal cortex potentiated neuronal responses in the auditory cortex, which was suppressed by infusion of a CCKB antagonist. Together, these findings suggest that the medial temporal lobe influences neocortical plasticity via CCK-positive cortical projection neurons in the entorhinal cortex. PMID:24343575
Noise-induced hearing loss alters the temporal dynamics of auditory-nerve responses
Scheidt, Ryan E.; Kale, Sushrut; Heinz, Michael G.
2010-01-01
Auditory-nerve fibers demonstrate dynamic response properties in that they adapt to rapid changes in sound level, both at the onset and offset of a sound. These dynamic response properties affect temporal coding of stimulus modulations that are perceptually relevant for many sounds such as speech and music. Temporal dynamics have been well characterized in auditory-nerve fibers from normal-hearing animals, but little is known about the effects of sensorineural hearing loss on these dynamics. This study examined the effects of noise-induced hearing loss on the temporal dynamics in auditory-nerve fiber responses from anesthetized chinchillas. Post-stimulus time histograms were computed from responses to 50-ms tones presented at characteristic frequency and 30 dB above fiber threshold. Several response metrics related to temporal dynamics were computed from post-stimulus-time histograms and were compared between normal-hearing and noise-exposed animals. Results indicate that noise-exposed auditory-nerve fibers show significantly reduced response latency, increased onset response and percent adaptation, faster adaptation after onset, and slower recovery after offset. The decrease in response latency only occurred in noise-exposed fibers with significantly reduced frequency selectivity. These changes in temporal dynamics have important implications for temporal envelope coding in hearing-impaired ears, as well as for the design of dynamic compression algorithms for hearing aids. PMID:20696230
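As a rough illustration of the PSTH-derived metrics mentioned above, the sketch below bins pooled spike times and computes an onset rate, a steady-state rate, and percent adaptation; the window sizes and bin width are assumptions, not the study's exact definitions:

```python
import numpy as np

def psth_metrics(spike_times_ms, n_reps, tone_ms=50.0, bin_ms=1.0):
    edges = np.arange(0.0, tone_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times_ms, bins=edges)
    rate = counts / (n_reps * bin_ms / 1000.0)  # spikes/s per bin
    onset = rate[:10].max()                     # peak within first 10 ms
    steady = rate[-20:].mean()                  # final 20 ms of the tone
    pct_adapt = 100.0 * (onset - steady) / onset if onset > 0 else np.nan
    return onset, steady, pct_adapt

spikes = np.random.uniform(0.0, 50.0, 400)  # placeholder pooled spike times (ms)
print(psth_metrics(spikes, n_reps=100))
```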
Emberson, Lauren L.; Cannon, Grace; Palmeri, Holly; Richards, John E.; Aslin, Richard N.
2016-01-01
How does the developing brain respond to recent experience? Repetition suppression (RS) is a robust and well-characterized response to recent experience found, predominantly, in the perceptual cortices of the adult brain. We use functional near-infrared spectroscopy (fNIRS) to investigate how perceptual (temporal and occipital) and frontal cortices in the infant brain respond to auditory and visual stimulus repetitions (spoken words and faces). In Experiment 1, we find strong evidence of repetition suppression in the frontal cortex but only for auditory stimuli. In perceptual cortices, we find only suggestive evidence of auditory RS in the temporal cortex and no evidence of visual RS in any ROI. In Experiments 2 and 3, we replicate and extend these findings. Overall, we provide the first evidence that infant and adult brains respond differently to stimulus repetition. We suggest that the frontal lobe may support the development of RS in perceptual cortices. PMID:28012401
Auditory peripersonal space in humans.
Farnè, Alessandro; Làdavas, Elisabetta
2002-10-01
In the present study we report neuropsychological evidence of the existence of an auditory peripersonal space representation around the head in humans and its characteristics. In a group of right brain-damaged patients with tactile extinction, we found that a sound delivered near the ipsilesional side of the head (20 cm) strongly extinguished a tactile stimulus delivered to the contralesional side of the head (cross-modal auditory-tactile extinction). By contrast, when an auditory stimulus was presented far from the head (70 cm), cross-modal extinction was dramatically reduced. This spatially specific cross-modal extinction was most consistently found (i.e., both in the front and back spaces) when a complex sound was presented, like a white noise burst. Pure tones produced spatially specific cross-modal extinction when presented in the back space, but not in the front space. In addition, the most severe cross-modal extinction emerged when sounds came from behind the head, thus showing that the back space is more sensitive than the front space to the sensory interaction of auditory-tactile inputs. Finally, when cross-modal effects were investigated by reversing the spatial arrangement of cross-modal stimuli (i.e., touch on the right and sound on the left), we found that an ipsilesional tactile stimulus, although inducing a small amount of cross-modal tactile-auditory extinction, did not produce any spatial-specific effect. Therefore, the selective aspects of cross-modal interaction found near the head cannot be explained by a competition between a damaged left spatial representation and an intact right spatial representation. Thus, consistent with neurophysiological evidence from monkeys, our findings strongly support the existence, in humans, of an integrated cross-modal system coding auditory and tactile stimuli near the body, that is, in the peripersonal space.
Potes, Cristhian; Brunner, Peter; Gunduz, Aysegul; Knight, Robert T; Schalk, Gerwin
2014-08-15
Neuroimaging approaches have implicated multiple brain sites in musical perception, including the posterior part of the superior temporal gyrus and adjacent perisylvian areas. However, the detailed spatial and temporal relationship of neural signals that support auditory processing is largely unknown. In this study, we applied a novel inter-subject analysis approach to electrophysiological signals recorded from the surface of the brain (electrocorticography, ECoG) in ten human subjects. This approach allowed us to reliably identify those ECoG features that were related to the processing of a complex auditory stimulus (i.e., continuous piece of music) and to investigate their spatial, temporal, and causal relationships. Our results identified stimulus-related modulations in the alpha (8-12 Hz) and high gamma (70-110 Hz) bands at neuroanatomical locations implicated in auditory processing. Specifically, we identified stimulus-related ECoG modulations in the alpha band in areas adjacent to primary auditory cortex, which are known to receive afferent auditory projections from the thalamus (80 of a total of 15,107 tested sites). In contrast, we identified stimulus-related ECoG modulations in the high gamma band not only in areas close to primary auditory cortex but also in other perisylvian areas known to be involved in higher-order auditory processing, and in superior premotor cortex (412/15,107 sites). Across all implicated areas, modulations in the high gamma band preceded those in the alpha band by 280 ms, and activity in the high gamma band causally predicted alpha activity, but not vice versa (Granger causality, p < 1e-8). Additionally, detailed analyses using Granger causality identified causal relationships of high gamma activity between distinct locations in early auditory pathways within superior temporal gyrus (STG) and posterior STG, between posterior STG and inferior frontal cortex, and between STG and premotor cortex. Evidence suggests that these relationships reflect direct cortico-cortical connections rather than common driving input from subcortical structures such as the thalamus. In summary, our inter-subject analyses defined the spatial and temporal relationships between music-related brain activity in the alpha and high gamma bands. They provide experimental evidence supporting current theories about the putative mechanisms of alpha and gamma activity, i.e., reflections of thalamo-cortical interactions and local cortical neural activity, respectively, and the results are also in agreement with existing functional models of auditory processing. Copyright © 2014 Elsevier Inc. All rights reserved.
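A toy version of the two core computations above (band-limited amplitude envelopes, then a Granger test of high gamma predicting alpha), assuming a 1 kHz sampling rate and placeholder data; the authors' inter-subject procedure is far more involved:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from statsmodels.tsa.stattools import grangercausalitytests

fs = 1000.0  # assumed sampling rate (Hz)

def band_envelope(x, lo, hi):
    # Band-pass filter, then take the analytic amplitude envelope.
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

ecog = np.random.randn(int(60 * fs))      # placeholder 60-s ECoG channel
gamma = band_envelope(ecog, 70.0, 110.0)  # high gamma (70-110 Hz)
alpha = band_envelope(ecog, 8.0, 12.0)    # alpha (8-12 Hz)

# statsmodels convention: null is that the SECOND column does not
# Granger-cause the first, so this tests gamma -> alpha (downsampled
# here to keep the VAR fit small).
grangercausalitytests(np.column_stack([alpha, gamma])[::10], maxlag=5)
```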
Niwa, Mamiko; Johnson, Jeffrey S.; O’Connor, Kevin N.; Sutter, Mitchell L.
2013-01-01
We recorded from middle-lateral (ML) and primary (A1) auditory cortex while macaques discriminated amplitude modulated (AM) from unmodulated noise. Compared to A1, ML had a higher proportion of neurons that encode increasing AM depth by decreasing their firing-rates (‘decreasing’ neurons), particularly with responses that were not synchronized to the modulation. Choice probability (CP) analysis revealed that A1 and ML activity were different during the first half of the test stimulus. In A1, significant CP begins prior to the test stimulus, remains relatively constant (or increases slightly) during the stimulus and increases greatly within 200 ms of lever-release. Neurons in ML behave similarly, except that significant CP disappears during the first half of the stimulus and reappears during the second half and pre-release periods. CP differences between A1 and ML depend on neural response type. In ML (but not A1), when activity is lower during the first half of the stimulus in non-synchronized ‘decreasing’ neurons, the monkey is more likely to report AM. Neurons that both increase firing rate with increasing modulation depth (‘increasing’ neurons) and synchronize their responses to AM had similar choice-related activity dynamics in ML and A1. The results suggest that, when ascending the auditory system, there is a transformation in coding AM from primarily synchronized ‘increasing’ responses in A1 to non-synchronized and dual (‘increasing’/’decreasing’) coding in ML. This sensory transformation is accompanied by changes in the timing of activity related to choice, suggesting functional differences between A1 and ML related to attention and/or behavior. PMID:23658177
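Choice probability (CP), as used above, is conventionally the area under the ROC curve separating single-trial firing rates by the animal's report. A minimal sketch with placeholder data; with this sign convention, 'decreasing' neurons whose lower rates precede "AM" reports would yield values below 0.5:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def choice_probability(firing_rates, reported_am):
    # ROC area: how well trial rates separate "AM" vs "no AM" reports;
    # 0.5 means no choice-related information in the rates.
    return roc_auc_score(reported_am.astype(int), firing_rates)

rates = np.random.poisson(10, 200).astype(float)  # placeholder trial rates
choices = np.random.rand(200) > 0.5               # placeholder reports
print(choice_probability(rates, choices))
```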
The Effect of Cognitive Control on Different Types of Auditory Distraction.
Bell, Raoul; Röer, Jan P; Marsh, John E; Storch, Dunja; Buchner, Axel
2017-09-01
Deviant as well as changing auditory distractors interfere with short-term memory. According to the duplex model of auditory distraction, the deviation effect is caused by a shift of attention while the changing-state effect is due to obligatory order processing. This theory predicts that foreknowledge should reduce the deviation effect, but should have no effect on the changing-state effect. We compared the effect of foreknowledge on the two phenomena directly within the same experiment. In a pilot study, specific foreknowledge was ineffective in reducing either the changing-state effect or the deviation effect, but it reduced disruption by sentential speech, suggesting that the effects of foreknowledge on auditory distraction may increase with the complexity of the stimulus material. Given the unexpected nature of this finding, we tested whether the same finding would be obtained in (a) a direct preregistered replication in Germany and (b) an additional replication with translated stimulus materials in Sweden.
An Adaptive Neural Mechanism for Acoustic Motion Perception with Varying Sparsity
Shaikh, Danish; Manoonpong, Poramate
2017-01-01
Biological motion-sensitive neural circuits are quite adept at perceiving the relative motion of a relevant stimulus. Motion perception is a fundamental ability in neural sensory processing and crucial in target tracking tasks. Tracking a stimulus entails the ability to perceive its motion, i.e., extracting information about its direction and velocity. Here we focus on auditory motion perception of sound stimuli, which is poorly understood as compared to its visual counterpart. In earlier work we have developed a bio-inspired neural learning mechanism for acoustic motion perception. The mechanism extracts directional information via a model of the peripheral auditory system of lizards. The mechanism uses only this directional information obtained via specific motor behaviour to learn the angular velocity of unoccluded sound stimuli in motion. In nature, however, the stimulus being tracked may be occluded by artefacts in the environment, such as an escaping prey momentarily disappearing behind a cover of trees. This article extends the earlier work by presenting a comparative investigation of auditory motion perception for unoccluded and occluded tonal sound stimuli with a frequency of 2.2 kHz in both simulation and practice. Three instances of each stimulus are employed, differing in their movement velocities: 0.5°/time step, 1.0°/time step, and 1.5°/time step. To validate the approach in practice, we implement the proposed neural mechanism on a wheeled mobile robot and evaluate its performance in auditory tracking. PMID:28337137
Order of Stimulus Presentation Influences Children's Acquisition in Receptive Identification Tasks
ERIC Educational Resources Information Center
Petursdottir, Anna Ingeborg; Aguilar, Gabriella
2016-01-01
Receptive identification is usually taught in matching-to-sample format, which entails the presentation of an auditory sample stimulus and several visual comparison stimuli in each trial. Conflicting recommendations exist regarding the order of stimulus presentation in matching-to-sample trials. The purpose of this study was to compare acquisition…
Stimulus Intensity and the Perception of Duration
ERIC Educational Resources Information Center
Matthews, William J.; Stewart, Neil; Wearden, John H.
2011-01-01
This article explores the widely reported finding that the subjective duration of a stimulus is positively related to its magnitude. In Experiments 1 and 2 we show that, for both auditory and visual stimuli, the effect of stimulus magnitude on the perception of duration depends upon the background: Against a high intensity background, weak stimuli…
Effect of conditioned stimulus exposure during slow wave sleep on fear memory extinction in humans.
He, Jia; Sun, Hong-Qiang; Li, Su-Xia; Zhang, Wei-Hua; Shi, Jie; Ai, Si-Zhi; Li, Yun; Li, Xiao-Jun; Tang, Xiang-Dong; Lu, Lin
2015-03-01
Repeated exposure to a neutral conditioned stimulus (CS) in the absence of a noxious unconditioned stimulus (US) elicits fear memory extinction. The aim of the current study was to investigate the effects of mild tone exposure (CS) during slow wave sleep (SWS) on fear memory extinction in humans. Healthy volunteers underwent an auditory fear conditioning paradigm on the experimental night, during which tones served as the CS and a mild shock served as the US. They were then randomly assigned to four groups. Three groups were exposed to the CS for 3 or 10 min or to an irrelevant tone (control stimulus, CtrS) for 10 min during SWS. The fourth group served as controls and was not subjected to any interventions. All of the subjects completed a memory test 4 h after the SWS-rich sleep stage to evaluate the effect on fear extinction. Moreover, we conducted similar experiments with an independent group of subjects during the daytime to test whether the memory extinction effect was specific to the sleep condition. Ninety-six healthy volunteers (44 males) aged 18-28 y participated. Participants exhibited undisturbed sleep during 2 consecutive nights, as assessed by sleep variables (all P > 0.05) from polysomnographic recordings and power spectral analysis. Participants who were re-exposed to the 10 min CS either during SWS or during wakefulness exhibited attenuated fear responses (wake-10 min CS, P < 0.05; SWS-10 min CS, P < 0.01). Conditioned stimulus re-exposure during SWS promoted fear memory extinction without altering sleep profiles. © 2015 Associated Professional Sleep Societies, LLC.
Auditory memory in monkeys: costs and benefits of proactive interference.
Bigelow, James; Poremba, Amy
2013-05-01
Proactive interference (PI) has traditionally been understood as an adverse consequence of stimulus repetition during memory tasks. Herein, we present data that emphasize costs as well as benefits of PI for monkeys performing an auditory delayed matching-to-sample (DMTS) task. The animals made same/different judgments for a variety of simple and complex sounds separated by a 5-s memory delay. Each session used a stimulus set that included eight sounds; thus, each sound was repeated multiple times per session for match trials and for nonmatch trials as the sample (Cue 1) or test (Cue 2) stimulus. For nonmatch trials, performance was substantially diminished when the test stimulus had been previously presented on a recent trial. However, when the sample stimulus had been recently presented, performance was significantly improved. We also observed a marginal performance benefit when stimuli for match trials had been recently presented. The costs of PI for nonmatch test stimuli were greater than the combined benefits of PI for nonmatch sample stimuli and match trials, indicating that the net influence of PI is detrimental. For all three manifestations of PI, the effects are shown to extend beyond the immediately subsequent trial. Our data suggest that PI in auditory DMTS is best understood as an enduring influence that can be both detrimental and beneficial to memory-task performance. © 2012 Wiley Periodicals, Inc.
Cid-Fernández, Susana; Lindín, Mónica; Díaz, Fernando
2014-01-01
The main aim of the present study was to assess whether aging modulates the effects of involuntary capture of attention by novel stimuli on performance, and on event-related potentials (ERPs) associated with target processing (N2b and P3b) and subsequent response processes (stimulus-locked Lateralized Readiness Potential -sLRP- and response-locked Lateralized Readiness Potential -rLRP-). An auditory-visual distraction-attention task was performed by 77 healthy participants, divided into three age groups (Young: 21–29, Middle-aged: 51–64, Old: 65–84 years old). Participants were asked to attend to visual stimuli and to ignore auditory stimuli. Aging was associated with slowed reaction times, slowed target stimulus processing in working memory (WM; longer N2b and P3b latencies), and slowed selection and preparation of the motor response (longer sLRP and earlier rLRP onset latencies). In the novel relative to the standard condition we observed, in the three age groups: (1) a distraction effect, reflected in a slowing of reaction times, of stimuli categorization in WM (longer P3b latency), and of motor response selection (longer sLRP onset latency); (2) a facilitation effect on response preparation (later rLRP onset latency), and (3) an increase in arousal (larger amplitudes of all ERPs evaluated, except for N2b amplitude in the Old group). A distraction effect on stimulus evaluation processes (longer N2b latency) was also observed, but only in middle-aged and old participants, indicating that attentional capture slows stimulus evaluation in WM from middle age onwards (from 50 years, with no differences between middle-aged and older adults), but not in young adults. PMID:25294999
NASA Technical Reports Server (NTRS)
Schwent, V. L.; Hillyard, S. A.; Galambos, R.
1975-01-01
A randomized sequence of tone bursts was delivered to subjects at short inter-stimulus intervals, with the tones originating from one of three spatially and frequency-specific channels. The subject's task was to count the tones in one of the three channels at a time, ignoring the other two, and press a button after each tenth tone. In different conditions, tones were given at high and low intensities and with or without background white noise to mask the tones. The N1 component of the auditory vertex potential was found to be larger in response to attended-channel tones than to unattended tones. This selective enhancement of N1 was minimal for loud tones presented without noise and increased markedly at the lower tone intensity and in the noise-added conditions.
Discrimination of timbre in early auditory responses of the human brain.
Seol, Jaeho; Oh, MiAe; Kim, June Sic; Jin, Seung-Hyun; Kim, Sun Il; Chung, Chun Kee
2011-01-01
The issue of how differences in timbre are represented in the neural response has still not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employed phasing and clipping of tones to produce auditory stimuli that differ in timbre, capturing its multidimensional nature. We investigated the auditory response and sensory gating using magnetoencephalography (MEG). Thirty-five healthy subjects without hearing deficits participated in the experiments. Pairs of tones, either the same or different in timbre, were presented in a conditioning (S1)-testing (S2) paradigm with an interval of 500 ms. The magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres. This result suggests that timbre, at least as manipulated by phasing and clipping, is discriminated in early auditory processing. An effect of S1 on the second response of a pair occurred in the M100 of the left hemisphere, whereas both M50 and M100 responses to S2 in the right hemisphere, but not the left, reflected whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes differed with presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre.
Processing of pitch and location in human auditory cortex during visual and auditory tasks
Häkkinen, Suvi; Ovaska, Noora; Rinne, Teemu
2015-01-01
The relationship between stimulus-dependent and task-dependent activations in human auditory cortex (AC) during pitch and location processing is not well understood. In the present functional magnetic resonance imaging study, we investigated the processing of task-irrelevant and task-relevant pitch and location during discrimination, n-back, and visual tasks. We tested three hypotheses: (1) According to prevailing auditory models, stimulus-dependent processing of pitch and location should be associated with enhanced activations in distinct areas of the anterior and posterior superior temporal gyrus (STG), respectively. (2) Based on our previous studies, task-dependent activation patterns during discrimination and n-back tasks should be similar when these tasks are performed on sounds varying in pitch or location. (3) Previous studies in humans and animals suggest that pitch and location tasks should enhance activations especially in those areas that also show activation enhancements associated with stimulus-dependent pitch and location processing, respectively. Consistent with our hypotheses, we found stimulus-dependent sensitivity to pitch and location in anterolateral STG and anterior planum temporale (PT), respectively, in line with the view that these features are processed in separate parallel pathways. Further, task-dependent activations during discrimination and n-back tasks were associated with enhanced activations in anterior/posterior STG and posterior STG/inferior parietal lobule (IPL) irrespective of stimulus features. However, direct comparisons between pitch and location tasks performed on identical sounds revealed no significant activation differences. These results suggest that activations during pitch and location tasks are not strongly affected by enhanced stimulus-dependent activations to pitch or location. We also found that activations in PT were strongly modulated by task requirements and that areas in the inferior parietal lobule (IPL) showed task-dependent activation modulations, but no systematic activations to pitch or location. Based on these results, we argue that activations during pitch and location tasks cannot be explained by enhanced stimulus-specific processing alone, but rather that activations in human AC depend in a complex manner on the requirements of the task at hand. PMID:26594185
Dai, Lengshi; Shinn-Cunningham, Barbara G
2016-01-01
Listeners with normal hearing thresholds (NHTs) differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in the cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to the individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials (ERPs) from the scalp (reflecting cortical responses to sound) and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited due to fine stimulus details vs. due to control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs. These results support the hypothesis that behavioral abilities amongst listeners with NHTs can arise due to both subcortical coding differences and differences in attentional control, depending on stimulus characteristics and task demands.
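A rough sketch of one way to quantify "subcortical coding strength" from EFRs: take the spectral amplitude at the envelope frequency and contrast masking conditions. The modulation frequency, sampling rate, and the exact contrast below are assumptions, not the paper's verbatim metric:

```python
import numpy as np

fs = 10000.0  # assumed sampling rate (Hz)
fm = 100.0    # assumed envelope (modulation) frequency (Hz)

def efr_amplitude(response, fm, fs):
    # Spectral amplitude at the modulation frequency fm.
    spec = np.abs(np.fft.rfft(response)) / len(response)
    freqs = np.fft.rfftfreq(len(response), d=1.0 / fs)
    return spec[np.argmin(np.abs(freqs - fm))]

resp_partial = np.random.randn(int(fs))  # placeholder EFR, partially masked tone
resp_full = np.random.randn(int(fs))     # placeholder EFR, fully masked tone

# "Coding strength": how much the EFR degrades under full masking.
print(efr_amplitude(resp_partial, fm, fs) - efr_amplitude(resp_full, fm, fs))
```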
Auditory steady state response in sound field.
Hernández-Pérez, H; Torres-Fortuny, A
2013-02-01
Physiological and behavioral responses were compared in normal-hearing subjects via analyses of the auditory steady-state response (ASSR) and conventional audiometry under sound field conditions. The auditory stimuli, presented through a loudspeaker, consisted of four carrier tones (500, 1000, 2000, and 4000 Hz), presented singly for behavioral testing but combined (multiple frequency technique), to estimate thresholds using the ASSR. Twenty normal-hearing adults were examined. The average differences between the physiological and behavioral thresholds were between 17 and 22 dB HL. The Spearman rank correlation between ASSR and behavioral thresholds was significant for all frequencies (p < 0.05). Significant differences were found in the ASSR amplitude among frequencies, and strong correlations between the ASSR amplitude and the stimulus level (p < 0.05). The ASSR in sound field testing was found to yield hearing threshold estimates deemed to be reasonably well correlated with behaviorally assessed thresholds.
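The threshold comparison above comes down to per-frequency rank correlations and mean offsets. A sketch with invented threshold values (dB HL) standing in for the twenty subjects' data at one carrier frequency:

```python
import numpy as np
from scipy.stats import spearmanr

assr = np.array([35, 40, 30, 45, 38, 42])   # ASSR thresholds, placeholder
behav = np.array([15, 20, 12, 25, 18, 22])  # behavioral thresholds, placeholder

rho, p = spearmanr(assr, behav)             # rank correlation per frequency
offset = np.mean(assr - behav)              # study reported ~17-22 dB
print(f"rho={rho:.2f}, p={p:.3f}, mean offset={offset:.1f} dB")
```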
Habituation deficit of auditory N100m in patients with fibromyalgia.
Choi, W; Lim, M; Kim, J S; Chung, C K
2016-11-01
Habituation refers to the brain's inhibitory mechanism against sensory overload, and its brain correlate has been investigated in the form of a well-defined event-related potential, N100 (N1). Fibromyalgia is an extensively described chronic pain syndrome with concurrent manifestations of reduced tolerance and enhanced sensation of painful and non-painful stimulation, suggesting an association with central amplification of all sensory domains. Among diverse sensory modalities, we utilized repetitive auditory stimulation to explore the anomalous sensory information processing in fibromyalgia as evidenced by N1 habituation. Auditory N1 was assessed in 19 fibromyalgia patients and 21 age-, education- and gender-matched healthy control subjects under the duration-deviant passive oddball paradigm and magnetoencephalography recording. The brain signal of the first standard stimulus (following each deviant) and last standard stimulus (preceding each deviant) were analysed to identify N1 responses. N1 amplitude difference and adjusted amplitude ratio were computed as habituation indices. Fibromyalgia patients showed lower N1 amplitude difference (left hemisphere: p = 0.004; right hemisphere: p = 0.034) and adjusted N1 amplitude ratio (left hemisphere: p = 0.001; right hemisphere: p = 0.052) than healthy control subjects, indicating deficient auditory habituation. Further, an augmented N1 amplitude pattern (p = 0.029) during stimulus repetition was observed in fibromyalgia patients. Fibromyalgia patients failed to demonstrate auditory N1 habituation to repetitively presented stimuli, which indicates their compromised early auditory information processing. Our findings provide neurophysiological evidence of inhibitory failure and cortical augmentation in fibromyalgia. WHAT'S ALREADY KNOWN ABOUT THIS TOPIC?: Fibromyalgia has been associated with altered filtering of irrelevant somatosensory input. However, whether this abnormality can extend to the auditory sensory system remains controversial. N100, an event-related potential, has been widely utilized to assess the brain's habituation capacity against sensory overload. WHAT DOES THIS STUDY ADD?: Fibromyalgia patients showed a deficit in N100 habituation to repetitive auditory stimuli, indicating compromised early auditory functioning. This study identified deficient inhibitory control over irrelevant auditory stimuli in fibromyalgia. © 2016 European Pain Federation - EFIC®.
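The two habituation indices named above (amplitude difference and adjusted amplitude ratio) are straightforward once first- and last-standard N1 amplitudes are extracted. A minimal sketch; the "adjustment" here is a plain ratio and is only an assumption about the paper's exact definition:

```python
def habituation_indices(amp_first, amp_last):
    # Larger difference / smaller ratio = stronger habituation.
    diff = amp_first - amp_last
    ratio = amp_last / amp_first
    return diff, ratio

# Illustrative N100m amplitudes for first vs. last standard stimuli.
print(habituation_indices(amp_first=52.0, amp_last=35.0))
```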
Salient stimuli in advertising: the effect of contrast interval length and type on recall.
Olsen, G Douglas
2002-09-01
Salient auditory stimuli (e.g., music or sound effects) are commonly used in advertising to elicit attention. However, issues related to the effectiveness of such stimuli are not well understood. This research examines the ability of a salient auditory stimulus, in the form of a contrast interval (CI), to enhance recall of message-related information. Researchers have argued that the effectiveness of the CI is a function of the temporal duration between the onset and offset of the change in the background stimulus and the nature of this stimulus. Three experiments investigate these propositions and indicate that recall is enhanced, providing the CI is 3 s or less. Information highlighted with silence is recalled better than information highlighted with music.
Langlet, C; Hainaut, J P; Bolmont, B
2017-03-16
Arousal anxiety has a great impact on reaction time, physiological parameters, and motor performance. Numerous studies have focused on the influence of anxiety on muscular activity during simple, non-ecological tasks. We investigated the impact of moderate state-anxiety (an arousal stressor) on the specific components of a complex multi-joint ecological movement during an auditory stimulus-response reaction time task. Our objective was to determine whether central and peripheral voluntary motor processes are modulated in the same way by an arousal stressor. Eighteen women volunteers performed simple auditory stimulus-response reaction time tasks. A video-recorded Stroop test with interference was used to induce moderate state-anxiety. Electromyographic activity of the wrist extensors was recorded in order to analyse the two components of the reaction time: the premotor and motor times. In the anxiogenic condition, reaction times were faster and muscular activity increased. This increase was due to stronger muscle activity during the premotor time in the anxiogenic condition. Arousal anxiety thus has a different impact on central and peripheral voluntary motor processes. The modifications observed could be related to an increase in arousal, associated with higher anxiety, that prepares the body to act. Copyright © 2017 Elsevier B.V. All rights reserved.
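Fractionating reaction time into premotor and motor components, as above, requires an EMG onset estimate. A minimal sketch using a baseline-plus-3-SD threshold on the rectified signal; the threshold rule and sampling rate are assumptions, not the study's stated method:

```python
import numpy as np

fs = 2000.0  # assumed EMG sampling rate (Hz)

def fractionate_rt(emg, stim_idx, move_idx):
    baseline = np.abs(emg[:stim_idx])                # pre-stimulus activity
    thresh = baseline.mean() + 3.0 * baseline.std()  # onset criterion
    onset_idx = stim_idx + np.argmax(np.abs(emg[stim_idx:]) > thresh)
    premotor_ms = (onset_idx - stim_idx) / fs * 1000.0  # stimulus -> EMG onset
    motor_ms = (move_idx - onset_idx) / fs * 1000.0     # EMG onset -> movement
    return premotor_ms, motor_ms

emg = np.random.randn(8000) * 5.0  # placeholder wrist-extensor EMG (uV)
print(fractionate_rt(emg, stim_idx=2000, move_idx=2600))
```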
Delorme, Arnaud; Polich, John
2013-01-01
Long-term Vipassana meditators sat in meditation vs. a control (instructed mind-wandering) state for 25 min while electroencephalography (EEG) was recorded; condition order was counterbalanced. For the last 4 min, a three-stimulus auditory oddball series was presented through headphones during both meditation and control periods, with no task imposed. Time-frequency analysis demonstrated that meditation relative to the control condition evinced decreased evoked delta (2–4 Hz) power to distracter stimuli concomitantly with a greater event-related reduction of late (500–900 ms) alpha-1 (8–10 Hz) activity, which indexed altered dynamics of attentional engagement to distracters. Additionally, standard stimuli were associated with increased early event-related alpha phase synchrony (inter-trial coherence) and evoked theta (4–8 Hz) phase synchrony, suggesting enhanced processing of the habituated standard background stimuli. Finally, during meditation, there was a greater differential early-evoked gamma power to the different stimulus classes. Correlation analysis indicated that this effect stemmed from a meditation state-related increase in early distracter-evoked gamma power and phase synchrony specific to longer-term expert practitioners. The findings suggest that Vipassana meditation evokes a brain state of enhanced perceptual clarity and decreased automated reactivity. PMID:22648958
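Inter-trial coherence, the phase-synchrony measure mentioned above, is the length of the mean unit phase vector across trials at a given frequency. A single-frequency FFT-based sketch (a wavelet decomposition, as typically used in time-frequency analysis, would resolve this over time as well):

```python
import numpy as np

def itc(trials, freq, fs):
    """Inter-trial coherence at one frequency.

    trials: array (n_trials, n_samples). Returns a value in [0, 1];
    0 = random phases across trials, 1 = perfect phase locking.
    """
    n = trials.shape[1]
    k = int(round(freq * n / fs))  # FFT bin nearest to the target frequency
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))

fs = 500.0
trials = np.random.randn(100, 500)   # placeholder: 100 one-second epochs
print(itc(trials, freq=6.0, fs=fs))  # theta-band (4-8 Hz) example
```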
DETECTION AND IDENTIFICATION OF SPEECH SOUNDS USING CORTICAL ACTIVITY PATTERNS
Centanni, T.M.; Sloan, A.M.; Reed, A.C.; Engineer, C.T.; Rennaker, R.; Kilgard, M.P.
2014-01-01
We have developed a classifier capable of locating and identifying speech sounds using activity from rat auditory cortex with an accuracy equivalent to behavioral performance without the need to specify the onset time of the speech sounds. This classifier can identify speech sounds from a large speech set within 40 ms of stimulus presentation. To compare the temporal limits of the classifier to behavior, we developed a novel task that requires rats to identify individual consonant sounds from a stream of distracter consonants. The classifier successfully predicted the ability of rats to accurately identify speech sounds for syllable presentation rates up to 10 syllables per second (up to 17.9 ± 1.5 bits/sec), which is comparable to human performance. Our results demonstrate that the spatiotemporal patterns generated in primary auditory cortex can be used to quickly and accurately identify consonant sounds from a continuous speech stream without prior knowledge of the stimulus onset times. Improved understanding of the neural mechanisms that support robust speech processing in difficult listening conditions could improve the identification and treatment of a variety of speech processing disorders. PMID:24286757
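In spirit, the classifier above maps spatiotemporal activity patterns (recording sites x time bins) to consonant labels. A toy stand-in using logistic regression on flattened, binned activity; the actual classifier, features, and bin sizes are the paper's, not reproduced here:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

n_trials, n_sites, n_bins = 200, 32, 40  # e.g., 40 ms of 1-ms bins per trial
X = np.random.poisson(2.0, (n_trials, n_sites * n_bins)).astype(float)
y = np.random.randint(0, 8, n_trials)    # placeholder consonant labels

clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, y, cv=5).mean())  # chance level here ~ 1/8
```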
Differential Effects of Alcohol on Working Memory: Distinguishing Multiple Processes
Saults, J. Scott; Cowan, Nelson; Sher, Kenneth J.; Moreno, Matthew V.
2008-01-01
We assessed effects of alcohol consumption on different types of working memory (WM) tasks in an attempt to characterize the nature of alcohol effects on cognition. The WM tasks varied in two properties of materials to be retained in a two-stimulus comparison procedure. Conditions included (1) spatial arrays of colors, (2) temporal sequences of colors, (3) spatial arrays of spoken digits, and (4) temporal sequences of spoken digits. Alcohol consumption impaired memory for auditory and visual sequences, but not memory for simultaneous arrays of auditory or visual stimuli. These results suggest that processes needed to encode and maintain stimulus sequences, such as rehearsal, are more sensitive to alcohol intoxication than other WM mechanisms needed to maintain multiple concurrent items, such as focusing attention on them. These findings help to resolve disparate findings from prior research into alcohol’s effect on WM and on divided attention. The results suggest that moderate doses of alcohol impair WM by affecting certain mnemonic strategies and executive processes rather than by shrinking the basic holding capacity of WM. PMID:18179311
Dynamic Reweighting of Auditory Modulation Filters.
Joosten, Eva R M; Shamma, Shihab A; Lorenzi, Christian; Neri, Peter
2016-07-01
Sound waveforms convey information largely via amplitude modulations (AM). A large body of experimental evidence has provided support for a modulation (bandpass) filterbank. Details of this model have varied over time, partly reflecting different experimental conditions and diverse datasets from distinct task strategies, contributing uncertainty to the bandwidth measurements and leaving important issues unresolved. We adopt here a solely data-driven measurement approach in which we first demonstrate how different models can be subsumed within a common 'cascade' framework, and then proceed to characterize the cascade via system identification analysis using a single stimulus/task specification and hence stable task rules largely unconstrained by any model or parameters. Observers were required to detect a brief change in level superimposed onto random level changes that served as AM noise; the relationship between trial-by-trial noisy fluctuations and corresponding human responses enables targeted identification of distinct cascade elements. The resulting measurements exhibit a complex, dynamic picture in which human perception of auditory modulations appears adaptive in nature, evolving from an initial lowpass mode to a bandpass mode (with broad tuning, Q∼1) following repeated stimulus exposure.
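The data-driven identification described above belongs to the family of psychophysical reverse correlation: relating trial-by-trial noise to the observer's responses. A stripped-down kernel estimate (difference of noise averages conditioned on the response), which is only a first-order caricature of the cascade analysis:

```python
import numpy as np

def psychophysical_kernel(noise_traces, responses):
    # Mean noise profile on "yes" trials minus "no" trials approximates
    # the temporal weighting the observer applied to the AM noise.
    yes = noise_traces[responses == 1].mean(axis=0)
    no = noise_traces[responses == 0].mean(axis=0)
    return yes - no

noise = np.random.randn(5000, 60)                # trials x time bins of level noise
resp = (np.random.rand(5000) > 0.5).astype(int)  # placeholder yes/no reports
print(psychophysical_kernel(noise, resp).shape)
```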
Differential effects of alcohol on working memory: distinguishing multiple processes.
Saults, J Scott; Cowan, Nelson; Sher, Kenneth J; Moreno, Matthew V
2007-12-01
The authors assessed effects of alcohol consumption on different types of working memory (WM) tasks in an attempt to characterize the nature of alcohol effects on cognition. The WM tasks varied in 2 properties of materials to be retained in a 2-stimulus comparison procedure. Conditions included (a) spatial arrays of colors, (b) temporal sequences of colors, (c) spatial arrays of spoken digits, and (d) temporal sequences of spoken digits. Alcohol consumption impaired memory for auditory and visual sequences but not memory for simultaneous arrays of auditory or visual stimuli. These results suggest that processes needed to encode and maintain stimulus sequences, such as rehearsal, are more sensitive to alcohol intoxication than other WM mechanisms needed to maintain multiple concurrent items, such as focusing attention on them. These findings help to resolve disparate findings from prior research on alcohol's effect on WM and on divided attention. The results suggest that moderate doses of alcohol impair WM by affecting certain mnemonic strategies and executive processes rather than by shrinking the basic holding capacity of WM. (c) 2008 APA, all rights reserved.
Thermal Imaging of the Periorbital Regions during the Presentation of an Auditory Startle Stimulus
Gane, Luke; Power, Sarah; Kushki, Azadeh; Chau, Tom
2011-01-01
Infrared thermal imaging of the inner canthi of the periorbital regions of the face can potentially serve as an input signal modality for an alternative access system for individuals with conditions that preclude speech or voluntary movement, such as total locked-in syndrome. However, it is unknown if the temperature of these regions is affected by the human startle response, as changes in the facial temperature of the periorbital regions manifested during the startle response could generate false positives in a thermography-based access system. This study presents an examination of the temperature characteristics of the periorbital regions of 11 able-bodied adult participants before and after a 102 dB auditory startle stimulus. The results indicate that the startle response has no substantial effect on the mean temperature of the periorbital regions. This indicates that thermography-based access solutions would be insensitive to startle reactions in their user, an important advantage over other modalities being considered in the context of access solutions for individuals with a severe motor disability. PMID:22073302
Moving Stimuli Facilitate Synchronization But Not Temporal Perception
Silva, Susana; Castro, São Luís
2016-01-01
Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap. PMID:27909419
Yahata, Izumi; Kawase, Tetsuaki; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
2017-01-01
The effects of visual speech (the moving image of the speaker's face uttering a speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions: the moving image of the same speaker's face uttering /be/ (congruent visual stimulus) or /ge/ (incongruent visual stimulus), and visual noise (a still image of the speaker's face processed with a strong Gaussian filter; control condition). On average, the latency of the N100m was significantly shortened in both hemispheres for the congruent and incongruent auditory/visual (A/V) stimuli compared with the control A/V condition. However, the degree of N100m shortening did not differ significantly between the congruent and incongruent A/V conditions, despite significant differences in psychophysical responses between these two conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects tended to be well correlated between the two audio-visual conditions (congruent vs. incongruent visual stimuli) in both hemispheres, but were not significantly correlated between the right and left hemispheres. On the other hand, no significant correlation was observed between the magnitudes of the visual speech effects and the psychophysical responses. These results may indicate that the auditory-visual interaction observed for the N100m is a fundamental process that does not depend on the congruency of the visual information.
ERIC Educational Resources Information Center
Fujioka, Takako; Ross, Bernhard; Kakigi, Ryusuke; Pantev, Christo; Trainor, Laurel J.
2006-01-01
Auditory evoked responses to a violin tone and a noise-burst stimulus were recorded from 4- to 6-year-old children in four repeated measurements over a 1-year period using magnetoencephalography (MEG). Half of the subjects participated in musical lessons throughout the year; the other half had no music lessons. Auditory evoked magnetic fields…
Bertelson, Paul; Aschersleben, Gisa
2003-10-01
In the well-known visual bias of auditory location (alias the ventriloquist effect), auditory and visual events presented in separate locations appear closer together, provided the presentations are synchronized. Here, we consider the possibility of the converse phenomenon: crossmodal attraction on the time dimension conditional on spatial proximity. Participants judged the order of occurrence of sound bursts and light flashes separated in time by varying stimulus onset asynchronies (SOAs) and delivered either in the same location or in different locations. Presentation was organized using randomly mixed psychophysical staircases, by which the SOA was reduced progressively until a point of uncertainty was reached. This point was reached at longer SOAs with the sounds in the same frontal location as the flashes than in different places, showing that apparent temporal separation is effectively longer in the first condition. Together with a similar result obtained recently in a case of tactile-visual discrepancy, this finding supports a view in which the timing and spatial layout of the inputs play, to some extent, interchangeable roles in the pairing operation at the base of crossmodal interaction.
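To make the staircase procedure concrete, here is a minimal sketch of an adaptive 1-up/1-down staircase that progressively reduces the SOA until order judgments become unreliable. It is an illustration only, not the authors' code; the starting SOA, step size, reversal count, and the toy observer are all invented.

```python
import random

def run_staircase(respond, soa_start=200.0, step=25.0, n_reversals=8):
    """Minimal 1-up/1-down staircase over SOA (ms): the SOA shrinks after
    each correct order judgment and grows after each error, so it converges
    on the point where judgments become unreliable."""
    soa, last_correct, reversals = soa_start, None, []
    while len(reversals) < n_reversals:
        correct = respond(soa)
        if last_correct is not None and correct != last_correct:
            reversals.append(soa)          # direction of the staircase flipped
        soa = max(0.0, soa - step if correct else soa + step)
        last_correct = correct
    return sum(reversals) / len(reversals)  # estimate of the uncertainty point

# Toy observer: order judgments degrade toward guessing as the SOA shrinks.
observer = lambda soa: random.random() < min(0.98, 0.5 + soa / 180.0)
print(f"point of uncertainty ~ {run_staircase(observer):.0f} ms SOA")
```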
Oyama, Kei; Tateyama, Yukina; Hernádi, István; Tobler, Philippe N; Iijima, Toshio; Tsutsui, Ken-Ichiro
2015-11-01
To investigate how the striatum integrates sensory information with reward information for behavioral guidance, we recorded single-unit activity in the dorsal striatum of head-fixed rats participating in a probabilistic Pavlovian conditioning task with auditory conditioned stimuli (CSs) in which reward probability was fixed for each CS but parametrically varied across CSs. We found that the activity of many neurons was linearly correlated with the reward probability indicated by the CSs. The recorded neurons could be classified according to their firing patterns into functional subtypes coding reward probability in different forms such as stimulus value, reward expectation, and reward prediction error. These results suggest that several functional subgroups of dorsal striatal neurons represent different kinds of information formed through extensive prior exposure to CS-reward contingencies. Copyright © 2015 the American Physiological Society.
Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F
2015-11-01
Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160 ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200 ms, 220-280 ms, and 350-500 ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.
Macías, Silvio; Hernández-Abad, Annette; Hechavarría, Julio C; Kössl, Manfred; Mora, Emanuel C
2015-05-01
It has been reported previously that in the inferior colliculus of the bat Molossus molossus, neuronal duration tuning is ambiguous because the tuning type of the neurons dramatically changes with the sound level. In the present study, duration tuning was examined in the auditory cortex of M. molossus to determine whether it is as ambiguous as the collicular tuning. From a population of 174 cortical neurons, 104 (60%) did not show duration selectivity (all-pass). Nine units (5%) responded preferentially to stimuli of longer durations, showing long-pass duration response functions; 35 (20%) responded to a narrow range of stimulus durations, showing band-pass duration response functions; 24 (14%) responded most strongly to short stimulus durations, showing short-pass duration response functions; and two neurons (1%) responded best to two different stimulus durations, showing a two-peaked duration-response function. The majority of neurons showing short-pass (16 out of 24) and band-pass (24 out of 35) selectivity displayed "O-shaped" duration response areas. In contrast to the inferior colliculus, duration tuning in the auditory cortex of M. molossus appears level tolerant. That is, the type of duration selectivity and the stimulus duration eliciting the maximum response were unaffected by changing sound level.
Anzures, Gizelle; Wheeler, Andrea; Quinn, Paul C.; Pascalis, Olivier; Slater, Alan M.; Heron-Delaney, Michelle; Tanaka, James W.; Lee, Kang
2012-01-01
Perceptual narrowing in the visual, auditory, and multisensory domains has its developmental origins in infancy. The present study shows that experimentally induced experience can reverse the effects of perceptual narrowing on infants’ visual recognition memory of other-race faces. Caucasian 8- to 10-month-olds who could not discriminate between novel and familiarized Asian faces at the beginning of testing were given brief daily experience with Asian female faces in the experimental condition and Caucasian female faces in the control condition. At the end of three weeks, only infants who received daily experience with Asian females showed above-chance recognition of novel Asian female and male faces. Further, infants in the experimental condition showed greater efficiency in learning novel Asian females compared to infants in the control condition. Thus, visual experience with a novel stimulus category can reverse the effects of perceptual narrowing in infancy via improved stimulus recognition and encoding. PMID:22625845
Molecular mechanisms of fear learning and memory.
Johansen, Joshua P; Cain, Christopher K; Ostroff, Linnaea E; LeDoux, Joseph E
2011-10-28
Pavlovian fear conditioning is a particularly useful behavioral paradigm for exploring the molecular mechanisms of learning and memory because a well-defined response to a specific environmental stimulus is produced through associative learning processes. Synaptic plasticity in the lateral nucleus of the amygdala (LA) underlies this form of associative learning. Here, we summarize the molecular mechanisms that contribute to this synaptic plasticity in the context of auditory fear conditioning, the form of fear conditioning best understood at the molecular level. We discuss the neurotransmitter systems and signaling cascades that contribute to three phases of auditory fear conditioning: acquisition, consolidation, and reconsolidation. These studies suggest that multiple intracellular signaling pathways, including those triggered by activation of Hebbian processes and neuromodulatory receptors, interact to produce neural plasticity in the LA and behavioral fear conditioning. Collectively, this body of research illustrates the power of fear conditioning as a model system for characterizing the mechanisms of learning and memory in mammals and potentially for understanding fear-related disorders, such as PTSD and phobias. Copyright © 2011 Elsevier Inc. All rights reserved.
Cerebral Processing of Voice Gender Studied Using a Continuous Carryover fMRI Design
Pernet, Cyril; Latinus, Marianne; Crabbe, Frances; Belin, Pascal
2013-01-01
Normal listeners effortlessly determine a person's gender by voice, but the cerebral mechanisms underlying this ability remain unclear. Here, we demonstrate 2 stages of cerebral processing during voice gender categorization. Using voice morphing along with an adaptation-optimized functional magnetic resonance imaging design, we found that secondary auditory cortex, including the anterior part of the temporal voice areas in the right hemisphere, responded primarily to acoustical distance from the previously heard stimulus. In contrast, a network of bilateral regions involving inferior prefrontal and anterior and posterior cingulate cortex reflected perceived stimulus ambiguity. These findings suggest that voice gender recognition involves neuronal populations along the auditory ventral stream responsible for auditory feature extraction, operating in tandem with the prefrontal cortex in voice gender perception. PMID:22490550
Temporal variability of spectro-temporal receptive fields in the anesthetized auditory cortex.
Meyer, Arne F; Diepenbrock, Jan-Philipp; Ohl, Frank W; Anemüller, Jörn
2014-01-01
Temporal variability of neuronal response characteristics during sensory stimulation is a ubiquitous phenomenon that may reflect processes such as stimulus-driven adaptation, top-down modulation or spontaneous fluctuations. It poses a challenge to functional characterization methods such as the receptive field, since these often assume stationarity. We propose a novel method for estimating sensory neurons' receptive fields that extends the classic static linear receptive field model to the time-varying case. Here, the long-term estimate of the static receptive field serves as the mean of a probabilistic prior distribution from which the short-term, temporally localized receptive field may deviate stochastically with time-varying standard deviation. The corresponding generalized linear model permits robust characterization of temporal variability in receptive field structure even for highly non-Gaussian stimulus ensembles. We computed and analyzed short-term auditory spectro-temporal receptive field (STRF) estimates with a characteristic temporal resolution of 5-30 s, based on model simulations and on responses from a total of 60 single-unit recordings in anesthetized Mongolian gerbil auditory midbrain and cortex. Stimulation was performed with short (100 ms) overlapping frequency-modulated tones. Results demonstrate identification of time-varying STRFs, with predictive model likelihoods exceeding those of baseline static STRF estimation. Quantitative characterization reveals a higher degree of STRF variability in auditory cortex than in midbrain. Cluster analysis indicates that significant deviations from the long-term static STRF are brief, but reliably estimated. We hypothesize that the observed variability more likely reflects spontaneous or state-dependent internal fluctuations that interact with stimulus-induced processing, rather than experimental or stimulus design.
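The prior-centered estimator described above can be illustrated with a ridge-style sketch: each short-term receptive field is shrunk toward the long-term static estimate rather than toward zero. This is a simplified least-squares stand-in for the paper's generalized linear model, with all data and parameter values invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: X holds (lagged) spectrogram features, y the neuron's response.
n_samples, n_feat = 5000, 60
X = rng.normal(size=(n_samples, n_feat))
y = X @ rng.normal(size=n_feat) + rng.normal(scale=0.5, size=n_samples)

def ridge(X, y, lam, w_prior):
    """Closed-form solution of argmin ||y - Xw||^2 + lam * ||w - w_prior||^2."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y + lam * w_prior)

# Long-term (static) STRF from the whole recording, shrunk weakly to zero.
w_static = ridge(X, y, lam=1.0, w_prior=np.zeros(n_feat))

# Short-term STRFs: re-estimate in local windows (500-sample blocks here),
# shrinking each local estimate toward the static STRF instead of zero.
window = 500
w_short = [ridge(X[i:i + window], y[i:i + window], lam=50.0, w_prior=w_static)
           for i in range(0, n_samples, window)]
print(np.shape(w_short))  # (10, 60): one temporally localized STRF per window
```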
Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex.
Salmi, Juha; Koistinen, Olli-Pekka; Glerean, Enrico; Jylänki, Pasi; Vehtari, Aki; Jääskeläinen, Iiro P; Mäkelä, Sasu; Nummenmaa, Lauri; Nummi-Kuisma, Katarina; Nummi, Ilari; Sams, Mikko
2017-08-15
During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performance of the classifiers was tested by leaving out one participant at a time for testing and training the model on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyrus (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in speech and music stimuli. Concurrent visual stimulation modulated activity in bilateral MTG (speech), the lateral aspect of right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and that other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. Those anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.
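The leave-one-subject-out evaluation scheme is easy to sketch. The snippet below uses scikit-learn's L1-penalized logistic regression as a rough stand-in for the Bayesian sparsity-promoting priors used in the study; the data, group labels, and penalty strength are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Toy stand-in: one feature vector of voxel activities per trial,
# 16 participants, binary label (audiovisual vs. auditory-only).
n_trials, n_voxels = 320, 200
X = rng.normal(size=(n_trials, n_voxels))
y = rng.integers(0, 2, size=n_trials)
groups = np.repeat(np.arange(16), n_trials // 16)   # participant ID per trial

# L1 penalty as a crude surrogate for a sparsity-promoting prior.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)

# Train on 15 participants, test on the held-out 16th, rotate through all.
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(scores.mean())   # ~0.5 here, since the toy labels are random
```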
fEITER - a new EIT instrument for functional brain imaging
NASA Astrophysics Data System (ADS)
Davidson, J. L.; Wright, P.; Ahsan, S. T.; Robinson, R. L.; Pomfrett, C. J. D.; McCann, H.
2010-04-01
We report on human tests of the new EIT-based system fEITER (functional Electrical Impedance Tomography of Evoked Responses), targeted principally at functional brain imaging. It is designed and built to medical standard BS EN 60601-1:2006 and clinical trials have been approved by the MHRA in the UK. fEITER integrates an EIT sub-system with an evoked response sub-system capable of providing visual, auditory or other stimuli, and the timing of each stimulus is recorded within the EIT data to a resolution of 500 microseconds. The EIT sub-system operates at 100 frames per second using 20 polar/near-polar current patterns distributed among 32 scalp electrodes that are arranged in a 3-dimensional array on the subject. Presently, current injection is fixed in firmware at 1 mA pk-pk and 10 kHz. Performance testing on inanimate subjects has shown voltage measurement SNR better than 75 dB, at 100 frames per second. We describe the fEITER system and give example topographic results for a human subject under no-stimulus (i.e. reference) conditions and on application of auditory stimuli. The system's excellent noise properties and temporal resolution show clearly the influence of basic physiological phenomena on the EIT voltages. In response to stimulus presentation, the voltage data contain fast components (~100 ms) and components that persist for many seconds.
Auditory-Cortex Short-Term Plasticity Induced by Selective Attention
Jääskeläinen, Iiro P.; Ahveninen, Jyrki
2014-01-01
The ability to concentrate on relevant sounds in the acoustic environment is crucial for everyday function and communication. Converging lines of evidence suggest that transient functional changes in auditory-cortex neurons, “short-term plasticity”, might explain this fundamental function. Under conditions of strongly focused attention, enhanced processing of attended sounds can take place at very early latencies (~50 ms from sound onset) in primary auditory cortex and possibly even earlier in subcortical structures. More robust selective-attention short-term plasticity is manifested as modulation of responses peaking at ~100 ms from sound onset in functionally specialized nonprimary auditory-cortical areas, by way of stimulus-specific reshaping of neuronal receptive fields that supports filtering of selectively attended sound features from task-irrelevant ones. Such effects have been shown to take hold within seconds of shifting the attentional focus. There are findings suggesting that the reshaping of neuronal receptive fields is even stronger at longer auditory-cortex response latencies (~300 ms from sound onset). These longer-latency short-term plasticity effects seem to build up more gradually, within tens of seconds after shifting the focus of attention. Importantly, some of the auditory-cortical short-term plasticity effects observed during selective attention predict enhancements in behaviorally measured sound discrimination performance. PMID:24551458
Age Differences in Visual-Auditory Self-Motion Perception during a Simulated Driving Task
Ramkhalawansingh, Robert; Keshavarz, Behrang; Haycock, Bruce; Shahab, Saba; Campos, Jennifer L.
2016-01-01
Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion. PMID:27199829
Segregation and Integration of Auditory Streams when Listening to Multi-Part Music
Ragert, Marie; Fairhurst, Merle T.; Keller, Peter E.
2014-01-01
In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to make an assessment of the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in western music in which the melody generally dominates (leads) the accompaniment. Moreover, the assessment varies as a function of both cognitive load, as shown through difficulty ratings and the interaction of the temporal and the structural relationship factors. Neurally, we see that the temporal relationship between parts, as one important cue for stream segregation, revealed distinct neural activity in the planum temporale. By contrast, integration used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus resulted in activation of the intraparietal sulcus. These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of auditory streams, respectively. PMID:24475030
Semantic congruency and the (reversed) Colavita effect in children and adults.
Wille, Claudia; Ebersbach, Mirjam
2016-01-01
When presented with auditory, visual, or bimodal audiovisual stimuli in a discrimination task, adults tend to ignore the auditory component in bimodal stimuli and respond to the visual component only (i.e., Colavita visual dominance effect). The same is true for older children, whereas young children are dominated by the auditory component of bimodal audiovisual stimuli. This suggests a change of sensory dominance during childhood. The aim of the current study was to investigate, in three experimental conditions, whether children and adults show sensory dominance when presented with complex semantic stimuli and whether this dominance can be modulated by stimulus characteristics such as semantic (in)congruency, frequency of bimodal trials, and color information. Semantic (in)congruency did not affect the magnitude of the auditory dominance effect in 6-year-olds or the visual dominance effect in adults, but it was a modulating factor of the visual dominance in 9-year-olds (Conditions 1 and 2). Furthermore, the absence of color information (Condition 3) did not affect auditory dominance in 6-year-olds and hardly affected visual dominance in adults, whereas the visual dominance in 9-year-olds disappeared. Our results suggest that (a) sensory dominance in children and adults is not restricted to simple lights and sounds, as used in previous research, but can be extended to semantically meaningful stimuli and that (b) sensory dominance is more robust in 6-year-olds and adults than in 9-year-olds, implying a transitional stage around this age. Copyright © 2015 Elsevier Inc. All rights reserved.
Saha, Debajit; Sun, Wensheng; Li, Chao; Nizampatnam, Srinath; Padovano, William; Chen, Zhengdao; Chen, Alex; Altan, Ege; Lo, Ray; Barbour, Dennis L.; Raman, Baranidharan
2017-01-01
Even simple sensory stimuli evoke neural responses that are dynamic and complex. Are the temporally patterned neural activities important for controlling the behavioral output? Here, we investigated this issue. Our results reveal that in the insect antennal lobe, due to circuit interactions, distinct neural ensembles are activated during and immediately following the termination of every odorant. Such non-overlapping response patterns are not observed even when the stimulus intensity or identities were changed. In addition, we find that ON and OFF ensemble neural activities differ in their ability to recruit recurrent inhibition, entrain field-potential oscillations and, more importantly, in their relevance to behaviour (initiate versus reset conditioned responses). Notably, we find that a strikingly similar strategy is also used for encoding sound onsets and offsets in the marmoset auditory cortex. In sum, our results suggest a general approach where recurrent inhibition is associated with stimulus 'recognition' and 'derecognition'. PMID:28534502
Effect of Conditioned Stimulus Exposure during Slow Wave Sleep on Fear Memory Extinction in Humans
He, Jia; Sun, Hong-Qiang; Li, Su-Xia; Zhang, Wei-Hua; Shi, Jie; Ai, Si-Zhi; Li, Yun; Li, Xiao-Jun; Tang, Xiang-Dong; Lu, Lin
2015-01-01
Study Objectives: Repeated exposure to a neutral conditioned stimulus (CS) in the absence of a noxious unconditioned stimulus (US) elicits fear memory extinction. The aim of the current study was to investigate the effects of mild tone exposure (CS) during slow wave sleep (SWS) on fear memory extinction in humans. Design: Healthy volunteers underwent an auditory fear conditioning paradigm on the experimental night, during which tones served as the CS and a mild shock served as the US. They were then randomly assigned to four groups. Three groups were exposed to the CS for 3 or 10 min, or to an irrelevant tone (control stimulus, CtrS) for 10 min, during SWS. The fourth group served as controls and was not subjected to any intervention. All of the subjects completed a memory test 4 h after the SWS-rich sleep period to evaluate the effect on fear extinction. Moreover, we conducted similar experiments in an independent group of subjects during the daytime to test whether the memory extinction effect was specific to the sleep condition. Participants: Ninety-six healthy volunteers (44 males) aged 18–28 y. Measurements and Results: Participants exhibited undisturbed sleep during the 2 consecutive nights, as assessed by sleep variables (all P > 0.05) from polysomnographic recordings and power spectral analysis. Participants who were re-exposed to the 10 min CS either during SWS or during wakefulness exhibited attenuated fear responses (wake-10 min CS, P < 0.05; SWS-10 min CS, P < 0.01). Conclusions: Conditioned stimulus re-exposure during slow wave sleep promoted fear memory extinction without altering sleep profiles. Citation: He J, Sun HQ, Li SX, Zhang WH, Shi J, Ai SZ, Li Y, Li XJ, Tang XD, Lu L. Effect of conditioned stimulus exposure during slow wave sleep on fear memory extinction in humans. SLEEP 2015;38(3):423–431. PMID:25348121
Corley, Michael J; Caruso, Michael J; Takahashi, Lorey K
2012-01-18
Posttraumatic stress disorder (PTSD) is characterized by stress-induced symptoms including exaggerated fear memories, hypervigilance and hyperarousal. However, we are unaware of an animal model that investigates these hallmarks of PTSD especially in relation to fear extinction and habituation. Therefore, to develop a valid animal model of PTSD, we exposed rats to different intensities of footshock stress to determine their effects on either auditory predator odor fear extinction or habituation of fear sensitization. In Experiment 1, rats were exposed to acute footshock stress (no shock control, 0.4 mA, or 0.8 mA) immediately prior to auditory fear conditioning training involving the pairing of auditory clicks with a cloth containing cat odor. When presented to the conditioned auditory clicks in the next 5 days of extinction testing conducted in a runway apparatus with a hide box, rats in the two shock groups engaged in higher levels of freezing and head out vigilance-like behavior from the hide box than the no shock control group. This increase in fear behavior during extinction testing was likely due to auditory activation of the conditioned fear state because Experiment 2 demonstrated that conditioned fear behavior was not broadly increased in the absence of the conditioned auditory stimulus. Experiment 3 was then conducted to determine whether acute exposure to stress induces a habituation resistant sensitized fear state. We found that rats exposed to 0.8 mA footshock stress and subsequently tested for 5 days in the runway hide box apparatus with presentations of nonassociative auditory clicks exhibited high initial levels of freezing, followed by head out behavior and culminating in the occurrence of locomotor hyperactivity. In addition, Experiment 4 indicated that without delivery of nonassociative auditory clicks, 0.8 mA footshock stressed rats did not exhibit robust increases in sensitized freezing and locomotor hyperactivity, albeit head out vigilance-like behavior continued to be observed. In summary, our animal model provides novel information on the effects of different intensities of footshock stress, auditory-predator odor fear conditioning, and their interactions on facilitating either extinction-resistant or habituation-resistant fear-related behavior. These results lay the foundation for exciting new investigations of the hallmarks of PTSD that include the stress-induced formation and persistence of traumatic memories and sensitized fear. Copyright © 2011 Elsevier Inc. All rights reserved.
Selective Attention to Auditory Memory Neurally Enhances Perceptual Precision.
Lim, Sung-Joo; Wöstmann, Malte; Obleser, Jonas
2015-12-09
Selective attention to a task-relevant stimulus facilitates encoding of that stimulus into a working memory representation. It is less clear whether selective attention also improves the precision of a stimulus already represented in memory. Here, we investigate the behavioral and neural dynamics of selective attention to representations in auditory working memory (i.e., auditory objects) using psychophysical modeling and model-based analysis of electroencephalographic signals. Human listeners performed a syllable pitch discrimination task in which two syllables served as to-be-encoded auditory objects. Valid (vs neutral) retroactive cues were presented during retention to allow listeners to selectively attend to the to-be-probed auditory object in memory. Behaviorally, listeners represented auditory objects in memory more precisely (expressed by steeper slopes of a psychometric curve) and made faster perceptual decisions when valid compared to neutral retrocues were presented. Neurally, valid compared to neutral retrocues elicited a larger frontocentral sustained negativity in the evoked potential as well as enhanced parietal alpha/low-beta oscillatory power (9-18 Hz) during memory retention. Critically, individual magnitudes of alpha oscillatory power (7-11 Hz) modulation predicted the degree to which valid retrocues benefitted individuals' behavior. Our results indicate that selective attention to a specific object in auditory memory does benefit human performance not by simply reducing memory load, but by actively engaging complementary neural resources to sharpen the precision of the task-relevant object in memory. Can selective attention improve the representational precision with which objects are held in memory? And if so, what are the neural mechanisms that support such improvement? These issues have rarely been examined within the auditory modality, in which acoustic signals change and vanish on a millisecond time scale. Introducing a new auditory memory paradigm and using model-based electroencephalography analyses in humans, we bridge this gap and reveal behavioral and neural signatures of increased, attention-mediated working memory precision. We further show that the extent of alpha power modulation predicts the degree to which individuals' memory performance benefits from selective attention. Copyright © 2015 the authors.
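The link between a steeper psychometric slope and greater representational precision can be illustrated with a quick curve fit. The sketch below fits a logistic psychometric function to made-up response proportions for a valid-cue and a neutral-cue condition; the function form and all numbers are illustrative, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(dp, slope, pse):
    """Logistic psychometric function over pitch difference dp (semitones)."""
    return 1.0 / (1.0 + np.exp(-slope * (dp - pse)))

# Made-up proportions of "probe higher" responses per pitch difference.
dp = np.array([-0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6])
p_valid = np.array([0.02, 0.05, 0.20, 0.50, 0.80, 0.95, 0.98])
p_neutral = np.array([0.10, 0.20, 0.35, 0.50, 0.65, 0.80, 0.90])

for label, p in [("valid cue", p_valid), ("neutral cue", p_neutral)]:
    (slope, pse), _ = curve_fit(psychometric, dp, p, p0=[5.0, 0.0])
    print(f"{label}: slope={slope:.1f}, PSE={pse:+.2f}")
# A steeper fitted slope under valid cueing reflects a more precise
# memory representation of the probed auditory object.
```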
D'haenens, Wendy; Dhooge, Ingeborg; De Vel, Eddy; Maes, Leen; Bockstael, Annelies; Vinck, Bart M
2007-08-01
The present study utilized a commercially available multiple auditory steady-state response (ASSR) system to test normal-hearing adults (n=55). The primary objective was to evaluate the impact of the mixed modulation (MM) stimulus and the novel exponential AM(2)/FM stimulus on the signal-to-noise ratio (SNR) and threshold estimation accuracy, through a within-subject comparison. The second aim was to establish a normative database for both stimulus types. The results demonstrated that the AM(2)/FM and MM stimuli had a similar effect on the SNR, whereas the ASSR threshold results revealed that the AM(2)/FM stimulus produced better thresholds than the MM stimulus for the 500, 1000, and 4000 Hz carrier frequencies. The mean difference scores to tones of 500, 1000, 2000, and 4000 Hz were, for the MM stimulus, 20+/-12, 14+/-9, 10+/-8, and 12+/-8 dB, and for the AM(2)/FM stimulus, 18+/-13, 12+/-8, 11+/-8, and 10+/-8 dB, respectively. The current research confirms that the AM(2)/FM stimulus can be used efficiently to test normal-hearing adults.
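For readers unfamiliar with these stimulus types, the sketch below synthesizes a mixed-modulation stimulus (simultaneous AM and FM at the same modulation rate) and a squared-envelope ("exponential" AM) variant in the spirit of the ASSR literature. The carrier frequency, modulation rate, and depths are illustrative assumptions, not the parameter values used in the study.

```python
import numpy as np

fs, dur = 32000, 1.0                  # sample rate (Hz) and duration (s)
fc, fm = 1000.0, 90.0                 # carrier and modulation frequency (Hz)
t = np.arange(int(fs * dur)) / fs

# Sinusoidal FM (20% depth): phase of a carrier whose instantaneous
# frequency swings between 0.8*fc and 1.2*fc at rate fm.
fm_depth = 0.20
phase = 2 * np.pi * fc * t + (fm_depth * fc / fm) * np.sin(2 * np.pi * fm * t)

# MM: 100% sinusoidal AM applied to the FM carrier.
envelope = 0.5 * (1.0 + np.sin(2 * np.pi * fm * t))   # in [0, 1]
mm_stimulus = envelope * np.sin(phase)

# AM^2/FM: squaring the envelope ("exponential" AM) sharpens its peaks
# while keeping the same modulation rate fm.
am2_stimulus = envelope ** 2 * np.sin(phase)
```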
Effects of a cochlear implant simulation on immediate memory in normal-hearing adults
Burkholder, Rose A.; Pisoni, David B.; Svirsky, Mario A.
2012-01-01
This study assessed the effects of stimulus misidentification and memory processing errors on immediate memory span in 25 normal-hearing adults exposed to degraded auditory input simulating the signals provided by a cochlear implant. The identification accuracy of degraded digits in isolation was measured before digit span testing. Forward and backward digit spans were shorter when digits were degraded than when they were normal. Participants' normal digit spans and their accuracy in identifying isolated digits were used to predict digit spans in the degraded speech condition. The observed digit spans in degraded conditions did not differ significantly from the predicted digit spans. This suggests that the decrease in memory span is related primarily to misidentification of digits rather than to memory processing errors related to cognitive load. These findings provide complementary information to earlier research on the auditory memory span of listeners exposed to degraded speech, either experimentally or as a consequence of a hearing impairment. PMID:16317807
Tarantino, V; Stura, M; Raspino, M; Conrad, E; Porcu, A
1989-01-01
To study the effects of click stimulus phase and stimulus repetition rate on the auditory brainstem response (ABR) as a function of age, the authors recorded the ABR from the scalp surface of 10 newborns and of 40 infants aged 3 months, 6 months, 1 year, and 3 years, as well as from 10 normal adults. The stimulus was a square wave of 0.1 msec duration at a level of 90 dB HL. The stimulus equipment was calibrated twice under visual inspection to ensure that the condensation (C) and rarefaction (R) clicks resulted in an initial membrane deflection toward and away from the eardrum, respectively. No significant differences could be found for the latencies and amplitudes in the C-R comparison. However, the mean values of the complete group of test subjects showed the greatest intraindividual stability for conventional click stimulation. The latency of the ABR upon excitation of the cochlea seemed to be determined mainly by the internal oscillation sequence in the cochlea and not by the stimulus polarity. The amplitudes and latencies of the ABR components tend to decrease when the stimulus rate increases and the age decreases. The importance of the stimulus characteristics is discussed, and some suggestions for the clinical use of the ABR are made.
Backward masking, the suffix effect, and preperceptual storage.
Kallman, H J; Massaro, D W
1983-04-01
This article considers the use of auditory backward recognition masking (ABRM) and stimulus suffix experiments as indexes of preperceptual auditory storage. In the first part of the article, two ABRM experiments that failed to demonstrate a mask disinhibition effect found previously in stimulus suffix experiments are reported. The failure to demonstrate mask disinhibition is inconsistent with an explanation of ABRM in terms of lateral inhibition. In the second part of the article, evidence is presented to support the conclusion that the suffix effect involves the contributions of later processing stages and does not provide an uncontaminated index of preperceptual storage. In contrast, it is claimed that ABRM experiments provide the most direct index of the temporal course of perceptual recognition. Partial-report tasks and other paradigms are also evaluated in terms of their contributions to an understanding of preperceptual auditory storage. Differences between interruption and integration masking are discussed along with the role of preperceptual auditory storage in speech perception.
NASA Astrophysics Data System (ADS)
Bachiller, Alejandro; Poza, Jesús; Gómez, Carlos; Molina, Vicente; Suazo, Vanessa; Hornero, Roberto
2015-02-01
Objective. The aim of this research is to explore the coupling patterns of brain dynamics during an auditory oddball task in schizophrenia (SCH). Approach. Event-related electroencephalographic (ERP) activity was recorded from 20 SCH patients and 20 healthy controls. The coupling changes between auditory response and pre-stimulus baseline were calculated in conventional EEG frequency bands (theta, alpha, beta-1, beta-2 and gamma), using three coupling measures: coherence, phase-locking value and Euclidean distance. Main results. Our results showed a statistically significant increase from baseline to response in theta coupling and a statistically significant decrease in beta-2 coupling in controls. No statistically significant changes were observed in SCH patients. Significance. Our findings support the aberrant salience hypothesis, since SCH patients failed to change their coupling dynamics between stimulus response and baseline when performing an auditory cognitive task. This result may reflect an impaired communication among neural areas, which may be related to abnormal cognitive functions.
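Of the three coupling measures mentioned, the phase-locking value is the least standard to compute by hand, so a minimal sketch may help. It band-pass filters two signals, extracts instantaneous phase with the Hilbert transform, and averages the phase-difference vectors across trials; this is a generic PLV computation, not necessarily the exact response-versus-baseline comparison used in the study, and the sampling rate, band, and data are invented.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band):
    """Phase-locking value across trials between two signals of shape
    (n_trials, n_samples), band-pass filtered before phase extraction."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x, axis=-1), axis=-1))
    phy = np.angle(hilbert(filtfilt(b, a, y, axis=-1), axis=-1))
    # Mean resultant length of the phase difference across trials (axis 0):
    # 1 = perfectly locked phases, 0 = uniformly random phase differences.
    return np.abs(np.mean(np.exp(1j * (phx - phy)), axis=0))

fs = 250                                      # sampling rate (Hz)
trials = np.random.default_rng(0).normal(size=(40, 2, fs))  # 40 trials, 2 channels
theta_plv = plv(trials[:, 0], trials[:, 1], fs, band=(4, 8))
print(theta_plv.shape)                        # one PLV value per time sample
```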
The question of simultaneity in multisensory integration
NASA Astrophysics Data System (ADS)
Leone, Lynnette; McCourt, Mark E.
2012-03-01
Early reports of audiovisual (AV) multisensory integration (MI) indicated that unisensory stimuli must evoke simultaneous physiological responses to produce decreases in reaction time (RT) such that for unisensory stimuli with unequal RTs the stimulus eliciting the faster RT had to be delayed relative to the stimulus eliciting the slower RT. The "temporal rule" states that MI depends on the temporal proximity of unisensory stimuli, the neural responses to which must fall within a window of integration. Ecological validity demands that MI should occur only for simultaneous events (which may give rise to non-simultaneous neural activations). However, spurious neural response simultaneities which are unrelated to singular environmental multisensory occurrences must somehow be rejected. Using an RT/race model paradigm we measured AV MI as a function of stimulus onset asynchrony (SOA: +/-200 ms, 50 ms intervals) under fully dark adapted conditions for visual (V) stimuli that were either weak (scotopic 525 nm flashes; 511 ms mean RT) or strong (photopic 630 nm flashes; 356 ms mean RT). Auditory (A) stimulus (1000 Hz pure tone) intensity was constant. Despite the 155 ms slower mean RT to the scotopic versus photopic stimulus, facilitative AV MI in both conditions nevertheless occurred exclusively at an SOA of 0 ms. Thus, facilitative MI demands both physical and physiological simultaneity. We consider the mechanisms by which the nervous system may take account of variations in response latency arising from changes in stimulus intensity in order to selectively integrate only those physiological simultaneities that arise from physical simultaneities.
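The race-model logic referenced above is commonly tested with Miller's inequality: if the audiovisual RT distribution exceeds the bound set by the summed unisensory distributions, coactivation rather than mere statistical facilitation is inferred. Below is a minimal sketch with simulated RTs; the numbers loosely echo the means quoted in the abstract but are otherwise invented.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative RT distribution evaluated on a time grid."""
    return np.searchsorted(np.sort(rts), t_grid, side="right") / len(rts)

rng = np.random.default_rng(0)
rt_a = rng.normal(360, 40, 200)    # auditory-alone RTs (ms), simulated
rt_v = rng.normal(356, 40, 200)    # visual-alone RTs
rt_av = rng.normal(320, 40, 200)   # audiovisual RTs

t = np.linspace(200, 500, 61)
# Miller's race-model inequality: P(RT<=t | AV) <= P(RT<=t | A) + P(RT<=t | V).
bound = np.minimum(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
violation = ecdf(rt_av, t) - bound
print(t[violation > 0])   # latencies at which coactivation exceeds the race bound
```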
Transferability of Dual-Task Coordination Skills after Practice with Changing Component Tasks
Schubert, Torsten; Liepelt, Roman; Kübler, Sebastian; Strobach, Tilo
2017-01-01
Recent research has demonstrated that dual-task performance with two simultaneously presented tasks can be substantially improved as a result of practice. Among other mechanisms, theories of dual-task practice relate this improvement to the acquisition of task coordination skills. These skills are assumed (1) to result from dual-task practice, but not from single-task practice, and (2) to be independent of the specific stimulus and response mappings of the practice situation and, therefore, transferable to new dual-task situations. The present study is the first to provide an elaborated test of these assumptions in a context with well-controlled practice and transfer situations. To this end, we compared the effects of dual-task and single-task practice with a visual and an auditory sensory-motor component task on dual-task performance in a subsequent transfer session. Importantly, stimulus and stimulus-response mapping conditions in the two component tasks changed repeatedly during the practice sessions, which prevents automatized stimulus-response associations from being carried over from practice to transfer. Dual-task performance was found to be improved after practice with the dual tasks, in contrast to single-task practice. These findings are consistent with the assumption that coordination skills had been acquired that can be transferred to other dual-task situations independently of the specific stimulus and response mapping conditions of the practiced component tasks. PMID:28659844
Zhang, Daogong; Fan, Zhaomin; Han, Yuechen; Wang, Mingming; Xu, Lei; Luo, Jianfen; Ai, Yu; Wang, Haibo
2012-01-01
To investigate the diagnostic value of vestibular testing and the high-stimulus-rate auditory brainstem response (ABR) test, and the possible mechanism responsible for benign paroxysmal vertigo of childhood (BPVC). Data from 56 patients with BPVC seen in the vertigo clinic of our hospital from May 2007 to September 2008 were retrospectively analyzed in this study. Patients with BPVC were tested with pure tone audiometry, the high-stimulus-rate ABR test, transcranial Doppler sonography (TCD), the bithermal caloric test, and VEMP. The results of the hearing and vestibular function tests were compared and analyzed. There were 56 patients with BPVC, including 32 males and 24 females, aged 3-12 years, with an average age of 6.5 years. Among the 56 BPVC patients, the results of pure tone audiometry were all normal. High-stimulus-rate ABR was abnormal in 66.1% (37/56) of cases. TCD showed 57.1% abnormality in the 56 cases, including faster flow rates in 28 cases and slower flow rates in 4 cases. High-stimulus-rate ABR and TCD were both abnormal in 48.2% (27/56) of cases. The bithermal caloric test was abnormal in 14.3% (8/56) of cases. VEMP showed 32.1% abnormality, including amplitude abnormalities in 16 cases and latency abnormalities in 2 cases. The abnormal rate of VEMP was much higher than that of the caloric test. Vascular mechanisms might be involved in the pathogenesis of BPVC, and there is strong evidence for a close relationship between BPVC and migraine. High-stimulus-rate ABR is helpful in the diagnosis of BPVC. The inferior vestibular pathway is much more impaired than the superior vestibular pathway in BPVC. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
No meditation-related changes in the auditory N1 during first-time meditation.
Barnes, L J; McArthur, G M; Biedermann, B A; de Lissa, P; Polito, V; Badcock, N A
2018-05-01
Recent studies link meditation expertise with enhanced low-level attention, measured through auditory event-related potentials (ERPs). In this study, we tested the reliability and validity of a recent finding that the N1 ERP in first-time meditators is smaller during meditation than non-meditation - an effect not present in long-term meditators. In the first experiment, we replicated the finding in first-time meditators. In two subsequent experiments, we discovered that this finding was not due to stimulus-related instructions, but was explained by an effect of the order of conditions. Extended exposure to the same tones has been linked with N1 decrement in other studies, and may explain the N1 decrement across our two conditions. We give examples of existing meditation and ERP studies that may include similar condition-order effects. The role of condition order among first-time meditators in this study indicates the importance of counterbalancing meditation and non-meditation conditions in meditation studies that use event-related potentials. Copyright © 2018 Elsevier B.V. All rights reserved.
A Neural Basis for Interindividual Differences in the McGurk Effect, a Multisensory Speech Illusion
Nath, Audrey R.; Beauchamp, Michael S.
2011-01-01
The McGurk effect is a compelling illusion in which humans perceive mismatched audiovisual speech as a completely different syllable. However, some normal individuals do not experience the illusion, reporting that the stimulus sounds the same with or without visual input. Converging evidence suggests that the left superior temporal sulcus (STS) is critical for audiovisual integration during speech perception. We used blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) to measure brain activity as McGurk perceivers and non-perceivers were presented with congruent audiovisual syllables, McGurk audiovisual syllables, and non-McGurk incongruent syllables. The inferior frontal gyrus showed an effect of stimulus condition (greater responses for incongruent stimuli) but not susceptibility group, while the left auditory cortex showed an effect of susceptibility group (greater response in susceptible individuals) but not stimulus condition. Only one brain region, the left STS, showed a significant effect of both susceptibility and stimulus condition. The amplitude of the response in the left STS was significantly correlated with the likelihood of perceiving the McGurk effect: a weak STS response meant that a subject was less likely to perceive the McGurk effect, while a strong response meant that a subject was more likely to perceive it. These results suggest that the left STS is a key locus for interindividual differences in speech perception. PMID:21787869
Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul
2016-01-01
Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception. PMID:27042360
Auditory Scene Analysis: An Attention Perspective
ERIC Educational Resources Information Center
Sussman, Elyse S.
2017-01-01
Purpose: This review article provides a new perspective on the role of attention in auditory scene analysis. Method: A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal…
Zenner, Hans P; Pfister, Markus; Birbaumer, Niels
2006-12-01
Acquired centralized tinnitus (ACT) is the most frequent form of chronic tinnitus. The proposed ACT sensitization (ACTS) assumes a peripheral initiation of tinnitus whereby sensitizing signals from the auditory system establish new neuronal connections in the brain. Consequently, permanent neurophysiological malfunction within the information-processing modules results. Successful treatment has to target these malfunctioning information processing. We present in this study the neurophysiological and psychophysiological aspects of a recently suggested neurophysiological model, which may explain the symptoms caused by central cognitive tinnitus sensitization. Although conditioned reflexes, as a causal agent of chronic tinnitus, respond to extinction procedures, sensitization may initiate a vicious circle of overexcitation of the auditory system, resisting extinction and habituation. We used the literature database as indicated under "References" covering English and German works. For the ACTS model we extracted neurophysiological hypotheses of the auditory stimulus processing and the neuronal connections of the central auditory system with other brain regions to explain the malfunctions of auditory information processing. The model does not assume information-processing changes specific for tinnitus but treats the processing of tinnitus signals comparable with the processing of other external stimuli. The model uses the extensive knowledge available on sensitization of perception and memory processes and highlights the similarities of tinnitus with central neuropathic pain. Quality, validity, and comparability of the extracted data were evaluated by peer reviewing. Statistical techniques were not used. According to the tinnitus sensitization model, a tinnitus signal originates (as a type I-IV tinnitus) in the cochlea. In the brain, concerned with perception and cognition, the 1) conditioned associations, as postulated by the tinnitus model of Jastreboff, and the 2) unconditioned sensitized stimulus responses, as postulated in the present ACTS model, are actively connected with and attributed to the tinnitus signal. Attention to the tinnitus constitutes a typical undesired sensitized response. Some of the tinnitus-associated attributes may be called essential, unconditioned sensitization attributes. By a process called facilitation, the tinnitus' essential attributes are suggested to activate the tinnitus response. The result is an undesired increase in responsivity, such as an increase in attentional focus to the eliciting tinnitus stimulus. The mechanisms underlying sensitization are known as a specific nonassociative learning process producing a structural fixation of long-term facilitation at the synaptic level. This sensitization model may be important for the development of a sensitization-specific treatment if extinction procedures alone do not lead to satisfactory outcome. Inasmuch as this model considers sensitization as a nonassociative learning process based on cortical plasticity, it is reasonable to assume that this learning process can be altered by counteracting learning procedures. These counteracting learning procedures may consist of tinnitus-specific cognitive and behavioral procedures.
Penhune, V B; Zatorre, R J; Feindel, W H
1999-03-01
This experiment examined the participation of the auditory cortex of the temporal lobe in the perception and retention of rhythmic patterns. Four patient groups were tested on a paradigm contrasting reproduction of auditory and visual rhythms: those with right or left anterior temporal lobe removals which included Heschl's gyrus (HG), the region of primary auditory cortex (RT-A and LT-A); and patients with right or left anterior temporal lobe removals which did not include HG (RT-a and LT-a). Estimation of lesion extent in HG using an MRI-based probabilistic map indicated that, in the majority of subjects, the lesion was confined to the anterior secondary auditory cortex located on the anterior-lateral extent of HG. On the rhythm reproduction task, RT-A patients were impaired in retention of auditory but not visual rhythms, particularly when accurate reproduction of stimulus durations was required. In contrast, LT-A patients as well as both RT-a and LT-a patients were relatively unimpaired on this task. None of the patient groups was impaired in the ability to make an adequate motor response. Further, they were unimpaired when using a dichotomous response mode, indicating that they were able to adequately differentiate the stimulus durations and, when given an alternative method of encoding, to retain them. Taken together, these results point to a specific role for the right anterior secondary auditory cortex in the retention of a precise analogue representation of auditory tonal patterns.
Seibold, Julia C; Nolden, Sophie; Oberem, Josefa; Fels, Janina; Koch, Iring
2018-06-01
In an auditory attention-switching paradigm, participants heard two simultaneously spoken number words, each presented to one ear, and decided whether the target number was smaller or larger than 5 by pressing a left or right key. An instructional cue in each trial indicated which feature had to be used to identify the target number (e.g., female voice). Auditory attention-switch costs were found when this feature changed compared to when it repeated across two consecutive trials. Earlier studies employing this paradigm showed mixed results when they examined whether such cued auditory attention switches can be prepared actively during the cue-stimulus interval. This study systematically assessed which preconditions are necessary for the advance preparation of auditory attention switches. Three experiments were conducted that controlled for cue-repetition benefits, modality switches between cue and stimuli, and predictability of the switch sequence. Only in the third experiment, in which predictability of an attention switch was maximal due to a pre-instructed switch sequence and predictable stimulus onsets, was active switch-specific preparation found. These results suggest that the cognitive system can prepare auditory attention switches, and this preparation seems to be triggered primarily by the memorised switching sequence and valid expectations about the time of target onset.
Buchholz, Jörg M
2011-07-01
Coloration detection thresholds (CDTs) were measured for a single reflection as a function of spectral content and reflection delay for diotic stimulus presentation. The direct sound was a 320-ms long burst of bandpass-filtered noise with varying lower and upper cut-off frequencies. The resulting threshold data revealed that: (1) sensitivity decreases with decreasing bandwidth and increasing reflection delay and (2) high-frequency components contribute less to detection than low-frequency components. The auditory processes that may be involved in coloration detection (CD) are discussed in terms of a spectrum-based auditory model, which is conceptually similar to the pattern-transformation model of pitch (Wightman, 1973). Hence, the model derives an auto-correlation function of the input stimulus by applying a frequency analysis to an auditory representation of the power spectrum. It was found that, to successfully describe the quantitative behavior of the CDT data, three important mechanisms need to be included: (1) auditory bandpass filters with a narrower bandwidth than classic Gammatone filters (the increase in spectral resolution is linked here to cochlear suppression), (2) a spectral contrast enhancement process that reflects neural inhibition mechanisms, and (3) integration of information across auditory frequency bands. Copyright © 2011 Elsevier B.V. All rights reserved.
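To make the spectrum-based model concrete, here is a minimal numerical sketch (not the author's implementation) of the underlying idea: a single reflection imposes a periodic ripple on the power spectrum with period 1/delay, and an autocorrelation computed across frequency exposes that ripple. The sample rate, reflection delay, and the use of a plain log power spectrum in place of an auditory excitation pattern are all simplifying assumptions.

```python
import numpy as np

fs = 16000                          # sample rate (Hz); assumed
delay_ms = 4.0                      # reflection delay; assumed
rng = np.random.default_rng(0)

direct = rng.standard_normal(int(0.320 * fs))   # 320-ms noise burst, as in the study
d = int(delay_ms / 1000 * fs)
sig = direct.copy()
sig[d:] += direct[:-d]                           # add a single reflection

power = np.abs(np.fft.rfft(sig)) ** 2
log_spec = 10 * np.log10(power + 1e-12)          # crude stand-in for an auditory spectrum
log_spec -= log_spec.mean()

# Autocorrelation across frequency: the reflection imposes a spectral ripple
# with period 1/delay, which shows up as a peak at the corresponding lag.
acf = np.correlate(log_spec, log_spec, mode="full")[log_spec.size - 1:]
df = fs / sig.size                               # spacing of spectral bins (Hz)
lo = int(50 / df)                                # skip the broad zero-lag region
peak = np.argmax(acf[lo:]) + lo
print(f"ripple spacing ~ {peak * df:.0f} Hz -> delay ~ {1000 / (peak * df):.2f} ms")
```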
Auditory Scene Analysis: An Attention Perspective
2017-01-01
Purpose: This review article provides a new perspective on the role of attention in auditory scene analysis. Method: A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal hearing are summarized to demonstrate attention effects on auditory perception, from passive processes that organize unattended input to attention effects that act at different levels of the system. Data will show that attention can sharpen stream organization toward behavioral goals, identify auditory events obscured by noise, and limit passive processing capacity. Conclusions: A model of attention is provided that illustrates how the auditory system performs multilevel analyses that involve interactions between stimulus-driven input and top-down processes. Overall, these studies show that (a) stream segregation occurs automatically and sets the basis for auditory event formation; (b) attention interacts with automatic processing to facilitate task goals; and (c) information about unattended sounds is not lost when selecting one organization over another. Our results support a neural model that allows multiple sound organizations to be held in memory and accessed simultaneously through a balance of automatic and task-specific processes, allowing flexibility for navigating noisy environments with competing sound sources. Presentation Video: http://cred.pubs.asha.org/article.aspx?articleid=2601618 PMID:29049599
Effects of Visual Speech on Early Auditory Evoked Fields - From the Viewpoint of Individual Variance
Yahata, Izumi; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
2017-01-01
The effects of visual speech (the moving image of the speaker’s face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions, the moving image of the same speaker’s face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (still image processed from speaker’s face using a strong Gaussian filter: control condition). On average, latency of N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli, compared to the control A/V condition. However, the degree of N100m shortening was not significantly different between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two different audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres but were not significantly correlated between right and left hemisphere. On the other hand, no significant correlation was observed between the magnitudes of visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed on the N100m is a fundamental process which does not depend on the congruency of the visual information. PMID:28141836
Effects of stimulus characteristics and task demands on pilots' perception of dichotic messages
NASA Technical Reports Server (NTRS)
Wenzel, Elizabeth M.
1986-01-01
The experiment is an initial investigation of pilot performance when auditory advisory messages are presented dichotically, either with or without a concurrent pursuit task requiring visual/motor dexterity. The dependent measures were percent correct and correct reaction times for manual responses to the auditory messages. Two stimulus variables which show facilitatory effects in traditional dichotic-listening paradigms, differences in pitch and semantic content of the messages, were examined to determine their effectiveness during the functional simulation of helicopter pursuit. To encourage motivation, participants accumulated points for responses to the advisory messages scored either for accuracy alone or for both accuracy and reaction times faster than their opponent's. In general, the combined effects of the stimulus and task variables are additive. When interactions do occur they suggest that an increase in task demands can sometimes mitigate, but usually does not remove, any processing advantages accrued from stimulus characteristics. The implications of these results for cockpit displays are discussed.
Brainstem auditory evoked responses in man. 1: Effect of stimulus rise-fall time and duration
NASA Technical Reports Server (NTRS)
Hecox, K.; Squires, N.; Galambos, R.
1975-01-01
Short latency (under 10 msec) responses elicited by bursts of white noise were recorded from the scalps of human subjects. Response alterations produced by changes in the noise burst duration (on-time), inter-burst interval (off-time), and onset and offset shapes were analyzed. The latency of the most prominent response component, wave V, was markedly delayed with increases in stimulus rise time but was unaffected by changes in fall time. Increases in stimulus duration, and therefore in loudness, resulted in a systematic increase in latency. This was probably due to response recovery processes, since the effect was eliminated with increases in stimulus off-time. The amplitude of wave V was insensitive to changes in signal rise and fall times, while increasing signal on-time produced smaller amplitude responses only for sufficiently short off-times. It was concluded that wave V of the human auditory brainstem evoked response is solely an onset response.
Coactivation of response initiation processes with redundant signals.
Maslovat, Dana; Hajj, Joëlle; Carlsen, Anthony N
2018-05-14
During reaction time (RT) tasks, participants respond faster to multiple stimuli from different modalities as compared to a single stimulus, a phenomenon known as the redundant signal effect (RSE). Explanations for this effect typically include coactivation arising from the multiple stimuli, which results in enhanced processing of one or more response production stages. The current study compared empirical RT data with the predictions of a model in which initiation-related activation arising from each stimulus is additive. Participants performed a simple wrist extension RT task following either a visual go-signal, an auditory go-signal, or both stimuli with the auditory stimulus delayed between 0 and 125 ms relative to the visual stimulus. Results showed statistical equivalence between the predictions of an additive initiation model and the observed RT data, providing novel evidence that the RSE can be explained via a coactivation of initiation-related processes. It is speculated that activation summation occurs at the thalamus, leading to the observed facilitation of response initiation. Copyright © 2018 Elsevier B.V. All rights reserved.
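The additive initiation model lends itself to a compact simulation. The sketch below is illustrative only; the onset latencies, growth rates, and fixed threshold are invented parameters, not the authors' fitted values. It sums a linearly growing initiation activation from each stimulus and reads out when the total crosses threshold, reproducing the qualitative RSE: redundant-signal RTs are fastest at short SOAs, and the benefit shrinks as the auditory delay grows.

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 0.6, dt)                     # time axis (s)

def ramp(onset, rate):
    """Linearly growing initiation activation starting at `onset` (s)."""
    return np.clip((t - onset) * rate, 0.0, None)

def rt(activation, threshold=1.0):
    """Time at which activation first reaches threshold (NaN if never)."""
    i = int(np.argmax(activation >= threshold))
    return t[i] if activation[i] >= threshold else float("nan")

vis = ramp(onset=0.060, rate=6.0)               # visual go-signal (invented params)
aud_alone = ramp(onset=0.040, rate=8.0)         # auditory go-signal (invented params)
print(f"visual-only RT {rt(vis)*1000:.0f} ms, auditory-only RT {rt(aud_alone)*1000:.0f} ms")
for soa in (0.000, 0.050, 0.125):               # auditory delays spanning the study's range
    aud = ramp(onset=0.040 + soa, rate=8.0)
    print(f"SOA {soa*1000:3.0f} ms: redundant RT {rt(vis + aud)*1000:.0f} ms")
```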
Briley, Paul M; Krumbholz, Katrin
2013-12-01
The neural response to a sensory stimulus tends to be more strongly reduced when the stimulus is preceded by the same, rather than a different, stimulus. This stimulus-specific adaptation (SSA) is ubiquitous across the senses. In hearing, SSA has been suggested to play a role in change detection as indexed by the mismatch negativity. This study sought to test whether SSA, measured in human auditory cortex, is caused by neural fatigue (reduction in neural responsiveness) or by sharpening of neural tuning to the adapting stimulus. For that, we measured event-related cortical potentials to pairs of pure tones with varying frequency separation and stimulus onset asynchrony (SOA). This enabled us to examine the relationship between the degree of specificity of adaptation as a function of frequency separation and the rate of decay of adaptation with increasing SOA. Using simulations of tonotopic neuron populations, we demonstrate that the fatigue model predicts independence of adaptation specificity and decay rate, whereas the sharpening model predicts interdependence. The data showed independence and thus supported the fatigue model. In a second experiment, we measured adaptation specificity after multiple presentations of the adapting stimulus. The multiple adapters produced more adaptation overall, but the effect was more specific to the adapting frequency. Within the context of the fatigue model, the observed increase in adaptation specificity could be explained by assuming a 2.5-fold increase in neural frequency selectivity. We discuss possible bottom-up and top-down mechanisms of this effect.
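The contrast between the two models can be illustrated with a toy version of the tonotopic-population simulation: under a fatigue account, each Gaussian-tuned unit loses gain in proportion to its response to the adapter, and that loss decays with SOA. In this hedged sketch (tuning width, adaptation strength, and decay constant are all assumed values), the key prediction the authors tested emerges directly: the shape of adaptation across frequency separation is unchanged by SOA, i.e., specificity and decay rate are independent.

```python
import numpy as np

cf = np.linspace(-3, 3, 601)            # characteristic frequencies (octaves re adapter)
sigma = 0.5                             # tuning bandwidth (octaves); assumed

def tuning(freq):
    return np.exp(-0.5 * ((cf - freq) / sigma) ** 2)

def adapted_response(df_oct, soa_s, k=0.6, tau=1.0):
    # Fatigue: each unit's gain drops in proportion to its adapter response,
    # and the drop decays exponentially with SOA.
    fatigue = k * tuning(0.0) * np.exp(-soa_s / tau)
    return np.sum((1.0 - fatigue) * tuning(df_oct))

dfs = np.array([0.0, 0.5, 1.0, 2.0])    # probe-adapter separations (octaves)
base = np.sum(tuning(0.0))
for soa in (0.5, 1.0, 2.0):
    adapt = np.array([(base - adapted_response(df, soa)) / base for df in dfs])
    # The normalized profile ("shape") is identical at every SOA under fatigue.
    print(f"SOA {soa:.1f} s: adaptation {adapt.round(3)}, shape {np.round(adapt / adapt[0], 3)}")
```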
Hall, Amee J; Brown, Trecia A; Grahn, Jessica A; Gati, Joseph S; Nixon, Pam L; Hughes, Sarah M; Menon, Ravi S; Lomber, Stephen G
2014-03-15
When conducting auditory investigations using functional magnetic resonance imaging (fMRI), there are inherent potential confounds that need to be considered. Traditional continuous fMRI acquisition methods produce sounds >90 dB which compete with stimuli or produce neural activation masking evoked activity. Sparse scanning methods insert a period of reduced MRI-related noise, between image acquisitions, in which a stimulus can be presented without competition. In this study, we compared sparse and continuous scanning methods to identify the optimal approach to investigate acoustically evoked cortical, thalamic and midbrain activity in the cat. Using a 7 T magnet, we presented broadband noise, 10 kHz tones, or 0.5 kHz tones in a block design, interleaved with blocks in which no stimulus was presented. Continuous scanning resulted in larger clusters of activation and more peak voxels within the auditory cortex. However, no significant activation was observed within the thalamus. Also, there was no significant difference found, between continuous or sparse scanning, in activations of midbrain structures. Higher magnitude activations were identified in auditory cortex compared to the midbrain using both continuous and sparse scanning. These results indicate that continuous scanning is the preferred method for investigations of auditory cortex in the cat using fMRI. Also, choice of method for future investigations of midbrain activity should be driven by other experimental factors, such as stimulus intensity and task performance during scanning. Copyright © 2014 Elsevier B.V. All rights reserved.
Neural Responses to Complex Auditory Rhythms: The Role of Attending
Chapin, Heather L.; Zanto, Theodore; Jantzen, Kelly J.; Kelso, Scott J. A.; Steinberg, Fred; Large, Edward W.
2010-01-01
The aim of this study was to explore the role of attention in pulse and meter perception using complex rhythms. We used a selective attention paradigm in which participants attended to either a complex auditory rhythm or a visually presented word list. Performance on a reproduction task was used to gauge whether participants were attending to the appropriate stimulus. We hypothesized that attention to complex rhythms – which contain no energy at the pulse frequency – would lead to activations in motor areas involved in pulse perception. Moreover, because multiple repetitions of a complex rhythm are needed to perceive a pulse, activations in pulse-related areas would be seen only after sufficient time had elapsed for pulse perception to develop. Selective attention was also expected to modulate activity in sensory areas specific to the modality. We found that selective attention to rhythms led to increased BOLD responses in basal ganglia, and basal ganglia activity was observed only after the rhythms had cycled enough times for a stable pulse percept to develop. These observations suggest that attention is needed to recruit motor activations associated with the perception of pulse in complex rhythms. Moreover, attention to the auditory stimulus enhanced activity in an attentional sensory network including primary auditory cortex, insula, anterior cingulate, and prefrontal cortex, and suppressed activity in sensory areas associated with attending to the visual stimulus. PMID:21833279
Laterality of basic auditory perception.
Sininger, Yvonne S; Bhatara, Anjali
2012-01-01
Laterality (left-right ear differences) of auditory processing was assessed using basic auditory skills: (1) gap detection, (2) frequency discrimination, and (3) intensity discrimination. Stimuli included tones (500, 1000, and 4000 Hz) and wide-band noise presented monaurally to each ear of typical adult listeners. The hypothesis tested was that processing of tonal stimuli would be enhanced by left ear (LE) stimulation and noise by right ear (RE) presentations. To investigate the limits of laterality by (1) spectral width, a narrow-band noise (NBN) of 450-Hz bandwidth was evaluated using intensity discrimination, and (2) stimulus duration, 200, 500, and 1000 ms duration tones were evaluated using frequency discrimination. A left ear advantage (LEA) was demonstrated with tonal stimuli in all experiments, but an expected REA for noise stimuli was not found. The NBN stimulus demonstrated no LEA and was characterised as a noise. No change in laterality was found with changes in stimulus durations. The LEA for tonal stimuli is felt to be due to more direct connections between the left ear and the right auditory cortex, which has been shown to be primary for spectral analysis and tonal processing. The lack of a REA for noise stimuli is unexplained. Sex differences in laterality for noise stimuli were noted but were not statistically significant. This study did establish a subtle but clear pattern of LEA for processing of tonal stimuli.
Auditory short-term memory in the primate auditory cortex
Scott, Brian H.; Mishkin, Mortimer
2015-01-01
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active ‘working memory’ bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a ‘match’ stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. PMID:26541581
Puffe, Lydia; Dittrich, Kerstin; Klauer, Karl Christoph
2017-01-01
In a joint go/no-go Simon task, each of two participants is to respond to one of two non-spatial stimulus features by means of a spatially lateralized response. Stimulus position varies horizontally and responses are faster and more accurate when response side and stimulus position match (compatible trial) than when they mismatch (incompatible trial), defining the social Simon effect or joint spatial compatibility effect. This effect was originally explained in terms of action/task co-representation, assuming that the co-actor's action is automatically co-represented. Recent research by Dolk, Hommel, Prinz, and Liepelt (2013) challenged this account by demonstrating joint spatial compatibility effects in a task-setting in which non-social objects like a Japanese waving cat were present, but no real co-actor. They postulated that every sufficiently salient object induces joint spatial compatibility effects. However, what makes an object sufficiently salient is so far not well defined. To scrutinize this open question, the current study manipulated auditory and/or visual attention-attracting cues of a Japanese waving cat within an auditory (Experiment 1) and a visual joint go/no-go Simon task (Experiment 2). Results revealed that joint spatial compatibility effects only occurred in an auditory Simon task when the cat provided auditory cues while no joint spatial compatibility effects were found in a visual Simon task. This demonstrates that it is not the sufficiently salient object alone that leads to joint spatial compatibility effects but instead, a complex interaction between features of the object and the stimulus material of the joint go/no-go Simon task.
The Effects of Auditory Tempo Changes on Rates of Stereotypic Behavior in Handicapped Children.
ERIC Educational Resources Information Center
Christopher, R.; Lewis, B.
1984-01-01
Rates of stereotypic behaviors in six severely/profoundly retarded children (eight to 15 years old) were observed during varying presentations of auditory beats produced by a metronome. Visual and statistical analysis of research results suggested a significant reaction to stimulus presentation. However, additional data following…
Stimulus-Dependent Flexibility in Non-Human Auditory Pitch Processing
ERIC Educational Resources Information Center
Bregman, Micah R.; Patel, Aniruddh D.; Gentner, Timothy Q.
2012-01-01
Songbirds and humans share many parallels in vocal learning and auditory sequence processing. However, the two groups differ notably in their abilities to recognize acoustic sequences shifted in absolute pitch (pitch height). Whereas humans maintain accurate recognition of words or melodies over large pitch height changes, songbirds are…
Concentration: The Neural Underpinnings of How Cognitive Load Shields Against Distraction.
Sörqvist, Patrik; Dahlström, Örjan; Karlsson, Thomas; Rönnberg, Jerker
2016-01-01
Whether cognitive load, and other aspects of task difficulty, increases or decreases distractibility is the subject of much debate in contemporary psychology. One camp argues that cognitive load usurps executive resources, which otherwise could be used for attentional control, and therefore cognitive load increases distraction. The other camp argues that cognitive load demands high levels of concentration (focal-task engagement), which suppresses peripheral processing and therefore decreases distraction. In this article, we employed a functional magnetic resonance imaging (fMRI) protocol to explore whether higher cognitive load in a visually presented task suppresses task-irrelevant auditory processing in cortical and subcortical areas. The results show that selectively attending to an auditory stimulus facilitates its neural processing in the auditory cortex, and switching the locus of attention to the visual modality decreases the neural response in the auditory cortex. When the cognitive load of the task presented in the visual modality increases, the neural response to the auditory stimulus is further suppressed, along with increased activity in networks related to effortful attention. Taken together, the results suggest that higher cognitive load decreases peripheral processing of task-irrelevant information, which decreases distractibility, as a side effect of the increased activity in a focused-attention network.
NASA Astrophysics Data System (ADS)
Leek, Marjorie R.; Neff, Donna L.
2004-05-01
Charles Watson's studies of informational masking and the effects of stimulus uncertainty on auditory perception have had a profound impact on auditory research. His series of seminal studies in the mid-1970s on the detection and discrimination of target sounds in sequences of brief tones with uncertain properties addresses the fundamental problem of extracting target signals from background sounds. As conceptualized by Chuck and others, informational masking results from more central (even "cognitive") processes as a consequence of stimulus uncertainty, and can be distinguished from "energetic" masking, which primarily arises from the auditory periphery. Informational masking techniques are now in common use to study the detection, discrimination, and recognition of complex sounds, the capacity of auditory memory and aspects of auditory selective attention, the often large effects of training to reduce detrimental effects of uncertainty, and the perceptual segregation of target sounds from irrelevant context sounds. This paper will present an overview of past and current research on informational masking, and show how Chuck's work has been expanded in several directions by other scientists to include the effects of informational masking on speech perception and on perception by listeners with hearing impairment. [Work supported by NIDCD.]
Jang, Jongmoon; Lee, JangWoo; Woo, Seongyong; Sly, David J; Campbell, Luke J; Cho, Jin-Ho; O'Leary, Stephen J; Park, Min-Hyun; Han, Sungmin; Choi, Ji-Wong; Jang, Jeong Hun; Choi, Hongsoo
2015-07-31
We proposed a piezoelectric artificial basilar membrane (ABM) composed of a microelectromechanical system cantilever array. The ABM mimics the tonotopy of the cochlea: frequency selectivity and mechanoelectric transduction. The fabricated ABM exhibits a clear tonotopy in an audible frequency range (2.92-12.6 kHz). Also, an animal model was used to verify the characteristics of the ABM as a front end for potential cochlear implant applications. For this, a signal processor was used to convert the piezoelectric output from the ABM to an electrical stimulus for auditory neurons. The electrical stimulus for auditory neurons was delivered through an implanted intra-cochlear electrode array. The amplitude of the electrical stimulus was modulated in the range of 0.15 to 3.5 V with incoming sound pressure levels (SPL) of 70.1 to 94.8 dB SPL. The electrical stimulus was used to elicit an electrically evoked auditory brainstem response (EABR) from deafened guinea pigs. EABRs were successfully measured and their magnitude increased upon application of acoustic stimuli from 75 to 95 dB SPL. The frequency selectivity of the ABM was estimated by measuring the magnitude of EABRs while applying sound pressure at the resonance and off-resonance frequencies of the corresponding cantilever of the selected channel. In this study, we demonstrated a novel piezoelectric ABM and verified its characteristics by measuring EABRs.
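The abstract reports only the endpoints of the amplitude mapping (0.15 to 3.5 V for 70.1 to 94.8 dB SPL). As a hypothetical illustration, a linear interpolation between those endpoints would look like the following; the actual transfer function of the signal processor is not specified in the abstract.

```python
def stimulus_volts(spl_db, spl_lo=70.1, spl_hi=94.8, v_lo=0.15, v_hi=3.5):
    """Map input SPL onto stimulus amplitude; the linear form is an assumption."""
    frac = min(max((spl_db - spl_lo) / (spl_hi - spl_lo), 0.0), 1.0)
    return v_lo + frac * (v_hi - v_lo)

for spl in (70.1, 80.0, 94.8):
    print(f"{spl:5.1f} dB SPL -> {stimulus_volts(spl):.2f} V")
```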
Bryan, Myranda A.; Popov, Pavlo; Scarff, Raymond; Carter, Cody; Wright, Erin; Aragona, Brandon J.; Robinson, Terry E.
2016-01-01
The sensory properties of a reward-paired cue (a conditioned stimulus; CS) may impact the motivational value attributed to the cue, and in turn influence the form of the conditioned response (CR) that develops. A cue with multiple sensory qualities, such as a moving lever-CS, may activate numerous neural pathways that process auditory and visual information, resulting in CRs that vary both within and between individuals. For example, CRs include approach to the lever-CS itself (rats that “sign-track”; ST), approach to the location of reward delivery (rats that “goal-track”; GT), or an “intermediate” combination of these behaviors. We found that the multimodal sensory features of the lever-CS were important to the development and expression of sign-tracking. When the lever-CS was covered, and thus could only be heard moving, STs not only continued to approach the lever location but also started to approach the food cup during the CS period. While still predictive of reward, the auditory component of the lever-CS was a much weaker conditioned reinforcer than the visible lever-CS. This plasticity in behavioral responding observed in STs closely resembled behaviors normally seen in rats classified as “intermediates.” Furthermore, the ability of both the lever-CS and the reward-delivery to evoke dopamine release in the nucleus accumbens was also altered by covering the lever—dopamine signaling in STs resembled neurotransmission observed in rats that normally only GT. These data suggest that while the visible lever-CS was attractive, wanted, and had incentive value for STs, when presented in isolation, the auditory component of the cue was simply predictive of reward, lacking incentive salience. Therefore, the specific sensory features of cues may differentially contribute to responding and ensure behavioral flexibility. PMID:27918279
Evaluating the operations underlying multisensory integration in the cat superior colliculus.
Stanford, Terrence R; Quessy, Stephan; Stein, Barry E
2005-07-13
It is well established that superior colliculus (SC) multisensory neurons integrate cues from different senses; however, the mechanisms responsible for producing multisensory responses are poorly understood. Previous studies have shown that spatially congruent cues from different modalities (e.g., auditory and visual) yield enhanced responses and that the greatest relative enhancements occur for combinations of the least effective modality-specific stimuli. Although these phenomena are well documented, little is known about the mechanisms that underlie them, because no study has systematically examined the operation that multisensory neurons perform on their modality-specific inputs. The goal of this study was to evaluate the computations that multisensory neurons perform in combining the influences of stimuli from two modalities. The extracellular activities of single neurons in the SC of the cat were recorded in response to visual, auditory, and bimodal visual-auditory stimulation. Each neuron was tested across a range of stimulus intensities and multisensory responses evaluated against the null hypothesis of simple summation of unisensory influences. We found that the multisensory response could be superadditive, additive, or subadditive but that the computation was strongly dictated by the efficacies of the modality-specific stimulus components. Superadditivity was most common within a restricted range of near-threshold stimulus efficacies, whereas for the majority of stimuli, response magnitudes were consistent with the linear summation of modality-specific influences. In addition to providing a constraint for developing models of multisensory integration, the relationship between response mode and stimulus efficacy emphasizes the importance of considering stimulus parameters when inducing or interpreting multisensory phenomena.
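The additivity test described here reduces to comparing each bimodal response against the sum of the corresponding unisensory responses. A minimal sketch follows, with invented spike counts and an assumed ±10% tolerance for calling a response "additive"; it reproduces the qualitative pattern reported above, with superadditivity at near-threshold efficacies.

```python
def additivity(v_only, a_only, va, tol=10.0):
    """Compare a bimodal response with the additive prediction (% difference)."""
    predicted = v_only + a_only
    index = (va - predicted) / predicted * 100.0
    if index > tol:
        label = "superadditive"
    elif index < -tol:
        label = "subadditive"
    else:
        label = "additive"
    return index, label

# Mean spikes/trial at near-threshold, moderate, and high efficacy (invented data)
for v, a, va in [(0.5, 0.4, 2.0), (4.0, 3.5, 8.0), (12.0, 10.0, 19.0)]:
    idx, label = additivity(v, a, va)
    print(f"V={v:4.1f}  A={a:4.1f}  VA={va:4.1f}: {idx:+5.0f}% -> {label}")
```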
Muenssinger, Jana; Stingl, Krunoslav T.; Matuz, Tamara; Binder, Gerhard; Ehehalt, Stefan; Preissl, Hubert
2013-01-01
Habituation—the response decrement to repetitively presented stimulation—is a basic cognitive capability and suited to investigate development and integrity of the human brain. To evaluate the developmental process of auditory habituation, the current study used magnetoencephalography (MEG) to investigate auditory habituation, dishabituation and stimulus specificity in children and adults and compared the results between age groups. Twenty-nine children (Mage = 9.69 years, SD ± 0.47) and 14 adults (Mage = 29.29 years, SD ± 3.47) participated in the study and passively listened to a habituation paradigm consisting of 100 trains of tones which were composed of five 500 Hz tones, one 750 Hz tone (dishabituator) and another two 500 Hz tones, respectively while focusing their attention on a silent movie. Adults showed the expected habituation and stimulus specificity within-trains while no response decrement was found between trains. Sensory adaptation or fatigue as a source for response decrement in adults is unlikely due to the strong reaction to the dishabituator (stimulus specificity) and strong mismatch negativity (MMN) responses. However, in children neither habituation nor dishabituation or stimulus specificity could be found within-trains, response decrement was found across trains. It can be speculated that the differences between children and adults are linked to differences in stimulus processing due to attentional processes. This study shows developmental differences in task-related brain activation and discusses the possible influence of broader concepts such as attention, which should be taken into account when comparing performance in an identical task between age groups. PMID:23882207
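For concreteness, the train structure described above (five 500-Hz standards, a 750-Hz dishabituator, then two more 500-Hz tones, repeated over 100 trains) can be generated in a few lines. The tone and gap durations below are assumed values, since the abstract does not report them.

```python
import numpy as np

fs = 44100                          # audio sample rate (Hz)
tone_s, gap_s = 0.1, 0.4            # tone and silent-gap durations; assumed

def tone(freq):
    t = np.arange(int(tone_s * fs)) / fs
    return np.sin(2 * np.pi * freq * t)

gap = np.zeros(int(gap_s * fs))
train_freqs = [500] * 5 + [750] + [500] * 2     # dishabituator in sixth position
train = np.concatenate([np.concatenate([tone(f), gap]) for f in train_freqs])
session = np.tile(train, 100)                   # 100 trains
print(f"session duration: {session.size / fs / 60:.1f} min")
```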
Szymanski, Francois D; Rabinowitz, Neil C; Magri, Cesare; Panzeri, Stefano; Schnupp, Jan W H
2011-11-02
Recent studies have shown that the phase of low-frequency local field potentials (LFPs) in sensory cortices carries a significant amount of information about complex naturalistic stimuli, yet the laminar circuit mechanisms and the aspects of stimulus dynamics responsible for generating this phase information remain essentially unknown. Here we investigated these issues by means of an information theoretic analysis of LFPs and current source densities (CSDs) recorded with laminar multi-electrode arrays in the primary auditory area of anesthetized rats during complex acoustic stimulation (music and broadband 1/f stimuli). We found that most LFP phase information originated from discrete "CSD events" consisting of granular-superficial layer dipoles of short duration and large amplitude, which we hypothesize to be triggered by transient thalamocortical activation. These CSD events occurred at rates of 2-4 Hz during both stimulation with complex sounds and silence. During stimulation with complex sounds, these events reliably reset the LFP phases at specific times during the stimulation history. These facts suggest that the informativeness of LFP phase in rat auditory cortex is the result of transient, large-amplitude events, of the "evoked" or "driving" type, reflecting strong depolarization in thalamo-recipient layers of cortex. Finally, the CSD events were characterized by a small number of discrete types of infragranular activation. The extent to which infragranular regions were activated was stimulus dependent. These patterns of infragranular activations may reflect a categorical evaluation of stimulus episodes by the local circuit to determine whether to pass on stimulus information through the output layers.
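A rough sketch of the information-theoretic logic, with synthetic data: if stimulus events reset the phase of the low-frequency LFP to stimulus-specific angles, then binning phases and computing a plug-in mutual information yields positive bits about stimulus identity. The paper's actual estimator, bias corrections, and recording pipeline differ; this shows only the bare idea.

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, n_stim, n_bins = 200, 4, 8

# Toy phases: each stimulus episode resets phase to a preferred angle + noise.
pref = np.array([0.0, np.pi / 2, np.pi, -np.pi / 2])
stim = rng.integers(0, n_stim, n_trials)
phase = pref[stim] + 0.6 * rng.standard_normal(n_trials)
edges = np.linspace(-np.pi, np.pi, n_bins + 1)[1:-1]
binned = np.digitize(np.angle(np.exp(1j * phase)), edges)

def entropy(labels):
    p = np.bincount(labels) / labels.size
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

H_phase = entropy(binned)
H_cond = sum((stim == s).mean() * entropy(binned[stim == s]) for s in range(n_stim))
print(f"I(phase; stimulus) ~ {H_phase - H_cond:.2f} bits")
```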
Audio-Visual, Visuo-Tactile and Audio-Tactile Correspondences in Preschoolers.
Nava, Elena; Grassi, Massimo; Turati, Chiara
2016-01-01
Interest in crossmodal correspondences has recently seen a renaissance thanks to numerous studies in human adults. Yet, still very little is known about crossmodal correspondences in children, particularly in sensory pairings other than audition and vision. In the current study, we investigated whether 4-5-year-old children match auditory pitch to the spatial motion of visual objects (audio-visual condition). In addition, we investigated whether this correspondence extends to touch, i.e., whether children also match auditory pitch to the spatial motion of touch (audio-tactile condition) and the spatial motion of visual objects to touch (visuo-tactile condition). In two experiments, two different groups of children were asked to indicate which of two stimuli fitted best with a centrally located third stimulus (Experiment 1), or to report whether two presented stimuli fitted together well (Experiment 2). We found sensitivity to the congruency of all of the sensory pairings only in Experiment 2, suggesting that only under specific circumstances can these correspondences be observed. Our results suggest that pitch-height correspondences for audio-visual and audio-tactile combinations may still be weak in preschool children, and speculate that this could be due to immature linguistic and auditory cues that are still developing at age five.
Sensitivity and specificity of auditory steady‐state response testing
Rabelo, Camila Maia; Schochat, Eliane
2011-01-01
INTRODUCTION: The ASSR test is an electrophysiological test that evaluates, among other aspects, neural synchrony, based on the frequency or amplitude modulation of tones. OBJECTIVE: The aim of this study was to determine the sensitivity and specificity of auditory steady‐state response testing in detecting lesions and dysfunctions of the central auditory nervous system. METHODS: Seventy volunteers were divided into three groups: those with normal hearing; those with mesial temporal sclerosis; and those with central auditory processing disorder. All subjects underwent auditory steady‐state response testing of both ears at 500 Hz and 2000 Hz (frequency modulation, 46 Hz). The difference between auditory steady‐state response‐estimated thresholds and behavioral thresholds (audiometric evaluation) was calculated. RESULTS: Estimated thresholds were significantly higher in the mesial temporal sclerosis group than in the normal and central auditory processing disorder groups. In addition, the difference between auditory steady‐state response‐estimated and behavioral thresholds was greatest in the mesial temporal sclerosis group when compared to the normal group than in the central auditory processing disorder group compared to the normal group. DISCUSSION: Research focusing on central auditory nervous system (CANS) lesions has shown that individuals with CANS lesions present a greater difference between ASSR‐estimated thresholds and actual behavioral thresholds; ASSR‐estimated thresholds being significantly worse than behavioral thresholds in subjects with CANS insults. This is most likely because the disorder prevents the transmission of the sound stimulus from being in phase with the received stimulus, resulting in asynchronous transmitter release. Another possible cause of the greater difference between the ASSR‐estimated thresholds and the behavioral thresholds is impaired temporal resolution. CONCLUSIONS: The overall sensitivity of auditory steady‐state response testing was lower than its overall specificity. Although the overall specificity was high, it was lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. Overall sensitivity was also lower in the central auditory processing disorder group than in the mesial temporal sclerosis group. PMID:21437442
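The sensitivity/specificity computation implied by this design can be sketched directly: classify an ear as impaired when the ASSR-estimated threshold exceeds the behavioral threshold by more than a criterion, then score the predictions against group membership. The threshold differences and the 20-dB criterion below are invented for illustration.

```python
import numpy as np

diff_db = np.array([5, 8, 12, 30, 35, 25, 9, 14, 40, 6])   # ASSR minus behavioral (dB)
lesion = np.array([0, 0, 0, 1, 1, 1, 0, 0, 1, 0])          # 1 = CANS lesion (invented)
criterion = 20                                              # decision criterion (dB); assumed

pred = diff_db > criterion
tp = np.sum(pred & (lesion == 1))
fn = np.sum(~pred & (lesion == 1))
tn = np.sum(~pred & (lesion == 0))
fp = np.sum(pred & (lesion == 0))
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```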
Enhancing second-order conditioning with lesions of the basolateral amygdala.
Holland, Peter C
2016-04-01
Because the occurrence of primary reinforcers in natural environments is relatively rare, conditioned reinforcement plays an important role in many accounts of behavior, including pathological behaviors such as the abuse of alcohol or drugs. As a result of pairing with natural or drug reinforcers, initially neutral cues acquire the ability to serve as reinforcers for subsequent learning. Accepting a major role for conditioned reinforcement in everyday learning is complicated by the often-evanescent nature of this phenomenon in the laboratory, especially when primary reinforcers are entirely absent from the test situation. Here, I found that under certain conditions, the impact of conditioned reinforcement could be extended by lesions of the basolateral amygdala (BLA). Rats received first-order Pavlovian conditioning pairings of 1 visual conditioned stimulus (CS) with food prior to receiving excitotoxic or sham lesions of the BLA, and first-order pairings of another visual CS with food after that surgery. Finally, each rat received second-order pairings of a different auditory cue with each visual first-order CS. As in prior studies, relative to sham-lesioned control rats, lesioned rats were impaired in their acquisition of second-order conditioning to the auditory cue paired with the first-order CS that was trained after surgery. However, lesioned rats showed enhanced and prolonged second-order conditioning to the auditory cue paired with the first-order CS that was trained before amygdala damage was made. Implications for an enhanced role for conditioned reinforcement by drug-related cues after drug-induced alterations in neural plasticity are discussed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Design of Training Systems, Phase II-A Report. An Educational Technology Assessment Model (ETAM)
1975-07-01
34format" for the perceptual tasks. This is applicable to auditory as well as visual tasks. Student Participation in Learning Route. When a student enters...skill formats Skill training 05.05 Vehicle properties Instructional functions: Type of stimulus presented to student visual auditory ...Subtask 05.05. For example, a trainer to identify and interpret auditory signals would not be represented in the above list. Trainers in the vehicle
Reconstruction of audio waveforms from spike trains of artificial cochlea models
Zai, Anja T.; Bhargava, Saurabh; Mesgarani, Nima; Liu, Shih-Chii
2015-01-01
Spiking cochlea models describe the analog processing and spike generation process within the biological cochlea. Reconstructing the audio input from the artificial cochlea spikes is therefore useful for understanding the fidelity of the information preserved in the spikes. The reconstruction process is challenging, particularly for spikes from mixed-signal (analog/digital) integrated circuit (IC) cochleas, because of multiple non-linearities in the model and the additional variance caused by random transistor mismatch. This work proposes an offline method for reconstructing the audio input from spike responses of both a particular spike-based hardware model called the AEREAR2 cochlea and an equivalent software cochlea model. This method was previously used to reconstruct the auditory stimulus based on the peri-stimulus histogram of spike responses recorded in the ferret auditory cortex. The reconstructed audio from the hardware cochlea is evaluated against an analogous software model using objective measures of speech quality and intelligibility, and further tested in a word recognition task. At low signal-to-noise ratios (SNR < -5 dB), the reconstructed audio gives better classification performance in this word recognition task than the original input at the same SNR. PMID:26528113
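The offline reconstruction idea can be sketched as a regularized linear decoder: regress the audio envelope onto time-lagged spike counts from the cochlea channels. This is a generic ridge-regression stand-in with synthetic Poisson spikes, not the paper's trained decoder or its actual features.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_ch, lags = 2000, 16, 10                    # time bins, channels, decoder lags

# Toy stimulus envelope and channel-wise Poisson spikes driven by it.
envelope = 5 * np.convolve(rng.standard_normal(T), np.ones(50) / 50, mode="same")
drive = np.clip(envelope[None, :] * rng.uniform(0.5, 2.0, (n_ch, 1)), 0, None)
spikes = rng.poisson(drive)

# Design matrix: row t holds the spike counts of all channels at lags 0..lags-1.
X = np.zeros((T, n_ch * lags))
for L in range(lags):
    X[L:, L * n_ch:(L + 1) * n_ch] = spikes[:, :T - L].T

lam = 10.0                                      # ridge penalty; assumed
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)
recon = X @ w
print(f"reconstruction correlation r = {np.corrcoef(recon, envelope)[0, 1]:.2f}")
```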
Lateralization of music processing with noises in the auditory cortex: an fNIRS study.
Santosa, Hendrik; Hong, Melissa Jiyoun; Hong, Keum-Shik
2014-01-01
The present study aimed to determine the effects of background noise on hemispheric lateralization in music processing by exposing 14 subjects to four different auditory environments: music segments only, noise segments only, music + noise segments, and the entire piece of music interfered with by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distinguish stimulus-evoked hemodynamics, the difference between the mean and the minimum value of the hemodynamic response for a given stimulus was used. Right-hemispheric lateralization in music processing was about 75% when, instead of continuous music, only music segments were heard. If the stimuli were only noises, the lateralization was about 65%. But if the music was mixed with noises, right-hemispheric lateralization increased. In particular, when the noise was slightly lower than the music (i.e., music level 10~15%, noise level 10%), all subjects showed right-hemispheric lateralization, reflecting the subjects' effort to hear the music in the presence of noise. However, too much noise reduced the subjects' discerning efforts.
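The response feature used here is simple enough to state in code. The sketch below, with synthetic time courses, computes the mean-minus-minimum feature per hemisphere and compares the two, which is the essence of the lateralization call.

```python
import numpy as np

def response_feature(hbo):
    """Mean-minus-minimum of one stimulus-locked hemodynamic time course."""
    return float(np.mean(hbo) - np.min(hbo))

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 100)
# Synthetic responses: a hemodynamic-like bump plus noise, larger on the right.
left = response_feature(0.2 * np.sin(np.pi * t) + 0.05 * rng.standard_normal(100))
right = response_feature(0.5 * np.sin(np.pi * t) + 0.05 * rng.standard_normal(100))
print("right-lateralized" if right > left else "left-lateralized")
```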
NASA Astrophysics Data System (ADS)
Narendran, Mini M.; Humes, Larry E.
2003-04-01
Increasing the rate of presentation can have a deleterious effect on auditory processing, especially among the elderly. Rate can be manipulated by changing the duration of individual components of a sequence of sounds, by changing the inter-stimulus interval (ISI) between components, or both. Consequently, when age-related deficits in performance appear to be attributable to rate of stimulus presentation, it is often the case that alternative explanations in terms of the effects of stimulus duration or ISI are also possible. In this study, the independent effects of duration and ISI on the discrimination of temporal order for four-tone sequences were investigated in a group of young normal-hearing and elderly hearing-impaired listeners. It was found that discrimination performance was driven by the rate of presentation, rather than stimulus duration or ISI alone, for both groups of listeners. The performance of the two groups of listeners differed significantly for the fastest presentation rates, but was similar for the slower rates. Slowing the rate of presentation seemed to improve performance, regardless of whether this was done by increasing stimulus duration or increasing ISI, and this was observed for both groups of listeners. [Work supported, in part, by NIA.]
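The rate/duration/ISI confound discussed here is simple arithmetic: presentation rate is the reciprocal of the onset-to-onset time, i.e., tone duration plus ISI, so a fixed rate can be produced by trading one against the other. The values below are illustrative.

```python
def rate_hz(duration_ms, isi_ms):
    """Presentation rate (tones/s) from tone duration and inter-stimulus interval."""
    return 1000.0 / (duration_ms + isi_ms)

# Two duration/ISI trades per rate: same rate, different component timing.
for dur, isi in [(40, 60), (60, 40), (100, 150), (150, 100)]:
    print(f"duration {dur:3d} ms + ISI {isi:3d} ms -> {rate_hz(dur, isi):.1f} tones/s")
```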
Wild-Wall, Nele; Falkenstein, Michael
2010-01-01
By using event-related potentials (ERPs) the present study examines if age-related differences in preparation and processing especially emerge during divided attention. Binaurally presented auditory cues called for focused (valid and invalid) or divided attention to one or both ears. Responses were required to subsequent monaurally presented valid targets (vowels), but had to be suppressed to non-target vowels or invalidly cued vowels. Middle-aged participants were more impaired under divided attention than young ones, likely due to an age-related decline in preparatory attention following cues as was reflected in a decreased CNV. Under divided attention, target processing was increased in the middle-aged, likely reflecting compensatory effort to fulfill task requirements in the difficult condition. Additionally, middle-aged participants processed invalidly cued stimuli more intensely as was reflected by stimulus ERPs. The results suggest an age-related impairment in attentional preparation after auditory cues especially under divided attention and latent difficulties to suppress irrelevant information.
Doelling, Keith; Arnal, Luc; Ghitza, Oded; Poeppel, David
2013-01-01
A growing body of research suggests that intrinsic neuronal slow (< 10 Hz) oscillations in auditory cortex appear to track incoming speech and other spectro-temporally complex auditory signals. Within this framework, several recent studies have identified critical-band temporal envelopes as the specific acoustic feature being reflected by the phase of these oscillations. However, how this alignment between speech acoustics and neural oscillations might underpin intelligibility is unclear. Here we test the hypothesis that the ‘sharpness’ of temporal fluctuations in the critical band envelope acts as a temporal cue to speech syllabic rate, driving delta-theta rhythms to track the stimulus and facilitate intelligibility. We interpret our findings as evidence that sharp events in the stimulus cause cortical rhythms to re-align and parse the stimulus into syllable-sized chunks for further decoding. Using magnetoencephalographic recordings, we show that by removing temporal fluctuations that occur at the syllabic rate, envelope-tracking activity is reduced. By artificially reinstating these temporal fluctuations, envelope-tracking activity is regained. These changes in tracking correlate with intelligibility of the stimulus. Together, the results suggest that the sharpness of fluctuations in the stimulus, as reflected in the cochlear output, drive oscillatory activity to track and entrain to the stimulus, at its syllabic rate. This process likely facilitates parsing of the stimulus into meaningful chunks appropriate for subsequent decoding, enhancing perception and intelligibility. PMID:23791839
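Envelope tracking of this sort is commonly quantified as phase locking between the stimulus envelope and the neural signal in the delta-theta band. The sketch below uses a synthetic "neural" trace (a lagged, noisy copy of the envelope) and standard band-pass plus Hilbert-transform machinery; it illustrates the measurement idea, not the authors' MEG pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 200.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(2)

# Toy broadband envelope and a lagged, noisy "neural" copy of it.
env = np.abs(np.convolve(rng.standard_normal(t.size), np.ones(40) / 40, "same"))
neural = np.roll(env, int(0.1 * fs)) + 0.5 * rng.standard_normal(t.size)

b, a = butter(2, [1, 8], btype="bandpass", fs=fs)      # delta-theta band (1-8 Hz)
phi_env = np.angle(hilbert(filtfilt(b, a, env)))
phi_neu = np.angle(hilbert(filtfilt(b, a, neural)))

plv = np.abs(np.mean(np.exp(1j * (phi_env - phi_neu))))  # phase-locking value
print(f"delta-theta phase locking = {plv:.2f}")
```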
Contextual fear conditioning differs for infant, adolescent, and adult rats
Esmorís-Arranz, Francisco J.; Méndez, Cástor; Spear, Norman E.
2009-01-01
Contextual fear conditioning was tested in infant, adolescent, and adult rats in terms of Pavlovian conditioned suppression. When a discrete auditory conditioned stimulus (CS) was paired with footshock (unconditioned stimulus, US) within the largely olfactory context, infants and adolescents conditioned to the context with substantial effectiveness but adult rats did not. When unpaired presentations of the CS and US occurred within the context, contextual fear conditioning was strong for adults, weak for infants, but about as strong for adolescents as when pairings of CS and US occurred in the context. Nonreinforced presentations of either the CS or context markedly reduced contextual fear conditioning in infants, but, in adolescents, CS extinction had no effect on contextual fear conditioning, although context extinction significantly reduced it. Neither CS extinction nor context extinction affected responding to the CS-context compound in infants, suggesting striking discrimination between the compound and its components. Female adolescents showed the same lack of effect of component extinction on response to the compound as infants, but CS extinction reduced responding to the compound in adolescent males, a sex difference seen also in adults. Theoretical implications are discussed for the development of perceptual-cognitive processing and hippocampus role. PMID:18343048
Nourski, Kirill V; Abbas, Paul J; Miller, Charles A; Robinson, Barbara K; Jeng, Fuh-Cherng
2005-04-01
This study investigated the effects of acoustic noise on the auditory nerve compound action potentials in response to electric pulse trains. Subjects were adult guinea pigs, implanted with a minimally invasive electrode to preserve acoustic sensitivity. Electrically evoked compound action potentials (ECAP) were recorded from the auditory nerve trunk in response to electric pulse trains both during and after the presentation of acoustic white noise. Simultaneously presented acoustic noise produced a decrease in ECAP amplitude. The effect of the acoustic masker on the electric probe was greatest at the onset of the acoustic stimulus and it was followed by a partial recovery of the ECAP amplitude. Following cessation of the acoustic noise, ECAP amplitude recovered over a period of approximately 100-200 ms. The effects of the acoustic noise were more prominent at lower electric pulse rates (interpulse intervals of 3 ms and higher). At higher pulse rates, the ECAP adaptation to the electric pulse train alone was larger and the acoustic noise, when presented, produced little additional effect. The observed effects of noise on ECAP were the greatest at high electric stimulus levels and, for a particular electric stimulus level, at high acoustic noise levels.
Functional mapping of the primate auditory system.
Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer
2003-01-24
Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.
Test of the neurolinguistic programming hypothesis that eye-movements relate to processing imagery.
Wertheim, E H; Habib, C; Cumming, G
1986-04-01
Bandler and Grinder's hypothesis that eye-movements reflect sensory processing was examined. 28 volunteers first memorized and then recalled visual, auditory, and kinesthetic stimuli. Changes in eye-positions during recall were videotaped and categorized by two raters into positions hypothesized by Bandler and Grinder's model to represent visual, auditory, and kinesthetic recall. Planned contrast analyses suggested that visual stimulus items, when recalled, elicited significantly more upward eye-positions and stares than auditory and kinesthetic items. Auditory and kinesthetic items, however, did not elicit more changes in eye-position hypothesized by the model to represent auditory and kinesthetic recall, respectively.
Two-dimensional adaptation in the auditory forebrain
Nagel, Katherine I.; Doupe, Allison J.
2011-01-01
Sensory neurons exhibit two universal properties: sensitivity to multiple stimulus dimensions, and adaptation to stimulus statistics. How adaptation affects encoding along primary dimensions is well characterized for most sensory pathways, but if and how it affects secondary dimensions is less clear. We studied these effects for neurons in the avian equivalent of primary auditory cortex, responding to temporally modulated sounds. We showed that the firing rate of single neurons in field L was affected by at least two components of the time-varying sound log-amplitude. When overall sound amplitude was low, neural responses were based on nonlinear combinations of the mean log-amplitude and its rate of change (first time differential). At high mean sound amplitude, the two relevant stimulus features became the first and second time derivatives of the sound log-amplitude. Thus a strikingly systematic relationship between dimensions was conserved across changes in stimulus intensity, whereby one of the relevant dimensions approximated the time differential of the other dimension. In contrast to stimulus mean, increases in stimulus variance did not change relevant dimensions, but selectively increased the contribution of the second dimension to neural firing, illustrating a new adaptive behavior enabled by multidimensional encoding. Finally, we demonstrated theoretically that inclusion of time differentials as additional stimulus features, as seen so prominently in the single-neuron responses studied here, is a useful strategy for encoding naturalistic stimuli, because it can lower the necessary sampling rate while maintaining the robustness of stimulus reconstruction to correlated noise. PMID:21753019
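As a concrete illustration of the two dimensions described here, a short sketch that constructs the mean log-amplitude and its time differential from an amplitude envelope. The window length and log floor are illustrative, and this is not the authors' spike-triggered analysis:

```python
import numpy as np

def log_amplitude_features(env, fs, win_s=0.05):
    """Two stimulus dimensions of the kind described above: the local
    mean of the log-amplitude and its first time differential.
    env: amplitude envelope; win_s and the floor are illustrative."""
    log_amp = np.log(env + 1e-6)                          # avoid log(0)
    n = int(win_s * fs)
    kernel = np.ones(n) / n
    mean_log = np.convolve(log_amp, kernel, mode="same")  # local mean
    d_log = np.gradient(log_amp, 1.0 / fs)                # d/dt
    return mean_log, d_log
```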
Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.
Teki, Sundeep; Kumar, Sukhbinder; Griffiths, Timothy D
2016-01-01
The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance-the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographical patterns (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.
Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan
2016-10-01
Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.
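A sketch of the difference-waveform logic described above (trial-averaged ERPs for left targets minus right targets), with a peak search bracketing the reported ~500 ms latency; the array shapes, sampling rate, and search window are assumptions:

```python
import numpy as np

def n2ac(erp_left_targets, erp_right_targets, fs, win=(0.400, 0.600)):
    """Difference waveform: ERP for left-target trials minus ERP for
    right-target trials, per electrode, with the subcomponent's peak
    sought in a window chosen here to bracket the reported ~500 ms.
    Inputs: arrays of shape (n_electrodes, n_samples)."""
    diff = erp_left_targets - erp_right_targets
    i0, i1 = (int(w * fs) for w in win)
    peak_lat = (i0 + np.argmax(np.abs(diff[:, i0:i1]), axis=1)) / fs
    return diff, peak_lat     # per-electrode peak latency in seconds
```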
Local inhibition of GABA affects precedence effect in the inferior colliculus
Wang, Yanjun; Wang, Ningyu; Wang, Dan; Jia, Jun; Liu, Jinfeng; Xie, Yan; Wen, Xiaohui; Li, Xiaoting
2014-01-01
The precedence effect is a prerequisite for faithful sound localization in a complex auditory environment, and is a physiological phenomenon in which the auditory system selectively suppresses the directional information from echoes. Here we investigated how neurons in the inferior colliculus respond to the paired sounds that produce precedence-effect illusions, and whether their firing behavior can be modulated through inhibition with gamma-aminobutyric acid (GABA). We recorded extracellularly from 36 neurons in rat inferior colliculus under three conditions: no injection, injection with saline, and injection with GABA. The paired sounds that produced precedence effects were two identical 4-ms noise bursts, which were delivered contralaterally or ipsilaterally to the recording site. The normalized neural responses were measured as a function of inter-stimulus delay, and half-maximal inter-stimulus delays were acquired. Neuronal responses to the lagging sounds were weak when the inter-stimulus delay was short, but increased gradually as the delay was lengthened. Saline injection produced no changes in neural responses, but after local GABA application, responses to the lagging stimulus were suppressed. Application of GABA affected the normalized response to lagging sounds, independently of whether they or the paired sounds were contralateral or ipsilateral to the recording site. These observations suggest that local inhibition by GABA in the rat inferior colliculus shapes the neural responses to lagging sounds, and modulates the precedence effect. PMID:25206830
Burst Firing is a Neural Code in an Insect Auditory System
Eyherabide, Hugo G.; Rokem, Ariel; Herz, Andreas V. M.; Samengo, Inés
2008-01-01
Various classes of neurons alternate between high-frequency discharges and silent intervals. This phenomenon is called burst firing. To analyze burst activity in an insect system, grasshopper auditory receptor neurons were recorded in vivo for several distinct stimulus types. The experimental data show that both burst probability and burst characteristics are strongly influenced by temporal modulations of the acoustic stimulus. The tendency to burst, hence, is not only determined by cell-intrinsic processes, but also by their interaction with the stimulus time course. We study this interaction quantitatively and observe that bursts containing a certain number of spikes occur shortly after stimulus deflections of specific intensity and duration. Our findings suggest a sparse neural code where information about the stimulus is represented by the number of spikes per burst, irrespective of the detailed interspike-interval structure within a burst. This compact representation cannot be interpreted as a firing-rate code. An information-theoretical analysis reveals that the number of spikes per burst reliably conveys information about the amplitude and duration of sound transients, whereas their time of occurrence is reflected by the burst onset time. The investigated neurons encode almost half of the total transmitted information in burst activity. PMID:18946533
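The burst code described here (spike count per burst for event magnitude, burst onset for event timing) can be operationalized with a simple interspike-interval criterion. A sketch with an assumed 6 ms threshold; sorted spike times are expected:

```python
import numpy as np

def spikes_per_burst(spike_times, isi_thresh=0.006):
    """Group sorted spike times into bursts using an interspike-interval
    criterion (6 ms here, an illustrative threshold) and return, for each
    burst, its onset time and spike count -- the two quantities the study
    above identifies as carrying event timing and magnitude."""
    spike_times = np.asarray(spike_times)
    if spike_times.size == 0:
        return []
    bursts = [[spike_times[0]]]
    for prev, cur in zip(spike_times[:-1], spike_times[1:]):
        if cur - prev <= isi_thresh:
            bursts[-1].append(cur)      # continue the current burst
        else:
            bursts.append([cur])        # start a new burst
    return [(b[0], len(b)) for b in bursts]

# Example: two bursts (3 spikes, then 2) and one isolated spike
print(spikes_per_burst([0.100, 0.103, 0.106, 0.300, 0.304, 0.800]))
# -> [(0.1, 3), (0.3, 2), (0.8, 1)]
```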
Cognitive mechanisms associated with auditory sensory gating
Jones, L.A.; Hills, P.J.; Dick, K.M.; Jones, S.P.; Bright, P.
2016-01-01
Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However, once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification. PMID:26716891
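In paired-stimulus paradigms of this kind, gating is conventionally quantified as the ratio of the P50 amplitude evoked by the second stimulus to that evoked by the first (not spelled out in the abstract). A sketch, with an assumed peak-search window:

```python
import numpy as np

def p50_gating_ratio(erp_s1, erp_s2, fs, window=(0.040, 0.080)):
    """Conventional sensory-gating index: P50 amplitude to the second
    (repeated) stimulus divided by that to the first. The peak is taken
    as the maximum in a 40-80 ms post-stimulus window (illustrative)."""
    i0, i1 = (int(w * fs) for w in window)
    a1 = np.max(erp_s1[i0:i1])
    a2 = np.max(erp_s2[i0:i1])
    return a2 / a1          # smaller ratio = stronger gating
```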
Rapid tuning shifts in human auditory cortex enhance speech intelligibility
Holdgraf, Christopher R.; de Heer, Wendy; Pasley, Brian; Rieger, Jochem; Crone, Nathan; Lin, Jack J.; Knight, Robert T.; Theunissen, Frédéric E.
2016-01-01
Experience shapes our perception of the world on a moment-to-moment basis. This robust perceptual effect of experience parallels a change in the neural representation of stimulus features, though the nature of this representation and its plasticity are not well-understood. Spectrotemporal receptive field (STRF) mapping describes the neural response to acoustic features, and has been used to study contextual effects on auditory receptive fields in animal models. We performed a STRF plasticity analysis on electrophysiological data from recordings obtained directly from the human auditory cortex. Here, we report rapid, automatic plasticity of the spectrotemporal response of recorded neural ensembles, driven by previous experience with acoustic and linguistic information, and with a neurophysiological effect in the sub-second range. This plasticity reflects increased sensitivity to spectrotemporal features, enhancing the extraction of more speech-like features from a degraded stimulus and providing the physiological basis for the observed ‘perceptual enhancement' in understanding speech. PMID:27996965
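STRF mapping itself is typically a regularized linear regression from time-lagged spectrogram features to the neural response. A generic sketch of that estimation step (not the authors' pipeline; the ridge penalty and array shapes are assumptions):

```python
import numpy as np

def estimate_strf(spec, resp, n_lags, alpha=1.0):
    """Linear STRF by ridge regression: predict the neural response
    from time-lagged spectrogram frames. spec: (n_times, n_freqs);
    resp: (n_times,). Returns an (n_lags, n_freqs) filter."""
    n_t, n_f = spec.shape
    X = np.zeros((n_t, n_lags * n_f))
    for lag in range(n_lags):                 # build lagged design matrix
        X[lag:, lag * n_f:(lag + 1) * n_f] = spec[:n_t - lag]
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ resp)
    return w.reshape(n_lags, n_f)
```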
Gong, Diankun; Hu, Jiehui; Yao, Dezhong
2012-04-01
With the two-choice go/no-go paradigm, we investigated whether the timbre attribute can be transmitted as partial information from the stimulus identification stage to the response preparation stage in auditory tone processing. We manipulated two attributes of the stimulus: timbre (piano vs. violin) and acoustic intensity (soft vs. loud) to ensure an earlier processing of timbre than intensity. We associated the timbre attribute more with go trials. Results showed that lateralized readiness potentials (LRPs) were consistently elicited in no-go trials. This showed that the timbre attribute had been transmitted to the response preparation stage before the intensity attribute was processed in the stimulus identification stage. Such a result provides evidence for the continuous model and asynchronous discrete coding (ADC) model in information processing. We suggest that partial information can be transmitted in an auditory channel. Copyright © 2011 Society for Psychophysiological Research.
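The LRP referenced here is conventionally derived by double subtraction over the motor electrodes C3/C4; a standard recipe, not detailed in the abstract:

```python
import numpy as np

def lrp(c3_left, c4_left, c3_right, c4_right):
    """Standard double-subtraction LRP: average of (C4 - C3) for
    left-hand trials and (C3 - C4) for right-hand trials. Inputs are
    trial-averaged waveforms at electrodes C3/C4; deflections indicate
    lateralized motor preparation of the responding hand."""
    return 0.5 * ((c4_left - c3_left) + (c3_right - c4_right))
```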
Changes in otoacoustic emissions during selective auditory and visual attention
Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis
2015-01-01
Previous studies have demonstrated that the otoacoustic emissions (OAEs) measured during behavioral tasks can have different magnitudes when subjects are attending selectively or not attending. The implication is that the cognitive and perceptual demands of a task can affect the first neural stage of auditory processing—the sensory receptors themselves. However, the directions of the reported attentional effects have been inconsistent, the magnitudes of the observed differences typically have been small, and comparisons across studies have been made difficult by significant procedural differences. In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring selective auditory attention (dichotic or diotic listening), selective visual attention, or relative inattention. Within subjects, the differences in nSFOAE magnitude between inattention and attention conditions were about 2–3 dB for both auditory and visual modalities, and the effect sizes for the differences typically were large for both nSFOAE magnitude and phase. These results reveal that the cochlear efferent reflex is differentially active during selective attention and inattention, for both auditory and visual tasks, although they do not reveal how attention is improved when efferent activity is greater. PMID:25994703
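The reported 2-3 dB attention effect is a magnitude difference between conditions. A sketch of that comparison using RMS amplitude; note that the nSFOAE itself involves a nonlinear-residual extraction not reproduced here:

```python
import numpy as np

def db_difference(oae_attend, oae_inattend):
    """Magnitude difference in dB between OAE waveforms recorded under
    attention and inattention, using RMS amplitude. A generic measure,
    not the authors' nSFOAE extraction procedure."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms(oae_attend) / rms(oae_inattend))
```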
Chau, Lily S; Prakapenka, Alesia; Fleming, Stephen A; Davis, Ashley S; Galvez, Roberto
2013-11-01
The underlying neuronal mechanisms of learning and memory have been heavily explored using associative learning paradigms. Two of the more commonly employed learning paradigms have been contextual and delay fear conditioning. In fear conditioning, a subject learns to associate a neutral stimulus (conditioned stimulus; CS), such as a tone or the context of the room, with a fear-provoking stimulus (unconditioned stimulus; US), such as a mild footshock. Utilizing these two paradigms, various analyses have elegantly demonstrated that the amygdala plays a role in both fear-related associative learning paradigms. However, the amygdala's involvement in trace fear conditioning, a forebrain-dependent fear associative learning paradigm that has been suggested to tap into higher cognitive processes, has not been closely investigated. Furthermore, to our knowledge, the specific amygdala nuclei involved with trace fear conditioning have not been examined. The present study used Arc expression as an activity marker to determine the amygdala's involvement in trace fear associative learning and to further explore involvement of specific amygdalar nuclei. Arc is an immediate early gene that has been shown to be associated with neuronal activation and is believed to be necessary for neuronal plasticity. Findings from the present study demonstrated that trace-conditioned mice, compared to backward-conditioned (stimulation-control), delay-conditioned and naïve mice, exhibited elevated amygdalar Arc expression in the basolateral (BLA) but not the central (CeA) or the lateral amygdala (LA). These findings are consistent with previous reports demonstrating that the amygdala plays a critical role in trace conditioning. Furthermore, these findings parallel studies demonstrating hippocampal-BLA activation following contextual fear conditioning, suggesting that trace fear conditioning and contextual fear conditioning may involve similar amygdala nuclei. Together, findings from this study demonstrate similarities in the pathway for trace and contextual fear conditioning, and further suggest possible underlying mechanisms for acquisition and consolidation of these two types of fear-related learning. Copyright © 2013 Elsevier Inc. All rights reserved.
Zatorre, Robert J.; Delhommeau, Karine; Zarate, Jean Mary
2012-01-01
We tested changes in cortical functional response to auditory patterns in a configural learning paradigm. We trained 10 human listeners to discriminate micromelodies (consisting of smaller pitch intervals than normally used in Western music) and measured covariation in blood oxygenation signal with increasing pitch interval size in order to dissociate global changes in activity from those specifically associated with the stimulus feature that was trained. A psychophysical staircase procedure with feedback was used for training over a 2-week period. Behavioral tests of discrimination ability performed before and after training showed significant learning on the trained stimuli, and generalization to other frequencies and tasks; no learning occurred in an untrained control group. Before training, the functional MRI data showed the expected systematic increase in activity in auditory cortices as a function of increasing micromelody pitch interval size. This function became shallower after training, with the maximal change observed in the right posterior auditory cortex. Global decreases in activity in auditory regions, along with global increases in frontal cortices, also occurred after training. Individual variation in learning rate was related to the hemodynamic slope to pitch interval size, such that those who had a higher sensitivity to pitch interval variation prior to learning achieved the fastest learning. We conclude that configural auditory learning entails modulation in the response of auditory cortex to the trained stimulus feature. Reduction in blood oxygenation response to increasing pitch interval size suggests that fewer computational resources, and hence lower neural recruitment, are associated with learning, in accord with models of auditory cortex function and with data from other modalities. PMID:23227019
Longitudinal auditory learning facilitates auditory cognition as revealed by microstate analysis.
Giroud, Nathalie; Lemke, Ulrike; Reich, Philip; Matthes, Katarina L; Meyer, Martin
2017-02-01
The current study investigates cognitive processes as reflected in late auditory-evoked potentials as a function of longitudinal auditory learning. A normal-hearing adult sample (n=15) performed an active oddball task at three consecutive time points (TPs) arranged at two-week intervals, during which EEG was recorded. The stimuli comprised syllables consisting of a natural fricative (/sh/, /s/, /f/) embedded between two /a/ sounds, as well as morphed transitions of the two syllables that served as deviants. Perceptual and cognitive modulations as reflected in the onset and the mean global field power (GFP) of N2b- and P3b-related microstates across four weeks were investigated. We found that the onset of P3b-like microstates, but not N2b-like microstates, decreased across TPs, more strongly for difficult deviants, leading to similar onsets for difficult and easy stimuli after repeated exposure. The mean GFP of all N2b-like and P3b-like microstates increased more in spectrally strong deviants compared to weak deviants, leading to a distinctive activation for each stimulus after learning. Our results indicate that the longitudinal training of auditory-related cognitive mechanisms, such as stimulus categorization, attention, and memory updating, is an indispensable part of successful auditory learning. This suggests that future studies should focus on the potential benefits of cognitive processes in auditory training. Copyright © 2016 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Haesen, Birgitt; Boets, Bart; Wagemans, Johan
2011-01-01
This literature review aims to interpret behavioural and electrophysiological studies addressing auditory processing in children and adults with autism spectrum disorder (ASD). Data have been organised according to the applied methodology (behavioural versus electrophysiological studies) and according to stimulus complexity (pure versus complex…
Auditory Habituation in the Fetus and Neonate: An fMEG Study
ERIC Educational Resources Information Center
Muenssinger, Jana; Matuz, Tamara; Schleger, Franziska; Kiefer-Schmidt, Isabelle; Goelz, Rangmar; Wacker-Gussmann, Annette; Birbaumer, Niels; Preissl, Hubert
2013-01-01
Habituation--the most basic form of learning--is used to evaluate central nervous system (CNS) maturation and to detect abnormalities in fetal brain development. In the current study, habituation, stimulus specificity and dishabituation of auditory evoked responses were measured in fetuses and newborns using fetal magnetoencephalography (fMEG). An…
Mishra, Jyoti; Zanto, Theodore; Nilakantan, Aneesha; Gazzaley, Adam
2013-01-01
Intrasensory interference during visual working memory (WM) maintenance by object stimuli (such as faces and scenes) has been shown to negatively impact WM performance, with greater detrimental impacts of interference observed in aging. Here we assessed age-related impacts of intrasensory WM interference from lower-level stimulus features such as visual and auditory motion stimuli. We consistently found that interference in the form of ignored distractions and secondary task interruptions presented during a WM maintenance period degraded memory accuracy in both the visual and auditory domain. However, in contrast to prior studies assessing WM for visual object stimuli, feature-based interference effects were not observed to be significantly greater in older adults. Analyses of neural oscillations in the alpha frequency band further revealed preserved mechanisms of interference processing in terms of post-stimulus alpha suppression, which was observed maximally for secondary task interruptions in visual and auditory modalities in both younger and older adults. These results suggest that age-related sensitivity of WM to interference may be limited to complex object stimuli, at least at low WM loads. PMID:23791629
Gavrilescu, M; Rossell, S; Stuart, G W; Shea, T L; Innes-Brown, H; Henshall, K; McKay, C; Sergejew, A A; Copolov, D; Egan, G F
2010-07-01
Previous research has reported auditory processing deficits that are specific to schizophrenia patients with a history of auditory hallucinations (AH). One explanation for these findings is that there are abnormalities in the interhemispheric connectivity of auditory cortex pathways in AH patients; as yet this explanation has not been experimentally investigated. We assessed the interhemispheric connectivity of both primary (A1) and secondary (A2) auditory cortices in n=13 AH patients, n=13 schizophrenia patients without auditory hallucinations (non-AH) and n=16 healthy controls using functional connectivity measures from functional magnetic resonance imaging (fMRI) data. Functional connectivity was estimated from resting state fMRI data using regions of interest defined for each participant based on functional activation maps in response to passive listening to words. Additionally, stimulus-induced responses were regressed out of the stimulus data and the functional connectivity was estimated for the same regions to investigate the reliability of the estimates. AH patients had significantly reduced interhemispheric connectivity in both A1 and A2 when compared with non-AH patients and healthy controls. The latter two groups did not show any differences in functional connectivity. Further, this pattern of findings was similar across the two datasets, indicating the reliability of our estimates. These data have identified a trait deficit specific to AH patients. Since this deficit was characterized within both A1 and A2 it is expected to result in the disruption of multiple auditory functions, for example, the integration of basic auditory information between hemispheres (via A1) and higher-order language processing abilities (via A2).
Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; Sajedi, Hamed
2014-11-01
Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time differences (ITDs) and inter-aural intensity differences (IIDs) with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using non-word repetition and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and localization performance in the two groups. Children in the APD group had consistently lower scores than typically developing subjects on lateralization and working memory capacity measures. The results showed that working memory capacity had a significantly negative correlation with ITD errors, especially with the high-pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings of this research indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Distortions of Subjective Time Perception Within and Across Senses
van Wassenhove, Virginie; Buonomano, Dean V.; Shimojo, Shinsuke; Shams, Ladan
2008-01-01
Background: The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings: We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information and was never perceived shorter than their actual durations. Conclusions/Significance: These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration cannot be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions. PMID:18197248
Visual Information Present in Infragranular Layers of Mouse Auditory Cortex.
Morrill, Ryan J; Hasenstaub, Andrea R
2018-03-14
The cerebral cortex is a major hub for the convergence and integration of signals from across the sensory modalities; sensory cortices, including primary regions, are no exception. Here we show that visual stimuli influence neural firing in the auditory cortex of awake male and female mice, using multisite probes to sample single units across multiple cortical layers. We demonstrate that visual stimuli influence firing in both primary and secondary auditory cortex. We then determine the laminar location of recording sites through electrode track tracing with fluorescent dye and optogenetic identification using layer-specific markers. Spiking responses to visual stimulation occur deep in auditory cortex and are particularly prominent in layer 6. Visual modulation of firing rate occurs more frequently at areas with secondary-like auditory responses than those with primary-like responses. Auditory cortical responses to drifting visual gratings are not orientation-tuned, unlike visual cortex responses. The deepest cortical layers thus appear to be an important locus for cross-modal integration in auditory cortex. SIGNIFICANCE STATEMENT: The deepest layers of the auditory cortex are often considered its most enigmatic, possessing a wide range of cell morphologies and atypical sensory responses. Here we show that, in mouse auditory cortex, these layers represent a locus of cross-modal convergence, containing many units responsive to visual stimuli. Our results suggest that this visual signal conveys the presence and timing of a stimulus rather than specifics about that stimulus, such as its orientation. These results shed light on both how and what types of cross-modal information are integrated at the earliest stages of sensory cortical processing. Copyright © 2018 the authors 0270-6474/18/382854-09$15.00/0.
Soskey, Laura N; Allen, Paul D; Bennetto, Loisa
2017-08-01
One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
High-Field Functional Imaging of Pitch Processing in Auditory Cortex of the Cat
Butler, Blake E.; Hall, Amee J.; Lomber, Stephen G.
2015-01-01
The perception of pitch is a widely studied and hotly debated topic in human hearing. Many of these studies combine functional imaging techniques with stimuli designed to disambiguate the percept of pitch from frequency information present in the stimulus. While useful in identifying potential “pitch centres” in cortex, the existence of truly pitch-responsive neurons requires single neuron-level measures that can only be undertaken in animal models. While a number of animals have been shown to be sensitive to pitch, few studies have addressed the location of cortical generators of pitch percepts in non-human models. The current study uses high-field functional magnetic resonance imaging (fMRI) of the feline brain in an attempt to identify regions of cortex that show increased activity in response to pitch-evoking stimuli. Cats were presented with iterated rippled noise (IRN) stimuli, narrowband noise stimuli with the same spectral profile but no perceivable pitch, and a processed IRN stimulus in which phase components were randomized to preserve slowly changing modulations in the absence of pitch (IRNo). Pitch-related activity was not observed to occur in either primary auditory cortex (A1) or the anterior auditory field (AAF) which comprise the core auditory cortex in cats. Rather, cortical areas surrounding the posterior ectosylvian sulcus responded preferentially to the IRN stimulus when compared to narrowband noise, with group analyses revealing bilateral activity centred in the posterior auditory field (PAF). This study demonstrates that fMRI is useful for identifying pitch-related processing in cat cortex, and identifies cortical areas that warrant further investigation. Moreover, we have taken the first steps in identifying a useful animal model for the study of pitch perception. PMID:26225563
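IRN of the kind used here is built by a delay-and-add network: delay a noise by d seconds, scale it, add it back to the original, and iterate; the perceived pitch corresponds to 1/d. A generic sketch (the study's exact delay, gain, and iteration count are not given in the abstract):

```python
import numpy as np

def iterated_rippled_noise(fs, dur_s, delay_s, n_iter, gain=1.0, seed=0):
    """Iterated rippled noise via an add-original delay-and-add network:
    y <- x + gain * delay(y). The resulting pitch corresponds to
    1/delay_s. Parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(int(fs * dur_s))
    d = int(round(delay_s * fs))
    y = x.copy()
    for _ in range(n_iter):
        delayed = np.zeros_like(y)
        delayed[d:] = y[:-d]
        y = x + gain * delayed
    return y / np.max(np.abs(y))    # normalize to unit peak

# Example: ~200 Hz pitch (5 ms delay), 16 iterations
irn = iterated_rippled_noise(fs=44100, dur_s=1.0, delay_s=1 / 200, n_iter=16)
```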
Timescale- and Sensory Modality-Dependency of the Central Tendency of Time Perception.
Murai, Yuki; Yotsumoto, Yuko
2016-01-01
When individuals are asked to reproduce intervals of stimuli that are intermixedly presented at various times, longer intervals are often underestimated and shorter intervals overestimated. This phenomenon may be attributed to the central tendency of time perception, and suggests that our brain optimally encodes a stimulus interval based on current stimulus input and prior knowledge of the distribution of stimulus intervals. Two distinct systems are thought to be recruited in the perception of sub- and supra-second intervals. Sub-second timing is subject to local sensory processing, whereas supra-second timing depends on more centralized mechanisms. To clarify the factors that influence time perception, the present study investigated how both sensory modality and timescale affect the central tendency. In Experiment 1, participants were asked to reproduce sub- or supra-second intervals, defined by visual or auditory stimuli. In the sub-second range, the magnitude of the central tendency was significantly larger for visual intervals compared to auditory intervals, while visual and auditory intervals exhibited a correlated and comparable central tendency in the supra-second range. In Experiment 2, the ability to discriminate sub-second intervals in the reproduction task was controlled across modalities by using an interval discrimination task. Even when the ability to discriminate intervals was controlled, visual intervals exhibited a larger central tendency than auditory intervals in the sub-second range. In addition, the magnitude of the central tendency for visual and auditory sub-second intervals was significantly correlated. These results suggest that a common modality-independent mechanism is responsible for the supra-second central tendency, and that both the modality-dependent and modality-independent components of the timing system contribute to the central tendency in the sub-second range.
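The central tendency described above is commonly modeled as a weighted average of the current stimulus and the prior mean of the presented intervals. A small sketch of that observer model; the weight value is illustrative:

```python
def reproduce_interval(stimulus, prior_mean, w):
    """Central-tendency model: the reproduced interval is a weighted
    average of the current stimulus and the prior mean of the interval
    distribution. w in [0, 1] weights the stimulus; smaller w means
    stronger regression to the mean (larger central tendency)."""
    return w * stimulus + (1 - w) * prior_mean

# With w = 0.7 and a prior mean of 0.9 s, a 1.2 s interval is
# reproduced as ~1.11 s (underestimated) and a 0.6 s interval as
# ~0.69 s (overestimated).
print(reproduce_interval(1.2, 0.9, 0.7), reproduce_interval(0.6, 0.9, 0.7))
```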
Attention distributed across sensory modalities enhances perceptual performance
Mishra, Jyoti; Gazzaley, Adam
2012-01-01
This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811
Stimulus induced reset of 40-Hz auditory steady-state responses.
Ross, B; Herdman, A T; Pantev, C
2004-11-30
Auditory steady-state responses (ASSR) were evoked with 40-Hz amplitude-modulated 500-Hz tones. An additional impulse-like noise stimulus (2,000 +/- 500 Hz), with a spectrum clearly distinct from that of the AM sound, induced pronounced perturbations in the ASSR. The effect of the interfering noise was interpreted as (1) a reset of the ASSR because of a sudden loss in phase coherence, (2) a decrease in signal power immediately after presentation of the noise impulse, and (3) a modulation of ASSR amplitude and phase resembling the time course of the ASSR onset. The time course of the ASSR onset was interpreted as reflecting temporal integration over several hundred milliseconds. The reset of the ASSR was discussed as a powerful mechanism that allows for fast reaction to a short stimulus change and overcomes the disadvantage of the ASSR's long integration time constant.
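For concreteness, the two stimuli described above can be synthesized in a few lines; the duration, modulation depth, and probe length below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 44100
t = np.arange(0, 1.0, 1 / fs)

# 500-Hz carrier, fully amplitude-modulated at 40 Hz: the ASSR-evoking
# stimulus described above (duration and depth are illustrative)
am_tone = 0.5 * (1 + np.sin(2 * np.pi * 40 * t)) * np.sin(2 * np.pi * 500 * t)

# Brief interfering noise band-limited to 1500-2500 Hz, matching the
# reported 2,000 +/- 500 Hz spectrum; a 10 ms duration is assumed
b, a = butter(4, [1500 / (fs / 2), 2500 / (fs / 2)], btype="band")
probe = filtfilt(b, a, np.random.default_rng(0).standard_normal(int(0.010 * fs)))
```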
School-aged children can benefit from audiovisual semantic congruency during memory encoding.
Heikkilä, Jenni; Tiippana, Kaisa
2016-05-01
Although we live in a multisensory world, children's memory has been usually studied concentrating on only one sensory modality at a time. In this study, we investigated how audiovisual encoding affects recognition memory. Children (n = 114) from three age groups (8, 10 and 12 years) memorized auditory or visual stimuli presented with a semantically congruent, incongruent or non-semantic stimulus in the other modality during encoding. Subsequent recognition memory performance was better for auditory or visual stimuli initially presented together with a semantically congruent stimulus in the other modality than for stimuli accompanied by a non-semantic stimulus in the other modality. This congruency effect was observed for pictures presented with sounds, for sounds presented with pictures, for spoken words presented with pictures and for written words presented with spoken words. The present results show that semantically congruent multisensory experiences during encoding can improve memory performance in school-aged children.
Clinical applications of the human brainstem responses to auditory stimuli
NASA Technical Reports Server (NTRS)
Galambos, R.; Hecox, K.
1975-01-01
A technique utilizing the frequency following response (FFR) (obtained by auditory stimulation, whereby the stimulus frequency and duration are mirror-imaged in the resulting brainwaves) as a clinical tool for hearing disorders in humans of all ages is presented. Various medical studies are discussed to support the clinical value of the technique. The discovery and origin of the FFR and another significant brainstem auditory response involved in studying the eighth nerve is also discussed.
Investigating brain response to music: a comparison of different fMRI acquisition schemes.
Mueller, Karsten; Mildner, Toralf; Fritz, Thomas; Lepsien, Jöran; Schwarzbauer, Christian; Schroeter, Matthias L; Möller, Harald E
2011-01-01
Functional magnetic resonance imaging (fMRI) in auditory experiments is a challenge, because the scanning procedure produces considerable noise that can interfere with the auditory paradigm. The noise might either mask the auditory material presented, or interfere with stimuli designed to evoke emotions because it sounds loud and rather unpleasant. Therefore, scanning paradigms that allow interleaved auditory stimulation and image acquisition appear to be advantageous. The sparse temporal sampling (STS) technique uses a very long repetition time in order to achieve a stimulus presentation in the absence of scanner noise. Although only relatively few volumes are acquired for the resulting data sets, there have been recent studies where this method has furthered remarkable results. A new development is the interleaved silent steady state (ISSS) technique. Compared with STS, this method is capable of acquiring several volumes in the time frame between the auditory trials (while the magnetization is kept in a steady state during stimulus presentation). In order to draw conclusions about the optimum fMRI procedure with auditory stimulation, different echo-planar imaging (EPI) acquisition schemes were compared: Continuous scanning, STS, and ISSS. The total acquisition time of each sequence was adjusted to about 12.5 min. The results indicate that the ISSS approach exhibits the highest sensitivity in detecting subtle activity in sub-cortical brain regions. Copyright © 2010 Elsevier Inc. All rights reserved.
Role of Sleep Deprivation in Fear Conditioning and Extinction: Implications for Treatment of PTSD
2014-10-01
The auditory stimulus during the NA trials, a brief (40 ms) pulse of 108 dB, is used to induce a startle response, the magnitude of which is the...processing in humans and rodents would aid in mechanistic studies examining SD-induced inattention. We assessed the effects of 36 hours of: 1) Total SD...were required to inhibit from responding. TSD-induced effects on the human Psychomotor Vigilance Test (PVT) were also examined. Effects of SD were also
The use of picture prompts and prompt delay to teach receptive labeling.
Vedora, Joseph; Barry, Tiffany
2016-12-01
The current study extended research on picture prompts by using them with a progressive prompt delay to teach receptive labeling of pictures to 2 teenagers with autism. The procedure differed from prior research because the auditory stimulus was not presented or was presented only once during the picture-prompt condition. The results indicated that the combination of picture prompts and prompt delay was effective, although 1 participant required a procedural modification. © 2016 Society for the Experimental Analysis of Behavior.
Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B
2012-07-16
Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.
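The SSEP measure used here is read off the spectrum at the frequency at which the irrelevant checkerboard flickers. A generic frequency-tagging sketch (the study's exact tag frequencies and preprocessing are not given in the abstract):

```python
import numpy as np

def ssep_amplitude(eeg, fs, tag_hz):
    """Amplitude of a steady-state evoked response at the stimulus
    'tag' frequency, read off the Fourier spectrum of the (ideally
    trial-averaged) signal. A generic frequency-tagging measure."""
    spectrum = np.fft.rfft(eeg) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    return 2 * np.abs(spectrum[np.argmin(np.abs(freqs - tag_hz))])
```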
Fear Conditioning is Disrupted by Damage to the Postsubiculum
Robinson, Siobhan; Bucci, David J.
2011-01-01
The hippocampus plays a central role in spatial and contextual learning and memory; however, relatively little is known about the specific contributions of parahippocampal structures that interface with the hippocampus. The postsubiculum (PoSub) is reciprocally connected with a number of hippocampal, parahippocampal and subcortical structures that are involved in spatial learning and memory. In addition, behavioral data suggest that PoSub is needed for optimal performance during tests of spatial memory. Together, these data suggest that PoSub plays a prominent role in spatial navigation. Currently it is unknown whether the PoSub is needed for other forms of learning and memory that also require the formation of associations among multiple environmental stimuli. To address this gap in the literature, we investigated the role of PoSub in Pavlovian fear conditioning. In Experiment 1, male rats received either lesions of PoSub or sham surgery prior to training in a classical fear conditioning procedure. On the training day, a tone was paired with foot shock three times. Conditioned fear to the training context was evaluated 24 hr later by placing rats back into the conditioning chamber without presenting any tones or shocks. Auditory fear was assessed on the third day by presenting the auditory stimulus in a novel environment (no shock). PoSub-lesioned rats exhibited impaired acquisition of the conditioned fear response as well as impaired expression of contextual and auditory fear conditioning. In Experiment 2, PoSub lesions were made 1 day after training to specifically assess the role of PoSub in fear memory. No deficits in the expression of contextual fear were observed, but freezing to the tone was significantly reduced in PoSub-lesioned rats compared to shams. Together, these results indicate that PoSub is necessary for normal acquisition of conditioned fear, and that PoSub contributes to the expression of auditory but not contextual fear memory. PMID:22076971
Neural entrainment to rhythmic speech in children with developmental dyslexia
Power, Alan J.; Mead, Natasha; Barnes, Lisa; Goswami, Usha
2013-01-01
A rhythmic paradigm based on repetition of the syllable “ba” was used to study auditory, visual, and audio-visual oscillatory entrainment to speech in children with and without dyslexia using EEG. Children pressed a button whenever they identified a delay in the isochronous stimulus delivery (500 ms; 2 Hz delta band rate). Response power, strength of entrainment and preferred phase of entrainment in the delta and theta frequency bands were compared between groups. The quality of stimulus representation was also measured using cross-correlation of the stimulus envelope with the neural response. The data showed a significant group difference in the preferred phase of entrainment in the delta band in response to the auditory and audio-visual stimulus streams. A different preferred phase has significant implications for the quality of speech information that is encoded neurally, as it implies enhanced neuronal processing (phase alignment) at less informative temporal points in the incoming signal. Consistent with this possibility, the cross-correlogram analysis revealed superior stimulus representation by the control children, who showed a trend for larger peak r-values and significantly later lags in peak r-values compared to participants with dyslexia. Significant relationships between both peak r-values and peak lags were found with behavioral measures of reading. The data indicate that the auditory temporal reference frame for speech processing is atypical in developmental dyslexia, with low frequency (delta) oscillations entraining to a different phase of the rhythmic syllabic input. This would affect the quality of encoding of speech, and could underlie the cognitive impairments in phonological representation that are the behavioral hallmark of this developmental disorder across languages. PMID:24376407
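The cross-correlogram analysis described here yields exactly the two group-comparable quantities reported: the peak correlation (r-value) and its lag. A generic sketch, assuming equal-length, co-sampled envelope and response signals:

```python
import numpy as np

def peak_xcorr(envelope, response, fs, max_lag_s=0.5):
    """Cross-correlate the stimulus envelope with the neural response
    and return the peak correlation and its lag in seconds. Signals
    are z-scored first; both arrays must be the same length."""
    e = (envelope - envelope.mean()) / envelope.std()
    r = (response - response.mean()) / response.std()
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    cc = np.array([np.mean(e[max(0, -l):len(e) - max(0, l)] *
                           r[max(0, l):len(r) - max(0, -l)]) for l in lags])
    best = np.argmax(cc)
    return cc[best], lags[best] / fs
```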
Cobb, Kensi M; Stuart, Andrew
The purpose of the study was to examine the differences in auditory brainstem response (ABR) latency and amplitude indices to the CE-Chirp stimuli in neonates versus young adults as a function of stimulus level, rate, polarity, frequency and gender. Participants were 168 healthy neonates and 20 normal-hearing young adults. ABRs were obtained to air- and bone-conducted CE-Chirps and air-conducted CE-Chirp octave band stimuli. The effects of stimulus level, rate, and polarity were examined with air-conducted CE-Chirps. The effect of stimulus level was also examined with bone-conducted CE-Chirps and CE-Chirp octave band stimuli. The effect of gender was examined across all stimulus manipulations. In general, ABR wave V amplitudes were significantly larger (p < 0.0001) and latencies were significantly shorter (p < 0.0001) for adults versus neonates for all air-conducted CE-Chirp stimuli with all stimulus manipulations. For bone-conducted CE-Chirps, infants had significantly shorter wave V latencies than adults at 15 dB nHL and 45 dB nHL (p = 0.02). Adult wave V amplitude was significantly larger for bone-conducted CE-Chirps only at 30 dB nHL (p = 0.02). The effect of gender was not statistically significant across all measures (p > 0.05). Significant differences in ABR latencies and amplitudes exist between newborns and young adults using CE-Chirp stimuli. These differences are consistent with differences to traditional click and tone burst stimuli and reflect maturational differences as a function of age. These findings continue to emphasize the importance of interpreting ABR results using age-based normative data.
ERIC Educational Resources Information Center
Mossbridge, Julia A.; Scissors, Beth N.; Wright, Beverly A.
2008-01-01
Normal auditory perception relies on accurate judgments about the temporal relationships between sounds. Previously, we used a perceptual-learning paradigm to investigate the neural substrates of two such relative-timing judgments made at sound onset: detecting stimulus asynchrony and discriminating stimulus order. Here, we conducted parallel…
Stropahl, Maren; Schellhardt, Sebastian; Debener, Stefan
2017-06-01
The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.
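A simplified reading of the probabilistic model the OLAVS characterization draws on (cf. Magnotti & Beauchamp, 2015): stimulus disparity is encoded with Gaussian noise, and fusion is reported when the encoded disparity falls below a perceiver-specific threshold. The exact functional form and parameter names here are an illustrative reconstruction, not the published model verbatim:

```python
from scipy.stats import norm

def p_mcgurk(disparity, threshold, sigma):
    """Probability of reporting the McGurk fusion under a noisy-encoding
    view: the stimulus disparity is encoded with Gaussian noise sigma,
    and fusion occurs when the encoded value falls below the perceiver's
    threshold. Parameters are illustrative assumptions."""
    return norm.cdf((threshold - disparity) / sigma)

# A low-disparity stimulus fuses more often than a high-disparity one
print(p_mcgurk(0.2, 0.5, 0.3), p_mcgurk(0.9, 0.5, 0.3))
```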
Ozdamar, Ozcan; Bohorquez, Jorge; Mihajloski, Todor; Yavuz, Erdem; Lachowska, Magdalena
2011-01-01
Electrophysiological indices of the auditory binaural beat illusion are studied using late latency evoked responses. Binaural beats are generated by continuous monaural FM tones with slightly different ascending and descending frequencies lasting about 25 ms, presented at 1-s intervals. Frequency changes are carefully adjusted to avoid any creation of abrupt waveform changes. Binaural Interaction Component (BIC) analysis is used to separate the neural responses due to binaural involvement. The results show that transient auditory evoked responses can be obtained from the auditory illusion of binaural beats.
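The BIC analysis mentioned here classically subtracts the sum of the two monaurally evoked responses from the binaurally evoked response; any residual indexes binaurally specific processing. A one-line sketch:

```python
def binaural_interaction_component(resp_binaural, resp_left, resp_right):
    """Classic BIC derivation: the binaurally evoked response minus the
    sum of the two monaurally evoked responses (waveform arrays of equal
    length). A nonzero residual reflects binaural-specific activity."""
    return resp_binaural - (resp_left + resp_right)
```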
Jang, Jongmoon; Lee, JangWoo; Woo, Seongyong; Sly, David J.; Campbell, Luke J.; Cho, Jin-Ho; O’Leary, Stephen J.; Park, Min-Hyun; Han, Sungmin; Choi, Ji-Wong; Hun Jang, Jeong; Choi, Hongsoo
2015-01-01
We proposed a piezoelectric artificial basilar membrane (ABM) composed of a microelectromechanical system cantilever array. The ABM mimics the tonotopy of the cochlea: frequency selectivity and mechanoelectric transduction. The fabricated ABM exhibits a clear tonotopy in an audible frequency range (2.92–12.6 kHz). Also, an animal model was used to verify the characteristics of the ABM as a front end for potential cochlear implant applications. For this, a signal processor was used to convert the piezoelectric output from the ABM to an electrical stimulus for auditory neurons. The electrical stimulus for auditory neurons was delivered through an implanted intra-cochlear electrode array. The amplitude of the electrical stimulus was modulated in the range of 0.15 to 3.5 V with incoming sound pressure levels (SPL) of 70.1 to 94.8 dB SPL. The electrical stimulus was used to elicit an electrically evoked auditory brainstem response (EABR) from deafened guinea pigs. EABRs were successfully measured and their magnitude increased upon application of acoustic stimuli from 75 to 95 dB SPL. The frequency selectivity of the ABM was estimated by measuring the magnitude of EABRs while applying sound pressure at the resonance and off-resonance frequencies of the corresponding cantilever of the selected channel. In this study, we demonstrated a novel piezoelectric ABM and verified its characteristics by measuring EABRs. PMID:26227924
Nöstl, Anatole; Marsh, John E; Sörqvist, Patrik
2014-01-01
Participants were requested to respond to a sequence of visual targets while listening to a well-known lullaby. One of the notes in the lullaby was occasionally exchanged for a pattern deviant. Experiment 1 found that deviants capture attention as a function of the pitch difference between the deviant and the replaced/expected tone. However, when the pitch difference between the expected tone and the deviant tone is held constant, a violation of the direction-of-pitch change across tones can also capture attention (Experiment 2). Moreover, in more complex auditory environments, wherein it is difficult to build a coherent neural model of the sound environment from which expectations are formed, deviations can still capture attention, but it appears to matter less whether this is a violation of a specific stimulus expectation or of the current direction-of-change (Experiment 3). The results support the expectation-violation account of auditory distraction and suggest that there are at least two different kinds of expectation that can be violated: one appears to be bound to a specific stimulus, and the other to a more global cross-stimulus rule, such as the direction-of-change across a sequence of preceding sound events. In complex sound environments, factors such as the base-rate probability of tones may become the driving mechanism of attentional capture, rather than violated expectations.
Centanni, T M; Pantazis, D; Truong, D T; Gruen, J R; Gabrieli, J D E; Hogan, T P
2018-05-26
Individuals with dyslexia exhibit increased brainstem variability in response to sound. It is unknown whether this increased variability extends to neocortical regions associated with audition and reading, whether it extends to visual stimuli, and whether it characterizes all children with dyslexia or only a specific subset. We evaluated the consistency of stimulus-evoked neural responses in children with (N = 20) or without dyslexia (N = 12), as measured by magnetoencephalography (MEG). Approximately half of the children with dyslexia had significantly higher levels of variability in cortical responses to both auditory and visual stimuli in multiple nodes of the reading network. There was a significant, positive relationship between the number of risk alleles at rs6935076 in the dyslexia-susceptibility gene KIAA0319 and the degree of neural variability in primary auditory cortex across all participants. This gene has been linked with neural variability in rodents and in typical readers. These findings indicate that unstable representations of auditory and visual stimuli in auditory and other reading-related neocortical regions are present in a subset of children with dyslexia, and they support the link between KIAA0319 and auditory neural variability across children with or without dyslexia. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
Nir, Yuval; Vyazovskiy, Vladyslav V.; Cirelli, Chiara; Banks, Matthew I.; Tononi, Giulio
2015-01-01
Sleep entails a disconnection from the external environment. By and large, sensory stimuli do not trigger behavioral responses and are not consciously perceived as they usually are in wakefulness. Traditionally, sleep disconnection was ascribed to a thalamic “gate,” which would prevent signal propagation along ascending sensory pathways to primary cortical areas. Here, we compared single-unit and local field potential (LFP) responses in core auditory cortex as freely moving rats spontaneously switched between wakefulness and sleep states. Despite robust differences in baseline neuronal activity, both the selectivity and the magnitude of auditory-evoked responses were comparable across wakefulness, non-rapid eye movement (NREM) sleep, and rapid eye movement (REM) sleep (pairwise differences <8% between states). The processing of deviant tones was also compared in sleep and wakefulness using an oddball paradigm. Robust stimulus-specific adaptation (SSA) was observed following the onset of repetitive tones, and the strength of SSA effects (13–20%) was comparable across vigilance states. Thus, responses in core auditory cortex are preserved across sleep states, suggesting that evoked activity in primary sensory cortices is driven by external physical stimuli with little modulation by vigilance state. We suggest that sensory disconnection during sleep occurs at a stage later than primary sensory areas. PMID:24323498
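The SSA strengths quoted above (13–20%) are conventionally expressed as a normalized index contrasting responses to the same tone when it is rare versus common. A sketch of the standard formulation (the authors' exact metric may differ in detail):

```python
def ssa_index(rate_deviant, rate_standard):
    """Common stimulus-specific adaptation index:
    SI = (d - s) / (d + s), bounded in [-1, 1]. Positive values mean the
    same tone evokes stronger responses when presented rarely (deviant)
    than when presented frequently (standard).
    """
    return (rate_deviant - rate_standard) / (rate_deviant + rate_standard)

# Example: deviant 20 spikes/s vs. standard 15 spikes/s -> SI ~ 0.14,
# i.e. roughly the 13-20% range reported in the abstract.
print(ssa_index(20.0, 15.0))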
Thresholding of auditory cortical representation by background noise
Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju
2014-01-01
It is generally thought that background noise can mask auditory information. However, how noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected the receptive field properties of individual neurons. We found that background noise, when above a certain critical/effective level, resulted in an elevation of the intensity threshold for tone-evoked responses. This increase in threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as a whole toward higher intensities along the intensity domain. This preserved the preferred characteristic frequency (CF) and the overall shape of the TRF, but reduced the frequency response range and enhanced frequency selectivity at a given stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shift along the intensity domain, which is equivalent to reducing stimulus intensity. PMID:25426029
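The reported effect, a threshold unchanged below a critical noise level and rising linearly above it, amounts to a piecewise-linear relationship. A toy model of that relationship (all parameter values are illustrative, not the paper's fits):

```python
def tone_threshold(noise_level_db, base_threshold_db=20.0,
                   critical_noise_db=40.0, slope=1.0):
    """Piecewise-linear model of noise-dependent threshold elevation:
    below the critical noise level the tone-evoked threshold is the
    baseline; above it, threshold rises linearly with noise level.
    A slope near 1 would shift the whole tonal receptive field up the
    intensity axis, as described in the abstract.
    """
    excess = max(0.0, noise_level_db - critical_noise_db)
    return base_threshold_db + slope * excess
```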
Kaya, Emine Merve
2017-01-01
Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information—a phenomenon referred to as the ‘cocktail party problem’. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by ‘bottom-up’ sensory-driven factors, as well as ‘top-down’ task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape; with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044012
Evoked potential correlates of selective attention with multi-channel auditory inputs
NASA Technical Reports Server (NTRS)
Schwent, V. L.; Hillyard, S. A.
1975-01-01
Ten subjects were presented with random, rapid sequences of four auditory tones which were separated in pitch and apparent spatial position. The N1 component of the auditory vertex evoked potential (EP) measured relative to a baseline was observed to increase with attention. It was concluded that the N1 enhancement reflects a finely tuned selective attention to one stimulus channel among several concurrent, competing channels. This EP enhancement probably increases with increased information load on the subject.
ERIC Educational Resources Information Center
Shibley, Ralph, Jr.; And Others
Event-related Potentials (ERPs) were recorded to both auditory and visual stimuli from the scalps of nine autistic males and nine normal controls (all Ss between 12 and 22 years of age) to examine the differences in information processing strategies. Ss were tested on three different tasks: an auditory missing stimulus paradigm, a visual color…
The Effectiveness of Bimodal Text Presentation for Poor Readers.
ERIC Educational Resources Information Center
Steele, Emily; And Others
A study explored the effects of bimodal (concurrent auditory and visual stimulus modes) versus unimodal reading on 8 poor readers between the ages of 9 and 12 years. An alternating treatments design was used to compare student performance on 12 passages, 4 in each of 3 presentation modes: bimodal, visual, and auditory. Session measures included…
Spiousas, Ignacio; Etchemendy, Pablo E.; Eguia, Manuel C.; Calcagno, Esteban R.; Abregú, Ezequiel; Vergara, Ramiro O.
2017-01-01
Previous studies on the effect of spectral content on auditory distance perception (ADP) focused on the physically measurable cues occurring either in the near field (low-pass filtering due to head diffraction) or when the sound travels distances >15 m (high-frequency energy losses due to air absorption). Here, we study how the spectrum of a sound arriving from a source located in a reverberant room at intermediate distances (1–6 m) influences the perception of the distance to the source. First, we conducted an ADP experiment using pure tones (the simplest possible spectrum) of frequencies 0.5, 1, 2, and 4 kHz. Then, we performed a second ADP experiment with stimuli consisting of continuous broadband and bandpass-filtered (with center frequencies of 0.5, 1.5, and 4 kHz and bandwidths of 1/12, 1/3, and 1.5 octave) pink-noise clips. Our results showed an effect of the stimulus frequency on the perceived distance both for pure tones and for filtered noise bands: ADP was less accurate for stimuli containing energy only in the low-frequency range. Analysis of the frequency response of the room showed that the low accuracy observed for low-frequency stimuli can be explained by the presence of sparse modal resonances in the low-frequency region of the spectrum, which induced a non-monotonic relationship between binaural intensity and source distance. The results obtained in the second experiment suggest that ADP can also be affected by stimulus bandwidth but in a less straightforward way (i.e., depending on the center frequency, increasing stimulus bandwidth could have different effects). Finally, the analysis of the acoustical cues suggests that listeners judged source distance using mainly changes in the overall intensity of the auditory stimulus with distance rather than the direct-to-reverberant energy ratio, even for low-frequency noise bands (which typically induce a high amount of reverberation). The results obtained in this study show that, depending on the spectrum of the auditory stimulus, reverberation can degrade ADP rather than improve it. PMID:28690556
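The two competing cues weighed in this study, overall intensity and the direct-to-reverberant energy ratio (DRR), can both be computed from a room impulse response. A simplified sketch (the 2.5-ms direct-sound window is a common convention in the DRR literature, not this paper's analysis):

```python
import numpy as np

def distance_cues(rir, fs, direct_window_ms=2.5):
    """Two classical auditory distance cues from a room impulse response.

    rir : room impulse response measured at the listener (1-D array)
    fs  : sampling rate in Hz
    The direct sound is taken as the energy within `direct_window_ms`
    of the first prominent peak; everything later counts as reverberant.
    For a fixed source signal, total RIR energy tracks the received
    overall intensity, which falls as the source moves away.
    """
    onset = int(np.argmax(np.abs(rir)))
    split = onset + int(fs * direct_window_ms / 1000.0)
    direct = np.sum(rir[onset:split] ** 2)
    reverberant = np.sum(rir[split:] ** 2)
    overall_level_db = 10.0 * np.log10(np.sum(rir ** 2))
    drr_db = 10.0 * np.log10(direct / reverberant)
    return overall_level_db, drr_db
```

With increasing source distance in a room, the direct energy drops while late energy stays roughly constant, so DRR falls steeply even where overall level changes little; the abstract's finding is that listeners nevertheless leaned on overall intensity.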
Phantom auditory sensation in rats: an animal model for tinnitus.
Jastreboff, P J; Brennan, J F; Coleman, J K; Sasaki, C T
1988-12-01
In order to measure tinnitus induced by sodium salicylate injections, 84 pigmented rats, distributed among 14 groups in five experiments, were used in a conditioned suppression paradigm. In Experiment 1, all groups were trained with a conditioned stimulus (CS) consisting of the offset of a continuous background noise. One group began salicylate injections before Pavlovian training, a second group started injections after training, and a control group received daily saline injections. Resistance to extinction was profound when injections started before training, but minimal when initiated after training, which suggests that salicylate-induced effects acquired differential conditioned value. In Experiment 2 we mimicked the salicylate treatments by substituting a 7 kHz tone in place of respective injections, resulting in effects equivalent to salicylate-induced behavior. In a third experiment we included a 3 kHz CS, and again replicated the salicylate findings. In Experiment 4 we decreased the motivational level, and the sequential relation between salicylate-induced effects and suppression training was retained. Finally, no salicylate effects emerged when the visual modality was used. These findings support the demonstration of phantom auditory sensations in animals.
Effect of negative emotions evoked by light, noise and taste on trigeminal thermal sensitivity.
Yang, Guangju; Baad-Hansen, Lene; Wang, Kelun; Xie, Qiu-Fei; Svensson, Peter
2014-11-07
Patients with migraine often have impaired somatosensory function and experience headache attacks triggered by exogenous stimuli, such as light, sound or taste. This study aimed to assess the influence of three controlled conditioning stimuli (visual, auditory and gustatory), presented singly and in combination, on affective state and thermal sensitivity in healthy human participants. All participants attended four experimental sessions, with visual, auditory and gustatory conditioning stimuli and the combination of all three, in a randomized sequence. In each session, somatosensory sensitivity was tested in the perioral region using thermal stimuli with and without the conditioning stimuli. Positive and Negative Affect Schedule (PANAS) scores were assessed before and after the tests. Subject-based ratings of the conditioning and test stimuli, together with skin temperature and heart rate as indicators of arousal responses, were collected in real time during the tests. All three conditioning stimuli induced significant increases in negative PANAS scores (paired t-test, P ≤ 0.016). Compared with baseline, the increases were near dose-dependent during visual and auditory conditioning stimulation. No significant effects of any single conditioning stimulus were observed on trigeminal thermal sensitivity (P ≥ 0.051) or arousal parameters (P ≥ 0.057). The effects of combined conditioning stimuli on subjective ratings (P ≤ 0.038) and negative affect (P = 0.011) were stronger than those of single stimuli. All three conditioning stimuli provided a simple way to evoke a negative affective state without physical arousal or influence on trigeminal thermal sensitivity. Multisensory conditioning had stronger effects but also failed to modulate thermal sensitivity, suggesting that the so-called exogenous trigger stimuli in patients with migraine (e.g. bright light, noise, unpleasant taste) may require a predisposed or sensitized nervous system.
Brown, Trecia A; Joanisse, Marc F; Gati, Joseph S; Hughes, Sarah M; Nixon, Pam L; Menon, Ravi S; Lomber, Stephen G
2013-01-01
Much of what is known about the cortical organization for audition in humans draws from studies of auditory cortex in the cat. However, these data build largely on electrophysiological recordings that are highly invasive and provide limited evidence concerning macroscopic patterns of brain activation. Optical imaging, using intrinsic signals or dyes, allows visualization of surface-based activity but is also quite invasive. Functional magnetic resonance imaging (fMRI) overcomes these limitations by providing a large-scale perspective of distributed activity across the brain in a non-invasive manner. The present study used fMRI to characterize stimulus-evoked activity in auditory cortex of the anesthetized (ketamine/isoflurane) cat, focusing specifically on the blood-oxygen-level-dependent (BOLD) signal time course. Functional images were acquired for adult cats in a 7 T MRI scanner. To determine the BOLD signal time course, we presented 1-s broadband noise bursts between widely spaced scan acquisitions at randomized delays (1-12 s in 1-s increments) prior to each scan. Baseline trials in which no stimulus was presented were also acquired. Our results indicate that the BOLD response peaks at about 3.5 s in primary auditory cortex (AI) and at about 4.5 s in non-primary areas (AII, PAF) of cat auditory cortex. The observed peak latency is within the range reported for humans and non-human primates (3-4 s). The time course of hemodynamic activity in cat auditory cortex also occurs on a comparatively shorter scale than in cat visual cortex. The results of this study will provide a foundation for future auditory fMRI studies in the cat to incorporate these hemodynamic response properties into appropriate analyses of cat auditory cortex. Copyright © 2012 Elsevier Inc. All rights reserved.
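The delay-randomization logic described above amounts to sparse sampling of the hemodynamic response: each acquisition samples the response at one known post-stimulus delay, and sorting scans by delay traces out the full time course. A minimal sketch of that reconstruction (hypothetical variable names, not the study's code):

```python
import numpy as np

def bold_time_course(signals, delays_s):
    """Reconstruct a stimulus-evoked BOLD time course from widely spaced
    acquisitions, each following the stimulus by a randomized delay
    (here, 1-12 s in 1-s steps). Averaging scans that share a delay
    traces out the hemodynamic response; its argmax estimates peak latency.

    signals  : (n_scans,) ROI signal per acquisition (baseline-corrected)
    delays_s : (n_scans,) stimulus-to-acquisition delay for each scan
    """
    delays = np.asarray(delays_s)
    signals = np.asarray(signals)
    unique_delays = np.unique(delays)
    mean_response = np.array(
        [signals[delays == d].mean() for d in unique_delays])
    peak_latency_s = unique_delays[np.argmax(mean_response)]
    return unique_delays, mean_response, peak_latency_s
```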
Encoding of a spectrally-complex communication sound in the bullfrog's auditory nerve.
Schwartz, J J; Simmons, A M
1990-02-01
1. A population study of eighth nerve responses in the bullfrog, Rana catesbeiana, was undertaken to analyze how the eighth nerve codes the complex spectral and temporal structure of the species-specific advertisement call over a biologically-realistic range of intensities. Synthetic advertisement calls were generated by Fourier synthesis and presented to individual eighth nerve fibers of anesthetized bullfrogs. Fiber responses were analyzed by calculating rate responses based on post-stimulus-time (PST) histograms and temporal responses based on Fourier transforms of period histograms. 2. At stimulus intensities of 70 and 80 dB SPL, normalized rate responses provide a fairly good representation of the complex spectral structure of the stimulus, particularly in the low- and mid-frequency range. At higher intensities, rate responses saturate, and very little of the spectral structure of the complex stimulus can be seen in the profile of rate responses of the population. 3. Both AP and BP fibers phase-lock strongly to the fundamental (100 Hz) of the complex stimulus. These effects are relatively resistant to changes in stimulus intensity. Only a small number of fibers synchronize to the low-frequency spectral energy in the stimulus. The underlying spectral complexity of the stimulus is not accurately reflected in the timing of fiber firing, presumably because firing is 'captured' by the fundamental frequency. 4. Plots of average localized synchronized rate (ALSR), which combine both spectral and temporal information, show a similar, low-pass shape at all stimulus intensities. ALSR plots do not generally provide an accurate representation of the structure of the advertisement call. 5. The data suggest that anuran peripheral auditory fibers may be particularly sensitive to the amplitude envelope of sounds.
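The ALSR measure referenced in point 4 combines rate and timing information: each stimulus component is represented only by the synchronized rates of fibers tuned near it. A sketch following the standard definition (after Young and Sachs); the half-octave window here is illustrative and may differ from this paper's choice:

```python
import numpy as np

def alsr(sync_rates, fiber_cfs, component_freqs, half_bw_oct=0.25):
    """Average localized synchronized rate.

    sync_rates      : (n_fibers, n_components) synchronized rate of each
                      fiber at each stimulus component frequency (from the
                      Fourier transform of its period histogram)
    fiber_cfs       : (n_fibers,) characteristic frequencies, Hz
    component_freqs : (n_components,) stimulus component frequencies, Hz
    For each component, average only over fibers whose CF lies within
    +/- half_bw_oct octaves of that component.
    """
    sync_rates = np.asarray(sync_rates)
    fiber_cfs = np.asarray(fiber_cfs)
    out = np.full(len(component_freqs), np.nan)
    for j, f in enumerate(component_freqs):
        octave_dist = np.abs(np.log2(fiber_cfs / f))
        local = octave_dist <= half_bw_oct
        if local.any():
            out[j] = sync_rates[local, j].mean()
    return out
```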
Phasic heart rate responses and cardiac cycle time in auditory choice reaction time.
van der Molen, M W; Somsen, R J; Orlebeke, J F
1983-01-01
This study investigated the cardiovascular-behavioral interaction under short and long stimulus interval conditions. In addition, the cardiovascular-behavioral interaction was studied as affected by cardiac cycle duration. Fourteen subjects performed a choice reaction time (RT) task employing a mixed speed-accuracy tradeoff design in which reactions were paced to coincide with a signal that occurred randomly at either 200 or 500 msec after the reaction stimulus. The preparatory interval between a warning stimulus and a lead-reaction stimulus complex was also varied (2 vs. 4.5 sec). Anticipatory deceleration occurred within the 4.5 sec interval but not in the 2 sec interval. The depth of anticipatory deceleration did not discriminate between fast and slow reactions, but an earlier shift from deceleration to acceleration was associated with fast reactions. The effect of stimulus timing relative to the R-wave of the electrocardiogram was also analysed. Meaningful stimuli tended to produce cardiac slowing, as previously described in the literature. Early occurring stimuli prolonged the cycle of their occurrence more than late occurring stimuli; the latter prolonged the subsequent cycle. Cardiac cycle time effects were absent for unattended stimuli. The anticipatory deceleration results suggested that the depth of deceleration was regulated by time uncertainty and the speed-accuracy criterion.
Transmission matrix analysis of the chinchilla middle ear
Songer, Jocelyn E.; Rosowski, John J.
2008-01-01
Despite the common use of the chinchilla as an animal model in auditory research, a complete characterization of the chinchilla middle ear using transmission matrix analysis has not been performed. In this paper we describe measurements of middle-ear input admittance and stapes velocity in ears with the middle-ear cavity opened under three conditions: intact tympano-ossicular system and cochlea, after the cochlea has been drained, and after the stapes has been fixed. These measurements, made with stimulus frequencies of 100–8000 Hz, are used to define the transmission matrix parameters of the middle ear and to calculate the cochlear input impedance as well as the middle-ear output impedance. This transmission characterization of the chinchilla middle ear will be useful for modeling auditory sensitivity in the normal and pathological chinchilla ear. PMID:17672642
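The three preparations described above can, in principle, pin down the four transmission-matrix (ABCD) parameters at each frequency. The sketch below assumes the drained cochlea approximates a zero acoustic load, the fixed stapes an infinite load, and that reciprocity (AD - BC = 1) holds; the published estimation procedure may differ in detail:

```python
import numpy as np

def middle_ear_two_port(Y_fixed, Y_drained, T_drained, Y_intact):
    """Solve the middle ear's ABCD matrix per frequency, then the cochlear
    input impedance, from three preparations:
      fixed stapes   (load -> infinity): Y_fixed   = C / A
      drained cochlea (load ~ 0):        Y_drained = D / B,
                                         T_drained = U_stapes / P_ec = 1 / B
      intact ear:                        Y_intact  = (C*Zc + D) / (A*Zc + B)
    with reciprocity A*D - B*C = 1 closing the system. Inputs are complex
    arrays over frequency; the load idealizations are simplifications.
    """
    B = 1.0 / T_drained
    D = Y_drained * B
    A = 1.0 / (D - B * Y_fixed)   # from A*D - B*C = 1 with C = Y_fixed * A
    C = Y_fixed * A
    Zc = (B * Y_intact - D) / (C - A * Y_intact)  # cochlear input impedance
    return A, B, C, D, Zc
```

Given the matrix, the middle-ear output impedance seen by the cochlea follows from the same parameters, which is how the abstract's two derived quantities become available from the three measurement states.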
Marchant, Jennifer L; Ruff, Christian C; Driver, Jon
2012-01-01
The brain seeks to combine related inputs from different senses (e.g., hearing and vision), via multisensory integration. Temporal information can indicate whether stimuli in different senses are related or not. A recent human fMRI study (Noesselt et al. [2007]: J Neurosci 27:11431–11441) used auditory and visual trains of beeps and flashes with erratic timing, manipulating whether auditory and visual trains were synchronous or unrelated in temporal pattern. A region of superior temporal sulcus (STS) showed higher BOLD signal for the synchronous condition. But this could not be related to performance, and it remained unclear if the erratic, unpredictable nature of the stimulus trains was important. Here we compared synchronous audiovisual trains to asynchronous trains, while using a behavioral task requiring detection of higher-intensity target events in either modality. We further varied whether the stimulus trains had predictable temporal pattern or not. Synchrony (versus lag) between auditory and visual trains enhanced behavioral sensitivity (d') to intensity targets in either modality, regardless of predictable versus unpredictable patterning. The analogous contrast in fMRI revealed BOLD increases in several brain areas, including the left STS region reported by Noesselt et al. [2007: J Neurosci 27:11431–11441]. The synchrony effect on BOLD here correlated with the subject-by-subject impact on performance. Predictability of temporal pattern did not affect target detection performance or STS activity, but did lead to an interaction with audiovisual synchrony for BOLD in inferior parietal cortex. PMID:21953980
Tavakoli, Paniz; Campbell, Kenneth
2016-10-01
A rarely occurring and highly relevant auditory stimulus occurring outside the current focus of attention can cause a switch of attention. Such attention capture is often studied in oddball paradigms consisting of a frequently occurring "standard" stimulus that is changed at odd times to form a "deviant", which may result in the capturing of attention. An auditory ERP component, the P3a, is often associated with this process. Collecting a sufficient amount of data is, however, very time-consuming. A multi-feature "optimal" paradigm has been proposed, but it is not known whether it is appropriate for the study of attention capture. An optimal paradigm was run in which 6 different rare deviants (p=.08) were separated by a standard stimulus (p=.50); the results were compared with those from 4 separate oddball paradigms. A large P3a was elicited by some of the deviants in the optimal paradigm but not by others. However, very similar results were observed when the separate oddball paradigms were run. The present study indicates that the optimal paradigm provides a very time-saving method for studying attention capture and the P3a. Copyright © 2016 Elsevier B.V. All rights reserved.
New perspectives on the auditory cortex: learning and memory.
Weinberger, Norman M
2015-01-01
Primary ("early") sensory cortices have been viewed as stimulus analyzers devoid of function in learning, memory, and cognition. However, studies combining sensory neurophysiology and learning protocols have revealed that associative learning systematically modifies the encoding of stimulus dimensions in the primary auditory cortex (A1) to accentuate behaviorally important sounds. This "representational plasticity" (RP) is manifest at different levels. The sensitivity and selectivity of signal tones increase near threshold, tuning above threshold shifts toward the frequency of acoustic signals, and their area of representation can increase within the tonotopic map of A1. The magnitude of area gain encodes the level of behavioral stimulus importance and serves as a substrate of memory strength. RP has the same characteristics as behavioral memory: it is associative, specific, develops rapidly, consolidates, and can last indefinitely. Pairing tone with stimulation of the cholinergic nucleus basalis induces RP and implants specific behavioral memory, while directly increasing the representational area of a tone in A1 produces matching behavioral memory. Thus, RP satisfies key criteria for serving as a substrate of auditory memory. The findings suggest a basis for posttraumatic stress disorder in abnormally augmented cortical representations and emphasize the need for a new model of the cerebral cortex. © 2015 Elsevier B.V. All rights reserved.
Heycke, Tobias; Aust, Frederik; Stahl, Christoph
2017-09-01
In the field of evaluative conditioning (EC), two opposing theories-propositional single-process theory versus dual-process theory-are currently being discussed in the literature. The present set of experiments test a crucial prediction to adjudicate between these two theories: Dual-process theory postulates that evaluative conditioning can occur without awareness of the contingency between conditioned stimulus (CS) and unconditioned stimulus (US); in contrast, single-process propositional theory postulates that EC requires CS-US contingency awareness. In a set of three studies, we experimentally manipulate contingency awareness by presenting the CSs very briefly, thereby rendering it unlikely to be processed consciously. We address potential issues with previous studies on EC with subliminal or near-threshold CSs that limited their interpretation. Across two experiments, we consistently found an EC effect for CSs presented for 1000 ms and consistently failed to find an EC effect for briefly presented CSs. In a third pre-registered experiment, we again found evidence for an EC effect with CSs presented for 1000 ms, and we found some indication for an EC effect for CSs presented for 20 ms.
Human Factors Engineering Bibliographic Series. Volume 2: 1960-1964 Literature
1966-10-01
[Excerpt from the bibliography's subject index, garbled in extraction. Legible entries cover: audition (flutter discrimination, melodic and temporal; binaural vs. monaural), equipment and methods (anechoic chambers, audiometric devices, communication), stimulus attributes (brightness, duration, timbre, vocality), stimulus mixtures (harmonics, beats, combination tones, modulations), thresholds, and cross-references such as "Beats--see Audition (stimulus mixtures)" and "Bells--see Auditory (displays, nonverbal)".]
Gulick, Danielle; Gould, Thomas J.
2009-01-01
Background: Ethanol is a frequently abused, addictive drug that impairs cognitive function. Ethanol may disrupt cognitive processes by altering attention, short-term memory, and/or long-term memory. Interestingly, some research suggests that ethanol may enhance cognitive processes at lower doses. The current research examined the dose-dependent effects of ethanol on contextual and cued fear conditioning. In addition, the present studies assessed the importance of stimulus salience in the effects of ethanol and directly compared the effects of ethanol on short-term and long-term memory. Methods: This study employed both foreground and background fear conditioning, which differ in the salience of contextual stimuli, and tested conditioning at 4 hours, 24 hours, and 1 week in order to assess the effects of ethanol on short-term and long-term memory. Foreground conditioning consisted of 2 presentations of a foot-shock unconditioned stimulus (US; 2 seconds, 0.57 mA). Background conditioning consisted of 2 auditory conditioned stimulus (30 seconds, 85 dB white noise)-foot shock (US; 2 seconds, 0.57 mA) pairings. Results: For both foreground and background conditioning, ethanol enhanced short-term and long-term memory for contextual and cued conditioning at a low dose (0.25 g/kg) and impaired short-term and long-term memory for contextual and cued conditioning at a high dose (1.0 g/kg). Conclusions: These results suggest that ethanol has long-lasting, biphasic effects on short-term and long-term memory for contextual and cued conditioning. Furthermore, the effects of ethanol on contextual fear conditioning are independent of the salience of the context. PMID:17760787
Kraus, Kari Suzanne; Canlon, Barbara
2012-06-01
Acoustic experience such as sound, noise, or absence of sound induces structural or functional changes in the central auditory system, but can also affect limbic regions such as the amygdala and hippocampus. The amygdala is particularly sensitive to sound with valence or meaning, such as vocalizations, crying or music. The amygdala plays a central role in auditory fear conditioning and regulation of the acoustic startle response, and can modulate auditory cortex plasticity. A stressful acoustic stimulus, such as noise, causes amygdala-mediated release of stress hormones via the HPA axis, which may have negative effects on health, as well as on the central nervous system. Conversely, short-term exposure to stress hormones elicits positive effects such as hearing protection. The hippocampus can affect auditory processing by adding a temporal dimension, as well as being able to mediate novelty detection via theta-wave phase-locking. Noise exposure affects hippocampal neurogenesis and LTP in a manner that alters structural plasticity, learning and memory. Tinnitus, typically induced by hearing malfunctions, is associated with emotional stress, depression and anatomical changes of the hippocampus. In turn, the limbic system may play a role in the generation as well as the suppression of tinnitus, indicating that the limbic system may be essential for tinnitus treatment. A further understanding of auditory-limbic interactions will contribute to future treatment strategies for tinnitus and noise trauma. Copyright © 2012 Elsevier B.V. All rights reserved.
A simple automated system for appetitive conditioning of zebrafish in their home tanks.
Doyle, Jillian M; Merovitch, Neil; Wyeth, Russell C; Stoyek, Matthew R; Schmidt, Michael; Wilfart, Florentin; Fine, Alan; Croll, Roger P
2017-01-15
We describe here an automated apparatus that permits rapid conditioning paradigms for zebrafish. Arduino microcontrollers were used to control the delivery of auditory or visual stimuli to groups of adult or juvenile zebrafish in their home tanks in a conventional zebrafish facility. An automatic feeder dispensed precise amounts of food immediately after the conditioned stimuli, or at variable delays for controls. Responses were recorded using inexpensive cameras, with the video sequences analysed with ImageJ or Matlab. Fish showed significant conditioned responses in as few as 5 trials, learning that the conditioned stimulus was a predictor of food presentation at the water surface and at the end of the tank where the food was dispensed. Memories of these conditioned associations persisted for at least 2 days after training, when fish were tested either as groups or as individuals. Control fish, for which the auditory or visual stimuli were specifically unpaired with food, showed no comparable responses. This simple, low-cost, automated system permits scalable conditioning of zebrafish with minimal human intervention, greatly reducing both variability and labour-intensiveness. It will be useful for studies of the neural basis of learning and memory, and for high-throughput screening of compounds modifying those processes. Copyright © 2016 Elsevier B.V. All rights reserved.
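The control logic for such a paired/unpaired design is straightforward. A minimal scheduler sketch, with hypothetical hardware hooks (`trigger_cs`, `dispense_food`) standing in for the Arduino-driven stimulus and feeder calls described in the abstract:

```python
import random
import time

def run_session(trigger_cs, dispense_food, n_trials=5, paired=True,
                iti_range_s=(60.0, 120.0), unpaired_delay_s=(20.0, 40.0)):
    """Minimal home-tank conditioning scheduler.

    Paired group: the conditioned stimulus (tone or light) is followed
    immediately by food, so the CS predicts feeding. Unpaired controls:
    food arrives only after a variable delay, stripping the CS of
    predictive value. `trigger_cs` and `dispense_food` are placeholders
    for the actual hardware-control calls.
    """
    for _ in range(n_trials):
        trigger_cs()
        if not paired:
            time.sleep(random.uniform(*unpaired_delay_s))
        dispense_food()
        time.sleep(random.uniform(*iti_range_s))  # inter-trial interval
```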
Validation of the Emotiv EPOC® EEG gaming system for measuring research quality auditory ERPs
Badcock, Nicholas A; Mousikou, Petroula; Mahajan, Yatin; de Lissa, Peter; Thie, Johnson; McArthur, Genevieve
2013-01-01
Background. Auditory event-related potentials (ERPs) have proved useful in investigating the role of auditory processing in cognitive disorders such as developmental dyslexia, specific language impairment (SLI), attention deficit hyperactivity disorder (ADHD), schizophrenia, and autism. However, laboratory recordings of auditory ERPs can be lengthy, uncomfortable, or threatening for some participants – particularly children. Recently, a commercial gaming electroencephalography (EEG) system has been developed that is portable, inexpensive, and easy to set up. In this study we tested if auditory ERPs measured using a gaming EEG system (Emotiv EPOC®, www.emotiv.com) were equivalent to those measured by a widely-used, laboratory-based, research EEG system (Neuroscan). Methods. We simultaneously recorded EEGs with the research and gaming EEG systems, whilst presenting 21 adults with 566 standard (1000 Hz) and 100 deviant (1200 Hz) tones under passive (non-attended) and active (attended) conditions. The onset of each tone was marked in the EEGs using a parallel port pulse (Neuroscan) or a stimulus-generated electrical pulse injected into the O1 and O2 channels (Emotiv EPOC®). These markers were used to calculate research and gaming EEG system late auditory ERPs (P1, N1, P2, N2, and P3 peaks) and the mismatch negativity (MMN) in active and passive listening conditions for each participant. Results. Analyses were restricted to frontal sites as these are most commonly reported in auditory ERP research. Intra-class correlations (ICCs) indicated that the morphology of the research and gaming EEG system late auditory ERP waveforms were similar across all participants, but that the research and gaming EEG system MMN waveforms were only similar for participants with non-noisy MMN waveforms (N = 11 out of 21). Peak amplitude and latency measures revealed no significant differences between the size or the timing of the auditory P1, N1, P2, N2, P3, and MMN peaks. Conclusions. Our findings suggest that the gaming EEG system may prove a valid alternative to laboratory ERP systems for recording reliable late auditory ERPs (P1, N1, P2, N2, and the P3) over the frontal cortices. In the future, the gaming EEG system may also prove useful for measuring less reliable ERPs, such as the MMN, if the reliability of such ERPs can be boosted to the same level as late auditory ERPs. PMID:23638374
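The peak amplitude and latency measures compared between the two systems are typically taken as windowed extrema of the averaged waveform. A sketch using typical adult latency windows (the study's exact windows are not given in the abstract, so the values below are illustrative):

```python
import numpy as np

# Illustrative adult latency windows (ms) for late auditory ERP peaks.
WINDOWS = {"P1": (40, 80), "N1": (80, 150), "P2": (150, 250),
           "N2": (180, 300), "P3": (250, 500)}
POLARITY = {"P1": 1, "N1": -1, "P2": 1, "N2": -1, "P3": 1}

def peak_measures(erp_uv, times_ms, component):
    """Return (amplitude_uv, latency_ms) of a named peak: the polarity-
    appropriate extremum of the averaged waveform within its window."""
    lo, hi = WINDOWS[component]
    mask = (times_ms >= lo) & (times_ms <= hi)
    flipped = erp_uv[mask] * POLARITY[component]   # make target peak a maximum
    seg_times = times_ms[mask]
    i = int(np.argmax(flipped))
    return flipped[i] * POLARITY[component], seg_times[i]
```

Running this on each subject's waveform from both systems and correlating the resulting amplitude/latency pairs is one simple way to reproduce the kind of between-system comparison the abstract reports.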
Zahorik, Pavel; Carney, Laurel H.; Bishop, Brian B.; Kuwada, Shigeyuki
2015-01-01
Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in the anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural, 1-octave noise bands centered at 4 kHz presented at source distances of 35–200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses AM as a distance cue remains to be determined. PMID:25834060
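The monaural cue at issue is the loss of modulation depth in reverberation: reflections fill in the envelope dips, so AM depth shrinks with source distance. A sketch of estimating AM depth from the Hilbert envelope (the smoothing length is illustrative and assumes a sample rate high enough that 64 samples span only fine structure):

```python
import numpy as np
from scipy.signal import hilbert

def modulation_depth(x, smooth_samples=64):
    """Estimate AM depth of a sound as m = (max - min) / (max + min)
    over its smoothed Hilbert envelope. In a room, reverberant energy
    raises the envelope minima, so m decreases with source distance,
    the putative monaural distance cue tested in this study.
    """
    env = np.abs(hilbert(x))
    # light smoothing to suppress fine structure leaking into the envelope
    kernel = np.ones(smooth_samples) / smooth_samples
    env = np.convolve(env, kernel, mode="same")
    return (env.max() - env.min()) / (env.max() + env.min())
```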
ERIC Educational Resources Information Center
Ramkissoon, Ishara; Beverly, Brenda L.
2014-01-01
Purpose: Effects of clicks and tonebursts on early and late auditory middle latency response (AMLR) components were evaluated in young and older cigarette smokers and nonsmokers. Method: Participants (n = 49) were categorized by smoking and age into 4 groups: (a) older smokers, (b) older nonsmokers, (c) young smokers, and (d) young nonsmokers.…
ERIC Educational Resources Information Center
Marcus, Ann; Sinnott, Brigit; Bradley, Stephen; Grey, Ian
2010-01-01
This study aimed to examine the effectiveness of a simplified habit reversal procedure (SHR) using differential reinforcement of incompatible behaviour (DRI) and a stimulus prompt (GaitSpot Auditory Squeakers) to reduce the frequency of idiopathic toe-walking (ITW) and increase the frequency of correct heel-to-toe-walking in three children with…
ERIC Educational Resources Information Center
Macdonald, Margaret; Campbell, Kenneth
2011-01-01
An infrequent physical increase in the intensity of an auditory stimulus relative to an already loud frequently occurring "standard" is processed differently than an equally perceptible physical decrease in intensity. This may be because a physical increment results in increased activation in two different systems, a transient and a change…
ERIC Educational Resources Information Center
Murray, Hugh
Proposed is a study to evaluate the auditory systems of learning disabled (LD) students with a new audiological, diagnostic, stimulus apparatus which is capable of objectively measuring the interaction of the binaural aspects of hearing. The author points out problems with LD definitions that exclude neurological disorders. The detection of…
Rapid recalibration of speech perception after experiencing the McGurk illusion.
Lüttke, Claudia S; Pérez-Bellido, Alexis; de Lange, Floris P
2018-03-01
The human brain can quickly adapt to changes in the environment. One example is phonetic recalibration: a speech sound is interpreted differently depending on the visual speech and this interpretation persists in the absence of visual information. Here, we examined the mechanisms of phonetic recalibration. Participants categorized the auditory syllables /aba/ and /ada/, which were sometimes preceded by the so-called McGurk stimuli (in which an /aba/ sound, due to visual /aga/ input, is often perceived as 'ada'). We found that only one trial of exposure to the McGurk illusion was sufficient to induce a recalibration effect, i.e. an auditory /aba/ stimulus was subsequently more often perceived as 'ada'. Furthermore, phonetic recalibration took place only when auditory and visual inputs were integrated to 'ada' (McGurk illusion). Moreover, this recalibration depended on the sensory similarity between the preceding and current auditory stimulus. Finally, signal detection theoretical analysis showed that McGurk-induced phonetic recalibration resulted in both a criterion shift towards /ada/ and a reduced sensitivity to distinguish between /aba/ and /ada/ sounds. The current study shows that phonetic recalibration is dependent on the perceptual integration of audiovisual information and leads to a perceptual shift in phoneme categorization.
Visually induced plasticity of auditory spatial perception in macaques.
Woods, Timothy M; Recanzone, Gregg H
2004-09-07
When experiencing spatially disparate visual and auditory stimuli, a common percept is that the sound originates from the location of the visual stimulus, an illusion known as the ventriloquism effect. This illusion can persist for tens of minutes, a phenomenon termed the ventriloquism aftereffect. The underlying neuronal mechanisms of this rapidly induced plasticity remain unclear; indeed, it remains untested whether similar multimodal interactions occur in other species. We therefore tested whether macaque monkeys experience the ventriloquism aftereffect similar to the way humans do. The ability of two monkeys to determine which side of the midline a sound was presented from was tested before and after a period of 20-60 min in which the monkeys experienced either spatially identical or spatially disparate auditory and visual stimuli. In agreement with human studies, the monkeys did experience a shift in their auditory spatial perception in the direction of the spatially disparate visual stimulus, and the aftereffect did not transfer across sounds that differed in frequency by two octaves. These results show that macaque monkeys experience the ventriloquism aftereffect similar to the way humans do in all tested respects, indicating that these multimodal interactions are a basic phenomenon of the central nervous system.
The ability for cocaine and cocaine-associated cues to compete for attention
Pitchers, Kyle K.; Wood, Taylor R.; Skrzynski, Cari J.; Robinson, Terry E.; Sarter, Martin
2017-01-01
In humans, reward cues, including drug cues in addicts, are especially effective in biasing attention towards them, so much so that they can disrupt ongoing task performance. It is not known, however, whether this happens in rats. To address this question, we developed a behavioral paradigm to assess the capacity of an auditory drug (cocaine) cue to evoke cocaine-seeking behavior, thus distracting thirsty rats from performing a well-learned sustained attention task (SAT) to obtain a water reward. First, it was determined that an auditory cocaine cue (tone-CS) reinstated drug-seeking equally in sign-trackers (STs) and goal-trackers (GTs), which otherwise vary in the propensity to attribute incentive salience to a localizable drug cue. Next, we tested the ability of an auditory cocaine cue to disrupt performance on the SAT in STs and GTs. Rats were trained to self-administer cocaine intravenously using an Intermittent Access self-administration procedure known to produce a progressive increase in motivation for cocaine, escalation of intake, and strong discriminative stimulus control over drug-seeking behavior. When presented alone, the auditory discriminative stimulus elicited cocaine-seeking behavior while rats were performing the SAT, but it was not sufficiently disruptive to impair SAT performance. In contrast, if cocaine was available in the presence of the cue, or when administered non-contingently, SAT performance was severely disrupted. We suggest that performance on a relatively automatic, stimulus-driven task, such as the basic version of the SAT used here, may be difficult to disrupt with a drug cue alone. A task that requires more top-down cognitive control may be needed. PMID:27890441
Avey, Marc T; Phillmore, Leslie S; MacDougall-Shackleton, Scott A
2005-12-07
Sensory-driven immediate early gene (IEG) expression has been a key tool for exploring auditory perceptual areas in the avian brain. Most work on IEG expression in songbirds such as zebra finches has focused on playback of acoustic stimuli and its effect on auditory processing areas such as the caudal medial mesopallium (CMM) and caudal medial nidopallium (NCM). However, in a natural setting, the courtship displays of songbirds (including zebra finches) include visual as well as acoustic components. To determine whether the visual stimulus of a courting male modifies song-induced expression of the IEG ZENK in the auditory forebrain, we exposed male and female zebra finches to acoustic (song) and visual (dancing) components of courtship. Birds were played digital movies with either combined audio and video, audio only, video only, or neither audio nor video (control). We found significantly increased levels of ZENK response in the auditory region CMM in the two treatment groups exposed to acoustic stimuli compared to the control group. The video-only group had an intermediate response, suggesting a potential effect of visual input on activity in these auditory brain regions. Finally, we unexpectedly found a lateralization of ZENK response that was independent of sex, brain region, or treatment condition, such that ZENK immunoreactivity was consistently higher in the left hemisphere than in the right, and the majority of individual birds were left-hemisphere dominant.
Electrophysiological measurement of interest during walking in a simulated environment.
Takeda, Yuji; Okuma, Takashi; Kimura, Motohiro; Kurata, Takeshi; Takenaka, Takeshi; Iwaki, Sunao
2014-09-01
A reliable neuroscientific technique for objectively estimating the degree of interest in a real environment is currently required in the research fields of neuroergonomics and neuroeconomics. Toward the development of such a technique, the present study explored electrophysiological measures that reflect an observer's interest in a nearly-real visual environment. Participants were asked to walk through a simulated shopping mall and the attractiveness of the shopping mall was manipulated by opening and closing the shutters of stores. During the walking task, participants were exposed to task-irrelevant auditory probes (two-stimulus oddball sequence). The results showed a smaller P2/early P3a component of task-irrelevant auditory event-related potentials and a larger lambda response of eye-fixation-related potentials in an interesting environment (i.e., open-shutter condition) than in a boring environment (i.e., closed-shutter condition); these findings can be reasonably explained by supposing that participants allocated more attentional resources to visual information in an interesting environment than in a boring environment, and thus residual attentional resources that could be allocated to task-irrelevant auditory probes were reduced. The P2/early P3a component and the lambda response may be useful measures of interest in a real visual environment. Copyright © 2014 Elsevier B.V. All rights reserved.
Age-Related Changes in Binaural Interaction at Brainstem Level.
Van Yper, Lindsey N; Vermeire, Katrien; De Vel, Eddy F J; Beynon, Andy J; Dhooge, Ingeborg J M
2016-01-01
Age-related hearing loss hampers the ability to understand speech in adverse listening conditions. This is attributed to a complex interaction of changes in the peripheral and central auditory system. One aspect that may deteriorate across the lifespan is binaural interaction. The present study investigates binaural interaction at the level of the auditory brainstem. It is hypothesized that brainstem binaural interaction deteriorates with advancing age. Forty-two subjects of various ages participated in the study. Auditory brainstem responses (ABRs) were recorded using clicks and 500 Hz tone-bursts. ABRs were elicited by monaural right, monaural left, and binaural stimulation. Binaural interaction was investigated in two ways. First, grand averages of the binaural interaction component were computed for each age group. Second, wave V characteristics of the binaural ABR were compared with those of the summed left and right ABRs. Binaural interaction in the click ABR was demonstrated by shorter latencies and smaller amplitudes in the binaural compared with the summed monaural responses. For the 500 Hz tone-burst ABR, no latency differences were found. However, amplitudes were significantly smaller in the binaural than in the summed monaural condition. An age effect was found for the 500 Hz tone-burst ABR, but not for the click ABR. Brainstem binaural interaction seems to decline with age. Interestingly, these changes seem to be stimulus-dependent.
Individual Differences in the Frequency-Following Response: Relation to Pitch Perception
Coffey, Emily B. J.; Colagrosso, Emilia M. G.; Lehmann, Alexandre; Schönwiesner, Marc; Zatorre, Robert J.
2016-01-01
The scalp-recorded frequency-following response (FFR) is a measure of the auditory nervous system’s representation of periodic sound, and may serve as a marker of training-related enhancements, behavioural deficits, and clinical conditions. However, FFRs of healthy normal subjects show considerable variability that remains unexplained. We investigated whether the FFR representation of the frequency content of a complex tone is related to the perception of the pitch of the fundamental frequency. The strength of the fundamental frequency in the FFR of 39 people with normal hearing was assessed when they listened to complex tones that either included or lacked energy at the fundamental frequency. We found that the strength of the fundamental representation of the missing fundamental tone complex correlated significantly with people's general tendency to perceive the pitch of the tone as either matching the frequency of the spectral components that were present, or that of the missing fundamental. Although at a group level the fundamental representation in the FFR did not appear to be affected by the presence or absence of energy at the same frequency in the stimulus, the two conditions were statistically distinguishable for some subjects individually, indicating that the neural representation is not linearly dependent on the stimulus content. In a second experiment using a within-subjects paradigm, we showed that subjects can learn to reversibly select between either fundamental or spectral perception, and that this is accompanied both by changes to the fundamental representation in the FFR and to cortical-based gamma activity. These results suggest that both fundamental and spectral representations coexist, and are available for later auditory processing stages, the requirements of which may also influence their relative strength and thus modulate FFR variability. The data also highlight voluntary mode perception as a new paradigm with which to study top-down vs bottom-up mechanisms that support the emerging view of the FFR as the outcome of integrated processing in the entire auditory system. PMID:27015271
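The "strength of the fundamental representation" in an FFR is commonly quantified as spectral magnitude at F0 in the averaged response. A sketch under assumed parameters (100-Hz fundamental, 5-Hz integration band; the study's exact values may differ):

```python
import numpy as np

def f0_strength(ffr, fs, f0=100.0, half_bw=5.0):
    """Strength of the fundamental in an averaged FFR epoch: magnitude
    spectrum (Hann-windowed FFT) integrated over f0 +/- half_bw Hz.
    Comparing this value across tone complexes with and without energy
    at f0 probes whether the neural F0 representation tracks the
    stimulus spectrum or reflects a reconstructed missing fundamental.
    """
    ffr = np.asarray(ffr, dtype=float)
    spectrum = np.abs(np.fft.rfft(ffr * np.hanning(len(ffr))))
    freqs = np.fft.rfftfreq(len(ffr), d=1.0 / fs)
    band = (freqs >= f0 - half_bw) & (freqs <= f0 + half_bw)
    return spectrum[band].sum()
```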
Auditory-visual stimulus pairing enhances perceptual learning in a songbird.
Hultsch; Schleuss; Todt
1999-07-01
In many oscine birds, song learning is affected by social variables, for example the behaviour of a tutor. This implies that both auditory and visual perceptual systems should be involved in the acquisition process. To examine whether and how particular visual stimuli can affect song acquisition, we tested the impact of a tutoring design in which the presentation of auditory stimuli (i.e. species-specific master songs) was paired with a well-defined nonauditory stimulus (i.e. stroboscope light flashes: Strobe regime). The subjects were male hand-reared nightingales, Luscinia megarhynchos. For controls, males were exposed to tutoring without a light stimulus (Control regime). The males' singing recorded 9 months later showed that the Strobe regime had enhanced the acquisition of song patterns. During this treatment birds had acquired more songs than during the Control regime; the observed increase in repertoire size was from 20 to 30% in most cases. Furthermore, the copy quality of imitations acquired during the Strobe regime was better than that of imitations developed from the Control regime, and this was due to a significant increase in the number of 'perfect' song copies. We conclude that these effects were mediated by an intrinsic component (e.g. attention or arousal) which specifically responded to the Strobe regime. Our findings also show that mechanisms of song learning are well prepared to process information from cross-modal perception. Thus, more detailed enquiries into stimulus complexes that are usually referred to as social variables are promising. Copyright 1999 The Association for the Study of Animal Behaviour.
Cross-modal detection using various temporal and spatial configurations.
Schirillo, James A
2011-01-01
To better understand temporal and spatial cross-modal interactions, two signal detection experiments were conducted in which an auditory target was sometimes accompanied by an irrelevant flash of light. In the first, a psychometric function for detecting a unisensory auditory target in varying signal-to-noise ratios (SNRs) was derived. Then auditory target detection was measured while an irrelevant light was presented with light/sound stimulus onset asynchronies (SOAs) between 0 and ±700 ms. When the light preceded the sound by 100 ms or was coincident, target detection (d') improved for low SNR conditions. In contrast, for larger SOAs (350 and 700 ms), the behavioral gain resulted from a change in both d' and response criterion (β). However, when the light followed the sound, performance changed little. In the second experiment, observers detected multimodal target sounds at eccentricities of ±8° and ±24°. Sensitivity benefits occurred at both locations, with a larger change at the more peripheral location. Thus, both temporal and spatial factors affect signal detection measures, effectively parsing sensory and decision-making processes.
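The d' and β indices used here to separate sensitivity from decision criterion follow the standard Gaussian signal-detection formulas. A short sketch with made-up hit and false-alarm rates:

    import numpy as np
    from scipy.stats import norm

    def dprime_and_beta(hit_rate, fa_rate):
        # Standard Gaussian signal-detection indices: d' = z(H) - z(F);
        # beta = exp((z(F)^2 - z(H)^2) / 2). Rates of exactly 0 or 1
        # should be corrected (e.g., log-linear rule) before calling.
        zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
        return zh - zf, float(np.exp((zf ** 2 - zh ** 2) / 2.0))

    print(dprime_and_beta(0.85, 0.20))  # made-up rates -> d' ~ 1.88, beta ~ 0.83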
Stimulus induced bursts in severe postanoxic encephalopathy.
Tjepkema-Cloostermans, Marleen C; Wijers, Elisabeth T; van Putten, Michel J A M
2016-11-01
To report on a distinct effect of auditory and sensory stimuli on the EEG in comatose patients with severe postanoxic encephalopathy. In two comatose patients admitted to the Intensive Care Unit (ICU) with severe postanoxic encephalopathy and burst-suppression EEG, we studied the effect of external stimuli (sound and touch) on the occurrence of bursts. In patient A bursts could be induced by either auditory or sensory stimuli. In patient B bursts could only be induced by touching different facial regions (forehead, nose and chin). When stimuli were presented with relatively long intervals, bursts persistently followed the stimuli, while stimuli with short intervals (<1 s) did not induce bursts. In both patients bursts were not accompanied by myoclonia. Both patients died. Bursts in patients with a severe postanoxic encephalopathy can be induced by external stimuli, resulting in stimulus-dependent burst-suppression. Stimulus-induced bursts should not be interpreted as EEG reactivity indicating a favourable prognosis. Copyright © 2016 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Memory and learning with rapid audiovisual sequences
Keller, Arielle S.; Sekuler, Robert
2015-01-01
We examined short-term memory for sequences of visual stimuli embedded in varying multisensory contexts. In two experiments, subjects judged the structure of the visual sequences while disregarding concurrent, but task-irrelevant auditory sequences. Stimuli were eight-item sequences in which varying luminances and frequencies were presented concurrently and rapidly (at 8 Hz). Subjects judged whether the final four items in a visual sequence identically replicated the first four items. Luminances and frequencies in each sequence were either perceptually correlated (Congruent) or were unrelated to one another (Incongruent). Experiment 1 showed that, despite encouragement to ignore the auditory stream, subjects' categorization of visual sequences was strongly influenced by the accompanying auditory sequences. Moreover, this influence tracked the similarity between a stimulus's separate audio and visual sequences, demonstrating that task-irrelevant auditory sequences underwent a considerable degree of processing. Using a variant of Hebb's repetition design, Experiment 2 compared musically trained subjects and subjects who had little or no musical training on the same task as used in Experiment 1. Test sequences included some that intermittently and randomly recurred, which produced better performance than sequences that were generated anew for each trial. The auditory component of a recurring audiovisual sequence influenced musically trained subjects more than it did other subjects. This result demonstrates that stimulus-selective, task-irrelevant learning of sequences can occur even when such learning is an incidental by-product of the task being performed. PMID:26575193
Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C
2009-01-01
Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
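A minimal sketch of the kind of Bayesian cue combination the model describes, assuming Gaussian likelihoods over word "locations" in a low-dimensional feature space with a flat prior; all names and numbers are illustrative, not the authors' implementation:

    import numpy as np

    def posterior_over_words(words, x_aud, x_vis, sigma_aud, sigma_vis):
        # words: (n_words, n_dims) points in a feature space; x_aud / x_vis:
        # noisy observations; sigma_*: assumed sensory noise SDs. With a
        # flat prior, the posterior is the normalized product of the two
        # Gaussian likelihoods -- the more reliable (lower-sigma) cue
        # dominates, and noise levels shift the balance.
        def log_lik(x, sigma):
            return -np.sum((words - np.asarray(x)) ** 2, axis=1) / (2 * sigma ** 2)
        log_post = log_lik(x_aud, sigma_aud) + log_lik(x_vis, sigma_vis)
        post = np.exp(log_post - log_post.max())
        return post / post.sum()

    words = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])  # three toy words
    p = posterior_over_words(words, x_aud=[0.9, 0.1], x_vis=[0.6, 0.2],
                             sigma_aud=1.0, sigma_vis=0.3)
    print(p)  # raising sigma_aud (more auditory noise) shifts weight to vision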
NASA Astrophysics Data System (ADS)
Pineda, Gustavo; Atehortúa, Angélica; Iregui, Marcela; García-Arteaga, Juan D.; Romero, Eduardo
2017-11-01
External auditory cues stimulate motor-related areas of the brain, activating motor pathways parallel to the basal ganglia circuits and providing a temporal pattern for gait. In effect, patients may re-learn motor skills mediated by compensatory neuroplasticity mechanisms. However, long-term functional gains depend on the nature of the pathology, follow-up is usually limited, and reinforcement by healthcare professionals is crucial. Aiming to cope with these challenges, several research efforts and device implementations provide auditory or visual stimulation to improve the Parkinsonian gait pattern, inside and outside clinical settings. The current work presents a semiautomated strategy for spatio-temporal feature extraction to study the relations between auditory temporal stimulation and the spatio-temporal gait response. A protocol for auditory stimulation was built to evaluate how well the strategy can be integrated into clinical practice. The method was evaluated in a cross-sectional measurement with an exploratory group of people with Parkinson's disease (n = 12, in stages 1, 2 and 3) and control subjects (n = 6). The results showed a strong linear relation between auditory stimulation and cadence response in control subjects (R = 0.98 ± 0.008) and in PD subjects in stage 2 (R = 0.95 ± 0.03) and stage 3 (R = 0.89 ± 0.05). Normalized step length showed a variable response between low and high gait velocity (R ranging from 0.2 to 0.97). The correlation between normalized mean velocity and the stimulus was strong in all PD subjects in stage 2 (R > 0.96), PD subjects in stage 3 (R > 0.84) and controls (R > 0.91) for all experimental conditions. Among participants, the largest variation from baseline was found in PD subjects in stage 3 (53.61 ± 39.2 steps/min, 0.12 ± 0.06 in step length and 0.33 ± 0.16 in mean velocity); in this group these values were higher than their own baseline. These variations are related to the direct effect of metronome frequency on cadence and velocity. The variation of step length involves different regulation strategies and could require other, more specific external cues. In conclusion, the current protocol (and its selected parameters: kind of sound, time for training, step of variation, range of variation) provides a suitable gait-facilitation method, especially for patients with the greatest gait disturbance (stages 2 and 3). The method should be adjusted for initial stages and evaluated within a rehabilitation program.
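The R values reported summarize a linear stimulus-response relation between metronome rate and gait parameters. A short sketch of that computation on hypothetical per-condition data for one subject (numbers are made up for illustration):

    import numpy as np

    # Hypothetical metronome rates (beats/min) and measured cadence
    # (steps/min) across stimulation conditions for one subject.
    stimulus = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
    cadence = np.array([82.0, 93.0, 99.0, 112.0, 118.0])

    r = np.corrcoef(stimulus, cadence)[0, 1]          # Pearson R
    slope, intercept = np.polyfit(stimulus, cadence, 1)
    print(f"R = {r:.2f}; cadence ~ {slope:.2f} * stimulus + {intercept:.1f}")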
Perception of non-verbal auditory stimuli in Italian dyslexic children.
Cantiani, Chiara; Lorusso, Maria Luisa; Valnegri, Camilla; Molteni, Massimo
2010-01-01
Auditory temporal processing deficits have been proposed as the underlying cause of phonological difficulties in Developmental Dyslexia. The hypothesis was tested in a sample of 20 Italian dyslexic children aged 8-14, and 20 matched control children. Three tasks of auditory processing of non-verbal stimuli, involving discrimination and reproduction of sequences of rapidly presented short sounds, were expressly created. Dyslexic subjects performed more poorly than control children, suggesting the presence of a deficit only partially influenced by the duration of the stimuli and of the inter-stimulus intervals (ISIs).
Greening, Steven G.; Lee, Tae-Ho; Mather, Mara
2016-01-01
Anxiety is associated with an exaggerated expectancy of harm, including overestimation of how likely a conditioned stimulus (CS+) predicts a harmful unconditioned stimulus (US). In the current study we tested whether anxiety-associated expectancy of harm increases primary sensory cortex (S1) activity on non-reinforced (i.e., no shock) CS+ trials. Twenty healthy volunteers completed a differential-tone trace conditioning task while undergoing fMRI, with shock delivered to the left hand. We found a positive correlation between trait anxiety and activity in right, but not left, S1 during CS+ versus CS− conditions. Right S1 activity also correlated with individual differences in both primary auditory cortices (A1) and amygdala activity. Lastly, a seed-based functional connectivity analysis demonstrated that trial-wise S1 activity was positively correlated with regions of dorsolateral prefrontal cortex (dlPFC), suggesting that higher-order cognitive processes contribute to the anticipatory sensory reactivity. Our findings indicate that individual differences in trait anxiety relate to anticipatory reactivity for the US during associative learning. This anticipatory reactivity is also integrated along with emotion-related sensory signals into a brain network implicated in fear-conditioned responding. PMID:26751483
Yoder, Kathleen M.; Vicario, David S.
2012-01-01
Gonadal hormones modulate behavioral responses to sexual stimuli, and communication signals can also modulate circulating hormone levels. In several species, these combined effects appear to underlie a two-way interaction between circulating gonadal hormones and behavioral responses to socially salient stimuli. Recent work in songbirds has shown that manipulating local estradiol levels in the auditory forebrain produces physiological changes that affect discrimination of conspecific vocalizations and can affect behavior. These studies provide new evidence that estrogens can directly alter auditory processing and indirectly alter the behavioral response to a stimulus. These studies show that: 1. Local estradiol action within an auditory area is necessary for socially-relevant sounds to induce normal physiological responses in the brains of both sexes; 2. These physiological effects occur much more quickly than predicted by the classical time-frame for genomic effects; 3. Estradiol action within the auditory forebrain enables behavioral discrimination among socially-relevant sounds in males; and 4. Estradiol is produced locally in the male brain during exposure to particular social interactions. The accumulating evidence suggests a socio-neuro-endocrinology framework in which estradiol is essential to auditory processing, is increased by a socially relevant stimulus, acts rapidly to shape perception of subsequent stimuli experienced during social interactions, and modulates behavioral responses to these stimuli. Brain estrogens are likely to function similarly in both songbird sexes because aromatase and estrogen receptors are present in both male and female forebrain. Estrogenic modulation of perception in songbirds and perhaps other animals could fine-tune male advertising signals and female ability to discriminate them, facilitating mate selection by modulating behaviors. PMID:22201281
Brain state-dependent abnormal LFP activity in the auditory cortex of a schizophrenia mouse model
Nakao, Kazuhito; Nakazawa, Kazu
2014-01-01
In schizophrenia, evoked 40-Hz auditory steady-state responses (ASSRs) are impaired, which reflects the sensory deficits in this disorder, and baseline spontaneous oscillatory activity also appears to be abnormal. It has been debated whether the evoked ASSR impairments are due to the possible increase in baseline power. GABAergic interneuron-specific NMDA receptor (NMDAR) hypofunction mutant mice mimic some behavioral and pathophysiological aspects of schizophrenia. To determine the presence and extent of sensory deficits in these mutant mice, we recorded spontaneous local field potential (LFP) activity and its click-train evoked ASSRs from primary auditory cortex of awake, head-restrained mice. Baseline spontaneous LFP power in the pre-stimulus period before application of the first click trains was augmented at a wide range of frequencies. However, when repetitive ASSR stimuli were presented every 20 s, averaged spontaneous LFP power amplitudes during the inter-ASSR stimulus intervals in the mutant mice became indistinguishable from the levels of control mice. Nonetheless, the evoked 40-Hz ASSR power and its phase locking to click trains were robustly impaired in the mutants, although the evoked 20-Hz ASSRs were also somewhat diminished. These results suggested that NMDAR hypofunction in cortical GABAergic neurons confers two brain state-dependent LFP abnormalities in the auditory cortex: (1) a broadband increase in spontaneous LFP power in the absence of external inputs, and (2) a robust deficit in the evoked ASSR power and its phase-locking despite normal baseline LFP power magnitude during the repetitive auditory stimuli. The “paradoxically” high spontaneous LFP activity of the primary auditory cortex in the absence of external stimuli may possibly contribute to the emergence of schizophrenia-related aberrant auditory perception. PMID:25018691
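Evoked ASSR power and phase locking of the kind measured here can be estimated from single-trial epochs. A minimal sketch assuming a single FFT bin at the stimulation frequency; illustrative, not the authors' analysis code:

    import numpy as np

    def assr_power_and_plf(trials, fs, f=40.0):
        # trials: (n_trials, n_samples) epochs time-locked to the click
        # train. Returns evoked power at f (from the trial average) and
        # the phase-locking factor (0-1), the resultant length of
        # per-trial phases at f. Tapering is omitted for brevity.
        n = trials.shape[1]
        k = int(np.argmin(np.abs(np.fft.rfftfreq(n, 1.0 / fs) - f)))
        per_trial = np.fft.rfft(trials, axis=1)[:, k]
        evoked_power = np.abs(np.fft.rfft(trials.mean(axis=0))[k] / n) ** 2
        plf = float(np.abs(np.mean(per_trial / np.abs(per_trial))))
        return evoked_power, plf

    # Example: 50 one-second trials at 1 kHz with a phase-locked 40 Hz tone
    rng = np.random.default_rng(0)
    t = np.arange(0, 1.0, 0.001)
    trials = np.sin(2 * np.pi * 40 * t) + rng.standard_normal((50, t.size))
    print(assr_power_and_plf(trials, fs=1000.0))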
A corollary discharge maintains auditory sensitivity during sound production
NASA Astrophysics Data System (ADS)
Poulet, James F. A.; Hedwig, Berthold
2002-08-01
Speaking and singing present the auditory system of the caller with two fundamental problems: discriminating between self-generated and external auditory signals and preventing desensitization. In humans and many other vertebrates, auditory neurons in the brain are inhibited during vocalization but little is known about the nature of the inhibition. Here we show, using intracellular recordings of auditory neurons in the singing cricket, that presynaptic inhibition of auditory afferents and postsynaptic inhibition of an identified auditory interneuron occur in phase with the song pattern. Presynaptic and postsynaptic inhibition persist in a fictively singing, isolated cricket central nervous system and are therefore the result of a corollary discharge from the singing motor network. Mimicking inhibition in the interneuron by injecting hyperpolarizing current suppresses its spiking response to a 100-dB sound pressure level (SPL) acoustic stimulus and maintains its response to subsequent, quieter stimuli. Inhibition by the corollary discharge reduces the neural response to self-generated sound and protects the cricket's auditory pathway from self-induced desensitization.
Neural coding strategies in auditory cortex.
Wang, Xiaoqin
2007-07-01
In contrast to the visual system, the auditory system has longer subcortical pathways and more spiking synapses between the peripheral receptors and the cortex. This unique organization reflects the needs of the auditory system to extract behaviorally relevant information from a complex acoustic environment using strategies different from those used by other sensory systems. The neural representations of acoustic information in auditory cortex can be characterized by three types: (1) isomorphic (faithful) representations of acoustic structures; (2) non-isomorphic transformations of acoustic features and (3) transformations from acoustical to perceptual dimensions. The challenge facing auditory neurophysiologists is to understand the nature of the latter two transformations. In this article, I will review recent studies from our laboratory regarding temporal discharge patterns in auditory cortex of awake marmosets and cortical representations of time-varying signals. Findings from these studies show that (1) firing patterns of neurons in auditory cortex are dependent on stimulus optimality and context and (2) the auditory cortex forms internal representations of sounds that are no longer faithful replicas of their acoustic structures.
Prestimulus brain activity predicts primacy in list learning
Galli, Giulia; Choy, Tsee Leng; Otten, Leun J.
2012-01-01
Brain activity immediately before an event can predict whether the event will later be remembered. This indicates that memory formation is influenced by anticipatory mechanisms engaged ahead of stimulus presentation. Here, we asked whether anticipatory processes affect the learning of short word lists, and whether such activity varies as a function of serial position. Participants memorized lists of intermixed visual and auditory words with either an elaborative or rote rehearsal strategy. At the end of each list, a distraction task was performed followed by free recall. Recall performance was better for words in initial list positions and following elaborative rehearsal. Electrical brain activity before auditory words predicted later recall in the elaborative rehearsal condition. Crucially, anticipatory activity only affected recall when words occurred in initial list positions. This indicates that anticipatory processes, possibly related to general semantic preparation, contribute to primacy effects. PMID:22888370
Responses in Rat Core Auditory Cortex are Preserved during Sleep Spindle Oscillations
Sela, Yaniv; Vyazovskiy, Vladyslav V.; Cirelli, Chiara; Tononi, Giulio; Nir, Yuval
2016-01-01
Study Objectives: Sleep is defined as a reversible state of reduction in sensory responsiveness and immobility. A long-standing hypothesis suggests that a high arousal threshold during non-rapid eye movement (NREM) sleep is mediated by sleep spindle oscillations, impairing thalamocortical transmission of incoming sensory stimuli. Here we set out to test this idea directly by examining sensory-evoked neuronal spiking activity during natural sleep. Methods: We compared neuronal (n = 269) and multiunit activity (MUA), as well as local field potentials (LFP) in rat core auditory cortex (A1) during NREM sleep, comparing responses to sounds depending on the presence or absence of sleep spindles. Results: We found that sleep spindles robustly modulated the timing of neuronal discharges in A1. However, responses to sounds were nearly identical for all measured signals including isolated neurons, MUA, and LFPs (all differences < 10%). Furthermore, in 10% of trials, auditory stimulation led to an early termination of the sleep spindle oscillation around 150–250 msec following stimulus onset. Finally, active ON states and inactive OFF periods during slow waves in NREM sleep affected the auditory response in opposite ways, depending on stimulus intensity. Conclusions: Responses in core auditory cortex are well preserved regardless of sleep spindles recorded in that area, suggesting that thalamocortical sensory relay remains functional during sleep spindles, and that sensory disconnection in sleep is mediated by other mechanisms. Citation: Sela Y, Vyazovskiy VV, Cirelli C, Tononi G, Nir Y. Responses in rat core auditory cortex are preserved during sleep spindle oscillations. SLEEP 2016;39(5):1069–1082. PMID:26856904
Kim, Hyung-Su; Cho, Hye-Yeon; Augustine, George J; Han, Jin-Hee
2016-01-01
Evidence from rodent and human studies has identified the ventromedial prefrontal cortex, specifically the infralimbic cortex (IL), as a critical brain structure in the extinction of conditioned fear. However, how IL activity controls fear expression at the time of extinction memory retrieval is unclear and controversial. To address this issue, we used optogenetics to precisely manipulate the activity of genetically targeted cells and to examine the real-time contribution of IL activity to expression of auditory-conditioned fear extinction in mice. We found that inactivation of IL, but not prelimbic cortex, impaired extinction retrieval. Conversely, photostimulation of IL excitatory neurons robustly enhanced the inhibition of fear expression after extinction, but not before extinction. Moreover, this effect was specific to the conditioned stimulus (CS): IL activity had no effect on expression of fear in response to the conditioned context after auditory fear extinction. Thus, in contrast to the expectation from a generally held view, artificial activation of IL produced no significant effect on expression of non-extinguished conditioned fear. Therefore, our data provide compelling evidence that IL activity is critical for expression of fear extinction and establish a causal role for IL activity in controlling fear expression in a CS-specific manner after extinction. PMID:26354044
The inferior colliculus encodes the Franssen auditory spatial illusion
Rajala, Abigail Z.; Yan, Yonghe; Dent, Micheal L.; Populin, Luis C.
2014-01-01
Illusions are effective tools for the study of the neural mechanisms underlying perception because neural responses can be correlated to the physical properties of stimuli and the subject’s perceptions. The Franssen illusion (FI) is an auditory spatial illusion evoked by presenting a transient, abrupt tone and a slowly rising, sustained tone of the same frequency simultaneously on opposite sides of the subject. Perception of the FI consists of hearing a single sound, the sustained tone, on the side where the transient was presented. Both subcortical and cortical mechanisms for the FI have been proposed, but, to date, there is no direct evidence for either. The data show that humans and rhesus monkeys perceive the FI similarly. Recordings were taken from single units of the inferior colliculus in monkeys while they indicated the perceived location of sound sources with their gaze. The results show that the transient component of the Franssen stimulus, with a shorter first spike latency and higher discharge rate than the sustained tone, encodes the perception of sound location. Furthermore, the persistent erroneous perception of the sustained stimulus location is due to continued excitation of the same neurons, first activated by the transient, by the sustained stimulus without location information. These results demonstrate for the first time, on a trial-by-trial basis, a correlation between perception of an auditory spatial illusion and a subcortical physiological substrate. PMID:23899307
Physiological reactivity of pregnant women to evoked fetal startle
DiPietro, Janet A.; Voegtline, Kristin M.; Costigan, Kathleen A.; Aguirre, Frank; Kivlighan, Katie; Chen, Ping
2013-01-01
Objective The bidirectional nature of mother-child interaction is widely acknowledged during infancy and childhood. Prevailing models during pregnancy focus on unidirectional influences exerted by the pregnant woman on the developing fetus. Prior work has indicated that the fetus also affects the pregnant woman. Our objective was to determine whether a maternal psychophysiological response to stimulation of the fetus could be isolated. Methods Using a longitudinal design, an airborne auditory stimulus was used to elicit a fetal heart rate and motor response at 24 (n = 47) and 36 weeks (n = 45) gestation. Women were blind to condition (stimulus versus sham). Maternal parameters included cardiac (heart rate) and electrodermal (skin conductance) responses. Multilevel modeling of repeated measures with 5 data points per second was used to examine fetal and maternal responses. Results As expected, compared to a sham condition, the stimulus generated a fetal motor response at both gestational ages, consistent with a mild fetal startle. Fetal stimulation was associated with significant, transient slowing of maternal heart rate coupled with increased skin conductance within 10 s of the stimulus at both gestational ages. Nulliparous women showed greater electrodermal responsiveness. The magnitude of the fetal motor response significantly corresponded to the maternal skin conductance response at 5, 10, 15, and 30 s following stimulation. Conclusion Elicited fetal movement exerts an independent influence on the maternal autonomic nervous system. This finding contributes to current models of the dyadic relationship during pregnancy between fetus and pregnant woman. PMID:24119937
Hill, N J; Schölkopf, B
2012-01-01
We report on the development and online testing of an EEG-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects’ modulation of N1 and P3 ERP components measured during single 5-second stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare “oddball” stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly-known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention-modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject’s attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology. PMID:22333135
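Single-trial ERP classification of this kind typically reduces each epoch to window-averaged amplitudes and feeds them to a linear classifier. A sketch on placeholder random data, assuming N1 and P3 windows; the study's exact pipeline (channels, windows, classifier) may differ:

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    fs = 250
    epochs = rng.standard_normal((100, 8, 250))    # trials x channels x samples
    labels = rng.integers(0, 2, size=100)          # 0 = attend-left, 1 = right

    n1 = slice(int(0.08 * fs), int(0.15 * fs))     # ~80-150 ms post-stimulus
    p3 = slice(int(0.25 * fs), int(0.45 * fs))     # ~250-450 ms post-stimulus
    X = np.hstack([epochs[:, :, n1].mean(axis=2),
                   epochs[:, :, p3].mean(axis=2)]) # (trials, 2 * channels)

    acc = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=5).mean()
    print(f"cross-validated accuracy: {acc:.2f} (chance ~ 0.50 on random data)")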
A New Approach to Model Pitch Perception Using Sparse Coding
Barzelay, Oded; Furst, Miriam; Barak, Omri
2017-01-01
Our acoustical environment abounds with repetitive sounds, some of which are related to pitch perception. It is still unknown how the auditory system, in processing these sounds, relates a physical stimulus and its percept. Since, in mammals, all auditory stimuli are conveyed into the nervous system through the auditory nerve (AN) fibers, a model should explain the perception of pitch as a function of this particular input. However, pitch perception is invariant to certain features of the physical stimulus. For example, a missing fundamental stimulus with resolved or unresolved harmonics, or low- and high-level stimuli with the same spectral content: these all give rise to the same percept of pitch. In contrast, the AN representations for these different stimuli are not invariant to these effects. In fact, due to saturation and non-linearity of both cochlear and inner hair cell responses, these differences are enhanced by the AN fibers. Thus there is a difficulty in explaining how the pitch percept arises from the activity of the AN fibers. We introduce a novel approach for extracting pitch cues from the AN population activity for a given arbitrary stimulus. The method is based on a technique known as sparse coding (SC). It is the representation of pitch cues by a few spatiotemporal atoms (templates) from among a large set of possible ones (a dictionary). The amount of activity of each atom is represented by a non-zero coefficient, analogous to an active neuron. Such a technique has been successfully applied to other modalities, particularly vision. The model is composed of a cochlear model, an SC processing unit, and a harmonic sieve. We show that the model copes with different pitch phenomena: extracting resolved and non-resolved harmonics, missing fundamental pitches, stimuli with both high and low amplitudes, iterated rippled noises, and recorded musical instruments. PMID:28099436
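A toy illustration of the sparse-coding idea, not the authors' cochlear-model pipeline: a missing-fundamental signal is coded against a dictionary of harmonic templates, and pitch is read from the most active atom. Uses scikit-learn's orthogonal matching pursuit as the sparse solver; everything below is an assumed simplification:

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    fs, n = 8000, 512
    t = np.arange(n) / fs
    f0s = np.arange(100, 400, 10)                  # candidate pitches (Hz)
    # Each atom sums harmonics 1-5 of one candidate f0, then is normalized.
    atoms = np.array([sum(np.cos(2 * np.pi * k * f * t) for k in range(1, 6))
                      for f in f0s]).T             # shape (n, n_atoms)
    atoms /= np.linalg.norm(atoms, axis=0)

    # Missing-fundamental input: harmonics 2-5 of 200 Hz, no energy at 200 Hz
    signal = sum(np.cos(2 * np.pi * k * 200 * t) for k in range(2, 6))

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3).fit(atoms, signal)
    print("estimated pitch:", f0s[int(np.argmax(np.abs(omp.coef_)))], "Hz")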
Epp, Bastian; Yasin, Ifat; Verhey, Jesko L
2013-12-01
The audibility of important sounds is often hampered due to the presence of other masking sounds. The present study investigates whether a correlate of the audibility of a tone masked by noise is found in late auditory evoked potentials measured from human listeners. The audibility of the target sound at a fixed physical intensity is varied by introducing auditory cues of (i) interaural target signal phase disparity and (ii) coherent masker level fluctuations in different frequency regions. In agreement with previous studies, psychoacoustical experiments showed that both stimulus manipulations result in a masking release (i: binaural masking level difference; ii: comodulation masking release) compared to a condition where those cues are not present. Late auditory evoked potentials (N1, P2) were recorded for the stimuli at a constant masker level, but different signal levels, within the same set of listeners who participated in the psychoacoustical experiment. The data indicate differences in N1 and P2 between stimuli with and without interaural phase disparities. However, differences for stimuli with and without coherent masker modulation were only found for P2, i.e., only P2 is sensitive to the increase in audibility, irrespective of the cue that caused the masking release. The amplitude of P2 is consistent with the psychoacoustical finding of an addition of the masking releases when both cues are present. Even though it cannot be concluded where along the auditory pathway the audibility is represented, the P2 component of auditory evoked potentials is a candidate for an objective measure of audibility in the human auditory system. Copyright © 2013 Elsevier B.V. All rights reserved.
Davis, Chris; Kislyuk, Daniel; Kim, Jeesun; Sams, Mikko
2008-11-25
We used whole-head magnetoencephalography (MEG) to record changes in neuromagnetic N100m responses generated in the left and right auditory cortex as a function of the match between visual and auditory speech signals. Stimuli were auditory-only (AO) and auditory-visual (AV) presentations of /pi/, /ti/ and /vi/. Three types of intensity-matched auditory stimuli were used: intact speech (Normal), frequency band filtered speech (Band) and speech-shaped white noise (Noise). The behavioural task was to detect the /vi/ syllables, which comprised 12% of the stimuli. N100m responses were measured to averaged /pi/ and /ti/ stimuli. Behavioural data showed that identification of the stimuli was faster and more accurate for Normal than for Band stimuli, and for Band than for Noise stimuli. Reaction times were faster for AV than AO stimuli. MEG data showed that in the left hemisphere, N100m to both AO and AV stimuli was largest for the Normal, smaller for Band and smallest for Noise stimuli. In the right hemisphere, Normal and Band AO stimuli elicited N100m responses of quite similar amplitudes, but N100m amplitude to Noise was about half of that. There was a reduction in N100m for the AV compared to the AO conditions. The size of this reduction for each stimulus type was the same in the left hemisphere but graded in the right (being largest to the Normal, smaller to the Band and smallest to the Noise stimuli). The N100m decrease for the Normal stimuli was significantly larger in the right than in the left hemisphere. We suggest that the effect of processing visual speech seen in the right hemisphere likely reflects suppression of the auditory response based on AV cues for place of articulation.
ten Oever, Sanne; Sack, Alexander T.; Wheat, Katherine L.; Bien, Nina; van Atteveldt, Nienke
2013-01-01
Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception. PMID:23805110
The 'F-complex' and MMN tap different aspects of deviance.
Laufer, Ilan; Pratt, Hillel
2005-02-01
To compare the 'F(fusion)-complex' with the mismatch negativity (MMN), both components associated with automatic detection of changes in the acoustic stimulus flow. Ten right-handed adult native Hebrew speakers discriminated vowel-consonant-vowel (V-C-V) sequences /ada/ (deviant) and /aga/ (standard) in an active auditory 'Oddball' task, and the brain potentials associated with performance of the task were recorded from 21 electrodes. Stimuli were generated by fusing the acoustic elements of the V-C-V sequences as follows: the base was always presented in front of the subject, and formant transitions were presented to the front, left or right in a virtual reality room. An illusion of a lateralized echo (duplex sensation) accompanied base fusion with the lateralized formant locations. Source current density estimates were derived for the net response to the fusion of the speech elements (F-complex) and for the MMN, using low-resolution electromagnetic tomography (LORETA). Statistical non-parametric mapping was used to estimate the current density differences between the brain sources of the F-complex and the MMN. Occipito-parietal regions and prefrontal regions were associated with the F-complex in all formant locations, whereas the vicinity of the supratemporal plane was bilaterally associated with the MMN, but only in the case of front-fusion (no duplex effect). The MMN is sensitive to the novelty of the auditory object in relation to other stimuli in a sequence, whereas the F-complex is sensitive to the acoustic features of the auditory object and reflects a process of matching them with target categories. The F-complex and MMN reflect different aspects of auditory processing in a stimulus-rich and changing environment: content analysis of the stimulus and novelty detection, respectively.
Neuromechanistic Model of Auditory Bistability
Rankin, James; Sussman, Elyse; Rinzel, John
2015-01-01
Sequences of higher frequency A and lower frequency B tones repeating in an ABA- triplet pattern are widely used to study auditory streaming. One may experience either an integrated percept, a single ABA-ABA- stream, or a segregated percept, separate but simultaneous streams A-A-A-A- and -B---B--. During minutes-long presentations, subjects may report irregular alternations between these interpretations. We combine neuromechanistic modeling and psychoacoustic experiments to study these persistent alternations and to characterize the effects of manipulating stimulus parameters. Unlike many phenomenological models with abstract, percept-specific competition and fixed inputs, our network model comprises neuronal units with sensory feature dependent inputs that mimic the pulsatile-like A1 responses to tones in the ABA- triplets. It embodies a neuronal computation for percept competition thought to occur beyond primary auditory cortex (A1). Mutual inhibition, adaptation and noise are implemented. We include slow NMDA recurrent excitation for local temporal memory that enables linkage across sound gaps from one triplet to the next. Percepts in our model are identified in the firing patterns of the neuronal units. We predict with the model that manipulations of the frequency difference between tones A and B should affect the dominance durations of the stronger percept, the one dominant a larger fraction of time, more than those of the weaker percept—a property that has been previously established and generalized across several visual bistable paradigms. We confirm the qualitative prediction with our psychoacoustic experiments and use the behavioral data to further constrain and improve the model, achieving quantitative agreement between experimental and modeling results. Our work and model provide a platform that can be extended to consider other stimulus conditions, including the effects of context and volition. PMID:26562507
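A minimal sketch of the competition mechanism described (mutual inhibition, slow adaptation, noise), assuming a sigmoidal gain and hand-picked parameters; the published model additionally includes feature-dependent inputs and slow NMDA-mediated excitation, none of which is reproduced here:

    import numpy as np

    def simulate_bistability(T=300.0, dt=0.001, seed=0):
        # Two percept units ('integrated' vs. 'segregated') with mutual
        # inhibition (beta), slow adaptation (phi, tau_a) and noise.
        rng = np.random.default_rng(seed)
        steps = int(T / dt)
        x = np.array([0.2, 0.1])      # unit activities
        a = np.zeros(2)               # adaptation variables
        beta, phi, tau, tau_a = 2.0, 1.5, 0.01, 2.0   # assumed parameters
        gain = lambda z: 1.0 / (1.0 + np.exp(-10.0 * (z - 0.2)))
        trace = np.empty((steps, 2))
        for i in range(steps):
            drive = 1.0 - beta * x[::-1] - phi * a    # equal external input
            x = (x + dt * (-x + gain(drive)) / tau
                 + 0.15 * np.sqrt(dt) * rng.standard_normal(2))
            a = a + dt * (x - a) / tau_a
            trace[i] = x
        return trace

    trace = simulate_bistability()
    print(f"unit 1 dominant {(trace[:, 0] > trace[:, 1]).mean():.0%} of the time")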
Müller, Viktor; Perdikis, Dionysios; von Oertzen, Timo; Sleimen-Malkoun, Rita; Jirsa, Viktor; Lindenberger, Ulman
2016-01-01
Resting-state and task-related recordings are characterized by oscillatory brain activity and widely distributed networks of synchronized oscillatory circuits. Electroencephalographic recordings (EEG) were used to assess network structure and network dynamics during resting state with eyes open and closed, and auditory oddball performance through phase synchronization between EEG channels. For this assessment, we constructed a hyper-frequency network (HFN) based on within- and cross-frequency coupling (WFC and CFC, respectively) at 10 oscillation frequencies ranging between 2 and 20 Hz. We found that CFC generally differentiates between task conditions better than WFC. CFC was the highest during resting state with eyes open. Using a graph-theoretical approach (GTA), we found that HFNs possess small-world network (SWN) topology with a slight tendency to random network characteristics. Moreover, analysis of the temporal fluctuations of HFNs revealed specific network topology dynamics (NTD), i.e., temporal changes of different graph-theoretical measures such as strength, clustering coefficient, characteristic path length (CPL), local, and global efficiency determined for HFNs at different time windows. The different topology metrics showed significant differences between conditions in the mean and standard deviation of these metrics both across time and nodes. In addition, using an artificial neural network approach, we found stimulus-related dynamics that varied across the different network topology metrics. We conclude that functional connectivity dynamics (FCD), or NTD, which was found using the HFN approach during rest and stimulus processing, reflects temporal and topological changes in the functional organization and reorganization of neuronal cell assemblies. PMID:27799906
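Within- and cross-frequency phase synchronization of the kind used to build these hyper-frequency networks can be quantified with an n:m phase-locking value. A sketch under standard assumptions (narrowband filtering plus Hilbert phases; names and parameters are illustrative):

    import numpy as np
    from math import gcd
    from scipy.signal import butter, filtfilt, hilbert

    def band_phase(x, fs, f, half_bw=1.0):
        # Instantaneous phase of x in a narrow band around f (Hz).
        b, a = butter(3, [(f - half_bw) / (fs / 2), (f + half_bw) / (fs / 2)],
                      btype="bandpass")
        return np.angle(hilbert(filtfilt(b, a, x)))

    def nm_plv(x, y, fs, fx, fy):
        # n:m phase-locking value between channel x at fx and y at fy;
        # fx == fy gives ordinary within-frequency coupling (WFC), unequal
        # integer frequencies give cross-frequency coupling (CFC).
        px, py = band_phase(x, fs, fx), band_phase(y, fs, fy)
        g = gcd(int(fx), int(fy))
        n, m = int(fy) // g, int(fx) // g
        return float(np.abs(np.mean(np.exp(1j * (n * px - m * py)))))

    fs = 250
    t = np.arange(0, 10, 1.0 / fs)
    rng = np.random.default_rng(1)
    x = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)
    y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * rng.standard_normal(t.size)
    print(f"1:2 CFC PLV: {nm_plv(x, y, fs, 5, 10):.2f}")  # high when coupled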
Change deafness for real spatialized environmental scenes.
Gaston, Jeremy; Dickerson, Kelly; Hipp, Daniel; Gerhardstein, Peter
2017-01-01
The everyday auditory environment is complex and dynamic; often, multiple sounds co-occur and compete for a listener's cognitive resources. 'Change deafness', framed as the auditory analog to the well-documented phenomenon of 'change blindness', describes the finding that changes presented within complex environments are often missed. The present study examines a number of stimulus factors that may influence change deafness under real-world listening conditions. Specifically, an AX (same-different) discrimination task was used to examine the effects of both spatial separation over a loudspeaker array and the type of change (sound source additions and removals) on discrimination of changes embedded in complex backgrounds. Results using signal detection theory and accuracy analyses indicated that, under most conditions, errors were significantly reduced for spatially distributed relative to non-spatial scenes. A second goal of the present study was to evaluate a possible link between memory for scene contents and change discrimination. Memory was evaluated by presenting a cued recall test following each trial of the discrimination task. Results using signal detection theory and accuracy analyses indicated that recall ability was similar in terms of accuracy, but there were reductions in sensitivity compared to previous reports. Finally, the present study used a large and representative sample of outdoor, urban, and environmental sounds, presented in unique combinations of nearly 1000 trials per participant. This enabled the exploration of the relationship between change perception and the perceptual similarity between change targets and background scene sounds. These (post hoc) analyses suggest both a categorical and a stimulus-level relationship between scene similarity and the magnitude of change errors.
Challenges of recording human fetal auditory-evoked response using magnetoencephalography.
Eswaran, H; Lowery, C L; Robinson, S E; Wilson, J D; Cheyne, D; McKenzie, D
2000-01-01
Our goals were to successfully record fetal auditory-evoked responses using the magnetoencephalography technique, understand its problems and limitations, and propose instrument design modifications to improve the signal quality and success rate. Fetal auditory-evoked responses were recorded from four fetuses with gestational ages ranging from 33-40+ weeks. The signals were recorded using a gantry-based superconducting quantum interference device. The auditory stimulus was a 1 kHz tone burst. The evoked signals were digitized and averaged over an 800 ms window. After several trials of positioning and repositioning the subjects, we were able to record auditory-evoked responses in three out of the four fetuses. Since the superconducting quantum interference device array design was not shaped to fit over the mother's abdomen, we experienced difficulty in positioning the sensors over the fetal head. Based on this pilot study, we propose an instrument design that may improve the signal quality and success rate of the fetal magnetic auditory-evoked response.
Brainstem auditory evoked responses in man. 1: Effect of stimulus rise-fall time and duration
NASA Technical Reports Server (NTRS)
Hecox, K.; Squires, N.; Galambos, R.
1975-01-01
Short-latency (under 10 msec) evoked responses elicited by bursts of white noise were recorded from the scalp of human subjects. Response alterations produced by changes in the noise burst duration (on-time), inter-burst interval (off-time), and onset and offset shapes are reported and evaluated. The latency of the most prominent response component, wave V, was markedly delayed with increases in stimulus rise time but was unaffected by changes in fall time. The amplitude of wave V was insensitive to changes in signal rise and fall times, while increasing signal on-time produced smaller-amplitude responses only for sufficiently short off-times. It is concluded that wave V of the human auditory brainstem evoked response is solely an onset response.
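The stimulus manipulation described above can be sketched as noise bursts with independently adjustable rise and fall ramps. The sample rate, durations, and the linear ramp shape are assumptions for illustration.

```python
# Sketch: white-noise bursts with independent rise and fall ramps.
# Parameter values and the linear ramp are illustrative assumptions.
import numpy as np

def noise_burst(fs=48_000, on_ms=30.0, rise_ms=5.0, fall_ms=5.0, seed=0):
    n = int(fs * on_ms / 1000)
    burst = np.random.default_rng(seed).standard_normal(n)
    n_rise = int(fs * rise_ms / 1000)
    n_fall = int(fs * fall_ms / 1000)
    env = np.ones(n)
    env[:n_rise] = np.linspace(0.0, 1.0, n_rise)      # onset ramp (rise time)
    env[n - n_fall:] = np.linspace(1.0, 0.0, n_fall)  # offset ramp (fall time)
    return burst * env

slow_rise = noise_burst(rise_ms=10.0, fall_ms=1.0)  # longer rise: delayed wave V
fast_rise = noise_burst(rise_ms=1.0, fall_ms=10.0)  # fall time: no latency effect
```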
Memory for pictures and sounds: independence of auditory and visual codes.
Thompson, V A; Paivio, A
1994-09-01
Three experiments examined the mnemonic independence of auditory and visual nonverbal stimuli in free recall. Stimulus lists consisted of (1) pictures, (2) the corresponding environmental sounds, or (3) picture-sound pairs. In Experiment 1, free recall was tested under three learning conditions: standard intentional, intentional with a rehearsal-inhibiting distracter task, or incidental with the distracter task. In all three groups, recall was best for the picture-sound items. In addition, recall for the picture-sound stimuli appeared to be additive relative to pictures or sounds alone when the distracter task was used. Experiment 2 included two additional groups: In one, two copies of the same picture were shown simultaneously; in the other, two different pictures of the same concept were shown. There was no difference in recall among any of the picture groups; in contrast, recall in the picture-sound condition was greater than recall in either single-modality condition. However, doubling the exposure time in a third experiment resulted in additively higher recall for repeated pictures with different exemplars than ones with identical exemplars. The results are discussed in terms of dual coding theory and alternative conceptions of the memory trace.
Adaptation, saturation, and physiological masking in single auditory-nerve fibers.
Smith, R L
1979-01-01
Results are reviewed concerning some effects, at a unit's characteristic frequency, of a short-term conditioning stimulus on the responses to perstimulatory and poststimulatory test tones. A phenomenological equation is developed from the poststimulatory results and shown to be consistent with the perstimulatory results. According to the results and equation, the response to a test tone equals the unconditioned or unadapted response minus the decrement produced by adaptation to the conditioning tone. Furthermore, the decrement is proportional to the driven response to the conditioning tone and does not depend on sound intensity per se. The equation has a simple interpretation in terms of two processes in cascade: a static saturating nonlinearity followed by additive adaptation. Results are presented to show that this functional model is sufficient to account for the "physiological masking" produced by wide-band backgrounds. According to this interpretation, a sufficiently intense background produces saturation. Consequently, a superimposed test tone causes no change in response. In addition, when the onset of the background precedes the onset of the test tone, the total firing rate is reduced by adaptation. Evidence is reviewed concerning the possible correspondence between the variables in the model and intracellular events in the auditory periphery.
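The two-process cascade described above can be sketched as follows. The rational saturating form and the proportionality constant k are illustrative assumptions, chosen only to show how the decrement tracks the driven response to the conditioner (not raw intensity) and plateaus once the nonlinearity saturates.

```python
# Sketch of the cascade: a static saturating nonlinearity followed by
# additive adaptation. Functional form and constants are assumptions.
import numpy as np

def driven_rate(intensity, r_max=250.0, theta=1.0):
    """Static saturating nonlinearity: firing rate vs. stimulus intensity."""
    return r_max * intensity / (intensity + theta)

def test_response(test_intensity, cond_intensity, k=0.4):
    """Unadapted response minus a decrement proportional to the driven
    response to the conditioning tone (not to raw intensity)."""
    decrement = k * driven_rate(cond_intensity)
    return max(driven_rate(test_intensity) - decrement, 0.0)

# Once the conditioner saturates the nonlinearity, further intensity
# increases barely change the decrement ("physiological masking"):
for cond in (5.0, 50.0, 500.0):
    print(cond, round(test_response(2.0, cond), 1))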
Yoncheva, Yuliya; Maurer, Urs; Zevin, Jason D; McCandliss, Bruce D
2014-08-15
Selective attention to phonology, i.e., the ability to attend to sub-syllabic units within spoken words, is a critical precursor to literacy acquisition. Recent functional magnetic resonance imaging evidence has demonstrated that a left-lateralized network of frontal, temporal, and posterior language regions, including the visual word form area, supports this skill. The current event-related potential (ERP) study investigated the temporal dynamics of selective attention to phonology during spoken word perception. We tested the hypothesis that selective attention to phonology dynamically modulates stimulus encoding by recruiting left-lateralized processes specifically while the information critical for performance is unfolding. Selective attention to phonology was captured by manipulating listening goals: skilled adult readers attended to either rhyme or melody within auditory stimulus pairs. Each pair superimposed rhyming and melodic information ensuring identical sensory stimulation. Selective attention to phonology produced distinct early and late topographic ERP effects during stimulus encoding. Data-driven source localization analyses revealed that selective attention to phonology led to significantly greater recruitment of left-lateralized posterior and extensive temporal regions, which was notably concurrent with the rhyme-relevant information within the word. Furthermore, selective attention effects were specific to auditory stimulus encoding and not observed in response to cues, arguing against the notion that they reflect sustained task setting. Collectively, these results demonstrate that selective attention to phonology dynamically engages a left-lateralized network during the critical time-period of perception for achieving phonological analysis goals. These findings suggest a key role for selective attention in on-line phonological computations. Furthermore, these findings motivate future research on the role that neural mechanisms of attention may play in phonological awareness impairments thought to underlie developmental reading disabilities. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Qin, Pengmin; Duncan, Niall W; Wiebking, Christine; Gravel, Paul; Lyttelton, Oliver; Hayes, Dave J; Verhaeghe, Jeroen; Kostikov, Alexey; Schirrmacher, Ralf; Reader, Andrew J; Northoff, Georg
2012-01-01
Recent imaging studies have demonstrated that levels of resting γ-aminobutyric acid (GABA) in the visual cortex predict the degree of stimulus-induced activity in the same region. These studies used the presentation of discrete visual stimuli; however, the change from closed to open eyes also represents a simple visual stimulus and has been shown to induce changes in local brain activity and in functional connectivity between regions. We thus aimed to investigate the role of the GABA system, specifically GABA(A) receptors, in the changes in brain activity between the eyes-closed (EC) and eyes-open (EO) states in order to provide detail at the receptor level to complement previous studies of GABA concentrations. We conducted an fMRI study involving two different modes of the change from EC to EO: an EO and EC block design, allowing the modeling of the haemodynamic response, followed by longer periods of EC and EO to allow the measurement of functional connectivity. The same subjects also underwent [(18)F]Flumazenil PET to measure GABA(A) receptor binding potentials. It was demonstrated that the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex predicted the degree of changes in neural activity from EC to EO. This same relationship was also shown in the auditory cortex. Furthermore, the local-to-global ratio of GABA(A) receptor binding potential in the visual cortex also predicted the change in functional connectivity between the visual and auditory cortex from EC to EO. These findings contribute to our understanding of the role of GABA(A) receptors in stimulus-induced neural activity in local regions and in inter-regional functional connectivity.
Exploring the extent and function of higher-order auditory cortex in rhesus monkeys.
Poremba, Amy; Mishkin, Mortimer
2007-07-01
Just as cortical visual processing continues far beyond the boundaries of early visual areas, so too does cortical auditory processing continue far beyond the limits of early auditory areas. In passively listening rhesus monkeys examined with metabolic mapping techniques, cortical areas reactive to auditory stimulation were found to include the entire length of the superior temporal gyrus (STG) as well as several other regions within the temporal, parietal, and frontal lobes. Comparison of these widespread activations with those from an analogous study in vision supports the notion that audition, like vision, is served by several cortical processing streams, each specialized for analyzing a different aspect of sensory input, such as stimulus quality, location, or motion. Exploration with different classes of acoustic stimuli demonstrated that most portions of STG show greater activation on the right than on the left regardless of stimulus class. However, there is a striking shift to left-hemisphere "dominance" during passive listening to species-specific vocalizations, though this reverse asymmetry is observed only in the region of temporal pole. The mechanism for this left temporal pole "dominance" appears to be suppression of the right temporal pole by the left hemisphere, as demonstrated by a comparison of the results in normal monkeys with those in split-brain monkeys.
Farkas, Dávid; Denham, Susan L.; Bendixen, Alexandra; Tóth, Dénes; Kondo, Hirohito M.; Winkler, István
2016-01-01
Multi-stability refers to the phenomenon of perception stochastically switching between possible interpretations of an unchanging stimulus. Despite considerable variability, individuals show stable idiosyncratic patterns of switching between alternative perceptions in the auditory streaming paradigm. We explored correlates of the individual switching patterns with executive functions, personality traits, and creativity. The main dimensions on which individual switching patterns differed from each other were identified using multidimensional scaling. Individuals with high scores on the dimension explaining the largest portion of the inter-individual variance switched more often between the alternative perceptions than those with low scores. They also perceived the most unusual interpretation more often, and experienced all perceptual alternatives with a shorter delay from stimulus onset. The ego-resiliency personality trait, which reflects a tendency for adaptive flexibility and experience seeking, was significantly positively related to this dimension. Taking these results together, we suggest that this dimension may reflect the individual’s tendency for exploring the auditory environment. Executive functions were significantly related to some of the variables describing global properties of the switching patterns, such as the average number of switches. Thus, individual patterns of perceptual switching in the auditory streaming paradigm are related to some personality traits and executive functions. PMID:27135945
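A minimal sketch of the multidimensional scaling step mentioned above, run on a synthetic dissimilarity matrix between individual switching patterns; the matrix values and the choice of a two-dimensional embedding are illustrative assumptions.

```python
# Sketch: multidimensional scaling (MDS) of pairwise dissimilarities
# between individual switching patterns. Data are synthetic.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_subjects = 40
d = rng.uniform(0.2, 1.0, (n_subjects, n_subjects))
d = (d + d.T) / 2                  # symmetric dissimilarities
np.fill_diagonal(d, 0.0)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(d)      # row i: subject i's scores on the dimensions
```

Subjects' coordinates on the first dimension could then be correlated with trait and executive-function scores, as in the analysis described above.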
Rapid recalibration of speech perception after experiencing the McGurk illusion
Pérez-Bellido, Alexis; de Lange, Floris P.
2018-01-01
The human brain can quickly adapt to changes in the environment. One example is phonetic recalibration: a speech sound is interpreted differently depending on the visual speech and this interpretation persists in the absence of visual information. Here, we examined the mechanisms of phonetic recalibration. Participants categorized the auditory syllables /aba/ and /ada/, which were sometimes preceded by the so-called McGurk stimuli (in which an /aba/ sound, due to visual /aga/ input, is often perceived as ‘ada’). We found that only one trial of exposure to the McGurk illusion was sufficient to induce a recalibration effect, i.e. an auditory /aba/ stimulus was subsequently more often perceived as ‘ada’. Furthermore, phonetic recalibration took place only when auditory and visual inputs were integrated to ‘ada’ (McGurk illusion). Moreover, this recalibration depended on the sensory similarity between the preceding and current auditory stimulus. Finally, signal detection theoretical analysis showed that McGurk-induced phonetic recalibration resulted in both a criterion shift towards /ada/ and a reduced sensitivity to distinguish between /aba/ and /ada/ sounds. The current study shows that phonetic recalibration is dependent on the perceptual integration of audiovisual information and leads to a perceptual shift in phoneme categorization. PMID:29657743
Matsuoka, A J; Abbas, P J; Rubinstein, J T; Miller, C A
2000-11-01
Experimental results from humans and animals show that electrically evoked compound action potential (EAP) responses to constant-amplitude pulse train stimulation can demonstrate an alternating pattern, due to the combined effects of highly synchronized responses to electrical stimulation and refractory effects (Wilson et al., 1994). One way to improve signal representation is to reduce the level of across-fiber synchrony and hence the level of the amplitude alternation. To accomplish this goal, we examined EAP responses in the presence of Gaussian noise added to the pulse train stimulus. Adding Gaussian noise to the pulse trains at a level approximately -30 dB relative to EAP threshold decreased the amount of alternation, indicating that stochastic resonance may be induced in the auditory nerve. The use of some type of conditioning stimulus, such as Gaussian noise, may provide a more 'normal' neural response pattern.
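A small sketch of the conditioning stimulus described above: a constant-amplitude biphasic pulse train with low-level Gaussian noise added. The pulse rate, the biphasic sample layout, and the use of pulse amplitude as the 0 dB reference (standing in for EAP threshold) are illustrative assumptions.

```python
# Sketch: biphasic pulse train plus low-level Gaussian noise.
# All rates, amplitudes, and the dB reference are assumptions.
import numpy as np

fs = 100_000                        # sample rate, Hz
rate = 1_000                        # pulses per second
n = int(fs * 0.05)                  # 50 ms of stimulus
period = fs // rate

train = np.zeros(n)
train[::period] = 1.0               # cathodic phase
train[1::period] = -1.0             # anodic phase of each biphasic pulse

sigma = 10 ** (-30.0 / 20)          # noise ~ -30 dB re: pulse amplitude
noisy_train = train + np.random.default_rng(1).normal(0.0, sigma, n)
```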
Nakagawa, A; Sukigara, M
2000-09-01
The purpose of this study was to examine the relationship between familiarity and laterality in reading Japanese Kana words. In two divided-visual-field experiments, three- or four-character Hiragana or Katakana words were presented in both familiar and unfamiliar scripts, to which subjects performed lexical decisions. Experiment 1, using three stimulus durations (40, 100, 160 ms), suggested that only in the unfamiliar-script condition did increasing the stimulus presentation time affect each visual field differently. To examine this lateral difference during the processing of unfamiliar scripts as related to attentional laterality, a concurrent auditory shadowing task was added in Experiment 2. The results suggested that processing words in an unfamiliar script requires attention, which could be left-hemisphere lateralized, whereas orthographically familiar Kana words can be processed automatically on the basis of their word-level orthographic representations or visual word form. Copyright 2000 Academic Press.
NASA Technical Reports Server (NTRS)
Schwent, V. L.; Hillyard, S. A.; Galambos, R.
1976-01-01
The effects of varying the rate of delivery of dichotic tone pip stimuli on selective attention, measured by evoked-potential amplitudes and signal detectability scores, were studied. The subjects attended to one channel (ear) of tones, ignored the other, and pressed a button whenever occasional targets (tones of a slightly higher pitch) were detected in the attended ear. Under separate conditions, randomized interstimulus intervals were short, medium, and long. Another study compared the effects of attention on the N1 component of the auditory evoked potential for tone pips presented alone and when white noise was added to make the tones barely above detectability threshold in a three-channel listening task. Major conclusions are that (1) N1 is enlarged to stimuli in an attended channel only in the short interstimulus interval condition (averaging 350 msec), (2) N1 and P3 are related to different modes of selective attention, and (3) attention selectivity in a multichannel listening task is greater when tones are faint and/or difficult to detect.
van der Aa, Jeroen; Honing, Henkjan; ten Cate, Carel
2015-06-01
Perceiving temporal regularity in an auditory stimulus is considered one of the basic features of musicality. Here we examine whether zebra finches can detect regularity in an isochronous stimulus. Using a go/no-go paradigm, we show that zebra finches are able to distinguish between an isochronous and an irregular stimulus. However, when the tempo of the isochronous stimulus is changed, it is no longer treated as similar to the training stimulus. Training with three isochronous and three irregular stimuli did not result in improvement of the generalization. In contrast, humans exposed to the same stimuli readily generalized across tempo changes. Our results suggest that zebra finches distinguish the different stimuli by learning specific local temporal features of each individual stimulus rather than attending to the global structure of the stimuli, i.e., to the temporal regularity. Copyright © 2015 Elsevier B.V. All rights reserved.
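The isochronous versus irregular stimuli can be sketched as onset-time sequences; the base tempo and jitter range below are illustrative assumptions, not the study's stimulus parameters.

```python
# Sketch: isochronous vs. irregular onset sequences for a regularity
# discrimination task. Tempo and jitter range are assumptions.
import numpy as np

def onsets_ms(n_events=12, ioi_ms=250.0, jitter_frac=0.0, seed=0):
    """Onset times; each inter-onset interval (IOI) is jittered by up to
    +/- jitter_frac of the base IOI (0 -> perfectly isochronous)."""
    rng = np.random.default_rng(seed)
    iois = ioi_ms * (1 + rng.uniform(-jitter_frac, jitter_frac, n_events - 1))
    return np.concatenate(([0.0], np.cumsum(iois)))

isochronous = onsets_ms()                    # regular: equal intervals
irregular = onsets_ms(jitter_frac=0.35)      # irregular: same mean tempo
```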
Auditory pathways: anatomy and physiology.
Pickles, James O
2015-01-01
This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream and a dorsal, mainly pattern-recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus at the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from the cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.
Background sounds contribute to spectrotemporal plasticity in primary auditory cortex.
Moucha, Raluca; Pandya, Pritesh K; Engineer, Navzer D; Rathbun, Daniel L; Kilgard, Michael P
2005-05-01
The mammalian auditory system evolved to extract meaningful information from complex acoustic environments. Spectrotemporal selectivity of auditory neurons provides a potential mechanism to represent natural sounds. Experience-dependent plasticity mechanisms can remodel the spectrotemporal selectivity of neurons in primary auditory cortex (A1). Electrical stimulation of the cholinergic nucleus basalis (NB) enables plasticity in A1 that parallels natural learning and is specific to acoustic features associated with NB activity. In this study, we used NB stimulation to explore how cortical networks reorganize after experience with frequency-modulated (FM) sweeps, and how background stimuli contribute to spectrotemporal plasticity in rat auditory cortex. Pairing an 8-4 kHz FM sweep with NB stimulation 300 times per day for 20 days decreased tone thresholds, frequency selectivity, and response latency of A1 neurons in the region of the tonotopic map activated by the sound. In an attempt to modify neuronal response properties across all of A1, the same NB activation was paired in a second group of rats with five downward FM sweeps, each spanning a different octave. No changes in FM selectivity or receptive field (RF) structure were observed when the neural activation was distributed across the cortical surface. However, the addition of unpaired background sweeps of different rates or direction was sufficient to alter RF characteristics across the tonotopic map in a third group of rats. These results extend earlier observations that cortical neurons can develop stimulus-specific plasticity and indicate that background conditions can strongly influence cortical plasticity.
Effect of Human Auditory Efferent Feedback on Cochlear Gain and Compression
Drga, Vit; Plack, Christopher J.
2014-01-01
The mammalian auditory system includes a brainstem-mediated efferent pathway from the superior olivary complex by way of the medial olivocochlear system, which reduces the cochlear response to sound (Warr and Guinan, 1979; Liberman et al., 1996). The human medial olivocochlear response has an onset delay of between 25 and 40 ms and rise and decay constants in the region of 280 and 160 ms, respectively (Backus and Guinan, 2006). Physiological studies with nonhuman mammals indicate that the onset and decay characteristics of efferent activation depend on the temporal and level characteristics of the auditory stimulus (Bacon and Smith, 1991; Guinan and Stankovic, 1996). This study uses a novel psychoacoustical masking technique with a precursor sound to obtain a measure of the efferent effect in humans. This technique avoids confounds currently associated with other psychoacoustical measures. Both the temporal and level dependency of the efferent effect were measured, providing a comprehensive measure of the effect of human auditory efferents on cochlear gain and compression. Results indicate that a precursor (>20 dB SPL) induced efferent activation, resulting in a decrease in both maximum gain and maximum compression, with linearization of the compressive function for input sound levels between 50 and 70 dB SPL. Estimated gain decreased as precursor level increased, and increased as the silent interval between the precursor and the combined masker-signal stimulus increased, consistent with a decay of the efferent effect. Human auditory efferent activation linearizes the cochlear response for mid-level sounds while reducing maximum gain. PMID:25392499
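The gain-reduction and linearization result described above can be sketched with a standard broken-stick input-output function; the breakpoint, gain values, and compression exponent below are illustrative assumptions rather than the study's fitted values.

```python
# Sketch: broken-stick cochlear input-output function; efferent activation
# modeled as reduced gain plus a more linear (higher-slope) mid-level
# region. All parameter values are illustrative assumptions.
import numpy as np

def cochlear_io(level_db, gain_db=40.0, knee_db=50.0, comp_exp=0.2):
    """Output (dB) vs. input level (dB SPL): linear below the knee,
    compressive (slope = comp_exp) above it."""
    level_db = np.asarray(level_db, dtype=float)
    below = level_db + gain_db
    above = knee_db + gain_db + comp_exp * (level_db - knee_db)
    return np.where(level_db < knee_db, below, above)

levels = np.arange(20, 91, 10)
baseline = cochlear_io(levels)
with_efferent = cochlear_io(levels, gain_db=25.0, comp_exp=0.5)  # less gain, more linear
```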