Medial Auditory Thalamus Inactivation Prevents Acquisition and Retention of Eyeblink Conditioning
ERIC Educational Resources Information Center
Halverson, Hunter E.; Poremba, Amy; Freeman, John H.
2008-01-01
The auditory conditioned stimulus (CS) pathway that is necessary for delay eyeblink conditioning was investigated using reversible inactivation of the medial auditory thalamic nuclei (MATN) consisting of the medial division of the medial geniculate (MGm), suprageniculate (SG), and posterior intralaminar nucleus (PIN). Rats were given saline or…
ERIC Educational Resources Information Center
Halverson, Hunter E.; Poremba, Amy; Freeman, John H.
2015-01-01
Associative learning tasks commonly involve an auditory stimulus, which must be projected through the auditory system to the sites of memory induction for learning to occur. The cochlear nucleus (CN) projection to the pontine nuclei has been posited as the necessary auditory pathway for cerebellar learning, including eyeblink conditioning.…
Medial Auditory Thalamic Stimulation as a Conditioned Stimulus for Eyeblink Conditioning in Rats
ERIC Educational Resources Information Center
Campolattaro, Matthew M.; Halverson, Hunter E.; Freeman, John H.
2007-01-01
The neural pathways that convey conditioned stimulus (CS) information to the cerebellum during eyeblink conditioning have not been fully delineated. It is well established that pontine mossy fiber inputs to the cerebellum convey CS-related stimulation for different sensory modalities (e.g., auditory, visual, tactile). Less is known about the…
Okuda, Yuji; Shikata, Hiroshi; Song, Wen-Jie
2011-09-01
As a step toward developing an auditory prosthesis based on cortical stimulation, we tested whether a single train of pulses applied to the primary auditory cortex (AI) could elicit classically conditioned behavior in guinea pigs. Animals were trained using a tone as the conditioned stimulus and an electrical shock to the right eyelid as the unconditioned stimulus. After conditioning, a train of 11 pulses applied to the left AI induced the conditioned eye-blink response. Cortical stimulation induced no response after extinction. Our results support the feasibility of auditory prosthesis by electrical stimulation of the cortex. Copyright © 2011 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Impaired eye-blink conditioning in waggler, a mutant mouse with cerebellar BDNF deficiency.
Bao, S; Chen, L; Qiao, X; Knusel, B; Thompson, R F
1998-01-01
In addition to their trophic functions, neurotrophins are also implicated in synaptic modulation and learning and memory. Although gene knockout techniques have been used widely in studying the roles of neurotrophins at molecular and cellular levels, behavioral studies using neurotrophin knockouts are limited by the early-onset lethality and various sensory deficits associated with the gene knockout mice. In the present study, we found that in a spontaneous mutant mouse, waggler, the expression of brain-derived neurotrophic factor (BDNF) was selectively absent in the cerebellar granule cells. The cytoarchitecture of the waggler cerebellum appeared to be normal at the light microscope level. The mutant mice exhibited no sensory deficits to auditory stimuli or heat-induced pain. However, they were massively impaired in classic eye-blink conditioning. These results suggest that BDNF may have a role in normal cerebellar neuronal function, which, in turn, is essential for classic eye-blink conditioning.
Eyeblink conditioning in the developing rabbit
Brown, Kevin L.; Woodruff-Pak, Diana S.
2011-01-01
Eyeblink classical conditioning in pre-weanling rabbits was examined in the present study. Using a custom lightweight headpiece and restrainer, New Zealand white littermates were trained once daily in 400 ms delay eyeblink classical conditioning from postnatal days (PD) 17–21 or PD 24–28. These ages were chosen because eyeblink conditioning emerges gradually over PD 17–24 in rats (Stanton, Freeman, & Skelton, 1992), another altricial species with neurodevelopmental features similar to those of rabbits. Consistent with well-established findings in rats, rabbits trained from PD 24–28 showed greater conditioning relative to littermates trained from PD 17–21. Both age groups displayed poor retention of eyeblink conditioning at retraining one month after acquisition. These findings are the first to demonstrate eyeblink conditioning in the developing rabbit. With further characterization of optimal conditioning parameters, this preparation may have applications to neurodevelopmental disease models as well as research exploring the ontogeny of memory. PMID:21953433
Cerebellar learning mechanisms
Freeman, John H.
2014-01-01
The mechanisms underlying cerebellar learning are reviewed with an emphasis on old arguments and new perspectives on eyeblink conditioning. Eyeblink conditioning has been used for decades as a model system for elucidating cerebellar learning mechanisms. The standard model of the mechanisms underlying eyeblink conditioning is that there are two synaptic plasticity processes within the cerebellum that are necessary for acquisition of the conditioned response: 1) long-term depression (LTD) at parallel fiber-Purkinje cell synapses and 2) long-term potentiation (LTP) at mossy fiber-interpositus nucleus synapses. Additional Purkinje cell plasticity mechanisms may also contribute to eyeblink conditioning, including LTP, excitability, and entrainment of deep nucleus activity. Recent analyses of the sensory input pathways necessary for eyeblink conditioning indicate that the cerebellum regulates its inputs to facilitate learning and maintain plasticity. Cerebellar learning during eyeblink conditioning is therefore a dynamic interactive process that maximizes responding to significant stimuli and suppresses responding to irrelevant or redundant stimuli. PMID:25289586
Parallel Acquisition of Awareness and Differential Delay Eyeblink Conditioning
ERIC Educational Resources Information Center
Weidemann, Gabrielle; Antees, Cassandra
2012-01-01
There is considerable debate about whether differential delay eyeblink conditioning can be acquired without awareness of the stimulus contingencies. Previous investigations of the relationship between differential-delay eyeblink conditioning and awareness of the stimulus contingencies have assessed awareness after the conditioning session was…
Neural circuitry and plasticity mechanisms underlying delay eyeblink conditioning
Freeman, John H.; Steinmetz, Adam B.
2011-01-01
Pavlovian eyeblink conditioning has been used extensively as a model system for examining the neural mechanisms underlying associative learning. Delay eyeblink conditioning depends on the intermediate cerebellum ipsilateral to the conditioned eye. Evidence favors a two-site plasticity model within the cerebellum with long-term depression of parallel fiber synapses on Purkinje cells and long-term potentiation of mossy fiber synapses on neurons in the anterior interpositus nucleus. Conditioned stimulus and unconditioned stimulus inputs arise from the pontine nuclei and inferior olive, respectively, converging in the cerebellar cortex and deep nuclei. Projections from subcortical sensory nuclei to the pontine nuclei that are necessary for eyeblink conditioning are beginning to be identified, and recent studies indicate that there are dynamic interactions between sensory thalamic nuclei and the cerebellum during eyeblink conditioning. Cerebellar output is projected to the magnocellular red nucleus and then to the motor nuclei that generate the blink response(s). Tremendous progress has been made toward determining the neural mechanisms of delay eyeblink conditioning but there are still significant gaps in our understanding of the necessary neural circuitry and plasticity mechanisms underlying cerebellar learning. PMID:21969489
Steinmetz, Adam B; Ng, Ka H; Freeman, John H
2017-06-01
Amygdala lesions impair, but do not prevent, acquisition of cerebellum-dependent eyeblink conditioning suggesting that the amygdala modulates cerebellar learning. Two-factor theories of eyeblink conditioning posit that a fast-developing memory within the amygdala facilitates slower-developing memory within the cerebellum. The current study tested this hypothesis by impairing memory consolidation within the amygdala with inhibition of protein synthesis, transcription, and NMDA receptors in rats. Rats given infusions of anisomycin or DRB into the central amygdala (CeA) immediately after each eyeblink conditioning session were severely impaired in contextual and cued fear conditioning, but were completely unimpaired in eyeblink conditioning. Rats given the NMDA antagonist ifenprodil into the CeA before each eyeblink conditioning session also showed impaired fear conditioning, but no deficit in eyeblink conditioning. The results indicate that memory formation within the CeA is not necessary for its modulation of cerebellar learning mechanisms. The CeA may modulate cerebellar learning and retention through an attentional mechanism that develops within the training sessions. © 2017 Steinmetz et al.; Published by Cold Spring Harbor Laboratory Press.
Extinction, Reacquisition, and Rapid Forgetting of Eyeblink Conditioning in Developing Rats
ERIC Educational Resources Information Center
Brown, Kevin L.; Freeman, John H.
2014-01-01
Eyeblink conditioning is a well-established model for studying the developmental neurobiology of associative learning and memory. However, age differences in extinction and subsequent reacquisition have yet to be studied using this model. The present study examined extinction and reacquisition of eyeblink conditioning in developing rats. In…
ERIC Educational Resources Information Center
Halverson, Hunter E.; Freeman, John H.
2010-01-01
The conditioned stimulus (CS) pathway that is necessary for visual delay eyeblink conditioning was investigated in the current study. Rats were initially given eyeblink conditioning with stimulation of the ventral nucleus of the lateral geniculate (LGNv) as the CS followed by conditioning with light and tone CSs in separate training phases.…
Contextual Specificity of Extinction of Delay but Not Trace Eyeblink Conditioning in Humans
ERIC Educational Resources Information Center
Grillon, Christian; Alvarez, Ruben P.; Johnson, Linda; Chavis, Chanen
2008-01-01
Renewal of an extinguished conditioned response has been demonstrated in humans and in animals using various types of procedures, but not for motor learning such as eyeblink conditioning. We tested renewal of delay and trace eyeblink conditioning in a virtual environment in an ABA design. Following acquisition in one context (A, e.g., an…
De Pascalis, Vilfredo; Russo, Emanuela
2013-01-01
A working model of the neurophysiology of hypnosis suggests that highly hypnotizable individuals (HHs) have more effective frontal attentional systems implementing control, monitoring performance, and inhibiting unwanted stimuli from conscious awareness than low hypnotizable individuals (LHs) do. Recent studies, using prepulse inhibition (PPI) of the auditory startle reflex (ASR), suggest that HHs, in the waking condition, may show reduced sensory gating although they may selectively attend and disattend different stimuli. Using a within-subject design and a strict subject selection procedure, in waking and hypnosis conditions we tested whether HHs, compared to LHs, showed significantly lower inhibition of the ASR and startle-related brain activity in both the time and intracerebral source localization domains. HHs, as compared to LH participants, exhibited (a) longer latency of the eyeblink startle reflex, (b) reduced N100 responses to startle stimuli, and (c) higher PPI of eyeblink startle and of the P200 and P300 waves. Hypnosis yielded smaller N100 waves to startle stimuli and greater PPI of this component than the waking condition. sLORETA analysis revealed that, for the N100 (107 msec) elicited during startle trials, HHs had smaller activation in the left parietal lobe (BA2/40) than LHs. Auditory pulses of pulse-with-prepulse trials in HHs yielded less activity of the P300 (280 msec) wave than LHs, in the cingulate and posterior cingulate gyrus (BA23/31). The present results, on the whole, are in the opposite direction to PPI findings on hypnotizability previously reported in the literature. These results support the neuropsychophysiological model that HHs have more effective sensory integration and gating (or filtering) of irrelevant stimuli than LHs. PMID:24278150
Eyeblink conditioning is impaired in subjects with essential tremor.
Kronenbuerger, Martin; Gerwig, Marcus; Brol, Beate; Block, Frank; Timmann, Dagmar
2007-06-01
Several lines of evidence point to an involvement of the olivo-cerebellar system in the pathogenesis of essential tremor (ET), with clinical signs of cerebellar dysfunction being present in some subjects in the advanced stage. Besides motor coordination, the cerebellum is critically involved in motor learning. Evidence of motor learning deficits would strengthen the hypothesis of olivo-cerebellar involvement in ET. Conditioning of the eyeblink reflex is a well-established paradigm to assess motor learning. Twenty-three ET subjects (13 males, 10 females; mean age 44.3 ± 22.3 years, mean disease duration 17.4 ± 17.3 years) and 23 age-matched healthy controls were studied on two consecutive days using a standard delay eyeblink conditioning protocol. Six ET subjects exhibited accompanying clinical signs of cerebellar dysfunction. Care was taken to examine subjects without medication affecting central nervous functioning. Seven ET subjects and three controls on low-dose beta-blocker treatments, which had no effect on eyeblink conditioning in animal studies, were allowed into the study. The ability to acquire conditioned eyeblink responses was significantly reduced in ET subjects compared with controls. Impairment of eyeblink conditioning was not due to low-dose beta-blocker medication. Additionally, acquisition of conditioned eyeblink responses was reduced in ET subjects regardless of the presence of cerebellar signs in clinical examination. There were no differences in timing or extinction of conditioned responses between groups, and conditioning deficits did not correlate with the degree of tremor or ataxia as rated by clinical scores. The findings of disordered eyeblink conditioning support the hypothesis that ET is caused by a functional disturbance of olivo-cerebellar circuits which may cause cerebellar dysfunction. In particular, results point to an involvement of the olivo-cerebellar system in early stages of ET.
Retention and Extinction of Delay Eyeblink Conditioning Are Modulated by Central Cannabinoids
ERIC Educational Resources Information Center
Steinmetz, Adam B.; Freeman, John H.
2011-01-01
Rats administered the cannabinoid agonist WIN55,212-2 or the antagonist SR141716A exhibit marked deficits during acquisition of delay eyeblink conditioning, as noted by Steinmetz and Freeman in an earlier study. However, the effects of these drugs on retention and extinction of eyeblink conditioning have not been assessed. The present study…
Chen, Hao; Wang, Yi-jie; Yang, Li; Sui, Jian-feng; Hu, Zhi-an; Hu, Bo
2016-01-01
Associative learning is thought to require coordinated activities among distributed brain regions. For example, to direct behavior appropriately, the medial prefrontal cortex (mPFC) must encode and maintain sensory information and then interact with the cerebellum during trace eyeblink conditioning (TEBC), a commonly-used associative learning model. However, the mechanisms by which these two distant areas interact remain elusive. By simultaneously recording local field potential (LFP) signals from the mPFC and the cerebellum in guinea pigs undergoing TEBC, we found that theta-frequency (5.0–12.0 Hz) oscillations in the mPFC and the cerebellum became strongly synchronized following presentation of the auditory conditioned stimulus. Intriguingly, the conditioned eyeblink response (CR) with adaptive timing occurred preferentially in the trials where mPFC-cerebellum theta coherence was stronger. Moreover, both the mPFC-cerebellum theta coherence and the adaptive CR performance were impaired after the disruption of endogenous orexins in the cerebellum. Finally, the association of mPFC-cerebellum theta coherence with adaptive CR performance was time-limited, occurring in the early stage of associative learning. These findings suggest that the mPFC and the cerebellum may act together to contribute to the adaptive performance of associative learning behavior by means of theta synchronization. PMID:26879632
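The theta-band coherence measure used in this kind of analysis can be sketched in a few lines. This is a minimal illustration on synthetic signals, not the authors' pipeline: the sampling rate, window length, and the shared 8 Hz component are assumptions chosen for the example.

```python
import numpy as np
from scipy.signal import coherence

# Hypothetical parameters: 1 kHz sampling, 2 s of simulated LFP per region.
fs = 1000
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)

# Shared 8 Hz theta component plus independent noise in each "region".
theta = np.sin(2 * np.pi * 8 * t)
mpfc = theta + 0.5 * rng.standard_normal(t.size)
cereb = theta + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence, then average within the 5.0-12.0 Hz theta band.
f, cxy = coherence(mpfc, cereb, fs=fs, nperseg=512)
theta_band = (f >= 5) & (f <= 12)
theta_coherence = cxy[theta_band].mean()
print(theta_coherence)
```

Because the two traces share the 8 Hz component, their coherence in the theta band is substantially higher than at frequencies where only independent noise is present.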
Knuttinen, M-G; Parrish, T B; Weiss, C; LaBar, K S; Gitelman, D R; Power, J M; Mesulam, M-M; Disterhoft, J F
2002-10-01
This study was designed to develop a suitable method of recording eyeblink responses while conducting functional magnetic resonance imaging (fMRI). Given the complexity of this behavioral setup outside of the magnet, this study sought to adapt and further optimize an approach to eyeblink conditioning that would be suitable for conducting event-related fMRI experiments. This method involved the acquisition of electromyographic (EMG) signals from the orbicularis oculi of the right eye, which were subsequently amplified and converted into an optical signal outside of the head coil. This optical signal was converted back into an electrical signal once outside the magnet room. Electromyography (EMG)-detected eyeblinks were used to measure responses in a delay eyeblink conditioning paradigm. Our results indicate that: (1) electromyography is a sensitive method for the detection of eyeblinks during fMRI; (2) minimal interactions or artifacts of the EMG signal were created from the magnetic resonance pulse sequence; and (3) no electromyography-related artifacts were detected in the magnetic resonance images. Furthermore, an analysis of the functional data showed areas of activation that have previously been shown in positron emission tomography studies of human eyeblink conditioning. Our results support the strength of this behavioral setup as a suitable method to be used in association with fMRI.
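EMG-detected eyeblinks of the kind described above are commonly scored by rectifying the signal, smoothing it into an envelope, and thresholding against pre-stimulus baseline variability. A minimal sketch under those assumptions follows; the function name and all parameter values are illustrative, not taken from the published protocol.

```python
import numpy as np

def detect_blinks(emg, fs, baseline_s=0.5, threshold_sd=4.0, smooth_ms=10):
    """Rectify, smooth, and threshold an EMG trace; return sample indices
    of detected blink onsets. Parameters are illustrative, not from the
    published protocol."""
    rectified = np.abs(emg - emg.mean())
    # Moving-average smoothing over a short window to form an envelope.
    win = max(1, int(fs * smooth_ms / 1000))
    envelope = np.convolve(rectified, np.ones(win) / win, mode="same")
    # Threshold from pre-stimulus baseline mean and variability.
    base = envelope[: int(fs * baseline_s)]
    thresh = base.mean() + threshold_sd * base.std()
    above = envelope > thresh
    # Onsets: samples where the envelope first crosses the threshold.
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

# Synthetic demo: 2 s of noise with a simulated blink burst at 1.2 s.
fs = 1000
rng = np.random.default_rng(1)
emg = 0.05 * rng.standard_normal(2000)
emg[1200:1300] += 1.0
onsets = detect_blinks(emg, fs)
```

With these settings the simulated burst crosses threshold near its onset, while baseline noise stays below it.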
Weidemann, Gabrielle; Tangen, Jason M; Lovibond, Peter F; Mitchell, Christopher J
2009-04-01
P. Perruchet (1985b) showed a double dissociation of conditioned responses (CRs) and expectancy for an airpuff unconditioned stimulus (US) in a 50% partial reinforcement schedule in human eyeblink conditioning. In the Perruchet effect, participants show an increase in CRs and a concurrent decrease in expectancy for the airpuff across runs of reinforced trials; conversely, participants show a decrease in CRs and a concurrent increase in expectancy for the airpuff across runs of nonreinforced trials. Three eyeblink conditioning experiments investigated whether the linear trend in eyeblink CRs in the Perruchet effect is a result of changes in associative strength of the conditioned stimulus (CS), US sensitization, or learning the precise timing of the US. Experiments 1 and 2 demonstrated that the linear trend in eyeblink CRs is not the result of US sensitization. Experiment 3 showed that the linear trend in eyeblink CRs is present with both a fixed and a variable CS-US interval and so is not the result of learning the precise timing of the US. The results are difficult to reconcile with a single learning process model of associative learning in which expectancy mediates CRs. Copyright (c) 2009 APA, all rights reserved.
Both trace and delay conditioned eyeblink responding can be dissociated from outcome expectancy.
Weidemann, Gabrielle; Broderick, Joshua; Lovibond, Peter F; Mitchell, Christopher J
2012-01-01
Squire and colleagues have proposed that trace and delay eyeblink conditioning are fundamentally different kinds of learning: trace conditioning requires acquisition of a conscious declarative memory for the stimulus contingencies whereas delay conditioning does not. Declarative memory in trace conditioning is thought to generate conditioned responding through the activation of a conscious expectancy for when the unconditioned stimulus (US) is going to occur. Perruchet (1985) has previously shown that in a 50% partial reinforcement design it is possible to dissociate single cue delay eyeblink conditioning from conscious expectancy for the US by examining performance over runs of reinforced and nonreinforced trials. Clark, Manns, and Squire (2001) claim that this dissociation does not occur in trace eyeblink conditioning. In the present experiment we examined the Perruchet effect for short, moderate, and long trace intervals (600, 1000, and 1400 ms) and for the equivalent interstimulus intervals (ISIs) in a delay conditioning procedure. We found evidence for a dissociation of eyeblink CRs and US expectancy over runs regardless of whether there was a delay or a trace arrangement of cues. The reasons for the Perruchet effect are still unclear, but the present data suggest that it does not depend on a separate nondeclarative system of the type proposed by Squire and colleagues. (c) 2012 APA, all rights reserved.
Thanellou, Alexandra; Green, John T.
2011-01-01
Reinstatement, the return of an extinguished conditioned response (CR) after reexposure to the unconditioned stimulus (US), and spontaneous recovery, the return of an extinguished CR with the passage of time, are two of four well-established phenomena which demonstrate that extinction does not erase the conditioned stimulus (CS)-US association. However, reinstatement of extinguished eyeblink CRs has never been demonstrated and spontaneous recovery of extinguished eyeblink CRs has not been systematically demonstrated in rodent eyeblink conditioning. In Experiment 1, US reexposure was administered 24 hours prior to a reinstatement test. In Experiment 2, US reexposure was administered 5 min prior to a reinstatement test. In Experiment 3, a long, discrete cue (a houselight), present in all phases of training and testing, served as a context within which each trial occurred to maximize context processing, which in other preparations has been shown to be required for reinstatement. In Experiment 4, an additional group was included that received footshock exposure, rather than US reexposure, between extinction and test, and contextual freezing was measured prior to test. Spontaneous recovery was robust in Experiments 3 and 4. In Experiment 4, context freezing was strong in a group given footshock exposure but not in a group given eyeshock US reexposure. There was no reinstatement observed in any experiment. With stimulus conditions that produce eyeblink conditioning and research designs that produce reinstatement in other forms of classical conditioning, we observed spontaneous recovery but not reinstatement of extinguished eyeblink CRs. This suggests that reinstatement, but not spontaneous recovery, is a preparation- or substrate-dependent phenomenon. PMID:21517145
Disrupted sensory gating in pathological gambling.
Stojanov, Wendy; Karayanidis, Frini; Johnston, Patrick; Bailey, Andrew; Carr, Vaughan; Schall, Ulrich
2003-08-15
Some neurochemical evidence as well as recent studies on molecular genetics suggest that pathologic gambling may be related to dysregulated dopamine neurotransmission. The current study examined sensory (motor) gating in pathologic gamblers as a putative measure of endogenous brain dopamine activity with prepulse inhibition of the acoustic startle eye-blink response and the auditory P300 event-related potential. Seventeen pathologic gamblers and 21 age- and gender-matched healthy control subjects were assessed. Both prepulse inhibition measures were recorded under passive listening and two-tone prepulse discrimination conditions. Compared to the control group, pathologic gamblers exhibited disrupted sensory (motor) gating on all measures of prepulse inhibition. Sensory motor gating deficits of eye-blink responses were most profound at 120-millisecond prepulse lead intervals in the passive listening task and at 240-millisecond prepulse lead intervals in the two-tone prepulse discrimination task. Sensory gating of P300 was also impaired in pathologic gamblers, particularly at 500-millisecond lead intervals, when performing the discrimination task on the prepulse. In the context of preclinical studies on the disruptive effects of dopamine agonists on prepulse inhibition, our findings suggest increased endogenous brain dopamine activity in pathologic gambling in line with previous neurobiological findings.
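Prepulse inhibition of the sort measured above is conventionally expressed as the percent reduction of the startle response when the pulse is preceded by a prepulse. A minimal sketch of that computation, with invented magnitudes for illustration (these are not data from the study):

```python
def percent_ppi(pulse_alone_mag, prepulse_pulse_mag):
    """Percent prepulse inhibition: how much a weak prepulse reduces the
    startle response to the subsequent pulse. Positive values indicate
    inhibition; disrupted gating appears as lower %PPI."""
    return 100.0 * (pulse_alone_mag - prepulse_pulse_mag) / pulse_alone_mag

# Illustrative magnitudes (arbitrary EMG units), not data from the study.
control = percent_ppi(100.0, 35.0)  # strong gating -> 65.0 %PPI
gambler = percent_ppi(100.0, 70.0)  # weaker gating -> 30.0 %PPI
```

Lower %PPI in the second case corresponds to the disrupted sensory (motor) gating the study reports.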
Central Cannabinoid Receptors Modulate Acquisition of Eyeblink Conditioning
ERIC Educational Resources Information Center
Steinmetz, Adam B.; Freeman, John H.
2010-01-01
Delay eyeblink conditioning is established by paired presentations of a conditioned stimulus (CS) such as a tone or light, and an unconditioned stimulus (US) that elicits the blink reflex. Conditioned stimulus information is projected from the basilar pontine nuclei to the cerebellar interpositus nucleus and cortex. The cerebellar cortex,…
Eyeblink Conditioning in Healthy Adults: A Positron Emission Tomography Study
Andreasen, Nancy C.; Liu, Dawei; Freeman, John H.; Boles Ponto, Laura L.; O’Leary, Daniel S.
2013-01-01
Eyeblink conditioning is a paradigm commonly used to investigate the neural mechanisms underlying motor learning. It involves the paired presentation of a tone conditioning stimulus which precedes and co-terminates with an airpuff unconditioned stimulus. Following repeated paired presentations, a conditioned eyeblink develops which precedes the airpuff. This type of learning has been intensively studied and the cerebellum is known to be essential in both humans and animals. The study presented here was designed to investigate the role of the cerebellum during eyeblink conditioning in humans using positron emission tomography (PET). The sample includes 20 subjects (10 male and 10 female) with an average age of 29.2 years. PET imaging was used to measure regional cerebral blood flow (rCBF) changes occurring during the first, second, and third blocks of conditioning. In addition, stimuli-specific rCBF to unpaired tones and airpuffs (“pseudoconditioning”) was used as a baseline level that was subtracted from each block. Conditioning was performed using three 15-trial blocks of classical eyeblink conditioning with the last five trials in each block imaged. As expected, subjects quickly acquired conditioned responses. A comparison between the conditioning tasks and the baseline task revealed that during learning there was activation of the cerebellum and recruitment of several higher cortical regions. Specifically, large peaks were noted in cerebellar lobules IV/V, the frontal lobes, and cingulate gyri. PMID:22430943
Claassen, J; Mazilescu, L; Thieme, A; Bracha, V; Timmann, D
2016-01-01
Context dependency of extinction is well known and has extensively been studied in fear conditioning, but has rarely been assessed in eyeblink conditioning. One way to demonstrate context dependency of extinction is the renewal effect. ABA paradigms are most commonly used to show the renewal effect of extinguished learned fear: if acquisition takes place in context A, and extinction takes place in context B (extinction phase), learned responses will recover in subsequent extinction trials presented in context A (renewal phase). The renewal effect of the visual threat eyeblink response (VTER), a conditioned eyeblink response which is naturally acquired in early infancy, was examined in a total of 48 young and healthy participants with two experiments using an ABA paradigm. Twenty paired trials were performed in context A (baseline trials), followed by 50 extinction trials in context B (extinction phase) and 50 extinction trials in context A (renewal phase). In 24 participants, contexts A and B were two different rooms, and in the other 24 participants, two different background colors (orange and blue) and noises were used. To rule out spontaneous recovery, an AAA design was used for comparison. There were significant effects of extinction in both experiments. No significant renewal effects were observed. In experiment 2, however, extinction was significantly weaker with the orange background than with the blue background. The present findings suggest that extinction of conditioned eyeblinks depends on the physical context. Findings add to the animal literature that context can play a role in the acquisition of classically conditioned eyeblink responses. Future studies, however, need to be performed to confirm the present findings. Lack of a renewal effect may be explained by the highly overlearned character of the VTER.
The Role of Contingency Awareness in Single-Cue Human Eyeblink Conditioning
ERIC Educational Resources Information Center
Weidemann, Gabrielle; Best, Erin; Lee, Jessica C; Lovibond, Peter F.
2013-01-01
Single-cue delay eyeblink conditioning is presented as a prototypical example of automatic, nonsymbolic learning that is carried out by subcortical circuits. However, it has been difficult to assess the role of cognition in single-cue conditioning because participants become aware of the simple stimulus contingency so quickly. In this experiment…
ERIC Educational Resources Information Center
Cicchese, Joseph J.; Darling, Ryan D.; Berry, Stephen D.
2015-01-01
Eyeblink conditioning given in the explicit presence of hippocampal θ results in accelerated learning and enhanced multiple-unit responses, with slower learning and suppression of unit activity under non-θ conditions. Recordings from putative pyramidal cells during θ-contingent training show that pretrial θ-state is linked to the probability of…
ERIC Educational Resources Information Center
Weeks, Andrew C. W.; Connor, Steve; Hinchcliff, Richard; LeBoutillier, Janelle C.; Thompson, Richard F.; Petit, Ted L.
2007-01-01
Eye-blink conditioning involves the pairing of a conditioned stimulus (usually a tone) to an unconditioned stimulus (air puff), and it is well established that an intact cerebellum and interpositus nucleus, in particular, are required for this form of classical conditioning. Changes in synaptic number or structure have long been proposed as a…
Eyeblink Conditioning Deficits Indicate Timing and Cerebellar Abnormalities in Schizophrenia
ERIC Educational Resources Information Center
Brown, S.M.; Kieffaber, P.D.; Carroll, C.A.; Vohs, J.L.; Tracy, J.A.; Shekhar, A.; O'Donnell, B.F.; Steinmetz, J.E.; Hetrick, W.P.
2005-01-01
Accumulating evidence indicates that individuals with schizophrenia manifest abnormalities in structures (cerebellum and basal ganglia) and neurotransmitter systems (dopamine) linked to internal-timing processes. A single-cue tone delay eyeblink conditioning paradigm comprised of 100 learning and 50 extinction trials was used to examine cerebellar…
Blocking the BK Channel Impedes Acquisition of Trace Eyeblink Conditioning
ERIC Educational Resources Information Center
Matthews, Elizabeth A.; Disterhoft, John F.
2009-01-01
Big-K+ conductance (BK) channel-mediated fast afterhyperpolarizations (AHPs) following action potentials are reduced after eyeblink conditioning. Blocking BK channels with paxilline increases evoked firing frequency in vitro and spontaneous pyramidal activity in vivo. To examine how increased excitability after BK-channel blockade…
Sakamoto, Toshiro; Endo, Shogo
2013-01-01
Previous studies have shown that deep cerebellar nuclei (DCN)-lesioned mice develop conditioned responses (CRs) in delay eyeblink conditioning when a salient tone conditioned stimulus (CS) is used, which suggests that the cerebellum potentially plays a role in more complicated cognitive functions. In the present study, we examined the role of the DCN in tone frequency discrimination in the delay eyeblink-conditioning paradigm. In the first experiment, DCN-lesioned and sham-operated mice were subjected to standard simple eyeblink conditioning under low-frequency tone CS (LCS: 1 kHz, 80 dB) or high-frequency tone CS (HCS: 10 kHz, 70 dB) conditions. DCN-lesioned mice developed CRs in both CS conditions, as did sham-operated mice. In the second experiment, DCN-lesioned and sham-operated mice were subjected to two-tone discrimination tasks, with LCS+ (or HCS+) paired with an unconditioned stimulus (US) and HCS− (or LCS−) presented without the US. CR% in sham-operated mice increased in LCS+ (or HCS+) trials, regardless of the tone frequency of the CS, but not in HCS− (or LCS−) trials. The results indicate that sham-operated mice can discriminate between LCS+ and HCS− (or HCS+ and LCS−). In contrast, DCN-lesioned mice showed high CR% not only in LCS+ (or HCS+) trials but also in HCS− (or LCS−) trials. The results indicate that DCN lesions impair discrimination between tone frequencies in eyeblink conditioning. Our results suggest that the cerebellum plays a pivotal role in the discrimination of tone frequency. PMID:23555821
Eyeblink Conditioning: A Non-Invasive Biomarker for Neurodevelopmental Disorders
ERIC Educational Resources Information Center
Reeb-Sutherland, Bethany C.; Fox, Nathan A.
2015-01-01
Eyeblink conditioning (EBC) is a classical conditioning paradigm typically used to study the underlying neural processes of learning and memory. EBC has a well-defined neural circuitry, is non-invasive, and can be employed in human infants shortly after birth making it an ideal tool to use in both developing and special populations. In addition,…
Classical conditioning of the eyeblink reflex is a relatively simple procedure for studying associative learning that was first developed for use with human subjects more than half a century ago. The use of this procedure in laboratory animals by psychologists and neuroscientists...
Inferior Colliculus Lesions Impair Eyeblink Conditioning in Rats
ERIC Educational Resources Information Center
Freeman, John H.; Halverson, Hunter E.; Hubbard, Erin M.
2007-01-01
The neural plasticity necessary for acquisition and retention of eyeblink conditioning has been localized to the cerebellum. However, the sources of sensory input to the cerebellum that are necessary for establishing learning-related plasticity have not been identified completely. The inferior colliculus may be a source of sensory input to the…
Cerebellar Secretin Modulates Eyeblink Classical Conditioning
ERIC Educational Resources Information Center
Fuchs, Jason R.; Robinson, Gain M.; Dean, Aaron M.; Schoenberg, Heidi E.; Williams, Michael R.; Morielli, Anthony D.; Green, John T.
2014-01-01
We have previously shown that intracerebellar infusion of the neuropeptide secretin enhances the acquisition phase of eyeblink conditioning (EBC). Here, we sought to test whether endogenous secretin also regulates EBC and to test whether the effect of exogenous and endogenous secretin is specific to acquisition. In Experiment 1, rats received…
Differential Effects of the Cannabinoid Agonist WIN55,212-2 on Delay and Trace Eyeblink Conditioning
Steinmetz, Adam B.; Freeman, John H.
2014-01-01
Central cannabinoid-1 receptors (CB1R) play a role in the acquisition of delay eyeblink conditioning but not trace eyeblink conditioning in humans and animals. However, it is not clear why trace conditioning is immune to the effects of cannabinoid receptor compounds. The current study examined the effects of variants of delay and trace conditioning procedures to elucidate the factors that determine the effects of CB1R agonists on eyeblink conditioning. In Experiment 1, rats were administered the cannabinoid agonist WIN55,212-2 during delay, long delay, or trace conditioning. Rats were impaired during delay and long delay but not trace conditioning; the impairment was greater for long delay than delay conditioning. Trace conditioning was further examined in Experiment 2 by manipulating the trace interval and keeping constant the conditioned stimulus (CS) duration. It was found that when the trace interval was 300 ms or less, WIN55,212-2 administration impaired the rate of learning. Experiment 3 tested whether the trace interval duration or the relative durations of the CS and trace interval were critical parameters influencing the effects of WIN55,212-2 on eyeblink conditioning. Rats were not impaired with a 100 ms CS, 200 ms trace paradigm but were impaired with a 1000 ms CS, 500 ms trace paradigm, indicating that the duration of the trace interval does not matter but the proportion of the interstimulus interval occupied by the CS relative to the trace period is critical. Taken together, the results indicate that cannabinoid agonists affect cerebellar learning when the CS is longer than the trace interval. PMID:24128358
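The proportion logic behind Experiment 3 can be made concrete with a small check. A sketch under the assumption that the interstimulus interval is the CS duration plus the trace interval (the helper name is ours, not the authors'):

```python
# Illustrative check of the CS/trace proportions from Experiment 3
# (assumption: ISI = CS duration + trace interval).

def cs_proportion(cs_ms, trace_ms):
    """Fraction of the interstimulus interval occupied by the CS."""
    return cs_ms / (cs_ms + trace_ms)

print(cs_proportion(100, 200))   # CS shorter than trace (1/3 of ISI): spared
print(cs_proportion(1000, 500))  # CS longer than trace (2/3 of ISI): impaired
```

When the CS fills the majority of the interval, the paradigm behaves like delay conditioning and becomes sensitive to the agonist; the absolute trace duration is irrelevant.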
Nokia, Miriam S; Wikgren, Jan
2010-04-01
The relative power of the hippocampal theta-band (approximately 6 Hz) activity (theta ratio) is thought to reflect a distinct neural state and has been shown to affect learning rate in classical eyeblink conditioning in rabbits. We sought to determine whether the theta ratio is mostly related to the detection of the contingency between the stimuli used in conditioning or also to the learning of more complex inhibitory associations when a highly demanding delay discrimination-reversal eyeblink conditioning paradigm is used. A high hippocampal theta ratio was not only associated with a fast increase in conditioned responding in general but also correlated with slow emergence of discriminative responding, due to sustained responding to the conditioned stimulus not paired with an unconditioned stimulus. The results indicate that the neural state reflected by the hippocampal theta ratio is specifically linked to forming associations between stimuli rather than to the learning of inhibitory associations needed for successful discrimination. This is in line with the view that the hippocampus is responsible for contingency detection in the early phase of learning in eyeblink conditioning.
ERIC Educational Resources Information Center
Halverson, Hunter E.; Hubbard, Erin M.; Freeman, John H.
2009-01-01
The role of the cerebellum in eyeblink conditioning is well established. Less work has been done to identify the necessary conditioned stimulus (CS) pathways that project sensory information to the cerebellum. A possible visual CS pathway has been hypothesized that consists of parallel inputs to the pontine nuclei from the lateral geniculate…
ERIC Educational Resources Information Center
Suter, Eugenie E.; Weiss, Craig; Disterhoft, John F.
2013-01-01
The acquisition of temporal associative tasks such as trace eyeblink conditioning is hippocampus-dependent, while consolidated performance is not. The parahippocampal region mediates much of the input and output of the hippocampus, and perirhinal (PER) and entorhinal (EC) cortices support persistent spiking, a possible mediator of temporal…
ERIC Educational Resources Information Center
Steinmetz, Adam B.; Ng, Ka H.; Freeman, John H.
2017-01-01
Amygdala lesions impair, but do not prevent, acquisition of cerebellum-dependent eyeblink conditioning suggesting that the amygdala modulates cerebellar learning. Two-factor theories of eyeblink conditioning posit that a fast-developing memory within the amygdala facilitates slower-developing memory within the cerebellum. The current study tested…
Impaired delay eyeblink conditioning in amnesic Korsakoff's patients and recovered alcoholics.
McGlinchey-Berroth, R; Cermak, L S; Carrillo, M C; Armfield, S; Gabrieli, J D; Disterhoft, J F
1995-10-01
The performance of amnesic Korsakoff patients in delay eyeblink classical conditioning was compared with that of recovered chronic alcoholic subjects and healthy normal control subjects. Normal control subjects exhibited acquisition of conditioned responses (CRs) to a previously neutral, conditioned tone stimulus (CS) following repeated pairings with an unconditioned air-puff stimulus, and demonstrated extinction of CRs when the CS was subsequently presented alone. Both amnesic Korsakoff patients and recovered chronic alcoholic subjects demonstrated an impairment in their ability to acquire CRs. These results indicate that the preservation of delay eyeblink conditioning in amnesia must depend on the underlying neuropathology of the amnesic syndrome. It is known that patients with amnesia caused by medial temporal lobe pathology have preserved conditioning. We have now demonstrated that patients with amnesia caused by Korsakoff's syndrome, as well as recovered chronic alcoholic subjects, have impaired conditioning. This impairment is most likely caused by cerebellar deterioration resulting from years of alcohol abuse.
Tracy, Jo Anne; Thompson, Judith K; Krupa, David J; Thompson, Richard F
2013-10-01
Electrical stimulation thresholds required to elicit eyeblinks with either pontine or cerebellar interpositus stimulation were measured before and after classical eyeblink conditioning with paired pontine stimulation (conditioned stimulus, CS) and corneal airpuff (unconditioned stimulus, US). Pontine stimulation thresholds dropped dramatically after training and returned to baseline levels following extinction, whereas interpositus thresholds and input-output functions remained stable across training sessions. Learning rate, magnitude of threshold change, and electrode placements were correlated. Pontine projection patterns to the cerebellum were confirmed with retrograde labeling techniques. These results add to the body of literature suggesting that the pons relays CS information to the cerebellum and provide further evidence of synaptic plasticity in the cerebellar network.
Chau, Lily S.; Prakapenka, Alesia V.; Zendeli, Liridon; Davis, Ashley S.; Galvez, Roberto
2014-01-01
Studies utilizing general learning and memory tasks have suggested the importance of neocortical structural plasticity for memory consolidation. However, these learning tasks typically result in learning of multiple different tasks over several days of training, making it difficult to determine the synaptic time course mediating each learning event. The current study used trace-eyeblink conditioning to determine the time course for neocortical spine modification during learning. With eyeblink conditioning, subjects are presented with a neutral, conditioned stimulus (CS) paired with a salient, unconditioned stimulus (US) that elicits an unconditioned response (UR). With multiple CS-US pairings, subjects learn to associate the CS with the US and exhibit a conditioned response (CR) when presented with the CS. In trace conditioning, a stimulus-free interval separates the CS and the US. Utilizing trace-eyeblink conditioning with whisker stimulation as the CS (whisker-trace-eyeblink: WTEB), previous findings have shown that primary somatosensory (barrel) cortex is required for both acquisition and retention of the trace association. Additionally, prior findings demonstrated that WTEB acquisition results in an expansion of the cytochrome oxidase whisker representation and synaptic modification in layer IV of barrel cortex. To further explore these findings and determine the time course for neocortical learning-induced spine modification, the present study utilized WTEB conditioning to examine Golgi-Cox-stained neurons in layer IV of barrel cortex. Findings from this study demonstrated a training-dependent spine proliferation in layer IV of barrel cortex during trace associative learning.
Furthermore, the finding that filopodia-like spines exhibited a pattern similar to the overall spine density further suggests that reorganization of synaptic contacts sets the foundation for learning-induced neocortical modifications across the different neocortical layers. PMID:24760074
Kishimoto, Yasushi; Yamamoto, Shigeyuki; Suzuki, Kazutaka; Toyoda, Haruyoshi; Kano, Masanobu; Tsukada, Hideo; Kirino, Yutaka
2015-01-01
Delay eyeblink conditioning, a cerebellum-dependent learning paradigm, has been applied to various mammalian species but not yet to monkeys. We therefore developed an accurate measuring system that we believe is the first system suitable for delay eyeblink conditioning in a monkey species (Macaca mulatta). Monkey eyeblinking was simultaneously monitored by orbicularis oculi electromyographic (OO-EMG) measurements and a high-speed camera-based tracking system built around a 1-kHz CMOS image sensor. A 1-kHz tone was the conditioned stimulus (CS), while an air puff (0.02 MPa) was the unconditioned stimulus. EMG analysis showed that the monkeys exhibited a conditioned response (CR) on more than 60% of trials during the 5-day acquisition phase and extinguished the CR during the 2-day extinction phase. The camera system yielded similar results. Hence, we conclude that both methods are effective in evaluating monkey eyeblink conditioning. This system, incorporating two different measuring principles, enabled us to elucidate the relationship between the actual presence of eyelid closure and OO-EMG activity. An interesting finding permitted by the new system was that the monkeys frequently exhibited obvious CRs even when they produced visible facial signs of drowsiness or microsleep. Indeed, the probability of observing a CR in a given trial was not influenced by whether the monkeys closed their eyelids just before CS onset, suggesting that this memory could be expressed independently of wakefulness. This work presents a novel system for cognitive assessment in monkeys that will be useful for elucidating the neural mechanisms of implicit learning in nonhuman primates.
ERIC Educational Resources Information Center
Schroeder, Matthew P.; Weiss, Craig; Procissi, Daniel; Wang, Lei; Disterhoft, John F.
2016-01-01
Fluctuations in neural activity can produce states that facilitate and accelerate task-related performance. Acquisition of trace eyeblink conditioning (tEBC) in the rabbit is enhanced when trials are contingent on optimal pretrial activity in the hippocampus. Other regions which are essential for whisker-signaled tEBC, such as the cerebellar…
Purkinje Cell Activity in the Cerebellar Anterior Lobe after Rabbit Eyeblink Conditioning
ERIC Educational Resources Information Center
Green, John T.; Steinmetz, Joseph E.
2005-01-01
The cerebellar anterior lobe may play a critical role in the execution and proper timing of learned responses. The current study was designed to monitor Purkinje cell activity in the rabbit cerebellar anterior lobe after eyeblink conditioning, and to assess whether Purkinje cells in recording locations may project to the interpositus nucleus.…
Cholinergic Septo-Hippocampal Innervation Is Required for Trace Eyeblink Classical Conditioning
ERIC Educational Resources Information Center
Fontan-Lozano, Angela; Troncoso, Julieta; Munera, Alejandro; Carrion, Angel Manuel; Delgado-Garcia, Jose Maria
2005-01-01
We studied the effects of a selective lesion in rats, with 192-IgG-saporin, of the cholinergic neurons located in the medial septum/diagonal band (MSDB) complex on the acquisition of classical and instrumental conditioning paradigms. The MSDB lesion induced a marked deficit in the acquisition, but not in the retrieval, of eyeblink classical…
Classical eyeblink conditioning in Parkinson's disease.
Daum, I; Schugens, M M; Breitenstein, C; Topka, H; Spieker, S
1996-11-01
Patients with Parkinson's disease (PD) show impairments on a range of motor learning tasks, including tracking and serial reaction time task learning. Our study investigated whether such deficits would also be seen in a simple type of motor learning, classical conditioning of the eyeblink response. Medicated and unmedicated patients with PD showed intact unconditioned eyeblink responses and significant learning across acquisition; the learning rates did not differ from those of healthy control subjects. The overall frequency of conditioned responses was significantly higher in the medicated patients with PD relative to control subjects, and there was also some evidence of facilitation in the unmedicated patients with PD. Conditioning of electrodermal and electrocortical responses was comparable in all groups. The findings are discussed in terms of enhanced excitability of brainstem pathways in PD and of the involvement of different neuronal circuits in different types of motor learning.
ERIC Educational Resources Information Center
Takehara-Nishiuchi, Kaori; Kawahara, Shigenori; Kirino, Yutaka
2005-01-01
Permanent lesions in the medial prefrontal cortex (mPFC) affect acquisition of conditioned responses (CRs) during trace eyeblink conditioning and retention of remotely acquired CRs. To clarify further roles of the mPFC in this type of learning, we investigated the participation of the mPFC in mnemonic processes both during and after daily…
ERIC Educational Resources Information Center
Weiss, Craig; Sametsky, Evgeny; Sasse, Astrid; Spiess, Joachim; Disterhoft, John F.
2005-01-01
The effects of stress (restraint plus tail shock) on hippocampus-dependent trace eyeblink conditioning and hippocampal excitability were examined in C57BL/6 male mice. The results indicate that the stressor significantly increased the concentration of circulating corticosterone, the amount and rate of learning relative to nonstressed conditioned…
Allen, Michael Todd; Miller, Daniel P.
2016-01-01
Anxiety-vulnerable individuals exhibit enhanced acquisition of conditioned eyeblinks as well as enhanced proactive interference from conditioned stimulus (CS) or unconditioned stimulus (US) alone pre-exposures (Holloway et al., 2012). US alone pre-exposures disrupt subsequent conditioned response (CR) acquisition to CS-US paired trials as compared to context pre-exposure controls. While Holloway et al. (2012) reported enhanced acquisition in high trait anxiety individuals in the context condition, anxiety vulnerability effects were not reported for the US alone pre-exposure group. It appears from the published data that there were no differences between high and low anxiety individuals in the US alone condition. In the work reported here, we sought to extend the findings of enhanced proactive interference with US alone pre-exposures to determine whether the enhanced conditioning was disrupted by proactive interference procedures. We were also interested in the spontaneous eyeblinks during the pre-exposure phase of training. We categorized individuals as anxiety-vulnerable or non-vulnerable based on scores on the Adult Measure of Behavioral Inhibition (AMBI). Sixty-six participants received 60 trials consisting of 30 US alone or context alone pre-exposures followed by 30 CS-US trials. US alone pre-exposures not only disrupted CR acquisition overall, but behaviorally inhibited (BI) individuals exhibited enhanced proactive interference as compared to non-inhibited (NI) individuals. In addition, US alone pre-exposures disrupted the enhanced acquisition observed in BI individuals as compared to NI individuals following context alone pre-exposures. Differences were also found in rates of spontaneous eyeblinks between BI and NI individuals during context pre-exposure.
Our findings will be discussed in the light of the neural substrates of eyeblink conditioning as well as possible factors such as hypervigilance in the amygdala and hippocampal systems, and possible learned helplessness. Applications of these findings of enhanced proactive interference in BI individuals to pre-exposure therapies to reduce anxiety disorders such as posttraumatic stress disorder (PTSD) will be discussed. PMID:27014001
ERIC Educational Resources Information Center
Woodruff-Pak, Diana S.; Seta, Susan E.; Roker, LaToya A.; Lehr, Melissa A.
2007-01-01
The aim of this study was to examine parameters affecting age differences in eyeblink classical conditioning in a large sample of young and middle-aged rabbits. A total of 122 rabbits of mean ages of 4 or 26 mo were tested at inter-stimulus intervals (ISIs) of 600 or 750 msec in the delay or trace paradigms. Paradigm affected both age groups…
Shortened Conditioned Eyeblink Response Latency in Male but not Female Wistar-Kyoto Hyperactive Rats
Thanellou, Alexandra; Schachinger, Kira M.; Green, John T.
2014-01-01
Reductions in the volume of the cerebellum and impairments in cerebellar-dependent eyeblink conditioning have been observed in attention-deficit/hyperactivity disorder (ADHD). Recently, it was reported that subjects with ADHD as well as male spontaneously hypertensive rats (SHR), a strain that is frequently employed as an animal model in the study of ADHD, exhibit a parallel pattern of timing deficits in eyeblink conditioning. One criticism that has been posed regarding the validity of the SHR strain as an animal model for the study of ADHD is that SHRs are not only hyperactive but also hypertensive. It is conceivable that many of the behavioral characteristics seen in SHRs that seem to parallel the behavioral symptoms of ADHD are not solely due to hyperactivity but instead are the net outcome of the interaction between hyperactivity and hypertension. We used Wistar-Kyoto Hyperactive (WKHA) and Wistar-Kyoto Hypertensive (WKHT) rats (males and females), strains generated from recombinant inbreeding of SHRs and their progenitor strain, Wistar-Kyoto (WKY) rats, to compare eyeblink conditioning in strains that are exclusively hyperactive or hypertensive. We used a long-delay eyeblink conditioning task in which a tone conditioned stimulus was paired with a periorbital stimulation unconditioned stimulus (750-ms delay paradigm). Our results showed that WKHA and WKHT rats exhibited similar rates of conditioned response (CR) acquisition. However, WKHA males displayed shortened CR latencies (early onset and peak latency) in comparison to WKHT males. In contrast, female WKHAs and WKHTs did not differ. In subsequent extinction training, WKHA rats extinguished at similar rates in comparison to WKHT rats. The current results support the hypothesis of a relationship between cerebellar abnormalities and ADHD in an animal model of ADHD-like symptoms that does not also exhibit hypertension, and suggest that cerebellar-related timing deficits are specific to males. PMID:19485572
Central cannabinoid receptors modulate acquisition of eyeblink conditioning
Steinmetz, Adam B.; Freeman, John H.
2010-01-01
Delay eyeblink conditioning is established by paired presentations of a conditioned stimulus (CS) such as a tone or light, and an unconditioned stimulus (US) that elicits the blink reflex. Conditioned stimulus information is projected from the basilar pontine nuclei to the cerebellar interpositus nucleus and cortex. The cerebellar cortex, particularly the molecular layer, contains a high density of cannabinoid receptors (CB1R). The CB1Rs are located on the axon terminals of parallel fibers, stellate cells, and basket cells where they inhibit neurotransmitter release. The present study examined the effects of a CB1R agonist WIN55,212-2 and antagonist SR141716A on the acquisition of delay eyeblink conditioning in rats. Rats were given subcutaneous administration of 1, 2, or 3 mg/kg of WIN55,212-2 or 1, 3, or 5 mg/kg of SR141716A before each day of acquisition training (10 sessions). Dose-dependent impairments in acquisition were found for WIN55,212-2 and SR141716A, with no effects on spontaneous or nonassociative blinking. However, the magnitude of impairment was greater for WIN55,212-2 than SR141716A. Dose-dependent impairments in conditioned blink response (CR) amplitude and timing were found with WIN55,212-2 but not with SR141716A. The findings support the hypothesis that CB1Rs in the cerebellar cortex play an important role in plasticity mechanisms underlying eyeblink conditioning. PMID:21030483
Weidemann, Gabrielle; Satkunarajah, Michelle; Lovibond, Peter F.
2016-01-01
Can conditioning occur without conscious awareness of the contingency between the stimuli? We trained participants on two separate reaction time tasks that ensured attention to the experimental stimuli. The tasks were then interleaved to create a differential Pavlovian contingency between visual stimuli from one task and an airpuff stimulus from the other. Many participants were unaware of the contingency and failed to show differential eyeblink conditioning, despite attending to a salient stimulus that was contingently and contiguously related to the airpuff stimulus over many trials. Manipulation of awareness by verbal instruction dramatically increased awareness and differential eyeblink responding. These findings cast doubt on dual-system theories, which propose an automatic associative system independent of cognition, and provide strong evidence that cognitive processes associated with awareness play a causal role in learning. PMID:26905277
Madroñal, Noelia; Gruart, Agnès; Sacktor, Todd C.; Delgado-García, José M.
2010-01-01
A leading candidate in the process of memory formation is hippocampal long-term potentiation (LTP), a persistent enhancement in synaptic strength evoked by the repetitive activation of excitatory synapses, either by experimental high-frequency stimulation (HFS) or, as recently shown, during actual learning. But are the molecular mechanisms for maintaining synaptic potentiation induced by HFS and by experience the same? Protein kinase Mzeta (PKMζ), an autonomously active atypical protein kinase C isoform, plays a key role in the maintenance of LTP induced by tetanic stimulation and the storage of long-term memory. To test whether the persistent action of PKMζ is necessary for the maintenance of synaptic potentiation induced after learning, the effects of ZIP (zeta inhibitory peptide), a PKMζ inhibitor, on eyeblink-conditioned mice were studied. PKMζ inhibition in the hippocampus disrupted both the correct retrieval of conditioned responses (CRs) and the experience-dependent persistent increase in synaptic strength observed at CA3-CA1 synapses. In addition, the effects of ZIP on the same associative test were examined when tetanic LTP was induced at the hippocampal CA3-CA1 synapse before conditioning. In this case, PKMζ inhibition both reversed tetanic LTP and prevented the expected LTP-mediated deleterious effects on eyeblink conditioning. Thus, PKMζ inhibition in the CA1 area is able to reverse both the expression of trace eyeblink conditioned memories and the underlying changes in CA3-CA1 synaptic strength, as well as the anterograde effects of LTP on associative learning. PMID:20454458
Taylor, William; Kalmbach, Brian; Desai, Niraj S.
2015-01-01
Trace eyeblink conditioning is useful for studying the interaction of multiple brain areas in learning and memory. The goal of the current work was to determine whether trace eyeblink conditioning could be established in a mouse model in the absence of elicited startle responses, and to identify the brain circuitry that supports this learning. We show here that mice can acquire trace conditioned responses (tCRs) devoid of startle while head-restrained and permitted to freely run on a wheel. Most mice (75%) could learn with a trace interval of 250 ms. Because tCRs were not contaminated with startle-associated components, we were able to document the development and timing of tCRs in mice, as well as their long-term retention (at 7 and 14 d) and flexible expression (extinction and reacquisition). To identify the circuitry involved, we made restricted lesions of the medial prefrontal cortex (mPFC) and found that learning was prevented. Furthermore, inactivation of the cerebellum with muscimol completely abolished tCRs, demonstrating that learned responses were driven by the cerebellum. Finally, inactivation of the mPFC and amygdala in trained animals nearly abolished tCRs. Anatomical data from these critical regions showed that the mPFC and amygdala both project to the rostral basilar pons and overlap with eyelid-associated pontocerebellar neurons. The data provide the first report of trace eyeblink conditioning in mice in which tCRs were driven by the cerebellum and required a localized region of the mPFC for acquisition. The data further reveal a specific role for the amygdala in providing a conditioned stimulus-associated input to the cerebellum. PMID:26464998
Meteran, Hanieh; Vindbjerg, Erik; Uldall, Sigurd Wiingaard; Glenthøj, Birte; Carlsson, Jessica; Oranje, Bob
2018-05-17
Impairments in mechanisms underlying early information processing have been reported in posttraumatic stress disorder (PTSD); however, findings in the existing literature are inconsistent. The current study capitalizes on technological advances in electroencephalographic event-related potential research and applies them to a novel PTSD population consisting of trauma-affected refugees. A total of 25 trauma-affected refugees with PTSD and 20 healthy refugee controls matched on age, gender, and country of origin completed the study. In two distinct auditory paradigms, sensory gating, indexed as P50 suppression, and sensorimotor gating, indexed as prepulse inhibition (PPI), startle reactivity, and habituation of the eye-blink startle response were examined. Within the P50 paradigm, N100 and P200 amplitudes were also assessed. In addition, correlations between psychophysiological and clinical measures were investigated. PTSD patients demonstrated significantly elevated stimulus responses across the two paradigms, reflected in both increased amplitude of the eye-blink startle response and increased N100 and P200 amplitudes relative to healthy refugee controls. We found a trend toward reduced habituation in the patients, while the groups did not differ in PPI or P50 suppression. Among the correlations, we found that eye-blink startle responses were associated with higher overall illness severity and lower levels of functioning. Fundamental gating mechanisms appeared intact, while the pattern of deficits in trauma-affected refugees with PTSD points toward a different form of sensory overload: an overall neural hypersensitivity and a disrupted ability to down-regulate stimulus responses. This study represents an initial step toward elucidating sensory processing deficits in a PTSD subgroup.
Eye-blink conditioning deficits indicate temporal processing abnormalities in schizophrenia.
Bolbecker, Amanda R; Mehta, Crystal S; Edwards, Chad R; Steinmetz, Joseph E; O'Donnell, Brian F; Hetrick, William P
2009-06-01
Theoretical models suggest that symptoms of schizophrenia may be due to a dysfunctional modulatory system associated with the cerebellum. Although it has long been known that the cerebellum plays a critical role in associative learning and motor timing, recent evidence suggests that it also plays a role in nonmotor psychological processes. Indeed, cerebellar anomalies in schizophrenia have been linked to cognitive dysfunction and poor long-term outcome. To test the hypothesis that schizophrenia is associated with cerebellar dysfunction, cerebellar-dependent delay eye-blink conditioning was examined in 62 individuals with schizophrenia and 62 age-matched non-psychiatric comparison subjects. The conditioned stimulus was a 400-ms tone, which co-terminated with a 50-ms unconditioned stimulus air puff. A subset of participants (25 with schizophrenia and 29 controls) also completed the Wechsler Abbreviated Scale of Intelligence. Participants with schizophrenia exhibited lower rates of eye-blink conditioning, including earlier (less adaptively timed) conditioned response latencies. Cognitive functioning was correlated with the rate of conditioned responding in the non-psychiatric comparison subjects but not among those with schizophrenia, and the magnitude of these correlations differed significantly between groups. These findings are consistent with models of schizophrenia in which disruptions within the cortico-cerebellar-thalamic-cortical (CCTC) brain circuit are postulated to underlie the cognitive fragmentation that characterizes the disorder.
Deficits in hippocampus-mediated Pavlovian conditioning in endogenous hypercortisolism.
Grillon, Christian; Smith, Kathryn; Haynos, Ann; Nieman, Lynnette K
2004-12-01
Elevated endogenous levels of corticosteroids cause neural dysfunction and loss, especially within the hippocampus, as well as cognitive impairment in hippocampus-mediated tasks. Because Cushing's syndrome patients suffer from hypercortisolism, they represent a unique opportunity to study the impact of elevated glucocorticoids on cognitive functions. The aim of this study was to examine the performance of Cushing's syndrome patients on trace eyeblink conditioning, a cross-species, hippocampus-mediated test of learning and memory. Eleven Cushing's syndrome patients and 11 healthy control subjects participated in an eyeblink trace conditioning test (1000-msec trace) and a task of declarative memory for words. Salivary cortisol was collected in both the patients and the control subjects, and urinary free cortisol was collected in the patients only. The patients exhibited fewer conditional responses and remembered fewer words compared with the control subjects. Cortisol levels correlated with immediate and delayed declarative memory only. Conditional responses correlated with delayed recall after controlling for the magnitude of the unconditional response. The integrity of the hippocampus seems to be compromised in Cushing's syndrome patients. Trace eyeblink conditioning might be useful both as a clinical tool to examine changes in hippocampal function in Cushing's disease patients and as a translational tool for research on the impact of chronic exposure to glucocorticoids.
Krupa, D J; Thompson, R F
1995-05-23
The localization of sites of memory formation within the mammalian brain has proven to be a formidable task even for simple forms of learning and memory. Recent studies have demonstrated that reversibly inactivating a localized region of cerebellum, including the dorsal anterior interpositus nucleus, completely prevents acquisition of the conditioned eye-blink response with no effect upon subsequent learning without inactivation. This result indicates that the memory trace for this type of learning is located either (i) within this inactivated region of cerebellum or (ii) within some structure(s) efferent from the cerebellum to which output from the interpositus nucleus ultimately projects. To distinguish between these possibilities, two groups of rabbits were conditioned (by using two conditioning stimuli) while the output fibers of the interpositus (the superior cerebellar peduncle) were reversibly blocked with microinjections of the sodium channel blocker tetrodotoxin. Rabbits performed no conditioned responses during this inactivation training. However, training after inactivation revealed that the rabbits (trained with either conditioned stimulus) had fully learned the response during the previous inactivation training. Cerebellar output, therefore, does not appear to be essential for acquisition of the learned response. This result, coupled with the fact that inactivation of the appropriate region of cerebellum completely prevents learning, provides compelling evidence supporting the hypothesis that the essential memory trace for the classically conditioned eye-blink response is localized within the cerebellum.
Effects of inferior olive lesion on fear-conditioned bradycardia
Kotajima, Hiroko; Sakai, Kazuhisa; Hashikawa, Tsutomu
2014-01-01
The inferior olive (IO) sends excitatory inputs to the cerebellar cortex and cerebellar nuclei through the climbing fibers. In eyeblink conditioning, a model of motor learning, the inactivation of or a lesion in the IO impairs the acquisition or expression of conditioned eyeblink responses. Additionally, climbing fibers originating from the IO are believed to transmit the unconditioned stimulus to the cerebellum in eyeblink conditioning. Studies using fear-conditioned bradycardia showed that the cerebellum is associated with adaptive control of heart rate. However, the role of inputs from the IO to the cerebellum in fear-conditioned bradycardia has not yet been investigated. To examine this possible role, we tested fear-conditioned bradycardia in mice by selective disruption of the IO using 3-acetylpyridine. In a rotarod test, mice with an IO lesion were unable to remain on the rod. The number of neurons of IO nuclei in these mice was decreased to ∼40% compared with control mice. Mice with an IO lesion did not show changes in the mean heart rate or in heart rate responses to a conditioned stimulus, or in their responses to a painful stimulus in a tail-flick test. However, they did show impairment of the acquisition/expression of conditioned bradycardia and attenuation of heart rate responses to a pain stimulus used as an unconditioned stimulus. These results indicate that the IO inputs to the cerebellum play a key role in the acquisition/expression of conditioned bradycardia. PMID:24784584
Additive Effects of Threat-of-Shock and Picture Valence on Startle Reflex Modulation
Bublatzky, Florian; Guerra, Pedro M.; Pastor, M. Carmen; Schupp, Harald T.; Vila, Jaime
2013-01-01
The present study examined the effects of sustained anticipatory anxiety on the affective modulation of the eyeblink startle reflex. Towards this end, pleasant, neutral, and unpleasant pictures were presented as a continuous stream during alternating threat-of-shock and safety periods, which were cued by colored picture frames. Orbicularis-EMG responses to auditory startle probes and electrodermal activity were recorded. Previous findings regarding affective picture valence and threat-of-shock modulation were replicated. Of main interest, anticipating aversive events and viewing affective pictures additively modulated defensive activation. Specifically, despite overall potentiated startle blink magnitude in threat-of-shock conditions, the startle reflex remained sensitive to hedonic picture valence. Finally, skin conductance level revealed sustained sympathetic activation during threat compared to safety periods throughout the entire experiment. Overall, defensive activation by physical threat appears to operate independently from reflex modulation by picture media. The present data confirm the importance of simultaneously manipulating phasic fear and sustained anxiety in studying both normal and abnormal anxiety. PMID:23342060
Myers, Catherine E.; VanMeenen, Kirsten M.; McAuley, J. Devin; Beck, Kevin D.; Pang, Kevin C. H.; Servatius, Richard J.
2012-01-01
Prior studies have sometimes demonstrated facilitated acquisition of classically-conditioned responses and/or resistance to extinction in post-traumatic stress disorder (PTSD). However, it is unclear whether these behaviors are acquired as a result of PTSD or exposure to trauma, or reflect pre-existing risk factors that confer vulnerability for PTSD. Here, we examined classical eyeblink conditioning and extinction in veterans self-assessed for current PTSD symptoms, exposure to combat, and the personality trait of behavioral inhibition (BI), a risk factor for PTSD. 128 veterans were recruited (mean age 51.2 years; 13.3% female); 126 completed self-assessment, with 25.4% reporting a history of exposure to combat and 30.9% reporting severe, current PTSD symptoms (PTSS). PTSD symptom severity was correlated with current BI (R2=0.497) and PTSS status could be predicted based on current BI and combat history (80.2% correct classification). A subset of the veterans (n=87) also completed eyeblink conditioning. Among veterans without PTSS, childhood BI was associated with faster acquisition; veterans with PTSS showed delayed extinction, under some conditions. These data demonstrate a relationship between current BI and PTSS, and suggest that the facilitated conditioning sometimes observed in PTSD patients may partially reflect personality traits such as childhood BI that pre-date and contribute to vulnerability for PTSD. PMID:21790343
Oristaglio, Jeff; West, Susan Hyman; Ghaffari, Manely; Lech, Melissa S.; Verma, Beeta R.; Harvey, John A.; Welsh, John P.; Malone, Richard P.
2013-01-01
Children with autism spectrum disorder (ASD) and age-matched typically-developing (TD) peers were tested on two forms of eyeblink conditioning (EBC), a Pavlovian associative learning paradigm where subjects learn to execute an appropriately-timed eyeblink in response to a previously neutral conditioning stimulus (CS). One version of the task, trace EBC, interposes a stimulus-free interval between the presentation of the CS and the unconditioned stimulus (US), a puff of air to the eye which causes subjects to blink. In delay EBC, the CS overlaps in time with the delivery of the US, usually with both stimuli terminating simultaneously. ASD children performed normally during trace EBC, exhibiting no differences from typically-developing (TD) subjects with regard to learning rate or the timing of the CR. However, when subsequently tested on delay EBC, subjects with ASD displayed abnormally-timed conditioned eye blinks that began earlier and peaked sooner than those of TD subjects, consistent with previous findings. The results suggest an impaired ability of children with ASD to properly time conditioned eye blinks which appears to be specific to delay EBC. We suggest that this deficit may reflect a dysfunction of cerebellar cortex in which increases in the intensity or duration of sensory input can temporarily disrupt the accuracy of motor timing over short temporal intervals. PMID:23769889
Interactions among Collective Spectators Facilitate Eyeblink Synchronization
Nomura, Ryota; Liang, Yingzong; Okada, Takeshi
2015-01-01
Whereas the entrainment of movements and aspirations among audience members has been known as a basis of collective excitement in the theater, the role of the entrainment of cognitive processes among audience members is still unclear. In the current study, temporal patterns of the audience's attention were observed using eyeblink responses. To determine the effect of interactions among audience members on cognitive entrainment, as well as its direction (attractive or repulsive), the eyeblink synchronization of the following two groups was compared: (1) the experimental condition, in which the audience members (seven frequent viewers and seven first-time viewers) viewed live performances in situ, and (2) the control condition, in which the audience members (15 frequent viewers and 15 first-time viewers) viewed videotaped performances in individual experimental settings (results reported in a previous study). The results of this study demonstrated that the mean values of a measure of asynchrony (i.e., the D interval) were much lower for the experimental condition than for the control condition. Frequent viewers showed a moderate attractive effect that increased as the story progressed, while a strong attractive effect was observed throughout the story for first-time viewers. The attractive effect of interactions among a group of spectators is discussed from the viewpoint of cognitive and somatic entrainment in live performances. PMID:26479405
Cross-modal Savings in the Contralateral Eyelid Conditioned Response
Campolattaro, Matthew M.; Buss, Eric W.; Freeman, John H.
2015-01-01
The present experiment monitored bilateral eyelid responses during eyeblink conditioning in rats trained with a unilateral unconditioned stimulus (US). Three groups of rats were used to determine whether cross-modal savings occurs when the location of the US is switched from one eye to the other. Rats in each group first received paired or unpaired eyeblink conditioning with a conditioned stimulus (tone or light; CS) and a unilateral periorbital electrical stimulation US. All rats were subsequently given paired training, but with the US location (Group 1), CS modality (Group 2), or US location and CS modality (Group 3) changed. In rats that received paired training prior to the transfer session, changing the location of the US alone resulted in an immediate transfer of responding in both eyelids (Group 1). Rats in Groups 2 and 3 that initially received paired training showed facilitated learning to the new CS modality during the transfer sessions, indicating that cross-modal savings occurs whether or not the location of the US is changed. All rats that were initially given unpaired training acquired conditioned eyeblink responses during the transfer sessions at a rate similar to de novo acquisition. Savings of CR incidence was more robust than savings of CR amplitude when the US location was switched, a finding that has implications for elucidating the neural mechanisms of cross-modal savings. PMID:26501170
Eyeblink Synchrony in Multimodal Human-Android Interaction.
Tatsukawa, Kyohei; Nakano, Tamami; Ishiguro, Hiroshi; Yoshikawa, Yuichiro
2016-12-23
As a result of recent progress in communication robot technology, robots are becoming an important social partner for humans. Behavioral synchrony is understood to be an important factor in establishing good human-robot relationships. In this study, we hypothesized that biasing a human's attitude toward a robot changes the degree of synchrony between human and robot. We first examined whether eyeblinks were synchronized between a human and an android in face-to-face interaction and found that human listeners' eyeblinks were entrained to android speakers' eyeblinks. This eyeblink synchrony disappeared when the android speaker spoke while looking away from the human listeners but was enhanced when the human participants listened to the speaking android while touching the android's hand. These results suggest that eyeblink synchrony reflects a qualitative state in human-robot interactions.
Myers, Catherine E.; Bryant, Deborah; DeLuca, John; Gluck, Mark A.
2002-01-01
In humans, anterograde amnesia can result from damage to the medial temporal (MT) lobes (including hippocampus), as well as to other brain areas such as basal forebrain. Results from animal classical conditioning studies suggest that there may be qualitative differences in the memory impairment following MT vs. basal forebrain damage. Specifically, delay eyeblink conditioning is spared after MT damage in animals and humans, but impaired in animals with basal forebrain damage. Recently, we have likewise shown delay eyeblink conditioning impairment in humans with amnesia following anterior communicating artery (ACoA) aneurysm rupture, which damages the basal forebrain. Another associative learning task, a computer-based concurrent visual discrimination, also appears to be spared in MT amnesia, while ACoA amnesics are slower to learn the discriminations. Conversely, animal and computational models suggest that, even though MT amnesics may learn quickly, they may learn qualitatively differently from controls, and these differences may result in impaired transfer when familiar information is presented in novel combinations. Our initial data suggest that such a two-phase learning and transfer task may provide a double dissociation between MT amnesics (spared initial learning but impaired transfer) and ACoA amnesics (slow initial learning but spared transfer). Together, these emerging data suggest that there are subtle but dissociable differences in the amnesic syndrome following damage to the MT lobes vs. basal forebrain, and that these differences may be most visible in non-declarative tasks such as eyeblink classical conditioning and simple associative learning.
Naase, Taher; Doughty, Michael J; Button, Norman F
2005-04-01
This study examined whether the pattern of human eyeblink events changes under topical ocular anaesthesia. Forty male subjects, aged between 19 and 52 years and with no significant ocular surface disease, were recruited. Their spontaneous eyeblink activity, in primary eye gaze position and in silence, was recorded for 5-min periods before and after instillation of benoxinate 0.4% eyedrops. Surface anaesthesia was confirmed by aesthesiometry. The spontaneous eyeblink rate (SEBR) decreased from 9.1 ± 4.0 blinks/min to an average of 5.7 ± 3.3 blinks/min, with 37 subjects showing a decreased eyeblink rate under anaesthesia. Three blink patterns were observed before anaesthesia (symmetrical, J-type, and I-type), and these were essentially unchanged under anaesthesia. These studies confirm that the SEBR is usually reduced under surface anaesthesia (and so is sensitive to exogenous control) but that the pattern of eyeblink activity is unchanged (and so is less sensitive to exogenous control). Because the removal of exogenous stimuli by anaesthesia does not shift the eyeblink pattern to a single type, the pattern appears to be under endogenous control.
Rahman, Md. Ashrafur; Tanaka, Norifumi; Usui, Koji; Kawahara, Shigenori
2016-01-01
We investigated the role of muscarinic acetylcholine receptors (mAChRs) in eyeblink serial feature-positive discrimination learning in mice using the mAChR antagonist scopolamine. A 2-s light cue was delivered 5 or 6 s before the presentation of a 350-ms tone paired with a 100-ms periorbital electrical shock (cued trial) but not before the tone-alone presentation (non-cued trial). Mice received 30 cued and 30 non-cued trials each day in a random order. We found that saline-injected control mice successfully discriminated between cued and non-cued trials within a few days of conditioning: they responded more frequently to the tone in cued trials than in non-cued trials. Analysis of conditioned response (CR) dynamics revealed that the CR onset latency was shorter in cued trials than in non-cued trials, despite the CR peak amplitude not differing significantly between the two conditions. In contrast, scopolamine-injected mice developed an equal number of CRs with similar temporal patterns irrespective of the presence of the cue during the 7 days of conditioning, indicating a failure to acquire conditional discrimination. In addition, administration of scopolamine to the control mice after they had successfully acquired the discrimination did not impair conditional discrimination or the expression of pre-acquired CRs. These results suggest that mAChRs may play a pivotal role in memory formation in the conditional brain state associated with the feature cue; however, they are unlikely to be involved in the development of discrimination after conditional memory has formed in the serial feature-positive discrimination task during eyeblink conditioning. PMID:26808980
Both Trace and Delay Conditioning of Evaluative Responses Depend on Contingency Awareness
ERIC Educational Resources Information Center
Kattner, Florian; Ellermeier, Wolfgang; Tavakoli, Paniz
2012-01-01
Whereas previous evaluative conditioning (EC) studies produced inconsistent results concerning the role of contingency knowledge, there are classical eye-blink conditioning studies suggesting that declarative processes are involved in trace conditioning but not in delay conditioning. In two EC experiments pairing neutral sounds (conditioned…
Transcriptional profiling reveals regulated genes in the hippocampus during memory formation
NASA Technical Reports Server (NTRS)
Donahue, Christine P.; Jensen, Roderick V.; Ochiishi, Tomoyo; Eisenstein, Ingrid; Zhao, Mingrui; Shors, Tracey; Kosik, Kenneth S.
2002-01-01
Transcriptional profiling (TP) offers a powerful approach to identify genes activated during memory formation and, by inference, the molecular pathways involved. Trace eyeblink conditioning is well suited for the study of regional gene expression because it requires the hippocampus, whereas the highly parallel task, delay conditioning, does not. First, we determined when gene expression was most regulated during trace conditioning. Rats were exposed to 200 trials of paired or unpaired stimuli each day for 4 days. Changes in gene expression were most apparent 24 h after exposure to 200 trials. Therefore, we profiled gene expression in the hippocampus 24 h after 200 trials of trace eyeblink conditioning, on multiple arrays using additional animals. Of 1,186 genes on the filter array, seven genes met the statistical criteria and were also validated by real-time polymerase chain reaction. These genes were growth hormone (GH), c-kit receptor tyrosine kinase (c-kit), glutamate receptor, metabotropic 5 (mGluR5), nerve growth factor-beta (NGF-beta), Jun oncogene (c-Jun), transmembrane receptor Unc5H1 (UNC5H1), and transmembrane receptor Unc5H2 (UNC5H2). All of these genes, except for GH, were downregulated in response to trace conditioning. Because GH was upregulated, we also validated the downregulation of the GH inhibitor, somatostatin (SST), even though it narrowly failed to meet the criteria on the arrays. In situ hybridization showed GH expression throughout the cell layers of the hippocampus in response to trace conditioning. None of the genes regulated in trace eyeblink conditioning were similarly affected by delay conditioning, a task that does not require the hippocampus. These findings demonstrate that transcriptional profiling can reveal a repertoire of genes sensitive to the formation of hippocampus-dependent associative memories.
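The screening step described above (array genes flagged by a statistical criterion and a direction of regulation, then validated by real-time PCR) can be sketched in a few lines. This is an illustrative reconstruction under assumed thresholds, not the authors' pipeline; the gene values, cutoffs, and function names are invented for the example:

```python
# Hypothetical sketch of an array-filtering step: flag genes whose
# expression differs between trace-conditioned and control samples.
# Thresholds, replicate intensities, and gene list are illustrative.
import math
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / math.sqrt(va + vb)

def flag_regulated(expr, t_cut=2.0, fold_cut=1.5):
    """Return genes passing both a t-statistic and a fold-change cut."""
    hits = []
    for gene, (cond, ctrl) in expr.items():
        fold = mean(cond) / mean(ctrl)
        up_or_down = fold >= fold_cut or fold <= 1 / fold_cut
        if abs(welch_t(cond, ctrl)) >= t_cut and up_or_down:
            hits.append(gene)
    return hits

expr = {  # per gene: (conditioned replicates, control replicates)
    "GH":   ([9.0, 9.4, 9.2], [5.0, 5.2, 5.1]),   # upregulated
    "SST":  ([3.0, 3.1, 2.9], [6.0, 6.2, 5.9]),   # downregulated
    "ACTB": ([7.0, 7.1, 6.9], [7.0, 7.2, 6.8]),   # unchanged
}
print(flag_regulated(expr))  # -> ['GH', 'SST']
```

Genes surviving a filter like this would still need independent validation (here, real-time PCR in the study) before being treated as regulated.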
Romano, Anthony G; Quinn, Jennifer L; Li, Luchuan; Dave, Kuldip D; Schindler, Emmanuelle A; Aloyo, Vincent J; Harvey, John A
2010-10-01
Parenteral injections of d-lysergic acid diethylamide (LSD), a serotonin 5-HT(2A) receptor agonist, enhance eyeblink conditioning. Another hallucinogen, (±)-1-(2,5-dimethoxy-4-iodophenyl)-2-aminopropane hydrochloride (DOI), was shown to elicit a 5-HT(2A)-mediated behavior (head bobs) after injection into the hippocampus, a structure known to mediate trace eyeblink conditioning. This study aims to determine whether parenteral injections of the hallucinogens LSD, d,l-2,5-dimethoxy-4-methylamphetamine, and 5-methoxy-dimethyltryptamine elicit the 5-HT(2A)-mediated behavior of head bobs and whether intrahippocampal injections of LSD would produce head bobs and enhance trace eyeblink conditioning. LSD was infused into the dorsal hippocampus just prior to each of eight conditioning sessions. One day after the last infusion of LSD, DOI was infused into the hippocampus to determine whether there had been a desensitization of the 5-HT(2A) receptor, as measured by a decrease in DOI-elicited head bobs. Acute parenteral or intrahippocampal LSD elicited a 5-HT(2A)- but not a 5-HT(2C)-mediated behavior, and chronic administration enhanced conditioned responding relative to vehicle controls. Rabbits that had been chronically infused with 3 or 10 nmol per side of LSD during Pavlovian conditioning and then infused with DOI demonstrated a smaller increase in head bobs relative to controls. LSD produced its enhancement of Pavlovian conditioning through an effect on 5-HT(2A) receptors located in the dorsal hippocampus. The slight, short-lived enhancement of learning produced by LSD appears to be due to the development of desensitization of the 5-HT(2A) receptor within the hippocampus as a result of repeated administration of its agonist (LSD).
Lindquist, Derick H
2013-04-01
Binge-like postnatal ethanol exposure produces significant damage throughout the brain in rats, including the cerebellum and hippocampus. In the current study, cue- and context-mediated Pavlovian conditioning were assessed in adult rats exposed to moderately low (3E; 3 g/kg/day) or high (5E; 5 g/kg/day) doses of ethanol across postnatal days 4-9. Ethanol-exposed and control groups were presented with 8 sessions of trace eyeblink conditioning followed by another 8 sessions of delay eyeblink conditioning, with an altered context presented over the last two sessions. Both forms of conditioning rely on the brainstem and cerebellum, while the more difficult trace conditioning also requires the hippocampus. The hippocampus is also needed to gate or modulate expression of the eyeblink conditioned response (CR) based on contextual cues. Results indicate that the ethanol-exposed rats were not significantly impaired in trace EBC relative to control subjects. In terms of CR topography, peak amplitude was significantly reduced by both doses of alcohol, whereas onset latency, but not peak latency, was significantly lengthened in the 5E rats across the latter half of delay EBC in the original training context. Neither dosage resulted in significant impairment in the contextual gating of the behavioral response, as revealed by similar decreases in CR production across all four treatment groups following introduction of the novel context. Results suggest ethanol-induced brainstem-cerebellar damage can account for the present results, independent of the putative disruption in hippocampal development and function proposed to occur following postnatal ethanol exposure. Copyright © 2013 Elsevier B.V. All rights reserved.
The emotional startle effect is disrupted by a concurrent working memory task.
King, Rosemary; Schaefer, Alexandre
2011-02-01
Working memory (WM) processes are often thought to play an important role in the cognitive regulation of negative emotions. However, little is known about how they influence emotional processing. We report two experiments that tested whether a concurrent working memory task could modulate the emotional startle eyeblink effect, a well-known index of emotional processing. In both experiments, emotionally negative and neutral pictures were viewed in two conditions: a "cognitive load" (CL) condition, in which participants had to actively maintain information in working memory (WM) while viewing the pictures, and a control "no load" (NL) condition. Picture-viewing instructions were identical across CL and NL. In both experiments, results showed a significant reduction of the emotional modulation of the startle eyeblink reflex in the CL condition compared to the NL condition. These findings suggest that a concurrent WM task disrupts emotional processing even when participants are directing visual focus on emotionally relevant information. Copyright © 2010 Society for Psychophysiological Research.
Allen, M T; Myers, C E; Servatius, R J
2016-05-01
Recent work has found that behaviorally inhibited (BI) individuals exhibit enhanced eyeblink conditioning in omission and yoked training as well as with schedules of partial reinforcement. We hypothesized that spacing CS-US paired trials over a longer period of time, by extending and varying the inter-trial interval (ITI), would facilitate learning. All participants completed the Adult Measure of Behavioural Inhibition (AMBI) and were grouped as behaviorally inhibited (BI) or non-behaviorally inhibited (NI) based on a median split score of 15.5. All participants received 3 US-alone trials and 30 CS-US paired trials for acquisition training and 20 CS-alone trials for extinction training in one session. Conditioning stimuli were a 500-ms tone conditioned stimulus (CS) and a 50-ms air-puff unconditioned stimulus (US). Participants were randomly assigned to receive a short ITI (mean = 30 ± 5 s), a long ITI (mean = 57 ± 5 s), or a variable long ITI (mean = 57 s, range 25-123 s). No significant ITI effects were observed for acquisition or extinction. Overall, anxiety-vulnerable individuals exhibited enhanced conditioned eyeblink responses as compared to non-vulnerable individuals. This enhanced acquisition of CRs was significant in spaced training with a variable long ITI, but not with the short or long ITI. There were no significant effects of ITI or BI on extinction. These findings are interpreted in light of the idea that uncertainty plays a role in anxiety and can enhance associative learning in anxiety-vulnerable individuals. Copyright © 2016 Elsevier B.V. All rights reserved.
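The three spacing schedules in the abstract above can be sketched as a trial-schedule generator. This is an illustrative sketch, not the authors' procedure: the function and parameter names are hypothetical, and the variable-ITI distribution (a clipped, shifted exponential) is an assumption chosen only to approximate the reported mean of 57 s and range of 25-123 s.

```python
import random

def make_iti_schedule(condition, n_trials, seed=None):
    """Generate inter-trial intervals (seconds) for one of the three
    schedules described in the abstract. Illustrative only; the exact
    distributions used in the study are not reported."""
    rng = random.Random(seed)
    itis = []
    for _ in range(n_trials):
        if condition == "short":       # mean = 30 ± 5 s
            itis.append(rng.uniform(25, 35))
        elif condition == "long":      # mean = 57 ± 5 s
            itis.append(rng.uniform(52, 62))
        elif condition == "variable":
            # Assumed form: shifted exponential (mean ≈ 57 s),
            # clipped to the reported 25-123 s range.
            itis.append(min(25 + rng.expovariate(1 / 32), 123))
        else:
            raise ValueError("unknown condition: %r" % condition)
    return itis
```

A uniform draw cannot match both the reported mean (57 s) and range (25-123 s), which is why the sketch assumes a right-skewed distribution for the variable condition.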
Cerebellar transcranial direct current stimulation interacts with BDNF Val66Met in motor learning.
van der Vliet, Rick; Jonker, Zeb D; Louwen, Suzanne C; Heuvelman, Marco; de Vreede, Linda; Ribbers, Gerard M; De Zeeuw, Chris I; Donchin, Opher; Selles, Ruud W; van der Geest, Jos N; Frens, Maarten A
2018-04-11
Cerebellar transcranial direct current stimulation has been reported to enhance motor associative learning and motor adaptation, holding promise for clinical application in patients with movement disorders. However, behavioral benefits from cerebellar tDCS have been inconsistent. Identifying determinants of treatment success is necessary. BDNF Val66Met is a candidate determinant, because the polymorphism is associated with motor skill learning and BDNF is thought to mediate tDCS effects. We undertook two cerebellar tDCS studies in subjects genotyped for BDNF Val66Met. Subjects performed an eyeblink conditioning task and received sham, anodal or cathodal tDCS (N = 117, between-subjects design) or a vestibulo-ocular reflex adaptation task and received sham and anodal tDCS (N = 51 subjects, within-subjects design). Performance was quantified as a learning parameter from 0 to 100%. We investigated (1) the distribution of the learning parameter with mixture modeling presented as the mean (M), standard deviation (S) and proportion (P) of the groups, and (2) the role of BDNF Val66Met and cerebellar tDCS using linear regression presented as the regression coefficients (B) and odds ratios (OR) with equally-tailed intervals (ETIs). For the eyeblink conditioning task, we found distinct groups of learners (M Learner = 67.2%; S Learner = 14.7%; P Learner = 61.6%) and non-learners (M Non-learner = 14.2%; S Non-learner = 8.0%; P Non-learner = 38.4%). Carriers of the BDNF Val66Met polymorphism were more likely to be learners (OR = 2.7 [1.2 6.2]). Within the group of learners, anodal tDCS supported eyeblink conditioning in BDNF Val66Met non-carriers (B = 11.9% 95%ETI = [0.8 23.0]%), but not in carriers (B = 1.0% 95%ETI = [-10.2 12.1]%). For the vestibulo-ocular reflex adaptation task, we found no effect of BDNF Val66Met (B = -2.0% 95%ETI = [-8.7 4.7]%) or anodal tDCS in either carriers (B = 3.4% 95%ETI = [-3.2 9.5]%) or non-carriers (B = 0.6% 95%ETI = [-3.4 4.8]%). 
Finally, we performed additional saccade and visuomotor adaptation experiments (N = 72) to investigate the general role of BDNF Val66Met in cerebellum-dependent learning and found no difference between carriers and non-carriers for both saccade (B = 1.0% 95%ETI = [-8.6 10.6]%) and visuomotor adaptation (B = 2.7% 95%ETI = [-2.5 7.9]%). The specific role for BDNF Val66Met in eyeblink conditioning, but not vestibulo-ocular reflex adaptation, saccade adaptation or visuomotor adaptation could be related to dominance of the role of simple spike suppression of cerebellar Purkinje cells with a high baseline firing frequency in eyeblink conditioning. Susceptibility of non-carriers to anodal tDCS in eyeblink conditioning might be explained by a relatively larger effect of tDCS-induced subthreshold depolarization in this group, which might increase the spontaneous firing frequency up to the level of that of the carriers. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
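The learner/non-learner split reported above comes from fitting a two-component mixture to the learning-parameter distribution. A minimal sketch of that kind of analysis is a 1-D two-component Gaussian mixture fit by expectation-maximization; this is not the authors' code, and the initialization and stopping rule are illustrative assumptions.

```python
import math
import random

def _pdf(x, m, s):
    """Normal density."""
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def _std(v):
    m = sum(v) / len(v)
    return math.sqrt(sum((a - m) ** 2 for a in v) / len(v))

def fit_two_gaussians(x, iters=200):
    """EM fit of a two-component 1-D Gaussian mixture, as a sketch of
    learner/non-learner mixture modelling. Returns (means, sds, props)."""
    xs = sorted(x)
    n = len(xs)
    lo, hi = xs[: n // 2], xs[n // 2:]          # crude median-split init
    mu = [sum(lo) / len(lo), sum(hi) / len(hi)]
    sd = [max(1e-3, _std(lo)), max(1e-3, _std(hi))]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for xi in x:
            p = [pi[k] * _pdf(xi, mu[k], sd[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate means, sds, and mixing proportions
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            var = sum(r[k] * (xi - mu[k]) ** 2 for r, xi in zip(resp, x)) / nk
            sd[k] = max(1e-3, math.sqrt(var))
            pi[k] = nk / len(x)
    return mu, sd, pi
```

On data resembling the reported groups (one mode near 14%, another near 67%), such a fit recovers the two means and the mixing proportions, which correspond to the reported M, S, and P values.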
Locomotor activity modulates associative learning in mouse cerebellum.
Albergaria, Catarina; Silva, N Tatiana; Pritchett, Dominique L; Carey, Megan R
2018-05-01
Changes in behavioral state can profoundly influence brain function. Here we show that behavioral state modulates performance in delay eyeblink conditioning, a cerebellum-dependent form of associative learning. Increased locomotor speed in head-fixed mice drove earlier onset of learning and trial-by-trial enhancement of learned responses that were dissociable from changes in arousal and independent of sensory modality. Eyelid responses evoked by optogenetic stimulation of mossy fiber inputs to the cerebellum, but not at sites downstream, were positively modulated by ongoing locomotion. Substituting prolonged, low-intensity optogenetic mossy fiber stimulation for locomotion was sufficient to enhance conditioned responses. Our results suggest that locomotor activity modulates delay eyeblink conditioning through increased activation of the mossy fiber pathway within the cerebellum. Taken together, these results provide evidence for a novel role for behavioral state modulation in associative learning and suggest a potential mechanism through which engaging in movement can improve an individual's ability to learn.
Spontaneous Eye-Blinking and Stereotyped Behavior in Older Persons with Mental Retardation
ERIC Educational Resources Information Center
Roebel, Amanda M.; MacLean, William E., Jr.
2007-01-01
Previous research indicates that abnormal stereotyped movements are associated with central dopamine dysfunction and that eye-blink rate is a noninvasive, in vivo measure of dopamine function. We measured the spontaneous eye-blinking and stereotyped behavior of older adults with severe/profound mental retardation living in a state mental…
Gao, Zhenyu; Proietti-Onori, Martina; Lin, Zhanmin; Ten Brinke, Michiel M; Boele, Henk-Jan; Potters, Jan-Willem; Ruigrok, Tom J H; Hoebeek, Freek E; De Zeeuw, Chris I
2016-02-03
Closed-loop circuitries between cortical and subcortical regions can facilitate precision of output patterns, but the role of such networks in the cerebellum remains to be elucidated. Here, we characterize the role of internal feedback from the cerebellar nuclei to the cerebellar cortex in classical eyeblink conditioning. We find that excitatory output neurons in the interposed nucleus provide efference-copy signals via mossy fibers to the cerebellar cortical zones that belong to the same module, triggering monosynaptic responses in granule and Golgi cells and indirectly inhibiting Purkinje cells. Upon conditioning, the local density of nucleocortical mossy fiber terminals significantly increases. Optogenetic activation and inhibition of nucleocortical fibers in conditioned animals increases and decreases the amplitude of learned eyeblink responses, respectively. Our data show that the excitatory nucleocortical closed-loop circuitry of the cerebellum relays a corollary discharge of premotor signals and suggests an amplifying role of this circuitry in controlling associative motor learning. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
2014-01-01
Persistent spiking in response to a discrete stimulus is considered to reflect the active maintenance of a memory for that stimulus until a behavioral response is made. This response pattern has been reported in learning paradigms that impose a temporal gap between stimulus presentation and behavioral response, including trace eyeblink conditioning. However, it is unknown whether persistent responses are acquired as a function of learning or simply represent an already existing category of response type. This fundamental question was addressed by recording single-unit activity in the medial prefrontal cortex (mPFC) of rabbits during the initial learning phase of trace eyeblink conditioning. Persistent responses to the tone conditioned stimulus were observed in the mPFC during the very first training sessions. Further analysis revealed that most cells with persistent responses showed this pattern during the very first training trial, before animals had experienced paired training. However, persistent cells showed reliable decreases in response magnitude over the first training session, which were not observed on the second day of training or for sessions in which learning criterion was met. This modification of response magnitude was specific to persistent responses and was not observed for cells showing phasic tone-evoked responses. The data suggest that persistent responses to discrete stimuli do not require learning but that the ongoing robustness of such responses over the course of training is modified as a result of experience. Putative mechanisms for this modification are discussed, including changes in cellular or network properties, neuromodulatory tone, and/or the synaptic efficacy of tone-associated inputs. PMID:25080570
Stevens, Andreas; Schwarz, Jürgen; Schwarz, Benedikt; Ruf, Ilona; Kolter, Thomas; Czekalla, Joerg
2002-03-01
Novel and classic neuroleptics differ in their effects on limbic striatal/nucleus accumbens (NA) and prefrontal cortex (PFC) dopamine turnover, suggesting differential effects on implicit and explicit learning as well as on anhedonia. The present study investigates whether such differences can be demonstrated in a naturalistic sample of schizophrenic patients. Twenty-five inpatients diagnosed with DSM-IV schizophrenic psychosis and treated for at least 14 days with the novel neuroleptic olanzapine were compared with 25 schizophrenics taking classic neuroleptics and with 25 healthy controls, matched by age and education level. PFC/NA-dependent implicit learning was assessed by a serial reaction time task (SRTT) and compared with cerebellum-mediated classical eye-blink conditioning and explicit visuospatial memory. Anhedonia was measured with the Snaith-Hamilton Pleasure Scale (SHAPS). Implicit learning (SRTT) and psychomotor speed, but not explicit (visuospatial) learning, were superior in the olanzapine-treated group as compared to the patients on classic neuroleptics. Compared to healthy controls, olanzapine-treated schizophrenics showed similar implicit learning but reduced explicit (visuospatial) memory performance. Acquisition of eyeblink conditioning did not differ between the three groups. There was no difference between the patient groups with regard to anhedonia and SANS scores. Olanzapine seems to interfere less with unattended learning and motor speed than classic neuroleptics. In daily life, this may translate into better adaptation to a rapidly changing environment. The effects seem specific, as no difference from classic neuroleptics was found for explicit learning or eyeblink conditioning.
Ernst, Thomas M; Thürling, Markus; Müller, Sarah; Kahl, Fabian; Maderwald, Stefan; Schlamann, Marc; Boele, Henk-Jan; Koekkoek, Sebastiaan K E; Diedrichsen, Jörn; De Zeeuw, Chris I; Ladd, Mark E; Timmann, Dagmar
2017-08-01
Classical delay eyeblink conditioning is likely the most commonly used paradigm to study cerebellar learning. As yet, few studies have focused on extinction and savings of conditioned eyeblink responses (CRs). Saving effects, which are reflected in a reacquisition after extinction that is faster than the initial acquisition, suggest that learned associations are at least partly preserved during extinction. In this study, we tested the hypothesis that acquisition-related plasticity is nihilated during extinction in the cerebellar cortex, but retained in the cerebellar nuclei, allowing for faster reacquisition. Changes of 7 T functional magnetic resonance imaging (fMRI) signals were investigated in the cerebellar cortex and nuclei of young and healthy human subjects. Main effects of acquisition, extinction, and reacquisition against rest were calculated in conditioned stimulus-only trials. First-level β values were determined for a spherical region of interest (ROI) around the acquisition peak voxel in lobule VI, and dentate and interposed nuclei ipsilateral to the unconditioned stimulus. In the cerebellar cortex and nuclei, fMRI signals were significantly lower in extinction compared to acquisition and reacquisition, but not significantly different between acquisition and reacquisition. These findings are consistent with the theory of bidirectional learning in both the cerebellar cortex and nuclei. It cannot explain, however, why conditioned responses reappear almost immediately in reacquisition following extinction. Although the present data do not exclude that part of the initial memory remains in the cerebellum in extinction, future studies should also explore changes in extracerebellar regions as a potential substrate of saving effects. Hum Brain Mapp 38:3957-3974, 2017. © 2017 Wiley Periodicals, Inc.
Extinction, reacquisition, and rapid forgetting of eyeblink conditioning in developing rats
Freeman, John H.
2014-01-01
Eyeblink conditioning is a well-established model for studying the developmental neurobiology of associative learning and memory. However, age differences in extinction and subsequent reacquisition have yet to be studied using this model. The present study examined extinction and reacquisition of eyeblink conditioning in developing rats. In Experiment 1, post-natal day (P) 17 and 24 rats were trained to a criterion of 80% conditioned responses (CRs) using stimulation of the middle cerebellar peduncle (MCP) as a conditioned stimulus (CS). Stimulation CS-alone extinction training commenced 24 h later, followed by reacquisition training after the fourth extinction session. Contrary to expected results, rats trained starting on P17 showed significantly fewer CRs to stimulation CS-alone presentations relative to P24s, including fewer CRs as early as the first block of extinction session 1. Furthermore, the P17 group was slower to reacquire following extinction. Experiment 2 was run to determine the extent to which the low CR percentage observed in P17s early in extinction reflected rapid forgetting versus rapid extinction. Twenty-four hours after reaching criterion, subjects were trained in a session split into 50 stimulation CS-unconditioned stimulus paired trials followed immediately by 50 stimulation CS-alone trials. With this “immediate” extinction protocol, CR percentages during the first block of stimulation CS-alone presentations were equivalent to terminal acquisition levels at both ages but extinction was more rapid in the P17 group. These findings indicate that forgetting is observed in P17 relative to P24 rats 24 h following acquisition. The forgetting in P17 rats has important implications for the neurobiological mechanisms of memory in the developing cerebellum. PMID:25403458
Andreu-Sánchez, Celia; Martín-Pascual, Miguel Ángel; Gruart, Agnès; Delgado-García, José María
2017-01-01
While movie edition creates a discontinuity in audio-visual works for narrative and economy-of-storytelling reasons, eyeblink creates a discontinuity in visual perception for protective and cognitive reasons. We were interested in analyzing eyeblink rate linked to cinematographic edition styles. We created three video stimuli with different editing styles and analyzed spontaneous blink rate in participants (N = 40). We were also interested in looking for different perceptive patterns in blink rate related to media professionalization. For that, of our participants, half (n = 20) were media professionals, and the other half were not. According to our results, MTV editing style inhibits eyeblinks more than Hollywood style and one-shot style. More interestingly, we obtained differences in visual perception related to media professionalization: we found that media professionals inhibit eyeblink rate substantially compared with non-media professionals, in any style of audio-visual edition. PMID:28220882
Active Inference and Learning in the Cerebellum.
Friston, Karl; Herreros, Ivan
2016-09-01
This letter offers a computational account of Pavlovian conditioning in the cerebellum based on active inference and predictive coding. Using eyeblink conditioning as a canonical paradigm, we formulate a minimal generative model that can account for spontaneous blinking, startle responses, and (delay or trace) conditioning. We then establish the face validity of the model using simulated responses to unconditioned and conditioned stimuli to reproduce the sorts of behavior that are observed empirically. The scheme's anatomical validity is then addressed by associating variables in the predictive coding scheme with nuclei and neuronal populations to match the (extrinsic and intrinsic) connectivity of the cerebellar (eyeblink conditioning) system. Finally, we try to establish predictive validity by reproducing selective failures of delay conditioning, trace conditioning, and extinction using (simulated and reversible) focal lesions. Although rather metaphorical, the ensuing scheme can account for a remarkable range of anatomical and neurophysiological aspects of cerebellar circuitry-and the specificity of lesion-deficit mappings that have been established experimentally. From a computational perspective, this work shows how conditioning or learning can be formulated in terms of minimizing variational free energy (or maximizing Bayesian model evidence) using exactly the same principles that underlie predictive coding in perception.
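The free-energy objective invoked in this account has a standard decomposition: minimizing variational free energy \(F\) with respect to an approximate posterior \(q(s)\) over hidden states \(s\), given observations \(o\), implicitly maximizes Bayesian model evidence \(p(o)\):

```latex
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o,s)\right]
  = \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s\mid o)\right]}_{\ge 0} \;-\; \ln p(o)
```

Because the KL term is non-negative, \(F \ge -\ln p(o)\), so driving \(F\) down both tightens the bound on evidence and pulls \(q(s)\) toward the true posterior, which is the sense in which conditioning can be cast as inference in this scheme.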
ERIC Educational Resources Information Center
Kehoe, E. James; Ludvig, Elliot A.; Sutton, Richard S.
2014-01-01
The present experiment tested whether or not the time course of a conditioned eyeblink response, particularly its duration, would expand and contract, as the magnitude of the conditioned response (CR) changed massively during acquisition, extinction, and reacquisition. The CR duration remained largely constant throughout the experiment, while CR…
Hardiman, Mervyn J.; Hsu, Hsin-jen; Bishop, Dorothy V.M.
2013-01-01
Three converging lines of evidence have suggested that cerebellar abnormality is implicated in developmental language and literacy problems. First, some brain imaging studies have linked abnormalities in cerebellar grey matter to dyslexia and specific language impairment (SLI). Second, theoretical accounts of both dyslexia and SLI have postulated impairments of procedural learning and automatisation of skills, functions that are known to be mediated by the cerebellum. Third, motor learning has been shown to be abnormal in some studies of both disorders. We assessed the integrity of face related regions of the cerebellum using Pavlovian eyeblink conditioning in 7–11 year-old children with SLI. We found no relationship between oral language skills or literacy skills with either delay or trace conditioning in the children. We conclude that this elementary form of associative learning is intact in children with impaired language or literacy development. PMID:24139661
Biobehavioral Markers of Adverse Effect in Fetal Alcohol Spectrum Disorders
Jacobson, Sandra W.; Jacobson, Joseph L.; Stanton, Mark E.; Meintjes, Ernesta M.; Molteno, Christopher D.
2011-01-01
Identification of children with fetal alcohol spectrum disorders (FASD) is difficult because information regarding prenatal exposure is often lacking, a large proportion of affected children do not exhibit facial anomalies, and no distinctive behavioral phenotype has been identified. Castellanos and Tannock have advocated going beyond descriptive symptom-based approaches to diagnosis to identify biomarkers derived from cognitive neuroscience. Classical eyeblink conditioning and magnitude comparison are particularly promising biobehavioral markers of FASD—eyeblink conditioning because a deficit in this elemental form of learning characterizes a very large proportion of alcohol-exposed children; magnitude comparison because it is a domain of higher order cognitive function that is among the most sensitive to fetal alcohol exposure. Because the neural circuitry mediating both these biobehavioral markers is well understood, they have considerable potential for advancing understanding of the pathophysiology of FASD, which can contribute to development of treatments targeted to the specific deficits that characterize this disorder. PMID:21541763
Hoffmann, Loren C.; Cicchese, Joseph J.; Berry, Stephen D.
2015-01-01
Neurobiological oscillations are regarded as essential to normal information processing, including coordination and timing of cells and assemblies within structures as well as in long feedback loops of distributed neural systems. The hippocampal theta rhythm is a 3–12 Hz oscillatory potential observed during cognitive processes ranging from spatial navigation to associative learning. The lower range, 3–7 Hz, can occur during immobility and depends upon the integrity of cholinergic forebrain systems. Several studies have shown that the amount of pre-training theta in the rabbit strongly predicts the acquisition rate of classical eyeblink conditioning and that impairment of this system substantially slows the rate of learning. Our lab has used a brain-computer interface (BCI) that delivers eyeblink conditioning trials contingent upon the explicit presence or absence of hippocampal theta. A behavioral benefit of theta-contingent training has been demonstrated in both delay and trace forms of the paradigm with a two- to four-fold increase in learning speed. This behavioral effect is accompanied by enhanced amplitude and synchrony of hippocampal local field potentials (LFPs), multi-unit excitation, and single-unit response patterns that depend on theta state. Additionally, training in the presence of hippocampal theta has led to increases in the salience of tone-induced unit firing patterns in the medial prefrontal cortex, followed by persistent multi-unit activity during the trace interval. In cerebellum, rhythmicity and precise synchrony of stimulus time-locked LFPs with those of hippocampus occur preferentially under the theta condition. Here we review these findings, integrate them into current models of hippocampal-dependent learning and suggest how improvement in our understanding of neurobiological oscillations is critical for theories of medial temporal lobe processes underlying intact and pathological learning. PMID:25918501
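A theta-contingent BCI of the kind described above must decide, from a short window of hippocampal LFP, whether theta (3–7 Hz) currently dominates before releasing a trial. The sketch below is a generic illustration of that gating logic, not the lab's implementation: the Goertzel-based band-power estimate, the 1–20 Hz reference band, and the 0.5 threshold are all assumptions.

```python
import math

def goertzel_power(samples, fs, freq):
    """Signal power near `freq` (Hz) via the Goertzel algorithm."""
    n = len(samples)
    k = int(0.5 + n * freq / fs)          # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def theta_ratio(samples, fs):
    """Fraction of 1-20 Hz power that falls in the 3-7 Hz theta band.
    A crude stand-in for a theta criterion; band edges are illustrative."""
    theta = sum(goertzel_power(samples, fs, f) for f in range(3, 8))
    broad = sum(goertzel_power(samples, fs, f) for f in range(1, 21))
    return theta / broad if broad > 0 else 0.0

def deliver_trial(samples, fs, threshold=0.5):
    """Gate: release a conditioning trial only when theta dominates."""
    return theta_ratio(samples, fs) >= threshold
```

In use, one-second LFP windows would be streamed through `deliver_trial`, and a trial released on the first window crossing the threshold (for the theta condition) or failing it (for the non-theta condition).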
Xu, Tao; Xiao, Na; Zhai, Xiaolong; Kwan Chan, Pak; Tin, Chung
2018-02-01
Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. This integrated system provides the sufficient computation power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.
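The neuroprosthesis above rests on a spiking neural network of ~10,000 model neurons. The smallest building block of such a network is a leaky integrate-and-fire (LIF) unit; the sketch below shows one such unit with Euler integration as an illustration of the model class, not the authors' FPGA implementation, and all parameter values are illustrative.

```python
def simulate_lif(input_current, dt=1e-3, tau=0.02, r=1.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron (forward-Euler).
    `input_current` is a list of input samples, one per time step `dt`.
    Returns the list of spike times in seconds. Parameters are toy values,
    not those of the cerebellar SNN described in the abstract."""
    v = v_rest
    spikes = []
    for i, current in enumerate(input_current):
        # membrane equation: tau * dV/dt = -(V - v_rest) + R * I
        v += dt / tau * (-(v - v_rest) + r * current)
        if v >= v_thresh:
            spikes.append(i * dt)   # register a spike...
            v = v_reset             # ...and reset the membrane
    return spikes
```

Thousands of such units, wired with cerebellar connectivity and plastic synapses, and stepped fast enough to keep pace with incoming recorded spikes, is what makes the FPGA implementation real-time capable.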
Leuner, Benedetta; Waddell, Jaylyn; Gould, Elizabeth; Shors, Tracey J.
2012-01-01
Some, but not all, types of learning and memory can influence neurogenesis in the adult hippocampus. Trace eyeblink conditioning has been shown to enhance the survival of new neurons, whereas delay eyeblink conditioning has no such effect. The key difference between the two training procedures is that the conditioning stimuli are separated in time during trace but not delay conditioning. These findings raise the question of whether temporal discontiguity is necessary for enhancing the survival of new neurons. Here we used two approaches to test this hypothesis. First, we examined the influence of a delay conditioning task in which the duration of the conditioned stimulus (CS) was increased nearly twofold, a procedure that critically engages the hippocampus. Although the CS and unconditioned stimulus are contiguous, this very long delay conditioning procedure increased the number of new neurons that survived. Second, we examined the influence of learning the trace conditioned response (CR) after having acquired the CR during delay conditioning, a procedure that renders trace conditioning hippocampal-independent. In this case, trace conditioning did not enhance the survival of new neurons. Together, these results demonstrate that associative learning increases the survival of new neurons in the adult hippocampus, regardless of temporal contiguity. PMID:17192426
Impaired delay and trace eyeblink conditioning in school-age children with fetal alcohol syndrome.
Jacobson, Sandra W; Stanton, Mark E; Dodge, Neil C; Pienaar, Mariska; Fuller, Douglas S; Molteno, Christopher D; Meintjes, Ernesta M; Hoyme, H Eugene; Robinson, Luther K; Khaole, Nathaniel; Jacobson, Joseph L
2011-02-01
Classical eyeblink conditioning (EBC) involves contingent temporal pairing of a conditioned stimulus (e.g., tone) with an unconditioned stimulus (e.g., air puff). Impairment of EBC has been demonstrated in studies of alcohol-exposed animals and in children exposed prenatally at heavy levels. Fetal alcohol syndrome (FAS) was diagnosed by expert dysmorphologists in a large sample of Cape Coloured, South African children. Delay EBC was examined in a new sample of 63 children at 11.3 years, and trace conditioning in 32 of the same children at 12.8 years. At each age, 2 sessions of 50 trials each were administered on the same day; 2 more sessions the next day, for children not meeting criterion for conditioning. Six of 34 (17.6%) children born to heavy drinkers were diagnosed with FAS, 28 were heavily exposed nonsyndromal (HE), and 29 were nonexposed controls. Only 33.3% with FAS and 42.9% of HE met criterion for delay conditioning, compared with 79.3% of controls. The more difficult trace conditioning task was also highly sensitive to fetal alcohol exposure. Only 16.7% of the FAS and 21.4% of HE met criterion for trace conditioning, compared with 66.7% of controls. The magnitude of the effect of diagnostic group on trace conditioning was not greater than the effect on short delay conditioning, findings consistent with recent rat studies. Longer latency to onset and peak eyeblink CR in exposed children indicated poor timing and failure to blink in anticipation of the puff. Extended training resulted in some but not all of the children reaching criterion. These data showing alcohol-related delay and trace conditioning deficits extend our earlier findings of impaired EBC in 5-year-olds to school-age. Alcohol-related impairment in the cerebellar circuitry required for both forms of conditioning may be sufficient to account for the deficit in both tasks. Extended training was beneficial for some exposed children. 
EBC provides a well-characterized model system for assessment of degree of cerebellar-related learning and memory dysfunction in fetal alcohol exposed children. Copyright © 2010 by the Research Society on Alcoholism.
Impaired delay and trace eyeblink conditioning in school-age children with fetal alcohol syndrome
Jacobson, Sandra W.; Stanton, Mark E.; Dodge, Neil C.; Pienaar, Mariska; Fuller, Douglas S.; Molteno, Christopher D.; Meintjes, Ernesta M.; Hoyme, H. Eugene; Robinson, Luther K.; Khaole, Nathaniel; Jacobson, Joseph L.
2013-01-01
Background Classical eyeblink conditioning (EBC) involves contingent temporal pairing of a conditioned stimulus (e.g., tone) with an unconditioned stimulus (e.g., air puff). Impairment of EBC has been demonstrated in studies of alcohol-exposed animals and in children exposed prenatally at heavy levels. Methods Fetal alcohol syndrome (FAS) was diagnosed by expert dysmorphologists in a large sample of Cape Coloured, South African children. Delay EBC was examined in a new sample of 63 children at 11.3 years, and trace conditioning in 32 of the same children at 12.8 years. At each age, two sessions of 50 trials each were administered on the same day; two more sessions the next day, for children not meeting criterion for conditioning. Results 6 of 34 (17.6%) children born to heavy drinkers were diagnosed with FAS, 28 were heavily exposed nonsyndromal (HE), and 29 were non-exposed controls. Only 33.3% with FAS and 42.9% of HE met criterion for delay conditioning, compared with 79.3% of controls. The more difficult trace conditioning task was also highly sensitive to fetal alcohol exposure. Only 16.7% of the FAS and 21.4% of HE met criterion for trace conditioning, compared with 66.7% of controls. The magnitude of the effect of diagnostic group on trace conditioning was not greater than the effect on short delay conditioning, findings consistent with recent rat studies. Longer latency to onset and peak eyeblink CR in exposed children indicated poor timing and failure to blink in anticipation of the puff. Extended training resulted in some but not all of the children reaching criterion. Conclusions These data showing alcohol-related delay and trace conditioning deficits extend our earlier findings of impaired EBC in 5-year-olds to school-age. Alcohol-related impairment in the cerebellar circuitry required for both forms of conditioning may be sufficient to account for the deficit in both tasks. Extended training was beneficial for some exposed children. 
EBC provides a well-characterized model system for assessment of degree of cerebellar-related learning and memory dysfunction in fetal alcohol exposed children. PMID:21073484
ERIC Educational Resources Information Center
Nokia, Miriam S.; Waselius, Tomi; Mikkonen, Jarno E.; Wikgren, Jan; Penttonen, Markku
2015-01-01
Hippocampal theta (3-12 Hz) oscillations are implicated in learning and memory, but their functional role remains unclear. We studied the effect of the phase of the local theta oscillation on hippocampal responses to a neutral conditioned stimulus (CS) and subsequent learning of classical trace eyeblink conditioning in adult rabbits. High-amplitude, regular…
NASA Astrophysics Data System (ADS)
Xu, Tao; Xiao, Na; Zhai, Xiaolong; Chan, Pak Kwan; Tin, Chung
2018-02-01
Objective. Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). Approach. The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. Main results. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. Significance. This integrated system provides the sufficient computation power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.
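The abstract describes CS-gated synaptic plasticity producing an anticipatory blink, but not the model's equations. The toy simulation below is purely illustrative: a single low-pass CS trace drives an eyelid unit, and a US-gated Hebbian update strengthens the CS weight until a conditioned response (CR) emerges before US onset. All names, constants, and the learning rule are assumptions, not the authors' ~10,000-neuron FPGA model.

```python
def run_dec_training(n_trials=100, isi_ms=280, dt_ms=1.0, lr=0.02):
    """Toy delay eyeblink conditioning (DEC): a CS-driven trace gates
    an eyelid unit; pairing with the US strengthens the CS weight so a
    conditioned response (blink before US onset) emerges with training."""
    w = 0.0                          # CS -> eyelid synaptic weight
    cr_onsets = []                   # per-trial CR onset time (None = no CR)
    for trial in range(n_trials):
        trace = 0.0
        cr_time = None
        for step in range(int((isi_ms + 100) / dt_ms)):
            t = step * dt_ms
            cs_on = t < isi_ms + 50          # CS co-terminates with US
            us_on = isi_ms <= t < isi_ms + 50
            # low-pass "eligibility" trace of the CS input (tau = 50 ms)
            trace += dt_ms / 50.0 * ((1.0 if cs_on else 0.0) - trace)
            drive = w * trace + (1.0 if us_on else 0.0)
            if cr_time is None and t < isi_ms and drive > 0.5:
                cr_time = t                  # blink before the US = CR
            if us_on:                        # US-gated Hebbian update
                w += lr * trace * (1.0 - w)
        cr_onsets.append(cr_time)
    return w, cr_onsets
```

Varying `isi_ms` shifts the CR onset, loosely mirroring the paper's finding that the system reproduces DEC learning across different inter-stimulus intervals.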
Allen, M T; Handy, J D; Blankenship, M R; Servatius, R J
2018-06-01
Recent work has focused on a learning diathesis model in which specific personality factors such as behavioral inhibition (BI) may influence associative learning and, in turn, increase risk for the development of anxiety disorders. We have found in a series of studies that individuals self-reporting high levels of BI exhibit enhanced acquisition of conditioned eyeblinks. In the study reported here, hypotheses were extended to include distressed (Type D) personality, which has been found to be related to BI. Type D personality is measured with the DS-14 scale, which includes two subscales measuring negative affectivity (NA) and social inhibition (SI). We hypothesized that SI, which is similar to BI, would result in enhanced acquisition, while the effect of NA was unclear. Eighty-nine participants completed personality inventories including the Adult Measure of Behavioral Inhibition (AMBI) and the DS-14. All participants received 60 acquisition trials with a 500 ms, 1000 Hz tone CS and a co-terminating 50 ms, 5 psi corneal airpuff US. Participants received either 100% CS-US paired trials or a partial reinforcement schedule in which 50% US-alone trials were intermixed into CS-US training. Acquisition of CRs did not differ between the two training protocols. Whereas BI was significantly related to Type D, SI, and NA, only BI and SI individuals exhibited enhanced acquisition of conditioned eyeblinks as compared to non-inhibited individuals. Personality factors, now including social inhibition, can be used to identify individuals who express enhanced associative learning, which lends further support to a learning diathesis model of anxiety disorders. Copyright © 2018 Elsevier B.V. All rights reserved.
Eye-Blink Behaviors in 71 Species of Primates
Tada, Hideoki; Omori, Yasuko; Hirokawa, Kumi; Ohira, Hideki; Tomonaga, Masaki
2013-01-01
The present study was performed to investigate the associations between eye-blink behaviors and various other factors in primates. We video-recorded 141 individuals across 71 primate species and analyzed the blink rate, blink duration, and “isolated” blink ratio (i.e., blinks without eye or head movement) in relation to activity rhythms, habitat types, group size, and body size factors. The results showed close relationships between three types of eye-blink measures and body size factors. All of these measures increased as a function of body weight. In addition, diurnal primates showed more blinks than nocturnal species even after controlling for body size factors. The most important findings were the relationships between eye-blink behaviors and social factors, e.g., group size. Among diurnal primates, only the blink rate was significantly correlated even after controlling for body size factors. The blink rate increased as the group size increased. Enlargement of the neocortex is strongly correlated with group size in primate species and considered strong evidence for the social brain hypothesis. Our results suggest that spontaneous eye-blinks have acquired a role in social communication, similar to grooming, to adapt to complex social living during primate evolution. PMID:23741522
Eyeblink Classical Conditioning and Post-Traumatic Stress Disorder – A Model Systems Approach
Schreurs, Bernard G.; Burhans, Lauren B.
2015-01-01
Not everyone exposed to trauma suffers flashbacks, bad dreams, numbing, fear, anxiety, sleeplessness, hyper-vigilance, hyperarousal, or an inability to cope, but those who do may suffer from post-traumatic stress disorder (PTSD). PTSD is a major physical and mental health problem for military personnel and civilians exposed to trauma. There is still debate about the incidence and prevalence of PTSD especially among the military, but for those who are diagnosed, behavioral therapy and drug treatment strategies have proven to be less than effective. A number of these treatment strategies are based on rodent fear conditioning research and are capable of treating only some of the symptoms because the extinction of fear does not deal with the various forms of hyper-vigilance and hyperarousal experienced by people with PTSD. To help address this problem, we have developed a preclinical eyeblink classical conditioning model of PTSD in which conditioning and hyperarousal can both be extinguished. We review this model and discuss findings showing that unpaired stimulus presentations can be effective in reducing levels of conditioning and hyperarousal even when unconditioned stimulus intensity is reduced to the point where it is barely capable of eliciting a response. These procedures have direct implications for the treatment of PTSD and could be implemented in a virtual reality environment. PMID:25904874
McGlinchey, Regina E.; Fortier, Catherine B.; Venne, Jonathan R.; Maksimovskiy, Arkadiy L.; Milberg, William P.
2014-01-01
This study examined the performance of veterans and active duty personnel who served in Operation Enduring Freedom and/or Operation Iraqi Freedom (OEF/OIF) on a basic associative learning task. Eighty-eight individuals participated in this study. All received a comprehensive clinical evaluation to determine the presence and severity of posttraumatic stress disorder (PTSD) and traumatic brain injury (TBI). The eyeblink conditioning task was composed of randomly intermixed delay and trace conditioned stimulus (CS) and unconditioned stimulus (US) pairs (acquisition) followed by a series of CS only trials (extinction). Results revealed that those with a clinical diagnosis of PTSD or a diagnosis of PTSD with comorbid mTBI acquired delay and trace conditioned responses (CRs) to levels and at rates similar to a deployed control group, thus suggesting intact basic associative learning. Differential extinction impairment was observed in the two clinical groups. Acquisition of CRs for both delay and trace conditioning, as well as extinction of trace CRs, was associated with alcoholic behavior across all participants. These findings help characterize the learning and memory function of individuals with PTSD and mTBI from OEF/OIF and raise the alarming possibility that the use of alcohol in this group may lead to more significant cognitive dysfunction. PMID:24625622
ERIC Educational Resources Information Center
Simon, Barbara B.; Knuckley, Bryan; Powell, Donald A.
2004-01-01
Previous work has demonstrated that drugs increasing brain concentrations of acetylcholine can enhance cognition in aging and brain-damaged organisms. The present study assessed whether galantamine (GAL), an allosteric modulator of nicotinic cholinergic receptors and weak acetylcholinesterase inhibitor, could improve acquisition and retention of…
Nomura, Ryota; Hino, Kojun; Shimazu, Makoto; Liang, Yingzong; Okada, Takeshi
2015-01-01
Collective spectator communications such as oral presentations, movies, and storytelling performances are ubiquitous in human culture. This study investigated the effects of past viewing experiences and differences in expressive performance on an audience’s transportive experience into a created world of a storytelling performance. In the experiment, 60 participants (mean age = 34.12 years, SD = 13.18 years, range 18–63 years) were assigned to watch one of two videotaped performances that were played (1) in an orthodox way for frequent viewers and (2) in a modified way aimed at easier comprehension for first-time viewers. Eyeblink synchronization among participants was quantified by employing distance-based measurements of spike trains, Dspike and Dinterval (Victor and Purpura, 1997). The results indicated that even non-familiar participants’ eyeblinks were synchronized as the story progressed and that the effect of the viewing experience on transportation was weak. Rather, the results of a multiple regression analysis demonstrated that the degrees of transportation could be predicted by a retrospectively reported humor experience and higher real-time variability (i.e., logarithmic transformed SD) of inter blink intervals during a performance viewing. The results are discussed from the viewpoint in which the extent of eyeblink synchronization and eyeblink-rate variability acts as an index of the inner experience of audience members. PMID:26029123
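The Dspike measure cited above is the Victor–Purpura spike-time metric: the minimal cost of transforming one event train into another, where inserting or deleting an event costs 1 and shifting an event by dt costs q·|dt|. A minimal dynamic-programming sketch (function name and interface are my own; treat eyeblink times as "spikes"):

```python
def victor_purpura_distance(a, b, q):
    """Victor-Purpura distance between sorted event-time lists a and b.
    Insert/delete costs 1; moving an event by dt costs q * |dt|."""
    n, m = len(a), len(b)
    # dp[i][j] = distance between the first i events of a and first j of b
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = float(i)                  # delete all i events
    for j in range(1, m + 1):
        dp[0][j] = float(j)                  # insert all j events
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = min(
                dp[i - 1][j] + 1.0,                              # delete a[i-1]
                dp[i][j - 1] + 1.0,                              # insert b[j-1]
                dp[i - 1][j - 1] + q * abs(a[i - 1] - b[j - 1]), # shift
            )
    return dp[n][m]
```

With q = 0 the metric counts only event-number differences; large q makes it sensitive to precise timing, which is what lets it quantify blink synchronization between audience members.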
Kaneko, T; Thompson, R F
1997-05-01
Central muscarinic cholinergic involvement in classical conditioning of eyeblink responses was determined in trace and delay paradigms. Rabbits were trained on a trace procedure in which a 250-ms tone conditioned stimulus (CS) and a 100-ms air-puff unconditioned stimulus (UCS) were presented with a 500-ms trace interval. Each training session day consisted of ten tone alone, ten air-puff alone and 80 paired CS-UCS trials. Scopolamine hydrochloride at doses of 0.03 and 0.1 mg/0.5 ml per kg, s.c. dose-dependently disrupted acquisition of conditioned responses. Rabbits that were treated with scopolamine and failed to learn showed a gradual increase in conditioned responses during an additional training period with saline injections and no transfer from earlier training. Scopolamine methyl bromide, which does not appreciably cross the blood-brain barrier, showed no effects in the trace conditioning paradigm at a dose of 0.1 mg/kg, s.c., indicating central cholinergic blockade is responsible for the suppressive effect of scopolamine. Scopolamine hydrochloride at a dose of 0.1 mg/kg, s.c. did not block acquisition in the delay procedure with a 250-ms inter-stimulus interval, although the rate of acquisition was somewhat reduced by the drug. These data are the first to demonstrate that classical conditioning of the eyeblink response in the trace procedure is highly sensitive to central cholinergic deficits.
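The trace/delay distinction above is purely structural: in trace conditioning a stimulus-free gap separates CS offset from US onset, while in delay conditioning the US arrives during the CS. A small sketch of the two trial timelines, using the parameters reported in this abstract (250-ms CS, 100-ms US, 500-ms trace interval); the delay branch assumes the conventional co-terminating arrangement:

```python
def trial_timeline(paradigm, cs_ms=250, us_ms=100, trace_ms=500, dt_ms=10):
    """Boolean CS/US timelines for one conditioning trial (True = on).
    trace: a stimulus-free gap separates CS offset and US onset.
    delay: US onset falls at the 250-ms ISI and the CS is extended to
    co-terminate with the US, as is conventional in delay paradigms."""
    if paradigm == "trace":
        us_start = cs_ms + trace_ms
        cs_end = cs_ms
    elif paradigm == "delay":
        us_start = cs_ms               # ISI = CS duration = 250 ms
        cs_end = us_start + us_ms      # CS co-terminates with US
    else:
        raise ValueError(paradigm)
    total = us_start + us_ms
    ts = range(0, total, dt_ms)
    cs = [t < cs_end for t in ts]
    us = [us_start <= t < us_start + us_ms for t in ts]
    return cs, us
```

The gap in the trace timeline is what forces the animal to bridge the interval with a stimulus memory, which is why trace conditioning is the more hippocampus- and cholinergic-dependent variant.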
Knudson, Inge M; Melcher, Jennifer R
2016-06-01
Increases in the acoustic startle response (ASR) of animals have been reported following experimental manipulations to induce tinnitus, an auditory disorder defined by phantom perception of sound. The increases in ASR have been proposed to signify the development of hyperacusis, a clinical condition defined by intolerance of normally tolerable sound levels. To test this proposal, the present study compared ASR amplitude to measures of sound-level tolerance (SLT) in humans, the only species in which SLT can be directly assessed. Participants had clinically normal/near-normal hearing thresholds, were free of psychotropic medications, and comprised people with tinnitus and without. ASR was measured as eyeblink-related electromyographic activity in response to a noise pulse presented at a range of levels and in two background conditions (noise and quiet). SLT was measured as loudness discomfort level (LDL), the lowest level of sound deemed uncomfortable, and via a questionnaire on the loudness of sounds in everyday life. Regardless of tinnitus status, ASR amplitude at a given stimulus level increased with decreasing LDL, but showed no relationship to SLT self-reported via the questionnaire. These relationships (or lack thereof) could not be attributed to hearing threshold, age, anxiety, or depression. The results imply that increases in ASR in the animal work signify decreases in LDL specifically and may not correspond to the development of hyperacusis as would be self-reported by a clinic patient.
Sex differences in learning processes of classical and operant conditioning
Dalla, Christina; Shors, Tracey J.
2009-01-01
Males and females learn and remember differently at different times in their lives. These differences occur in most species, from invertebrates to humans. We review here sex differences as they occur in laboratory rodent species. We focus on classical and operant conditioning paradigms, including classical eyeblink conditioning, fear conditioning, active avoidance and conditioned taste aversion. Sex differences have been reported during acquisition, retention and extinction in most of these paradigms. In general, females perform better than males in classical eyeblink conditioning, in fear-potentiated startle and in most operant conditioning tasks, such as the active avoidance test. However, in the classical fear conditioning paradigm, in certain lever-pressing paradigms and in conditioned taste aversion, males outperform females or are more resistant to extinction. Most sex differences in conditioning are dependent on organizational effects of gonadal hormones during early development of the brain, in addition to modulation by activational effects during puberty and adulthood. Critically, sex differences in performance account for some of the reported effects on learning and these are discussed throughout the review. Because so many mental disorders are more prevalent in one sex than the other, it is important to consider sex differences in learning when applying animal models of learning for these disorders. Finally, we discuss how sex differences in learning continue to alter the brain throughout the lifespan. Thus, sex differences in learning are not only mediated by sex differences in the brain, but also contribute to them. PMID:19272397
Burhans, Lauren B; Smith-Bell, Carrie A; Schreurs, Bernard G
2017-10-01
Glutamatergic dysfunction is implicated in many neuropsychiatric conditions, including post-traumatic stress disorder (PTSD). Glutamate antagonists have shown some utility in treating PTSD symptoms, whereas glutamate agonists may facilitate cognitive behavioral therapy outcomes. We have developed an animal model of PTSD, based on conditioning of the rabbit's eyeblink response, that addresses two key features: conditioned responses (CRs) to cues associated with an aversive event and a form of conditioned hyperarousal referred to as conditioning-specific reflex modification (CRM). The optimal treatment to reduce both CRs and CRM is unpaired extinction. The goals of the study were to examine whether treatment with the N-methyl-D-aspartate glutamate receptor antagonist ketamine could reduce CRs and CRM, and whether the N-methyl-D-aspartate agonist D-cycloserine combined with unpaired extinction treatment could enhance the extinction of both. Administration of a single dose of subanesthetic ketamine had no significant immediate or delayed effect on CRs or CRM. Combining D-cycloserine with a single day of unpaired extinction facilitated extinction of CRs in the short term while having no impact on CRM. These results caution that treatments may improve one aspect of the PTSD symptomology while having no significant effects on other symptoms, stressing the importance of a multiple-treatment approach to PTSD and of animal models that address multiple symptoms.
ERIC Educational Resources Information Center
Weible, Aldis P.; Oh, M. Matthew; Lee, Grace; Disterhoft, John F.
2004-01-01
Cholinergic systems are critical to the neural mechanisms mediating learning. Reduced nicotinic cholinergic receptor (nAChR) binding is a hallmark of normal aging. These reductions are markedly more severe in some dementias, such as Alzheimer's disease. Pharmacological central nervous system therapies are a means to ameliorate the cognitive…
Lindquist, Derick H.; Sokoloff, Greta; Milner, Eric; Steinmetz, Joseph E.
2013-01-01
Exposure to ethanol in neonatal rats results in reduced neuronal numbers in the cerebellar cortex and deep nuclei of juvenile and adult animals. This reduction in cell numbers is correlated with impaired delay eyeblink conditioning (EBC), a simple motor learning task in which a neutral conditioned stimulus (CS; tone) is repeatedly paired with a co-terminating unconditioned stimulus (US; periorbital shock). Across training, cell populations in the interpositus (IP) nucleus model the temporal form of the eyeblink conditioned response (CR). The hippocampus, though not required for delay EBC, also shows learning-dependent increases in CA1 and CA3 unit activity. In the present study, rat pups were exposed to 0, 3, 4, or 5 mg/kg/day of ethanol during postnatal days (PD) 4–9. As adults, CR acquisition and timing were assessed during 6 training sessions of delay EBC with a short (280 msec) interstimulus interval (ISI; time from CS onset to US onset) followed by another 6 sessions with a long (880 msec) ISI. Neuronal activity was recorded in the IP and area CA1 during all 12 sessions. The high-dose rats learned the most slowly and, with the moderate-dose rats, produced the longest CR peak latencies over training to the short ISI. The low dose of alcohol impaired CR performance to the long ISI only. The 3E (3 mg/kg/day of ethanol) and 5E (5 mg/kg/day of ethanol) rats also showed slower-than-normal increases in learning-dependent excitatory unit activity in the IP and CA1. The 4E (4 mg/kg/day of ethanol) rats showed a higher rate of CR production to the long ISI and enhanced IP and CA1 activation when compared to the 3E and 5E rats. The results indicate that binge-like ethanol exposure in neonatal rats induces long-lasting, dose-dependent deficits in CR acquisition and timing and diminishes conditioning-related neuronal excitation in both the cerebellum and hippocampus. PMID:23871534
Voluntary eyeblinks disrupt iconic memory.
Thomas, Laura E; Irwin, David E
2006-04-01
In the present research, we investigated whether eyeblinks interfere with cognitive processing. In Experiment 1, the participants performed a partial-report iconic memory task in which a letter array was presented for 106 msec, followed 50, 150, or 750 msec later by a tone that cued recall of one row of the array. At a cue delay of 50 msec between array offset and cue onset, letter report accuracy was lower when the participants blinked following array presentation than under no-blink conditions; the participants made more mislocation errors under blink conditions. This result suggests that blinking interferes with the binding of object identity and object position in iconic memory. Experiment 2 demonstrated that interference due to blinks was not due merely to changes in light intensity. Experiments 3 and 4 demonstrated that other motor responses did not interfere with iconic memory. We propose a new phenomenon, cognitive blink suppression, in which blinking inhibits cognitive processing. This phenomenon may be due to neural interference. Blinks reduce activation in area V1, which may interfere with the representation of information in iconic memory.
Baijot, Simon; Slama, Hichem; Söderlund, Göran; Dan, Bernard; Deltenre, Paul; Colin, Cécile; Deconinck, Nicolas
2016-03-15
Optimal stimulation theory and the moderate brain arousal (MBA) model hypothesize that extra-task stimulation (e.g. white noise) could improve cognitive functions of children with attention-deficit/hyperactivity disorder (ADHD). We investigated the benefits of white noise on attention and inhibition in children with and without ADHD (7-12 years old), at both behavioral and neurophysiological levels. Thirty children with and without ADHD performed a visual cued Go/Nogo task in two conditions (white noise or no-noise exposure), in which behavioral and P300 (mean amplitudes) data were analyzed. Spontaneous eye-blink rates were also recorded and participants went through neuropsychological assessment. Two separate analyses were conducted with each child separately assigned into two groups (1) ADHD or typically developing children (TDC), and (2) noise beneficiaries or non-beneficiaries according to the observed performance during the experiment. This latter categorization, based on a new index we called the "Noise Benefits Index" (NBI), was proposed to determine a neuropsychological profile positively sensitive to noise. Noise exposure reduced the omission rate in children with ADHD, who were no longer different from TDC. Eye-blink rate was higher in children with ADHD but was not modulated by white noise. The NBI indicated a significant relationship between ADHD and noise benefit. Strong correlations were observed between noise benefit and neuropsychological weaknesses in vigilance and inhibition. Participants who benefited from noise had an increased Go P300 in the noise condition. The improvement of children with ADHD under white noise supports both optimal stimulation theory and the MBA model. However, the eye-blink rate results question the dopaminergic hypothesis in the latter. The NBI evidenced a profile positively sensitive to noise, related to ADHD, and associated with weaker cognitive control.
Cicchese, Joseph J.; Berry, Stephen D.
2016-01-01
Typical information processing is thought to depend on the integrity of neurobiological oscillations that may underlie coordination and timing of cells and assemblies within and between structures. The 3–7 Hz bandwidth of hippocampal theta rhythm is associated with cognitive processes essential to learning and depends on the integrity of cholinergic, GABAergic, and glutamatergic forebrain systems. Since several significant psychiatric disorders appear to result from dysfunction of medial temporal lobe (MTL) neurochemical systems, preclinical studies on animal models may be an important step in defining and treating such syndromes. Many studies have shown that the amount of hippocampal theta in the rabbit strongly predicts the acquisition rate of classical eyeblink conditioning and that impairment of this system substantially slows the rate of learning and attainment of asymptotic performance. Our lab has developed a brain–computer interface that makes eyeblink training trials contingent upon the explicit presence or absence of hippocampal theta. The behavioral benefit of theta-contingent training has been demonstrated in both delay and trace forms of the paradigm with a two- to fourfold increase in learning speed over non-theta states. The non-theta behavioral impairment is accompanied by disruption of the amplitude and synchrony of hippocampal local field potentials, multiple-unit excitation, and single-unit response patterns dependent on theta state. Our findings indicate a significant electrophysiological and behavioral impact of the pretrial state of the hippocampus that suggests an important role for this MTL system in associative learning and a significant deleterious impact in the absence of theta. Here, we focus on the impairments in the non-theta state, integrate them into current models of psychiatric disorders, and suggest how improvement in our understanding of neurobiological oscillations is critical for theories and treatment of psychiatric pathology. 
PMID:26903886
Physiological artifacts in scalp EEG and ear-EEG.
Kappel, Simon L; Looney, David; Mandic, Danilo P; Kidmose, Preben
2017-08-11
A problem inherent to recording EEG is the interference arising from noise and artifacts. While in a laboratory environment artifacts and interference can, to a large extent, be avoided or controlled, in real-life scenarios this is a challenge. Ear-EEG is a concept where EEG is acquired from electrodes in the ear. We present a characterization of physiological artifacts generated in a controlled environment for nine subjects. The influence of the artifacts was quantified in terms of the signal-to-noise ratio (SNR) deterioration of the auditory steady-state response. Alpha band modulation was also studied in an open/closed-eyes paradigm. Artifacts related to jaw muscle contractions were present all over the scalp and in the ear, with the highest SNR deteriorations in the gamma band. The SNR deterioration for jaw artifacts was in general higher in the ear than on the scalp. Whereas eye-blinking did not influence the SNR in the ear, it was significant for all groups of scalp electrodes in the delta and theta bands. Eye movements resulted in statistically significant SNR deterioration in frontal, temporal, and ear electrodes. Recordings of alpha band modulation showed increased power and coherence of the EEG for ear and scalp electrodes in the closed-eyes periods. Ear-EEG is a method developed for unobtrusive and discreet recording over long periods of time and in real-life environments. This study investigated the influence of the most important types of physiological artifacts and demonstrated that spontaneous activity, in terms of alpha band oscillations, could be recorded from the ear-EEG platform. In its present form, ear-EEG was more prone to jaw-related artifacts and less prone to eye-blinking artifacts than state-of-the-art scalp-based systems.
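The SNR-deterioration measure used above can be illustrated with a toy computation: SNR expressed in decibels, and deterioration as the drop from a clean baseline to an artifact-contaminated condition. The 10·log10 power-ratio formulation and the specific power values below are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def snr_db(signal_power, noise_power):
    """SNR in decibels from signal and noise power estimates."""
    return 10.0 * np.log10(signal_power / noise_power)

# Hypothetical power estimates (arbitrary units) for a steady-state
# response in a clean baseline versus during jaw clenching.
baseline_snr = snr_db(signal_power=4.0, noise_power=1.0)   # ~6.02 dB
artifact_snr = snr_db(signal_power=4.0, noise_power=16.0)  # ~-6.02 dB

# SNR deterioration: how far the artifact condition degrades the SNR.
deterioration = baseline_snr - artifact_snr
print(round(deterioration, 2))  # 12.04
```

A larger deterioration value means the artifact buries more of the steady-state response, which is how the ear and scalp channels can be compared on a common scale.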
Antonietti, Alberto; Casellato, Claudia; D'Angelo, Egidio; Pedrocchi, Alessandra
The cerebellum plays a critical role in sensorimotor control. However, how the specific circuits and plastic mechanisms of the cerebellum are engaged in closed-loop processing is still unclear. We developed an artificial sensorimotor control system embedding a detailed spiking cerebellar microcircuit with three bidirectional plasticity sites. This proved able to reproduce a cerebellar-driven associative paradigm, the eyeblink classical conditioning (EBCC), in which a precise time relationship between an unconditioned stimulus (US) and a conditioned stimulus (CS) is established. We challenged the spiking model to fit an experimental data set from human subjects. Two subsequent sessions of EBCC acquisition and extinction were recorded and transcranial magnetic stimulation (TMS) was applied on the cerebellum to alter circuit function and plasticity. Evolutionary algorithms were used to find the near-optimal model parameters to reproduce the behaviors of subjects in the different sessions of the protocol. The main finding is that the optimized cerebellar model was able to learn to anticipate (predict) conditioned responses with accurate timing and success rate, demonstrating fast acquisition, memory stabilization, rapid extinction, and faster reacquisition as in EBCC in humans. The firing of Purkinje cells (PCs) and deep cerebellar nuclei (DCN) changed during learning under the control of synaptic plasticity, which evolved at different rates, with a faster acquisition in the cerebellar cortex than in DCN synapses. Eventually, a reduced PC activity released DCN discharge just after the CS, precisely anticipating the US and causing the eyeblink. Moreover, a specific alteration in cortical plasticity explained the EBCC changes induced by cerebellar TMS in humans. In this paper, for the first time, it is shown how closed-loop simulations, using detailed cerebellar microcircuit models, can be successfully used to fit real experimental data sets. 
Thus, the changes of the model parameters in the different sessions of the protocol unveil how implicit microcircuit mechanisms can generate normal and altered associative behaviors.
Changes in complex spike activity during classical conditioning
Rasmussen, Anders; Jirenhed, Dan-Anders; Wetmore, Daniel Z.; Hesslow, Germund
2014-01-01
The cerebellar cortex is necessary for adaptively timed conditioned responses (CRs) in eyeblink conditioning. During conditioning, Purkinje cells acquire pause responses or "Purkinje cell CRs" to the conditioned stimuli (CS), resulting in disinhibition of the cerebellar nuclei (CN), allowing them to activate motor nuclei that control eyeblinks. This disinhibition also causes inhibition of the inferior olive (IO) via the nucleo-olivary (N-O) pathway. Activation of the IO, which relays the unconditioned stimulus (US) to the cortex, elicits characteristic complex spikes in Purkinje cells. Although Purkinje cell activity, as well as stimulation of the CN, is known to influence IO activity, much remains to be learned about the way that learned changes in simple spike firing affect the IO. In the present study, we analyzed changes in simple and complex spike firing in extracellular Purkinje cell recordings from the C3 zone in decerebrate ferrets undergoing training in a conditioning paradigm. In agreement with the N-O feedback hypothesis, acquisition resulted in a gradual decrease in complex spike activity during the conditioned stimulus, with a delay that is consistent with the long N-O latency. Also supporting the feedback hypothesis, training with a short interstimulus interval (ISI), which does not lead to acquisition of a Purkinje cell CR, did not cause a suppression of complex spike activity. In contrast, the observations that extinction did not lead to a recovery in complex spike activity, and the irregular patterns of simple and complex spike activity after the conditioned stimulus, are less conclusive. PMID:25140129
Affective Modulation of the Startle Eyeblink and Postauricular Reflexes in Autism Spectrum Disorder
ERIC Educational Resources Information Center
Dichter, Gabriel S.; Benning, Stephen D.; Holtzclaw, Tia N.; Bodfish, James W.
2010-01-01
Eyeblink and postauricular reflexes to standardized affective images were examined in individuals without (n = 37) and with (n = 20) autism spectrum disorders (ASDs). Affective reflex modulation in control participants replicated previous findings. The ASD group, however, showed anomalous reflex modulation patterns, despite similar self-report…
Reflex Augmentation of a Tap-Elicited Eyeblink: The Effects of Tone Frequency and Tap Intensity.
ERIC Educational Resources Information Center
Cohen, Michelle E.; And Others
1986-01-01
Describes two experiments that examined whether the amplitude of the human eyeblink elicited by a mild tap between the eyebrows can be increased if a brief tone is presented simultaneously with the tap, and how these effects change from newborn infants to adults. (HOD)
Brown, Kevin L.; Stanton, Mark E.
2008-01-01
Eyeblink classical conditioning (EBC) was observed across a broad developmental period with tasks utilizing two interstimulus intervals (ISIs). In ISI discrimination, two distinct conditioned stimuli (CSs; light and tone) are reinforced with a periocular shock unconditioned stimulus (US) at two different CS-US intervals. Temporal uncertainty is identical in design with the exception that the same CS is presented at both intervals. Developmental changes in conditioning have been reported in each task beyond ages when single-ISI learning is well developed. The present study sought to replicate and extend these previous findings by testing each task at four separate ages. Consistent with previous findings, younger rats (postnatal days [PD] 23 and 30) trained in ISI discrimination showed evidence of enhanced cross-modal influence of the short CS-US pairing upon long CS conditioning relative to older subjects. ISI discrimination training at PD43-47 yielded outcomes similar to those in adults (PD65-71). Cross-modal transfer effects in this task therefore appear to diminish between PD30 and PD43-47. Comparisons of ISI discrimination with temporal uncertainty indicated that cross-modal transfer in ISI discrimination at the youngest ages did not represent complete generalization across CSs. ISI discrimination undergoes a more protracted developmental emergence than single-cue EBC and may be a more sensitive indicator of developmental disorders involving cerebellar dysfunction. PMID:18726989
Modeling startle eyeblink electromyogram to assess fear learning.
Khemka, Saurabh; Tzovara, Athina; Gerster, Samuel; Quednow, Boris B; Bach, Dominik R
2017-02-01
Pavlovian fear conditioning is widely used as a laboratory model of associative learning in human and nonhuman species. In this model, an organism is trained to predict an aversive unconditioned stimulus from initially neutral events (conditioned stimuli, CS). In humans, fear memory is typically measured via conditioned autonomic responses or fear-potentiated startle. For the latter, various analysis approaches have been developed, but a systematic comparison of competing methodologies is lacking. Here, we investigate the suitability of a model-based approach to startle eyeblink analysis for assessment of fear memory, and compare this to extant analysis strategies. First, we build a psychophysiological model (PsPM) on a generic startle response. Then, we optimize and validate this PsPM on three independent fear-conditioning data sets. We demonstrate that our model can robustly distinguish aversive (CS+) from nonaversive stimuli (CS-, i.e., has high predictive validity). Importantly, our model-based approach captures fear-potentiated startle during fear retention as well as fear acquisition. Our results establish a PsPM-based approach to assessment of fear-potentiated startle, and qualify previous peak-scoring methods. Our proposed model represents a generic startle response and can potentially be used beyond fear conditioning, for example, to quantify affective startle modulation or prepulse inhibition of the acoustic startle response. © 2016 The Authors. Psychophysiology published by Wiley Periodicals, Inc. on behalf of Society for Psychophysiological Research.
Asnaani, Anu; Sawyer, Alice T.; Aderka, Idan M.; Hofmann, Stefan G.
2012-01-01
To examine the effects of different emotion regulation strategies on acoustic eye-blink startle, 65 participants viewed positive, neutral, and negative pictures and were instructed to suppress, reappraise, or accept their emotional responses to these pictures using a within-group experimental design with separate blocks of pictures for each strategy. Instructions to suppress the emotional response led to an attenuation of the eye-blink startle magnitude, in comparison with instructions to reappraise or accept. Reappraisal and acceptance instructions did not differ from one another in their effect on startle. These results are discussed within the context of the existing empirical literature on emotion regulation. PMID:24551448
ERIC Educational Resources Information Center
Fister, Mathew; Bickford, Paula C.; Cartford, M. Claire; Samec, Amy
2004-01-01
The neurotransmitter norepinephrine (NE) has been shown to modulate cerebellar-dependent learning and memory. Lesions of the nucleus locus coeruleus or systemic blockade of noradrenergic receptors has been shown to delay the acquisition of several cerebellar-dependent learning tasks. To date, no studies have shown a direct involvement of…
Tempest, Gavin D; Parfitt, Gaynor
2017-07-01
The interplay between the prefrontal cortex and amygdala is proposed to explain the regulation of affective responses (pleasure/displeasure) during exercise, as outlined in the dual-mode model. However, due to methodological limitations, the dual-mode model has not been fully tested. In this study, prefrontal oxygenation (using near-infrared spectroscopy) and amygdala activity (reflected by eyeblink amplitude using acoustic startle methodology) were recorded during exercise standardized to metabolic processes: 80% of ventilatory threshold (below VT), at the VT, and at the respiratory compensation point (RCP). Self-reported tolerance of the intensity of exercise was assessed prior to exercise, and affective responses were recorded during it. The results revealed that, as the intensity of exercise became more challenging (from below VT to RCP), prefrontal oxygenation was larger and eyeblink amplitude and affective responses were reduced. Below VT and at VT, larger prefrontal oxygenation was associated with larger eyeblink amplitude. At the RCP, prefrontal oxygenation was greater in the left than the right hemisphere, and eyeblink amplitude explained significant variance in affective responses (with prefrontal oxygenation) and self-reported tolerance. These findings highlight the role of the prefrontal cortex, and potentially the amygdala, in the regulation of affective (particularly negative) responses during exercise at physiologically challenging intensities (above VT). In addition, a psychophysiological basis of self-reported tolerance is indicated. This study provides some support for the dual-mode model and insight into the neural basis of affective responses during exercise. © 2017 Society for Psychophysiological Research.
Caro-Martín, C Rocío; Leal-Campanario, Rocío; Sánchez-Campusano, Raudel; Delgado-García, José M; Gruart, Agnès
2015-11-04
We were interested in determining whether rostral medial prefrontal cortex (rmPFC) neurons participate in the measurement of conditioned stimulus-unconditioned stimulus (CS-US) time intervals during classical eyeblink conditioning. Rabbits were conditioned with a delay paradigm consisting of a tone as the CS. The CS started 50, 250, 500, 1000, or 2000 ms before and coterminated with an air puff (100 ms) directed at the cornea as the US. Eyelid movements were recorded with the magnetic search coil technique, along with the EMG activity of the orbicularis oculi muscle. Firing activities of rmPFC neurons were recorded across conditioning sessions. Reflex and conditioned eyelid responses presented a dominant oscillatory frequency of ≈12 Hz. The firing rate of each recorded neuron presented a single peak of activity with a frequency dependent on the CS-US interval (i.e., ≈12 Hz for 250 ms, ≈6 Hz for 500 ms, and ≈3 Hz for 1000 ms). Interestingly, rmPFC neurons presented their dominant firing peaks at three precise times evenly distributed with respect to CS start and also depending on the duration of the CS-US interval (only for intervals of 250, 500, and 1000 ms). No significant neural responses were recorded at very short (50 ms) or long (2000 ms) CS-US intervals. rmPFC neurons seem not to encode the oscillatory properties characterizing conditioned eyelid responses in rabbits, but are probably involved in the determination of CS-US intervals of an intermediate range (250-1000 ms). We propose that a variable oscillator underlies the generation of working memories in rabbits. The way in which brains generate working memories (those used for the transient processing and storage of newly acquired information) is still an intriguing question. Here, we report that the firing activities of neurons located in the rostromedial prefrontal cortex recorded in alert behaving rabbits are controlled by a dynamic oscillator.
This oscillator generated firing frequencies in a variable band of 3-12 Hz depending on the conditioned stimulus-unconditioned stimulus intervals (1 s, 500 ms, 250 ms) selected for classical eyeblink conditioning of behaving rabbits. Shorter (50 ms) and longer (2 s) intervals failed to activate the oscillator and prevented the acquisition of conditioned eyelid responses. This is an unexpected mechanism to generate sustained firing activities in neural circuits generating working memories. Copyright © 2015 the authors.
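The interval-dependent frequencies reported above (≈12 Hz at 250 ms, ≈6 Hz at 500 ms, ≈3 Hz at 1000 ms) follow an approximately inverse relation: each pairing works out to roughly three oscillation cycles per CS-US interval. This reading of the numbers is ours, not a model stated by the authors; a quick check:

```python
# Reported dominant firing frequencies (Hz) keyed by CS-US interval (s).
reported = {0.25: 12.0, 0.5: 6.0, 1.0: 3.0}

# Each interval corresponds to ~3 cycles of the dominant frequency,
# i.e. f ≈ 3 / interval (an observation about the numbers only).
for interval, freq in reported.items():
    cycles = freq * interval
    assert abs(cycles - 3.0) < 1e-9
    print(f"{interval * 1000:.0f} ms -> {freq:.0f} Hz ({cycles:.0f} cycles)")
```

The very short (50 ms) and very long (2 s) intervals that failed to engage the oscillator fall outside the 3-12 Hz band this relation would require.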
ERIC Educational Resources Information Center
Tharp, Ian J.; Pickering, Alan D.
2011-01-01
Individual differences in psychophysiological function have been shown to influence the balance between flexibility and distractibility during attentional set-shifting [e.g., Dreisbach et al. (2005). Dopamine and cognitive control: The influence of spontaneous eyeblink rate and dopamine gene polymorphisms on perseveration and distractibility.…
ERIC Educational Resources Information Center
Villarreal, Ronald P.; Steinmetz, Joseph E.
2005-01-01
How the nervous system encodes learning and memory processes has interested researchers for 100 years. Over this span of time, a number of basic neuroscience methods has been developed to explore the relationship between learning and the brain, including brain lesion, stimulation, pharmacology, anatomy, imaging, and recording techniques. In this…
Effects of meditation practice on spontaneous eyeblink rate.
Kruis, Ayla; Slagter, Heleen A; Bachhuber, David R W; Davidson, Richard J; Lutz, Antoine
2016-05-01
A rapidly growing body of research suggests that meditation can change brain and cognitive functioning. Yet little is known about the neurochemical mechanisms underlying meditation-related changes in cognition. Here, we investigated the effects of meditation on spontaneous eyeblink rates (sEBR), a noninvasive peripheral correlate of striatal dopamine activity. Previous studies have shown a relationship between sEBR and cognitive functions such as mind wandering, cognitive flexibility, and attention, functions that are also affected by meditation. We therefore expected that long-term meditation practice would alter eyeblink activity. To test this, we recorded baseline sEBR and intereyeblink intervals (IEBI) in long-term meditators (LTM) and meditation-naive participants (MNP). We found that LTM not only blinked less frequently, but also showed a different eyeblink pattern than MNP. This pattern had a good to high degree of consistency over three time points. Moreover, we examined the effects of an 8-week course of mindfulness-based stress reduction on sEBR and IEBI, compared to an active control group and a waitlist control group. No effect of short-term meditation practice was found. Finally, we investigated whether different types of meditation differentially alter eyeblink activity by measuring sEBR and IEBI after a full day of each of two kinds of meditation practice in the LTM. No effect of meditation type was found. Taken together, these findings may suggest either that individual differences in dopaminergic neurotransmission are a self-selection factor for meditation practice, or that long-term, but not short-term, meditation practice induces stable changes in baseline striatal dopaminergic functioning. © 2016 Society for Psychophysiological Research.
Changes in the magnitude of the eyeblink startle response during habituation of sexual arousal.
Koukounas, E; Over, R
2000-06-01
Modulation of the startle response was used to examine emotional processing of sexual stimulation across trials within a session. Eyeblink startle was elicited by a probe (a burst of intense white noise) presented intermittently while men were viewing an erotic film segment. Repeated display of the film segment resulted in a progressive decrease in sexual arousal. Habituation of sexual arousal was accompanied by a reduction over trials in the extent to which the men felt absorbed when viewing the erotic stimulus, and by an increase over trials in the magnitude of the eyeblink startle response. Replacing the familiar stimulus with a novel erotic stimulus increased sexual arousal and absorption and reduced startle (a novelty effect), while dishabituation was evident for all three response measures when the familiar stimulus was reintroduced. This pattern of results indicates that, with repeated presentation, an erotic stimulus is experienced not only as less sexually arousing but also as less appetitive and absorbing. The question of whether habituation of sexual arousal is mediated by changes in attentional and affective processing over trials is discussed, as are clinical contexts in which eyeblink startle can be used in studying aspects of sexual functioning.
Auditory priming improves neural synchronization in auditory-motor entrainment.
Crasta, Jewel E; Thaut, Michael H; Anderson, Charles W; Davies, Patricia L; Gavin, William J
2018-05-22
Neurophysiological research has shown that auditory and motor systems interact during movement to rhythmic auditory stimuli through a process called entrainment. This study explores the neural oscillations underlying auditory-motor entrainment using electroencephalography. Forty young adults were randomly assigned to one of two control conditions, an auditory-only condition or a motor-only condition, prior to a rhythmic auditory-motor synchronization condition (referred to as the combined condition). Participants assigned to the auditory-only condition (auditory-first group) listened to 400 trials of auditory stimuli presented every 800 ms, while those in the motor-only condition (motor-first group) were asked to tap rhythmically every 800 ms without any external stimuli. Following their control condition, all participants completed an auditory-motor combined condition that required tapping along with auditory stimuli every 800 ms. As expected, the neural processes for the combined condition differed for each group compared to their respective control condition. Time-frequency analysis of total power at an electrode site on the left central scalp (C3) indicated that the neural oscillations elicited by auditory stimuli, especially in the beta and gamma ranges, drove the auditory-motor entrainment. For the combined condition, the auditory-first group had significantly lower evoked power in a region of interest representing sensorimotor processing (4-20 Hz) and less total power in a region associated with anticipation and predictive timing (13-16 Hz) than the motor-first group. Thus, the auditory-only condition served as a priming facilitator of the neural processes in the combined condition, more so than the motor-only condition. The results suggest that even brief periods of rhythmic training of the auditory system lead to neural efficiency that facilitates the motor system during the process of entrainment.
These findings have implications for interventions using rhythmic auditory stimulation. Copyright © 2018 Elsevier Ltd. All rights reserved.
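The evoked/total power distinction underlying the analysis above is standard in EEG time-frequency work: evoked power is computed from the trial average and so retains only phase-locked activity, while total power averages single-trial power and so also captures induced (non-phase-locked) oscillations. A minimal numpy sketch with a simulated phase-jittered oscillation (the sampling rate, frequency, and trial count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f = 250, 16.0                 # sampling rate (Hz), a beta-band frequency
t = np.arange(0, 1.0, 1.0 / fs)
n_trials = 100

# Trials whose oscillation has a random phase on every trial
# (induced, non-phase-locked activity).
trials = np.array([np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
                   for _ in range(n_trials)])

def band_power(x):
    """Mean squared amplitude as a crude power estimate."""
    return np.mean(x ** 2)

total_power = np.mean([band_power(tr) for tr in trials])  # mean of per-trial power
evoked_power = band_power(trials.mean(axis=0))            # power of the trial average

# Phase jitter cancels in the average, so evoked power is far below total power.
assert evoked_power < 0.1 * total_power
```

A purely phase-locked signal (identical phase on every trial) would instead give evoked ≈ total, which is why the two measures separate anticipatory/entrained activity from stimulus-locked responses.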
ERIC Educational Resources Information Center
Duncko, Roman; Cornwell, Brian; Cui, Lihong; Merikangas, Kathleen R.; Grillon, Christian
2007-01-01
The present study investigated the effects of acute stress exposure on learning performance in humans using analogs of two paradigms frequently used in animals. Healthy male participants were exposed to the cold pressor test (CPT) procedure, i.e., insertion of the dominant hand into ice water for 60 sec. Following the CPT or the control procedure,…
Differing Presynaptic Contributions to LTP and Associative Learning in Behaving Mice
Madroñal, Noelia; Gruart, Agnès; Delgado-García, José M.
2009-01-01
The hippocampal CA3-CA1 synapse is an excellent experimental model for studying the interactions between short- and long-term plastic changes taking place following high-frequency stimulation (HFS) of Schaffer collaterals and during the acquisition and extinction of classical eyeblink conditioning in behaving mice. Input/output curves and a full-range paired-pulse study enabled us to determine the optimal intensities and inter-stimulus intervals for evoking paired-pulse facilitation (PPF) or depression (PPD) at the CA3-CA1 synapse. Long-term potentiation (LTP) induced by HFS lasted ≈10 days. HFS-induced LTP evoked an initial depression of basal PPF. Recovery of PPF baseline values was a steady and progressive process lasting ≈20 days, i.e., longer than the total duration of the LTP. In a subsequent series of experiments, we checked whether PPF was affected similarly during activity-dependent synaptic changes. Animals were conditioned using a trace paradigm, with a tone as a conditioned stimulus (CS) and an electrical shock to the trigeminal nerve as an unconditioned stimulus (US). A pair of pulses (40 ms interval) was presented to the Schaffer collateral-commissural pathway to evoke field EPSPs (fEPSPs) during the CS-US interval. Basal PPF decreased steadily across conditioning sessions (i.e., in the opposite direction to that during LTP), reaching a minimum value during the 10th conditioning session. Thus, LTP and classical eyeblink conditioning share some presynaptic mechanisms, but with an opposite evolution. Furthermore, PPF and PPD might play a homeostatic role during long-term plastic changes at the CA3-CA1 synapse. PMID:19636387
Tran, Tuan D.; Amin, Aenia; Jones, Keith G.; Sheffer, Ellen M.; Ortega, Lidia; Dolman, Keith
2017-01-01
Neonatal rats were administered a relatively high concentration of ethyl alcohol (11.9% v/v) during postnatal days 4-9, a time when the fetal brain undergoes rapid organizational change and is similar to accelerated brain changes that occur during the third trimester in humans. This model of fetal alcohol spectrum disorders (FASDs) produces severe brain damage, mimicking the amount and pattern of binge-drinking that occurs in some pregnant alcoholic mothers. We describe the use of trace eyeblink classical conditioning (ECC), a higher-order variant of associative learning, to assess long-term hippocampal dysfunction that is typically seen in alcohol-exposed adult offspring. At 90 days of age, rodents were surgically prepared with recording and stimulating electrodes, which measured electromyographic (EMG) blink activity from the left eyelid muscle and delivered mild shock posterior to the left eye, respectively. After a 5 day recovery period, they underwent 6 sessions of trace ECC to determine associative learning differences between alcohol-exposed and control rats. Trace ECC is one of many possible ECC procedures that can be easily modified using the same equipment and software, so that different neural systems can be assessed. ECC procedures in general, can be used as diagnostic tools for detecting neural pathology in different brain systems and different conditions that insult the brain. PMID:28809846
Stevens, Elizabeth S; Weinberg, Anna; Nelson, Brady D; Meissel, Emily E E; Shankman, Stewart A
2018-03-01
Attention-related abnormalities are key components of the abnormal defensive responding observed in panic disorder (PD). Although behavioral studies have found aberrant attentional biases towards threat in PD, psychophysiological studies have been mixed. Predictability of threat, an important feature of threat processing, may have contributed to these mixed findings. Additionally, anxiety sensitivity, a dimensional trait associated with PD, may yield stronger associations with cognitive processes than categorical diagnoses of PD. In this study, 171 participants with PD and/or depression and healthy controls completed a task that differentiated anticipation of predictable vs. unpredictable shocks, while startle eyeblink and event-related potentials (ERPs [N100, P300]) were recorded. In all participants, relative to the control condition, probe N100 was enhanced to both predictable and unpredictable threat, whereas P300 suppression was unique to predictable threat. Probe N100, but not P300, was associated with startle eyeblink during both threatening conditions, and was strongest for unpredictable threat. PD was not associated with ERPs, but anxiety sensitivity (physical concerns) was positively associated with probe N100 (indicating reduced responding) in the unpredictable condition independent of PD diagnosis. Vulnerability to panic-related psychopathology may be characterized by aberrant early processing of threat, which may be especially evident during anticipation of unpredictable threats. Copyright © 2017 Elsevier Ltd. All rights reserved.
Cerebellar Structure and Function in Male Wistar-Kyoto Hyperactive Rats
Thanellou, Alexandra; Green, John T.
2014-01-01
Previous research has suggested that the Wistar-Kyoto Hyperactive (WKHA) rat strain may model some of the behavioral features associated with attention-deficit/hyperactivity disorder (ADHD). We have shown that, in cerebellar-dependent eyeblink conditioning (EBC), WKHA rats emit eyeblink CRs with shortened onset latencies. To further characterize the shortened CR onset latencies seen in WKHA rats, we examined 750-ms delay conditioning with either a tone CS or a light CS, we extended acquisition training, and we included Wistar rats as an additional, outbred control strain. Our results indicated that WKHAs learned more quickly and showed a shortened CR onset latency to a tone CS compared to both Wistar-Kyoto Hypertensive (WKHT) rats and Wistars. WKHAs and Wistars showed a lengthening of CR onset latency over conditioning with a tone CS and an increasing confinement of CRs to the later part of the tone CS (inhibition of delay). WKHAs learned more quickly to a light CS only in comparison to WKHTs and showed a shortened CR onset latency only in comparison to Wistars. Wistars showed an increasing confinement of CRs to the late part of the light CS over conditioning. We used unbiased stereology to estimate the number of Purkinje and granule cells in the cerebellar cortex of the three strains. Our results indicated that WKHAs have more granule cells than Wistars and WKHTs and more Purkinje cells than Wistars. Results are discussed in terms of CS processing and cerebellar cortical contributions to EBC. PMID:23398437
Variability of ICA decomposition may impact EEG signals when used to remove eyeblink artifacts
PONTIFEX, MATTHEW B.; GWIZDALA, KATHRYN L.; PARKS, ANDREW C.; BILLINGER, MARTIN; BRUNNER, CLEMENS
2017-01-01
Despite the growing use of independent component analysis (ICA) algorithms for isolating and removing eyeblink-related activity from EEG data, we have limited understanding of how variability associated with ICA uncertainty may influence the reconstructed EEG signal after the eyeblink artifact components are removed. To characterize the magnitude of this ICA uncertainty and to understand the extent to which it may influence findings within ERP and EEG investigations, ICA decompositions of EEG data from 32 college-aged young adults were repeated 30 times for three popular ICA algorithms. Following each decomposition, eyeblink components were identified and removed. The remaining components were back-projected, and the resulting clean EEG data were further used to analyze ERPs. Findings revealed that ICA uncertainty produces variation in P3 amplitude as well as variation across all EEG sampling points, and that this variation differs across ICA algorithms as a function of the spatial location of the EEG channel. This investigation highlights the potential of ICA uncertainty to introduce additional sources of variance when the data are back-projected without artifact components. Careful selection of ICA algorithms and parameters can reduce the extent to which ICA uncertainty introduces an additional source of variance within ERP/EEG studies. PMID:28026876
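The removal step these repeated decompositions feed into can be sketched in a few lines: once an independent component has been flagged as eyeblink activity, its row of the source matrix is zeroed and the remaining components are back-projected through the mixing matrix. This is a minimal illustration on a synthetic two-channel mixture, not the study's actual pipeline, and it assumes the ICA step recovered the sources and mixing matrix exactly:

```python
# Minimal sketch (assumed setup, not the study's code): zero an eyeblink
# component, then back-project the remaining components to "clean" EEG.
import numpy as np

t = np.linspace(0, 1, 500)

# Two hypothetical sources: a brain-like oscillation and a blink-like transient.
brain = np.sin(2 * np.pi * 10 * t)
blink = np.exp(-((t - 0.5) ** 2) / 0.001)   # sharp eyeblink-shaped pulse
S = np.vstack([brain, blink])               # sources, shape (2, samples)

A = np.array([[1.0, 0.8],                   # mixing matrix: the blink leaks
              [0.6, 1.5]])                  # into both "channels"
X = A @ S                                   # observed EEG, shape (2, samples)

# Assume ICA recovered S and A exactly: zero the blink row, back-project.
S_clean = S.copy()
S_clean[1, :] = 0.0
X_clean = A @ S_clean                       # reconstructed blink-free EEG

# The reconstruction equals the blink-free part of the original mixture.
residual = X_clean - np.outer(A[:, 0], brain)
print(np.max(np.abs(residual)))
```

In practice the unmixing matrix must be estimated (Infomax, FastICA, etc.), and it is precisely the run-to-run variability of that estimate that propagates into the back-projected signal the study quantifies.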
Tamburro, Gabriella; Fiedler, Patrique; Stone, David; Haueisen, Jens; Comani, Silvia
2018-01-01
EEG may be affected by artefacts hindering the analysis of brain signals. Data-driven methods like independent component analysis (ICA) are successful approaches to remove artefacts from the EEG. However, the ICA-based methods developed so far are often affected by limitations, such as: the need for visual inspection of the separated independent components (subjectivity problem) and, in some cases, for the independent and simultaneous recording of the inspected artefacts to identify the artefactual independent components; a potentially heavy manipulation of the EEG signals; the use of linear classification methods; the use of simulated artefacts to validate the methods; no testing on dry electrode or high-density EEG datasets; and applications limited to specific conditions and electrode layouts. Our fingerprint method automatically identifies EEG ICs containing eyeblinks, eye movements, myogenic artefacts, and cardiac interference by evaluating 14 temporal, spatial, spectral, and statistical features composing the IC fingerprint. Sixty-two real EEG datasets containing cued artefacts are recorded with wet and dry electrodes (128 wet and 97 dry channels). For each artefact, 10 nonlinear SVM classifiers are trained on fingerprints of expert-classified ICs. Training groups include randomly chosen wet and dry datasets decomposed into 80 ICs. The classifiers are tested on the IC fingerprints of different datasets decomposed into 20, 50, or 80 ICs. SVM performance is assessed in terms of accuracy, False Omission Rate (FOR), Hit Rate (HR), False Alarm Rate (FAR), and sensitivity (p). For each artefact, the quality of the artefact-free EEG reconstructed using the classification of the best SVM is assessed by visual inspection and SNR. The best SVM classifier for each artefact type achieved average accuracy of 1 (eyeblink), 0.98 (cardiac interference), and 0.97 (eye movement and myogenic artefact). Average classification sensitivity (p) was 1 (eyeblink), 0.997 (myogenic artefact), 0.98 (eye movement), and 0.48 (cardiac interference). Average artefact reduction ranged from a maximum of 82% for eyeblinks to a minimum of 33% for cardiac interference, depending on the effectiveness of the proposed method and the amplitude of the removed artefact. The performance of the SVM classifiers did not depend on the electrode type, whereas it was better for lower decomposition levels (50 and 20 ICs). Apart from cardiac interference, SVM performance and average artefact reduction indicate that the fingerprint method has an excellent overall performance in the automatic detection of eyeblinks, eye movements, and myogenic artefacts, comparable to that of existing methods. Being also independent of simultaneous artefact recording, electrode number, type and layout, and decomposition level, the proposed fingerprint method can have useful applications in clinical and experimental EEG settings. PMID:29492336
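The abstract does not enumerate the 14 fingerprint features, but one representative temporal feature is easy to illustrate: eyeblink ICs are sparse, spiky time series, so their excess kurtosis is far higher than that of oscillatory brain activity. A minimal sketch (the feature choice and signal shapes here are our own assumptions, not the authors' exact feature set):

```python
# Sketch of one plausible "fingerprint" feature: excess kurtosis separates
# a sparse blink-like component from an oscillatory brain-like component.
import numpy as np

def excess_kurtosis(x):
    """Fisher (excess) kurtosis of a 1-D signal."""
    x = x - x.mean()
    return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0

t = np.linspace(0, 10, 5000)
oscillation = np.sin(2 * np.pi * 10 * t)        # brain-like IC: kurtosis < 0
blinks = np.zeros_like(t)
for center in (1.0, 4.0, 7.5):                  # three blink events
    blinks += np.exp(-((t - center) ** 2) / 0.002)

# A blink IC is near-zero most of the time with rare large excursions,
# so its kurtosis is large; a sinusoid's excess kurtosis is about -1.5.
print(excess_kurtosis(oscillation), excess_kurtosis(blinks))
```

A real fingerprint would combine several such temporal features with spatial (topography), spectral, and statistical ones before feeding them to the nonlinear SVM.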
Defensive Physiological Reactions to Rejection
Gyurak, Anett; Ayduk, Özlem
2014-01-01
We examined the hypothesis that rejection automatically elicits defensive physiological reactions in people with low self-esteem (SE) but that attentional control moderates this effect. Undergraduates (N = 67) completed questionnaire measures of SE and attentional control. Their eye-blink responses to startle probes were measured while they viewed paintings related to rejection and acceptance themes. The stimuli also included positive-, negative-, and neutral-valence control paintings unrelated to rejection. As predicted, compared with people high in SE, those low in SE showed stronger startle eye-blink responses to paintings related to rejection, but not to negative paintings. Paintings related to acceptance did not attenuate their physiological reactivity. Furthermore, attentional control moderated their sensitivity to rejection, such that low SE was related to greater eye-blink responses to rejection only among individuals who were low in attentional control. Implications of the role of attentional control as a top-down process regulating emotional reactivity in people with low SE are discussed. PMID:17894606
Enhanced startle responsivity 24 hours after acute stress exposure.
Herten, Nadja; Otto, Tobias; Adolph, Dirk; Pause, Bettina M; Kumsta, Robert; Wolf, Oliver T
2016-10-01
Cortisol release in a stressful situation can be beneficial for memory encoding and memory consolidation. Stimuli, such as odors, related to the stressful episode may successfully cue memory contents of the stress experience. The current investigation aimed at testing the potency of stress to influence startle responsivity 24 hr later and to implicitly reactivate emotional memory traces triggered by an odor involved in the episode. Participants were assigned to either a stress (Trier Social Stress Test [TSST]) or control (friendly TSST [f-TSST]) condition featuring an ambient odor. On the next day, participants underwent an auditory startle paradigm while their eyeblink reflex was recorded by an electrooculogram. Three different olfactory stimuli were delivered, one being the target odor presented the day before. Additionally, negative and positive pictures, as well as pictures of the committee members, were included to compare general startle responsivity and fear-potentiated startle. Participants in the stress group demonstrated an enhanced startle response across all stimuli compared to participants in the control group. There were no specific effects with regard to the target odor. The typical fear-potentiated startle response occurred. Stressed participants tended to rate the target odor as more aversive than control participants did. Odor recognition memory did not differ between the groups, suggesting an implicit effect on odor valence. Our results show that acute stress exposure enhances startle responsivity 24 hr later. This effect might be caused by a shift of amygdala function causing heightened sensitivity, but lower levels of specificity. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Eleore, Lyndell; López-Ramos, Juan Carlos; Guerra-Narbona, Rafael; Delgado-García, José M.
2011-01-01
We studied the interactions between short- and long-term plastic changes taking place during the acquisition of classical eyeblink conditioning and following high-frequency stimulation (HFS) of the reuniens nucleus in behaving mice. Changes in synaptic strength were studied at the reuniens-medial prefrontal cortex (mPFC) and reuniens-CA1 synapses. Input/output curves and a paired-pulse study made it possible to determine the functional capabilities of the two synapses and the optimal intensities to be applied at the reuniens nucleus during classical eyeblink conditioning and for HFS of the reuniens nucleus. Animals were conditioned using a trace paradigm, with a tone as conditioned stimulus (CS) and an electric shock to the trigeminal nerve as unconditioned stimulus (US). A single pulse was presented to the reuniens nucleus to evoke field EPSPs (fEPSPs) in mPFC and CA1 areas during the CS-US interval. No significant changes in synaptic strength were observed at the reuniens-mPFC and reuniens-CA1 synapses during the acquisition of eyelid conditioned responses (CRs). Two successive HFS sessions carried out during the first two conditioning days decreased the percentage of CRs, without evoking any long-term potentiation (LTP) at the recording sites. HFS of the reuniens nucleus also prevented the proper acquisition of an object discrimination task. A subsequent study revealed that HFS of the reuniens nucleus evoked a significant decrease in paired-pulse facilitation. In conclusion, reuniens nucleus projections to prefrontal and hippocampal circuits seem to participate in the acquisition of associative learning through a mechanism that does not require the development of LTP. PMID:21858159
Sadnicka, A; Teo, J T; Kojovic, M; Pareés, I; Saifee, T A; Kassavetis, P; Schwingenschuh, P; Katschnig-Winter, P; Stamelou, M; Mencacci, N E; Rothwell, J C; Edwards, M J; Bhatia, K P
2015-05-01
Traditionally, dystonia has been considered a disorder of basal ganglia dysfunction. However, recent research has advocated a more complex neuroanatomical network. In particular, there is increasing interest in the pathophysiological role of the cerebellum. Patients with cervical and focal hand dystonia have impaired cerebellar associative learning in the eyeblink conditioning paradigm, which is perhaps the most direct evidence to date that the cerebellum is implicated in these patients. Eleven patients with DYT1 dystonia and five patients with DYT6 dystonia were examined, and rates of eyeblink conditioning were compared with age-matched controls. A marker of brainstem excitability, blink reflex recovery, was also studied in the same groups. Patients with DYT1 and DYT6 dystonia have a normal ability to acquire conditioned responses. Blink reflex recovery was enhanced in DYT1, but this effect was not seen in DYT6. If the cerebellum is an important driver in DYT1 and DYT6 dystonia, our data suggest that any cerebellar dysfunction is specific, such that the circuits essential for conditioning function normally. Our data are contrary to observations in focal dystonia and suggest that the cerebellum may have a distinct role in different subsets of dystonia. Enhanced blink reflex recovery was not found in all patients with dystonia, and recent studies calling for blink reflex recovery to be used as a diagnostic test for dystonic tremor may require further corroboration. © 2014 The Author(s) European Journal of Neurology © 2014 EAN.
Automatic removal of eye-movement and blink artifacts from EEG signals.
Gao, Jun Feng; Yang, Yong; Lin, Pan; Wang, Pei; Zheng, Chong Xun
2010-03-01
Frequent occurrence of electrooculography (EOG) artifacts leads to serious problems in interpreting and analyzing the electroencephalogram (EEG). In this paper, a robust method is presented to automatically eliminate eye-movement and eye-blink artifacts from EEG signals. Independent Component Analysis (ICA) is used to decompose EEG signals into independent components. Moreover, the features of topographies and power spectral densities of those components are extracted to identify eye-movement artifact components, and a support vector machine (SVM) classifier is adopted because it has higher performance than several other classifiers. The classification results show that feature-extraction methods are unsuitable for identifying eye-blink artifact components, and then a novel peak detection algorithm of independent component (PDAIC) is proposed to identify eye-blink artifact components. Finally, the artifact removal method proposed here is evaluated by the comparisons of EEG data before and after artifact removal. The results indicate that the method proposed could remove EOG artifacts effectively from EEG signals with little distortion of the underlying brain signals.
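The abstract does not give the details of the proposed peak detection algorithm (PDAIC), but the underlying intuition, that an eyeblink component shows a few large stereotyped peaks standing far above a quiet baseline, can be sketched with a simple robust peak counter. The thresholds and signal shapes below are our own assumptions, not the authors':

```python
# Sketch of threshold-based peak counting for flagging blink-like ICs.
import numpy as np

def count_large_peaks(ic, z=4.0):
    """Count local maxima exceeding z robust standard deviations."""
    x = ic - np.median(ic)
    mad = np.median(np.abs(x)) * 1.4826 + 1e-12   # robust scale estimate
    above = x > z * mad
    # a peak: above threshold and strictly larger than both neighbors
    peaks = above[1:-1] & (x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])
    return int(peaks.sum())

t = np.linspace(0, 10, 5000)
# Blink-like IC: three isolated transients on a flat baseline.
blink_ic = sum(np.exp(-((t - c) ** 2) / 0.002) for c in (2.1, 5.3, 7.9))
# Non-blink IC: continuous oscillation; its maxima never clear the threshold.
noise_ic = 0.5 * np.sin(2 * np.pi * 10 * t)

print(count_large_peaks(blink_ic), count_large_peaks(noise_ic))
```

A component with a handful of such outlier peaks over an otherwise flat baseline would be flagged as a blink candidate, complementing the topography and power-spectrum features that, per the abstract, identify eye-movement components.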
Eyeblink conditioning in unmedicated schizophrenia patients: A positron emission tomography study
Parker, Krystal L.; Andreasen, Nancy C.; Liu, Dawei; Freeman, John H.; O’Leary, Daniel S.
2014-01-01
Previous studies suggest that patients with schizophrenia exhibit dysfunctions in a widely distributed circuit—the cortico-cerebellar-thalamic-cortical circuit, or CCTCC—and that this may explain the multiple cognitive deficits observed in the disorder. This study uses positron emission tomography (PET) with [15O]H2O to measure regional cerebral blood flow (rCBF) in response to a classic test of cerebellar function, the associative learning that occurs during eyeblink conditioning, in a sample of 20 unmedicated schizophrenia patients and 20 closely matched healthy controls. The PET paradigm examined three phases of acquisition and extinction (early, middle and late). The patients displayed impaired behavioral performance during both acquisition and extinction. The imaging data indicate that, compared to the control subjects, the patients displayed decreases in rCBF in all three components of the CCTCC during both acquisition and extinction. Specifically, patients had less rCBF in the middle and medial frontal lobes, anterior cerebellar lobules I/V and VI, as well as the thalamus during acquisition, and although similar areas were found in the frontal lobe, ipsilateral cerebellar lobule IX showed consistently less activity in patients during extinction. Thus, this study provides additional support for the hypothesis that patients with schizophrenia have a cognitive dysmetria—an inability to smoothly coordinate many different types of mental activity—that affects even a very basic cognitive task that taps into associative learning. PMID:24090512
Encoding of Discriminative Fear Memory by Input-Specific LTP in the Amygdala.
Kim, Woong Bin; Cho, Jun-Hyeong
2017-08-30
In auditory fear conditioning, experimental subjects learn to associate an auditory conditioned stimulus (CS) with an aversive unconditioned stimulus. With sufficient training, animals fear conditioned to an auditory CS show a fear response to the CS, but not to irrelevant auditory stimuli. Although long-term potentiation (LTP) in the lateral amygdala (LA) plays an essential role in auditory fear conditioning, it is unknown whether LTP is induced selectively in the neural pathways conveying specific CS information to the LA in discriminative fear learning. Here, we show that postsynaptically expressed LTP is induced selectively in the CS-specific auditory pathways to the LA in a mouse model of auditory discriminative fear conditioning. Moreover, optogenetically induced depotentiation of the CS-specific auditory pathways to the LA suppressed conditioned fear responses to the CS. Our results suggest that input-specific LTP in the LA contributes to fear memory specificity, enabling adaptive fear responses only to the relevant sensory cue. Copyright © 2017 Elsevier Inc. All rights reserved.
Ernst, T M; Beyer, L; Mueller, O M; Göricke, S; Ladd, M E; Gerwig, M; Timmann, D
2016-05-01
Human cerebellar lesion studies provide good evidence that the cerebellum contributes to the acquisition of classically conditioned eyeblink responses (CRs). As yet, only one study has used more advanced methods of lesion-symptom (or lesion-behavior) mapping to investigate which cerebellar areas are involved in CR acquisition in humans. Likewise, comparatively few studies have investigated the contribution of the human cerebellum to CR extinction and savings. In the present study, young adults with focal cerebellar disease were tested. A subset of participants was expected to acquire enough conditioned responses to allow the investigation of extinction and saving effects. 19 participants with chronic surgical lesions of the cerebellum and 19 matched control subjects were tested. In all cerebellar subjects, benign tumors of the cerebellum had been surgically removed. Eyeblink conditioning was performed using a standard short-delay protocol. An initial unpaired control phase was followed by an acquisition phase, an extinction phase, and a subsequent reacquisition phase. Structural 3T magnetic resonance images of the brain were acquired on the day of testing. Cerebellar lesions were normalized using methods optimized for the cerebellum. Subtraction analysis and Liebermeister tests were used to perform lesion-symptom mapping. As expected, CR acquisition was significantly reduced in cerebellar subjects compared to controls. Reduced CR acquisition was significantly more likely in participants with lesions of lobule VI and Crus I extending into Crus II (p<0.05, Liebermeister test). Cerebellar subjects could be subdivided into two groups: a smaller group (n=5) which showed acquisition, extinction and savings within the normal range; and a larger group (n=14) which did not show acquisition. In the latter, no conclusions on extinction or savings could be drawn. Previous findings were confirmed that circumscribed areas in lobule VI and Crus I are of major importance in CR acquisition.
In addition, the present data suggest that if the critical regions of the cerebellar cortex are lesioned, the ability to acquire CRs is not only reduced but abolished. Subjects with lesions outside these critical areas, on the other hand, show preserved acquisition, extinction, and saving effects. As a consequence, studies in human subjects with cerebellar lesions do not allow drawing conclusions on CR extinction and savings. In light of the present findings, previous reports of reduced extinction in humans with circumscribed cerebellar disease need to be critically reevaluated. Copyright © 2016 Elsevier Ltd. All rights reserved.
Pillai, Roshni; Yathiraj, Asha
2017-09-01
The study evaluated whether four different memory skills (memory score, sequencing score, memory span, & sequencing span) differ, and how they relate, when processed through the auditory modality, the visual modality, and combined modalities. The four memory skills were evaluated in 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality than through the visual modality. Likewise, their memory scores were significantly higher in the auditory-visual modality condition than in the visual modality. However, no effect of modality was observed on the sequencing scores or on the memory and sequencing spans. A good agreement was seen between the different modality conditions studied (auditory, visual, & auditory-visual) for the different memory skill measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement, measured using Bland-Altman plots, was noted only between the auditory and visual modalities and between the visual and auditory-visual modality conditions for the memory scores. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual, and combined modalities. The study supports the view that children's performance on different memory skills was better through the auditory modality than through the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.
Neuronal Correlates of Cross-Modal Transfer in the Cerebellum and Pontine Nuclei
Campolattaro, Matthew M.; Kashef, Alireza; Lee, Inah; Freeman, John H.
2011-01-01
Cross-modal transfer occurs when learning established with a stimulus from one sensory modality facilitates subsequent learning with a new stimulus from a different sensory modality. The current study examined neuronal correlates of cross-modal transfer of Pavlovian eyeblink conditioning in rats. Neuronal activity was recorded from tetrodes within the anterior interpositus nucleus (IPN) of the cerebellum and basilar pontine nucleus (PN) during different phases of training. After stimulus pre-exposure and unpaired training sessions with a tone conditioned stimulus (CS), light CS, and periorbital stimulation unconditioned stimulus (US), rats received associative training with one of the CSs and the US (CS1-US). Training then continued on the same day with the other CS to assess cross-modal transfer (CS2-US). The final training session included associative training with both CSs on separate trials to establish stronger cross-modal transfer (CS1/CS2). Neurons in the IPN and PN showed primarily unimodal responses during pre-training sessions. Learning-related facilitation of activity correlated with the conditioned response (CR) developed in the IPN and PN during CS1-US training. Subsequent CS2-US training resulted in acquisition of CRs and learning-related neuronal activity in the IPN, but substantially less learning-related activity in the PN. Additional CS1/CS2 training increased CRs and learning-related activity in the IPN and PN during CS2-US trials. The findings suggest that cross-modal neuronal plasticity in the PN is driven by excitatory feedback from the IPN to the PN. Interacting plasticity mechanisms in the IPN and PN may underlie behavioral cross-modal transfer in eyeblink conditioning. PMID:21411647
du Plessis, Lindie; Jacobson, Sandra W; Molteno, Christopher D; Robertson, Frances C; Peterson, Bradley S; Jacobson, Joseph L; Meintjes, Ernesta M
2015-01-01
Classical eyeblink conditioning (EBC), an elemental form of learning, is among the most sensitive indicators of fetal alcohol spectrum disorders. The cerebellum plays a key role in maintaining the timed movements with millisecond accuracy required for EBC. Functional MRI (fMRI) was used to identify cerebellar regions that mediate timing in healthy controls and the degree to which these areas are also recruited in children with prenatal alcohol exposure. fMRI data were acquired during an auditory rhythmic/non-rhythmic finger tapping task. We present results for 17 children with fetal alcohol syndrome (FAS) or partial FAS (PFAS), 17 heavily exposed (HE) nonsyndromal children, and 16 non- or minimally exposed controls. Controls showed greater cerebellar blood oxygen level dependent (BOLD) activation in right crus I, vermis IV-VI, and right lobule VI during rhythmic than non-rhythmic finger tapping. The alcohol-exposed children showed smaller activation increases during rhythmic tapping in right crus I than the control children, and the most severely affected children, those with either FAS or PFAS, showed smaller increases in vermis IV-V. Higher levels of maternal alcohol intake per occasion during pregnancy were associated with reduced activation increases during rhythmic tapping in all four regions associated with rhythmic tapping in controls. The four cerebellar areas activated more by the controls during rhythmic than non-rhythmic tapping have been implicated in the production of timed responses in several previous studies. These data provide evidence linking binge-like drinking during pregnancy to poorer function in cerebellar regions involved in timing and somatosensory processing needed for complex tasks requiring precise timing.
Takehara, Kaori; Kawahara, Shigenori; Kirino, Yutaka
2003-10-29
Many studies have confirmed the time-limited involvement of the hippocampus in mnemonic processes and suggested that there is reorganization of the responsible brain circuitry during memory consolidation. To clarify this reorganization, we chose trace classical eyeblink conditioning, in which hippocampal ablation produces temporally graded retrograde amnesia. Here, we extended the temporal characterization of retrograde amnesia to other regions that are involved in acquisition of this task: the medial prefrontal cortex (mPFC) and the cerebellum. At various time intervals after establishment of the trace conditioned response (CR), rats received an aspiration lesion of one of the three regions. After recovery, the animals were tested for retention of the CR. When ablated 1 d after learning, both the hippocampal lesion and the cerebellar lesion groups of rats exhibited a severe impairment in retention of the CR, whereas the mPFC lesion group showed only a slight decline. With an increase in the interval between learning and lesion, the effect of the hippocampal lesion diminished and that of the mPFC lesion increased. When ablated 4 weeks after learning, the hippocampal lesion group exhibited CRs as robust as those of its corresponding control group. In contrast, the mPFC lesion and cerebellar lesion groups failed to retain the CRs. These results indicate that the hippocampus and the cerebellum, but only marginally the mPFC, constitute a brain circuitry that mediates recently acquired memory. As time elapses, the circuitry is reorganized to rely mainly on the mPFC and the cerebellum, but not the hippocampus, for remotely acquired memory.
Benke, Christoph; Blumenthal, Terry D; Modeß, Christiane; Hamm, Alfons O; Pané-Farré, Christiane A
2015-09-01
The way in which the tendency to fear somatic arousal sensations (anxiety sensitivity), in interaction with expectations regarding arousal induction, might affect defensive responding to a symptom provocation challenge is not yet understood. The present study investigated the effect of anxiety sensitivity on autonomic arousal, startle eyeblink responses, and reported arousal and alertness after expected vs. unexpected caffeine consumption. To create a match/mismatch between expected and experienced arousal, high and low anxiety sensitive participants received caffeine vs. no drug either mixed in coffee (expectation of arousal induction) or in bitter lemon soda (no expectation of arousal induction) on four separate occasions. Autonomic arousal (heart rate, skin conductance level), respiration (end-tidal CO2, minute ventilation), defensive reflex responses (startle eyeblink), and reported arousal and alertness were recorded prior to, immediately after, and 30 min after beverage ingestion. Caffeine increased ventilation, autonomic arousal, and startle response magnitudes. Both groups showed comparable levels of autonomic and respiratory responses. Startle eyeblink responses were decreased when caffeine-induced arousal occurred unexpectedly, i.e., after caffeine was administered in bitter lemon soda. This effect was more accentuated in high anxiety sensitive persons. Moreover, in high anxiety sensitive persons, the expectation of arousal (coffee consumption) led to higher subjective alertness when caffeine was administered and to increased arousal even when no drug was consumed. Unexpected symptom provocation leads to increased attention allocation toward feared arousal sensations in high anxiety sensitive persons. This finding broadens our understanding of modulatory mechanisms in defensive responding to bodily symptoms.
Grashow, Rachel; Miller, Mark W; McKinney, Ann; Nie, Linda H; Sparrow, David; Hu, Howard; Weisskopf, Marc G
2013-01-01
Physiologically-based indicators of neural plasticity in humans could provide mechanistic insights into toxicant actions on learning in the brain, and perhaps prove more objective and sensitive measures of such effects than other methods. We explored the association between lead exposure and classical conditioning of the acoustic startle reflex (ASR), a simple form of associative learning in the brain, in a population of elderly men. Fifty-one men from the VA Normative Aging Study with cumulative bone lead exposure measurements made with K-X-Ray-Fluorescence participated in a fear-conditioning protocol. The mean age of the men was 75.5 years (standard deviation [sd] = 5.9) and mean patella lead concentration was 22.7 μg/g bone (sd = 15.9). Baseline ASR eyeblink response decreased with age, but was not associated with subsequent conditioning. Among 37 men with valid responses at the end of the protocol, higher patella lead was associated with decreased awareness of the conditioning contingency (declarative learning; adjusted odds ratio [OR] per 20 μg/g patella lead = 0.91, 95% confidence interval [CI]: 0.84, 0.99, p = 0.03). Eyeblink conditioning (non-declarative learning) was 0.44 sd lower (95% CI: -0.91, 0.02; p = 0.06) per 20 μg/g patella lead after adjustment. Each result was stronger when correcting for the interval between lead measurement and startle testing (awareness: OR = 0.88, 95% CI: 0.78, 0.99, p = 0.04; conditioning: 0.79 sd lower, 95% CI: -1.56, 0.03, p = 0.04). This initial exploration suggests that lead exposure interferes with specific neural mechanisms of learning and offers the possibility that the ASR may provide a new approach to physiologically explore the effects of neurotoxicant exposures on neural mechanisms of learning in humans with a paradigm that is directly comparable to animal models. Copyright © 2013 Elsevier Inc. All rights reserved.
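For orientation, the per-20 μg/g odds ratios reported above are a simple rescaling of a per-unit logistic regression coefficient. A minimal sketch of that rescaling (the coefficient below is hypothetical, back-derived from the abstract's OR of 0.91; it is not the study's fitted model):

```python
import math

def or_per_increment(beta_per_unit, increment):
    # Odds ratio for an `increment`-unit increase in exposure,
    # given a logistic-regression coefficient expressed per unit.
    return math.exp(beta_per_unit * increment)

# Hypothetical per-unit coefficient chosen so that the OR per
# 20 ug/g of patella lead comes out at ~0.91, as in the abstract.
beta = math.log(0.91) / 20
print(round(or_per_increment(beta, 20), 2))  # 0.91
```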
Auditory reafferences: the influence of real-time feedback on movement control.
Kennel, Christian; Streese, Lukas; Pizzera, Alexandra; Justen, Christoph; Hohmann, Tanja; Raab, Markus
2015-01-01
Auditory reafferences are real-time auditory products created by a person's own movements. Whereas the interdependency of action and perception is generally well studied, the auditory feedback channel and the influence of perceptual processes during movement execution remain largely unconsidered. We argue that movements have a rhythmic character that is closely connected to sound, making it possible to manipulate auditory reafferences online to understand their role in motor control. We examined whether step sounds, occurring as a by-product of running, have an influence on the performance of a complex movement task. Twenty participants completed a hurdling task in three auditory feedback conditions: a control condition with normal auditory feedback, a white-noise condition in which sound was masked, and a delayed auditory feedback condition. Overall time and kinematic data were collected. Results show that delayed auditory feedback led to a significantly slower overall time and changed kinematic parameters. Our findings complement previous investigations in a natural movement situation with non-artificial auditory cues. Our results support the existing theoretical understanding of action-perception coupling and hold potential for applied work, where naturally occurring movement sounds can be implemented in motor learning processes.
Eyeblink conditioning in unmedicated schizophrenia patients: a positron emission tomography study.
Parker, Krystal L; Andreasen, Nancy C; Liu, Dawei; Freeman, John H; O'Leary, Daniel S
2013-12-30
Previous studies suggest that patients with schizophrenia exhibit dysfunctions in a widely distributed circuit, the cortico-cerebellar-thalamic-cortical circuit (CCTCC), and that this may explain the multiple cognitive deficits observed in the disorder. This study uses positron emission tomography (PET) with [15O]H₂O to measure regional cerebral blood flow (rCBF) in response to a classic test of cerebellar function, the associative learning that occurs during eyeblink conditioning, in a sample of 20 unmedicated schizophrenia patients and 20 closely matched healthy controls. The PET paradigm examined three phases of acquisition and extinction (early, middle, and late). The patients displayed impaired behavioral performance during both acquisition and extinction. The imaging data indicate that, compared to the control subjects, the patients displayed decreases in rCBF in all three components of the CCTCC during both acquisition and extinction. Specifically, patients had less rCBF in the middle and medial frontal lobes, anterior cerebellar lobules I/V and VI, and the thalamus during acquisition; during extinction, similar frontal areas were implicated, and ipsilateral cerebellar lobule IX showed consistently less activity in patients. Thus, this study provides additional support for the hypothesis that patients with schizophrenia have a cognitive dysmetria (an inability to smoothly coordinate many different types of mental activity) that affects even a very basic cognitive task that taps into associative learning. © 2013 Elsevier Ireland Ltd. All rights reserved.
Holloway, Jacqueline L.; Trivedi, Payal; Myers, Catherine E.; Servatius, Richard J.
2012-01-01
In classical conditioning, proactive interference may arise from experience with the conditioned stimulus (CS), the unconditioned stimulus (US), or both, prior to their paired presentations. Interest in the application of proactive interference has extended to clinical populations as either a risk factor for disorders or as a secondary sign. Although the current literature is dense with comparisons of stimulus pre-exposure effects in animals, such comparisons are lacking in human subjects. As such, interpretation of proactive interference across studies, as well as its generalization and utility in clinical research, is limited. The present study was designed to assess eyeblink response acquisition after equal numbers of CS, US, and explicitly unpaired CS and US pre-exposures, as well as to evaluate how anxiety vulnerability might modulate proactive interference. In the current study, anxiety vulnerability was assessed using the State/Trait Anxiety Inventories as well as the adult and retrospective measures of behavioral inhibition (AMBI and RMBI, respectively). Participants were exposed to 1 of 4 possible pre-exposure contingencies immediately prior to standard delay training: 30 CS, 30 US, or 30 explicitly unpaired CS and US pre-exposures, or context-only pre-exposure. Robust proactive interference was evident in all pre-exposure groups relative to context pre-exposure, independent of anxiety classification, with CR acquisition attenuated at similar rates. In addition, trait-anxious individuals were found to have enhanced overall acquisition as well as greater proactive interference relative to non-vulnerable individuals. The findings suggest that anxiety-vulnerable individuals learn implicit associations faster, an effect which persists after the introduction of new stimulus contingencies. This effect is not due to enhanced sensitivity to the US. Such differences would have implications for the development of anxiety psychopathology within a learning framework.
PMID:23162449
Suga, Nobuo
2011-01-01
The central auditory system consists of the lemniscal and nonlemniscal systems. The thalamic lemniscal and nonlemniscal auditory nuclei differ from each other in response properties and neural connectivity. The cortical auditory areas receiving the projections from these thalamic nuclei interact with each other through corticocortical projections and project down to the subcortical auditory nuclei. This corticofugal (descending) system forms multiple feedback loops with the ascending system. The corticocortical and corticofugal projections modulate auditory signal processing and play an essential role in the plasticity of the auditory system. Focal electric stimulation of the lemniscal system (comparable to repetitive tonal stimulation) evokes three major types of changes in the physiological properties of cortical and subcortical auditory neurons, such as the tuning to specific values of acoustic parameters, through different combinations of facilitation and inhibition. For such changes, a neuromodulator, acetylcholine, plays an essential role. Electric stimulation of the nonlemniscal system evokes changes in the lemniscal system that are different from those evoked by lemniscal stimulation. Auditory signals ascending from the lemniscal and nonlemniscal thalamic nuclei to the cortical auditory areas appear to be selected or adjusted by a “differential” gating mechanism. Conditioning for associative learning and pseudo-conditioning for nonassociative learning elicit tone-specific and nonspecific plastic changes, respectively. The lemniscal, corticofugal, and cholinergic systems are involved in eliciting the former, but not the latter. The current article reviews recent progress in research on corticocortical and corticofugal modulations of the auditory system and its plasticity elicited by conditioning and pseudo-conditioning. PMID:22155273
Spottiswoode, B.S.; Meintjes, E.M.; Anderson, A.W.; Molteno, C.D.; Stanton, M.E.; Dodge, N.C.; Gore, J.C.; Peterson, B.S.; Jacobson, J.L.; Jacobson, S.W.
2011-01-01
Background Prenatal alcohol exposure is related to a wide range of neurocognitive effects. Eyeblink conditioning (EBC), which involves temporal pairing of a conditioned with an unconditioned stimulus, has been shown to be a potential biomarker of fetal alcohol exposure. A growing body of evidence suggests that white matter may be a specific target of alcohol teratogenesis, and the neural circuitry underlying EBC is known to involve the cerebellar peduncles. Diffusion tensor imaging (DTI) is a magnetic resonance imaging (MRI) technique which has proven useful for assessing central nervous system white matter integrity. This study used DTI to examine the degree to which the fetal alcohol-related deficit in EBC may be mediated by structural impairment in the cerebellar peduncles. Methods 13 children with fetal alcohol spectrum disorder (FASD) and 12 matched controls were scanned using DTI and structural MRI sequences. The DTI data were processed using a voxelwise technique, and the structural data were used for volumetric analyses. Prenatal alcohol exposure group and EBC performance were examined in relation to brain volumes and outputs from the DTI analysis. Results Group differences in fractional anisotropy (FA) and perpendicular diffusivity between alcohol-exposed and nonexposed children were identified in the left middle cerebellar peduncle. Alcohol exposure correlated with lower FA and greater perpendicular diffusivity in this region, and these correlations remained significant even after controlling for total brain and cerebellar volume. Conversely, trace conditioning performance was related to higher FA and lower perpendicular diffusivity in the left middle peduncle. The effect of prenatal alcohol exposure on trace conditioning was partially mediated by lower FA in this region. Conclusions This study extends recent findings that have used DTI to reveal microstructural deficits in white matter in children with FASD.
This is the first DTI study to demonstrate mediation of a fetal alcohol-related effect on neuropsychological function by deficits in white matter integrity. PMID:21790667
Yao, Juan; Wu, Guang-Yan; Liu, Guo-Long; Liu, Shu-Lei; Yang, Yi; Wu, Bing; Li, Xuan; Feng, Hua; Sui, Jian-Feng
2014-11-01
Learning with a stimulus from one sensory modality can facilitate subsequent learning with a new stimulus from a different sensory modality. To date, the characteristics and mechanism of this phenomenon, named the transfer effect, remain ambiguous. Our previous work showed that electrical stimulation of the medial prefrontal cortex (mPFC) as a conditioned stimulus (CS) could successfully establish classical eyeblink conditioning (EBC). The present study aimed to (1) observe whether transfer of EBC learning would occur when the CS shifts between central (mPFC electrical stimulation as a CS, mPFC-CS) and peripheral (tone as a CS, tone CS) stimuli; and (2) compare the difference in transfer effect between the two paradigms, delay EBC (DEBC) and trace EBC (TEBC). A total of 8 groups of guinea pigs were tested in the study, including 4 experimental groups and 4 control groups. First, the experimental groups received a central (or peripheral) CS paired with a corneal airpuff unconditioned stimulus (US); the CS was then shifted to the peripheral (or central) stimulus and paired with the US. The control groups received the corresponding central (or peripheral) CS pseudo-paired with the US, and then the CS was shifted from central (or peripheral) to peripheral (or central) and paired with the US. The results showed that the acquisition rates of EBC were higher in experimental groups than in control groups after the CS switched from central to peripheral or vice versa, and the CR acquisition rate was remarkably higher in DEBC than in TEBC in both transfer directions. The results indicate that EBC transfer can occur between learning established with an mPFC-CS and a tone CS. Memory of the CS-US association in the delay paradigm was less disturbed by the sudden switch of CS than in the trace paradigm. This study provides new insight into the neural mechanisms underlying the conditioned reflex as well as the role of the mPFC. Copyright © 2014 Elsevier B.V. All rights reserved.
Focused and shifting attention in children with heavy prenatal alcohol exposure.
Mattson, Sarah N; Calarco, Katherine E; Lang, Aimée R
2006-05-01
Attention deficits are a hallmark of the teratogenic effects of alcohol. However, characterization of these deficits remains inconclusive. Children with heavy prenatal alcohol exposure and nonexposed controls were evaluated using a paradigm consisting of three conditions: visual focus, auditory focus, and auditory-visual shift of attention. For the focus conditions, participants responded manually to visual or auditory targets. For the shift condition, participants alternated responses between visual targets and auditory targets. For the visual focus condition, alcohol-exposed children had lower accuracy and slower reaction time for all intertarget intervals (ITIs), while on the auditory focus condition, alcohol-exposed children were less accurate but displayed slower reaction time only on the longest ITI. Finally, for the shift condition, the alcohol-exposed group was as accurate as controls but had slower reaction times. These results indicate that children with heavy prenatal alcohol exposure have pervasive deficits in visual focused attention and deficits in maintaining auditory attention over time. However, no deficits were noted in the ability to disengage and reengage attention when required to shift attention between visual and auditory stimuli, although reaction times to shift were slower. Copyright (c) 2006 APA, all rights reserved.
ERIC Educational Resources Information Center
Fassler, Joan
The study investigated the task performance of cerebral palsied children under conditions of reduced auditory input and under normal auditory conditions. A non-cerebral palsied group was studied in a similar manner. Results indicated that cerebral palsied children showed some positive change in performance, under conditions of reduced auditory…
Bernard, Florian; Deuter, Christian Eric; Gemmar, Peter; Schachinger, Hartmut
2013-10-01
Measuring the positions of the eyelids is an effective and contact-free way to record startle-induced eyeblinks, which play an important role in human psychophysiological research. To the best of our knowledge, no method exists for efficient detection and tracking of the exact eyelid contours in image sequences captured at high speed that is conveniently usable by psychophysiological researchers. In this publication, a semi-automatic model-based eyelid contour detection and tracking algorithm for the analysis of high-speed video recordings from an eye tracker is presented. As a large number of images had been acquired prior to method development, it was important that our technique be able to deal with images that are recorded without any special parametrisation of the eye tracker. The method entails pupil detection and specular reflection removal, and makes use of dynamic model adaptation. In a proof-of-concept study we achieved a correct detection rate of 90.6%. With this approach, we provide a feasible method to accurately assess eyeblinks from high-speed video recordings. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
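The specular-reflection-removal step mentioned above can be illustrated with a toy version: mask saturated highlight pixels and fill them from the surrounding intensity distribution. This is only a sketch under simplifying assumptions (fixed threshold, median fill); the published algorithm is model-based and more sophisticated, and the function name is ours:

```python
def remove_specular_reflections(frame, threshold=240):
    # Replace saturated specular-highlight pixels in a grayscale frame
    # (a list of rows of 0-255 intensities) with the median of the
    # remaining pixels. Crude illustrative fill, not the paper's method.
    background = sorted(p for row in frame for p in row if p < threshold)
    fill = background[len(background) // 2]
    return [[fill if p >= threshold else p for p in row] for row in frame]

# Tiny synthetic 3x3 frame with one saturated glint in the centre.
frame = [[10, 20, 30],
         [40, 255, 60],
         [70, 80, 90]]
print(remove_specular_reflections(frame))  # glint replaced by 60
```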
Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong
2013-10-11
Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulate auditory stimulus processing. A visual spatial cue (VSC) induces orienting of attention to spatial locations; a visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following the VSC were faster and more accurate than those following the VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in the VSC condition was larger than that in the VTC condition, and the mean amplitude of the late positivity (300-420 ms) in the VTC condition was larger than that in the VSC condition. These findings suggest that the modulations of auditory stimulus processing by visually induced spatial and temporal orienting of attention are different, but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
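The component measures quoted above (P1 at 90-110 ms, late positivity at 300-420 ms) are mean amplitudes over fixed post-stimulus windows. A minimal sketch of that computation, assuming a sample list that starts at stimulus onset (the sampling rate and data below are illustrative, not from the study):

```python
def mean_amplitude(signal, srate_hz, t0_ms, t1_ms):
    # Mean amplitude over the window [t0_ms, t1_ms) relative to
    # stimulus onset, assuming `signal` begins at onset.
    i0 = round(t0_ms * srate_hz / 1000)
    i1 = round(t1_ms * srate_hz / 1000)
    window = signal[i0:i1]
    return sum(window) / len(window)

# At 1 kHz sampling, the P1 window (90-110 ms) covers samples 90..109.
erp = [0.0] * 90 + [1.0] * 20 + [0.0] * 300
print(mean_amplitude(erp, 1000, 90, 110))  # 1.0
```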
Weiss, Craig; Disterhoft, John F.
2008-01-01
Many laboratories studying eyeblinks in unanesthetized rodents use a periorbital shock to evoke the blink. The stimulus is typically delivered via a tether and usually obliterates detection of a full unconditioned response with electromyographic (EMG) recording. Here we describe the adapter we have used successfully for several years to deliver puffs of air to the cornea of freely moving rats during our studies of eyeblink conditioning. The stimulus evokes an unconditioned response that can be recorded without affecting the EMG signal. This allows a complete analysis of the unconditioned response which is important for studies examining reflex modification or the effect of drugs, genetic manipulations, or aging on the unconditioned blink reflex. We also describe an infrared reflective sensor that can be added to the tether to minimize the number of wires that need to be implanted around the eye, and which is relatively immune to electrical artifacts associated with a periorbital shock stimulus or other devices powered by alternating current. The responses recorded simultaneously by EMG wires and the optical sensor appear highly correlated and demonstrate that the optical sensor can measure responses that might otherwise be lost due to electrical interference from a shock stimulus. PMID:18598716
Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao
2015-04-15
This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task, to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to indicate whether each recognition word had not been presented or, if it had, with which type of information it had been presented during the encoding phase. Word recognition in the auditory condition was higher than that in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipole analysis of the MEG data indicated higher equivalent current dipole amplitudes in the right fusiform gyrus during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. The results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.
Enhanced Generalization of Auditory Conditioned Fear in Juvenile Mice
ERIC Educational Resources Information Center
Ito, Wataru; Pan, Bing-Xing; Yang, Chao; Thakur, Siddarth; Morozov, Alexei
2009-01-01
Increased emotionality is a characteristic of human adolescence, but its animal models are limited. Here we report that generalization of auditory conditioned fear between a conditional stimulus (CS+) and a novel auditory stimulus is stronger in 4-5-wk-old mice (juveniles) than in their 9-10-wk-old counterparts (adults), whereas nonassociative…
Sound tuning of amygdala plasticity in auditory fear conditioning
Park, Sungmo; Lee, Junuk; Park, Kyungjoon; Kim, Jeongyeon; Song, Beomjong; Hong, Ingie; Kim, Jieun; Lee, Sukwon; Choi, Sukwoo
2016-01-01
Various auditory tones have been used as conditioned stimuli (CS) for fear conditioning, but researchers have largely neglected the effect that different types of auditory tones may have on fear memory processing. Here, we report that at lateral amygdala (LA) synapses (a storage site for fear memory), conditioning with different types of auditory CSs (2.8 kHz tone, white noise, FM tone) recruits distinct forms of long-term potentiation (LTP) and inserts calcium-permeable AMPA receptors (CP-AMPARs) for variable periods. White noise or FM tone conditioning produced brief insertion (<6 hr after conditioning) of CP-AMPARs, whereas 2.8 kHz tone conditioning induced more persistent insertion (≥6 hr). Consistently, conditioned fear to the 2.8 kHz tone, but not to white noise or FM tones, was erased by reconsolidation-update (which depends on the insertion of CP-AMPARs at LA synapses) when it was performed 6 hr after conditioning. Our data suggest that conditioning with different auditory CSs recruits distinct forms of LA synaptic plasticity, resulting in fear memory that is more malleable for some tones than for others. PMID:27488731
Why Do Pictures, but Not Visual Words, Reduce Older Adults’ False Memories?
Smith, Rebekah E.; Hunt, R. Reed; Dunlap, Kathryn R.
2015-01-01
Prior work shows that false memories resulting from the study of associatively related lists are reduced for both young and older adults when the auditory presentation of study list words is accompanied by related pictures relative to when auditory word presentation is combined with visual presentation of the word. In contrast, young adults, but not older adults, show a reduction in false memories when presented with the visual word along with the auditory word relative to hearing the word only. In both the case of pictures relative to visual words and visual words relative to auditory words alone, the benefit of picture and visual words in reducing false memories has been explained in terms of monitoring for perceptual information. In our first experiment we provide the first simultaneous comparison of all three study presentation modalities (auditory only, auditory plus visual word, and auditory plus picture). Young and older adults show a reduction in false memories in the auditory plus picture condition, but only young adults show a reduction in the visual word condition relative to the auditory only condition. A second experiment investigates whether older adults fail to show a reduction in false memory in the visual word condition because they do not encode perceptual information in the visual word condition. In addition, the second experiment provides evidence that the failure of older adults to show the benefits of visual word presentation is related to reduced cognitive resources. PMID:26213799
Fan, Jia; Meintjes, Ernesta M.; Molteno, Christopher D.; Spottiswoode, Bruce S.; Dodge, Neil C.; Alhamud, Alkathafi A.; Stanton, Mark E.; Peterson, Bradley S.; Jacobson, Joseph L.; Jacobson, Sandra W.
2015-01-01
Fetal alcohol spectrum disorders (FASD) are characterized by a range of neurodevelopmental deficits that result from prenatal exposure to alcohol. These can include cognitive, behavioural, and neurological impairment, as well as structural and functional brain damage. Eyeblink conditioning (EBC) is among the most sensitive endpoints affected in FASD. The cerebellar peduncles, large bundles of myelinated nerve fibers that connect the cerebellum to the brainstem, constitute the principal white matter element of the EBC circuit. Diffusion tensor imaging (DTI) is used to assess white matter integrity in fibre pathways linking brain regions. DTI scans of 54 children with FASD and 23 healthy controls, mean age 10.1±1.0 yrs, from the Cape Town Longitudinal Cohort were processed using voxelwise group comparisons. Prenatal alcohol exposure was related to lower fractional anisotropy (FA) bilaterally in the superior cerebellar peduncles and higher mean diffusivity (MD) in the left middle peduncle, effects that remained significant after controlling for potential confounding variables. Lower FA and higher MD in these regions were associated with poorer EBC performance. Moreover, effects of alcohol exposure on EBC decreased significantly after inclusion of these DTI measures in regression models, suggesting that these white matter deficits partially mediate the relation of prenatal alcohol exposure to EBC. The associations of greater alcohol consumption with these DTI measures are largely attributable to greater radial diffusivity, possibly indicating poorer myelination. Thus, these data suggest that fetal alcohol-related deficits in EBC are attributable, in part, to poorer myelination in key regions of the cerebellar peduncles. PMID:25783559
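The mediation claim above (the alcohol-EBC effect shrinks once FA enters the model) is often summarized with the simple difference method: the share of the total effect c carried by the mediator is (c - c')/c, where c' is the direct effect after adjustment. A sketch with hypothetical effect sizes, not the study's coefficients:

```python
def proportion_mediated(total_effect, direct_effect):
    # Difference-method share of the total exposure effect (c) that is
    # accounted for by the mediator: (c - c') / c.
    return (total_effect - direct_effect) / total_effect

# Hypothetical standardized effects of exposure on EBC performance:
# c = -0.5 without the mediator, c' = -0.25 once FA is included.
print(proportion_mediated(-0.5, -0.25))  # 0.5
```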
The effects of early auditory-based intervention on adult bilateral cochlear implant outcomes.
Lim, Stacey R
2017-09-01
The goal of this exploratory study was to determine the types of improvement that sequentially implanted auditory-verbal and auditory-oral adults with prelingual and childhood hearing loss received in bilateral listening conditions, compared to their best unilateral listening condition. Five auditory-verbal adults and five auditory-oral adults were recruited for this study. Participants were seated in the center of a 6-loudspeaker array. BKB-SIN sentences were presented from 0° azimuth, while multi-talker babble was presented from various loudspeakers. BKB-SIN scores in bilateral and the best unilateral listening conditions were compared to determine the amount of improvement gained. As a group, the participants had improved speech understanding scores in the bilateral listening condition. Although the difference was not statistically significant, the auditory-verbal group tended to show greater speech understanding at higher levels of competing background noise than the auditory-oral participants. Bilateral cochlear implantation provides individuals with prelingual and childhood hearing loss with improved speech understanding in noise. A greater emphasis on auditory development during the critical language development years may contribute to increased speech understanding in adulthood. However, other demographic factors such as age or device characteristics must also be considered. Although both auditory-verbal and auditory-oral approaches emphasize spoken language development, they emphasize auditory development to different degrees. This may affect cochlear implant (CI) outcomes. Future auditory research should consider whether these differences contribute to performance outcomes.
Additional investigation with a larger participant pool, controlling for the effects of age and of CI devices and processing strategies, would be necessary to determine whether language learning approaches are associated with different levels of speech understanding performance.
Evaluation of an imputed pitch velocity model of the auditory tau effect.
Henry, Molly J; McAuley, J Devin; Zaleha, Marta
2009-08-01
This article extends an imputed pitch velocity model of the auditory kappa effect proposed by Henry and McAuley (2009a) to the auditory tau effect. Two experiments were conducted using an AXB design in which listeners judged the relative pitch of a middle target tone (X) in ascending and descending three-tone sequences. In Experiment 1, sequences were isochronous, establishing constant fast, medium, and slow velocity conditions. No systematic distortions in perceived target pitch were observed, and thresholds were similar across velocity conditions. Experiment 2 introduced to-be-ignored variations in target timing. Variations in target timing that deviated from constant velocity conditions introduced systematic distortions in perceived target pitch, indicative of a robust auditory tau effect. Consistent with an auditory motion hypothesis, the magnitude of the tau effect was larger at faster velocities. In addition, the tau effect was generally stronger for descending sequences than for ascending sequences. Combined with previous work on the auditory kappa effect, the imputed velocity model and associated auditory motion hypothesis provide a unified quantitative account of both auditory tau and kappa effects. In broader terms, these findings add support to the view that pitch and time relations in auditory patterns are fundamentally interdependent.
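In the imputed velocity model, velocity is pitch change per unit time, and the auditory motion hypothesis predicts that the perceived pitch of the middle target tone is pulled toward the pitch implied by a constant-velocity trajectory through the flanking tones. A toy sketch of that prediction; the weighting parameter `w` is an arbitrary illustrative value, not a parameter fitted in the study:

```python
def imputed_velocity(p1, t1, p3, t3):
    """Pitch velocity implied by the first and last tones of the
    sequence (e.g., semitones per second)."""
    return (p3 - p1) / (t3 - t1)

def predicted_target_pitch(p1, t1, p3, t3, t_x, p_x, w=0.3):
    """Perceived pitch of the middle tone X as a weighted pull toward
    the constant-velocity trajectory. w=0.3 is illustrative only."""
    p_implied = p1 + imputed_velocity(p1, t1, p3, t3) * (t_x - t1)
    return (1 - w) * p_x + w * p_implied
```

When target timing matches constant velocity, the implied and actual pitches coincide and no distortion is predicted; shifting the target's timing (as in Experiment 2) makes the implied pitch deviate, producing the tau-effect distortion.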
Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.
Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M
2013-11-01
Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.
Electrophysiological evidence for a general auditory prediction deficit in adults who stutter
Daliri, Ayoub; Max, Ludo
2015-01-01
We previously found that stuttering individuals do not show the typical auditory modulation observed during speech planning in nonstuttering individuals. In this follow-up study, we further elucidate this difference by investigating whether stuttering speakers’ atypical auditory modulation is observed only when sensory predictions are based on movement planning or also when predictable auditory input is not a consequence of one’s own actions. We recorded auditory evoked potentials from 10 stuttering and 10 nonstuttering adults in response to random probe tones delivered while participants anticipated either speaking aloud or hearing their own speech played back, and in a control condition without auditory input (besides probe tones). N1 amplitude of nonstuttering speakers was reduced prior to both speaking and hearing versus the control condition. Stuttering speakers, however, showed no N1 amplitude reduction in either the speaking or hearing condition as compared with control. Thus, findings suggest that stuttering speakers have general auditory prediction difficulties. PMID:26335995
Auditory processing deficits in bipolar disorder with and without a history of psychotic features.
Zenisek, RyAnna; Thaler, Nicholas S; Sutton, Griffin P; Ringdahl, Erik N; Snyder, Joel S; Allen, Daniel N
2015-11-01
Auditory perception deficits have been identified in schizophrenia (SZ) and linked to dysfunction in the auditory cortex. Given that psychotic symptoms, including auditory hallucinations, are also seen in bipolar disorder (BD), it may be that individuals with BD who also exhibit psychotic symptoms demonstrate a similar impairment in auditory perception. Fifty individuals with SZ, 30 individuals with bipolar I disorder with a history of psychosis (BD+), 28 individuals with bipolar I disorder with no history of psychotic features (BD-), and 29 normal controls (NC) were administered a tone discrimination task and an emotion recognition task. Mixed-model analyses of covariance with planned comparisons indicated that individuals with BD+ performed at a level that was intermediate between those with BD- and those with SZ on the more difficult condition of the tone discrimination task and on the auditory condition of the emotion recognition task. There were no differences between the BD+ and BD- groups on the visual or auditory-visual affect recognition conditions. Regression analyses indicated that performance on the tone discrimination task predicted performance on all conditions of the emotion recognition task. Auditory hallucinations in BD+ were not related to performance on either task. Our findings suggested that, although deficits in frequency discrimination and emotion recognition are more severe in SZ, these impairments extend to BD+. Although our results did not support the idea that auditory hallucinations may be related to these deficits, they indicated that basic auditory deficits may be a marker for psychosis, regardless of SZ or BD diagnosis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Sanju, Himanshu Kumar; Kumar, Prawin
2016-10-01
Introduction: Mismatch negativity (MMN) is a negative component of the event-related potential (ERP) elicited by any discriminable change in auditory stimulation. Objective: The present study aimed to assess pre-attentive auditory discrimination skill with fine and gross differences between auditory stimuli. Method: Seventeen normal-hearing individuals participated in the study after giving informed consent. To assess pre-attentive auditory discrimination with a fine difference between stimuli, we recorded MMN with a pair of pure tones, using 1000 Hz as the frequent stimulus and 1010 Hz as the infrequent stimulus. Similarly, to assess pre-attentive auditory discrimination with a gross difference between stimuli, we used 1000 Hz as the frequent stimulus and 1100 Hz as the infrequent stimulus. We analyzed MMN for onset latency, offset latency, peak latency, peak amplitude, and area under the curve. Result: MMN was present in only 64% of the individuals in both conditions. Further, multivariate analysis of variance (MANOVA) showed no significant difference in any MMN measure (onset latency, offset latency, peak latency, peak amplitude, or area under the curve) between the two conditions. Conclusion: The present study showed similar pre-attentive skills for both conditions, fine (1000 Hz vs. 1010 Hz) and gross (1000 Hz vs. 1100 Hz) differences in auditory stimuli, at a higher (endogenous) level of the auditory system.
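The MMN measures analyzed in the study (onset latency, offset latency, peak latency, peak amplitude, and area under the curve) can all be read off the deviant-minus-standard difference waveform. A rough sketch assuming a simple amplitude-threshold criterion; the study's actual scoring criteria are not specified here:

```python
def mmn_measures(wave, times, threshold=-0.5):
    """Extract MMN-like measures from a difference waveform (microvolts).

    Onset/offset latency: first/last sample more negative than `threshold`.
    Peak latency/amplitude: the most negative sample.
    Area: summed suprathreshold negativity times the sampling step.
    Returns None if no sample crosses the threshold (MMN absent)."""
    below = [i for i, v in enumerate(wave) if v < threshold]
    if not below:
        return None
    peak_i = min(range(len(wave)), key=lambda i: wave[i])
    dt = times[1] - times[0]  # assumes uniform sampling
    area = sum(-wave[i] * dt for i in below)
    return {
        "onset": times[below[0]],
        "offset": times[below[-1]],
        "peak_latency": times[peak_i],
        "peak_amplitude": wave[peak_i],
        "area": area,
    }
```

The None branch mirrors the study's observation that MMN was identifiable in only 64% of individuals: waveforms with no clear negativity simply yield no measures.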
Auditory Cortex Is Required for Fear Potentiation of Gap Detection
Weible, Aldis P.; Liu, Christine; Niell, Cristopher M.
2014-01-01
Auditory cortex is necessary for the perceptual detection of brief gaps in noise, but is not necessary for many other auditory tasks such as frequency discrimination, prepulse inhibition of startle responses, or fear conditioning with pure tones. It remains unclear why auditory cortex should be necessary for some auditory tasks but not others. One possibility is that auditory cortex is causally involved in gap detection and other forms of temporal processing in order to associate meaning with temporally structured sounds. This predicts that auditory cortex should be necessary for associating meaning with gaps. To test this prediction, we developed a fear conditioning paradigm for mice based on gap detection. We found that pairing a 10 or 100 ms gap with an aversive stimulus caused a robust enhancement of gap detection measured 6 h later, which we refer to as fear potentiation of gap detection. Optogenetic suppression of auditory cortex during pairing abolished this fear potentiation, indicating that auditory cortex is critically involved in associating temporally structured sounds with emotionally salient events. PMID:25392510
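The conditioned stimuli described above are noise bursts containing a brief silent gap (10 or 100 ms). A minimal sketch of generating such a gap-in-noise stimulus; the parameter names and values are illustrative, not the study's stimulus specification:

```python
import random

def gap_in_noise(duration_ms, gap_start_ms, gap_ms, fs=44100, seed=0):
    """Generate white noise of `duration_ms` with a silent gap of
    `gap_ms` starting at `gap_start_ms`. Returns a list of samples
    in [-1, 1]; gap samples are exactly 0.0."""
    rng = random.Random(seed)
    n = int(fs * duration_ms / 1000)
    g0 = int(fs * gap_start_ms / 1000)
    g1 = g0 + int(fs * gap_ms / 1000)
    return [0.0 if g0 <= i < g1 else rng.uniform(-1, 1) for i in range(n)]
```

In practice the gap edges would be ramped to avoid spectral splatter; that detail is omitted here for brevity.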
Sleigh, Merry J; Casey, Michael B
2014-07-01
Species-typical developmental outcomes result from organismic and environmental constraints and experiences shared by members of a species. We examined the effects of enhanced prenatal sensory experience on hatching behaviors by exposing domestic chicks (n = 95) and Japanese quail (n = 125) to one of four prenatal conditions: enhanced visual stimulation, enhanced auditory stimulation, enhanced auditory and visual stimulation, or no enhanced sensory experience (control condition). In general, across species, control embryos had slower hatching behaviors than all other embryos. Embryos in the auditory condition had faster hatching behaviors than embryos in the visual and control conditions. Auditory-visual condition embryos showed similarities to embryos exposed to either auditory or visual stimulation. These results suggest that prenatal sensory experience can influence hatching behavior of precocial birds, with the type of stimulation being a critical variable. These results also provide further evidence that species-typical outcomes are the result of species-typical prenatal experiences. © 2013 Wiley Periodicals, Inc.
Scanning silence: mental imagery of complex sounds.
Bunzeck, Nico; Wuestenberg, Torsten; Lutz, Kai; Heinze, Hans-Jochen; Jancke, Lutz
2005-07-15
In this functional magnetic resonance imaging (fMRI) study, we investigated the neural basis of mental auditory imagery of familiar complex sounds that did not contain language or music. In the first condition (perception), the subjects watched familiar scenes and listened to the corresponding sounds that were presented simultaneously. In the second condition (imagery), the same scenes were presented silently and the subjects had to mentally imagine the appropriate sounds. During the third condition (control), the participants watched a scrambled version of the scenes without sound. To overcome the disadvantages of stray acoustic scanner noise in auditory fMRI experiments, we applied a sparse temporal sampling technique with five functional clusters that were acquired at the end of each movie presentation. Compared to the control condition, we found bilateral activations in the primary and secondary auditory cortices (including Heschl's gyrus and planum temporale) during perception of complex sounds. In contrast, the imagery condition elicited bilateral hemodynamic responses only in the secondary auditory cortex (including the planum temporale). No significant activity was observed in the primary auditory cortex. The results show that imagery and perception of complex sounds that do not contain language or music rely on overlapping neural correlates of the secondary but not primary auditory cortex.
Pilkiw, Maryna; Insel, Nathan; Cui, Younghua; Finney, Caitlin; Morrissey, Mark D; Takehara-Nishiuchi, Kaori
2017-07-06
The lateral entorhinal cortex (LEC) is thought to bind sensory events with the environment where they took place. To compare the relative influence of transient events and temporally stable environmental stimuli on the firing of LEC cells, we recorded neuron spiking patterns in the region during blocks of a trace eyeblink conditioning paradigm performed in two environments and with different conditioning stimuli. Firing rates of some neurons were phasically selective for conditioned stimuli in a way that depended on which room the rat was in; nearly all neurons were tonically selective for environments in a way that depended on which stimuli had been presented in those environments. As rats moved from one environment to another, tonic neuron ensemble activity exhibited prospective information about the conditioned stimulus associated with the environment. Thus, the LEC formed phasic and tonic codes for event-environment associations, thereby accurately differentiating multiple experiences with overlapping features.
Impact of olfactory and auditory priming on the attraction to foods with high energy density.
Chambaron, S; Chisin, Q; Chabanet, C; Issanchou, S; Brand, G
2015-12-01
Recent research suggests that non-attentively perceived stimuli may significantly influence consumers' food choices. The main objective of the present study was to determine whether an olfactory prime (a sweet-fatty odour) and a semantic auditory prime (a nutritional prevention message), both presented incidentally, either alone or in combination can influence subsequent food choices. The experiment included 147 participants who were assigned to four different conditions: a control condition, a scented condition, an auditory condition or an auditory-scented condition. All participants remained in the waiting room for 15 min while they performed a 'lure' task. For the scented condition, the participants were unobtrusively exposed to a 'pain au chocolat' odour. Those in the auditory condition were exposed to an audiotape including radio podcasts and a nutritional message. A third group of participants was exposed to both olfactory and auditory stimuli simultaneously. In the control condition, no stimulation was given. Following this waiting period, all participants moved into a non-odorised test room where they were asked to choose, from dishes served buffet-style, the starter, main course and dessert that they would actually eat for lunch. The results showed that the participants primed with the odour of 'pain au chocolat' tended to choose more desserts with high energy density (i.e., a waffle) than the participants in the control condition (p = 0.06). Unexpectedly, the participants primed with the nutritional auditory message chose to consume more desserts with high energy density than the participants in the control condition (p = 0.03). In the last condition (odour and nutritional message), they chose to consume more desserts with high energy density than the participants in the control condition (p = 0.01), and the data reveal an additive effect of the two primes. Copyright © 2015 Elsevier Ltd. All rights reserved.
Auditory Processing Disorder in Children
Auditory processing disorder (APD) describes a condition ...
Gallagher, Rosemary; Damodaran, Harish; Werner, William G; Powell, Wendy; Deutsch, Judith E
2016-08-19
Evidence-based virtual environments (VEs) that incorporate compensatory strategies such as cueing may change motor behavior and increase exercise intensity while also being engaging and motivating. The purpose of this study was to determine if persons with Parkinson's disease and age-matched healthy adults responded to auditory and visual cueing embedded in a bicycling VE as a method to increase exercise intensity. We tested two groups of participants, persons with Parkinson's disease (PD) (n = 15) and age-matched healthy adults (n = 13) as they cycled on a stationary bicycle while interacting with a VE. Participants cycled under two conditions: auditory cueing (provided by a metronome) and visual cueing (represented as central road markers in the VE). The auditory condition had four trials in which auditory cues or the VE were presented alone or in combination. The visual condition had five trials in which the VE and visual cue rate presentation was manipulated. Data were analyzed by condition using factorial RMANOVAs with planned t-tests corrected for multiple comparisons. There were no differences in pedaling rates between groups for both the auditory and visual cueing conditions. Persons with PD increased their pedaling rate in the auditory (F = 4.78, p = 0.029) and visual cueing (F = 26.48, p < 0.001) conditions. Age-matched healthy adults also increased their pedaling rate in the auditory (F = 24.72, p < 0.001) and visual cueing (F = 40.69, p < 0.001) conditions. Trial-to-trial comparisons in the visual condition in age-matched healthy adults showed a step-wise increase in pedaling rate (p = 0.003 to p < 0.001). In contrast, persons with PD increased their pedaling rate only when explicitly instructed to attend to the visual cues (p < 0.001). An evidence-based cycling VE can modify pedaling rate in persons with PD and age-matched healthy adults. Persons with PD required attention directed to the visual cues in order to obtain an increase in cycling intensity.
The combination of the VE and auditory cues was neither additive nor interfering. These data serve as preliminary evidence that embedding auditory and visual cues in a VE to alter cycling speed is a method of increasing exercise intensity that may promote fitness.
Kodak, Tiffany; Clements, Andrea; Paden, Amber R; LeBlanc, Brittany; Mintz, Joslyn; Toussaint, Karen A
2015-01-01
The current investigation evaluated repertoires that may be related to performance on auditory-to-visual conditional discrimination training with 9 students who had been diagnosed with autism spectrum disorder. The skills included in the assessment were matching, imitation, scanning, an auditory discrimination, and a visual discrimination. The results of the skills assessment showed that 4 participants failed to demonstrate mastery of at least 1 of the skills. We compared the outcomes of the assessment to the results of auditory-visual conditional discrimination training and found that training outcomes were related to the assessment outcomes for 7 of the 9 participants. One participant who did not demonstrate mastery of all assessment skills subsequently learned several conditional discriminations when blocked training trials were conducted. Another participant who did not demonstrate mastery of the auditory discrimination skill subsequently acquired conditional discriminations in 1 of the training conditions. We discuss the implications of the assessment for practice and suggest additional areas of research on this topic. © Society for the Experimental Analysis of Behavior.
Prefrontal control of cerebellum-dependent associative motor learning.
Chen, Hao; Yang, Li; Xu, Yan; Wu, Guang-yan; Yao, Juan; Zhang, Jun; Zhu, Zhi-ru; Hu, Zhi-an; Sui, Jian-feng; Hu, Bo
2014-02-01
Behavioral studies have demonstrated that both medial prefrontal cortex (mPFC) and cerebellum play critical roles in trace eyeblink conditioning. However, little is known regarding the mechanism by which the two brain regions interact. By use of electrical stimulation of the caudal mPFC as a conditioned stimulus, we show evidence that persistent outputs from the mPFC to cerebellum are necessary and sufficient for the acquisition and expression of a trace conditioned response (CR)-like response. Specifically, the persistent outputs of caudal mPFC are relayed to the cerebellum via the rostral part of lateral pontine nuclei. Moreover, interfering with persistent activity by blockade of the muscarinic Ach receptor in the caudal mPFC impairs the expression of learned trace CRs. These results suggest an important way for the caudal mPFC to interact with the cerebellum during associative motor learning.
Schelinski, Stefanie; Riedel, Philipp; von Kriegstein, Katharina
2014-12-01
In auditory-only conditions, for example when we listen to someone on the phone, it is essential to recognize quickly and accurately what is said (speech recognition). Previous studies have shown that speech recognition performance in auditory-only conditions is better if the speaker is known not only by voice, but also by face. Here, we tested the hypothesis that such an improvement in auditory-only speech recognition depends on the ability to lip-read. To test this we recruited a group of adults with autism spectrum disorder (ASD), a condition associated with difficulties in lip-reading, and typically developing controls. All participants were trained to identify six speakers by name and voice. Three speakers were learned by a video showing their face and three others were learned in a matched control condition without face. After training, participants performed an auditory-only speech recognition test that consisted of sentences spoken by the trained speakers. As a control condition, the test also included speaker identity recognition on the same auditory material. The results showed that, in the control group, performance in speech recognition was improved for speakers known by face in comparison to speakers learned in the matched control condition without face. The ASD group lacked such a performance benefit. For the ASD group auditory-only speech recognition was even worse for speakers known by face compared to speakers not known by face. In speaker identity recognition, the ASD group performed worse than the control group independent of whether the speakers were learned with or without face. Two additional visual experiments showed that the ASD group performed worse in lip-reading whereas face identity recognition was within the normal range. The findings support the view that auditory-only communication involves specific visual mechanisms.
Further, they indicate that in ASD, speaker-specific dynamic visual information is not available to optimize auditory-only speech recognition. Copyright © 2014 Elsevier Ltd. All rights reserved.
Coesmans, Michael; Röder, Christian H.; Smit, Albertine E.; Koekkoek, Sebastiaan K.E.; De Zeeuw, Chris I.; Frens, Maarten A.; van der Geest, Josef N.
2014-01-01
Background: The notion that cerebellar deficits may underlie clinical symptoms in people with schizophrenia is tested by evaluating 2 forms of cerebellar learning in patients with recent-onset schizophrenia. A potential medication effect is evaluated by including patients with or without antipsychotics. Methods: We assessed saccadic eye movement adaptation and eyeblink conditioning in men with recent-onset schizophrenia who were taking antipsychotic medication or who were antipsychotic-free and in age-matched controls. Results: We included 39 men with schizophrenia (10 who were taking clozapine, 16 who were taking haloperidol and 13 who were antipsychotic-free) and 29 controls in our study. All participants showed significant saccadic adaptation. Adaptation strength did not differ between healthy controls and men with schizophrenia. The speed of saccade adaptation, however, was significantly lower in men with schizophrenia. They also showed a significantly lower increase in the number of conditioned eyeblink responses. Over all experiments, no consistent effects of medication were observed. These outcomes did not correlate with age, years of education, psychopathology or dose of antipsychotics. Limitations: As patients were not randomized for treatment, an influence of confounding variables associated with medication status cannot be excluded. Individual patients also varied along the schizophrenia spectrum despite the relative homogeneity with respect to onset of illness and short usage of medication. Finally, the relatively small number of participants may have concealed effects as a result of insufficient statistical power. Conclusion: We found several cerebellar learning deficits in men with schizophrenia that we cannot attribute to the use of antipsychotics.
This finding, combined with the fact that deficits are already present in patients with recent-onset schizophrenia, suggests that cerebellar impairments may be a trait deficit in people with schizophrenia; this should be confirmed in longitudinal studies. PMID:24083457
Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale
2017-04-01
There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.
Functional neuroanatomy of auditory scene analysis in Alzheimer's disease
Golden, Hannah L.; Agustus, Jennifer L.; Goll, Johanna C.; Downey, Laura E.; Mummery, Catherine J.; Schott, Jonathan M.; Crutch, Sebastian J.; Warren, Jason D.
2015-01-01
Auditory scene analysis is a demanding computational process that is performed automatically and efficiently by the healthy brain but vulnerable to the neurodegenerative pathology of Alzheimer's disease. Here we assessed the functional neuroanatomy of auditory scene analysis in Alzheimer's disease using the well-known ‘cocktail party effect’ as a model paradigm whereby stored templates for auditory objects (e.g., hearing one's spoken name) are used to segregate auditory ‘foreground’ and ‘background’. Patients with typical amnestic Alzheimer's disease (n = 13) and age-matched healthy individuals (n = 17) underwent functional 3T-MRI using a sparse acquisition protocol with passive listening to auditory stimulus conditions comprising the participant's own name interleaved with or superimposed on multi-talker babble, and spectrally rotated (unrecognisable) analogues of these conditions. Name identification (conditions containing the participant's own name contrasted with spectrally rotated analogues) produced extensive bilateral activation involving superior temporal cortex in both the AD and healthy control groups, with no significant differences between groups. Auditory object segregation (conditions with interleaved name sounds contrasted with superimposed name sounds) produced activation of right posterior superior temporal cortex in both groups, again with no differences between groups. However, the cocktail party effect (interaction of own name identification with auditory object segregation processing) produced activation of right supramarginal gyrus in the AD group that was significantly enhanced compared with the healthy control group. The findings delineate an altered functional neuroanatomical profile of auditory scene analysis in Alzheimer's disease that may constitute a novel computational signature of this neurodegenerative pathology. PMID:26029629
Demonstrating the Potential for Dynamic Auditory Stimulation to Contribute to Motion Sickness
Keshavarz, Behrang; Hettinger, Lawrence J.; Kennedy, Robert S.; Campos, Jennifer L.
2014-01-01
Auditory cues can create the illusion of self-motion (vection) in the absence of visual or physical stimulation. The present study aimed to determine whether auditory cues alone can also elicit motion sickness and how auditory cues contribute to motion sickness when added to visual motion stimuli. Twenty participants were seated in front of a curved projection display and were exposed to a virtual scene that constantly rotated around the participant's vertical axis. The virtual scene contained either visual-only, auditory-only, or a combination of corresponding visual and auditory cues. All participants performed all three conditions in a counterbalanced order. Participants tilted their heads alternately towards the right or left shoulder in all conditions during stimulus exposure in order to create pseudo-Coriolis effects and to maximize the likelihood for motion sickness. Measurements of motion sickness (onset, severity), vection (latency, strength, duration), and postural steadiness (center of pressure) were recorded. Results showed that adding auditory cues to the visual stimuli did not, on average, affect motion sickness and postural steadiness, but it did reduce vection onset times and increased vection strength compared to pure visual or pure auditory stimulation. Eighteen of the 20 participants reported at least slight motion sickness in the two conditions including visual stimuli. More interestingly, six participants also reported slight motion sickness during pure auditory stimulation and two of the six participants stopped the pure auditory test session due to motion sickness. The present study is the first to demonstrate that motion sickness may be caused by pure auditory stimulation, which we refer to as “auditorily induced motion sickness”. PMID:24983752
Zhang, Guang-Wei; Sun, Wen-Jian; Zingg, Brian; Shen, Li; He, Jufang; Xiong, Ying; Tao, Huizhong W; Zhang, Li I
2018-01-17
In the mammalian brain, auditory information is known to be processed along a central ascending pathway leading to auditory cortex (AC). Whether there exist any major pathways beyond this canonical auditory neuraxis remains unclear. In awake mice, we found that auditory responses in entorhinal cortex (EC) cannot be explained by a previously proposed relay from AC based on response properties. By combining anatomical tracing and optogenetic/pharmacological manipulations, we discovered that EC received auditory input primarily from the medial septum (MS), rather than AC. A previously uncharacterized auditory pathway was then revealed: it branched from the cochlear nucleus, and via caudal pontine reticular nucleus, pontine central gray, and MS, reached EC. Neurons along this non-canonical auditory pathway responded selectively to high-intensity broadband noise, but not pure tones. Disruption of the pathway resulted in an impairment of specifically noise-cued fear conditioning. This reticular-limbic pathway may thus function in processing aversive acoustic signals. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Phillips, Rachel; Madhavan, Poornima
2010-01-01
The purpose of this research was to examine the impact of environmental distractions on human trust and utilization of automation during the process of visual search. Participants performed a computer-simulated airline luggage screening task with the assistance of a 70% reliable automated decision aid (called DETECTOR) both with and without environmental distractions. The distraction was implemented as a secondary task in either a competing modality (visual) or non-competing modality (auditory). The secondary task processing code either competed with the luggage screening task (spatial code) or with the automation's textual directives (verbal code). We measured participants' system trust, perceived reliability of the system (when a target weapon was present and absent), compliance, reliance, and confidence when agreeing and disagreeing with the system under both distracted and undistracted conditions. Results revealed that system trust was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Perceived reliability of the system (when the target was present) was significantly higher when the secondary task was visual rather than auditory. Compliance with the aid increased in all conditions except for the auditory-verbal condition, where it decreased. Similar to the pattern for trust, reliance on the automation was lower in the visual-spatial and auditory-verbal conditions than in the visual-verbal and auditory-spatial conditions. Confidence when agreeing with the system decreased with the addition of any kind of distraction; however, confidence when disagreeing increased with the addition of an auditory secondary task but decreased with the addition of a visual task. A model was developed to represent the research findings and demonstrate the relationship between secondary task modality, processing code, and automation use. 
Results suggest that the nature of environmental distractions influences interaction with automation via significant effects on trust and system utilization. These findings have implications for both automation design and operator training.
Auditory and motor imagery modulate learning in music performance
Brown, Rachel M.; Palmer, Caroline
2013-01-01
Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. 
Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of auditory interference. Motor imagery aided pitch accuracy overall when interference conditions were manipulated at encoding (Experiment 1) but not at retrieval (Experiment 2). Thus, skilled performers' imagery abilities had distinct influences on encoding and retrieval of musical sequences. PMID:23847495
Black, Emily; Stevenson, Jennifer L; Bish, Joel P
2017-08-01
The global precedence effect is a phenomenon in which global aspects of visual and auditory stimuli are processed before local aspects. Individuals with musical experience perform better on all aspects of auditory tasks compared with individuals with less musical experience. The hemispheric lateralization of this auditory processing is less well-defined. The present study aimed to replicate the global precedence effect with auditory stimuli and to explore the lateralization of global and local auditory processing in individuals with differing levels of musical experience. A total of 38 college students completed an auditory-directed attention task while electroencephalography was recorded. Individuals with low musical experience responded significantly faster and more accurately in global trials than in local trials regardless of condition, and significantly faster and more accurately when pitches traveled in the same direction (compatible condition) than when pitches traveled in two different directions (incompatible condition) consistent with a global precedence effect. In contrast, individuals with high musical experience showed less of a global precedence effect with regards to accuracy, but not in terms of reaction time, suggesting an increased ability to overcome global bias. Further, a difference in P300 latency between hemispheres was observed. These findings provide a preliminary neurological framework for auditory processing of individuals with differing degrees of musical experience.
Analysis of MEG Auditory 40-Hz Response by Event-Related Coherence
NASA Astrophysics Data System (ADS)
Tanaka, Keita; Kawakatsu, Masaki; Yunokuchi, Kazutomo
We examined the event-related coherence of magnetoencephalography (the auditory 40-Hz response) while subjects were presented with click acoustic stimuli at a repetition rate of 40 Hz in 'Attend' and 'Reading' conditions. MEG signals were recorded from 5 healthy males using a whole-head SQUID system. Event-related coherence was used to measure the short-lived synchronization that occurs in response to a stimulus. The results showed that the peak coherence of the auditory 40-Hz response between the right and left temporal regions was significantly larger when subjects paid attention to the stimuli ('Attend' condition) than when they ignored them ('Reading' condition). Moreover, the latency of coherence in the auditory 40-Hz response was significantly shorter when the subjects paid attention to the stimuli. These results suggest that phase synchronization between the right and left temporal regions in the auditory 40-Hz response correlates closely with selective attention.
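The coherence measure used in this study can be illustrated numerically. The abstract does not specify the authors' exact estimator, so the following is a generic cross-trial magnitude-squared coherence evaluated at a single frequency bin; the function name and the synthetic two-sensor data are illustrative only, not taken from the study:

```python
import numpy as np

def coherence_at_freq(x_trials, y_trials, fs, freq):
    """Cross-trial magnitude-squared coherence between two sensors at one
    frequency: |sum X Y*|^2 / (sum |X|^2 * sum |Y|^2), in [0, 1]."""
    X = np.fft.rfft(x_trials, axis=1)
    Y = np.fft.rfft(y_trials, axis=1)
    k = int(round(freq * x_trials.shape[1] / fs))  # FFT bin nearest to freq
    sxy = np.sum(X[:, k] * np.conj(Y[:, k]))
    sxx = np.sum(np.abs(X[:, k]) ** 2)
    syy = np.sum(np.abs(Y[:, k]) ** 2)
    return np.abs(sxy) ** 2 / (sxx * syy)

# Synthetic demo: two "sensors" share a 40-Hz component plus independent noise
rng = np.random.default_rng(0)
fs, n_trials, n_samples = 1000, 50, 1000
t = np.arange(n_samples) / fs
shared = np.sin(2 * np.pi * 40 * t)
x = shared + 0.5 * rng.standard_normal((n_trials, n_samples))
y = shared + 0.5 * rng.standard_normal((n_trials, n_samples))
coh = coherence_at_freq(x, y, fs, 40.0)  # close to 1 for phase-locked activity
```

Coherence lies between 0 and 1, so stronger attention-related phase locking between left and right temporal sensors would appear as a larger value at 40 Hz.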
Blink and you’ll miss it: the role of blinking in the perception of magic tricks
Wiseman, Richard J; Nakano, Tamami
2016-01-01
Magicians use several techniques to deceive their audiences, including, for example, the misdirection of attention and verbal suggestion. We explored another potential stratagem, namely the relaxation of attention. Participants watched a video of a highly skilled magician whilst having their eye-blinks recorded. The timing of spontaneous eye-blinks was highly synchronized across participants. In addition, the synchronized blinks frequently occurred immediately after a seemingly impossible feat, and often coincided with actions that the magician wanted to conceal from the audience. Given that blinking is associated with the relaxation of attention, these findings suggest that blinking plays an important role in the perception of magic, and that magicians may utilize blinking and the relaxation of attention to hide certain secret actions. PMID:27069808
Pre-attentive, context-specific representation of fear memory in the auditory cortex of rat.
Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu
2013-01-01
Neural representation in the auditory cortex is rapidly modulated by both top-down attention and bottom-up stimulus properties in order to improve perception in a given context. Learning-induced, pre-attentive map plasticity has also been studied in the anesthetized cortex; however, little attention has been paid to rapid, context-dependent modulation. We hypothesize that context-specific learning leads to pre-attentively modulated, multiplex representation in the auditory cortex. Here, we investigate map plasticity in the auditory cortices of anesthetized rats conditioned in a context-dependent manner, such that a conditioned stimulus (CS) of a 20-kHz tone and an unconditioned stimulus (US) of a mild electrical shock were associated only under a noisy auditory context, but not in silence. After the conditioning, although no distinct plasticity was found in the tonotopic map, tone-evoked responses were more resistant to noise than before conditioning. Yet, the conditioned group showed a reduced spread of activation to each tone with noise, but not with silence, associated with a sharpening of frequency tuning. The encoding accuracy index of neurons showed that conditioning deteriorated the accuracy of tone-frequency representations in the noisy condition at off-CS regions, but not at CS regions, suggesting that arbitrary tones around the frequency of the CS were more likely perceived as the CS in the specific context where the CS was associated with the US. These results together demonstrate that learning-induced plasticity in the auditory cortex occurs in a context-dependent manner.
Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel
2017-04-01
Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
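The optical τ referred to in this abstract is the ratio of an object's instantaneous optical size (visual angle) to its instantaneous rate of change; for a constant closing speed this ratio equals the true time to contact, without requiring knowledge of the object's physical size or distance. A minimal numerical sketch, with the scene geometry and function name chosen for illustration rather than taken from the study:

```python
import numpy as np

def tau_estimate(theta, dt):
    """TTC estimate from optical-angle samples: tau = theta / (d theta / dt)."""
    return theta / np.gradient(theta, dt)

# A 2-m-wide vehicle starting 50 m away, closing at a constant 10 m/s,
# so the true time to contact at t = 0 is 50 / 10 = 5 s.
dt = 0.01
t = np.arange(0.0, 3.0, dt)
d = 50.0 - 10.0 * t                 # remaining distance (m), stays positive here
theta = 2 * np.arctan(1.0 / d)      # visual angle subtended (half-width 1 m)
tau = tau_estimate(theta, dt)       # ~5 s at t = 0, shrinking as the car nears
```

Because τ is computed from optical expansion alone, it is available directly from the retinal image, which is what makes it a plausible perceptual cue alongside the heuristic cues (final optical size, final sound pressure level) discussed above.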
Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias
2018-01-01
Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Specifically, a facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., a unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants' accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries, with search times increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants' performance in the congruent condition was modulated by their tone localisation accuracy.
The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637
ERIC Educational Resources Information Center
Kodak, Tiffany; Clements, Andrea; Paden, Amber R.; LeBlanc, Brittany; Mintz, Joslyn; Toussaint, Karen A.
2015-01-01
The current investigation evaluated repertoires that may be related to performance on auditory-to-visual conditional discrimination training with 9 students who had been diagnosed with autism spectrum disorder. The skills included in the assessment were matching, imitation, scanning, an auditory discrimination, and a visual discrimination. The…
ERIC Educational Resources Information Center
Kwon, Jeong-Tae; Jhang, Jinho; Kim, Hyung-Su; Lee, Sujin; Han, Jin-Hee
2012-01-01
Memory is thought to be sparsely encoded throughout multiple brain regions forming unique memory trace. Although evidence has established that the amygdala is a key brain site for memory storage and retrieval of auditory conditioned fear memory, it remains elusive whether the auditory brain regions may be involved in fear memory storage or…
The role of emotion in dynamic audiovisual integration of faces and voices
Kotz, Sonja A.; Tavano, Alessandro; Schröger, Erich
2015-01-01
We used human electroencephalogram to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions. For angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific and indicate that dynamic emotional expressions compared with non-emotional expressions are preferentially processed in early audiovisual integration. PMID:25147273
Semantic-based crossmodal processing during visual suppression.
Cox, Dustin; Hong, Sang Wook
2015-01-01
To reveal the mechanisms underpinning the influence of auditory input on visual awareness, we examined (1) whether purely semantic-based multisensory integration facilitates the access to visual awareness for familiar visual events, and (2) whether crossmodal semantic priming is the mechanism responsible for the semantic auditory influence on visual awareness. Using continuous flash suppression, we rendered dynamic and familiar visual events (e.g., a video clip of an approaching train) inaccessible to visual awareness. We manipulated the semantic auditory context of the videos by concurrently pairing them with a semantically matching soundtrack (congruent audiovisual condition), a semantically non-matching soundtrack (incongruent audiovisual condition), or with no soundtrack (neutral video-only condition). We found that participants identified the suppressed visual events significantly faster (an earlier breakup of suppression) in the congruent audiovisual condition compared to the incongruent audiovisual condition and video-only condition. However, this facilitatory influence of semantic auditory input was only observed when audiovisual stimulation co-occurred. Our results suggest that the enhanced visual processing with a semantically congruent auditory input occurs due to audiovisual crossmodal processing rather than semantic priming, which may occur even when visual information is not available to visual awareness.
Baltus, Alina; Vosskuhl, Johannes; Boetzel, Cindy; Herrmann, Christoph Siegfried
2018-05-13
Recent research provides evidence for a functional role of brain oscillations in perception. For example, auditory temporal resolution seems to be linked to the individual gamma frequency of auditory cortex. Individual gamma frequency not only correlates with performance in between-channel gap detection tasks but can be modulated via auditory transcranial alternating current stimulation. Modulation of individual gamma frequency is accompanied by an improvement in gap detection performance. Aging changes electrophysiological frequency components and sensory processing mechanisms. Therefore, we conducted a study to investigate the link between individual gamma frequency and gap detection performance in elderly people using auditory transcranial alternating current stimulation. In a within-subject design, twelve participants were electrically stimulated at two individualized transcranial alternating current stimulation frequencies, 3 Hz above their individual gamma frequency (experimental condition) and 4 Hz below their individual gamma frequency (control condition), while they performed a between-channel gap detection task. As expected, individual gamma frequencies correlated significantly with gap detection performance at baseline, and in the experimental condition transcranial alternating current stimulation modulated gap detection performance. In the control condition, stimulation did not modulate gap detection performance. In addition, in the elderly, the effect of transcranial alternating current stimulation on auditory temporal resolution seems to depend on endogenous frequencies in auditory cortex: elderly adults with slower individual gamma frequencies and lower auditory temporal resolution profit from auditory transcranial alternating current stimulation and show increased gap detection performance during stimulation. Our results strongly suggest individualized transcranial alternating current stimulation protocols for successful modulation of performance.
This article is protected by copyright. All rights reserved.
2016-11-28
of low spontaneous rate auditory nerve fibers (ANFs) and reduction of auditory brainstem response wave-I amplitudes. The goal of this research is...auditory nerve (AN) responses to speech stimuli under a variety of difficult listening conditions. The resulting cochlear neurogram, a spectrogram
Ghai, Shashank; Schmitz, Gerd; Hwang, Tong-Hun; Effenberg, Alfred O.
2018-01-01
The purpose of the study was to assess the influence of real-time auditory feedback on knee proprioception. Thirty healthy participants were randomly allocated to a control group (n = 15) and experimental group I (n = 15). The participants performed an active knee-repositioning task using their dominant leg, with/without additional real-time auditory feedback in which the frequency was mapped in a convergent manner to two different target angles (40 and 75°). Statistical analysis revealed significant enhancement in knee re-positioning accuracy for the constant and absolute error with real-time auditory feedback, within and across the groups. Besides this convergent condition, we established a second, divergent condition. Here, a step-wise transposition of frequency was performed to explore whether a systematic tuning between auditory and proprioceptive repositioning exists. No significant effects were identified in this divergent auditory feedback condition. An additional experimental group II (n = 20) was further included. Here, we investigated the influence of a larger magnitude and directional change of the step-wise transposition of the frequency. In a first step, the results confirm the findings of experiment I. Moreover, significant effects on knee auditory-proprioceptive repositioning were evident when divergent auditory feedback was applied. During the step-wise transposition, participants showed systematic modulation of knee movements in the opposite direction of the transposition. We confirm that knee re-positioning accuracy can be enhanced with concurrent application of real-time auditory feedback and that knee re-positioning can be modulated in a goal-directed manner with step-wise transposition of frequency. Clinical implications are discussed with respect to joint position sense in rehabilitation settings. PMID:29568259
The Influence of Selective and Divided Attention on Audiovisual Integration in Children.
Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong
2016-01-24
This article aims to investigate whether there is a difference in audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) between the selective attention condition and divided attention condition. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that the response to bimodal audiovisual stimuli was faster than to unimodal auditory or visual stimuli under both divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference was found between the unimodal visual and bimodal audiovisual stimuli in response speed. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation differed significantly between divided attention and selective attention. The results indicated that audiovisual integration was stronger in the divided attention condition than in the selective attention condition in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions that are associated with sustained attention deficit, such as attention-deficit hyperactivity disorder. © The Author(s) 2016.
Transplantation of conditionally immortal auditory neuroblasts to the auditory nerve.
Sekiya, Tetsuji; Holley, Matthew C; Kojima, Ken; Matsumoto, Masahiro; Helyer, Richard; Ito, Juichi
2007-04-01
Cell transplantation is a realistic potential therapy for replacement of auditory sensory neurons and could benefit patients with cochlear implants or acoustic neuropathies. The procedure involves many experimental variables, including the nature and conditioning of donor cells, surgical technique and degree of degeneration in the host tissue. It is essential to control these variables in order to develop cell transplantation techniques effectively. We have characterized a conditionally immortal, mouse cell line suitable for transplantation to the auditory nerve. Structural and physiological markers defined the cells as early auditory neuroblasts that lacked neuronal, voltage-gated sodium or calcium currents and had an undifferentiated morphology. When transplanted into the auditory nerves of rats in vivo, the cells migrated peripherally and centrally and aggregated to form coherent, ectopic 'ganglia'. After 7 days they expressed beta 3-tubulin and adopted a similar morphology to native spiral ganglion neurons. They also developed bipolar projections aligned with the host nerves. There was no evidence for uncontrolled proliferation in vivo and cells survived for at least 63 days. If cells were transplanted with the appropriate surgical technique then the auditory brainstem responses were preserved. We have shown that immortal cell lines can potentially be used in the mammalian ear, that it is possible to differentiate significant numbers of cells within the auditory nerve tract and that surgery and cell injection can be achieved with no damage to the cochlea and with minimal degradation of the auditory brainstem response.
Are memory traces localized or distributed?
Thompson, R F
1991-01-01
Evidence supports the view that "memory traces" are formed in the hippocampus and in the cerebellum in classical conditioning of discrete behavioral responses (e.g. eyeblink conditioning). In the hippocampus, learning results in long-lasting increases in excitability of pyramidal neurons that appear to be localized to these neurons (i.e. changes in membrane properties and receptor function). However, these learning-altered pyramidal neurons are distributed widely throughout CA3 and CA1. Although it plays a key role in certain aspects of classical conditioning, the hippocampus is not necessary for learning and memory of the basic conditioned responses. The cerebellum and its associated brain stem circuitry, on the other hand, does appear to be essential (necessary and sufficient) for learning and memory of the conditioned response. Evidence to date is most consistent with a localized trace in the interpositus nucleus and multiple localized traces in cerebellar cortex, each involving relatively large ensembles of neurons. Perhaps "procedural" memory traces are relatively localized and "declarative" traces more widely distributed.
The effects of divided attention on auditory priming.
Mulligan, Neil W; Duke, Marquinn; Cooper, Angela W
2007-09-01
Traditional theorizing stresses the importance of attentional state during encoding for later memory, based primarily on research with explicit memory. Recent research has begun to investigate the role of attention in implicit memory but has focused almost exclusively on priming in the visual modality. The present experiments examined the effect of divided attention on auditory implicit memory, using auditory perceptual identification, word-stem completion and word-fragment completion. Participants heard study words under full attention conditions or while simultaneously carrying out a distractor task (the divided attention condition). In Experiment 1, a distractor task with low response frequency failed to disrupt later auditory priming (but diminished explicit memory as assessed with auditory recognition). In Experiment 2, a distractor task with greater response frequency disrupted priming on all three of the auditory priming tasks as well as the explicit test. These results imply that although auditory priming is less reliant on attention than explicit memory, it is still greatly affected by at least some divided-attention manipulations. These results are consistent with research using visual priming tasks and have relevance for hypotheses regarding attention and auditory priming.
Effect of attentional load on audiovisual speech perception: evidence from ERPs.
Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E; Soto-Faraco, Salvador; Tiippana, Kaisa
2014-01-01
Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech.
Auditory-motor learning influences auditory memory for music.
Brown, Rachel M; Palmer, Caroline
2012-05-01
In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.
The role of auditory and kinaesthetic feedback mechanisms on phonatory stability in children.
Rathna Kumar, S B; Azeem, Suhail; Choudhary, Abhishek Kumar; Prakash, S G R
2013-12-01
Auditory feedback plays an important role in phonatory control. When auditory feedback is disrupted, various changes are observed in vocal motor control: vocal intensity and fundamental frequency (F0) tend to increase in response to auditory masking. Because of the close reflexive links between the auditory and phonatory systems, it is likely that phonatory stability is disrupted when auditory feedback is disrupted or altered. However, studies of phonatory stability under auditory masking in adults have shown that most subjects maintain normal levels of phonatory stability. Earlier investigators suggested that auditory feedback is not the sole contributor to vocal motor control and phonatory stability; a complex neuromuscular reflex system known as kinaesthetic feedback may control phonatory stability when auditory feedback is disrupted or lacking. This raises the question of whether children show similar patterns of phonatory stability under auditory masking, since their neuromotor systems are still developing, less mature, and less resistant to altered auditory feedback than those of adults. A total of 40 children with normal hearing and speech (20 male, 20 female) between 6 and 8 years of age participated as subjects. The acoustic parameters shimmer, jitter, and harmonic-to-noise ratio (HNR) were measured and compared between a no-masking condition (0 dB ML) and a masking condition (90 dB ML). Despite their less mature neuromotor systems, most of the children demonstrated increased phonatory stability, reflected in reduced shimmer and jitter and increased HNR values.
These findings suggest that most children have well-established kinaesthetic feedback, which may have allowed them to maintain normal levels of vocal motor control even in the presence of disturbed auditory feedback. It can therefore be concluded that children also use a kinaesthetic feedback mechanism to control phonatory stability when auditory feedback is disrupted, which highlights the importance of including kinaesthetic feedback in therapeutic/intervention approaches for children with hearing and neurogenic speech deficits.
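The acoustic measures used in this record have standard period-to-period definitions. As a minimal sketch, assuming glottal periods and peak amplitudes have already been extracted from the recording (the function names are illustrative, not from the study):

```python
def local_jitter(periods):
    # Local jitter: mean absolute difference between consecutive
    # glottal periods, divided by the mean period.
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def local_shimmer(amplitudes):
    # Local shimmer: the same ratio, computed over cycle peak amplitudes.
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))
```

Lower jitter and shimmer values indicate greater phonatory stability, which is why the study interprets reduced values under masking as increased stability.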
Robson, Holly; Cloutman, Lauren; Keidel, James L; Sage, Karen; Drakesmith, Mark; Welbourne, Stephen
2014-10-01
Auditory discrimination is significantly impaired in Wernicke's aphasia (WA) and thought to be causatively related to the language comprehension impairment which characterises the condition. This study used mismatch negativity (MMN) to investigate the neural responses corresponding to successful and impaired auditory discrimination in WA. Behavioural auditory discrimination thresholds of consonant-vowel-consonant (CVC) syllables and pure tones (PTs) were measured in WA (n = 7) and control (n = 7) participants. Threshold results were used to develop multiple deviant MMN oddball paradigms containing deviants which were either perceptibly or non-perceptibly different from the standard stimuli. MMN analysis investigated differences associated with group, condition and perceptibility as well as the relationship between MMN responses and comprehension (within which behavioural auditory discrimination profiles were examined). MMN waveforms were observable to both perceptible and non-perceptible auditory changes. Perceptibility was only distinguished by MMN amplitude in the PT condition. The WA group could be distinguished from controls by an increase in MMN response latency to CVC stimuli change. Correlation analyses displayed a relationship between behavioural CVC discrimination and MMN amplitude in the control group, where greater amplitude corresponded to better discrimination. The WA group displayed the inverse effect; both discrimination accuracy and auditory comprehension scores were reduced with increased MMN amplitude. In the WA group, a further correlation was observed between the lateralisation of MMN response and CVC discrimination accuracy; the greater the bilateral involvement the better the discrimination accuracy. 
The results from this study provide further evidence for the nature of auditory comprehension impairment in WA and indicate that the auditory discrimination deficit is grounded in a reduced ability to engage in efficient hierarchical processing and the construction of invariant auditory objects. Correlation results suggest that people with chronic WA may rely on an inefficient, noisy right hemisphere auditory stream when attempting to process speech stimuli.
Age-Related Deficits in Auditory Confrontation Naming
Hanna-Pladdy, Brenda; Choi, Hyun
2015-01-01
The naming of manipulable objects by older and younger adults was evaluated across auditory, visual, and multisensory conditions. Older adults were less accurate and slower in naming across conditions, and all subjects were less accurate and slower in naming action sounds than pictures or audiovisual combinations. Moreover, a sensory-by-age-group interaction revealed lower accuracy and longer latencies in auditory naming for older adults, unrelated to hearing loss, with modest improvement from multisensory cues. These findings support age-related deficits in object action naming and suggest that auditory confrontation naming may be more sensitive to age-related decline than visual naming. PMID:20677880
ERIC Educational Resources Information Center
Ota, Kristie T.; Monsey, Melissa S.; Wu, Melissa S.; Young, Grace J.; Schafe, Glenn E.
2010-01-01
We have recently hypothesized that NO-cGMP-PKG signaling in the lateral nucleus of the amygdala (LA) during auditory fear conditioning coordinately regulates ERK-driven transcriptional changes in both auditory thalamic (MGm/PIN) and LA neurons that serve to promote pre- and postsynaptic alterations at thalamo-LA synapses, respectively. In the…
Colin, C; Radeau, M; Soquet, A; Demolin, D; Colin, F; Deltenre, P
2002-04-01
The McGurk-MacDonald illusory percept is obtained by dubbing an incongruent articulatory movement onto an auditory phoneme. This type of audiovisual speech perception contributes to the assessment of theories of speech perception. The mismatch negativity (MMN) reflects the detection of a deviant stimulus within auditory short-term memory and, besides an acoustic component, possesses under certain conditions a phonetic one. The present study assessed whether an MMN is evoked by McGurk-MacDonald percepts elicited by audiovisual stimuli with constant auditory components. Cortical evoked potentials were recorded using the oddball paradigm in 8 adults under 3 experimental conditions: auditory alone, visual alone, and audiovisual stimulation. The occurrence of illusory percepts was confirmed in an additional psychophysical condition. The auditory deviant syllables and the audiovisual incongruent syllables elicited a significant MMN at Fz. In the visual condition, no negativity was observed at either Fz or Oz. Thus, an MMN can be evoked by visual articulatory deviants, provided they are presented in a suitable auditory context leading to a phonetically significant interaction. The recording of an MMN elicited by illusory McGurk percepts suggests that audiovisual integration mechanisms in speech take place rather early in perceptual processing.
The effects of speech output technology in the learning of graphic symbols.
Schlosser, R W; Belfiore, P J; Nigam, R; Blischak, D; Hetzroni, O
1995-01-01
The effects of auditory stimuli in the form of synthetic speech output on the learning of graphic symbols were evaluated. Three adults with severe to profound mental retardation and communication impairments were taught to point to lexigrams when presented with words under two conditions. In the first condition, participants used a voice output communication aid and received synthetic speech as antecedent and consequent stimuli. In the second condition, with a nonelectronic communication board, participants did not receive synthetic speech. A parallel treatments design was used to evaluate the effects of synthetic speech output as an added component of the augmentative and alternative communication system. All 3 participants reached criterion when provided with the auditory stimuli. Although 2 participants also reached criterion when not provided with the auditory stimuli, the addition of auditory stimuli resulted in more efficient learning and a decreased error rate. Maintenance results, however, indicated no differences between conditions. Findings suggest that auditory stimuli in the form of synthetic speech contribute to the efficient acquisition of graphic communication symbols. PMID:14743828
Genetics Home Reference: autosomal dominant partial epilepsy with auditory features
Autosomal dominant partial epilepsy with auditory features (ADPEAF) is an uncommon form …
Wright, Rachel L.; Spurgeon, Laura C.; Elliott, Mark T.
2014-01-01
Humans can synchronize movements with auditory beats or rhythms without apparent effort. This ability to entrain to the beat is considered automatic, such that any perturbations are corrected for, even if the perturbation was not consciously noted. Temporal correction of upper limb (e.g., finger tapping) and lower limb (e.g., stepping) movements to a phase perturbed auditory beat usually results in individuals being back in phase after just a few beats. When a metronome is presented in more than one sensory modality, a multisensory advantage is observed, with reduced temporal variability in finger tapping movements compared to unimodal conditions. Here, we investigate synchronization of lower limb movements (stepping in place) to auditory, visual and combined auditory-visual (AV) metronome cues. In addition, we compare movement corrections to phase advance and phase delay perturbations in the metronome for the three sensory modality conditions. We hypothesized that, as with upper limb movements, there would be a multisensory advantage, with stepping variability being lowest in the bimodal condition. As such, we further expected correction to the phase perturbation to be quickest in the bimodal condition. Our results revealed lower variability in the asynchronies between foot strikes and the metronome beats in the bimodal condition, compared to unimodal conditions. However, while participants corrected substantially quicker to perturbations in auditory compared to visual metronomes, there was no multisensory advantage in the phase correction task—correction under the bimodal condition was almost identical to the auditory-only (AO) condition. On the whole, we noted that corrections in the stepping task were smaller than those previously reported for finger tapping studies. We conclude that temporal corrections are not only affected by the reliability of the sensory information, but also the complexity of the movement itself. PMID:25309397
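The abstract does not specify a correction model, but tap-by-tap recovery from a phase perturbation is commonly described with a first-order linear phase-correction model, in which a fixed fraction of each asynchrony is corrected on the next beat. A minimal sketch under that assumption (names and values illustrative, not from the study):

```python
def simulate_correction(alpha, perturbation_ms, n_taps):
    # First-order linear phase correction: on each tap, a fraction
    # alpha of the current asynchrony is corrected, so the asynchrony
    # decays geometrically back toward zero after a perturbation.
    asynchrony = perturbation_ms
    series = []
    for _ in range(n_taps):
        series.append(asynchrony)
        asynchrony = (1 - alpha) * asynchrony
    return series
```

Under this model, faster correction (as observed for auditory metronomes) corresponds to a larger alpha, and the finding that bimodal correction matched auditory-only correction would correspond to equal alpha values in the two conditions.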
The role of emotion in dynamic audiovisual integration of faces and voices.
Kokinous, Jenny; Kotz, Sonja A; Tavano, Alessandro; Schröger, Erich
2015-05-01
We used human electroencephalogram to study early audiovisual integration of dynamic angry and neutral expressions. An auditory-only condition served as a baseline for the interpretation of integration effects. In the audiovisual conditions, the validity of visual information was manipulated using facial expressions that were either emotionally congruent or incongruent with the vocal expressions. First, we report an N1 suppression effect for angry compared with neutral vocalizations in the auditory-only condition. Second, we confirm early integration of congruent visual and auditory information as indexed by a suppression of the auditory N1 and P2 components in the audiovisual compared with the auditory-only condition. Third, audiovisual N1 suppression was modulated by audiovisual congruency in interaction with emotion: for neutral vocalizations, there was N1 suppression in both the congruent and the incongruent audiovisual conditions. For angry vocalizations, there was N1 suppression only in the congruent but not in the incongruent condition. Extending previous findings of dynamic audiovisual integration, the current results suggest that audiovisual N1 suppression is congruency- and emotion-specific and indicate that dynamic emotional expressions compared with non-emotional expressions are preferentially processed in early audiovisual integration. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Effect of Three Classroom Listening Conditions on Speech Intelligibility
ERIC Educational Resources Information Center
Ross, Mark; Giolas, Thomas G.
1971-01-01
Speech discrimination scores for 13 deaf children were obtained in a classroom under: usual listening condition (hearing aid or not), binaural listening situation using auditory trainer/FM receiver with wireless microphone transmitter turned off, and binaural condition with inputs from auditory trainer/FM receiver and wireless microphone/FM…
The effect of aborting ongoing movements on end point position estimation.
Itaguchi, Yoshihiro; Fukuzawa, Kazuyoshi
2013-11-01
The present study investigated the impact of motor commands to abort ongoing movement on position estimation. Participants carried out visually guided reaching movements on a horizontal plane with their eyes open. By setting a mirror above their arm, however, they could not see the arm, only the start and target points. They estimated the position of their fingertip based solely on proprioception after their reaching movement was stopped before reaching the target. The participants stopped reaching as soon as they heard an auditory cue or were mechanically prevented from moving any further by an obstacle in their path. These reaching movements were carried out at two different speeds (fast or slow). It was assumed that additional motor commands to abort ongoing movement were required and that their magnitude was high, low, and zero, in the auditory-fast condition, the auditory-slow condition, and both the obstacle conditions, respectively. There were two main results. (1) When the participants voluntarily stopped a fast movement in response to the auditory cue (the auditory-fast condition), they showed more underestimates than in the other three conditions. This underestimate effect was positively related to movement velocity. (2) An inverted-U-shaped bias pattern as a function of movement distance was observed consistently, except in the auditory-fast condition. These findings indicate that voluntarily stopping fast ongoing movement created a negative bias in the position estimate, supporting the idea that additional motor commands or efforts to abort planned movement are involved with the position estimation system. In addition, spatially probabilistic inference and signal-dependent noise may explain the underestimate effect of aborting ongoing movement.
Speech Auditory Alerts Promote Memory for Alerted Events in a Video-Simulated Self-Driving Car Ride.
Nees, Michael A; Helbein, Benji; Porter, Anna
2016-05-01
Auditory displays could be essential to helping drivers maintain situation awareness in autonomous vehicles, but to date, few or no studies have examined the effectiveness of different types of auditory displays for this application scenario. Recent advances in the development of autonomous vehicles (i.e., self-driving cars) have suggested that widespread automation of driving may be tenable in the near future. Drivers may be required to monitor the status of automation programs and vehicle conditions as they engage in secondary leisure or work tasks (entertainment, communication, etc.) in autonomous vehicles. An experiment compared memory for alerted events, a component of Level 1 situation awareness, using speech alerts, auditory icons, and a visual control condition during a video-simulated self-driving car ride with a visual secondary task. The alerts gave information about the vehicle's operating status and the driving scenario. Speech alerts resulted in better memory for alerted events. Both auditory display types resulted in less perceived effort devoted toward the study tasks but also greater perceived annoyance with the alerts. Speech auditory displays promoted Level 1 situation awareness during a simulation of a ride in a self-driving vehicle under routine conditions, but annoyance remains a concern with auditory displays. Speech auditory displays showed promise as a means of increasing Level 1 situation awareness of routine scenarios during an autonomous vehicle ride with an unrelated secondary task. © 2016, Human Factors and Ergonomics Society.
Neural plasticity and its initiating conditions in tinnitus.
Roberts, L E
2018-03-01
Deafferentation caused by cochlear pathology (which can be hidden from the audiogram) activates forms of neural plasticity in auditory pathways, generating tinnitus and its associated conditions including hyperacusis. This article discusses tinnitus mechanisms and suggests how these mechanisms may relate to those involved in normal auditory information processing. Research findings from animal models of tinnitus and from electromagnetic imaging of tinnitus patients are reviewed which pertain to the role of deafferentation and neural plasticity in tinnitus and hyperacusis. Auditory neurons compensate for deafferentation by increasing their input/output functions (gain) at multiple levels of the auditory system. Forms of homeostatic plasticity are believed to be responsible for this neural change, which increases the spontaneous and driven activity of neurons in central auditory structures in animals expressing behavioral evidence of tinnitus. Another tinnitus correlate, increased neural synchrony among the affected neurons, is forged by spike-timing-dependent neural plasticity in auditory pathways. Slow oscillations generated by bursting thalamic neurons verified in tinnitus animals appear to modulate neural plasticity in the cortex, integrating tinnitus neural activity with information in brain regions supporting memory, emotion, and consciousness which exhibit increased metabolic activity in tinnitus patients. The latter process may be induced by transient auditory events in normal processing but it persists in tinnitus, driven by phantom signals from the auditory pathway. Several tinnitus therapies attempt to suppress tinnitus through plasticity, but repeated sessions will likely be needed to prevent tinnitus activity from returning owing to deafferentation as its initiating condition.
Effect of attentional load on audiovisual speech perception: evidence from ERPs
Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E.; Soto-Faraco, Salvador; Tiippana, Kaisa
2014-01-01
Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech. PMID:25076922
Modality-dependent effect of motion information in sensory-motor synchronised tapping.
Ono, Kentaro
2018-05-14
Synchronised action is important for everyday life. Generally, the auditory domain is more sensitive for coding temporal information, and previous studies have shown that auditory-motor synchronisation is much more precise than visuo-motor synchronisation. Interestingly, adding motion information improves synchronisation with visual stimuli, and the advantage of the auditory modality seems to diminish. However, whether adding motion information also improves auditory-motor synchronisation remains unknown. This study compared tapping accuracy with a stationary or moving stimulus in both auditory and visual modalities. Participants were instructed to tap in synchrony with the onset of a sound or flash in the stationary condition, while these stimuli were perceived as moving from side to side in the motion condition. The results demonstrated that synchronised tapping with a moving visual stimulus was significantly more accurate than tapping with a stationary visual stimulus, as previous studies have shown. However, tapping with a moving auditory stimulus was significantly poorer than tapping with a stationary auditory stimulus. Although motion information impaired audio-motor synchronisation, an advantage of the auditory modality over the visual modality still existed. These findings are likely the result of higher temporal resolution in the auditory domain, possibly due to the physiological and structural differences between the auditory and visual pathways in the brain. Copyright © 2018 Elsevier B.V. All rights reserved.
Iliadou, Vasiliki Vivian; Chermak, Gail D; Bamiou, Doris-Eva
2015-04-01
According to the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, diagnosis of speech sound disorder (SSD) requires a determination that it is not the result of other congenital or acquired conditions, including hearing loss or neurological conditions that may present with similar symptomatology. To examine peripheral and central auditory function for the purpose of determining whether a peripheral or central auditory disorder was an underlying factor or contributed to the child's SSD. Central auditory processing disorder clinic pediatric case reports. Three clinical cases are reviewed of children with diagnosed SSD who were referred for audiological evaluation by their speech-language pathologists as a result of slower than expected progress in therapy. Audiological testing revealed auditory deficits involving peripheral auditory function or the central auditory nervous system. These cases demonstrate the importance of increasing awareness among professionals of the need to fully evaluate the auditory system to identify auditory deficits that could contribute to a patient's speech sound (phonological) disorder. Audiological assessment in cases of suspected SSD should not be limited to pure-tone audiometry given its limitations in revealing the full range of peripheral and central auditory deficits, deficits which can compromise treatment of SSD. American Academy of Audiology.
Pilkiw, Maryna; Insel, Nathan; Cui, Younghua; Finney, Caitlin; Morrissey, Mark D; Takehara-Nishiuchi, Kaori
2017-01-01
The lateral entorhinal cortex (LEC) is thought to bind sensory events with the environment where they took place. To compare the relative influence of transient events and temporally stable environmental stimuli on the firing of LEC cells, we recorded neuron spiking patterns in the region during blocks of a trace eyeblink conditioning paradigm performed in two environments and with different conditioning stimuli. Firing rates of some neurons were phasically selective for conditioned stimuli in a way that depended on which room the rat was in; nearly all neurons were tonically selective for environments in a way that depended on which stimuli had been presented in those environments. As rats moved from one environment to another, tonic neuron ensemble activity exhibited prospective information about the conditioned stimulus associated with the environment. Thus, the LEC formed phasic and tonic codes for event-environment associations, thereby accurately differentiating multiple experiences with overlapping features. DOI: http://dx.doi.org/10.7554/eLife.28611.001 PMID:28682237
The Auditory-Visual Speech Benefit on Working Memory in Older Adults with Hearing Impairment
Frtusova, Jana B.; Phillips, Natalie A.
2016-01-01
This study examined the effect of auditory-visual (AV) speech stimuli on working memory in older adults with poorer-hearing (PH) in comparison to age- and education-matched older adults with better hearing (BH). Participants completed a working memory n-back task (0- to 2-back) in which sequences of digits were presented in visual-only (i.e., speech-reading), auditory-only (A-only), and AV conditions. Auditory event-related potentials (ERP) were collected to assess the relationship between perceptual and working memory processing. The behavioral results showed that both groups were faster in the AV condition in comparison to the unisensory conditions. The ERP data showed perceptual facilitation in the AV condition, in the form of reduced amplitudes and latencies of the auditory N1 and/or P1 components, in the PH group. Furthermore, a working memory ERP component, the P3, peaked earlier for both groups in the AV condition compared to the A-only condition. In general, the PH group showed a more robust AV benefit; however, the BH group showed a dose-response relationship between perceptual facilitation and working memory improvement, especially for facilitation of processing speed. Two measures, reaction time and P3 amplitude, suggested that the presence of visual speech cues may have helped the PH group to counteract the demanding auditory processing, to the level that no group differences were evident during the AV modality despite lower performance during the A-only condition. Overall, this study provides support for the theory of an integrated perceptual-cognitive system. The practical significance of these findings is also discussed. PMID:27148106
Facilitation of listening comprehension by visual information under noisy listening condition
NASA Astrophysics Data System (ADS)
Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi
2009-02-01
Comprehension of sentences under a wide range of delays between auditory and visual stimuli was measured in an environment with low auditory clarity (-10 dB and -15 dB pink noise). Results showed that the image helped comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (132 msec) or less, that the image did not help comprehension when the delay was 8 frames (264 msec) or more, and that in some cases at the largest delay (32 frames) the video image interfered with comprehension.
Further evidence of auditory extinction in aphasia.
Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim
2013-02-01
Preliminary research (Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Seventeen IWA (M(age) = 53.19 years) and 17 neurologically intact controls (M(age) = 55.18 years) participated. Auditory stimuli were spoken letters presented in a free-field listening environment. Stimuli were presented in single-stimulus stimulation (SSS) or double-simultaneous stimulation (DSS) trials across 5 conditions designed to determine whether extinction is related to binding, inefficient attention resource allocation, or overall deficits in attention. All participants completed all experimental conditions. Significant extinction was demonstrated only by IWA when sounds were different, providing further evidence of auditory extinction. However, binding requirements did not appear to influence the IWA's performance. Results indicate that, for IWA, auditory extinction may not be attributed to a binding deficit or inefficient attention resource allocation because of equivalent performance across all 5 conditions. Rather, overall attentional resources may be influential. Future research in aphasia should explore the effect of the stimulus presentation in addition to the continued study of attention treatment.
Kagerer, Florian A.; Viswanathan, Priya; Contreras-Vidal, Jose L.; Whitall, Jill
2014-04-01
Unilateral tapping studies have shown that adults adjust to both perceptible and subliminal changes in phase or frequency. This study focuses on the phase responses to abrupt/perceptible and gradual/subliminal changes in auditory-motor relations during alternating bilateral tapping. We investigated these responses in participants with and without good perceptual acuity as determined by an auditory threshold test. Non-musician adults (nine per group) alternately tapped their index fingers in synchrony with auditory cues set at a frequency of 1.4 Hz. Both groups modulated their responses (with no after-effects) to perceptible and to subliminal changes as low as a 5° change in phase. The high-threshold participants were more variable than the adults with low threshold in their responses in the gradual condition set (p = 0.05). Both groups demonstrated a synchronization asymmetry between dominant and non-dominant hands associated with the abrupt condition and the later blocks of the gradual condition. Our findings extend previous work in unilateral tapping and suggest (1) no relationship between a discrimination threshold and perceptible auditory-motor integration and (2) a noisier subcortical circuitry in those with higher thresholds. PMID:24449013
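As a quick sanity check on the magnitudes in the tapping study above, the sketch below converts the reported 5° phase change at the 1.4 Hz cue frequency into an equivalent timing shift in milliseconds. The numbers (1.4 Hz, 5°) come from the abstract; the function name is our own illustration.

```python
def phase_to_ms(phase_deg, freq_hz):
    """Timing offset (ms) corresponding to a phase change at a given cue frequency."""
    period_ms = 1000.0 / freq_hz          # duration of one cycle in milliseconds
    return (phase_deg / 360.0) * period_ms

# A 5-degree shift at 1.4 Hz corresponds to roughly a 10 ms offset,
# consistent with its description as a subliminal change.
offset = phase_to_ms(5.0, 1.4)
print(round(offset, 1))  # 9.9
```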
2013-07-02
Fragment (figure-caption residue): schematic model of the neural circuitry of Pavlovian auditory fear conditioning, showing how an auditory conditioned stimulus and a nociceptive unconditioned foot-shock stimulus converge in the lateral amygdala (LA) via the auditory thalamus and cortex and somatosensory…
Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun
2015-08-01
Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in the auditory feedback from ongoing vocalizations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Startle Auditory Stimuli Enhance the Performance of Fast Dynamic Contractions
Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M.
2014-01-01
Fast reaction times and the ability to develop a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, movement onset, movement duration and electromyography from pectoralis and triceps muscles were recorded. The SS condition induced an increase in the RFD and peak velocity and a reduction in the movement onset and duration, in comparison with the VS and AS conditions. The onset of activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training. PMID:24489967
Okamoto, Hidehiko; Stracke, Henning; Lagemann, Lothar; Pantev, Christo
2010-01-01
The capability of involuntarily tracking certain sound signals during the simultaneous presence of noise is essential in human daily life. Previous studies have demonstrated that top-down auditory focused attention can enhance excitatory and inhibitory neural activity, resulting in sharpening of the frequency tuning of auditory neurons. In the present study, we investigated bottom-up driven involuntary neural processing of sound signals in noisy environments by means of magnetoencephalography. We contrasted two sound signal sequencing conditions: "constant sequencing" versus "random sequencing." Based on a pool of 16 different frequencies, either identical (constant sequencing) or pseudorandomly chosen (random sequencing) test frequencies were presented blockwise together with band-eliminated noises to nonattending subjects. The results demonstrated that the auditory evoked fields elicited in the constant sequencing condition were significantly enhanced compared with the random sequencing condition. However, the enhancement did not differ significantly between the band-eliminated noise conditions. Thus the present study confirms that constant sound-signal sequencing during nonattentive listening can enhance, but not sharpen, neural activity in human auditory cortex. Our results indicate that bottom-up driven involuntary neural processing may mainly amplify excitatory neural networks, but may not effectively enhance inhibitory neural circuits.
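The "band-eliminated noise" named in the study above is broadband noise with one frequency band notched out. A minimal sketch of one common way to construct such a stimulus, by zeroing a band in the FFT of white noise; the sample rate, band edges, and duration here are illustrative choices, not values from the paper:

```python
import numpy as np

def band_eliminated_noise(fs=16000, dur=1.0, stop_lo=700.0, stop_hi=900.0, seed=0):
    """White noise with the [stop_lo, stop_hi] Hz band removed via FFT filtering."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(int(fs * dur))
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(noise.size, d=1.0 / fs)
    spec[(freqs >= stop_lo) & (freqs <= stop_hi)] = 0.0   # eliminate the band
    return np.fft.irfft(spec, n=noise.size)

sig = band_eliminated_noise()
# Verify the notch: spectral energy inside the stopband is ~0 after filtering.
mag = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(sig.size, d=1.0 / 16000)
print(mag[(freqs >= 700) & (freqs <= 900)].max() < 1e-8)  # True
```

In practice a stimulus like this would also be tapered at the band edges to avoid ringing; the hard-edged notch keeps the sketch short.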
Duval, Elizabeth R; Lovelace, Christopher T; Aarant, Justin; Filion, Diane L
2013-12-01
The purpose of this study was to investigate the effects of both facial expression and face gender on startle eyeblink response patterns at varying lead intervals (300, 800, and 3500 ms) indicative of attentional and emotional processes. We aimed to determine whether responses to affective faces map onto the Defense Cascade Model (Lang et al., 1997) to better understand the stages of processing during affective face viewing. At 300 ms, there was an interaction between face expression and face gender, with female happy and neutral faces and male angry faces producing inhibited startle. At 3500 ms, there was a trend for facilitated startle during angry compared to neutral faces. These findings suggest that affective expressions are perceived differently in male and female faces, especially at short lead intervals. Future studies investigating face processing should take both face gender and expression into account. © 2013.
Selective impairment of auditory selective attention under concurrent cognitive load.
Dittrich, Kerstin; Stahl, Christoph
2012-06-01
Load theory predicts that concurrent cognitive load impairs selective attention. For visual stimuli, it has been shown that this impairment can be selective: Distraction was specifically increased when the stimulus material used in the cognitive load task matches that of the selective attention task. Here, we report four experiments that demonstrate such selective load effects for auditory selective attention. The effect of two different cognitive load tasks on two different auditory Stroop tasks was examined, and selective load effects were observed: Interference in a nonverbal-auditory Stroop task was increased under concurrent nonverbal-auditory cognitive load (compared with a no-load condition), but not under concurrent verbal-auditory cognitive load. By contrast, interference in a verbal-auditory Stroop task was increased under concurrent verbal-auditory cognitive load but not under nonverbal-auditory cognitive load. This double-dissociation pattern suggests the existence of different and separable verbal and nonverbal processing resources in the auditory domain.
Corley, Michael J; Caruso, Michael J; Takahashi, Lorey K
2012-01-18
Posttraumatic stress disorder (PTSD) is characterized by stress-induced symptoms including exaggerated fear memories, hypervigilance and hyperarousal. However, we are unaware of an animal model that investigates these hallmarks of PTSD especially in relation to fear extinction and habituation. Therefore, to develop a valid animal model of PTSD, we exposed rats to different intensities of footshock stress to determine their effects on either auditory predator odor fear extinction or habituation of fear sensitization. In Experiment 1, rats were exposed to acute footshock stress (no shock control, 0.4 mA, or 0.8 mA) immediately prior to auditory fear conditioning training involving the pairing of auditory clicks with a cloth containing cat odor. When presented with the conditioned auditory clicks in the next 5 days of extinction testing conducted in a runway apparatus with a hide box, rats in the two shock groups engaged in higher levels of freezing and head-out vigilance-like behavior from the hide box than the no shock control group. This increase in fear behavior during extinction testing was likely due to auditory activation of the conditioned fear state because Experiment 2 demonstrated that conditioned fear behavior was not broadly increased in the absence of the conditioned auditory stimulus. Experiment 3 was then conducted to determine whether acute exposure to stress induces a habituation-resistant sensitized fear state. We found that rats exposed to 0.8 mA footshock stress and subsequently tested for 5 days in the runway hide box apparatus with presentations of nonassociative auditory clicks exhibited high initial levels of freezing, followed by head-out behavior and culminating in the occurrence of locomotor hyperactivity.
In addition, Experiment 4 indicated that without delivery of nonassociative auditory clicks, 0.8 mA footshock stressed rats did not exhibit robust increases in sensitized freezing and locomotor hyperactivity, although head-out vigilance-like behavior continued to be observed. In summary, our animal model provides novel information on the effects of different intensities of footshock stress, auditory-predator odor fear conditioning, and their interactions on facilitating either extinction-resistant or habituation-resistant fear-related behavior. These results lay the foundation for exciting new investigations of the hallmarks of PTSD that include the stress-induced formation and persistence of traumatic memories and sensitized fear. Copyright © 2011 Elsevier Inc. All rights reserved.
Evans, Julia L.; Pollak, Seth D.
2011-01-01
This study examined the electrophysiological correlates of auditory and visual working memory in children with Specific Language Impairments (SLI). Children with SLI and age-matched controls (11;9 – 14;10) completed visual and auditory working memory tasks while event-related potentials (ERPs) were recorded. In the auditory condition, children with SLI performed similarly to controls when the memory load was kept low (1-back memory load). As expected, when demands for auditory working memory were higher, children with SLI showed decreases in accuracy and attenuated P3b responses. However, children with SLI also evinced difficulties in the visual working memory tasks. In both the low (1-back) and high (2-back) memory load conditions, P3b amplitude was significantly lower for the SLI group as compared to the chronological-age-matched (CA) group. These data suggest a domain-general working memory deficit in SLI that is manifested across auditory and visual modalities. PMID:21316354
Kwon, Jeong-Tae; Jhang, Jinho; Kim, Hyung-Su; Lee, Sujin; Han, Jin-Hee
2012-09-19
Memory is thought to be sparsely encoded throughout multiple brain regions forming a unique memory trace. Although evidence has established that the amygdala is a key brain site for memory storage and retrieval of auditory conditioned fear memory, it remains elusive whether the auditory brain regions may be involved in fear memory storage or retrieval. To investigate this possibility, we systematically imaged the brain activity patterns in the lateral amygdala, MGm/PIN, and AuV/TeA using activity-dependent induction of the immediate early gene zif268 after recent and remote memory retrieval of auditory conditioned fear. Consistent with the critical role of the amygdala in fear memory, the zif268 activity in the lateral amygdala was significantly increased after both recent and remote memory retrieval. Interestingly, however, the density of zif268 (+) neurons in both MGm/PIN and AuV/TeA, particularly in layers IV and VI, was increased only after remote but not recent fear memory retrieval compared to control groups. Further analysis of zif268 signals in AuV/TeA revealed that the conditioned tone induced stronger zif268 induction than a familiar tone in each individual zif268 (+) neuron after recent memory retrieval. Taken together, our results support that the lateral amygdala is a key brain site for permanent fear memory storage and suggest that MGm/PIN and AuV/TeA might play a role in remote memory storage or retrieval of auditory conditioned fear, or, alternatively, that these auditory brain regions might process familiar and conditioned tone information differently at recent and remote time phases.
Retrosplenial cortex is required for the retrieval of remote memory for auditory cues.
Todd, Travis P; Mehlman, Max L; Keene, Christopher S; DeAngeli, Nicole E; Bucci, David J
2016-06-01
The retrosplenial cortex (RSC) has a well-established role in contextual and spatial learning and memory, consistent with its known connectivity with visuo-spatial association areas. In contrast, RSC appears to have little involvement with delay fear conditioning to an auditory cue. However, all previous studies have examined the contribution of the RSC to recently acquired auditory fear memories. Since neocortical regions have been implicated in the permanent storage of remote memories, we examined the contribution of the RSC to remotely acquired auditory fear memories. In Experiment 1, retrieval of a remotely acquired auditory fear memory was impaired when permanent lesions (either electrolytic or neurotoxic) were made several weeks after initial conditioning. In Experiment 2, using a chemogenetic approach, we observed impairments in the retrieval of remote memory for an auditory cue when the RSC was temporarily inactivated during testing. In Experiment 3, after injection of a retrograde tracer into the RSC, we observed labeled cells in primary and secondary auditory cortices, as well as the claustrum, indicating that the RSC receives direct projections from auditory regions. Overall our results indicate the RSC has a critical role in the retrieval of remotely acquired auditory fear memories, and we suggest this is related to the quality of the memory, with less precise memories being RSC dependent. © 2016 Todd et al.; Published by Cold Spring Harbor Laboratory Press.
Effects of Meditation Practice on Spontaneous Eye Blink Rate
Kruis, Ayla; Slagter, Heleen A.; Bachhuber, David R.W.; Davidson, Richard J.; Lutz, Antoine
2016-01-01
A rapidly growing body of research suggests that meditation can change brain and cognitive functioning. Yet little is known about the neurochemical mechanisms underlying meditation-related changes in cognition. Here we investigated the effects of meditation on spontaneous Eye Blink Rates (sEBR), a non-invasive peripheral correlate of striatal dopamine activity. Previous studies have shown a relationship between sEBR and cognitive functions such as mind-wandering, cognitive flexibility, and attention, functions that are also affected by meditation. We therefore expected that long-term meditation practice would alter eye-blink activity. To test this, we recorded baseline sEBR and Inter Eye-Blink Intervals (IEBI) in long-term meditators (LTM) and meditation-naive participants (MNP). We found that LTM not only blinked less frequently, but also showed a different eye-blink pattern than MNP. This pattern showed a good to high degree of consistency across three time points. Moreover, we examined the effects of an 8-week course of Mindfulness Based Stress Reduction (MBSR) on sEBR and IEBI, compared to an active control group and a waitlist-control group. No effect of short-term meditation practice was found. Finally, we investigated whether different types of meditation differentially alter eye-blink activity by measuring sEBR and IEBI after a full day of two kinds of meditation practices in the LTM. No effect of meditation type was found. Taken together, these findings may suggest either that individual difference in dopaminergic neurotransmission is a self-selection factor for meditation practice, or that long-term, but not short-term, meditation practice induces stable changes in baseline striatal dopaminergic functioning. PMID:26871460
Lafo, Jacob A; Mikos, Ania; Mangal, Paul C; Scott, Bonnie M; Trifilio, Erin; Okun, Michael S; Bowers, Dawn
2017-01-01
Essential tremor is a highly prevalent movement disorder characterized by kinetic tremor and mild cognitive-executive changes. These features are commonly attributed to abnormal cerebellar changes, resulting in disruption of cerebellar-thalamo-cortical networks. Less attention has been paid to alterations in basic emotion processing in essential tremor, despite known cerebellar-limbic interconnectivity. In the current study, we tested the hypothesis that a psychophysiologic index of emotional reactivity, the emotion modulated startle reflex, would be muted in individuals with essential tremor relative to controls. Participants included 19 essential tremor patients and 18 controls, who viewed standard sets of unpleasant, pleasant, and neutral pictures for six seconds each. During picture viewing, white noise bursts were binaurally presented to elicit startle eyeblinks measured over the orbicularis oculi. Consistent with past literature, controls' startle eyeblink responses were modulated according to picture valence (unpleasant > neutral > pleasant). In essential tremor participants, startle eyeblinks were not modulated by emotion. This modulation failure was not due to medication effects, nor was it due to abnormal appraisal of emotional picture content. Neuroanatomically, it remains unclear whether diminished startle modulation in essential tremor is secondary to aberrant cerebellar input to the amygdala, which is involved in priming the startle response in emotional contexts, or due to more direct disruption between the cerebellum and brainstem startle circuitry. If the former is correct, these findings may be the first to reveal dysregulation of emotional networks in essential tremor. Copyright © 2016 Elsevier Ltd. All rights reserved.
Threatening social context facilitates pain-related fear learning.
Karos, Kai; Meulders, Ann; Vlaeyen, Johan W S
2015-03-01
This study investigated the effects of a threatening and a safe social context on learning pain-related fear, a key factor in the development and maintenance of chronic pain. We measured self-reported pain intensity, pain expectancy, pain-related fear (verbal ratings and eyeblink startle responses), and behavioral measures of avoidance (movement-onset latency and duration) using an established differential voluntary movement fear conditioning paradigm. Participants (N = 42) performed different movements with a joystick: during fear acquisition, movement in one direction (CS+) was followed by a painful stimulus (pain-US) whereas movement in another direction (CS-) was not. For participants in the threat group, an angry face was continuously presented in the background during the task, whereas in the safe group, a happy face was presented. During the extinction phase the pain-US was omitted. As compared to the safe social context, a threatening social context led to increased contextual fear and facilitated differentiation between CS+ and CS- movements regarding self-reported pain expectancy, fear of pain, eyeblink startle responses, and movement-onset latency. In contrast, self-reported pain intensity was not affected by social context. These data support the modulation of pain-related fear by social context. A threatening social context leads to stronger acquisition of (pain-related) fear and simultaneous contextual fear but does not affect pain intensity ratings. This knowledge may aid in the prevention of chronic pain and anxiety disorders and shows that social context might modulate pain-related fear without immediately affecting pain intensity itself. Copyright © 2015 American Pain Society. Published by Elsevier Inc. All rights reserved.
Maeng, Lisa Y; Shors, Tracey J
2013-01-01
Women are nearly twice as likely as men to suffer from anxiety and post-traumatic stress disorder (PTSD), indicating that many females are especially vulnerable to stressful life experience. A profound sex difference in the response to stress is also observed in laboratory animals. Acute exposure to an uncontrollable stressful event disrupts associative learning during classical eyeblink conditioning in female rats but enhances this same type of learning process in males. These sex differences in response to stress are dependent on neuronal activity in similar but also different brain regions. Neuronal activity in the basolateral nucleus of the amygdala (BLA) is necessary in both males and females. However, neuronal activity in the medial prefrontal cortex (mPFC) during the stressor is necessary to modify learning in females but not in males. The mPFC is often divided into its prelimbic (PL) and infralimbic (IL) subregions, which differ both in structure and function. Through its connections to the BLA, we hypothesized that neuronal activity within the PL, but not IL, during the stressor is necessary to suppress learning in females. To test this hypothesis, either the PL or IL of adult female rats was bilaterally inactivated with the GABA(A) agonist muscimol during acute inescapable swim stress. About 24 h later, all subjects were trained with classical eyeblink conditioning. Though stressed, females without neuronal activity in the PL learned well. In contrast, females with IL inactivation during the stressor did not learn well, behaving similarly to stressed vehicle-treated females. These data suggest that exposure to a stressful event critically engages the PL, but not IL, to disrupt associative learning in females.
This circuit may be similarly engaged in women who become cognitively impaired after stressful life events.
Dividing time: concurrent timing of auditory and visual events by young and elderly adults.
McAuley, J Devin; Miller, Jonathan P; Wang, Mo; Pang, Kevin C H
2010-07-01
This article examines age differences in individuals' ability to produce the durations of learned auditory and visual target events either in isolation (focused attention) or concurrently (divided attention). Young adults produced learned target durations equally well in focused and divided attention conditions. Older adults, in contrast, showed an age-related increase in timing variability in divided attention conditions that tended to be more pronounced for visual targets than for auditory targets. Age-related impairments were associated with a decrease in working memory span; moreover, the relationship between working memory and timing performance was largest for visual targets in divided attention conditions.
Audio–visual interactions for motion perception in depth modulate activity in visual area V3A
Ogawa, Akitoshi; Macaluso, Emiliano
2013-01-01
Multisensory signals can enhance the spatial perception of objects and events in the environment. Changes of visual size and auditory intensity provide us with the main cues about motion direction in depth. However, frequency changes in audition and binocular disparity in vision also contribute to the perception of motion in depth. Here, we presented subjects with several combinations of auditory and visual depth-cues to investigate multisensory interactions during processing of motion in depth. The task was to discriminate the direction of auditory motion in depth according to increasing or decreasing intensity. Rising or falling auditory frequency provided an additional within-audition cue that matched or did not match the intensity change (i.e. intensity-frequency (IF) “matched vs. unmatched” conditions). In two-thirds of the trials, a task-irrelevant visual stimulus moved either in the same or opposite direction of the auditory target, leading to audio–visual “congruent vs. incongruent” between-modalities depth-cues. Furthermore, these conditions were presented either with or without binocular disparity. Behavioral data showed that the best performance was observed in the audio–visual congruent condition with IF matched. Brain imaging results revealed maximal response in visual area V3A when all cues provided congruent and reliable depth information (i.e. audio–visual congruent, IF-matched condition including disparity cues). Analyses of effective connectivity revealed increased coupling from auditory cortex to V3A specifically in audio–visual congruent trials. We conclude that within- and between-modalities cues jointly contribute to the processing of motion direction in depth, and that they do so via dynamic changes of connectivity between visual and auditory cortices. PMID:23333414
Oscillatory support for rapid frequency change processing in infants.
Musacchia, Gabriella; Choudhury, Naseem A; Ortiz-Mantilla, Silvia; Realpe-Bonilla, Teresa; Roesler, Cynthia P; Benasich, April A
2013-11-01
Rapid auditory processing and auditory change detection abilities are crucial aspects of speech and language development, particularly in the first year of life. Animal models and adult studies suggest that oscillatory synchrony, and in particular low-frequency oscillations play key roles in this process. We hypothesize that infant perception of rapid pitch and timing changes is mediated, at least in part, by oscillatory mechanisms. Using event-related potentials (ERPs), source localization and time-frequency analysis of event-related oscillations (EROs), we examined the neural substrates of rapid auditory processing in 4-month-olds. During a standard oddball paradigm, infants listened to tone pairs with invariant standard (STD, 800-800 Hz) and variant deviant (DEV, 800-1200 Hz) pitch. STD and DEV tone pairs were first presented in a block with a short inter-stimulus interval (ISI) (Rapid Rate: 70 ms ISI), followed by a block of stimuli with a longer ISI (Control Rate: 300 ms ISI). Results showed greater ERP peak amplitude in response to the DEV tone in both conditions and later and larger peaks during Rapid Rate presentation, compared to the Control condition. Sources of neural activity, localized to right and left auditory regions, showed larger and faster activation in the right hemisphere for both rate conditions. Time-frequency analysis of the source activity revealed clusters of theta band enhancement to the DEV tone in right auditory cortex for both conditions. Left auditory activity was enhanced only during Rapid Rate presentation. These data suggest that local low-frequency oscillatory synchrony underlies rapid processing and can robustly index auditory perception in young infants. Furthermore, left hemisphere recruitment during rapid frequency change discrimination suggests a difference in the spectral and temporal resolution of right and left hemispheres at a very young age. © 2013 Elsevier Ltd. All rights reserved.
Comparing Auditory-Only and Audiovisual Word Learning for Children with Hearing Loss.
McDaniel, Jena; Camarata, Stephen; Yoder, Paul
2018-05-15
Although reducing visual input to emphasize auditory cues is a common practice in pediatric auditory (re)habilitation, the extant literature offers minimal empirical evidence for whether unisensory auditory-only (AO) or multisensory audiovisual (AV) input is more beneficial to children with hearing loss for developing spoken language skills. Using an adapted alternating treatments single case research design, we evaluated the effectiveness and efficiency of a receptive word learning intervention with and without access to visual speechreading cues. Four preschool children with prelingual hearing loss participated. Based on probes without visual cues, three participants demonstrated strong evidence for learning in the AO and AV conditions relative to a control (no-teaching) condition. No participants demonstrated a differential rate of learning between AO and AV conditions. Neither an inhibitory effect predicted by a unisensory theory nor a beneficial effect predicted by a multisensory theory for providing visual cues was identified. Clinical implications are discussed.
Auditory and Visual Cues for Topic Maintenance with Persons Who Exhibit Dementia of Alzheimer's Type
Teten, Amy F.; Dagenais, Paul A.; Friehe, Mary J.
2015-01-01
This study compared the effectiveness of auditory and visual redirections in facilitating topic coherence for persons with Dementia of Alzheimer's Type (DAT). Five persons with moderate stage DAT engaged in conversation with the first author. Three topics related to activities of daily living, recreational activities, food, and grooming, were broached. Each topic was presented three times to each participant: once as a baseline condition, once with auditory redirection to topic, and once with visual redirection to topic. Transcripts of the interactions were scored for overall coherence. Condition was a significant factor in that the DAT participants exhibited better topic maintenance under visual and auditory conditions as opposed to baseline. In general, the performance of the participants was not affected by the topic, except for significantly higher overall coherence ratings for the visually redirected interactions dealing with the topic of food. PMID:26171273
NASA Technical Reports Server (NTRS)
Begault, Durand R.
1993-01-01
The advantage of a head-up auditory display was evaluated in a preliminary experiment designed to measure and compare the acquisition time for capturing visual targets under two auditory conditions: standard one-earpiece presentation and two-earpiece three-dimensional (3D) audio presentation. Twelve commercial airline crews were tested under full mission simulation conditions at the NASA-Ames Man-Vehicle Systems Research Facility advanced concepts flight simulator. Scenario software generated visual targets corresponding to aircraft that would activate a traffic collision avoidance system (TCAS) aural advisory; the spatial auditory position was linked to the visual position with 3D audio presentation. Results showed that crew members using a 3D auditory display acquired targets approximately 2.2 s faster than did crew members who used one-earpiece headsets, but there was no significant difference in the number of targets acquired.
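The 3D audio condition above ties the aural advisory's apparent direction to the target's bearing. One standard ingredient of such spatialization is the interaural time difference (ITD). As a minimal sketch, not the simulator's actual rendering method, Woodworth's spherical-head approximation gives the ITD for a source at a given azimuth; the head radius and speed of sound below are assumed textbook values, not parameters from this study:

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural
    time difference for a source at azimuth theta (0 = straight ahead):
    ITD = (r / c) * (sin(theta) + theta)."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (math.sin(theta) + theta)

itd_0 = woodworth_itd(0.0)    # source straight ahead: no delay
itd_90 = woodworth_itd(90.0)  # source fully lateral: maximum delay
```

At 90 degrees azimuth this yields roughly 0.65 ms, close to the commonly cited maximum ITD for an average adult head.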
Sekiguchi, Yusuke; Honda, Keita; Ishiguro, Akio
2016-01-01
Sensory impairments caused by neurological or physical disorders hamper kinesthesia, making rehabilitation difficult. In order to overcome this problem, we proposed and developed a novel biofeedback prosthesis called Auditory Foot for transforming sensory modalities, in which the sensor prosthesis transforms plantar sensations to auditory feedback signals. This study investigated the short-term effect of the auditory feedback prosthesis on walking in stroke patients with hemiparesis. To evaluate the effect, we compared four conditions of auditory feedback from plantar sensors at the heel and fifth metatarsal. We found significant differences in the maximum hip extension angle and ankle plantar flexor moment on the affected side during the stance phase, between conditions with and without auditory feedback signals. These results indicate that our sensory prosthesis could enhance walking performance in stroke patients with hemiparesis, resulting in effective short-term rehabilitation. PMID:27547456
Altieri, Nicholas; Wenger, Michael J.
2013-01-01
Speech perception engages both auditory and visual modalities. Limitations of traditional accuracy-only approaches in the investigation of audiovisual speech perception have motivated the use of new methodologies. In an audiovisual speech identification task, we utilized capacity (Townsend and Nozawa, 1995), a dynamic measure of efficiency, to quantify audiovisual integration. Capacity was used to compare RT distributions from audiovisual trials to RT distributions from auditory-only and visual-only trials across three listening conditions: clear auditory signal, S/N ratio of −12 dB, and S/N ratio of −18 dB. The purpose was to obtain EEG recordings in conjunction with capacity to investigate how a late ERP co-varies with integration efficiency. Results showed efficient audiovisual integration for low auditory S/N ratios, but inefficient audiovisual integration when the auditory signal was clear. The ERP analyses showed evidence for greater audiovisual amplitude compared to the unisensory signals for lower auditory S/N ratios (higher capacity/efficiency) compared to the high S/N ratio (low capacity/inefficient integration). The data are consistent with an interactive framework of integration, where auditory recognition is influenced by speech-reading as a function of signal clarity. PMID:24058358
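The capacity measure cited above (Townsend and Nozawa, 1995) compares integrated hazard functions estimated from the RT distributions: C(t) = H_AV(t) / (H_A(t) + H_V(t)), with C(t) > 1 indicating efficient (super-capacity) integration. A rough sketch of the OR-task capacity coefficient follows; the RT samples, distribution parameters, and evaluation time point are fabricated for illustration, not the study's data:

```python
import numpy as np

def integrated_hazard(rts, t):
    """Empirical integrated hazard H(t) = -log S(t), where S(t) is the
    proportion of response times exceeding t."""
    rts = np.asarray(rts, dtype=float)
    return -np.log(np.mean(rts > t))

def capacity_or(rt_av, rt_a, rt_v, t):
    """OR-task capacity coefficient C(t) = H_AV(t) / (H_A(t) + H_V(t))."""
    return integrated_hazard(rt_av, t) / (
        integrated_hazard(rt_a, t) + integrated_hazard(rt_v, t))

# Fabricated RT samples (ms): audiovisual responses faster than unisensory
rng = np.random.default_rng(1)
rt_a = rng.normal(560, 60, 1000)
rt_v = rng.normal(580, 60, 1000)
rt_av = rng.normal(480, 60, 1000)
c = capacity_or(rt_av, rt_a, rt_v, t=520.0)  # c > 1: efficient integration
```

With the audiovisual distribution shifted faster than both unisensory distributions, the coefficient exceeds 1, mirroring the "efficient integration" pattern the abstract reports for low S/N ratios.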
Blom, Jan Dirk
2015-01-01
Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.
Auditory function in children with Charcot-Marie-Tooth disease.
Rance, Gary; Ryan, Monique M; Bayliss, Kristen; Gill, Kathryn; O'Sullivan, Caitlin; Whitechurch, Marny
2012-05-01
The peripheral manifestations of the inherited neuropathies are increasingly well characterized, but their effects upon cranial nerve function are not well understood. Hearing loss is recognized in a minority of children with this condition, but has not previously been systematically studied. A clear understanding of the prevalence and degree of auditory difficulties in this population is important as hearing impairment can impact upon speech/language development, social interaction ability and educational progress. The aim of this study was to investigate auditory pathway function, speech perception ability and everyday listening and communication in a group of school-aged children with inherited neuropathies. Twenty-six children with Charcot-Marie-Tooth disease confirmed by genetic testing and physical examination participated. Eighteen had demyelinating neuropathies (Charcot-Marie-Tooth type 1) and eight had the axonal form (Charcot-Marie-Tooth type 2). While each subject had normal or near-normal sound detection, individuals in both disease groups showed electrophysiological evidence of auditory neuropathy with delayed or low amplitude auditory brainstem responses. Auditory perception was also affected, with >60% of subjects with Charcot-Marie-Tooth type 1 and >85% of Charcot-Marie-Tooth type 2 suffering impaired processing of auditory temporal (timing) cues and/or abnormal speech understanding in everyday listening conditions.
Stekelenburg, Jeroen J; Vroomen, Jean
2012-01-01
In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical sub-additive amplitude reductions (AV - V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that this N1 suppression was greater for the spatially congruent stimuli. A very early audiovisual interaction was also found at 40-60 ms (P50) in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
Geva, R; Eshel, R; Leitner, Y; Fattal-Valevski, A; Harel, S
2008-12-01
Recent reports showed that children born with intrauterine growth restriction (IUGR) are at greater risk of experiencing verbal short-term memory span (STM) deficits that may impede their learning capacities at school. It is still unknown whether these deficits are modality dependent. This long-term, prospective design study examined modality-dependent verbal STM functions in children who were diagnosed at birth with IUGR (n = 138) and a control group (n = 64). Their STM skills were evaluated individually at 9 years of age with four conditions of the Visual-Aural Digit Span Test (VADS; Koppitz, 1981): auditory-oral, auditory-written, visuospatial-oral and visuospatial-written. Cognitive competence was evaluated with the short form of the Wechsler Intelligence Scales for Children--revised (WISC-R95; Wechsler, 1998). We found IUGR-related specific auditory-oral STM deficits (p < .036) in conjunction with two double dissociations: an auditory-visuospatial (p < .014) and an input-output processing distinction (p < .014). Cognitive competence had a significant effect on all four conditions; however, the effect of IUGR on the auditory-oral condition was not overridden by the effect of intelligence quotient (IQ). Intrauterine growth restriction affects global competence and inter-modality processing, as well as distinct auditory input processing related to verbal STM functions. The findings support a long-term relationship between prenatal aberrant head growth and auditory verbal STM deficits by the end of the first decade of life. Empirical, clinical and educational implications are presented.
Hannah, Beverly; Wang, Yue; Jongman, Allard; Sereno, Joan A; Cao, Jiguo; Nie, Yunlong
2017-01-01
Speech perception involves multiple input modalities. Research has indicated that perceivers establish cross-modal associations between auditory and visuospatial events to aid perception. Such intermodal relations can be particularly beneficial for speech development and learning, where infants and non-native perceivers need additional resources to acquire and process new sounds. This study examines how facial articulatory cues and co-speech hand gestures mimicking pitch contours in space affect non-native Mandarin tone perception. Native English as well as Mandarin perceivers identified tones embedded in noise with either congruent or incongruent Auditory-Facial (AF) and Auditory-FacialGestural (AFG) inputs. Native Mandarin results showed the expected ceiling-level performance in the congruent AF and AFG conditions. In the incongruent conditions, while AF identification was primarily auditory-based, AFG identification was partially based on gestures, demonstrating the use of gestures as valid cues in tone identification. The English perceivers' performance was poor in the congruent AF condition, but improved significantly in AFG. While the incongruent AF identification showed some reliance on facial information, incongruent AFG identification relied more on gestural than auditory-facial information. These results indicate positive effects of facial and especially gestural input on non-native tone perception, suggesting that cross-modal (visuospatial) resources can be recruited to aid auditory perception when phonetic demands are high. The current findings may inform patterns of tone acquisition and development, suggesting how multi-modal speech enhancement principles may be applied to facilitate speech learning.
Chang, Young-Soo; Hong, Sung Hwa; Kim, Eun Yeon; Choi, Ji Eun; Chung, Won-Ho; Cho, Yang-Sun; Moon, Il Joon
2018-05-18
Despite recent advancement in the prediction of cochlear implant outcome, the benefit of bilateral procedures compared to bimodal stimulation and how we predict speech perception outcomes of sequential bilateral cochlear implant based on bimodal auditory performance in children remain unclear. This investigation was performed: (1) to determine the benefit of sequential bilateral cochlear implant and (2) to identify the associated factors for the outcome of sequential bilateral cochlear implant. Observational and retrospective study. We retrospectively analyzed 29 patients who received a sequential cochlear implant following a bimodal-fitting condition. Audiological evaluations comprised the categories of auditory performance score, speech perception with monosyllabic and disyllabic words, and the Korean version of Ling. Audiological evaluations were performed before the sequential cochlear implant with the bimodal-fitting condition (CI1+HA) and one year after the sequential cochlear implant with the bilateral cochlear implant condition (CI1+CI2). The Good Performance Group (GP) was defined as follows: 90% or higher in monosyllabic and disyllabic word tests with the auditory-only condition, or 20% or higher improvement in the scores with CI1+CI2. Age at first implantation, inter-implant interval, categories of auditory performance score, and various comorbidities were analyzed by logistic regression analysis. Compared to CI1+HA, CI1+CI2 provided significant benefit in categories of auditory performance, speech perception, and Korean version of Ling results. Preoperative categories of auditory performance scores were the only factor associated with GP status (odds ratio=4.38, 95% confidence interval=1.07-17.93, p=0.04).
Children with limited language development in the bimodal condition should be considered for sequential bilateral cochlear implantation, and the preoperative categories of auditory performance score could be used as a predictor of speech perception after sequential cochlear implantation. Copyright © 2018 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
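The study above reports its associated factor as an odds ratio with a 95% confidence interval. As a hedged illustration of how such a ratio and its Wald interval are computed from a 2x2 table (the counts below are hypothetical, not the study's data, which came from a fitted logistic regression):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% confidence interval from a 2x2 table:
                 outcome+  outcome-
    exposed+        a         b
    exposed-        c         d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: high preoperative CAP score vs. good-performer status
or_, lo, hi = odds_ratio_ci(a=10, b=4, c=5, d=10)
```

In a logistic regression, the same quantity comes from exponentiating the fitted coefficient: OR = exp(beta), with the interval taken as exp(beta ± 1.96 * SE(beta)).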
Bratakos, M S; Reed, C M; Delhorne, L A; Denesvich, G
2001-06-01
The objective of this study was to compare the effects of a single-band envelope cue as a supplement to speechreading of segmentals and sentences when presented through either the auditory or tactual modality. The supplementary signal, which consisted of a 200-Hz carrier amplitude-modulated by the envelope of an octave band of speech centered at 500 Hz, was presented through a high-performance single-channel vibrator for tactual stimulation or through headphones for auditory stimulation. Normal-hearing subjects were trained and tested on the identification of a set of 16 medial vowels in /b/-V-/d/ context and a set of 24 initial consonants in C-/a/-C context under five conditions: speechreading alone (S), auditory supplement alone (A), tactual supplement alone (T), speechreading combined with the auditory supplement (S+A), and speechreading combined with the tactual supplement (S+T). Performance on various speech features was examined to determine the contribution of different features toward improvements under the aided conditions for each modality. Performance on the combined conditions (S+A and S+T) was compared with predictions generated from a quantitative model of multi-modal performance. To explore the relationship between benefits for segmentals and for connected speech within the same subjects, sentence reception was also examined for the three conditions of S, S+A, and S+T. For segmentals, performance generally followed the pattern of T < A < S < S+T < S+A. Significant improvements to speechreading were observed with both the tactual and auditory supplements for consonants (10 and 23 percentage-point improvements, respectively), but only with the auditory supplement for vowels (a 10 percentage-point improvement). The results of the feature analyses indicated that improvements to speechreading arose primarily from improved performance on the features low and tense for vowels and on the features voicing, nasality, and plosion for consonants. 
These improvements were greater for auditory relative to tactual presentation. When predicted percent-correct scores for the multi-modal conditions were compared with observed scores, the predicted values always exceeded observed values and the predictions were somewhat more accurate for the S+A than for the S+T conditions. For sentences, significant improvements to speechreading were observed with both the auditory and tactual supplements for high-context materials but again only with the auditory supplement for low-context materials. The tactual supplement provided a relative gain to speechreading of roughly 25% for all materials except low-context sentences (where gain was only 10%), whereas the auditory supplement provided relative gains of roughly 50% (for vowels, consonants, and low-context sentences) to 75% (for high-context sentences). The envelope cue provides a significant benefit to the speechreading of consonant segments when presented through either the auditory or tactual modality and of vowel segments through audition only. These benefits were found to be related to the reception of the same types of features under both modalities (voicing, manner, and plosion for consonants and low and tense for vowels); however, benefits were larger for auditory compared with tactual presentation. The benefits observed for segmentals appear to carry over into benefits for sentence reception under both modalities.
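The supplementary signal described above (a 200-Hz carrier amplitude-modulated by the envelope of an octave band of speech centered at 500 Hz) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' processing chain: the ideal FFT band-pass filter, the octave edges (354-707 Hz), and the toy input are all assumptions:

```python
import numpy as np

def analytic_envelope(x):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(X * h))

def envelope_cue(speech, fs, band=(354.0, 707.0), carrier_hz=200.0):
    """Extract the envelope of an octave band centered at 500 Hz and use it
    to amplitude-modulate a 200-Hz carrier (single-band envelope cue)."""
    n = len(speech)
    freqs = np.abs(np.fft.fftfreq(n, d=1.0 / fs))
    X = np.fft.fft(speech)
    X[(freqs < band[0]) | (freqs > band[1])] = 0.0  # ideal band-pass filter
    band_signal = np.real(np.fft.ifft(X))
    env = analytic_envelope(band_signal)
    t = np.arange(n) / fs
    return env * np.sin(2 * np.pi * carrier_hz * t)

# Toy input: a 500 Hz tone with a slow amplitude ramp, 0.5 s at 8 kHz
fs = 8000
t = np.arange(int(0.5 * fs)) / fs
speech = np.linspace(0.0, 1.0, t.size) * np.sin(2 * np.pi * 500 * t)
cue = envelope_cue(speech, fs)
```

The output carries only the slow intensity fluctuations of the 500-Hz band, re-centered at 200 Hz, which is why it conveys voicing and plosion cues but little spectral detail.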
Hongratanaworakit, T; Heuberger, E; Buchbauer, G
2004-01-01
The aim of the study was to investigate the effects of East Indian sandalwood oil (Santalum album, Santalaceae) and alpha-santalol on physiological parameters as well as on mental and emotional conditions in healthy human subjects after transdermal absorption. In order to exclude any olfactory stimulation, the inhalation of the fragrances was prevented by breathing masks. Eight physiological parameters, i.e., blood oxygen saturation, blood pressure, breathing rate, eye-blink rate, pulse rate, skin conductance, skin temperature, and surface electromyogram were recorded. Subjective mental and emotional condition was assessed by means of rating scales. While alpha-santalol caused significant physiological changes which are interpreted in terms of a relaxing/sedative effect, sandalwood oil provoked physiological deactivation but behavioral activation. These findings are likely to represent an uncoupling of physiological and behavioral arousal processes by sandalwood oil.
Peripheral auditory processing changes seasonally in Gambel’s white-crowned sparrow
Caras, Melissa L.; Brenowitz, Eliot; Rubel, Edwin W
2010-01-01
Song in oscine birds is a learned behavior that plays important roles in breeding. Pronounced seasonal differences in song behavior, and in the morphology and physiology of the neural circuit underlying song production are well documented in many songbird species. Androgenic and estrogenic hormones largely mediate these seasonal changes. While much work has focused on the hormonal mechanisms underlying seasonal plasticity in songbird vocal production, relatively less work has investigated seasonal and hormonal effects on songbird auditory processing, particularly at a peripheral level. We addressed this issue in Gambel’s white-crowned sparrow (Zonotrichia leucophrys gambelii), a highly seasonal breeder. Photoperiod and hormone levels were manipulated in the laboratory to simulate natural breeding and non-breeding conditions. Peripheral auditory function was assessed by measuring the auditory brainstem response (ABR) and distortion product otoacoustic emissions (DPOAEs) of males and females in both conditions. Birds exposed to breeding-like conditions demonstrated elevated thresholds and prolonged peak latencies compared with birds housed under non-breeding-like conditions. There were no changes in DPOAEs, however, which indicates that the seasonal differences in ABRs do not arise from changes in hair cell function. These results suggest that seasons and hormones impact auditory processing as well as vocal production in wild songbirds. PMID:20563817
Aurally aided visual search performance in a dynamic environment
NASA Astrophysics Data System (ADS)
McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.
2008-04-01
Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.
The impact of perilaryngeal vibration on the self-perception of loudness and the Lombard effect.
Brajot, François-Xavier; Nguyen, Don; DiGiovanni, Jeffrey; Gracco, Vincent L
2018-04-05
The role of somatosensory feedback in speech and the perception of loudness was assessed in adults without speech or hearing disorders. Participants completed two tasks: loudness magnitude estimation of a short vowel and oral reading of a standard passage. Both tasks were carried out in each of three conditions: no-masking, auditory masking alone, and mixed auditory masking plus vibration of the perilaryngeal area. A Lombard effect was elicited in both masking conditions: speakers unconsciously increased vocal intensity. Perilaryngeal vibration further increased vocal intensity above what was observed for auditory masking alone. Both masking conditions affected fundamental frequency and the first formant frequency as well, but only vibration was associated with a significant change in the second formant frequency. An additional analysis of pure-tone thresholds found no difference in auditory thresholds between masking conditions. Taken together, these findings indicate that perilaryngeal vibration effectively masked somatosensory feedback, resulting in an enhanced Lombard effect (increased vocal intensity) that did not alter speakers' self-perception of loudness. This implies that the Lombard effect results from a general sensorimotor process, rather than from a specific audio-vocal mechanism, and that the conscious self-monitoring of speech intensity is not directly based on either auditory or somatosensory feedback.
Brainstem Auditory Evoked Potential Study in Children with Autistic Disorder.
ERIC Educational Resources Information Center
Wong, Virginia; Wong, Sik Nin
1991-01-01
Brainstem auditory evoked potentials were compared in 109 children with infantile autism, 38 with autistic condition, 19 with mental retardation, and 20 normal children. Children with infantile autism or autistic condition had significantly longer brainstem transmission time than normal children suggesting neurological damage as the basis of…
Cutanda, Diana; Correa, Ángel; Sanabria, Daniel
2015-06-01
The present study investigated whether participants can develop temporal preparation driven by auditory isochronous rhythms when concurrently performing an auditory working memory (WM) task. In Experiment 1, participants had to respond to an auditory target presented after a regular or an irregular sequence of auditory stimuli while concurrently performing a Sternberg-type WM task. Results showed that participants responded faster after regular compared with irregular rhythms and that this effect was not affected by WM load; however, the lack of a significant main effect of WM load made it difficult to draw any conclusion regarding the influence of the dual-task manipulation in Experiment 1. In order to enhance dual-task interference, Experiment 2 combined the auditory rhythm procedure with an auditory N-Back task, which required WM updating (monitoring and coding of the information) and was presumably more demanding than the mere rehearsal of the WM task used in Experiment 1. Results now clearly showed dual-task interference effects (slower reaction times [RTs] in the high- vs. the low-load condition). However, such interference did not affect temporal preparation induced by rhythms, with faster RTs after regular than after irregular sequences in the high-load and low-load conditions. These results revealed that secondary tasks demanding memory updating, relative to tasks just demanding rehearsal, produced larger interference effects on overall RTs in the auditory rhythm task. Nevertheless, rhythm regularity exerted a strong temporal preparation effect that survived the interference of the WM task even when both tasks competed for processing resources within the auditory modality. (c) 2015 APA, all rights reserved.
Directional Effects between Rapid Auditory Processing and Phonological Awareness in Children
ERIC Educational Resources Information Center
Johnson, Erin Phinney; Pennington, Bruce F.; Lee, Nancy Raitano; Boada, Richard
2009-01-01
Background: Deficient rapid auditory processing (RAP) has been associated with early language impairment and dyslexia. Using an auditory masking paradigm, children with language disabilities perform selectively worse than controls at detecting a tone in a backward masking (BM) condition (tone followed by white noise) compared to a forward masking…
Mahajan, Yatin; McArthur, Genevieve
2011-05-01
To determine if an audible movie soundtrack has a degrading effect on the auditory P1, N1, P2, N2, or mismatch negativity (MMN) event-related potentials (ERPs) in children, adolescents, or adults. The auditory ERPs of 36 children, 32 young adolescents, 19 older adolescents, and 10 adults were measured while they watched a movie in two conditions: with an audible soundtrack and with a silent soundtrack. In children and adolescents, the audible movie soundtrack had a significant impact on amplitude, latency or split-half reliability of the N1, P2, N2, and MMN ERPs. The audible soundtrack had minimal impact on the auditory ERPs of adults. These findings challenge previous claims that an audible soundtrack does not degrade the auditory ERPs of children. Further, the reliability of the MMN is poorer than P1, N1, P2, and N2 peaks in both sound-off and sound-on conditions. Researchers should be cautious about using an audible movie soundtrack when measuring auditory ERPs in younger listeners. Copyright © 2010 International Federation of Clinical Neurophysiology. All rights reserved.
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2008-09-16
Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.
A psychophysiological evaluation of the perceived urgency of auditory warning signals
NASA Technical Reports Server (NTRS)
Burt, J. L.; Bartolome, D. S.; Burdette, D. W.; Comstock, J. R. Jr
1995-01-01
One significant concern that pilots have about cockpit auditory warnings is that the signals presently used lack a sense of priority. The relationship between auditory warning sound parameters and perceived urgency is, therefore, an important topic of enquiry in aviation psychology. The present investigation examined the relationship among subjective assessments of urgency, reaction time, and brainwave activity with three auditory warning signals. Subjects performed a tracking task involving automated and manual conditions, and were presented with auditory warnings having various levels of perceived and situational urgency. Subjective assessments revealed that subjects were able to rank warnings on an urgency scale, but rankings were altered after warnings were mapped to a situational urgency scale. Reaction times differed between automated and manual tracking task conditions, and physiological data showed attentional differences in response to perceived and situational warning urgency levels. This study shows that the use of physiological measures sensitive to attention and arousal, in conjunction with behavioural and subjective measures, may lead to the design of auditory warnings that produce a sense of urgency in an operator that matches the urgency of the situation.
Lidestam, Björn; Moradi, Shahram; Pettersson, Rasmus; Ricklefs, Theodor
2014-08-01
The effects of audiovisual versus auditory training for speech-in-noise identification were examined in 60 young participants. The training conditions were audiovisual training, auditory-only training, and no training (n = 20 each). In the training groups, gated consonants and words were presented at 0 dB signal-to-noise ratio; stimuli were either audiovisual or auditory-only. The no-training group watched a movie clip without performing a speech identification task. Speech-in-noise identification was measured before and after the training (or control activity). Results showed that only audiovisual training improved speech-in-noise identification, demonstrating superiority over auditory-only training.
Gallistel, C R
2017-07-01
Recent electrophysiological results imply that the duration of the stimulus onset asynchrony in eyeblink conditioning is encoded by a mechanism intrinsic to the cerebellar Purkinje cell. This raises the general question - how is quantitative information (durations, distances, rates, probabilities, amounts, etc.) transmitted by spike trains and encoded into engrams? The usual assumption is that information is transmitted by firing rates. However, rate codes are energetically inefficient and computationally awkward. A combinatorial code is more plausible. If the engram consists of altered synaptic conductances (the usual assumption), then we must ask how numbers may be written to synapses. It is much easier to formulate a coding hypothesis if the engram is realized by a cell-intrinsic molecular mechanism. Copyright © 2017 Elsevier Ltd. All rights reserved.
Reduced auditory processing capacity during vocalization in children with Selective Mutism.
Arie, Miri; Henkin, Yael; Lamy, Dominique; Tetin-Schneider, Simona; Apter, Alan; Sadeh, Avi; Bar-Haim, Yair
2007-02-01
Because abnormal Auditory Efferent Activity (AEA) is associated with auditory distortions during vocalization, we tested whether auditory processing is impaired during vocalization in children with Selective Mutism (SM). Participants were children with SM and abnormal AEA, children with SM and normal AEA, and normally speaking controls, who had to detect aurally presented target words embedded within word lists under two conditions: silence (single task), and while vocalizing (dual task). To ascertain the specificity of the auditory-vocal deficit, the effects of concurrent vocalizing were also examined during a visual task. Children with SM and abnormal AEA showed impaired auditory processing during vocalization relative to children with SM and normal AEA, and relative to control children. This impairment is specific to the auditory modality and does not reflect difficulties with dual tasking per se. These data extend previous findings suggesting that deficient auditory processing is involved in speech selectivity in SM.
Neural mechanisms underlying auditory feedback control of speech
Reilly, Kevin J.; Guenther, Frank H.
2013-01-01
The neural substrates underlying auditory feedback control of speech were investigated using a combination of functional magnetic resonance imaging (fMRI) and computational modeling. Neural responses were measured while subjects spoke monosyllabic words under two conditions: (i) normal auditory feedback of their speech, and (ii) auditory feedback in which the first formant frequency of their speech was unexpectedly shifted in real time. Acoustic measurements showed compensation to the shift within approximately 135 ms of onset. Neuroimaging revealed increased activity in bilateral superior temporal cortex during shifted feedback, indicative of neurons coding mismatches between expected and actual auditory signals, as well as right prefrontal and Rolandic cortical activity. Structural equation modeling revealed increased influence of bilateral auditory cortical areas on right frontal areas during shifted speech, indicating that projections from auditory error cells in posterior superior temporal cortex to motor correction cells in right frontal cortex mediate auditory feedback control of speech. PMID:18035557
Strength of German accent under altered auditory feedback
HOWELL, PETER; DWORZYNSKI, KATHARINA
2007-01-01
Borden’s (1979, 1980) hypothesis that speakers with vulnerable speech systems rely more heavily on feedback monitoring than do speakers with less vulnerable systems was investigated. The second language (L2) of a speaker is vulnerable, in comparison with the native language, so alteration to feedback should have a detrimental effect on it, according to this hypothesis. Here, we specifically examined whether altered auditory feedback has an effect on accent strength when speakers speak L2. There were three stages in the experiment. First, 6 German speakers who were fluent in English (their L2) were recorded under six conditions—normal listening, amplified voice level, voice shifted in frequency, delayed auditory feedback, and slowed and accelerated speech rate conditions. Second, judges were trained to rate accent strength. Training was assessed by whether it was successful in separating German speakers speaking English from native English speakers, also speaking English. In the final stage, the judges ranked recordings of each speaker from the first stage as to increasing strength of German accent. The results show that accents were more pronounced under frequency-shifted and delayed auditory feedback conditions than under normal or amplified feedback conditions. Control tests were done to ensure that listeners were judging accent, rather than fluency changes caused by altered auditory feedback. The findings are discussed in terms of Borden’s hypothesis and other accounts about why altered auditory feedback disrupts speech control. PMID:11414137
Oryadi-Zanjani, Mohammad Majid; Vahab, Maryam; Bazrafkan, Mozhdeh; Haghjoo, Asghar
2015-12-01
The aim of this study was to examine the role of audiovisual speech recognition as a clinical criterion of cochlear implant or hearing aid efficiency in Persian-language children with severe-to-profound hearing loss. This research was administered as a cross-sectional study. The sample size was 60 Persian 5-7 year old children. The assessment tool was one of the subtests of the Persian version of the Test of Language Development-Primary 3. The study included two experimental conditions: auditory-only and audiovisual presentation. The test was a closed-set task including 30 words that were orally presented by a speech-language pathologist. The scores for audiovisual word perception were significantly higher than for the auditory-only condition in the children with normal hearing (P<0.01) and cochlear implant (P<0.05); however, in the children with hearing aid, there was no significant difference in word perception score between the auditory-only and audiovisual presentation conditions (P>0.05). Audiovisual spoken word recognition can be applied as a clinical criterion to assess children with severe to profound hearing loss in order to determine whether a cochlear implant or hearing aid has been efficient for them; i.e., if a child with hearing impairment who uses a CI or HA can obtain higher scores in audiovisual spoken word recognition than in the auditory-only condition, his/her auditory skills have developed appropriately owing to an effective CI or HA, one of the main factors of auditory habilitation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Knowledge of response location alone is not sufficient to generate social inhibition of return.
Welsh, Timothy N; Manzone, Joseph; McDougall, Laura
2014-11-01
Previous research has revealed that the inhibition of return (IOR) effect emerges when individuals respond to a target at the same location as their own previous response or the previous response of a co-actor. The latter social IOR effect is thought to occur because the observation of a co-actor's response evokes a representation of that action in the observer, and this observation-evoked response code subsequently activates the inhibitory mechanisms underlying IOR. The present study was conducted to determine if knowledge of the co-actor's response alone is sufficient to evoke social IOR. Pairs of participants completed responses to targets that appeared at different button locations. Button contact generated location-contingent auditory stimuli (high and low tones in Experiment 1 and colour words in Experiment 2). In the Full condition, the observer saw the response and heard the auditory stimuli. In the Auditory Only condition, the observer did not see the co-actor's response, but heard the auditory stimuli generated via button contact to indicate response endpoint. It was found that, although significant individual and social IOR effects emerged in the Full conditions, there were no social IOR effects in the Auditory Only conditions. These findings suggest that knowledge of the co-actor's response alone via auditory information is not sufficient to activate the inhibitory processes leading to IOR. The activation of the mechanisms that lead to social IOR seems to be dependent on processing channels that code the spatial characteristics of action. Copyright © 2014 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Kodak, Tiffany; Clements, Andrea; LeBlanc, Brittany
2013-01-01
The purpose of the present investigation was to evaluate a rapid assessment procedure to identify effective instructional strategies to teach auditory-visual conditional discriminations to children diagnosed with autism. We replicated and extended previous rapid skills assessments (Lerman, Vorndran, Addison, & Kuhn, 2004) by evaluating the effects…
Individual Differences and Auditory Conditioning in Neonates.
ERIC Educational Resources Information Center
Franz, W. K.; And Others
The purposes of this study are (1) to analyze learning ability in newborns using heart rate responses to auditory temporal conditioning and (2) to correlate these with measures on the Brazelton Neonatal Behavioral Assessment Scale. Twenty normal neonates were tested using the Brazelton Scale on the third day of life. They were also given a…
Auditory Temporal Conditioning in Neonates.
ERIC Educational Resources Information Center
Franz, W. K.; And Others
Twenty normal newborns, approximately 36 hours old, were tested using an auditory temporal conditioning paradigm which consisted of a slow rise, 75 db tone played for five seconds every 25 seconds, ten times. Responses to the tones were measured by instantaneous, beat-to-beat heartrate; and the test trial was designated as the 2 1/2-second period…
Welsh, John P.; Oristaglio, Jeffrey T.
2016-01-01
Changes in the timing performance of conditioned responses (CRs) acquired during trace and delay eyeblink conditioning (EBC) are presented for diagnostic subgroups of children having autism spectrum disorder (ASD) aged 6–15 years. Children diagnosed with autistic disorder (AD) were analyzed separately from children diagnosed with either Asperger’s syndrome or Pervasive developmental disorder not otherwise specified (Asp/PDD) and compared to an age- and IQ-matched group of children who were typically developing (TD). Within-subject and between-groups contrasts in CR performance on sequential exposure to trace and delay EBC were analyzed to determine whether any differences would expose underlying functional heterogeneities of the cerebral and cerebellar systems in ASD subgroups. The EBC parameters measured were percentage CRs, CR onset latency, and CR peak latency. Neither AD nor Asp/PDD groups were impaired in CR acquisition during trace or delay EBC. Both AD and Asp/PDD groups showed altered CR timing, but not always in the same way. Although the AD group showed normal CR timing during trace EBC, the Asp/PDD group showed significant 27 and 28 ms increases in CR onset and peak latency, respectively, during trace EBC. In contrast, the direction of the timing change was opposite during delay EBC, during which the Asp/PDD group showed a significant 29 ms decrease in CR onset latency and the AD group showed a larger 77 ms decrease in CR onset latency. Only the AD group showed a decrease in CR peak latency during delay EBC, demonstrating another difference between AD and Asp/PDD. The difference in CR onset latency during delay EBC for both AD and Asp/PDD was due to an abnormal prevalence of early onset CRs that were intermixed with CRs having normal timing, as observed both in CR onset histograms and mean CR waveforms.
In conclusion, significant heterogeneity in EBC performance was apparent between diagnostic groups, and this may indicate that EBC performance can report the heterogeneity in the neurobiological predispositions for ASD. The findings will inform further explorations with larger cohorts, different sensory modalities, and different EBC paradigms and provide a reference set for future EBC studies of children having ASD and non-human models. PMID:27563293
Heine, Lizette; Castro, Maïté; Martial, Charlotte; Tillmann, Barbara; Laureys, Steven; Perrin, Fabien
2015-01-01
Preferred music is a highly emotional and salient stimulus, which has previously been shown to increase the probability of auditory cognitive event-related responses in patients with disorders of consciousness (DOC). To further investigate whether and how music modifies the functional connectivity of the brain in DOC, five patients were assessed with both a classical functional connectivity scan (control condition) and a scan while they were exposed to their preferred music (music condition). Seed-based functional connectivity (left or right primary auditory cortex) and mean network connectivity of three networks linked to conscious sound perception were assessed. The auditory network showed stronger functional connectivity with the left precentral gyrus and the left dorsolateral prefrontal cortex during music as compared to the control condition. Furthermore, functional connectivity of the external network was enhanced during the music condition in the temporo-parietal junction. Although caution should be taken due to the small sample size, these results suggest that preferred music exposure might have effects on patients' auditory network (implicated in rhythm and music perception) and on cerebral regions linked to autobiographical memory. PMID:26617542
Petrac, D C; Bedwell, J S; Renk, K; Orem, D M; Sims, V
2009-07-01
There have been relatively few studies on the relationship between recent perceived environmental stress and cognitive performance, and the existing studies do not control for state anxiety during the cognitive testing. The current study addressed this need by examining recent self-reported environmental stress and divided attention performance, while controlling for state anxiety. Fifty-four university undergraduates who self-reported a wide range of perceived recent stress (10-item perceived stress scale) completed both single and dual (simultaneous auditory and visual stimuli) continuous performance tests. Partial correlation analysis showed a statistically significant positive correlation between perceived stress and the auditory omission errors from the dual condition, after controlling for state anxiety and auditory omission errors from the single condition (r = 0.41). This suggests that increased environmental stress relates to decreased divided attention performance in auditory vigilance. In contrast, an increase in state anxiety (controlling for perceived stress) was related to a decrease in auditory omission errors from the dual condition (r = - 0.37), which suggests that state anxiety may improve divided attention performance. Results suggest that further examination of the neurobiological consequences of environmental stress on divided attention and other executive functioning tasks is needed.
Hao, Yongxin; Jing, He; Bi, Qiang; Zhang, Jiaozhen; Qin, Ling; Yang, Pingting
2014-12-15
Though accumulating literature implicates cytokines in the pathophysiology of mental disorders, the role of interleukin-6 (IL-6) in learning and memory functions remains unresolved. The present study was undertaken to investigate the effect of IL-6 on amygdala-dependent fear learning. Adult Wistar rats were used along with the auditory fear conditioning test and pharmacological techniques. The data showed that infusions of IL-6, aimed at the amygdala, dose-dependently impaired the acquisition and extinction of conditioned fear. In addition, Western blot analysis confirmed that JAK/STAT was temporally activated (phosphorylated) by the IL-6 treatment. Moreover, rats treated with JSI-124, a JAK/STAT3 inhibitor, prior to the IL-6 treatment showed a significant decrease in the IL-6-induced impairments of fear conditioning. Taken together, our results demonstrate that the learning behavior of rats in auditory fear conditioning can be modulated by IL-6 via the amygdala. Furthermore, JAK/STAT3 activation in the amygdala appears to play a role in the IL-6-mediated behavioral alterations of rats in auditory fear learning. Copyright © 2014 Elsevier B.V. All rights reserved.
Using Auditory Steady State Responses to Outline the Functional Connectivity in the Tinnitus Brain
Schlee, Winfried; Weisz, Nathan; Bertrand, Olivier; Hartmann, Thomas; Elbert, Thomas
2008-01-01
Background Tinnitus is an auditory phantom perception that is most likely generated in the central nervous system. Most tinnitus research has concentrated on the auditory system. However, it was recently suggested that non-auditory structures are also involved in a global network that encodes subjective tinnitus. We tested this assumption using auditory steady state responses to entrain the tinnitus network and investigated long-range functional connectivity across various non-auditory brain regions. Methods and Findings Using whole-head magnetoencephalography we investigated cortical connectivity by means of phase synchronization in tinnitus subjects and healthy controls. We found evidence for a deviating pattern of long-range functional connectivity in tinnitus that was strongly correlated with individual ratings of the tinnitus percept. Phase couplings between the anterior cingulum and the right frontal lobe and phase couplings between the anterior cingulum and the right parietal lobe showed significant condition × group interactions and were correlated with the individual tinnitus distress ratings only in the tinnitus condition and not in the control conditions. Conclusions To the best of our knowledge, this is the first study that demonstrates the existence of a global tinnitus network of long-range cortical connections outside the central auditory system. This result extends the current knowledge of how tinnitus is generated in the brain. We propose that this global extent of the tinnitus network is crucial for the continuous perception of the tinnitus tone, and that a therapeutic intervention able to change this network should result in relief of tinnitus. PMID:19005566
Neural stem/progenitor cell properties of glial cells in the adult mouse auditory nerve
Lang, Hainan; Xing, Yazhi; Brown, LaShardai N.; Samuvel, Devadoss J.; Panganiban, Clarisse H.; Havens, Luke T.; Balasubramanian, Sundaravadivel; Wegner, Michael; Krug, Edward L.; Barth, Jeremy L.
2015-01-01
The auditory nerve is the primary conveyor of hearing information from sensory hair cells to the brain. It has been believed that loss of the auditory nerve is irreversible in the adult mammalian ear, resulting in sensorineural hearing loss. We examined the regenerative potential of the auditory nerve in a mouse model of auditory neuropathy. Following neuronal degeneration, quiescent glial cells converted to an activated state showing a decrease in nuclear chromatin condensation, altered histone deacetylase expression and up-regulation of numerous genes associated with neurogenesis or development. Neurosphere formation assays showed that adult auditory nerves contain neural stem/progenitor cells (NSPs) that were within a Sox2-positive glial population. Production of neurospheres from auditory nerve cells was stimulated by acute neuronal injury and hypoxic conditioning. These results demonstrate that a subset of glial cells in the adult auditory nerve exhibit several characteristics of NSPs and are therefore potential targets for promoting auditory nerve regeneration. PMID:26307538
AUDITORY ASSOCIATIVE MEMORY AND REPRESENTATIONAL PLASTICITY IN THE PRIMARY AUDITORY CORTEX
Weinberger, Norman M.
2009-01-01
Historically, the primary auditory cortex has been largely ignored as a substrate of auditory memory, perhaps because studies of associative learning could not reveal the plasticity of receptive fields (RFs). The use of a unified experimental design, in which RFs are obtained before and after standard training (e.g., classical and instrumental conditioning) revealed associative representational plasticity, characterized by facilitation of responses to tonal conditioned stimuli (CSs) at the expense of other frequencies, producing CS-specific tuning shifts. Associative representational plasticity (ARP) possesses the major attributes of associative memory: it is highly specific, discriminative, rapidly acquired, consolidates over hours and days and can be retained indefinitely. The nucleus basalis cholinergic system is sufficient both for the induction of ARP and for the induction of specific auditory memory, including control of the amount of remembered acoustic details. Extant controversies regarding the form, function and neural substrates of ARP appear largely to reflect different assumptions, which are explicitly discussed. The view that the forms of plasticity are task-dependent is supported by ongoing studies in which auditory learning involves CS-specific decreases in threshold or bandwidth without affecting frequency tuning. Future research needs to focus on the factors that determine ARP and their functions in hearing and in auditory memory. PMID:17344002
ERIC Educational Resources Information Center
Beauchamp, Chris M.; Stelmack, Robert M.
2006-01-01
The relation between intelligence and speed of auditory discrimination was investigated during an auditory oddball task with backward masking. In target discrimination conditions that varied in the interval between the target and the masking stimuli and in the tonal frequency of the target and masking stimuli, higher ability participants (HA)…
Anteverted internal auditory canal as an inner ear anomaly in patients with craniofacial microsomia.
L'Heureux-Lebeau, Bénédicte; Saliba, Issam
2014-09-01
Craniofacial microsomia involves structures of the first and second branchial arches. A wide range of ear anomalies, affecting the external, middle and inner ear, has been described in association with this condition. We report three cases of anteverted internal auditory canal in patients presenting craniofacial microsomia. This unique internal auditory canal orientation was found on high-resolution computed tomography of the temporal bones. This internal auditory canal anomaly has not previously been reported in craniofacial anomalies. Copyright © 2014. Published by Elsevier Ireland Ltd.
Foy, Michael R; Foy, Judith G
2016-12-01
One of the most prolific behavioral neuroscientists of his generation, Richard F. Thompson published more than 450 research articles during his almost 60-year career before his death in 2014. The breadth and reach of his scholarship has extended to a large multidisciplinary audience of scientists. The focal point of this article is arguably his most influential paper on cerebellar classical conditioning entitled "The Neurobiology of Learning and Memory" that appeared in Science in 1986 and has been cited 700 times since its publication. Here, a summary of the initial Thompson laboratory research leading up to an understanding of the cerebellum and its critical role in memory traces will be discussed, along with conclusions from the Science article pertinent to cerebellar classical conditioning. The summary will also discuss how the original 1986 article continues to stimulate and influence new research and provide further insights into the role of the cerebellum in the neurobiology of learning and memory function relevant to studies of mammalian classical conditioning. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
The effect of spatial auditory landmarks on ambulation.
Karim, Adham M; Rumalla, Kavelin; King, Laurie A; Hullar, Timothy E
2018-02-01
The maintenance of balance and posture is a result of the collaborative efforts of vestibular, proprioceptive, and visual sensory inputs, but a fourth neural input, audition, may also improve balance. Here, we tested the hypothesis that auditory inputs function as environmental spatial landmarks whose effectiveness depends on sound localization ability during ambulation. Eight blindfolded normal young subjects performed the Fukuda-Unterberger test in three auditory conditions: silence, white noise played through headphones (head-referenced condition), and white noise played through a loudspeaker placed directly in front of the subject, 135 cm from the ear at ear height (earth-referenced condition). For the earth-referenced condition, an additional experiment was performed in which the effect of moving the speaker azimuthal position to 45, 90, 135, and 180° was tested. Subjects performed significantly better in the earth-referenced condition than in the head-referenced or silent conditions. Performance progressively decreased over the range from 0° to 135°, but all subjects then improved slightly at the 180° compared to the 135° condition. These results suggest that the presence of sound dramatically improves the ability to ambulate when vision is limited, but that sound sources must be located in the external environment in order to improve balance. This supports the hypothesis that they act by providing spatial landmarks against which head and body movement and orientation may be compared and corrected. Balance improvement in the azimuthal plane mirrors sensitivity to sound movement at similar positions, indicating that similar auditory mechanisms may underlie both processes. These results may help optimize the use of auditory cues to improve balance in particular patient populations. Copyright © 2017 Elsevier B.V. All rights reserved.
Seasonal Plasticity of Precise Spike Timing in the Avian Auditory System
Sen, Kamal; Rubel, Edwin W; Brenowitz, Eliot A.
2015-01-01
Vertebrate audition is a dynamic process, capable of exhibiting both short- and long-term adaptations to varying listening conditions. Precise spike timing has long been known to play an important role in auditory encoding, but its role in sensory plasticity remains largely unexplored. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a songbird that shows pronounced seasonal fluctuations in circulating levels of sex-steroid hormones, which are known to be potent neuromodulators of auditory function. We recorded extracellular single-unit activity in the auditory forebrain of males and females under different breeding conditions and used a computational approach to explore two potential strategies for the neural discrimination of sound level: one based on spike counts and one based on spike timing reliability. We report that breeding condition has robust sex-specific effects on spike timing. Specifically, in females, breeding condition increases the proportion of cells that rely solely on spike timing information and increases the temporal resolution required for optimal intensity encoding. Furthermore, in a functionally distinct subset of cells that are particularly well suited for amplitude encoding, female breeding condition enhances spike timing-based discrimination accuracy. No effects of breeding condition were observed in males. Our results suggest that high-resolution temporal discharge patterns may provide a plastic neural substrate for sensory coding. PMID:25716843
Modulation of auditory processing during speech movement planning is limited in adults who stutter
Daliri, Ayoub; Max, Ludo
2015-01-01
Stuttering is associated with atypical structural and functional connectivity in sensorimotor brain areas, in particular premotor, motor, and auditory regions. It remains unknown, however, which specific mechanisms of speech planning and execution are affected by these neurological abnormalities. To investigate pre-movement sensory modulation, we recorded 12 stuttering and 12 nonstuttering adults’ auditory evoked potentials in response to probe tones presented prior to speech onset in a delayed-response speaking condition vs. no-speaking control conditions (silent reading; seeing nonlinguistic symbols). Findings indicate that, during speech movement planning, the nonstuttering group showed a statistically significant modulation of auditory processing (reduced N1 amplitude) that was not observed in the stuttering group. Thus, the obtained results provide electrophysiological evidence in support of the hypothesis that stuttering is associated with deficiencies in modulating the cortical auditory system during speech movement planning. This specific sensorimotor integration deficiency may contribute to inefficient feedback monitoring and, consequently, speech dysfluencies. PMID:25796060
Peckham, Andrew D.; Johnson, Sheri L.
2015-01-01
Extensive research supports the role of striatal dopamine in pursuing and responding to reward and indicates that eye-blink rate is a valid index of striatal dopamine. This study tested whether phasic changes in blink rate could provide an index of reward pursuit. This hypothesis was tested in people with bipolar I disorder (BD; a population with aberrations in reward responsivity) and in those without BD. Thirty-one adults with BD and 28 control participants completed a laboratory task involving effort towards monetary reward. Blink rate was recorded using eye-tracking at baseline, reward anticipation, and post-reward. Those in the BD group completed self-report measures relating to reward and ambition. Results showed that across all participants, blink rates increased from reward anticipation to post-reward. In the BD group, reward-relevant measures were strongly correlated with variation in blink rate. These findings provide validation for phasic changes in blink rate as an index of reward response. PMID:27274949
Multi-channel orbicularis oculi stimulation to restore eye-blink function in facial paralysis.
Somia, N N; Zonnevijlle, E D; Stremel, R W; Maldonado, C; Gossman, M D; Barker, J H
2001-01-01
Facial paralysis due to facial nerve injury results in the loss of function of the muscles of the hemiface. The most serious complication in extreme cases is the loss of vision. In this study, we compared the effectiveness of single- and multiple-channel electrical stimulation to restore a complete and cosmetically acceptable eye blink. We established bilateral orbicularis oculi muscle (OOM) paralysis in eight dogs; the OOM of one side was directly stimulated using single-channel electrical stimulation and the opposite side was stimulated using multi-channel electrical stimulation. The changes in the palpebral fissure and complete palpebral closure were measured. The difference in current intensities between the multi-channel and single-channel stimulation groups was significant, while only multi-channel stimulation produced complete eyelid closure. The latest electronic stimulation circuitry with high-quality implantable electrodes will make it possible to regulate precisely OOM contractions and thus generate complete and cosmetically acceptable eye-blink motion in patients with facial paralysis. Copyright 2001 Wiley-Liss, Inc.
Lee, Mei-Hua; Bodfish, James W; Lewis, Mark H; Newell, Karl M
2010-01-01
This study investigated the mean rate and time-dependent sequential organization of spontaneous eye blinks in adults with intellectual and developmental disability (IDD) and individuals from this group who were additionally categorized with stereotypic movement disorder (IDD+SMD). The mean blink rate was lower in the IDD+SMD group than the IDD group and both of these groups had a lower blink rate than a contrast group of healthy adults. In the IDD group the n to n+1 sequential organization over time of the eye-blink durations showed a stronger compensatory organization than the contrast group suggesting decreased complexity/dimensionality of eye-blink behavior. Very low blink rate (and thus insufficient time series data) precluded analysis of time-dependent sequential properties in the IDD+SMD group. These findings support the hypothesis that both IDD and SMD are associated with a reduction in the dimension and adaptability of movement behavior and that this may serve as a risk factor for the expression of abnormal movements.
Effects of auditory selective attention on chirp evoked auditory steady state responses.
Bohr, Andreas; Bernarding, Corinna; Strauss, Daniel J; Corona-Strauss, Farah I
2011-01-01
Auditory steady state responses (ASSRs) are frequently used to assess auditory function. Recently, the interest in effects of attention on ASSRs has increased. In this paper, we investigated for the first time possible effects of attention on ASSRs evoked by amplitude-modulated and frequency-modulated chirp paradigms. Different paradigms were designed using chirps with low and high frequency content, and the stimulation was presented in a monaural and dichotic modality. A total of 10 young subjects participated in the study; they were instructed to ignore the stimuli, and in a second repetition they had to detect a deviant stimulus. In the time domain analysis, we found enhanced amplitudes for the attended conditions. Furthermore, we noticed higher amplitude values for the condition using frequency-modulated low frequency chirps evoked by monaural stimulation. The largest difference between the attended and unattended modalities was observed in the dichotic case of the amplitude-modulated condition using chirps with low frequency content.
Aroudi, Ali; Doclo, Simon
2017-07-01
To decode auditory attention from single-trial EEG recordings in an acoustic scenario with two competing speakers, a least-squares method has been recently proposed. This method however requires the clean speech signals of both the attended and the unattended speaker to be available as reference signals. Since in practice only the binaural signals consisting of a reverberant mixture of both speakers and background noise are available, in this paper we explore the potential of using these (unprocessed) signals as reference signals for decoding auditory attention in different acoustic conditions (anechoic, reverberant, noisy, and reverberant-noisy). In addition, we investigate whether it is possible to use these signals instead of the clean attended speech signal for filter training. The experimental results show that using the unprocessed binaural signals for filter training and for decoding auditory attention is feasible with a relatively large decoding performance, although for most acoustic conditions the decoding performance is significantly lower than when using the clean speech signals.
Auditory-Motor Control of Vocal Production during Divided Attention: Behavioral and ERP Correlates.
Liu, Ying; Fan, Hao; Li, Jingting; Jones, Jeffery A; Liu, Peng; Zhang, Baofeng; Liu, Hanjun
2018-01-01
When people hear unexpected perturbations in auditory feedback, they produce rapid compensatory adjustments of their vocal behavior. Recent evidence has shown enhanced vocal compensations and cortical event-related potentials (ERPs) in response to attended pitch feedback perturbations, suggesting that this reflex-like behavior is influenced by selective attention. Less is known, however, about auditory-motor integration for voice control during divided attention. The present cross-modal study investigated the behavioral and ERP correlates of auditory feedback control of vocal pitch production during divided attention. During the production of sustained vowels, 32 young adults were instructed to simultaneously attend to both pitch feedback perturbations they heard and flashing red lights they saw. The presentation rate of the visual stimuli was varied to produce a low, intermediate, and high attentional load. The behavioral results showed that the low-load condition elicited significantly smaller vocal compensations for pitch perturbations than the intermediate-load and high-load conditions. Moreover, the cortical processing of vocal pitch feedback was modulated as a function of divided attention. When compared to the low-load and intermediate-load conditions, the high-load condition elicited significantly larger N1 responses and smaller P2 responses to pitch perturbations. These findings provide the first neurobehavioral evidence that divided attention can modulate auditory feedback control of vocal pitch production.
Kokinous, Jenny; Tavano, Alessandro; Kotz, Sonja A; Schröger, Erich
2017-02-01
The role of spatial frequencies (SF) is highly debated in emotion perception, but previous work suggests the importance of low SFs for detecting emotion in faces. Furthermore, emotion perception essentially relies on the rapid integration of multimodal information from faces and voices. We used EEG to test the functional relevance of SFs in the integration of emotional and non-emotional audiovisual stimuli. While viewing dynamic face-voice pairs, participants were asked to identify auditory interjections, and the electroencephalogram (EEG) was recorded. Audiovisual integration was measured as auditory facilitation, indexed by the extent of the auditory N1 amplitude suppression in audiovisual compared to an auditory only condition. We found an interaction of SF filtering and emotion in the auditory response suppression. For neutral faces, larger N1 suppression ensued in the unfiltered and high SF conditions as compared to the low SF condition. Angry face perception led to a larger N1 suppression in the low SF condition. While the results for the neutral faces indicate that perceptual quality in terms of SF content plays a major role in audiovisual integration, the results for angry faces suggest that early multisensory integration of emotional information favors low SF neural processing pathways, overruling the predictive value of the visual signal per se. Copyright © 2016 Elsevier B.V. All rights reserved.
A Further Evaluation of Picture Prompts during Auditory-Visual Conditional Discrimination Training
ERIC Educational Resources Information Center
Carp, Charlotte L.; Peterson, Sean P.; Arkel, Amber J.; Petursdottir, Anna I.; Ingvarsson, Einar T.
2012-01-01
This study was a systematic replication and extension of Fisher, Kodak, and Moore (2007), in which a picture prompt embedded into a least-to-most prompting sequence facilitated acquisition of auditory-visual conditional discriminations. Participants were 4 children who had been diagnosed with autism; 2 had limited prior receptive skills, and 2 had…
Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath
2018-05-24
Increasing visual perceptual load can reduce pre-attentive auditory cortical activity to sounds, a reflection of the limited and shared attentional resources for sensory processing across modalities. Here, we demonstrate that modulating visual perceptual load can impact the early sensory encoding of speech sounds, and that the impact of visual load is highly dependent on the predictability of the incoming speech stream. Participants (n = 20, 9 females) performed a visual search task of high (target similar to distractors) and low (target dissimilar to distractors) perceptual load, while early auditory electrophysiological responses were recorded to native speech sounds. Speech sounds were presented either in a 'repetitive context', or a less predictable 'variable context'. Independent of auditory stimulus context, pre-attentive auditory cortical activity was reduced during high visual load, relative to low visual load. We applied a data-driven machine learning approach to decode speech sounds from the early auditory electrophysiological responses. Decoding performance was found to be poorer under conditions of high (relative to low) visual load, when the incoming acoustic stream was predictable. When the auditory stimulus context was less predictable, decoding performance was substantially greater for the high (relative to low) visual load conditions. Our results provide support for shared attentional resources between visual and auditory modalities that substantially influence the early sensory encoding of speech signals in a context-dependent manner. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
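The decoding step described above can be sketched as a cross-validated classifier applied to single-trial responses. The abstract does not specify the classifier, so the nearest-centroid approach, the function name decode_accuracy, the feature dimensionality, and the synthetic "trials" below are all illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def decode_accuracy(X, y, n_folds=5):
    """Cross-validated nearest-centroid decoding of stimulus class from
    single-trial responses (a minimal stand-in for a data-driven decoder)."""
    order = rng.permutation(len(y))
    folds = np.array_split(order, n_folds)
    correct = 0
    for test_idx in folds:
        train = np.setdiff1d(order, test_idx)
        # Class centroids estimated from the training trials only.
        centroids = {c: X[train][y[train] == c].mean(axis=0)
                     for c in np.unique(y[train])}
        for i in test_idx:
            # Predict the class whose centroid is nearest to the trial.
            pred = min(centroids,
                       key=lambda c: np.linalg.norm(X[i] - centroids[c]))
            correct += pred == y[i]
    return correct / len(y)

# Synthetic "trials": two speech-sound classes with separable mean responses.
X = np.vstack([rng.normal(0.0, 1.0, (40, 16)),
               rng.normal(1.5, 1.0, (40, 16))])
y = np.repeat([0, 1], 40)
```

With well-separated class means, decode_accuracy(X, y) approaches ceiling; in the study, the analogous measure dropped or rose with visual load depending on stimulus predictability.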
Looming auditory collision warnings for driving.
Gray, Rob
2011-02-01
A driving simulator was used to compare the effectiveness of increasing intensity (looming) auditory warning signals with other types of auditory warnings. Auditory warnings have been shown to speed driver reaction time in rear-end collision situations; however, it is not clear which type of signal is the most effective. Although verbal and symbolic (e.g., a car horn) warnings have faster response times than abstract warnings, they often lead to more response errors. Participants (N=20) experienced four nonlooming auditory warnings (constant intensity, pulsed, ramped, and car horn), three looming auditory warnings ("veridical," "early," and "late"), and a no-warning condition. In 80% of the trials, warnings were activated when a critical response was required, and in 20% of the trials, the warnings were false alarms. For the early (late) looming warnings, the rate of change of intensity signaled a time to collision (TTC) that was shorter (longer) than the actual TTC. Veridical looming and car horn warnings had significantly faster brake reaction times (BRT) compared with the other nonlooming warnings (by 80 to 160 ms). However, the number of braking responses in false alarm conditions was significantly greater for the car horn. BRT increased significantly and systematically as the TTC signaled by the looming warning was changed from early to veridical to late. Looming auditory warnings produce the best combination of response speed and accuracy. The results indicate that looming auditory warnings can be used to effectively warn a driver about an impending collision.
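The relation between intensity rise and signaled TTC can be sketched as follows. The 1/(TTC - t) amplitude law (appropriate for a constant-velocity approach), the function name looming_envelope, and all parameter values are illustrative assumptions rather than the study's actual stimulus specification:

```python
import numpy as np

def looming_envelope(signaled_ttc, duration, sr=8000):
    """Amplitude envelope whose rate of intensity rise signals a time to
    collision: for a constant-velocity approach, received pressure
    amplitude grows roughly as 1/(TTC - t)."""
    t = np.arange(int(duration * sr)) / sr
    amp = 1.0 / (signaled_ttc - t)   # rises toward the (signaled) collision
    return amp / amp.max()           # normalise so the peak is 1.0

# An "early" warning signals a shorter TTC than the true one, so its
# intensity rises faster over the warning window than the veridical profile.
veridical = looming_envelope(signaled_ttc=3.0, duration=1.0)
early = looming_envelope(signaled_ttc=2.0, duration=1.0)
```

Multiplying a carrier tone by such an envelope yields a looming warning; steepening or flattening the rise is what produced the "early" and "late" conditions' biased TTC estimates.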
Effects of transient blur and VDT screen luminance changes on eyeblink rate.
Cardona, Genís; Gómez, Marcelo; Quevedo, Lluïsa; Gispets, Joan
2014-10-01
A study was designed to evaluate the efficacy of three different strategies aiming at increasing spontaneous eyeblink rate (SEBR) during computer use. A total of 12 subjects (5 female) with a mean age of 28.7 years were instructed to read a text presented on a computer display terminal for 15 min. Four reading sessions (reference and three "blinking events" [BE]) were programmed in which SEBR was digitally recorded. "Blinking events" were based on either a slight distortion of the text characters or on the presentation of a white screen instead of the text, with or without accompanying blinking instructions. All BE had a duration of 20 ms and occurred every 15 s. Participants graded the intrusiveness of each BE configuration, and the number of lines participants read in each session was recorded. Data from 11 subjects were analysed. A statistically significant difference in SEBR was found between the experimental configuration consisting of a white screen plus blinking instructions (7.8 blinks/min) and both reference (5.2 blinks/min; p=0.049) and white screen without blinking instructions (4.8 blinks/min; p=0.038). All three BE were rated as more intrusive than the reference condition, although the performance of participants (line count) was not compromised. The joint contribution of white screen and blinking instructions has been shown to result in a short term improvement in blinking rate in the present sample of non-dry eye computer users. Further work is necessary to improve the acceptance of any BE aiming at influencing SEBR. Copyright © 2014 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
Hao, Qiao; Ora, Hiroki; Ogawa, Ken-Ichiro; Ogata, Taiki; Miyake, Yoshihiro
2016-09-13
The simultaneous perception of multimodal sensory information has a crucial role for effective reactions to the external environment. Voluntary movements are known to occasionally affect simultaneous perception of auditory and tactile stimuli presented to the moving body part. However, little is known about spatial limits on the effect of voluntary movements on simultaneous perception, especially when tactile stimuli are presented to a non-moving body part. We examined the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli presented to the non-moving body part. We considered the possible mechanism using a temporal order judgement task under three experimental conditions: voluntary movement, where participants voluntarily moved their right index finger and judged the temporal order of auditory and tactile stimuli presented to their non-moving left index finger; passive movement; and no movement. During voluntary movement, the auditory stimulus needed to be presented before the tactile stimulus so that they were perceived as occurring simultaneously. This subjective simultaneity differed significantly from the passive movement and no movement conditions. This finding indicates that the effect of voluntary movement on simultaneous perception of auditory and tactile stimuli extends to the non-moving body part.
Cogné, Mélanie; Violleau, Marie-Hélène; Klinger, Evelyne; Joseph, Pierre-Alain
2018-01-31
Topographical disorientation is frequent among patients after a stroke and can be well explored with virtual environments (VEs). VEs also allow for the addition of stimuli. A previous study did not find any effect of non-contextual auditory stimuli on navigational performance in the virtual action planning-supermarket (VAP-S) simulating a medium-sized 3D supermarket. However, the perceptual or cognitive load of the sounds used was not high. We investigated how non-contextual auditory stimuli with high load affect navigational performance in the VAP-S for patients who have had a stroke and any correlation between this performance and dysexecutive disorders. Four kinds of stimuli were considered: sounds from living beings, sounds from supermarket objects, beeping sounds and names of other products that were not available in the VAP-S. The condition without auditory stimuli was the control. The Groupe de réflexion pour l'évaluation des fonctions exécutives (GREFEX) battery was used to evaluate executive functions of patients. The study included 40 patients who have had a stroke (n=22 right-hemisphere and n=18 left-hemisphere stroke). Patients' navigational performance was decreased under the 4 conditions with non-contextual auditory stimuli (P<0.05), especially for those with dysexecutive disorders. For the 5 conditions, the lower the performance, the more GREFEX tests were failed. Patients felt significantly disadvantaged by the non-contextual sounds (sounds from living beings, sounds from supermarket objects and names of other products) as compared with beeping sounds (P<0.01). Patients' verbal recall of the collected objects was significantly lower under the condition with names of other products (P<0.001). Left and right brain-damaged patients did not differ in navigational performance in the VAP-S under the 5 auditory conditions.
These non-contextual auditory stimuli could be used in neurorehabilitation paradigms to train patients with dysexecutive disorders to inhibit disruptive stimuli. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
Rieger, Kathryn; Rarra, Marie-Helene; Diaz Hernandez, Laura; Hubl, Daniela; Koenig, Thomas
2018-03-01
Auditory verbal hallucinations depend on a broad neurobiological network ranging from the auditory system to language as well as memory-related processes. As part of this, the auditory N100 event-related potential (ERP) component is attenuated in patients with schizophrenia, with stronger attenuation occurring during auditory verbal hallucinations. Changes in the N100 component presumably reflect disturbed responsiveness of the auditory system toward external stimuli in schizophrenia. With this premise, we investigated the therapeutic utility of neurofeedback training to modulate the auditory-evoked N100 component in patients with schizophrenia and associated auditory verbal hallucinations. Ten patients completed electroencephalography neurofeedback training for modulation of N100 (treatment condition) or another unrelated component, P200 (control condition). On a behavioral level, only the control group showed a tendency for symptom improvement in the Positive and Negative Syndrome Scale total score in a pre/post comparison (t(4) = 2.71, P = .054); however, no significant differences were found in specific hallucination-related symptoms (t(7) = -0.53, P = .62). There was no significant overall effect of neurofeedback training on ERP components in our paradigm; however, we were able to identify different learning patterns, and found a correlation between learning and improvement in auditory verbal hallucination symptoms across training sessions (r = 0.664, n = 9, P = .05). This effect results, with cautious interpretation due to the small sample size, primarily from the treatment group (r = 0.97, n = 4, P = .03). In particular, a within-session learning parameter showed utility for predicting symptom improvement with neurofeedback training. In conclusion, patients with schizophrenia and associated auditory verbal hallucinations who exhibit a learning pattern more characterized by within-session aptitude may benefit from electroencephalography neurofeedback.
Furthermore, independent of the training group, a significant spatial pre-post difference was found in the event-related component P200 (P = .04).
Magosso, Elisa; Bertini, Caterina; Cuppini, Cristiano; Ursino, Mauro
2016-10-01
Hemianopic patients retain some abilities to integrate audiovisual stimuli in the blind hemifield, showing both modulation of visual perception by auditory stimuli and modulation of auditory perception by visual stimuli. Indeed, conscious detection of a visual target in the blind hemifield can be improved by a spatially coincident auditory stimulus (auditory enhancement of visual detection), while a visual stimulus in the blind hemifield can improve localization of a spatially coincident auditory stimulus (visual enhancement of auditory localization). To gain more insight into the neural mechanisms underlying these two perceptual phenomena, we propose a neural network model including areas of neurons representing the retina, primary visual cortex (V1), extrastriate visual cortex, auditory cortex and the Superior Colliculus (SC). The visual and auditory modalities in the network interact via both direct cortical-cortical connections and subcortical-cortical connections involving the SC; the latter, in particular, integrates visual and auditory information and projects back to the cortices. Hemianopic patients were simulated by unilaterally lesioning V1, and preserving spared islands of V1 tissue within the lesion, to analyze the role of residual V1 neurons in mediating audiovisual integration. The network is able to reproduce the audiovisual phenomena in hemianopic patients, linking perceptions to neural activations, and disentangles the individual contribution of specific neural circuits and areas via sensitivity analyses. The study suggests i) a common key role of SC-cortical connections in mediating the two audiovisual phenomena; ii) a different role of visual cortices in the two phenomena: auditory enhancement of conscious visual detection being conditional on surviving V1 islands, while visual enhancement of auditory localization persisting even after complete V1 damage. 
The present study may contribute to advance understanding of the audiovisual dialogue between cortical and subcortical structures in healthy and unisensory deficit conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Decoding spectrotemporal features of overt and covert speech from the human cortex
Martin, Stéphanie; Brunner, Peter; Holdgraf, Chris; Heinze, Hans-Jochen; Crone, Nathan E.; Rieger, Jochem; Schalk, Gerwin; Knight, Robert T.; Pasley, Brian N.
2014-01-01
Auditory perception and auditory imagery have been shown to activate overlapping brain regions. We hypothesized that these phenomena also share a common underlying neural representation. To assess this, we used intracranial electrocorticography (ECoG) recordings from epileptic patients performing an out loud or a silent reading task. In these tasks, short stories scrolled across a video screen in two conditions: subjects read the same stories both aloud (overt) and silently (covert). In a control condition the subject remained in a resting state. We first built a high gamma (70–150 Hz) neural decoding model to reconstruct spectrotemporal auditory features of self-generated overt speech. We then evaluated whether this same model could reconstruct auditory speech features in the covert speech condition. Two speech models were tested: a spectrogram and a modulation-based feature space. For the overt condition, reconstruction accuracy was evaluated as the correlation between original and predicted speech features, and was significant in each subject (p < 10^-5; paired two-sample t-test). For the covert speech condition, dynamic time warping was first used to realign the covert speech reconstruction with the corresponding original speech from the overt condition. Reconstruction accuracy was then evaluated as the correlation between original and reconstructed speech features. Covert reconstruction accuracy was compared to the accuracy obtained from reconstructions in the baseline control condition. Reconstruction accuracy for the covert condition was significantly better than for the control condition (p < 0.005; paired two-sample t-test). The superior temporal gyrus and the pre- and postcentral gyri provided the most reconstruction information. The relationship between overt and covert speech reconstruction depended on anatomy.
These results provide evidence that auditory representations of covert speech can be reconstructed from models that are built from an overt speech data set, supporting a partially shared neural substrate. PMID:24904404
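The realign-then-correlate evaluation used for the covert condition can be illustrated with a toy dynamic time warping routine. The function names dtw_path and aligned_correlation are hypothetical, and this simplified 1-D alignment merely stands in for the study's multichannel spectrotemporal procedure:

```python
import numpy as np

def dtw_path(a, b):
    """Classic dynamic-time-warping alignment path between two 1-D
    feature sequences (absolute difference as the local cost)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    # Backtrack from the end to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def aligned_correlation(original, reconstruction):
    """Correlation between feature sequences after DTW realignment."""
    path = dtw_path(original, reconstruction)
    o = np.array([original[i] for i, _ in path])
    r = np.array([reconstruction[j] for _, j in path])
    return np.corrcoef(o, r)[0, 1]

# Example: a "reconstruction" that is a time-stretched version of the
# original still correlates highly once the two are realigned.
original = np.sin(np.linspace(0, np.pi, 50))
warped = np.sin(np.linspace(0, np.pi, 60))
```

The point of the realignment is that covert speech has no acoustic record to time-lock to, so temporal misalignment would otherwise depress the correlation measure.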
Daliri, Ayoub; Max, Ludo
2018-02-01
Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. 
Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre-speech modulation is not directly related to limited auditory-motor adaptation; and in AWS, DAF paradoxically tends to normalize their otherwise limited pre-speech auditory modulation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Shared and distinct factors driving attention and temporal processing across modalities
Berry, Anne S.; Li, Xu; Lin, Ziyong; Lustig, Cindy
2013-01-01
In addition to the classic finding that “sounds are judged longer than lights,” the timing of auditory stimuli is often more precise and accurate than is the timing of visual stimuli. In cognitive models of temporal processing, these modality differences are explained by positing that auditory stimuli more automatically capture and hold attention, more efficiently closing an attentional switch that allows the accumulation of pulses marking the passage of time (Block & Zakay, 1997; Meck, 1991; Penney, 2003). However, attention is a multifaceted construct, and there has been little attempt to determine which aspects of attention may be related to modality effects. We used visual and auditory versions of the Continuous Temporal Expectancy Task (CTET; O'Connell et al., 2009), a timing task previously linked to behavioral and electrophysiological measures of mind-wandering and attention lapses, and tested participants with or without the presence of a video distractor. Performance in the auditory condition was generally superior to that in the visual condition, replicating standard results in the timing literature. The auditory modality was also less affected by declines in sustained attention indexed by declines in performance over time. In contrast, distraction had an equivalent impact on performance in the two modalities. Analysis of individual differences in performance revealed further differences between the two modalities: Poor performance in the auditory condition was primarily related to boredom, whereas poor performance in the visual condition was primarily related to distractibility. These results suggest that: 1) challenges to different aspects of attention reveal both modality-specific and nonspecific effects on temporal processing, and 2) different factors drive individual differences when testing across modalities. PMID:23978664
Li, Yuanqing; Wang, Fangyi; Chen, Yongbin; Cichocki, Andrzej; Sejnowski, Terrence
2017-09-25
At cocktail parties, our brains often simultaneously receive visual and auditory information. Although the cocktail party problem has been widely investigated under auditory-only settings, the effects of audiovisual inputs have not. This study explored the effects of audiovisual inputs in a simulated cocktail party. In our fMRI experiment, each congruent audiovisual stimulus was a synthesis of 2 facial movie clips, each of which could be classified into 1 of 2 emotion categories (crying and laughing). Visual-only (faces) and auditory-only stimuli (voices) were created by extracting the visual and auditory contents from the synthesized audiovisual stimuli. Subjects were instructed to selectively attend to 1 of the 2 objects contained in each stimulus and to judge its emotion category in the visual-only, auditory-only, and audiovisual conditions. The neural representations of the emotion features were assessed by calculating decoding accuracy and brain pattern-related reproducibility index based on the fMRI data. We compared the audiovisual condition with the visual-only and auditory-only conditions and found that audiovisual inputs enhanced the neural representations of emotion features of the attended objects instead of the unattended objects. This enhancement might partially explain the benefits of audiovisual inputs for the brain to solve the cocktail party problem. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Bood, Robert Jan; Nijssen, Marijn; van der Kamp, John; Roerdink, Melvyn
2013-01-01
Acoustic stimuli, like music and metronomes, are often used in sports. Adjusting movement tempo to acoustic stimuli (i.e., auditory-motor synchronization) may be beneficial for sports performance. However, music also possesses motivational qualities that may further enhance performance. Our objective was to examine the relative effects of auditory-motor synchronization and the motivational impact of acoustic stimuli on running performance. To this end, 19 participants ran to exhaustion on a treadmill in 1) a control condition without acoustic stimuli, 2) a metronome condition with a sequence of beeps matching participants' cadence (synchronization), and 3) a music condition with synchronous motivational music matched to participants' cadence (synchronization+motivation). Conditions were counterbalanced and measurements were taken on separate days. As expected, time to exhaustion was significantly longer with acoustic stimuli than without. Unexpectedly, however, time to exhaustion did not differ between metronome and motivational music conditions, despite differences in motivational quality. Motivational music slightly reduced perceived exertion of sub-maximal running intensity and heart rates of (near-)maximal running intensity. The beat of the stimuli -which was most salient during the metronome condition- helped runners to maintain a consistent pace by coupling cadence to the prescribed tempo. Thus, acoustic stimuli may have enhanced running performance because runners worked harder as a result of motivational aspects (most pronounced with motivational music) and more efficiently as a result of auditory-motor synchronization (most notable with metronome beeps). 
These findings imply that running to motivational music with a very prominent and consistent beat matched to the runner's cadence will likely yield optimal effects because it helps to elevate physiological effort at a high perceived exertion, whereas the consistent and correct cadence induced by auditory-motor synchronization helps to optimize running economy.
Air traffic controllers' long-term speech-in-noise training effects: A control group study.
Zaballos, Maria T P; Plasencia, Daniel P; González, María L Z; de Miguel, Angel R; Macías, Ángel R
2016-01-01
Speech perception in noise relies on the capacity of the auditory system to process complex sounds using sensory and cognitive skills. The possibility that these can be trained during adulthood is of special interest in auditory disorders, where speech in noise perception becomes compromised. Air traffic controllers (ATC) are constantly exposed to radio communication, a situation that seems to produce auditory learning. The objective of this study has been to quantify this effect. 19 ATC and 19 normal hearing individuals underwent a speech in noise test with three signal to noise ratios: 5, 0 and -5 dB. Noise and speech were presented through two different loudspeakers in azimuth position. Speech tokens were presented at 65 dB SPL, while white noise files were presented at 60, 65 and 70 dB SPL, respectively. Air traffic controllers outperformed the control group in all conditions [P<0.05 in ANOVA and Mann-Whitney U tests]. Group differences were largest in the most difficult condition, SNR=-5 dB. However, no correlation between experience and performance was found for any of the conditions tested. The reason might be that ceiling performance is achieved much faster than the minimum experience time recorded, 5 years, although intrinsic cognitive abilities cannot be disregarded. ATC demonstrated enhanced ability to hear speech in challenging listening environments. This study provides evidence that long-term auditory training is indeed useful in achieving better speech-in-noise understanding even in adverse conditions, although good cognitive qualities are likely to be a basic requirement for this training to be effective.
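The three listening conditions follow directly from the level difference between speech and noise, since SNR in dB is the signal level minus the noise level; the function name below is illustrative:

```python
def snr_db(speech_level_db, noise_level_db):
    """Signal-to-noise ratio in dB is the level difference in dB SPL."""
    return speech_level_db - noise_level_db

# Speech fixed at 65 dB SPL; noise at 60, 65 and 70 dB SPL yields the
# study's three conditions: +5, 0 and -5 dB SNR.
conditions = [snr_db(65, n) for n in (60, 65, 70)]  # → [5, 0, -5]
```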
Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers.
Hoppe, Christian; Splittstößer, Christoph; Fliessbach, Klaus; Trautner, Peter; Elger, Christian E; Weber, Bernd
2014-11-01
In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e. musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences were comprised of six elements, white squares or tones, which were low, middle, or high regarding vertical screen position or pitch, respectively (presentation duration: 1.5s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high "notes" indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite identity of applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas which exceeded auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. 
The auditory brain showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information, and sensory brain activation mirrored expectation rather than stimulation. Silent music reading probably relies on these basic neurocognitive mechanisms. Copyright © 2014 Elsevier Inc. All rights reserved.
Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu
2015-01-01
Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828
Ito, Masanori; Kado, Naoki; Suzuki, Toshiaki; Ando, Hiroshi
2013-01-01
[Purpose] The purpose of this study was to investigate the influence of external pacing with periodic auditory stimuli on the control of periodic movement. [Subjects and Methods] Eighteen healthy subjects performed self-paced, synchronization-continuation, and syncopation-continuation tapping. Inter-onset intervals were 1,000, 2,000 and 5,000 ms. The variability of inter-tap intervals was compared between the different pacing conditions and between self-paced tapping and each continuation phase. [Results] There were no significant differences in the mean and standard deviation of the inter-tap interval between pacing conditions. For the 1,000 and 5,000 ms tasks, there were significant differences in the mean inter-tap interval following auditory pacing compared with self-pacing. For the 2,000 ms syncopation condition and 5,000 ms task, there were significant differences from self-pacing in the standard deviation of the inter-tap interval following auditory pacing. [Conclusion] These results suggest that the accuracy of periodic movement with intervals of 1,000 and 5,000 ms can be improved by the use of auditory pacing. However, the consistency of periodic movement is mainly dependent on the inherent skill of the individual; thus, improvement of consistency based on pacing is unlikely. PMID:24259932
Brébion, Gildas; Stephan-Otto, Christian; Usall, Judith; Huerta-Ramos, Elena; Perez del Olmo, Mireia; Cuevas-Esteban, Jorge; Haro, Josep Maria; Ochoa, Susana
2015-09-01
A number of cognitive underpinnings of auditory hallucinations have been established in schizophrenia patients, but few have, as yet, been uncovered for visual hallucinations. In previous research, we unexpectedly observed that auditory hallucinations were associated with poor recognition of color, but not black-and-white (b/w), pictures. In this study, we attempted to replicate and explain this finding. Potential associations with visual hallucinations were explored. B/w and color pictures were presented to 50 schizophrenia patients and 45 healthy individuals under 2 conditions of visual context presentation corresponding to 2 levels of visual encoding complexity. Then, participants had to recognize the target pictures among distractors. Auditory-verbal hallucinations were inversely associated with the recognition of the color pictures presented under the most effortful encoding condition. This association was fully mediated by working-memory span. Visual hallucinations were associated with improved recognition of the color pictures presented under the less effortful condition. Patients suffering from visual hallucinations were not impaired, relative to the healthy participants, in the recognition of these pictures. Decreased working-memory span in patients with auditory-verbal hallucinations might impede the effortful encoding of stimuli. Visual hallucinations might be associated with facilitation in the visual encoding of natural scenes, or with enhanced color perception abilities. (c) 2015 APA, all rights reserved.
Haley, Katarina L.
2015-01-01
Purpose To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking with noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not expected to improve speech fluency. Method Ten participants with APH/AOS and 10 neurologically healthy (NH) participants were studied under both feedback conditions. To allow examination of individual responses, we used an ABACA design. Effects were examined on syllable rate, disfluency duration, and vocal intensity. Results Seven of 10 APH/AOS participants increased fluency with masking by increasing rate, decreasing disfluency duration, or both. In contrast, none of the NH participants increased speaking rate with MAF. In the AAF condition, only 1 APH/AOS participant increased fluency. Four APH/AOS participants and 8 NH participants slowed their rate with AAF. Conclusions Speaking with MAF appears to increase fluency in a subset of individuals with APH/AOS, indicating that overreliance on auditory feedback monitoring may contribute to their disorder presentation. The distinction between responders and nonresponders was not linked to AOS diagnosis, so additional work is needed to develop hypotheses for candidacy and underlying control mechanisms. PMID:26363508
An investigation of the auditory perception of western lowland gorillas in an enrichment study.
Brooker, Jake S
2016-09-01
Previous research has highlighted the varied effects of auditory enrichment on different captive animals. This study investigated how manipulating musical components can influence the behavior of a group of captive western lowland gorillas (Gorilla gorilla gorilla) at Bristol Zoo. The gorillas were observed during exposure to classical music, rock-and-roll music, and rainforest sounds. The two music conditions were modified to create five further conditions: unmanipulated, decreased pitch, increased pitch, decreased tempo, and increased tempo. We compared the prevalence of activity, anxiety, and social behaviors between the standard conditions. We also compared the prevalence of each of these behaviors across the manipulated conditions of each type of music independently and collectively. Control observations with no sound exposure were regularly scheduled between the observations of the 12 auditory conditions. The results suggest that naturalistic rainforest sounds had no influence on the anxiety of captive gorillas, contrary to past research. The tempo of music appears to be significantly associated with activity levels among this group, and social behavior may be affected by pitch. Low tempo music also may be effective at reducing anxiety behavior in captive gorillas. Regulated auditory enrichment may provide effective means of calming gorillas, or for facilitating active behavior. Zoo Biol. 35:398-408, 2016. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.
Towards a truly mobile auditory brain-computer interface: exploring the P300 to take away.
De Vos, Maarten; Gandras, Katharina; Debener, Stefan
2014-01-01
In a previous study we presented a low-cost, small, and wireless 14-channel EEG system suitable for field recordings (Debener et al., 2012, Psychophysiology). In the present follow-up study we investigated whether a single-trial P300 response can be reliably measured with this system, while subjects freely walk outdoors. Twenty healthy participants performed a three-class auditory oddball task, which included rare target and non-target distractor stimuli presented with equal probabilities of 16%. Data were recorded in a seated (control condition) and in a walking condition, both of which were realized outdoors. A significantly larger P300 event-related potential amplitude was evident for targets compared to distractors (p<.001), but no significant interaction with recording condition emerged. P300 single-trial analysis was performed with regularized stepwise linear discriminant analysis and revealed above chance-level classification accuracies for most participants (19 out of 20 for the seated, 16 out of 20 for the walking condition), with mean classification accuracies of 71% (seated) and 64% (walking). Moreover, the resulting information transfer rates for the seated and walking conditions were comparable to a recently published laboratory auditory brain-computer interface (BCI) study. This leads us to conclude that a truly mobile auditory BCI system is feasible. © 2013.
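The classification step can be illustrated in outline. The study used regularized stepwise linear discriminant analysis; the sketch below substitutes scikit-learn's shrinkage-regularized LDA on synthetic single-trial features (all data and parameters are invented for illustration, not the authors' pipeline):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 200, 32  # e.g. channels x time windows, flattened

# Synthetic single-trial features: rare targets carry a shifted mean (a toy "P300")
X_target = rng.normal(0.5, 1.0, (n_trials // 2, n_features))
X_distr = rng.normal(0.0, 1.0, (n_trials // 2, n_features))
X = np.vstack([X_target, X_distr])
y = np.array([1] * (n_trials // 2) + [0] * (n_trials // 2))

# Shrinkage regularization stabilizes the covariance estimate when
# the feature count is large relative to the number of trials
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"mean CV accuracy: {acc:.2f}")  # well above the 50% chance level
```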
2006-08-01
Fragmentary record: table-of-contents and figure-list residue from a report evaluating auditory cue conditions with the National Aeronautics and Space Administration (NASA) Task Load Index (TLX). Recoverable items include a SITREP questionnaire example, demographic and post-test questionnaires, and mean ratings of physical and temporal demand by cue condition.
ERIC Educational Resources Information Center
Dorman, Michael F.; Liss, Julie; Wang, Shuai; Berisha, Visar; Ludwig, Cimarron; Natale, Sarah Cook
2016-01-01
Purpose: Five experiments probed auditory-visual (AV) understanding of sentences by users of cochlear implants (CIs). Method: Sentence material was presented in auditory (A), visual (V), and AV test conditions to listeners with normal hearing and CI users. Results: (a) Most CI users report that most of the time, they have access to both A and V…
ERIC Educational Resources Information Center
Timmons, Beverly A.; Boudreau, James P.
Reported are five studies on the use of delayed auditory feedback (DAF) with stutterers. The first study indicates that sex differences and age differences in temporal reaction were found when subjects (5-, 7-, 9-, 11-, and 13-years-old) recited a nursery rhyme under DAF and NAF (normal auditory feedback) conditions. The second study is reported…
Odors Bias Time Perception in Visual and Auditory Modalities
Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang
2016-01-01
Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, with a larger effect size in the auditory modality. In summary, the present results indicate that odors had differential impacts on reproduced time durations, constrained by sensory modality, the valence of the emotional events, and the target durations.
Biases in time perception could be accounted for by a framework of attentional deployment between the inducers (odors) and emotionally neutral stimuli (visual dots and sound beeps). PMID:27148143
Electrophysiological Evidence for the Sources of the Masking Level Difference.
Fowler, Cynthia G
2017-08-16
The purpose of this review article is to review evidence from auditory evoked potential studies to describe the contributions of the auditory brainstem and cortex to the generation of the masking level difference (MLD). A literature review was performed, focusing on the auditory brainstem, middle, and late latency responses used in protocols similar to those used to generate the behavioral MLD. Temporal coding of the signals necessary for generating the MLD occurs in the auditory periphery and brainstem. Brainstem disorders up to wave III of the auditory brainstem response (ABR) can disrupt the MLD. The full MLD requires input to the generators of the auditory late latency potentials to produce all characteristics of the MLD; these characteristics include threshold differences for various binaural signal and noise conditions. Studies using central auditory lesions are beginning to identify the cortical effects on the MLD. The MLD requires auditory processing from the periphery to cortical areas. A healthy auditory periphery and brainstem codes temporal synchrony, which is essential for the ABR. Threshold differences require engaging cortical function beyond the primary auditory cortex. More studies using cortical lesions and evoked potentials or imaging should clarify the specific cortical areas involved in the MLD.
Morgan, Simeon J; Paolini, Antonio G
2012-06-06
Acute animal preparations have been used in research prospectively investigating electrode designs and stimulation techniques for integration into neural auditory prostheses, such as auditory brainstem implants and auditory midbrain implants. While acute experiments can give initial insight to the effectiveness of the implant, testing the chronically implanted and awake animals provides the advantage of examining the psychophysical properties of the sensations induced using implanted devices. Several techniques such as reward-based operant conditioning, conditioned avoidance, or classical fear conditioning have been used to provide behavioral confirmation of detection of a relevant stimulus attribute. Selection of a technique involves balancing aspects including time efficiency (often poor in reward-based approaches), the ability to test a plurality of stimulus attributes simultaneously (limited in conditioned avoidance), and measure reliability of repeated stimuli (a potential constraint when physiological measures are employed). Here, a classical fear conditioning behavioral method is presented which may be used to simultaneously test both detection of a stimulus, and discrimination between two stimuli. Heart-rate is used as a measure of fear response, which reduces or eliminates the requirement for time-consuming video coding for freeze behaviour or other such measures (although such measures could be included to provide convergent evidence). Animals were conditioned using these techniques in three 2-hour conditioning sessions, each providing 48 stimulus trials. Subsequent 48-trial testing sessions were then used to test for detection of each stimulus in presented pairs, and test discrimination between the member stimuli of each pair. This behavioral method is presented in the context of its utilisation in auditory prosthetic research. The implantation of electrocardiogram telemetry devices is shown. 
Subsequent implantation of brain electrodes into the Cochlear Nucleus, guided by the monitoring of neural responses to acoustic stimuli, and the fixation of the electrode into place for chronic use is likewise shown.
Effects of auditory and visual modalities in recall of words.
Gadzella, B M; Whitehead, D A
1975-02-01
Ten experimental conditions were used to study the effects of auditory and visual (printed words, uncolored and colored pictures) modalities and their various combinations with college students. A recall paradigm was employed in which subjects responded in a written test. Analysis of data showed the auditory modality was superior to visual (pictures) ones but was not significantly different from visual (printed words) modality. In visual modalities, printed words were superior to colored pictures. Generally, conditions with multiple modes of representation of stimuli were significantly higher than for conditions with single modes. Multiple modalities, consisting of two or three modes, did not differ significantly from each other. It was concluded that any two modalities of the stimuli presented simultaneously were just as effective as three in recall of stimulus words.
Murphy, Kathleen M; Saunders, Muriel D; Saunders, Richard R; Olswang, Lesley B
2004-01-01
The effects of different types and amounts of environmental stimuli (visual and auditory) on microswitch use and behavioral states of three individuals with profound multiple impairments were examined. The individual's switch use and behavioral states were measured under three setting conditions: natural stimuli (typical visual and auditory stimuli in a recreational situation), reduced visual stimuli, and reduced visual and auditory stimuli. Results demonstrated differential switch use in all participants with the varying environmental setting conditions. No consistent effects were observed in behavioral state related to environmental condition. Predominant behavioral state scores and switch use did not systematically covary with any participant. Results suggest the importance of considering environmental stimuli in relationship to switch use when working with individuals with profound multiple impairments.
Development of kinesthetic-motor and auditory-motor representations in school-aged children.
Kagerer, Florian A; Clark, Jane E
2015-07-01
In two experiments using a center-out task, we investigated kinesthetic-motor and auditory-motor integrations in 5- to 12-year-old children and young adults. In experiment 1, participants moved a pen on a digitizing tablet from a starting position to one of three targets (visuo-motor condition), and then to one of four targets without visual feedback of the movement. In both conditions, we found that with increasing age, the children moved faster and straighter, and became less variable in their feedforward control. Higher control demands for movements toward the contralateral side were reflected in longer movement times and decreased spatial accuracy across all age groups. When feedforward control relies predominantly on kinesthesia, 7- to 10-year-old children were more variable, indicating difficulties in switching between feedforward and feedback control efficiently during that age. An inverse age progression was found for directional endpoint error; larger errors increasing with age likely reflect stronger functional lateralization for the dominant hand. In experiment 2, the same visuo-motor condition was followed by an auditory-motor condition in which participants had to move to acoustic targets (either white band or one-third octave noise). Since in the latter directional cues come exclusively from transcallosally mediated interaural time differences, we hypothesized that auditory-motor representations would show age effects. The results did not show a clear age effect, suggesting that corpus callosum functionality is sufficient in children to allow them to form accurate auditory-motor maps already at a young age.
The Contribution of Brainstem and Cerebellar Pathways to Auditory Recognition
McLachlan, Neil M.; Wilson, Sarah J.
2017-01-01
The cerebellum has been known to play an important role in motor functions for many years. More recently its role has been expanded to include a range of cognitive and sensory-motor processes, and substantial neuroimaging and clinical evidence now points to cerebellar involvement in most auditory processing tasks. In particular, an increase in the size of the cerebellum over recent human evolution has been attributed in part to the development of speech. Despite this, the auditory cognition literature has largely overlooked afferent auditory connections to the cerebellum that have been implicated in acoustically conditioned reflexes in animals, and could subserve speech and other auditory processing in humans. This review expands our understanding of auditory processing by incorporating cerebellar pathways into the anatomy and functions of the human auditory system. We reason that plasticity in the cerebellar pathways underpins implicit learning of the spectrotemporal information necessary for sound and speech recognition. Once learnt, this information supports automatic recognition of incoming auditory signals and prediction of likely subsequent information based on previous experience. Since sound recognition processes involving the brainstem and cerebellum initiate early in auditory processing, learnt information stored in cerebellar memory templates could then support a range of auditory processing functions such as streaming, habituation, the integration of auditory feature information such as pitch, and the recognition of vocal communications. PMID:28373850
Binaural beats increase interhemispheric alpha-band coherence between auditory cortices.
Solcà, Marco; Mottaz, Anaïs; Guggisberg, Adrian G
2016-02-01
Binaural beats (BBs) are an auditory illusion occurring when two tones of slightly different frequency are presented separately to each ear. BBs have been suggested to alter physiological and cognitive processes through synchronization of the brain hemispheres. To test this, we recorded electroencephalograms (EEG) at rest and while participants listened to BBs or a monaural control condition during which both tones were presented to both ears. We calculated for each condition the interhemispheric coherence, which expresses the synchrony between neural oscillations of the two hemispheres. Compared to monaural beats and resting state, BBs enhanced interhemispheric coherence between the auditory cortices. Beat frequencies in the alpha (10 Hz) and theta (4 Hz) frequency range both increased interhemispheric coherence selectively at alpha frequencies. In a second experiment, we evaluated whether this coherence increase has a behavioral aftereffect on binaural listening. No effects were observed in a dichotic digit task performed immediately after BBs presentation. Our results suggest that BBs enhance alpha-band oscillation synchrony between the auditory cortices during auditory stimulation. This effect seems to reflect binaural integration rather than entrainment. Copyright © 2015 Elsevier B.V. All rights reserved.
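Magnitude-squared coherence between two channels can be sketched with SciPy; here two synthetic "hemisphere" signals share a 10 Hz component, so coherence peaks in the alpha band (a toy illustration with invented parameters, not the authors' EEG analysis):

```python
import numpy as np
from scipy.signal import coherence

fs = 250.0                       # sampling rate (Hz), illustrative
t = np.arange(0, 20, 1 / fs)     # 20 s of data
rng = np.random.default_rng(1)

# Two channels sharing a 10 Hz component plus independent noise
shared = np.sin(2 * np.pi * 10 * t)
left = shared + rng.normal(0, 1, t.size)
right = shared + rng.normal(0, 1, t.size)

# Welch-averaged magnitude-squared coherence: 1 = perfectly synchronous,
# 0 = unrelated, at each frequency bin
f, Cxy = coherence(left, right, fs=fs, nperseg=512)
alpha = Cxy[np.argmin(np.abs(f - 10))]  # coherence in the bin nearest 10 Hz
print(f"coherence near 10 Hz: {alpha:.2f}")  # high at the shared component
```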
Assessment of auditory impression of the coolness and warmness of automotive HVAC noise.
Nakagawa, Seiji; Hotehama, Takuya; Kamiya, Masaru
2017-07-01
Noise induced by a heating, ventilation and air conditioning (HVAC) system in a vehicle is an important factor that affects the comfort of the interior of a car cabin. Much effort has been devoted to reducing noise levels; however, there is a need for a new sound design that addresses the noise problem from a different point of view. In this study, focusing on the auditory impression of automotive HVAC noise concerning coolness and warmness, psychoacoustical listening tests were performed using a paired comparison technique under various conditions of room temperature. Five stimuli were synthesized by stretching the spectral envelopes of recorded automotive HVAC noise to assess the effect of the spectral centroid, and were presented to normal-hearing subjects. Results show that the spectral centroid significantly affects the auditory impression concerning coolness and warmness; a higher spectral centroid induces a cooler auditory impression regardless of the room temperature.
NASA Astrophysics Data System (ADS)
Bechara, Antoine; Tranel, Daniel; Damasio, Hanna; Adolphs, Ralph; Rockland, Charles; Damasio, Antonio R.
1995-08-01
A patient with selective bilateral damage to the amygdala did not acquire conditioned autonomic responses to visual or auditory stimuli but did acquire the declarative facts about which visual or auditory stimuli were paired with the unconditioned stimulus. By contrast, a patient with selective bilateral damage to the hippocampus failed to acquire the facts but did acquire the conditioning. Finally, a patient with bilateral damage to both amygdala and hippocampal formation acquired neither the conditioning nor the facts. These findings demonstrate a double dissociation of conditioning and declarative knowledge relative to the human amygdala and hippocampus.
The effects of context and musical training on auditory temporal-interval discrimination.
Banai, Karen; Fisher, Shirley; Ganot, Ron
2012-02-01
Nonsensory factors such as stimulus context and musical experience are known to influence auditory frequency discrimination, but whether the context effect extends to auditory temporal processing remains unknown. Whether individual experiences such as musical training alter the context effect is also unknown. The goal of the present study was therefore to investigate the effects of stimulus context and musical experience on auditory temporal-interval discrimination. In experiment 1, temporal-interval discrimination was compared between fixed-context conditions, in which a single base temporal interval was presented repeatedly across all trials, and variable-context conditions, in which one of two base intervals was randomly presented on each trial. Discrimination was significantly better in the fixed than in the variable context conditions. In experiment 2, temporal discrimination thresholds of musicians and non-musicians were compared across three conditions: a fixed-context condition in which the target interval was presented repeatedly across trials, and two variable-context conditions differing in the frequencies used for the tones marking the temporal intervals. Musicians outperformed non-musicians on all three conditions, but the effects of context were similar for the two groups. Overall, it appears that, like frequency discrimination, temporal-interval discrimination benefits from having a fixed reference. Musical experience, while improving performance, did not alter the context effect, suggesting that improved discrimination skills among musicians are probably not an outcome of more sensitive contextual facilitation or predictive coding mechanisms. Copyright © 2011 Elsevier B.V. All rights reserved.
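The abstract does not state how discrimination thresholds were measured; a common choice is an adaptive staircase. A sketch of a 2-down-1-up rule (which converges near 70.7% correct) run against a simulated observer, with all numbers invented for illustration:

```python
import random

random.seed(0)

TRUE_JND = 15.0  # the simulated observer's difference limen, in ms

def observer_correct(delta_ms: float) -> bool:
    """Toy observer: probability of a correct response grows with the interval difference."""
    p = 0.5 + 0.5 * min(delta_ms / (2 * TRUE_JND), 1.0)
    return random.random() < p

def staircase(start_delta=40.0, step=2.0, n_trials=120):
    """2-down-1-up: shrink delta after two consecutive correct responses, grow after an error."""
    delta, streak, track = start_delta, 0, []
    for _ in range(n_trials):
        track.append(delta)
        if observer_correct(delta):
            streak += 1
            if streak == 2:
                delta, streak = max(delta - step, step), 0
        else:
            delta, streak = delta + step, 0
    # Threshold estimate: mean delta over the final trials, after convergence
    return sum(track[-40:]) / 40

threshold = staircase()
print(f"estimated threshold: {threshold:.1f} ms")
```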
Abikoff, H; Courtney, M E; Szeibel, P J; Koplewicz, H S
1996-05-01
This study evaluated the impact of extra-task stimulation on the academic task performance of children with attention-deficit/hyperactivity disorder (ADHD). Twenty boys with ADHD and 20 nondisabled boys worked on an arithmetic task during high stimulation (music), low stimulation (speech), and no stimulation (silence). The music "distractors" were individualized for each child, and the arithmetic problems were at each child's ability level. A significant Group x Condition interaction was found for number of correct answers. Specifically, the nondisabled youngsters performed similarly under all three auditory conditions. In contrast, the children with ADHD did significantly better under the music condition than speech or silence conditions. However, a significant Group x Order interaction indicated that arithmetic performance was enhanced only for those children with ADHD who received music as the first condition. The facilitative effects of salient auditory stimulation on the arithmetic performance of the children with ADHD provide some support for the underarousal/optimal stimulation theory of ADHD.
Ocean acidification erodes crucial auditory behaviour in a marine fish.
Simpson, Stephen D; Munday, Philip L; Wittenrich, Matthew L; Manassa, Rachel; Dixson, Danielle L; Gagliano, Monica; Yan, Hong Y
2011-12-23
Ocean acidification is predicted to affect marine ecosystems in many ways, including modification of fish behaviour. Previous studies have identified effects of CO(2)-enriched conditions on the sensory behaviour of fishes, including the loss of natural responses to odours resulting in ecologically deleterious decisions. Many fishes also rely on hearing for orientation, habitat selection, predator avoidance and communication. We used an auditory choice chamber to study the influence of CO(2)-enriched conditions on directional responses of juvenile clownfish (Amphiprion percula) to daytime reef noise. Rearing and test conditions were based on Intergovernmental Panel on Climate Change predictions for the twenty-first century: current-day ambient, 600, 700 and 900 µatm pCO(2). Juveniles from ambient CO(2)-conditions significantly avoided the reef noise, as expected, but this behaviour was absent in juveniles from CO(2)-enriched conditions. This study provides, to our knowledge, the first evidence that ocean acidification affects the auditory response of fishes, with potentially detrimental impacts on early survival.
Text as a Supplement to Speech in Young and Older Adults
Krull, Vidya; Humes, Larry E.
2015-01-01
Objective The purpose of this experiment was to quantify the contribution of visual text to auditory speech recognition in background noise. Specifically, we tested the hypothesis that partially accurate visual text from an automatic speech recognizer could be used successfully to supplement speech understanding in difficult listening conditions in older adults, with normal or impaired hearing. Our working hypotheses were based on what is known regarding audiovisual speech perception in the elderly from the speechreading literature. We hypothesized that: 1) combining auditory and visual text information will result in improved recognition accuracy compared to auditory or visual text information alone; 2) benefit from supplementing speech with visual text (auditory and visual enhancement) in young adults will be greater than that in older adults; and 3) individual differences in performance on perceptual measures would be associated with cognitive abilities. Design Fifteen young adults with normal hearing, fifteen older adults with normal hearing, and fifteen older adults with hearing loss participated in this study. All participants completed sentence recognition tasks in auditory-only, text-only, and combined auditory-text conditions. The auditory sentence stimuli were spectrally shaped to restore audibility for the older participants with impaired hearing. All participants also completed various cognitive measures, including measures of working memory, processing speed, verbal comprehension, perceptual and cognitive speed, processing efficiency, inhibition, and the ability to form wholes from parts. Group effects were examined for each of the perceptual and cognitive measures. Audiovisual benefit was calculated relative to performance on the auditory-only and visual-text-only conditions. Finally, the relationship between the perceptual measures and other independent measures was examined using principal-component factor analyses, followed by regression analyses.
Results Both young and older adults performed similarly on nine out of ten perceptual measures (auditory, visual, and combined measures). Combining degraded speech with partially correct text from an automatic speech recognizer improved the understanding of speech in both young and older adults, relative to both auditory- and text-only performance. In all subjects, cognition emerged as a key predictor for a general speech-text integration ability. Conclusions These results suggest that neither age nor hearing loss affected the ability of subjects to benefit from text when used to support speech, after ensuring audibility through spectral shaping. These results also suggest that the benefit obtained by supplementing auditory input with partially accurate text is modulated by cognitive ability, specifically lexical and verbal skills. PMID:26458131
Mochida, Takemi; Gomi, Hiroaki; Kashino, Makio
2010-11-08
There has been plentiful evidence of kinesthetically induced rapid compensation for unanticipated perturbations in speech articulatory movements. However, the role of auditory information in stabilizing articulation has been little studied, except for the control of voice fundamental frequency, voice amplitude, and vowel formant frequencies. Although the influence of auditory information on the articulatory control process is evident in unintended speech errors caused by delayed auditory feedback, the direct and immediate effect of auditory alteration on the movements of articulators has not been clarified. This work examined whether temporal changes in the auditory feedback of bilabial plosives immediately affect the subsequent lip movement. We conducted experiments with an auditory feedback alteration system that enabled us to replace or block speech sounds in real time. Participants were asked to produce the syllable /pa/ repeatedly at a constant rate. During the repetition, normal auditory feedback was interrupted, and one of three pre-recorded syllables /pa/, /Φa/, or /pi/, spoken by the same participant, was presented once at a different timing from the anticipated production onset, while no feedback was presented for subsequent repetitions. Comparisons of the labial distance trajectories under altered and normal feedback conditions indicated that the movement quickened during the short period immediately after the alteration onset, when /pa/ was presented 50 ms before the expected timing. Such a change was not significant under the other feedback conditions we tested. The earlier articulation rapidly induced by the progressive auditory input suggests that a compensatory mechanism helps to maintain a constant speech rate by detecting errors between the internally predicted and actually provided auditory information associated with self movement.
The timing- and context-dependent effects of feedback alteration suggest that the sensory error detection works in a temporally asymmetric window where acoustic features of the syllable to be produced may be coded.
Aizenberg, Mark; Mwilambwe-Tshilobo, Laetitia; Briguglio, John J.; Natan, Ryan G.; Geffen, Maria N.
2015-01-01
The ability to discriminate tones of different frequencies is fundamentally important for everyday hearing. While neurons in the primary auditory cortex (AC) respond differentially to tones of different frequencies, whether and how AC regulates auditory behaviors that rely on frequency discrimination remains poorly understood. Here, we find that the level of activity of inhibitory neurons in AC controls frequency specificity in innate and learned auditory behaviors that rely on frequency discrimination. Photoactivation of parvalbumin-positive interneurons (PVs) improved the ability of the mouse to detect a shift in tone frequency, whereas photosuppression of PVs impaired the performance. Furthermore, photosuppression of PVs during discriminative auditory fear conditioning increased generalization of conditioned response across tone frequencies, whereas PV photoactivation preserved normal specificity of learning. The observed changes in behavioral performance were correlated with bidirectional changes in the magnitude of tone-evoked responses, consistent with predictions of a model of a coupled excitatory-inhibitory cortical network. Direct photoactivation of excitatory neurons, which did not change tone-evoked response magnitude, did not affect behavioral performance in either task. Our results identify a new function for inhibition in the auditory cortex, demonstrating that it can improve or impair acuity of innate and learned auditory behaviors that rely on frequency discrimination. PMID:26629746
Characterizing the roles of alpha and theta oscillations in multisensory attention.
Keller, Arielle S; Payne, Lisa; Sekuler, Robert
2017-05-01
Cortical alpha oscillations (8-13 Hz) appear to play a role in suppressing distractions when just one sensory modality is being attended, but do they also contribute when attention is distributed over multiple sensory modalities? For an answer, we examined cortical oscillations in human subjects who were dividing attention between auditory and visual sequences. In Experiment 1, subjects performed an oddball task with auditory, visual, or simultaneous audiovisual sequences in separate blocks, while the electroencephalogram was recorded using high-density scalp electrodes. Alpha oscillations were present continuously over posterior regions while subjects were attending to auditory sequences. This supports the idea that the brain suppresses processing of visual input in order to advantage auditory processing. During a divided-attention audiovisual condition, an oddball (a rare, unusual stimulus) occurred in either the auditory or the visual domain, requiring that attention be divided between the two modalities. Fronto-central theta band (4-7 Hz) activity was strongest in this audiovisual condition, when subjects monitored auditory and visual sequences simultaneously. Theta oscillations have been associated with both attention and with short-term memory. Experiment 2 sought to distinguish these possible roles of fronto-central theta activity during multisensory divided attention. Using a modified version of the oddball task from Experiment 1, Experiment 2 showed that differences in theta power among conditions were independent of short-term memory load. Ruling out theta's association with short-term memory, we conclude that fronto-central theta activity is likely a marker of multisensory divided attention. Copyright © 2017 Elsevier Ltd. All rights reserved.
Contralateral Noise Stimulation Delays P300 Latency in School-Aged Children.
Ubiali, Thalita; Sanfins, Milaine Dominici; Borges, Leticia Reis; Colella-Santos, Maria Francisca
2016-01-01
The auditory cortex modulates auditory afferents through the olivocochlear system, which innervates the outer hair cells and the afferent neurons under the inner hair cells in the cochlea. Most of the studies that have investigated efferent activity in humans focused on evaluating the suppression of otoacoustic emissions by stimulating the contralateral ear with noise, which assesses the activation of the medial olivocochlear bundle. The neurophysiology and the mechanisms of efferent activity at higher regions of the auditory pathway, however, are still unknown. Also, the lack of studies investigating the effects of noise on the human auditory cortex, especially in the paediatric population, points to the need for recording late auditory potentials under noise conditions. Assessing the auditory efferents in school-aged children is highly important because of some of their attributed functions, such as selective attention and signal detection in noise, which are abilities related to the development of language and academic skills. For this reason, the aim of the present study was to evaluate the effects of noise on the P300 responses of children with normal hearing. P300 was recorded in 27 children aged 8 to 14 years with normal hearing in two conditions: with and without contralateral white noise stimulation. P300 latencies were significantly longer in the presence of contralateral noise. No significant changes were observed for the amplitude values. Contralateral white noise stimulation delayed P300 latency in a group of school-aged children with normal hearing. These results suggest a possible influence of medial olivocochlear activation on P300 responses under noise conditions.
Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E
2008-10-21
Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model, showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality, while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition, when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not auditory stimulus was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e. conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality.
In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.
Bood, Robert Jan; Nijssen, Marijn; van der Kamp, John; Roerdink, Melvyn
2013-01-01
Acoustic stimuli, like music and metronomes, are often used in sports. Adjusting movement tempo to acoustic stimuli (i.e., auditory-motor synchronization) may be beneficial for sports performance. However, music also possesses motivational qualities that may further enhance performance. Our objective was to examine the relative effects of auditory-motor synchronization and the motivational impact of acoustic stimuli on running performance. To this end, 19 participants ran to exhaustion on a treadmill in 1) a control condition without acoustic stimuli, 2) a metronome condition with a sequence of beeps matching participants' cadence (synchronization), and 3) a music condition with synchronous motivational music matched to participants' cadence (synchronization+motivation). Conditions were counterbalanced and measurements were taken on separate days. As expected, time to exhaustion was significantly longer with acoustic stimuli than without. Unexpectedly, however, time to exhaustion did not differ between the metronome and motivational music conditions, despite differences in motivational quality. Motivational music slightly reduced the perceived exertion of sub-maximal running intensity and the heart rates of (near-)maximal running intensity. The beat of the stimuli, which was most salient during the metronome condition, helped runners to maintain a consistent pace by coupling cadence to the prescribed tempo. Thus, acoustic stimuli may have enhanced running performance because runners worked harder as a result of motivational aspects (most pronounced with motivational music) and more efficiently as a result of auditory-motor synchronization (most notable with metronome beeps).
These findings imply that running to motivational music with a very prominent and consistent beat matched to the runner’s cadence will likely yield optimal effects because it helps to elevate physiological effort at a high perceived exertion, whereas the consistent and correct cadence induced by auditory-motor synchronization helps to optimize running economy. PMID:23951000
Dependence of auditory spatial updating on vestibular, proprioceptive, and efference copy signals
Genzel, Daria; Firzlaff, Uwe; Wiegrebe, Lutz; MacNeilage, Paul R.
2016-01-01
Humans localize sounds by comparing inputs across the two ears, resulting in a head-centered representation of sound-source position. When the head moves, information about head movement must be combined with the head-centered estimate to correctly update the world-centered sound-source position. Spatial updating has been extensively studied in the visual system, but less is known about how head movement signals interact with binaural information during auditory spatial updating. In the current experiments, listeners compared the world-centered azimuthal position of two sound sources presented before and after a head rotation that depended on condition. In the active condition, subjects rotated their head by ∼35° to the left or right, following a pretrained trajectory. In the passive condition, subjects were rotated along the same trajectory in a rotating chair. In the cancellation condition, subjects rotated their head as in the active condition, but the chair was counter-rotated on the basis of head-tracking data such that the head effectively remained fixed in space while the body rotated beneath it. Subjects updated most accurately in the passive condition but erred in the active and cancellation conditions. Performance is interpreted as reflecting the accuracy of perceived head rotation across conditions, which is modeled as a linear combination of proprioceptive/efference copy signals and vestibular signals. Resulting weights suggest that auditory updating is dominated by vestibular signals but with significant contributions from proprioception/efference copy. Overall, results shed light on the interplay of sensory and motor signals that determine the accuracy of auditory spatial updating. PMID:27169504
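The weighting model described above (perceived head rotation as a linear combination of vestibular and proprioceptive/efference-copy signals) can be illustrated with an ordinary least-squares fit. The cue signals per condition and the perceived-rotation values below are illustrative assumptions, not data from the study:

```python
# Hypothetical sketch: perceived head rotation modeled as
#   perceived = w_vest * (vestibular signal) + w_prop * (proprioceptive signal)
# with weights fit by least squares across the three conditions.

conditions = {
    #               (vestibular, proprioceptive/efference-copy) signal, deg
    "active":       (35.0, 35.0),  # head rotates: both cues report rotation
    "passive":      (35.0,  0.0),  # whole-body rotation: vestibular only
    "cancellation": ( 0.0, 35.0),  # head fixed in space: proprioception only
}
perceived = {"active": 30.0, "passive": 34.0, "cancellation": 8.0}  # deg, made up

# Ordinary least squares for two weights via the normal equations A^T A w = A^T b
rows = [conditions[c] for c in conditions]
b = [perceived[c] for c in conditions]
s11 = sum(v * v for v, _ in rows)
s22 = sum(p * p for _, p in rows)
s12 = sum(v * p for v, p in rows)
t1 = sum(v * y for (v, _), y in zip(rows, b))
t2 = sum(p * y for (_, p), y in zip(rows, b))
det = s11 * s22 - s12 * s12
w_vest = (s22 * t1 - s12 * t2) / det
w_prop = (s11 * t2 - s12 * t1) / det
print(f"vestibular weight = {w_vest:.3f}, proprioceptive weight = {w_prop:.3f}")
```

With these made-up values the fitted vestibular weight dominates, mirroring the qualitative pattern the abstract reports (accurate passive updating, larger errors when vestibular and proprioceptive signals disagree).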
Rimmele, Johanna Maria; Sussman, Elyse; Poeppel, David
2015-02-01
Listening situations with multiple talkers or background noise are common in everyday communication and are particularly demanding for older adults. Here we review current research on auditory perception in aging individuals in order to gain insights into the challenges of listening under noisy conditions. Informationally rich temporal structure in auditory signals, over a range of time scales from milliseconds to seconds, renders temporal processing central to perception in the auditory domain. We discuss the role of temporal structure in auditory processing, in particular from a perspective relevant for hearing in background noise, focusing on sensory memory, auditory scene analysis, and speech perception. Interestingly, these auditory processes, usually studied in an independent manner, show considerable overlap of processing time scales, even though each has its own 'privileged' temporal regimes. By integrating perspectives on temporal structure processing in these three areas of investigation, we aim to highlight similarities typically not recognized. Copyright © 2014 Elsevier B.V. All rights reserved.
Babies in traffic: infant vocalizations and listener sex modulate auditory motion perception.
Neuhoff, John G; Hamilton, Grace R; Gittleson, Amanda L; Mejia, Adolfo
2014-04-01
Infant vocalizations and "looming sounds" are classes of environmental stimuli that are critically important to survival but can have dramatically different emotional valences. Here, we simultaneously presented listeners with a stationary infant vocalization and a 3D virtual looming tone for which listeners made auditory time-to-arrival judgments. Negatively valenced infant cries produced more cautious (anticipatory) estimates of auditory arrival time of the tone over a no-vocalization control. Positively valenced laughs had the opposite effect, and across all conditions, men showed smaller anticipatory biases than women. In Experiment 2, vocalization-matched vocoded noise stimuli did not influence concurrent auditory time-to-arrival estimates compared with a control condition. In Experiment 3, listeners estimated the egocentric distance of a looming tone that stopped before arriving. For distant stopping points, women estimated the stopping point as closer when the tone was presented with an infant cry than when it was presented with a laugh. For near stopping points, women showed no differential effect of vocalization type. Men did not show differential effects of vocalization type at either distance. Our results support the idea that both the sex of the listener and the emotional valence of infant vocalizations can influence auditory motion perception and can modulate motor responses to other behaviorally relevant environmental sounds. We also find support for previous work that shows sex differences in emotion processing are diminished under conditions of higher stress.
A Challenge for Cochlear Implantation: Duplicated Internal Auditory Canal.
Binnetoğlu, Adem; Bağlam, Tekin; Sarı, Murat; Gündoğdu, Yavuz; Batman, Çağlar
2016-08-01
Duplication of the internal auditory canal is an uncommon, congenital malformation that can be associated with sensorineural hearing loss owing to aplasia/hypoplasia of the vestibulocochlear nerve. Only 14 such cases have been reported to date. We report the case of a 13-month-old girl with bilateral, congenital, sensorineural hearing loss caused by narrow, duplicated internal auditory canals and discuss the challenges encountered in the diagnosis and treatment of this condition.
What Does Eye-Blink Rate Variability Dynamics Tell Us About Cognitive Performance?
Paprocki, Rafal; Lenskiy, Artem
2017-01-01
Cognitive performance is defined as the ability to utilize knowledge, attention, memory, and working memory. In this study, we briefly discuss various markers that have been proposed to predict cognitive performance. Next, we develop a novel approach to characterize cognitive performance by analyzing eye-blink rate variability dynamics. Our findings are based on a sample of 24 subjects. The subjects were given a 5-min resting period prior to a 10-min IQ test. During both stages, eye blinks were recorded from Fp1 and Fp2 electrodes. We found that scale exponents estimated for blink rate variability during rest were correlated with subjects' performance on the subsequent IQ test. This surprising phenomenon could be explained by person-to-person variation in concentrations of dopamine in the prefrontal cortex (PFC) and the accumulation of GABA in the visual cortex, as both neurotransmitters play a key role in cognitive processes and affect blinking. This study demonstrates the possibility that blink rate variability dynamics at rest carry information about cognitive performance and can be employed in the assessment of cognitive abilities without taking a test. PMID:29311876
Using complex auditory-visual samples to produce emergent relations in children with autism.
Groskreutz, Nicole C; Karsina, Allen; Miguel, Caio F; Groskreutz, Mark P
2010-03-01
Six participants with autism learned conditional relations between complex auditory-visual sample stimuli (dictated words and pictures) and simple visual comparisons (printed words) using matching-to-sample training procedures. Pre- and posttests examined potential stimulus control by each element of the complex sample when presented individually and emergence of additional conditional relations and oral labeling. Tests revealed class-consistent performance for all participants following training.
Abdul Wahab, Noor Alaudin; Zakaria, Mohd Normani; Abdul Rahman, Abdul Hamid; Sidek, Dinsuhaimi; Wahab, Suzaily
2017-11-01
The present case-control study investigated binaural hearing performance in schizophrenia patients for sentences presented in quiet and in noise. Participants were twenty-one healthy controls and sixteen schizophrenia patients with normal peripheral auditory function. Binaural hearing was examined in four listening conditions by using the Malay version of the hearing in noise test. Syntactically and semantically correct sentences were presented via headphones to the randomly selected subjects. In each condition, the adaptively obtained reception thresholds for speech (RTS) were used to determine the RTS noise composite and spatial release from masking. Schizophrenia patients demonstrated a significantly higher mean RTS value relative to healthy controls (p=0.018). The large effect sizes found in three listening conditions, i.e., in quiet (d=1.07), noise right (d=0.88), and noise composite (d=0.90), indicate a statistically significant difference between the groups, whereas the noise front and noise left conditions showed medium (d=0.61) and small (d=0.50) effect sizes, respectively. No statistically significant difference between groups was noted with regard to spatial release from masking on the right (p=0.305) or left (p=0.970) ear. The present findings suggest abnormal unilateral auditory processing in the central auditory pathway in schizophrenia patients. Future studies exploring the role of binaural and spatial auditory processing are recommended.
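The effect sizes reported above (Cohen's d) follow the standard pooled-standard-deviation formula for two independent groups. A minimal sketch with made-up reception-threshold values (the study's raw data are not given here):

```python
# Cohen's d for two independent groups, using the pooled standard deviation.
import math

def cohens_d(group1, group2):
    """d = (mean1 - mean2) / pooled SD; by convention, |d| >= 0.8 is 'large',
    ~0.5 'medium', ~0.2 'small'."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Illustrative reception thresholds for speech (RTS, dB); higher = worse.
# These numbers are invented for demonstration, not the study's data.
patients = [-2.0, -1.5, -2.5, -1.0, -2.2]
controls = [-3.5, -3.0, -4.0, -3.2, -3.8]
d = cohens_d(patients, controls)  # positive: patients need higher RTS
```

A positive d here indicates the patient group required a more favorable signal-to-noise ratio, the same direction of effect the abstract describes.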
Wahn, Basil; König, Peter
2015-01-01
Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modalities. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, the findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs at a pre-attentive processing stage.
Liao, Hsin-I; Yoneya, Makoto; Kidani, Shunsuke; Kashino, Makio; Furukawa, Shigeto
2016-01-01
A unique sound that deviates from a repetitive background sound induces signature neural responses, such as the mismatch negativity and novelty P3 response in electroencephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to stimulus properties, irrespective of whether attention is directed to the sounds. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual and auditory oddballs were always presented asynchronously, to prevent residual attention to to-be-ignored oddballs arising from their concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent component of the PDR is independent of attention. PMID:26924959
Ludersdorfer, Philipp; Wimmer, Heinz; Richlan, Fabio; Schurz, Matthias; Hutzler, Florian; Kronbichler, Martin
2016-01-01
The present fMRI study investigated the hypothesis that activation of the left ventral occipitotemporal cortex (vOT) in response to auditory words can be attributed to lexical orthographic rather than lexico-semantic processing. To this end, we presented auditory words in both an orthographic ("three or four letter word?") and a semantic ("living or nonliving?") task. In addition, an auditory control condition presented tones in a pitch evaluation task. The results showed that the left vOT exhibited higher activation for orthographic relative to semantic processing of auditory words, with a peak in the posterior part of vOT. Comparisons to the auditory control condition revealed that orthographic processing of auditory words elicited activation in a large vOT cluster. In contrast, activation for semantic processing was weak and restricted to the middle part of vOT. We interpret our findings as evidence for orthographic processing in left vOT. In particular, we suggest that activation in left middle vOT can be attributed to accessing orthographic whole-word representations. While activation of such representations was experimentally ascertained in the orthographic task, it might also have occurred automatically in the semantic task. Activation in the more posterior vOT region, on the other hand, may reflect the generation of explicit images of word-specific letter sequences required by the orthographic but not the semantic task. In addition, based on cross-modal suppression, the marked deactivations in response to the auditory tones are taken to reflect the visual nature of representations and processes in left vOT. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Klein-Hennig, Martin; Dietz, Mathias; Hohmann, Volker
2018-03-01
Both harmonic and binaural signal properties are relevant for auditory processing. To investigate how these cues combine in the auditory system, detection thresholds for an 800-Hz tone masked by a diotic (i.e., identical between the ears) harmonic complex tone were measured in six normal-hearing subjects. The target tone was presented either diotically or with an interaural phase difference (IPD) of 180°, and in either a harmonic or a "mistuned" relationship to the diotic masker. Three different maskers were used: a resolved and an unresolved complex tone (fundamental frequencies: 160 and 40 Hz) with four components below and above the target frequency, and a broadband unresolved complex tone with 12 additional components. The target IPD provided release from masking in most masker conditions, whereas mistuning led to a significant release from masking only in the diotic conditions with the resolved and the narrowband unresolved maskers. A significant effect of mistuning was found neither in the diotic condition with the wideband unresolved masker nor in any of the dichotic conditions. An auditory model with a single analysis frequency band and different binaural processing schemes was employed to predict the data of the unresolved masker conditions. Sensitivity to modulation cues was achieved by including an auditory-motivated modulation filter in the processing pathway. The predictions of the diotic data were in line with the experimental results and literature data in the narrowband condition, but not in the broadband condition, suggesting that across-frequency processing is involved in processing modulation information. The experimental and model results in the dichotic conditions show that the binaural processor cannot exploit modulation information in binaurally unmasked conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
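The stimulus configuration described here (a diotic harmonic-complex masker with components on either side of an 800-Hz target, plus a target tone carrying a 180° IPD) can be sketched as follows. The sample rate, duration, levels, and component count are illustrative assumptions, not the study's calibration:

```python
import numpy as np

def make_trial(fs=44100, dur=0.5, f_target=800.0, f0=160.0, target_ipd_deg=180.0):
    """Left/right waveforms: a diotic harmonic masker plus a target tone with an IPD."""
    t = np.arange(int(fs * dur)) / fs
    # Masker: harmonics of f0, skipping the target frequency itself,
    # identical in both ears (diotic); four components below and four above 800 Hz
    comps = [f0 * k for k in range(1, 10) if not np.isclose(f0 * k, f_target)]
    masker = sum(np.sin(2 * np.pi * f * t) for f in comps)
    # Target: the IPD is realized as a phase offset between the two ears
    ipd = np.deg2rad(target_ipd_deg)
    left = masker + 0.1 * np.sin(2 * np.pi * f_target * t)
    right = masker + 0.1 * np.sin(2 * np.pi * f_target * t + ipd)
    return left, right

left, right = make_trial()
# With a 180-degree IPD, subtracting the ears cancels the diotic masker
# and leaves only the (inverted) target tone.
```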
A Generative Model of Speech Production in Broca’s and Wernicke’s Areas
Price, Cathy J.; Crinion, Jenny T.; MacSweeney, Mairéad
2011-01-01
Speech production involves the generation of an auditory signal from the articulators and vocal tract. When the intended auditory signal does not match the produced sounds, subsequent articulatory commands can be adjusted to reduce the difference between the intended and produced sounds. This requires an internal model of the intended speech output that can be compared to the produced speech. The aim of this functional imaging study was to identify brain activation related to the internal model of speech production after activation related to vocalization, auditory feedback, and movement in the articulators had been controlled. There were four conditions: silent articulation of speech, non-speech mouth movements, finger tapping, and visual fixation. In the speech conditions, participants produced the mouth movements associated with the words “one” and “three.” We eliminated auditory feedback from the spoken output by instructing participants to articulate these words without producing any sound. The non-speech mouth movement conditions involved lip pursing and tongue protrusions to control for movement in the articulators. The main difference between our speech and non-speech mouth movement conditions is that prior experience producing speech sounds leads to the automatic and covert generation of auditory and phonological associations that may play a role in predicting auditory feedback. We found that, relative to non-speech mouth movements, silent speech activated Broca’s area in the left dorsal pars opercularis and Wernicke’s area in the left posterior superior temporal sulcus. We discuss these results in the context of a generative model of speech production and propose that Broca’s and Wernicke’s areas may be involved in predicting the speech output that follows articulation. These predictions could provide a mechanism by which rapid movement of the articulators is precisely matched to the intended speech outputs during future articulations. PMID:21954392
Tanaka, T; Kojima, S; Takeda, H; Ino, S; Ifukube, T
2001-12-15
The maintenance of postural balance depends on effective and efficient feedback from various sensory inputs. The importance of auditory inputs in this respect is not, as yet, fully understood. The purpose of this study was to analyse how moving auditory stimuli affect standing balance in healthy adults of different ages. The participants were 12 healthy volunteers, divided into two age categories: a young group (mean = 21.9 years) and an elderly group (mean = 68.9 years). Standing balance was evaluated with a force plate measuring body sway parameters, and toe pressure was measured using the F-scan Tactile Sensor System. The moving auditory stimulus was a white-noise sound with binaural cues generated by the Beachtron Affordable 3D Audio system, moving from right to left or vice versa at the height of the participant's ears. Participants were asked to stand on the force plate in the Romberg position for 20 s with either eyes open or eyes closed, to analyse the effect of visual input, and with or without the auditory stimulation delivered through headphones. In addition, body sway was measured while standing on a normal surface (NS) or a soft surface (SS), with and without auditory stimulation, to analyse the effect of decreased tactile sensation of the toes and soles; in total, participants stood under eight conditions. The results showed that the lateral body sway of the elderly group was more influenced by the laterally moving auditory stimulation than that of the young group. The analysis of toe pressure indicated that all participants used their left feet more than their right feet to maintain balance. Moreover, the elderly tended to stabilize mainly by use of their heels, whereas the young group was stabilized mainly by the toes. The results suggest that, for maintaining and controlling standing posture, the elderly may need more appropriate tactile and auditory feedback than the young.
Auditory fatigue : influence of mental factors.
DOT National Transportation Integrated Search
1965-01-01
Conflicting reports regarding the influence of mental tasks on auditory fatigue have recently appeared in the literature. In the present study, 10 male subjects were exposed to a 4000 cps fatigue tone at 40 dB SL for 3 min under conditions of mental ari...
Jones, David L; Gao, Sujuan; Svirsky, Mario A
2003-06-01
The purpose of this study was to determine whether 2 speech measures (peak intraoral air pressure [IOP] and IOP duration) obtained during the production of intervocalic stops would be altered as a function of the presence or absence of auditory stimulation provided by a cochlear implant (CI). Five pediatric CI users were required to produce repetitions of the words puppy and baby with their CIs turned on. The CIs were then turned off for 1 hr, at which time the speech sample was repeated with the CI still turned off. Seven children with normal hearing formed a comparison group. They were also tested twice, with a 1-hr intermediate interval. IOP and IOP duration were measured for the medial consonant in both auditory conditions. The results show that auditory condition affected peak IOP more so than IOP duration. Peak IOP was greater for /p/ than /b/ with the CI off, but some participants reduced or reversed this contrast when the CI was on. The findings suggest that different speakers with CIs may use different speech production strategies as they learn to use the auditory signal for speech.
Bravi, Riccardo; Del Tongo, Claudia; Cohen, Erez James; Dalle Mura, Gabriele; Tognetti, Alessandro; Minciacchi, Diego
2014-06-01
The ability to perform isochronous movements while listening to a rhythmic auditory stimulus requires a flexible process that integrates timing information with movement. Here, we explored how non-temporal and temporal characteristics of an auditory stimulus (presence, interval occupancy, and tempo) affect motor performance. These characteristics were chosen on the basis of their ability to modulate the precision and accuracy of synchronized movements. Subjects participated in sessions in which they performed sets of repeated isochronous wrist flexion-extensions under various conditions chosen on the basis of the defined characteristics. Kinematic parameters were evaluated during each session, and temporal parameters were analyzed. To isolate the effects of the auditory stimulus, we minimized all other sensory information that could interfere with its perception or affect the performance of repeated isochronous movements. The present study shows that the distinct characteristics of an auditory stimulus significantly influence isochronous movements by altering their duration. The results provide evidence for an adaptable control of timing in the audio-motor coupling for isochronous movements. This flexibility would make plausible the use of different encoding strategies to adapt audio-motor coupling for specific tasks.
Air Traffic Controllers’ Long-Term Speech-in-Noise Training Effects: A Control Group Study
Zaballos, María T.P.; Plasencia, Daniel P.; González, María L.Z.; de Miguel, Angel R.; Macías, Ángel R.
2016-01-01
Introduction: Speech perception in noise relies on the capacity of the auditory system to process complex sounds using sensory and cognitive skills. The possibility that these can be trained during adulthood is of special interest in auditory disorders, where speech-in-noise perception becomes compromised. Air traffic controllers (ATC) are constantly exposed to radio communication, a situation that seems to produce auditory learning. The objective of this study was to quantify this effect. Subjects and Methods: 19 ATC and 19 normal-hearing individuals underwent a speech-in-noise test with three signal-to-noise ratios: 5, 0 and −5 dB. Noise and speech were presented through two different loudspeakers in azimuth position. Speech tokens were presented at 65 dB SPL, while white-noise files were presented at 60, 65 and 70 dB SPL, respectively. Results: Air traffic controllers outperformed the control group in all conditions (P<0.05 in ANOVA and Mann-Whitney U tests). Group differences were largest in the most difficult condition, SNR = −5 dB. However, no correlation between experience and performance was found for any of the conditions tested. The reason might be that ceiling performance is achieved much faster than the minimum experience time recorded (5 years), although intrinsic cognitive abilities cannot be disregarded. Discussion: ATC demonstrated an enhanced ability to hear speech in challenging listening environments. This study provides evidence that long-term auditory training is useful in achieving better speech-in-noise understanding even in adverse conditions, although good cognitive qualities are likely to be a basic requirement for this training to be effective. Conclusion: Our results show that ATC outperform the control group in all conditions. Thus, this study provides evidence that long-term auditory training is indeed useful in achieving better speech-in-noise understanding even in adverse conditions. PMID:27991470
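The three signal-to-noise ratios follow directly from the fixed 65 dB SPL speech level and the three noise levels; a minimal check of that arithmetic:

```python
SPEECH_LEVEL_DB = 65            # speech tokens, dB SPL (from the study)
NOISE_LEVELS_DB = [60, 65, 70]  # white-noise levels, dB SPL

# SNR in dB is simply the level difference between speech and noise
snrs = [SPEECH_LEVEL_DB - noise for noise in NOISE_LEVELS_DB]
print(snrs)  # [5, 0, -5]
```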
Evaluative Conditioning Induces Changes in Sound Valence
Bolders, Anna C.; Band, Guido P. H.; Stallen, Pieter Jan
2012-01-01
Through evaluative conditioning (EC), a stimulus can acquire an affective value by pairing it with another affective stimulus. While many sounds we encounter daily have acquired an affective value over the course of life, EC has hardly been tested in the auditory domain. To gain a more complete understanding of affective processing in the auditory domain, we examined EC of sound. In Experiment 1 we investigated whether the affective evaluation of short environmental sounds can be changed using affective words as unconditioned stimuli (US). Congruency effects on an affective priming task for conditioned sounds demonstrated successful EC. Subjective ratings for sounds paired with negative words changed accordingly. In Experiment 2 we investigated whether extinction occurs, i.e., whether the acquired valence remains stable after repeated presentation of the conditioned sound without the US. The acquired affective value remained present, albeit weaker, even after 40 extinction trials. These results provide clear evidence for EC effects in the auditory domain. We argue that both associative and propositional processes are likely to underlie these effects. PMID:22514545
Topographic EEG activations during timbre and pitch discrimination tasks using musical sounds.
Auzou, P; Eustache, F; Etevenon, P; Platel, H; Rioux, P; Lambert, J; Lechevalier, B; Zarifian, E; Baron, J C
1995-01-01
Successive auditory stimulation sequences were presented binaurally to 18 young normal volunteers. Five conditions were investigated: two reference tasks, assumed to involve passive listening to pairs of musical sounds, and three discrimination tasks, one dealing with pitch and two with timbre (either with or without the attack). A symmetrical montage of 16 EEG channels was recorded for each subject across the different conditions. Two quantitative parameters of EEG activity were compared among the different sequences within five distinct frequency bands. As compared to a rest (no stimulation) condition, both passive listening conditions led to changes in primary auditory cortex areas. Both discrimination tasks, for pitch and for timbre, led to right-hemisphere EEG changes organized in two poles: an anterior one and a posterior one. After discussing the electrophysiological aspects of this work, these results are interpreted in terms of a network including the right temporal neocortex and the right frontal lobe that maintains the acoustical information in auditory working memory to carry out the discrimination task.
Working memory capacity affects the interference control of distractors at auditory gating.
Tsuchida, Yukio; Katayama, Jun'ichi; Murohashi, Harumitsu
2012-05-10
It is important to understand the role of individual differences in working memory capacity (WMC). We investigated the relation between differences in WMC and N1 in event-related brain potentials as a measure of early selective attention for an auditory distractor in three-stimulus oddball tasks that required minimum memory. A high-WMC group (n=13) showed a smaller N1 in response to a distractor and target than did a low-WMC group (n=13) in the novel condition with high distraction. However, in the simple condition with low distraction, there was no difference in N1 between the groups. For all participants (n=52), the correlation between the scores for WMC and N1 peak amplitude was strong for distractors in the novel condition, whereas there was no relation in the simple condition. These results suggest that WMC can predict the interference control for a salient distractor at auditory gating even during a selective attention task. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Auditory Perceptual Learning for Speech Perception Can be Enhanced by Audiovisual Training.
Bernstein, Lynne E; Auer, Edward T; Eberhardt, Silvio P; Jiang, Jintao
2013-01-01
Speech perception under audiovisual (AV) conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how AV training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures. In Experiment 1, paired-associates (PA) AV training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early AV speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.
Lee, Shao-Hsuan; Fang, Tuan-Jen; Yu, Jen-Fang; Lee, Guo-She
2017-09-01
Auditory feedback can elicit reflexive responses during sustained vocalizations. Among the related measures, the middle-frequency power of F0 (MFP) may provide a sensitive index of subtle changes across auditory feedback conditions. Voice and phonatory airflow recordings were obtained from 20 healthy adults at two vocal intensity ranges under four auditory feedback conditions: (1) natural auditory feedback (NO); (2) binaural speech-noise masking (SN); (3) bone-conducted feedback of the self-generated voice (BAF); and (4) SN and BAF simultaneously. The modulations of F0 in the low-frequency (0.2 Hz-3 Hz), middle-frequency (3 Hz-8 Hz), and high-frequency (8 Hz-25 Hz) bands were acquired using power spectral analysis of F0. Acoustic and aerodynamic analyses were used to acquire vocal intensity, maximum phonation time (MPT), phonatory airflow, and MFP-based vocal efficiency (MBVE). SN and high vocal intensity decreased MFP and raised MBVE and MPT significantly. BAF showed no effect on MFP but significantly lowered MBVE. Moreover, BAF significantly increased the perception of voice feedback and the sensation of vocal effort. Altered auditory feedback thus significantly changed the middle-frequency modulations of F0, and MFP and MBVE could detect these subtle audio-vocal feedback responses. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
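The band-limited F0 modulation measures described above (low: 0.2-3 Hz; middle: 3-8 Hz; high: 8-25 Hz) amount to integrating the power spectrum of the extracted F0 contour over each band. The sketch below, including the synthetic F0 track and its sampling rate, is an illustrative assumption rather than the authors' implementation:

```python
import numpy as np

def band_powers(f0_contour, fs, bands=((0.2, 3), (3, 8), (8, 25))):
    """Integrate the power spectrum of an F0 contour over frequency bands.
    fs is the sampling rate of the F0 track in Hz."""
    x = np.asarray(f0_contour, dtype=float)
    x = x - x.mean()  # remove the baseline so band power reflects modulation only
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return [spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

# Hypothetical F0 track: 120-Hz baseline with a 5-Hz (middle-band) wobble,
# sampled at 100 Hz for 10 s
fs = 100
t = np.arange(0, 10, 1 / fs)
f0_track = 120 + 2.0 * np.sin(2 * np.pi * 5 * t)
low, mid, high = band_powers(f0_track, fs)
print(mid > low and mid > high)  # True: the energy sits in the 3-8 Hz band
```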
Ansari, M S; Rangasayee, R; Ansari, M A H
2017-03-01
Poor auditory speech perception in geriatric populations is attributable to neural desynchronisation caused by structural and degenerative changes in the ageing auditory pathways. The speech-evoked auditory brainstem response may be useful for detecting alterations that cause loss of speech discrimination. This study therefore compared the speech-evoked auditory brainstem response in adult and geriatric populations with normal hearing. The auditory brainstem responses to click sounds and to a 40 ms speech sound (the Hindi phoneme |da|) were compared in 25 young adults and 25 geriatric people with normal hearing. The latencies and amplitudes of transient peaks representing neural responses to the onset, offset and sustained portions of the speech stimulus were recorded in quiet and noisy conditions. The older group had significantly smaller amplitudes and longer latencies for the onset and offset responses to |da| in noisy conditions. Stimulus-to-response times were longer, and the spectral amplitude of the sustained portion of the stimulus was reduced. The overall stimulus level caused significant shifts in latency across the entire speech-evoked auditory brainstem response in the older group. The reduced neural speech processing in older adults suggests diminished subcortical responsiveness to acoustically dynamic spectral cues. However, further investigations into how temporal cues are encoded at the brainstem level, and how they relate to speech perception, are needed before a routine tool for clinical decision-making can be developed.
Interoceptive threat leads to defensive mobilization in highly anxiety sensitive persons.
Melzig, Christiane A; Holtz, Katharina; Michalowski, Jaroslaw M; Hamm, Alfons O
2011-06-01
To study defensive mobilization elicited by the exposure to interoceptive arousal sensations, we exposed highly anxiety sensitive students to a symptom provocation task. Symptom reports, autonomic arousal, and the startle eyeblink response were monitored during guided hyperventilation and a recovery period in 26 highly anxiety sensitive persons and 22 controls. Normoventilation was used as a non-provocative comparison condition. Hyperventilation led to autonomic arousal and a marked increase in somatic symptoms. While high and low anxiety sensitive persons did not differ in their defensive activation during hyperventilation, group differences were detected during early recovery. Highly anxiety sensitive students exhibited a potentiation of startle response magnitudes and increased autonomic arousal after hyper- as compared to after normoventilation, indicating defensive mobilization evoked by the prolonged presence of feared somatic sensations. Copyright © 2010 Society for Psychophysiological Research.
Gomes, Hilary; Barrett, Sophia; Duff, Martin; Barnhardt, Jack; Ritter, Walter
2008-03-01
We examined the impact of perceptual load by manipulating the interstimulus interval (ISI) in two auditory selective attention studies that varied in the difficulty of the target discrimination. In the paradigm, channels were separated by frequency, and target/deviant tones were softer in intensity. Three ISI conditions were presented: fast (300 ms), medium (600 ms) and slow (900 ms). Behavioral (accuracy and RT) and electrophysiological (Nd, P3b) measures were obtained. In both studies, participants showed poorer accuracy in the fast ISI condition than in the slow, suggesting that ISI affected task difficulty. However, none of the three measures of processing examined (Nd amplitude, P3b amplitude elicited by unattended deviant stimuli, and false alarms to unattended deviants) was affected by ISI in the manner predicted by perceptual load theory. The prediction, based on perceptual load theory, that there would be more processing of irrelevant stimuli under low than under high perceptual load was not supported in these auditory studies. Task difficulty/perceptual load thus impacts the processing of irrelevant stimuli in the auditory modality differently than predicted by perceptual load theory, and perhaps differently than in the visual modality.
Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T
2016-01-01
Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of cognitive control operations in visual attention paradigms. However, the synchronization source of these signals, as defined by fMRI BOLD activity, and the extent to which theta oscillations play a role in multimodal attention remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude during attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Negative correlations were localized to cortical regions associated with the default mode network (DMN), and positive correlations to ventral premotor areas. Attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in fronto-parietal areas that was not observed in the auditory condition. During auditory attention, theta and BOLD activity correlated positively in auditory cortex and negatively in visual cortex. The data support a supramodal interaction of theta activity with DMN function, together with modality-associated fronto-parietal processes related to top-down, theta-linked cognitive control in cross-modal visual attention. In sensory cortices, on the other hand, theta activity has opposing effects during cross-modal auditory attention.
Discrimination of timbre in early auditory responses of the human brain.
Seol, Jaeho; Oh, MiAe; Kim, June Sic; Jin, Seung-Hyun; Kim, Sun Il; Chung, Chun Kee
2011-01-01
The issue of how differences in timbre are represented in the neural response has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employed phasing and clipping of tones to produce auditory stimuli that differed along multiple dimensions of timbre. We investigated the auditory response and sensory gating using magnetoencephalography (MEG). Thirty-five healthy subjects without hearing deficits participated in the experiments. Pairs of tones, either identical or different in timbre, were presented in a conditioning (S1)-testing (S2) paradigm with an interval of 500 ms. The magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres. This result suggests that timbre, at least as manipulated by phasing and clipping, is discriminated during early auditory processing. The effect of S1 on the response to S2 appeared in the M100 of the left hemisphere, whereas only in the right hemisphere did both M50 and M100 responses to S2 reflect whether the two stimuli in a pair were the same. Both M50 and M100 magnitudes differed with presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre.
Moradi, Shahram; Wahlin, Anna; Hällgren, Mathias; Rönnberg, Jerker; Lidestam, Björn
2017-01-01
This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing-aid users. Twenty-five hearing-aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants' auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores), and that this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). No such improvement was observed in the control group, either promptly after the training or at the one-month follow-up. However, neither a significant between-groups difference nor a group-by-session interaction was observed. Audiovisual training may be considered in the aural rehabilitation of hearing-aid users to improve listening capabilities in noisy conditions; however, the lack of a significant between-groups effect (audiovisual vs. auditory) or group-by-session interaction calls for further research.
Most, Tova; Michaelis, Hilit
2012-08-01
This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. A total of 26 children 4.0-6.6 years of age with prelingual sensorineural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify happiness, anger, sadness, and fear expressed by an actress when uttering the same neutral nonsense sentence. Their auditory, visual, and auditory-visual perceptions of the emotional content were assessed. The accuracy of emotion perception among children with HL was lower than that of the NH children in all 3 conditions: auditory, visual, and auditory-visual. Perception through the combined auditory-visual mode significantly surpassed the auditory or visual modes alone in both groups, indicating that children with HL utilized the auditory information for emotion perception. No significant differences in perception emerged according to degree of HL. In addition, children with profound HL and cochlear implants did not perform differently from children with less severe HL who used hearing aids. The relatively high accuracy of emotion perception by children with HL may be explained by their intensive rehabilitation, which emphasizes suprasegmental and paralinguistic aspects of verbal communication.
Schall, Sonja; von Kriegstein, Katharina
2014-01-01
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.
Evaluation of an imputed pitch velocity model of the auditory kappa effect.
Henry, Molly J; McAuley, J Devin
2009-04-01
Three experiments evaluated an imputed pitch velocity model of the auditory kappa effect. Listeners heard 3-tone sequences and judged the timing of the middle (target) tone relative to the timing of the 1st and 3rd (bounding) tones. Experiment 1 held pitch constant but varied the time (T) interval between bounding tones (T = 728, 1,000, or 1,600 ms) in order to establish baseline performance levels for the 3 values of T. Experiments 2 and 3 combined the values of T tested in Experiment 1 with a pitch manipulation in order to create fast (8 semitones/728 ms), medium (8 semitones/1,000 ms), and slow (8 semitones/1,600 ms) velocity conditions. Consistent with an auditory motion hypothesis, distortions in perceived timing were larger for fast than for slow velocity conditions for both ascending sequences (Experiment 2) and descending sequences (Experiment 3). Overall, results supported the proposed imputed pitch velocity model of the auditory kappa effect. (c) 2009 APA, all rights reserved.
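The fast, medium, and slow velocity conditions quoted above follow directly from dividing the pitch distance by the bounding interval T; a quick check of that arithmetic (the function name is ours, not the authors'):

```python
# Imputed pitch velocity = pitch distance (semitones) / interval T (s).
# The condition values below are taken directly from the abstract.

def pitch_velocity(semitones, t_ms):
    """Return imputed pitch velocity in semitones per second."""
    return semitones / (t_ms / 1000.0)

conditions = {"fast": 728, "medium": 1000, "slow": 1600}  # T in ms
velocities = {name: pitch_velocity(8, t) for name, t in conditions.items()}
# fast ≈ 10.99 st/s, medium = 8.0 st/s, slow = 5.0 st/s
```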
Involvement of the human midbrain and thalamus in auditory deviance detection.
Cacciaglia, Raffaele; Escera, Carles; Slabu, Lavinia; Grimm, Sabine; Sanjuán, Ana; Ventura-Campos, Noelia; Ávila, César
2015-02-01
Prompt detection of unexpected changes in the sensory environment is critical for survival. In the auditory domain, the occurrence of a rare stimulus triggers a cascade of neurophysiological events spanning multiple time-scales. Besides the role of the mismatch negativity (MMN), whose cortical generators are located in supratemporal areas, cumulative evidence suggests that violations of auditory regularities can be detected earlier and lower in the auditory hierarchy. Recent human scalp recordings have shown signatures of auditory mismatch responses at shorter latencies than those of the MMN. Moreover, animal single-unit recordings have demonstrated that rare stimulus changes cause a release from stimulus-specific adaptation in neurons of the primary auditory cortex, the medial geniculate body (MGB), and the inferior colliculus (IC). Although these data suggest that change detection is a pervasive property of the auditory system that may reside upstream of cortical sites, direct evidence for the involvement of subcortical stages in the human auditory novelty system is lacking. Using event-related functional magnetic resonance imaging during a frequency oddball paradigm, we here report that auditory deviance detection occurs in the MGB and the IC of healthy human participants. By implementing a random condition controlling for neural refractoriness effects, we show that auditory change detection in these subcortical stations involves the encoding of statistical regularities from the acoustic input. These results provide the first direct evidence of the existence of multiple mismatch detectors nested at different levels along the human ascending auditory pathway. Copyright © 2015 Elsevier Ltd. All rights reserved.
Fundamental deficits of auditory perception in Wernicke's aphasia.
Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen
2013-01-01
This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
Yahata, Izumi; Kawase, Tetsuaki; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
2017-01-01
The effects of visual speech (the moving image of the speaker's face uttering speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three visual stimulus conditions: the moving image of the same speaker's face uttering /be/ (congruent visual stimuli), the same face uttering /ge/ (incongruent visual stimuli), and visual noise (a still image processed from the speaker's face using a strong Gaussian filter; control condition). On average, the latency of N100m was significantly shortened in both hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli compared to the control A/V condition. However, the degree of N100m shortening did not differ significantly between the congruent and incongruent A/V conditions, despite significant differences in psychophysical responses between these two conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two audio-visual conditions (congruent vs. incongruent visual stimuli) in both hemispheres, but were not significantly correlated between the right and left hemispheres. On the other hand, no significant correlation was observed between the magnitudes of the visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed in the N100m is a fundamental process that does not depend on the congruency of the visual information.
Left Superior Temporal Gyrus Is Coupled to Attended Speech in a Cocktail-Party Auditory Scene.
Vander Ghinst, Marc; Bourguignon, Mathieu; Op de Beeck, Marc; Wens, Vincent; Marty, Brice; Hassid, Sergio; Choufani, Georges; Jousmäki, Veikko; Hari, Riitta; Van Bogaert, Patrick; Goldman, Serge; De Tiège, Xavier
2016-02-03
Using a continuous listening task, we evaluated the coupling between the listener's cortical activity and the temporal envelopes of different sounds in a multitalker auditory scene using magnetoencephalography and corticovocal coherence analysis. Neuromagnetic signals were recorded from 20 right-handed healthy adult humans who listened to five different recorded stories (attended speech streams), one without any multitalker background (No noise) and four mixed with a "cocktail party" multitalker background noise at four signal-to-noise ratios (5, 0, -5, and -10 dB) to produce speech-in-noise mixtures, here referred to as Global scene. Coherence analysis revealed that the modulations of the attended speech stream, presented without multitalker background, were coupled at ∼0.5 Hz to the activity of both superior temporal gyri, whereas the modulations at 4-8 Hz were coupled to the activity of the right supratemporal auditory cortex. In cocktail party conditions, with the multitalker background noise, the coupling was at both frequencies stronger for the attended speech stream than for the unattended Multitalker background. The coupling strengths decreased as the Multitalker background increased. During the cocktail party conditions, the ∼0.5 Hz coupling became left-hemisphere dominant, compared with bilateral coupling without the multitalker background, whereas the 4-8 Hz coupling remained right-hemisphere lateralized in both conditions. The brain activity was not coupled to the multitalker background or to its individual talkers. The results highlight the key role of listener's left superior temporal gyri in extracting the slow ∼0.5 Hz modulations, likely reflecting the attended speech stream within a multitalker auditory scene. When people listen to one person in a "cocktail party," their auditory cortex mainly follows the attended speech stream rather than the entire auditory scene. 
However, how the brain extracts the attended speech stream from the whole auditory scene, and how increasing background noise corrupts this process, is still debated. In this magnetoencephalography study, subjects had to attend a speech stream with or without multitalker background noise. Results argue for frequency-dependent cortical tracking mechanisms for the attended speech stream. The left superior temporal gyrus tracked the ∼0.5 Hz modulations of the attended speech stream only when the speech was embedded in multitalker background, whereas the right supratemporal auditory cortex tracked 4-8 Hz modulations during both noiseless and cocktail-party conditions. Copyright © 2016 the authors.
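Speech-in-noise mixtures at fixed SNRs, like the 5 to -10 dB conditions above, are typically constructed by rescaling the background so the speech-to-noise power ratio hits the target. A hedged sketch of that standard construction (the function name is ours; this is not the authors' stimulus code):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio equals
    `snr_db`, then return the mixture speech + scaled noise."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Required noise power: P_s / P_n' = 10 ** (snr_db / 10)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    scale = np.sqrt(target_p_noise / p_noise)
    return speech + scale * noise
```

Negative `snr_db` values (e.g. -10) make the scaled noise more powerful than the speech, matching the hardest cocktail-party condition.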
Giordano, Bruno L; Visell, Yon; Yao, Hsin-Yun; Hayward, Vincent; Cooperstock, Jeremy R; McAdams, Stephen
2012-05-01
Locomotion generates multisensory information about walked-upon objects, but how perceptual systems use such information to get to know the environment remains unexplored. The ability to identify solid (e.g., marble) and aggregate (e.g., gravel) walked-upon materials was investigated in auditory, haptic, and audio-haptic conditions, and in a kinesthetic condition in which tactile information was perturbed with vibromechanical noise. Overall, identification performance was better than chance in all experimental conditions, for both solids and the better-identified aggregates. Despite large mechanical differences between the responses of solids and aggregates to locomotion, for both material categories discrimination was at its worst in the auditory and kinesthetic conditions and at its best in the haptic and audio-haptic conditions. An analysis of the dominance of sensory information in the audio-haptic context supported a focus on the most accurate modality, haptics, but only for the identification of solid materials. When identifying aggregates, response biases appeared to produce a focus on the least accurate modality, kinesthesia. When walking on loose materials such as gravel, individuals thus do not perceive surfaces by focusing on the most accurate modality, but by focusing on the modality that would most promptly signal postural instabilities.
Ménard, Lucie; Polak, Marek; Denny, Margaret; Burton, Ellen; Lane, Harlan; Matthies, Melanie L; Marrone, Nicole; Perkell, Joseph S; Tiede, Mark; Vick, Jennell
2007-06-01
This study investigates the effects of speaking condition and auditory feedback on vowel production by postlingually deafened adults. Thirteen cochlear implant users produced repetitions of nine American English vowels prior to implantation, and at one month and one year after implantation. There were three speaking conditions (clear, normal, and fast), and two feedback conditions after implantation (implant processor turned on and off). Ten normal-hearing controls were also recorded once. Vowel contrasts in the formant space (expressed in mels) were larger in the clear than in the fast condition, both for controls and for implant users at all three time samples. Implant users also produced differences in duration between clear and fast conditions that were in the range of those obtained from the controls. In agreement with prior work, the implant users had contrast values lower than did the controls. The implant users' contrasts were larger with hearing on than off and improved from one month to one year postimplant. Because the controls and implant users responded similarly to a change in speaking condition, it is inferred that auditory feedback, although demonstrably important for maintaining normative values of vowel contrasts, is not needed to maintain the distinctiveness of those contrasts in different speaking conditions.
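Expressing formant contrasts in mels, as above, requires a Hz-to-mel mapping. One common variant (O'Shaughnessy's formula; the study may have used a different one, and the function names are ours) can be sketched as:

```python
import math

def hz_to_mel(f_hz):
    """Convert a frequency in Hz to mels (O'Shaughnessy's formula)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def contrast_mels(f_low_hz, f_high_hz):
    """Distance between two formant frequencies on the mel scale."""
    return hz_to_mel(f_high_hz) - hz_to_mel(f_low_hz)
```

The mel scale compresses high frequencies, so a fixed Hz separation between formants yields a smaller mel contrast the higher in frequency it occurs, which is why contrasts are compared in mels rather than Hz.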
Validation of the Emotiv EPOC® EEG gaming system for measuring research quality auditory ERPs
Badcock, Nicholas A; Mousikou, Petroula; Mahajan, Yatin; de Lissa, Peter; Thie, Johnson; McArthur, Genevieve
2013-01-01
Background. Auditory event-related potentials (ERPs) have proved useful in investigating the role of auditory processing in cognitive disorders such as developmental dyslexia, specific language impairment (SLI), attention deficit hyperactivity disorder (ADHD), schizophrenia, and autism. However, laboratory recordings of auditory ERPs can be lengthy, uncomfortable, or threatening for some participants – particularly children. Recently, a commercial gaming electroencephalography (EEG) system has been developed that is portable, inexpensive, and easy to set up. In this study we tested if auditory ERPs measured using a gaming EEG system (Emotiv EPOC®, www.emotiv.com) were equivalent to those measured by a widely-used, laboratory-based, research EEG system (Neuroscan). Methods. We simultaneously recorded EEGs with the research and gaming EEG systems, whilst presenting 21 adults with 566 standard (1000 Hz) and 100 deviant (1200 Hz) tones under passive (non-attended) and active (attended) conditions. The onset of each tone was marked in the EEGs using a parallel port pulse (Neuroscan) or a stimulus-generated electrical pulse injected into the O1 and O2 channels (Emotiv EPOC®). These markers were used to calculate research and gaming EEG system late auditory ERPs (P1, N1, P2, N2, and P3 peaks) and the mismatch negativity (MMN) in active and passive listening conditions for each participant. Results. Analyses were restricted to frontal sites as these are most commonly reported in auditory ERP research. Intra-class correlations (ICCs) indicated that the morphology of the research and gaming EEG system late auditory ERP waveforms were similar across all participants, but that the research and gaming EEG system MMN waveforms were only similar for participants with non-noisy MMN waveforms (N = 11 out of 21). Peak amplitude and latency measures revealed no significant differences between the size or the timing of the auditory P1, N1, P2, N2, P3, and MMN peaks. Conclusions. 
Our findings suggest that the gaming EEG system may prove a valid alternative to laboratory ERP systems for recording reliable late auditory ERPs (P1, N1, P2, N2, and the P3) over the frontal cortices. In the future, the gaming EEG system may also prove useful for measuring less reliable ERPs, such as the MMN, if the reliability of such ERPs can be boosted to the same level as late auditory ERPs. PMID:23638374
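The MMN reported above is conventionally derived as a deviant-minus-standard difference wave computed from trial-averaged ERPs. A minimal sketch of that computation (array shapes and the function name are illustrative, not the authors' pipeline):

```python
import numpy as np

def mismatch_negativity(standard_epochs, deviant_epochs):
    """MMN difference wave: the averaged deviant ERP minus the averaged
    standard ERP. Each input is a (trials x samples) array for one
    channel; the output is a (samples,) difference wave."""
    standard_erp = np.mean(standard_epochs, axis=0)
    deviant_erp = np.mean(deviant_epochs, axis=0)
    return deviant_erp - standard_erp
```

In a paradigm like the one above, the standard and deviant epoch counts differ (566 vs. 100 tones), which the per-condition averaging handles naturally.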
Costa, Nayara Thais de Oliveira; Martinho-Carvalho, Ana Claudia; Cunha, Maria Claudia; Lewis, Doris Ruthi
2012-01-01
This study aimed to investigate the auditory and communicative abilities of children diagnosed with Auditory Neuropathy Spectrum Disorder due to a mutation in the otoferlin gene. In this descriptive, qualitative study, two siblings with this diagnosis were assessed. The procedures were speech perception tests for children with profound hearing loss and an assessment of communication abilities using the Behavioral Observation Protocol. As siblings, the two subjects shared a family and communicative context; nevertheless, they developed different communication abilities, especially regarding the use of oral language. The study showed that Auditory Neuropathy Spectrum Disorder is a heterogeneous condition in all its aspects, and that it is not possible to generalize or to assume that cases with similar clinical features will develop similar auditory and communicative abilities, even among siblings. It is concluded that the acquisition of communicative abilities involves subjective factors, which should be investigated based on the uniqueness of each case.
Thinking about touch facilitates tactile but not auditory processing.
Anema, Helen A; de Haan, Alyanne M; Gebuis, Titia; Dijkerman, H Chris
2012-05-01
Mental imagery is considered important for normal conscious experience. It is most frequently investigated in the visual, auditory, and motor domains (imagination of movement), while studies on tactile imagery (imagination of touch) are scarce. The current study investigated the effect of tactile and auditory imagery on left/right discrimination of tactile and auditory stimuli. In line with our hypothesis, we observed that after tactile imagery, tactile stimuli were responded to faster than auditory stimuli, and vice versa. On average, tactile stimuli were responded to faster than auditory stimuli, and stimuli in the imagery condition were responded to more slowly than at baseline (left/right discrimination without an imagery assignment). The former is probably due to the spatial and somatotopic proximity of the fingers receiving the taps and the thumbs performing the response (button press); the latter reflects a dual-task cost. Together, these results provide the first evidence of a behavioural effect of a tactile imagery assignment on the perception of real tactile stimuli.
Evidence for Auditory-Motor Impairment in Individuals with Hyperfunctional Voice Disorders
ERIC Educational Resources Information Center
Stepp, Cara E.; Lester-Smith, Rosemary A.; Abur, Defne; Daliri, Ayoub; Noordzij, J. Pieter; Lupiani, Ashling A.
2017-01-01
Purpose: The vocal auditory-motor control of individuals with hyperfunctional voice disorders was examined using a sensorimotor adaptation paradigm. Method: Nine individuals with hyperfunctional voice disorders and 9 individuals with typical voices produced sustained vowels over 160 trials in 2 separate conditions: (a) while experiencing gradual…
Auditory Neuropathy Spectrum Disorder: A Review
ERIC Educational Resources Information Center
Norrix, Linda W.; Velenovsky, David S.
2014-01-01
Purpose: Auditory neuropathy spectrum disorder, or ANSD, can be a confusing diagnosis to physicians, clinicians, those diagnosed, and parents of children diagnosed with the condition. The purpose of this review is to provide the reader with an understanding of the disorder, the limitations in current tools to determine site(s) of lesion, and…
The influence of an auditory-memory attention-demanding task on postural control in blind persons.
Melzer, Itshak; Damry, Elad; Landau, Anat; Yagev, Ronit
2011-05-01
In order to evaluate the effect of an auditory-memory attention-demanding task on balance control, nine blind adults were compared to nine age- and gender-matched sighted controls. This issue is particularly relevant for the blind population, in whom functional assessment of postural control must be probed through "real life" motor and cognitive tasks. The study aimed to explore whether an auditory-memory attention-demanding cognitive task would influence postural control in blind persons, compared with blindfolded sighted persons. Subjects were instructed to minimize body sway during narrow-base upright standing on a single force platform under two conditions: 1) standing still (single task); 2) as in 1) while performing an auditory-memory attention-demanding cognitive task (dual task). Subjects in both groups stood blindfolded with their eyes closed. Center-of-pressure displacement data were collected and analyzed using summary statistics and stabilogram-diffusion analysis. Blind and sighted subjects had similar postural sway in the eyes-closed condition. However, in the dual task compared with the single task, sighted subjects showed a significant decrease in postural sway while blind subjects did not: the auditory-memory attention-demanding cognitive task had no interference effect on balance control in blind subjects. It seems that sighted individuals used auditory cues to compensate for the momentary loss of vision, whereas blind subjects did not. This may suggest that blind and sighted people use different sensorimotor strategies to achieve stability. Copyright © 2010 Elsevier Ltd. All rights reserved.
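The stabilogram-diffusion analysis mentioned above characterizes center-of-pressure (COP) sway by plotting mean squared displacement against time lag (in the style of Collins & De Luca). A minimal one-dimensional sketch (the function name and sample-based lags are our assumptions):

```python
import numpy as np

def stabilogram_diffusion(cop, max_lag):
    """Mean squared displacement <Δx²> of a 1-D center-of-pressure
    series for each time lag from 1 to max_lag samples."""
    cop = np.asarray(cop, dtype=float)
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        diffs = cop[lag:] - cop[:-lag]       # displacements at this lag
        msd[lag - 1] = np.mean(diffs ** 2)   # average squared displacement
    return msd
```

The slope of the resulting curve at short vs. long lags is what such analyses use to separate short-term (open-loop) from long-term (closed-loop) postural control regimes.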
Olivetti Belardinelli, Marta; Santangelo, Valerio
2005-07-08
This paper examines the characteristics of spatial attention orienting in situations of visual impairment. Two groups of subjects, schizophrenic and blind, with different degrees of visual-spatial information impairment, were tested. In Experiment 1, the schizophrenic subjects were instructed to detect an auditory target that was preceded by a visual cue. The cue could appear in the same location as the target, or separated from it by the vertical visual meridian (VM), the vertical head-centered meridian (HCM), or another meridian. Similarly to normal subjects tested with the same paradigm (Ferlazzo, Couyoumdjian, Padovani, and Olivetti Belardinelli, 2002), schizophrenic subjects showed slower reaction times (RTs) when cued and when the target locations were on opposite sides of the HCM. This HCM effect strengthens the assumption that different auditory and visual spatial maps underlie the representation of attention-orienting mechanisms. In Experiment 2, blind subjects were asked to detect an auditory target, preceded by an auditory cue, while staring at an imaginary point. The point was located either to the left or to the right, in order to control for ocular movements and maintain the dissociation between the HCM and the VM. Differences between crossing and no-crossing conditions of the HCM were not found. It is therefore possible to consider the HCM effect a consequence of the interaction between the visual and auditory modalities. Related theoretical issues are also discussed.
Ranaweera, Ruwan D; Kwon, Minseok; Hu, Shuowen; Tamer, Gregory G; Luh, Wen-Ming; Talavage, Thomas M
2016-01-01
This study investigated the hemisphere-specific effects of the temporal pattern of imaging related acoustic noise on auditory cortex activation. Hemodynamic responses (HDRs) to five temporal patterns of imaging noise corresponding to noise generated by unique combinations of imaging volume and effective repetition time (TR), were obtained using a stroboscopic event-related paradigm with extra-long (≥27.5 s) TR to minimize inter-acquisition effects. In addition to confirmation that fMRI responses in auditory cortex do not behave in a linear manner, temporal patterns of imaging noise were found to modulate both the shape and spatial extent of hemodynamic responses, with classically non-auditory areas exhibiting responses to longer duration noise conditions. Hemispheric analysis revealed the right primary auditory cortex to be more sensitive than the left to the presence of imaging related acoustic noise. Right primary auditory cortex responses were significantly larger during all the conditions. This asymmetry of response to imaging related acoustic noise could lead to different baseline activation levels during acquisition schemes using short TR, inducing an observed asymmetry in the responses to an intended acoustic stimulus through limitations of dynamic range, rather than due to differences in neuronal processing of the stimulus. These results emphasize the importance of accounting for the temporal pattern of the acoustic noise when comparing findings across different fMRI studies, especially those involving acoustic stimulation. Copyright © 2015 Elsevier B.V. All rights reserved.
Ten Brink, Antonia F.; Nijboer, Tanja C. W.; Bergsma, Douwe P.; Barton, Jason J. S.; Van der Stigchel, Stefan
2015-01-01
In patients with visual hemifield defects, residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the direct retinotectal SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal) or spatially disparate from it (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind, visual field. In the one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that multisensory integration is infrequent in the blind field of patients with hemianopia. PMID:25835952
Increased Early Processing of Task-Irrelevant Auditory Stimuli in Older Adults
Tusch, Erich S.; Alperin, Brittany R.; Holcomb, Phillip J.; Daffner, Kirk R.
2016-01-01
The inhibitory deficit hypothesis of cognitive aging posits that older adults’ inability to adequately suppress processing of irrelevant information is a major source of cognitive decline. Prior research has demonstrated that in response to task-irrelevant auditory stimuli there is an age-associated increase in the amplitude of the N1 wave, an ERP marker of early perceptual processing. Here, we tested predictions derived from the inhibitory deficit hypothesis that the age-related increase in N1 would be 1) observed under an auditory-ignore, but not auditory-attend condition, 2) attenuated in individuals with high executive capacity (EC), and 3) augmented by increasing cognitive load of the primary visual task. ERPs were measured in 114 well-matched young, middle-aged, young-old, and old-old adults, designated as having high or average EC based on neuropsychological testing. Under the auditory-ignore (visual-attend) task, participants ignored auditory stimuli and responded to rare target letters under low and high load. Under the auditory-attend task, participants ignored visual stimuli and responded to rare target tones. Results confirmed an age-associated increase in N1 amplitude to auditory stimuli under the auditory-ignore but not auditory-attend task. Contrary to predictions, EC did not modulate the N1 response. The load effect was the opposite of expectation: the N1 to task-irrelevant auditory events was smaller under high load. Finally, older adults did not simply fail to suppress the N1 to auditory stimuli in the task-irrelevant modality; they generated a larger response than to identical stimuli in the task-relevant modality. In summary, several of the study’s findings do not fit the inhibitory-deficit hypothesis of cognitive aging, which may need to be refined or supplemented by alternative accounts. PMID:27806081
Event-related potential evidence of processing lexical pitch-accent in auditory Japanese sentences.
Koso, Ayumi; Hagiwara, Hiroko
2009-09-23
Neural mechanisms that underlie the processing of lexical pitch-accent in auditory Japanese were investigated by using event-related potentials. Native speakers of Japanese listened to two types of short sentences, both consisting of a noun and a verb. The sentences ended with a verb carrying either a congruous or an incongruous pitch-accent pattern, so that pitch-accent violations occurred at the verb in the incongruous condition. The event-related potentials in the incongruous condition showed an increased widespread negativity that started 400 ms after the onset of the deviant lexical item and lasted for about 400 ms. These results suggest that the negativity evoked by violations of lexical pitch-accent provides electrophysiological evidence for the online processing of lexical pitch-accent in auditory Japanese.
Moradi, Shahram; Wahlin, Anna; Hällgren, Mathias; Rönnberg, Jerker; Lidestam, Björn
2017-01-01
This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing-aid users. Twenty-five hearing-aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants' auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores); furthermore, this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). No such improvement was observed in the control group, either promptly after the training or at the one-month follow-up. However, neither a significant between-groups difference nor a group × session interaction was observed. Conclusion: Audiovisual training may be considered in the aural rehabilitation of hearing-aid users to improve listening capabilities in noisy conditions. However, the lack of a significant between-groups effect (audiovisual vs. auditory) or a group × session interaction calls for further research. PMID:28348542
Strait, Dana L.; Kraus, Nina
2011-01-01
Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker's voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and non-musicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not non-musicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development and maintenance of language-related skills, musical training may aid in the prevention, habilitation, and remediation of individuals with a wide range of attention-based language, listening and learning impairments. PMID:21716636
Schall, Sonja; von Kriegstein, Katharina
2014-01-01
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas. PMID:24466026
Sullivan, Jessica R.; Thibodeau, Linda M.; Assmann, Peter F.
2013-01-01
Previous studies have indicated that individuals with normal hearing (NH) experience a perceptual advantage for speech recognition in interrupted noise compared to continuous noise. In contrast, adults with hearing impairment (HI) and younger children with NH receive a minimal benefit. The objective of this investigation was to assess whether auditory training in interrupted noise would improve speech recognition in noise for children with HI and perhaps enhance their utilization of glimpsing skills. A partially-repeated measures design was used to evaluate the effectiveness of seven 1-h sessions of auditory training in interrupted and continuous noise. Speech recognition scores in interrupted and continuous noise were obtained from pre-, post-, and 3 months post-training from 24 children with moderate-to-severe hearing loss. Children who participated in auditory training in interrupted noise demonstrated a significantly greater improvement in speech recognition compared to those who trained in continuous noise. Those who trained in interrupted noise demonstrated similar improvements in both noise conditions while those who trained in continuous noise only showed modest improvements in the interrupted noise condition. This study presents direct evidence that auditory training in interrupted noise can be beneficial in improving speech recognition in noise for children with HI. PMID:23297921
Multisensory integration across the senses in young and old adults
Mahoney, Jeannette R.; Li, Po Ching Clara; Oh-Park, Mooyeon; Verghese, Joe; Holtzer, Roee
2011-01-01
Stimuli are processed concurrently across multiple sensory inputs. Here we directly compared the effect of multisensory integration (MSI) on reaction time across three paired sensory inputs in eighteen young (M=19.17 yrs) and eighteen old (M=76.44 yrs) individuals. Participants were determined to be non-demented and without any medical or psychiatric conditions that would affect their performance. Participants responded to randomly presented unisensory (auditory, visual, somatosensory) stimuli and three paired sensory inputs consisting of auditory-somatosensory (AS), auditory-visual (AV), and visual-somatosensory (VS) stimuli. Results revealed that reaction times (RTs) to all multisensory pairings were significantly faster than those elicited by the constituent unisensory conditions across age groups; findings that could not be accounted for by simple probability summation. Both young and old participants responded fastest to multisensory pairings containing somatosensory input. Compared to younger adults, older adults demonstrated a significantly greater RT benefit when processing concurrent VS information. In terms of co-activation, older adults demonstrated a significant increase in the magnitude of visual-somatosensory co-activation (i.e., multisensory integration), while younger adults demonstrated a significant increase in the magnitude of auditory-visual and auditory-somatosensory co-activation. This study provides the first evidence in support of the facilitative effect of pairing somatosensory with visual stimuli in older adults. PMID:22024545
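The claim that multisensory RT facilitation "could not be accounted for by simple probability summation" refers to tests of the independent race model (Miller's inequality): the bimodal response-time CDF must not exceed the sum of the unisensory CDFs. A minimal sketch on fabricated RT samples (all names and data below are illustrative, not the study's):

```python
import numpy as np

def ecdf(rts, t):
    """Empirical probability that a response has occurred by time t."""
    return float(np.mean(np.asarray(rts) <= t))

def race_model_violations(rt_a, rt_v, rt_av, quantiles=np.arange(0.05, 1.0, 0.05)):
    """Time points where the bimodal CDF exceeds Miller's race-model bound
    P(A<=t) + P(V<=t), i.e. evidence of integration beyond probability summation."""
    ts = np.quantile(np.concatenate([rt_a, rt_v, rt_av]), quantiles)
    hits = []
    for t in ts:
        bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
        if ecdf(rt_av, t) > bound:
            hits.append(float(t))
    return hits

# Fabricated RT samples in ms: bimodal responses markedly faster
rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 200)   # auditory-only
rt_v = rng.normal(310, 40, 200)   # visual-only
rt_av = rng.normal(255, 35, 200)  # bimodal
print(race_model_violations(rt_a, rt_v, rt_av))
```

With a bimodal speed-up this large, the inequality is violated at the early quantiles, which is the pattern interpreted as co-activation rather than statistical facilitation.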
Wang, Yibei; Fan, Xinmiao; Wang, Pu; Fan, Yue; Chen, Xiaowei
2018-01-01
To evaluate auditory development and hearing improvement in patients with bilateral microtia-atresia using softband and implanted bone-anchored hearing devices, and to modify the implantation surgery. The subjects were divided into two groups: the softband group (40 infants, 3 months to 2 years old, Ponto softband) and the implanted group (6 patients, 6-28 years old, Ponto). The Infant-Toddler Meaningful Auditory Integration Scale was used to evaluate auditory development at baseline and after 3, 6, 12, and 24 months, and visual reinforcement audiometry was used to assess the auditory threshold in the softband group. In the implanted group, bone-anchored hearing devices were implanted in combination with auricular reconstruction surgery, and high-resolution CT was used to assess the deformity preoperatively. Auditory thresholds and speech discrimination scores of the patients with implants were measured under the unaided, softband, and implanted conditions. Total Infant-Toddler Meaningful Auditory Integration Scale scores in the softband group improved significantly and approached normal levels. The average visual reinforcement audiometry values under the unaided and softband conditions were 76.75 ± 6.05 dB HL and 32.25 ± 6.20 dB HL (P < 0.01), respectively. In the implanted group, the auditory thresholds under the unaided, softband, and implanted conditions were 59.17 ± 3.76 dB HL, 32.5 ± 2.74 dB HL, and 17.5 ± 5.24 dB HL (P < 0.01), respectively. The respective speech discrimination scores were 23.33 ± 14.72%, 77.17 ± 6.46%, and 96.50 ± 2.66% (P < 0.01). Using softband bone-anchored hearing devices is effective for auditory development and hearing improvement in infants with bilateral microtia-atresia.
Wearing softband bone-anchored hearing devices before auricle reconstruction, and combining bone-anchored hearing device implantation with auricular reconstruction surgery, may be the optimal clinical choice for these patients, resulting in greater hearing improvement with minimal surgical and anesthetic injury. Copyright © 2017 Elsevier B.V. All rights reserved.
Parthasarathy, Aravindakshan; Bartlett, Edward
2012-07-01
Auditory brainstem responses (ABRs), and envelope and frequency following responses (EFRs and FFRs) are widely used to study aberrant auditory processing in conditions such as aging. We have previously reported age-related deficits in auditory processing for rapid amplitude modulation (AM) frequencies using EFRs recorded from a single channel. However, sensitive testing of EFRs along a wide range of modulation frequencies is required to gain a more complete understanding of the auditory processing deficits. In this study, ABRs and EFRs were recorded simultaneously from two electrode configurations in young and old Fischer-344 rats, a common auditory aging model. Analysis shows that the two channels respond most sensitively to complementary AM frequencies. Channel 1, recorded from Fz to mastoid, responds better to faster AM frequencies in the 100-700 Hz range of frequencies, while Channel 2, recorded from the inter-aural line to the mastoid, responds better to slower AM frequencies in the 16-100 Hz range. Simultaneous recording of Channels 1 and 2 using AM stimuli with varying sound levels and modulation depths show that age-related deficits in temporal processing are not present at slower AM frequencies but only at more rapid ones, which would not have been apparent recording from either channel alone. Comparison of EFRs between un-anesthetized and isoflurane-anesthetized recordings in young animals, as well as comparison with previously published ABR waveforms, suggests that the generators of Channel 1 may emphasize more caudal brainstem structures while those of Channel 2 may emphasize more rostral auditory nuclei including the inferior colliculus and the forebrain, with the boundary of separation potentially along the cochlear nucleus/superior olivary complex. 
Simultaneous two-channel recording of EFRs helps to give a more complete understanding of the properties of auditory temporal processing over a wide range of modulation frequencies, which is useful in understanding neural representations of sound stimuli under normal, developmental, or pathological conditions. Copyright © 2012 Elsevier B.V. All rights reserved.
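EFR strength at a given AM frequency is commonly quantified as the spectral amplitude of the averaged response at the modulation frequency. The following is a minimal sketch on synthetic data, not the authors' analysis pipeline; the 40 Hz component and noise level are fabricated for illustration:

```python
import numpy as np

def efr_amplitude(signal, fs, mod_freq):
    """Spectral amplitude of an averaged evoked response at the FFT bin
    nearest the amplitude-modulation frequency."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal)) / n * 2          # single-sided amplitude
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return float(spec[np.argmin(np.abs(freqs - mod_freq))])

# Synthetic "response": a 40 Hz envelope-following component in noise
fs = 1000.0                          # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)        # 1 s epoch
rng = np.random.default_rng(1)
resp = 1.0 * np.sin(2 * np.pi * 40 * t) + 0.2 * rng.standard_normal(t.size)
print(efr_amplitude(resp, fs, 40.0))  # ≈ 1.0, the embedded modulation amplitude
```

In practice this amplitude would be compared against the surrounding noise-floor bins to decide whether a response is present at each tested AM frequency.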
Brain-wide maps of Fos expression during fear learning and recall.
Cho, Jin-Hyung; Rendall, Sam D; Gray, Jesse M
2017-04-01
Fos induction during learning labels neuronal ensembles in the hippocampus that encode a specific physical environment, revealing a memory trace. In the cortex and other regions, the extent to which Fos induction during learning reveals specific sensory representations is unknown. Here we generate high-quality brain-wide maps of Fos mRNA expression during auditory fear conditioning and recall in the setting of the home cage. These maps reveal a brain-wide pattern of Fos induction that is remarkably similar among fear conditioning, shock-only, tone-only, and fear recall conditions, casting doubt on the idea that Fos reveals auditory-specific sensory representations. Indeed, novel auditory tones lead to as much gene induction in visual as in auditory cortex, while familiar (nonconditioned) tones do not appreciably induce Fos anywhere in the brain. Fos expression levels do not correlate with physical activity, suggesting that they are not determined by behavioral activity-driven alterations in sensory experience. In the thalamus, Fos is induced more prominently in limbic than in sensory relay nuclei, suggesting that Fos may be most sensitive to emotional state. Thus, our data suggest that Fos expression during simple associative learning labels ensembles activated generally by arousal rather than specifically by a particular sensory cue. © 2017 Cho et al.; Published by Cold Spring Harbor Laboratory Press.
Alderson, R Matt; Kasper, Lisa J; Patros, Connor H G; Hudec, Kristen L; Tarle, Stephanie J; Lea, Sarah E
2015-01-01
The episodic buffer component of working memory was examined in children with attention deficit/hyperactivity disorder (ADHD) and typically developing peers (TD). Thirty-two children (ADHD = 16, TD = 16) completed three versions of a phonological working memory task that varied with regard to stimulus presentation modality (auditory, visual, or dual auditory and visual), as well as a visuospatial task. Children with ADHD experienced the largest magnitude working memory deficits when phonological stimuli were presented via a unimodal, auditory format. Their performance improved during visual and dual modality conditions but remained significantly below the performance of children in the TD group. In contrast, the TD group did not exhibit performance differences between the auditory- and visual-phonological conditions but recalled significantly more stimuli during the dual-phonological condition. Furthermore, relative to TD children, children with ADHD recalled disproportionately fewer phonological stimuli as set sizes increased, regardless of presentation modality. Finally, an examination of working memory components indicated that the largest magnitude between-group difference was associated with the central executive. Collectively, these findings suggest that ADHD-related working memory deficits reflect a combination of impaired central executive and phonological storage/rehearsal processes, as well as an impaired ability to benefit from bound multimodal information processed by the episodic buffer.
Electrophysiological evidence for a self-processing advantage during audiovisual speech integration.
Treille, Avril; Vilain, Coriandre; Kandel, Sonia; Sato, Marc
2017-09-01
Previous electrophysiological studies have provided strong evidence for early multisensory integrative mechanisms during audiovisual speech perception. From these studies, one unanswered issue is whether hearing our own voice and seeing our own articulatory gestures facilitate speech perception, possibly through a better processing and integration of sensory inputs with our own sensory-motor knowledge. The present EEG study examined the impact of self-knowledge during the perception of auditory (A), visual (V) and audiovisual (AV) speech stimuli that were previously recorded from the participant or from a speaker he/she had never met. Audiovisual interactions were estimated by comparing N1 and P2 auditory evoked potentials during the bimodal condition (AV) with the sum of those observed in the unimodal conditions (A + V). In line with previous EEG studies, our results revealed an amplitude decrease of P2 auditory evoked potentials in AV compared to A + V conditions. Crucially, a temporal facilitation of N1 responses was observed during the visual perception of self speech movements compared to those of another speaker. This facilitation was negatively correlated with the saliency of visual stimuli. These results provide evidence for a temporal facilitation of the integration of auditory and visual speech signals when the visual situation involves our own speech gestures.
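The additive-model comparison described above (bimodal AV vs. the sum A + V of the unimodal responses) can be sketched as follows; the ERP waveforms below are fabricated Gaussian components for illustration only, and the latency windows are generic N1/P2 conventions, not the study's exact parameters:

```python
import numpy as np

def additive_model_difference(erp_av, erp_a, erp_v):
    """Difference wave AV - (A + V); negative values around a component's
    latency indicate a sub-additive (suppressed) bimodal response."""
    return erp_av - (erp_a + erp_v)

def peak_amplitude(erp, times, t_start, t_end):
    """Largest-magnitude amplitude within a latency window (s)."""
    mask = (times >= t_start) & (times <= t_end)
    window = erp[mask]
    return float(window[np.argmax(np.abs(window))])

# Fabricated grand-average ERPs (µV), sampled at 1 kHz over 0-400 ms
times = np.arange(0, 0.4, 0.001)
gauss = lambda mu, sd, amp: amp * np.exp(-0.5 * ((times - mu) / sd) ** 2)
erp_a = gauss(0.100, 0.02, -4.0) + gauss(0.200, 0.03, 5.0)   # auditory N1 + P2
erp_v = gauss(0.200, 0.05, 1.0)                              # visual response
erp_av = gauss(0.095, 0.02, -4.5) + gauss(0.200, 0.03, 4.0)  # reduced P2 in AV

diff = additive_model_difference(erp_av, erp_a, erp_v)
print(peak_amplitude(diff, times, 0.15, 0.25))  # negative: P2 suppression in AV
```

A negative difference in the P2 window corresponds to the amplitude decrease in AV relative to A + V reported in the abstract.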
Turgeon, Christine; Prémont, Amélie; Trudeau-Fisette, Paméla; Ménard, Lucie
2015-05-01
Studies have reported strong links between speech production and perception. We aimed to evaluate the role of long- and short-term auditory feedback alteration on speech production. Eleven adults with normal hearing (controls) and 17 cochlear implant (CI) users (7 pre-lingually deaf and 10 post-lingually deaf adults) were recruited. Short-term auditory feedback deprivation was induced by turning off the CI or by providing masking noise. Acoustic and articulatory measures were obtained during the production of /u/, with and without a tube inserted between the lips (perturbation), and with and without auditory feedback. F1 values were significantly different between the implant OFF and ON conditions for the pre-lingually deaf participants. In the absence of auditory feedback, the pre-lingually deaf participants moved the tongue more forward. Thus, a lack of normal auditory experience of speech may affect the internal representation of a vowel.
ERIC Educational Resources Information Center
Jurkowski, A.J.; Stepp, E.; Hackley, S.A.
2005-01-01
The effect of a visual warning signal (1.0-6.5s random foreperiod, FP) on the latency of voluntary (hand-grip) and reflexive (startle-eyeblink) reactions was investigated in Parkinson's disease (PD) patients and in young and aged control subjects. Equivalent FP effects on blink were observed across groups. By contrast, FP effects diverged for…
Blechert, Jens; Naumann, Eva; Schmitz, Julian; Herbert, Beate M; Tuschen-Caffier, Brunna
2014-01-01
Many individuals restrict their food intake to prevent weight gain. This restriction has both homeostatic and hedonic effects but their relative contribution is currently unclear. To isolate hedonic effects of food restriction, we exposed regular chocolate eaters to one week of chocolate deprivation but otherwise regular eating. Before and after this hedonic deprivation, participants viewed images of chocolate and images of high-calorie but non-chocolate containing foods, while experiential, behavioral and eyeblink startle responses were measured. Compared to satiety, hedonic deprivation triggered increased chocolate wanting, liking, and chocolate consumption but also feelings of frustration and startle potentiation during the intertrial intervals. Deprivation was further characterized by startle inhibition during both chocolate and food images relative to the intertrial intervals. Individuals who responded with frustration to the manipulation and those who scored high on a questionnaire of impulsivity showed more relative startle inhibition. The results reveal the profound effects of hedonic deprivation on experiential, behavioral and attentional/appetitive response systems and underscore the role of individual differences and state variables for startle modulation. Implications for dieting research and practice as well as for eating and weight disorders are discussed.
Herbert, Cornelia; Kissler, Johanna
2010-05-01
Valence-driven modulation of the startle reflex, that is, larger eyeblinks during viewing of unpleasant pictures and inhibited blinks while viewing pleasant pictures, is well documented. The current study investigated whether this motivational priming pattern also occurs during the processing of unpleasant and pleasant words, and to what extent it is influenced by shallow vs. deep encoding of verbal stimuli. Emotional and neutral adjectives were presented for 5 s, and the acoustically elicited startle eyeblink response was measured while subjects memorized the words by means of shallow or deep processing strategies. Results showed blink potentiation to unpleasant and blink inhibition to pleasant adjectives in subjects using shallow encoding strategies. In subjects using deep-encoding strategies, blinks were larger for pleasant than for unpleasant or neutral adjectives. In line with this, free recall of pleasant words was also better in subjects who engaged in deep processing. The results suggest that motivational priming holds as long as processing is perceptual. During deep processing, however, the startle reflex appears to represent a measure of "processing interrupt," facilitating blinks to those stimuli that are more deeply encoded. Copyright 2010 Elsevier B.V. All rights reserved.
Brüggemann, Petra; Szczepek, Agnieszka J.; Klee, Katharina; Gräbel, Stefan; Mazurek, Birgit; Olze, Heidi
2017-01-01
Cochlear implantation (CI) is increasingly being used in the auditory rehabilitation of deaf patients. Here, we investigated whether the auditory rehabilitation can be influenced by the psychological burden caused by mental conditions. Our sample included 47 patients who underwent implantation. All patients were monitored before and 6 months after CI. Auditory performance was assessed using the Oldenburg Inventory (OI) and Freiburg monosyllable (FB MS) speech discrimination test. The health-related quality of life was measured with Nijmegen Cochlear implantation Questionnaire (NCIQ) whereas tinnitus-related distress was measured with the German version of Tinnitus Questionnaire (TQ). We additionally assessed the general perceived quality of life, the perceived stress, coping abilities, anxiety levels and the depressive symptoms. Finally, a structured interview to detect mental conditions (CIDI) was performed before and after surgery. We found that CI led to an overall improvement in auditory performance as well as the anxiety and depression, quality of life, tinnitus distress and coping strategies. CIDI revealed that 81% of patients in our sample had affective, anxiety, and/or somatoform disorders before or after CI. The affective disorders included dysthymia and depression, while anxiety disorders included agoraphobias and unspecified phobias. We also diagnosed cases of somatoform pain disorders and unrecognizable figure somatoform disorders. We found a positive correlation between the auditory performance and the decrease of anxiety and depression, tinnitus-related distress and perceived stress. There was no association between the presence of a mental condition itself and the outcome of auditory rehabilitation. We conclude that the CI candidates exhibit high rates of psychological disorders, and there is a particularly strong association between somatoform disorders and tinnitus. 
The presence of mental disorders remained unaffected by CI but the degree of psychological burden decreased significantly post-CI. The implants benefitted patients in a number of psychosocial areas, improving the symptoms of depression and anxiety, tinnitus, and their quality of life and coping strategies. The prevalence of mental disorders in patients who are candidates for CI suggests the need for a comprehensive psychological and psychosomatic management of their treatment. PMID:28529479
Rizzo, John-Ross; Raghavan, Preeti; McCrery, J R; Oh-Park, Mooyeon; Verghese, Joe
2015-04-01
To evaluate the effect of a novel divided attention task (walking under auditory constraints) on gait performance in older adults and to determine whether this effect was moderated by cognitive status. Design: validation cohort. Setting: general community. Participants: ambulatory older adults without dementia (N=104). Interventions: not applicable. In this pilot study, we evaluated walking under auditory constraints in 104 older adults who completed 3 pairs of walking trials on a gait mat under 1 of 3 randomly assigned conditions: 1 pair without auditory stimulation and 2 pairs with emotionally charged auditory stimulation (happy or sad sounds). The mean age of subjects was 80.6±4.9 years, and 63% (n=66) were women. The mean velocity during normal walking was 97.9±20.6cm/s, and the mean cadence was 105.1±9.9 steps/min. The effect of walking under auditory constraints on gait characteristics was analyzed using a 2-factorial analysis of variance with 1 between-subjects factor (cognitively intact vs. minimal cognitive impairment) and 1 within-subjects factor (type of auditory stimuli). In both the happy and sad auditory stimulation trials, cognitively intact older adults (n=96) showed an average increase of 2.68cm/s in gait velocity (F1.86,191.71=3.99; P=.02) and an average increase of 2.41 steps/min in cadence (F1.75,180.42=10.12; P<.001) compared with trials without auditory stimulation. In contrast, older adults with minimal cognitive impairment (Blessed test score, 5-10; n=8) showed an average reduction of 5.45cm/s in gait velocity (F1.87,190.83=5.62; P=.005) and an average reduction of 3.88 steps/min in cadence (F1.79,183.10=8.21; P=.001) under both auditory stimulation conditions. Neither baseline fall history nor performance of activities of daily living accounted for these differences.
Our results provide preliminary evidence of the differentiating effect of emotionally charged auditory stimuli on gait performance in older individuals with minimal cognitive impairment compared with those without minimal cognitive impairment. A divided attention task using emotionally charged auditory stimuli might be able to elicit compensatory improvement in gait performance in cognitively intact older individuals, but lead to decompensation in those with minimal cognitive impairment. Further investigation is needed to compare gait performance under this task to gait on other dual-task paradigms and to separately examine the effect of physiological aging versus cognitive impairment on gait during walking under auditory constraints. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
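The group-by-condition contrast at the heart of the mixed-design analysis above can be illustrated with a toy computation; the velocities below are invented for illustration, not the study's data:

```python
from statistics import mean

# Columns per subject: [no stimulation, happy sounds, sad sounds];
# gait velocities in cm/s.  All values are hypothetical.
cognitively_intact = [[98.0, 101.0, 100.5],
                      [95.0, 97.5, 98.0]]
minimal_impairment = [[90.0, 84.0, 85.0],
                      [88.0, 83.5, 82.0]]

def mean_change(group):
    """Average change from the no-stimulation baseline (column 0) to
    the mean of the two emotionally charged conditions (columns 1-2)."""
    return mean(mean(row[1:]) - row[0] for row in group)

# Opposite-signed changes in the two groups are what the reported
# group x auditory-condition interaction captures.
intact_delta = mean_change(cognitively_intact)    # positive: faster
impaired_delta = mean_change(minimal_impairment)  # negative: slower
```

With these made-up numbers, the intact group speeds up on average while the impaired group slows down, which is the qualitative pattern the study reports.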
Zhang, Y; Li, D D; Chen, X W
2017-06-20
Objective: To compare, in a case-control study, the speech discrimination of patients with unilateral microtia and external auditory canal atresia with that of normal-hearing subjects in quiet and noisy environments, in order to characterize speech recognition in unilateral external auditory canal atresia and provide a scientific basis for early clinical intervention. Method: Twenty patients with unilateral congenital microtia and external auditory canal atresia and 20 age-matched normal-hearing subjects (control group) were tested. All subjects were tested with Mandarin speech audiometry material to obtain speech discrimination scores (SDS) in quiet and in noise in a sound field. Result: There was no significant difference in speech discrimination scores between the two groups in quiet. There was a statistically significant difference when the speech signal was presented to the affected side and noise to the normal side (monosyllables, disyllables, and sentences; S/N=0 and S/N=-10) (P<0.05). There was no significant difference in speech discrimination scores when the speech signal was presented to the normal side and noise to the affected side. With signal and noise on the same side, there was a statistically significant difference for monosyllabic word recognition (S/N=0 and S/N=-5) (P<0.05), whereas disyllabic words and sentences showed no statistically significant difference (P>0.05). Conclusion: The speech discrimination scores of patients with unilateral congenital microtia and external auditory canal atresia are lower than those of normal subjects under noisy conditions. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.
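The S/N levels quoted above are decibel ratios of speech level to noise level; a small reference sketch, assuming RMS amplitudes:

```python
from math import log10

def snr_db(signal_rms, noise_rms):
    """Signal-to-noise ratio in dB from RMS amplitudes: 0 dB means
    equal levels; negative values mean the noise is stronger."""
    return 20 * log10(signal_rms / noise_rms)

# At S/N = -10 dB (the hardest condition used), the noise RMS is
# about 3.16 times the speech RMS.
equal = snr_db(1.0, 1.0)       # equal speech and noise levels
hard = snr_db(1.0, 10 ** 0.5)  # noise ~3.16x the speech level
```

This is only a reminder of what the dB figures mean, not part of the study's methodology.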
Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.
Vercillo, Tiziana; Gori, Monica
2015-01-01
The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during the integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli (spatially conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis) were presented sequentially. In the primary task, participants had to judge, in a space bisection task, the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task, they had to report occasional changes in the duration of the second auditory stimulus. In the non-attentional condition, participants performed only the primary task (space bisection). Our results showed enhanced auditory precision (and auditory weights) in the attentional condition relative to the non-attentional control condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
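The MLE model invoked above fuses cues weighted by their reliabilities (inverse variances); a minimal sketch, with illustrative numbers that are not from the study:

```python
def mle_integration(x_a, var_a, x_t, var_t):
    """Minimum-variance (MLE) fusion of an auditory and a tactile
    position estimate: each cue is weighted by its inverse variance,
    so the more reliable cue dominates the combined percept."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_t)
    x_hat = w_a * x_a + (1 - w_a) * x_t
    # The fused estimate is at least as precise as the best single cue.
    var_hat = (var_a * var_t) / (var_a + var_t)
    return x_hat, var_hat

# Example: a tactile cue four times more reliable than the auditory
# one pulls the fused location toward the tactile position.
x_hat, var_hat = mle_integration(x_a=10.0, var_a=4.0, x_t=12.0, var_t=1.0)
```

Under this model, the study's finding of larger auditory weights with attention corresponds to attention lowering the auditory variance term.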
Inter-subject synchronization of brain responses during natural music listening
Abrams, Daniel A.; Ryali, Srikanth; Chen, Tianwen; Chordia, Parag; Khouzam, Amirah; Levitin, Daniel J.; Menon, Vinod
2015-01-01
Music is a cultural universal and a rich part of the human experience. However, little is known about common brain systems that support the processing and integration of extended, naturalistic ‘real-world’ music stimuli. We examined this question by presenting extended excerpts of symphonic music, and two pseudomusical stimuli in which the temporal and spectral structure of the Natural Music condition were disrupted, to non-musician participants undergoing functional brain imaging and analysing synchronized spatiotemporal activity patterns between listeners. We found that music synchronizes brain responses across listeners in bilateral auditory midbrain and thalamus, primary auditory and auditory association cortex, right-lateralized structures in frontal and parietal cortex, and motor planning regions of the brain. These effects were greater for natural music compared to the pseudo-musical control conditions. Remarkably, inter-subject synchronization in the inferior colliculus and medial geniculate nucleus was also greater for the natural music condition, indicating that synchronization at these early stages of auditory processing is not simply driven by spectro-temporal features of the stimulus. Increased synchronization during music listening was also evident in a right-hemisphere fronto-parietal attention network and bilateral cortical regions involved in motor planning. While these brain structures have previously been implicated in various aspects of musical processing, our results are the first to show that these regions track structural elements of a musical stimulus over extended time periods lasting minutes. Our results show that a hierarchical distributed network is synchronized between individuals during the processing of extended musical sequences, and provide new insight into the temporal integration of complex and biologically salient auditory sequences. PMID:23578016
Semantic congruency and the (reversed) Colavita effect in children and adults.
Wille, Claudia; Ebersbach, Mirjam
2016-01-01
When presented with auditory, visual, or bimodal audiovisual stimuli in a discrimination task, adults tend to ignore the auditory component in bimodal stimuli and respond to the visual component only (i.e., Colavita visual dominance effect). The same is true for older children, whereas young children are dominated by the auditory component of bimodal audiovisual stimuli. This suggests a change of sensory dominance during childhood. The aim of the current study was to investigate, in three experimental conditions, whether children and adults show sensory dominance when presented with complex semantic stimuli and whether this dominance can be modulated by stimulus characteristics such as semantic (in)congruency, frequency of bimodal trials, and color information. Semantic (in)congruency did not affect the magnitude of the auditory dominance effect in 6-year-olds or the visual dominance effect in adults, but it was a modulating factor of the visual dominance in 9-year-olds (Conditions 1 and 2). Furthermore, the absence of color information (Condition 3) did not affect auditory dominance in 6-year-olds and hardly affected visual dominance in adults, whereas the visual dominance in 9-year-olds disappeared. Our results suggest that (a) sensory dominance in children and adults is not restricted to simple lights and sounds, as used in previous research, but can be extended to semantically meaningful stimuli and that (b) sensory dominance is more robust in 6-year-olds and adults than in 9-year-olds, implying a transitional stage around this age. Copyright © 2015 Elsevier Inc. All rights reserved.
Visual-auditory integration during speech imitation in autism.
Williams, Justin H G; Massaro, Dominic W; Peel, Natalie J; Bosseler, Alexis; Suddendorf, Thomas
2004-01-01
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training.
Music playing and memory trace: evidence from event-related potentials.
Kamiyama, Keiko; Katahira, Kentaro; Abla, Dilshat; Hori, Koji; Okanoya, Kazuo
2010-08-01
We examined the relationship between motor practice and auditory memory for sound sequences to evaluate the hypothesis that practice involving physical performance might enhance auditory memory. Participants learned two unfamiliar sound sequences using different training methods. Under the key-press condition, they learned a melody while pressing a key during auditory input. Under the no-key-press condition, they listened to another melody without any key pressing. The two melodies were presented alternately, and all participants were trained in both methods. Participants were instructed to pay attention under both conditions. After training, they listened to the two melodies again without pressing keys, and ERPs were recorded. During the ERP recordings, 10% of the tones in these melodies deviated from the originals. The grand-average ERPs showed that the amplitude of mismatch negativity (MMN) elicited by deviant stimuli was larger under the key-press condition than under the no-key-press condition. This effect appeared only in the high absolute pitch group, which included those with a pronounced ability to identify a note without external reference. This result suggests that the effect of training with key pressing was mediated by individual musical skills. Copyright 2010 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.
Phillips, D P; Farmer, M E
1990-11-15
This paper explores the nature of the processing disorder underlying the speech discrimination deficit in the syndrome of acquired word deafness following pathology of the primary auditory cortex. A critical examination of the evidence on this disorder revealed the following. First, the most profound forms of the condition are expressed not only in an isolation of the cerebral linguistic processor from auditory input, but in a failure of even the perceptual elaboration of the relevant sounds. Second, in agreement with earlier studies, we conclude that the perceptual dimension disturbed in word deafness is a temporal one. We argue, however, that it is not a generalized disorder of auditory temporal processing, but one largely restricted to the processing of sounds with temporal content in the milliseconds to tens-of-milliseconds time frame. The perceptual elaboration of sounds with temporal content outside that range, in either direction, may survive the disorder. Third, we present neurophysiological evidence that the primary auditory cortex has a special role in the representation of auditory events in that time frame, but not in the representation of auditory events with temporal grains outside that range.
Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L
2012-04-01
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. © 2012 New York Academy of Sciences.
When listening to rain sounds boosts arithmetic ability
Proverbio, Alice Mado; De Benedetto, Francesco; Ferrari, Maria Vittoria; Ferrarini, Giorgia
2018-01-01
Studies in the literature have provided conflicting evidence about the effects of background noise or music on concurrent cognitive tasks. Some studies have shown a detrimental effect, while others have shown a beneficial effect of background auditory stimuli. The aim of this study was to investigate the influence of agitating, happy or touching music, as opposed to environmental sounds or silence, on the ability of non-musician subjects to perform arithmetic operations. Fifty university students (25 women and 25 men, 25 introverts and 25 extroverts) volunteered for the study. The participants were administered 180 easy or difficult arithmetic operations (division, multiplication, subtraction and addition) while listening to heavy rain sounds, silence or classical music. Silence was detrimental when participants were faced with difficult arithmetic operations, as it was associated with significantly worse accuracy and slower RTs than music or rain sound conditions. This finding suggests that the benefit of background stimulation was not music-specific but possibly due to an enhanced cerebral alertness level induced by the auditory stimulation. Introverts were always faster than extroverts in solving mathematical problems, except when the latter performed calculations accompanied by the sound of heavy rain, a condition that made them as fast as introverts. While the background auditory stimuli had no effect on the arithmetic ability of either group in the easy condition, it strongly affected extroverts in the difficult condition, with RTs being faster during agitating or joyful music as well as rain sounds, compared to the silent condition. For introverts, agitating music was associated with faster response times than the silent condition. This group difference may be explained on the basis of the notion that introverts have a generally higher arousal level compared to extroverts and would therefore benefit less from the background auditory stimuli. PMID:29466472
Effects of Visual Speech on Early Auditory Evoked Fields - From the Viewpoint of Individual Variance
Yahata, Izumi; Kanno, Akitake; Hidaka, Hiroshi; Sakamoto, Shuichi; Nakasato, Nobukazu; Kawashima, Ryuta; Katori, Yukio
2017-01-01
The effects of visual speech (the moving image of the speaker's face uttering a speech sound) on early auditory evoked fields (AEFs) were examined using a helmet-shaped magnetoencephalography system in 12 healthy volunteers (9 males, mean age 35.5 years). AEFs (N100m) in response to the monosyllabic sound /be/ were recorded and analyzed under three different visual stimulus conditions: the moving image of the same speaker's face uttering /be/ (congruent visual stimuli) or uttering /ge/ (incongruent visual stimuli), and visual noise (a still image processed from the speaker's face using a strong Gaussian filter; control condition). On average, the latency of N100m was significantly shortened in the bilateral hemispheres for both congruent and incongruent auditory/visual (A/V) stimuli compared to the control A/V condition. However, the degree of N100m shortening was not significantly different between the congruent and incongruent A/V conditions, despite the significant differences in psychophysical responses between these two A/V conditions. Moreover, analysis of the magnitudes of these visual effects on AEFs in individuals showed that the lip-reading effects on AEFs tended to be well correlated between the two different audio-visual conditions (congruent vs. incongruent visual stimuli) in the bilateral hemispheres but were not significantly correlated between the right and left hemispheres. On the other hand, no significant correlation was observed between the magnitudes of visual speech effects and psychophysical responses. These results may indicate that the auditory-visual interaction observed on the N100m is a fundamental process that does not depend on the congruency of the visual information. PMID:28141836
Speech Compensation for Time-Scale-Modified Auditory Feedback
ERIC Educational Resources Information Center
Ogane, Rintaro; Honda, Masaaki
2014-01-01
Purpose: The purpose of this study was to examine speech compensation in response to time-scale-modified auditory feedback during the transition of the semivowel for a target utterance of /ija/. Method: Each utterance session consisted of 10 control trials in the normal feedback condition followed by 20 perturbed trials in the modified auditory…
Age-Related Hearing Loss: Quality of Care for Quality of Life
ERIC Educational Resources Information Center
Li-Korotky, Ha-Sheng
2012-01-01
Age-related hearing loss (ARHL), known as presbycusis, is characterized by progressive deterioration of auditory sensitivity, loss of the auditory sensory cells, and central processing functions associated with the aging process. ARHL is the third most prevalent chronic condition in older Americans, after hypertension and arthritis, and is a…
ERIC Educational Resources Information Center
Mullen, Stuart; Dixon, Mark R.; Belisle, Jordan; Stanley, Caleb
2017-01-01
The current study sought to evaluate the efficacy of a stimulus equivalence training procedure in establishing auditory-tactile-visual stimulus classes with 2 children with autism and developmental delays. Participants were exposed to vocal-tactile (A-B) and tactile-picture (B-C) conditional discrimination training and were tested for the…
Ménard, Lucie; Turgeon, Christine; Trudeau-Fisette, Paméla; Bellavance-Courtemanche, Marie
2016-01-01
The impact of congenital visual deprivation on speech production in adults was examined in an ultrasound study of compensation strategies for lip-tube perturbation. Acoustic and articulatory analyses of the rounded vowel /u/ produced by 12 congenitally blind adult French speakers and 11 sighted adult French speakers were conducted under two conditions: normal and perturbed (with a 25-mm diameter tube inserted between the lips). Vowels were produced with auditory feedback and without auditory feedback (masked noise) to evaluate the extent to which both groups relied on this type of feedback to control speech movements. The acoustic analyses revealed that all participants mainly altered F2 and F0 and, to a lesser extent, F1 in the perturbed condition - only when auditory feedback was available. There were group differences in the articulatory strategies recruited to compensate; while all speakers moved their tongues more backward in the perturbed condition, blind speakers modified tongue-shape parameters to a greater extent than sighted speakers.
Identification of neural structures involved in stuttering using vibrotactile feedback.
Cheadle, Oliver; Sorger, Clarissa; Howell, Peter
Feedback delivered over auditory and vibratory afferent pathways has different effects on the fluency of people who stutter (PWS). These differences were exploited to investigate the neural structures involved in stuttering. The speech signal was used to vibrate locations on the body (vibrotactile feedback, VTF). Eleven PWS read passages under VTF and control (no-VTF) conditions. All combinations of vibration amplitude, synchronous or delayed VTF, and vibrator position (hand, sternum or forehead) were presented. Control conditions were performed at the beginning, middle and end of test sessions. Stuttering rate, but not speaking rate, differed between the control and VTF conditions. Notably, speaking rate did not change between delayed and synchronous VTF, in contrast with what happens with auditory feedback. This showed that cerebellar mechanisms, which are affected when auditory feedback is delayed, were not implicated in the fluency-enhancing effects of VTF, suggesting that there is a second fluency-enhancing mechanism. Copyright © 2018 Elsevier Inc. All rights reserved.
Developing Physiologic Models for Emergency Medical Procedures Under Microgravity
NASA Technical Reports Server (NTRS)
Parker, Nigel; O'Quinn, Veronica
2012-01-01
Several technological enhancements have been made to METI's commercial Emergency Care Simulator (ECS) with regard to how microgravity affects human physiology. The ECS uses both a software-only lung simulation, and an integrated mannequin lung that uses a physical lung bag for creating chest excursions, and a digital simulation of lung mechanics and gas exchange. METI's patient simulators incorporate models of human physiology that simulate lung and chest wall mechanics, as well as pulmonary gas exchange. Microgravity affects how O2 and CO2 are exchanged in the lungs. Procedures were also developed to take into account the Glasgow Coma Scale for determining levels of consciousness by varying the ECS eye-blinking function to partially indicate the level of consciousness of the patient. In addition, the ECS was modified to provide various levels of pulses from weak and thready to hyper-dynamic to assist in assessing patient conditions from the femoral, carotid, brachial, and pedal pulse locations.
Fontán-Lozano, Angela; Romero-Granados, Rocío; Troncoso, Julieta; Múnera, Alejandro; Delgado-García, José María; Carrión, Angel M
2008-10-01
Histone deacetylases (HDACs) are enzymes that maintain chromatin in a condensed state associated with the absence of transcription. We studied the role of HDACs in learning and memory processes. Both eyeblink classical conditioning (EBCC) and an object recognition memory (ORM) task induced an increase in histone H3 acetylation (Ac-H3). Systemic treatment with HDAC inhibitors improved cognitive performance in the EBCC and ORM tests. Immunohistochemistry and gene expression analyses indicated that administration of HDAC inhibitors lowered the stimulation threshold at which Ac-H3 and gene expression reach the levels required for learning and memory. Finally, we evaluated the effect of systemic administration of HDAC inhibitors in mouse models of neurodegeneration and aging. HDAC inhibitors reversed learning and consolidation deficits in ORM in these models. These results point to HDAC inhibitors as candidate agents for the palliative treatment of learning and memory impairments in aging and in neurodegenerative disorders.
Klinke, R; Kral, A; Heid, S; Tillein, J; Hartmann, R
1999-09-10
In congenitally deaf cats, the central auditory system is deprived of acoustic input because of degeneration of the organ of Corti before the onset of hearing. Primary auditory afferents survive and can be stimulated electrically. By means of an intracochlear implant and an accompanying sound processor, congenitally deaf kittens were exposed to sounds and conditioned to respond to tones. After months of exposure to meaningful stimuli, the cortical activity in chronically implanted cats produced field potentials of higher amplitudes, expanded in area, developed long latency responses indicative of intracortical information processing, and showed more synaptic efficacy than in naïve, unstimulated deaf cats. The activity established by auditory experience resembles activity in hearing animals.
Quantifying auditory handicap. A new approach.
Jerger, S; Jerger, J
1979-01-01
This report describes a new audiovisual test procedure for the quantification of auditory handicap (QUAH). The QUAH test attempts to recreate in the laboratory a series of everyday listening situations. Individual test items represent psychomotor tasks. Data on 53 normal-hearing listeners described performance as a function of the message-to-competition ratio (MCR). Results indicated that, for further studies, an MCR of 0 dB represents the condition above which the task seemed too easy and below which the task appeared too difficult for normal-hearing subjects. The QUAH approach to the measurement of auditory handicap seems promising as an experimental tool. Further studies are needed to describe the relation of QUAH results (1) to clinical audiologic measures and (2) to more traditional indices of auditory handicap.
Hearing visuo-tactile synchrony - Sound-induced proprioceptive drift in the invisible hand illusion.
Darnai, Gergely; Szolcsányi, Tibor; Hegedüs, Gábor; Kincses, Péter; Kállai, János; Kovács, Márton; Simon, Eszter; Nagy, Zsófia; Janszky, József
2017-02-01
The rubber hand illusion (RHI) and its variant, the invisible hand illusion (IHI), are useful for investigating multisensory aspects of bodily self-consciousness. Here, we explored whether auditory conditioning during an RHI could enhance the trisensory visuo-tactile-proprioceptive interaction underlying the IHI. Our paradigm consisted of an IHI session that was followed by an RHI session and another IHI session. The IHI sessions had two parts presented in counterbalanced order. One part was conducted in silence, whereas the other was conducted against a backdrop of metronome beats that occurred in synchrony with the brush movements used to induce the illusion. In a first experiment, the RHI session also involved metronome beats and was aimed at creating an associative memory between the brush stroking of a rubber hand and the sounds. An analysis of the IHI sessions showed that the participants' perceived hand position drifted more towards the body midline in the metronome condition than in the silent condition, with no sound-related differences between sessions. Thus, the sounds, but not the auditory RHI conditioning, influenced the IHI. In a second experiment, the RHI session was conducted without metronome beats. This confirmed the conditioning-independent presence of sound-induced proprioceptive drift in the IHI. Together, these findings show that the influence of visuo-tactile integration on proprioceptive updating is modifiable by irrelevant auditory cues merely through the temporal correspondence between the visuo-tactile and auditory events. © 2016 The British Psychological Society.
Sommers, Mitchell S.; Phelps, Damian
2016-01-01
One goal of the present study was to establish whether providing younger and older adults with visual speech information (both seeing and hearing a talker compared with listening alone) would reduce listening effort for understanding speech in noise. In addition, we used an individual differences approach to assess whether changes in listening effort were related to changes in visual enhancement – the improvement in speech understanding in going from an auditory-only (A-only) to an auditory-visual (AV) condition. To compare word recognition in A-only and AV modalities, younger and older adults identified words in both A-only and AV conditions in the presence of six-talker babble. Listening effort was assessed using a modified version of a serial recall task. Participants heard (A-only) or saw and heard (AV) a talker producing individual words without background noise. List presentation was stopped randomly and participants were then asked to repeat the last 3 words that were presented. Listening effort was assessed using recall performance in the 2-back and 3-back positions. Younger, but not older, adults exhibited reduced listening effort as indexed by greater recall in the 2- and 3-back positions for the AV compared with the A-only presentations. For younger, but not older adults, changes in performance from the A-only to the AV condition were moderately correlated with visual enhancement. Results are discussed within a limited-resource model of both A-only and AV speech perception. PMID:27355772
Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F
2015-11-01
Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200ms, 220-280ms, and 350-500ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.
Romero, Ana Carla Leite; Alfaya, Lívia Marangoni; Gonçales, Alina Sanches; Frizzo, Ana Claudia Figueiredo; Isaac, Myriam de Lima
2016-01-01
Introduction: The auditory system of HIV-positive children may have deficits at various levels, such as a high incidence of middle-ear problems that can cause hearing loss. Objective: The objective of this study is to characterize the development of children infected by the Human Immunodeficiency Virus (HIV) on the Simplified Auditory Processing Test (SAPT) and the Staggered Spondaic Word Test. Methods: We performed behavioral tests composed of the Simplified Auditory Processing Test and the Portuguese version of the Staggered Spondaic Word Test (SSW). The participants were 15 children infected by HIV, all using antiretroviral medication. Results: The children had abnormal auditory processing as verified by the SAPT and the Portuguese version of the SSW. On the SAPT, 60% of the children presented hearing impairment, and the memory test for verbal sounds showed the most errors (53.33%); on the SSW, 86.67% of the children showed deficits in figure-ground, attention, and auditory memory skills. Furthermore, there were more errors under background-noise conditions in both age groups, with most errors in the left ear in the group of 8-year-olds and similar results for the group aged 9 years. Conclusion: The high incidence of hearing loss in children with HIV and its comorbidity with several biological and environmental factors indicate the need for: 1) family and professional awareness of the impact of auditory alterations on the development and learning of children with HIV, and 2) access to educational plans and follow-up with multidisciplinary teams as early as possible to minimize the damage caused by auditory deficits. PMID:28050213
Effect of Auditory Constraints on Motor Performance Depends on Stage of Recovery Post-Stroke
Aluru, Viswanath; Lu, Ying; Leung, Alan; Verghese, Joe; Raghavan, Preeti
2014-01-01
In order to develop evidence-based rehabilitation protocols post-stroke, one must first reconcile the vast heterogeneity in the post-stroke population and develop protocols to facilitate motor learning in the various subgroups. The main purpose of this study is to show that auditory constraints interact with the stage of recovery post-stroke to influence motor learning. We characterized the stages of upper limb recovery using task-based kinematic measures in 20 subjects with chronic hemiparesis. We used a bimanual wrist extension task, performed with a custom-made wrist trainer, to facilitate learning of wrist extension in the paretic hand under four auditory conditions: (1) without auditory cueing; (2) to non-musical happy sounds; (3) to self-selected music; and (4) to a metronome beat set at a comfortable tempo. Two bimanual trials (15 s each) were followed by one unimanual trial with the paretic hand over six cycles under each condition. Clinical metrics, wrist and arm kinematics, and electromyographic activity were recorded. Hierarchical cluster analysis with the Mahalanobis metric based on baseline speed and extent of wrist movement stratified subjects into three distinct groups, which reflected their stage of recovery: spastic paresis, spastic co-contraction, and minimal paresis. In spastic paresis, the metronome beat increased wrist extension, but also increased muscle co-activation across the wrist. In contrast, in spastic co-contraction, no auditory stimulation increased wrist extension and reduced co-activation. In minimal paresis, wrist extension did not improve under any condition. The results suggest that auditory task constraints interact with stage of recovery during motor learning after stroke, perhaps due to recruitment of distinct neural substrates over the course of recovery. The findings advance our understanding of the mechanisms of progression of motor recovery and lay the foundation for personalized treatment algorithms post-stroke. 
PMID:25002859
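The stratification step described above (hierarchical cluster analysis with a Mahalanobis metric over baseline speed and extent of wrist movement) can be sketched as follows. The data, group means, and spreads below are hypothetical illustrations, not the study's measurements; only the overall procedure (Mahalanobis distances, agglomerative clustering, a three-cluster cut) follows the abstract.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical baseline kinematics per subject: [speed, extent of wrist movement].
# Three synthetic groups stand in for spastic paresis, spastic co-contraction,
# and minimal paresis; the numbers are illustrative only.
rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal([0.2, 10.0], [0.05, 2.0], size=(7, 2)),
    rng.normal([0.5, 25.0], [0.05, 3.0], size=(7, 2)),
    rng.normal([1.0, 55.0], [0.08, 4.0], size=(6, 2)),
])

# Mahalanobis distances account for the covariance between the two measures,
# so the differently-scaled dimensions contribute comparably
VI = np.linalg.inv(np.cov(X.T))
D = pdist(X, metric="mahalanobis", VI=VI)

# Agglomerative clustering; cut the dendrogram into three flat clusters
Z = linkage(D, method="average")
labels = fcluster(Z, t=3, criterion="maxclust")
```

With well-separated groups, the three flat clusters recover the three recovery stages; in practice the cluster count would be justified from the dendrogram rather than fixed in advance.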
Franken, Matthias K; Eisner, Frank; Acheson, Daniel J; McQueen, James M; Hagoort, Peter; Schoffelen, Jan-Mathijs
2018-06-21
Speaking is a complex motor skill that requires near-instantaneous integration of sensory and motor-related information. Current theory hypothesizes a complex interplay between motor and auditory processes during speech production, involving online comparison of the speech output with an internally generated forward model. To examine the neural correlates of this intricate interplay between sensory and motor processes, the current study uses altered auditory feedback (AAF) in combination with magnetoencephalography (MEG). Participants vocalized the vowel /e/ and heard auditory feedback that was temporarily pitch-shifted by only 25 cents, while neural activity was recorded with MEG. As a control condition, participants also heard recordings of the same auditory feedback that they had heard in the first half of the experiment, now without vocalizing. The participants were not aware of any perturbation of the auditory feedback. We found that auditory cortical areas responded more strongly to the pitch shifts during vocalization. In addition, auditory feedback perturbation resulted in spectral power increases in the θ and lower β bands, predominantly in sensorimotor areas. These results are in line with current models of speech production, suggesting that auditory cortical areas are involved in an active comparison between a forward model's prediction and the actual sensory input. Subsequently, these areas interact with motor areas to generate a motor response. Furthermore, the results suggest that θ and β power increases support auditory-motor interaction, motor error detection and/or sensory prediction processing. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Dagnino-Subiabre, A; Terreros, G; Carmona-Fontaine, C; Zepeda, R; Orellana, J A; Díaz-Véliz, G; Mora, S; Aboitiz, F
2005-01-01
Chronic stress affects brain areas involved in learning and emotional responses. These alterations have been related with the development of cognitive deficits in major depression. The aim of this study was to determine the effect of chronic immobilization stress on the auditory and visual mesencephalic regions in the rat brain. We analyzed in Golgi preparations whether stress impairs the neuronal morphology of the inferior (auditory processing) and superior colliculi (visual processing). Afterward, we examined the effect of stress on acoustic and visual conditioning using an avoidance conditioning test. We found that stress induced dendritic atrophy in inferior colliculus neurons and did not affect neuronal morphology in the superior colliculus. Furthermore, stressed rats showed a stronger impairment in acoustic conditioning than in visual conditioning. Fifteen days post-stress the inferior colliculus neurons completely restored their dendritic structure, showing a high level of neural plasticity that is correlated with an improvement in acoustic learning. These results suggest that chronic stress has more deleterious effects in the subcortical auditory system than in the visual system and may affect the aversive system and fear-like behaviors. Our study opens a new approach to understand the pathophysiology of stress and stress-related disorders such as major depression.
Orsini, Caitlin A; Maren, Stephen
2009-11-01
Auditory fear conditioning requires anatomical projections from the medial geniculate nucleus (MGN) of the thalamus to the amygdala. Several lines of work indicate that the MGN is a critical sensory relay for auditory information during conditioning, but is not itself involved in the encoding of long-term fear memories. In the present experiments, we examined whether the MGN plays a similar role in the extinction of conditioned fear. Twenty-four hours after Pavlovian fear conditioning, rats received bilateral intra-thalamic infusions of either NBQX (an AMPA receptor antagonist) or MK-801 (an NMDA receptor antagonist) (Experiment 1), anisomycin (a protein synthesis inhibitor; Experiment 2) or U0126 (a MEK inhibitor; Experiment 3) immediately prior to an extinction session in a novel context. The next day rats received a tone test in a drug-free state to assess their extinction memory; freezing served as an index of fear. Glutamate receptor antagonism prevented both the expression and extinction of conditioned fear. In contrast, neither anisomycin nor U0126 affected extinction. These results suggest that the MGN is a critical sensory relay for auditory information during extinction training, but is not itself a site of plasticity underlying the formation of the extinction memory.
Trivedi, Mehul A; Coover, Gary D
2006-04-03
Pavlovian delay conditioning, in which a conditioned stimulus (CS) and unconditioned stimulus (US) co-terminate, is thought to reflect non-declarative memory. In contrast, trace conditioning, in which the CS and US are temporally separate, is thought to reflect declarative memory. Hippocampal lesions impair acquisition and expression of trace conditioning measured by the conditioned freezing and eyeblink responses, while having little effect on the acquisition of delay conditioning. Recent evidence suggests that lesions of the ventral hippocampus (VH) impair conditioned fear under conditions in which dorsal hippocampal (DH) lesions have little effect. In the present study, we examined the time-course of fear expression after delay and trace conditioning using the fear-potentiated startle (FPS) reflex, and the effects of pre- and post-training lesions to the VH and DH on trace-conditioned FPS. We found that both delay- and trace-conditioned rats displayed significant FPS near the end of the CS relative to the unpaired control group. In contrast, trace-conditioned rats displayed significant FPS throughout the duration of the trace interval, whereas FPS decayed rapidly to baseline after CS offset in delay-conditioned rats. In Experiment 2, both DH and VH lesions were found to significantly reduce the overall magnitude of FPS compared to the control group; however, no differences were found between the DH and VH groups. These findings support a role for both the DH and VH in trace fear conditioning, and suggest that the greater effect of VH lesions on conditioned fear might be specific to certain measures of fear.
Impaired eye blink classical conditioning distinguishes dystonic patients with and without tremor.
Antelmi, E; Di Stasio, F; Rocchi, L; Erro, R; Liguori, R; Ganos, C; Brugger, F; Teo, J; Berardelli, A; Rothwell, J; Bhatia, K P
2016-10-01
Tremor is frequently associated with dystonia, but its pathophysiology is still unclear. Dysfunctions of cerebellar circuits are known to play a role in the pathophysiology of action-induced tremors, and cerebellar impairment has frequently been associated with dystonia. However, a link between dystonic tremor and cerebellar abnormalities has not been demonstrated so far. Twenty-five patients with idiopathic isolated cervical dystonia, with and without tremor, were enrolled. We studied the excitability of inhibitory circuits in the brainstem by measuring the R2 blink reflex recovery cycle (BRC) and implicit learning mediated by the cerebellum by means of eyeblink classical conditioning (EBCC). Results were compared with those obtained in a group of age-matched healthy subjects (HS). Statistical analysis did not disclose any significant clinical differences between dystonic patients with and without tremor. Patients with dystonia (regardless of the presence of tremor) showed decreased inhibition of the R2 blink reflex by conditioning pulses compared with HS. Patients with dystonic tremor showed a decreased number of conditioned responses in the EBCC paradigm compared to HS and dystonic patients without tremor. The present data show that cerebellar impairment segregates with the presence of tremor in patients with dystonia, suggesting that the cerebellum might have a role in the occurrence of dystonic tremor. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sullivan, Jessica R; Osman, Homira; Schafer, Erin C
2015-06-01
The objectives of the current study were to examine the effect of noise (-5 dB SNR) on auditory comprehension and to examine its relationship with working memory. It was hypothesized that noise has a negative impact on information processing, auditory working memory, and comprehension. Children with normal hearing between the ages of 8 and 10 years were administered working memory and comprehension tasks in quiet and noise. The comprehension measure comprised 5 domains: main idea, details, reasoning, vocabulary, and understanding messages. Performance on auditory working memory and comprehension tasks was significantly poorer in noise than in quiet. The reasoning, details, understanding, and vocabulary subtests were particularly affected in noise (p < .05). The relationship between auditory working memory and comprehension was stronger in noise than in quiet, suggesting an increased contribution of working memory. These data suggest that school-age children's auditory working memory and comprehension are negatively affected by noise. Performance on comprehension tasks in noise is strongly related to demands placed on working memory, supporting the theory that degrading listening conditions draws resources away from the primary task.
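A -5 dB SNR condition like the one above means the competing noise carries roughly three times the power of the speech. A minimal sketch of mixing a signal and noise at a target SNR by scaling the noise; the signals, sampling rate, and function name here are hypothetical stand-ins, not the study's stimuli:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then add it."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # SNR_dB = 10*log10(p_speech / (gain**2 * p_noise))  =>  solve for gain
    gain = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + gain * noise

# Hypothetical 1-second signals at 16 kHz: a pure tone standing in for speech,
# white noise standing in for the competing background
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 440.0 * t)
noise = np.random.default_rng(1).standard_normal(fs)
mixed = mix_at_snr(speech, noise, snr_db=-5.0)  # noise power 5 dB above speech
```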
Impact of Educational Level on Performance on Auditory Processing Tests.
Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane
2016-01-01
Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demands of the auditory processing tests rather than with auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.
Spatiotemporal differentiation in auditory and motor regions during auditory phoneme discrimination.
Aerts, Annelies; Strobbe, Gregor; van Mierlo, Pieter; Hartsuiker, Robert J; Corthals, Paul; Santens, Patrick; De Letter, Miet
2017-06-01
Auditory phoneme discrimination (APD) is supported by both auditory and motor regions through a sensorimotor interface embedded in a fronto-temporo-parietal cortical network. However, the specific spatiotemporal organization of this network during APD with respect to different types of phonemic contrasts is still unclear. Here, we use source reconstruction, applied to event-related potentials in a group of 47 participants, to uncover a potential spatiotemporal differentiation in these brain regions during a passive and active APD task with respect to place of articulation (PoA), voicing and manner of articulation (MoA). Results demonstrate that in an early stage (50-110 ms), auditory, motor and sensorimotor regions elicit more activation during the passive and active APD task with MoA and the active APD task with voicing compared to PoA. In a later stage (130-175 ms), the same auditory and motor regions elicit more activation during the APD task with PoA compared to MoA and voicing, yet only in the active condition, implying important timing differences. Degree of attention influences a frontal network during the APD task with PoA, whereas auditory regions are more affected during the APD task with MoA and voicing. Based on these findings, it can be cautiously suggested that APD is supported by the integration of early activation of auditory-acoustic properties in superior temporal regions, more pronounced for MoA and voicing, and later auditory-to-motor integration in sensorimotor areas, more pronounced for PoA.
Fear Conditioning is Disrupted by Damage to the Postsubiculum
Robinson, Siobhan; Bucci, David J.
2011-01-01
The hippocampus plays a central role in spatial and contextual learning and memory; however, relatively little is known about the specific contributions of parahippocampal structures that interface with the hippocampus. The postsubiculum (PoSub) is reciprocally connected with a number of hippocampal, parahippocampal and subcortical structures that are involved in spatial learning and memory. In addition, behavioral data suggest that PoSub is needed for optimal performance during tests of spatial memory. Together, these data suggest that PoSub plays a prominent role in spatial navigation. Currently it is unknown whether the PoSub is needed for other forms of learning and memory that also require the formation of associations among multiple environmental stimuli. To address this gap in the literature, we investigated the role of PoSub in Pavlovian fear conditioning. In Experiment 1, male rats received either lesions of PoSub or sham surgery prior to training in a classical fear conditioning procedure. On the training day a tone was paired with foot shock three times. Conditioned fear to the training context was evaluated 24 hr later by placing rats back into the conditioning chamber without presenting any tones or shocks. Auditory fear was assessed on the third day by presenting the auditory stimulus in a novel environment (no shock). PoSub-lesioned rats exhibited impaired acquisition of the conditioned fear response as well as impaired expression of contextual and auditory fear conditioning. In Experiment 2, PoSub lesions were made 1 day after training to specifically assess the role of PoSub in fear memory. No deficits in the expression of contextual fear were observed, but freezing to the tone was significantly reduced in PoSub-lesioned rats compared to shams. Together, these results indicate that PoSub is necessary for normal acquisition of conditioned fear, and that PoSub contributes to the expression of auditory but not contextual fear memory. PMID:22076971
Brenowitz, Eliot A; Lent, Karin; Rubel, Edwin W
2007-06-20
An important area of research in neuroscience is understanding what properties of brain structure and function are stimulated by sensory experience and behavioral performance. We tested the roles of experience and behavior in seasonal plasticity of the neural circuits that regulate learned song behavior in adult songbirds. Neurons in these circuits receive auditory input and show selective auditory responses to conspecific song. We asked whether auditory input or song production contribute to seasonal growth of telencephalic song nuclei. Adult male Gambel's white-crowned sparrows were surgically deafened, which eliminates auditory input and greatly reduces song production. These birds were then exposed to photoperiod and hormonal conditions that regulate the growth of song nuclei. We measured the volumes of the nuclei HVC, robust nucleus of arcopallium (RA), and area X at 7 and 30 d after exposure to long days plus testosterone in deafened and normally hearing birds. We also assessed song production and examined protein kinase C (PKC) expression because previous research reported that immunostaining for PKC increases transiently after deafening. Deafening did not delay or block the growth of the song nuclei to their full breeding-condition size. PKC activity in RA was not altered by deafening in the sparrows. Song continued to be well structured for up to 10 months after deafening, but song production decreased almost eightfold. These results suggest that neither auditory input nor high rates of song production are necessary for seasonal growth of the adult song control system in this species.
Talker and lexical effects on audiovisual word recognition by adults with cochlear implants.
Kaiser, Adam R; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B
2003-04-01
The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, R(a), was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
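The visual enhancement measure R(a) described above is conventionally computed as the audiovisual gain normalized by the headroom remaining in the auditory-only format, i.e., Ra = (AV − A) / (1 − A) in the style of Sumby and Pollack. The exact formula used by these authors is assumed here, so treat this as an illustrative sketch:

```python
def visual_enhancement(a_only: float, av: float) -> float:
    """Visual enhancement Ra = (AV - A) / (1 - A).

    Normalizes the gain from adding vision by the maximum possible
    improvement over the auditory-only score; inputs are proportions
    correct in [0, 1].
    """
    if not (0.0 <= a_only < 1.0 and 0.0 <= av <= 1.0):
        raise ValueError("scores must be proportions, with a_only < 1")
    return (av - a_only) / (1.0 - a_only)

# A listener scoring 40% correct auditory-only and 70% correct audiovisual
# has recovered half of the available headroom
ra = visual_enhancement(0.40, 0.70)  # close to 0.5
```

The normalization matters: the same 30-point raw gain counts for more when the auditory-only score is high and little room to improve remains.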
Henry, Kenneth S.; Heinz, Michael G.
2013-01-01
People with sensorineural hearing loss have substantial difficulty understanding speech under degraded listening conditions. Behavioral studies suggest that this difficulty may be caused by changes in auditory processing of the rapidly-varying temporal fine structure (TFS) of acoustic signals. In this paper, we review the presently known effects of sensorineural hearing loss on processing of TFS and slower envelope modulations in the peripheral auditory system of mammals. Cochlear damage has relatively subtle effects on phase locking by auditory-nerve fibers to the temporal structure of narrowband signals under quiet conditions. In background noise, however, sensorineural loss does substantially reduce phase locking to the TFS of pure-tone stimuli. For auditory processing of broadband stimuli, sensorineural hearing loss has been shown to severely alter the neural representation of temporal information along the tonotopic axis of the cochlea. Notably, auditory-nerve fibers innervating the high-frequency part of the cochlea grow increasingly responsive to low-frequency TFS information and less responsive to temporal information near their characteristic frequency (CF). Cochlear damage also increases the correlation of the response to TFS across fibers of varying CF, decreases the traveling-wave delay between TFS responses of fibers with different CFs, and can increase the range of temporal modulation frequencies encoded in the periphery for broadband sounds. Weaker neural coding of temporal structure in background noise and degraded coding of broadband signals along the tonotopic axis of the cochlea are expected to contribute considerably to speech perception problems in people with sensorineural hearing loss. PMID:23376018
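The envelope/TFS decomposition discussed above is commonly illustrated with the Hilbert transform: the magnitude of the analytic signal gives the slow envelope (ENV), and its phase gives the rapid temporal fine structure (TFS). A sketch with a hypothetical amplitude-modulated tone (the frequencies and sampling rate are illustrative, not tied to the studies reviewed):

```python
import numpy as np
from scipy.signal import hilbert

# Hypothetical stimulus: a 1 kHz carrier (fine structure) modulated by a
# slow 8 Hz envelope, sampled at 16 kHz for 1 second
fs = 16000
t = np.arange(fs) / fs
env = 1.0 + 0.8 * np.sin(2 * np.pi * 8.0 * t)    # slow amplitude contour (ENV)
carrier = np.sin(2 * np.pi * 1000.0 * t)         # rapid oscillation (TFS)
signal = env * carrier

# The analytic signal separates the two components
analytic = hilbert(signal)
envelope = np.abs(analytic)           # recovers ENV (up to edge effects)
tfs = np.cos(np.angle(analytic))      # unit-amplitude fine structure
```

Because the envelope varies much more slowly than the carrier, the Hilbert envelope tracks the imposed modulation closely except near the signal edges.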
Zhong, Ziwei; Henry, Kenneth S.; Heinz, Michael G.
2014-01-01
People with sensorineural hearing loss often have substantial difficulty understanding speech under challenging listening conditions. Behavioral studies suggest that reduced sensitivity to the temporal structure of sound may be responsible, but underlying neurophysiological pathologies are incompletely understood. Here, we investigate the effects of noise-induced hearing loss on coding of envelope (ENV) structure in the central auditory system of anesthetized chinchillas. ENV coding was evaluated noninvasively using auditory evoked potentials recorded from the scalp surface in response to sinusoidally amplitude modulated tones with carrier frequencies of 1, 2, 4, and 8 kHz and a modulation frequency of 140 Hz. Stimuli were presented in quiet and in three levels of white background noise. The latency of scalp-recorded ENV responses was consistent with generation in the auditory midbrain. Hearing loss amplified neural coding of ENV at carrier frequencies of 2 kHz and above. This result may reflect enhanced ENV coding from the periphery and/or an increase in the gain of central auditory neurons. In contrast to expectations, hearing loss was not associated with a stronger adverse effect of increasing masker intensity on ENV coding. The exaggerated neural representation of ENV information shown here at the level of the auditory midbrain helps to explain previous findings of enhanced sensitivity to amplitude modulation in people with hearing loss under some conditions. Furthermore, amplified ENV coding may potentially contribute to speech perception problems in people with cochlear hearing loss by acting as a distraction from more salient acoustic cues, particularly in fluctuating backgrounds. PMID:24315815
Cross-modal perceptual load: the impact of modality and individual differences.
Sandhu, Rajwant; Dyson, Benjamin James
2016-05-01
Visual distractor processing tends to be more pronounced when the perceptual load (PL) of a task is low compared to when it is high [perceptual load theory (PLT); Lavie in J Exp Psychol Hum Percept Perform 21(3):451-468, 1995]. While PLT is well established in the visual domain, application to cross-modal processing has produced mixed results, and the current study was designed in an attempt to improve previous methodologies. First, we assessed PLT using response competition, a typical metric from the uni-modal domain. Second, we looked at the impact of auditory load on visual distractors, and of visual load on auditory distractors, within the same individual. Third, we compared individual uni- and cross-modal selective attention abilities, by correlating performance with the visual Attentional Network Test (ANT). Fourth, we obtained a measure of the relative processing efficiency between vision and audition, to investigate whether processing ease influences the extent of distractor processing. Although distractor processing was evident during both attend-auditory and attend-visual conditions, we found that PL did not modulate processing of either visual or auditory distractors. We also found support for a correlation between the uni-modal (visual) ANT and our cross-modal task, but only when the distractors were visual. Finally, although auditory processing was more impacted by visual distractors, our measure of processing efficiency only accounted for this asymmetry in the auditory high-load condition. The results are discussed with respect to the continued debate regarding the shared or separate nature of processing resources across modalities.
Stuttering in adults: the acoustic startle response, temperamental traits, and biological factors.
Alm, Per A; Risberg, Jarl
2007-01-01
The purpose of this study was to investigate the relation between stuttering and a range of variables of possible relevance, with the main focus on neuromuscular reactivity and anxiety. The explorative analysis also included temperament, biochemical variables, heredity, preonset lesions, and altered auditory feedback (AAF). An increased level of neuromuscular reactivity in stuttering adults has previously been reported by [Guitar, B. (2003). Acoustic startle responses and temperament in individuals who stutter. Journal of Speech Language and Hearing Research, 46, 233-240], also indicating a link to anxiety and temperament. The present study included a large number of variables in order to enable analysis of subgroups and relations between variables. In total, 32 stuttering adults were compared with nonstuttering controls. The acoustic startle eyeblink response was used as a measure of neuromuscular reactivity. No significant group difference was found regarding startle, and startle was not significantly correlated with trait anxiety, stuttering severity, or AAF. Startle was mainly related to calcium and prolactin. The stuttering group had significantly higher scores for anxiety and childhood ADHD. Two subgroups of stuttering were found, with high versus low traits of childhood ADHD, characterized by indications of preonset lesions versus heredity for stuttering. The study does not support the view that excessive reactivity is a typical characteristic of stuttering. The increased anxiety is suggested to be mainly an effect of the experience of stuttering.
As a result of reading this article, the reader will be able to: (a) critically discuss the literature regarding stuttering in relation to acoustic startle, anxiety, and temperament; (b) describe the effect of calcium on neuromuscular reactivity; (c) discuss findings supporting the importance of early neurological incidents in some cases of stuttering, and the relation between such incidents and traits of ADHD or ADD; and (d) discuss the role of genetics in stuttering.
Auditory memory function in expert chess players.
Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona
2015-01-01
Chess is a game that involves many aspects of high-level cognition, such as memory, attention, focus, and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Like other behavioral skills, auditory memory may be strengthened by long-term chess playing because of shared processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of the dichotic auditory-verbal memory test. The test was administered to 30 expert chess players aged 20-35 years and 30 matched non-chess players; participants in both groups were randomly selected. The performance of the two groups was compared by independent-samples t-test using SPSS version 21. The mean dichotic auditory-verbal memory test scores of the two groups, expert chess players and non-chess players, differed significantly (p ≤ 0.001). The difference between ear scores was significant for both expert chess players (p = 0.023) and non-chess players (p = 0.013). Gender had no effect on the test results. Auditory memory function in expert chess players was significantly better than in non-chess players. It seems that enhanced auditory memory function is related to the strengthening of cognitive performance through long-term chess playing.
Using Neuroplasticity-Based Auditory Training to Improve Verbal Memory in Schizophrenia
Fisher, Melissa; Holland, Christine; Merzenich, Michael M.; Vinogradov, Sophia
2009-01-01
Objective Impaired verbal memory in schizophrenia is a key rate-limiting factor for functional outcome, does not respond to currently available medications, and shows only modest improvement after conventional behavioral remediation. The authors investigated an innovative approach to the remediation of verbal memory in schizophrenia, based on principles derived from the basic neuroscience of learning-induced neuroplasticity. The authors report interim findings in this ongoing study. Method Fifty-five clinically stable schizophrenia subjects were randomly assigned to either 50 hours of computerized auditory training or a control condition using computer games. Those receiving auditory training engaged in daily computerized exercises that placed implicit, increasing demands on auditory perception through progressively more difficult auditory-verbal working memory and verbal learning tasks. Results Relative to the control group, subjects who received active training showed significant gains in global cognition, verbal working memory, and verbal learning and memory. They also showed reliable and significant improvement in auditory psychophysical performance; this improvement was significantly correlated with gains in verbal working memory and global cognition. Conclusions Intensive training in early auditory processes and auditory-verbal learning results in substantial gains in verbal cognitive processes relevant to psychosocial functioning in schizophrenia. These gains may be due to a training method that addresses the early perceptual impairments in the illness, that exploits intact mechanisms of repetitive practice in schizophrenia, and that uses an intensive, adaptive training approach. PMID:19448187
Auditory display as feedback for a novel eye-tracking system for sterile operating room interaction.
Black, David; Unger, Michael; Fischer, Nele; Kikinis, Ron; Hahn, Horst; Neumuth, Thomas; Glaser, Bernhard
2018-01-01
The growing number of technical systems in the operating room has increased attention on developing touchless interaction methods for sterile conditions. However, touchless interaction paradigms lack the tactile feedback found in common input devices such as mice and keyboards. We propose a novel touchless eye-tracking interaction system with auditory display as a feedback method for completing typical operating room tasks. Auditory display provides feedback concerning the selected input into the eye-tracking system as well as a confirmation of the system response. An eye-tracking system with a novel auditory display using both earcons and parameter-mapping sonification was developed to allow touchless interaction for six typical scrub nurse tasks. An evaluation with novice participants compared auditory display with visual display with respect to reaction time and a series of subjective measures. When using auditory display to substitute for the lost tactile feedback during eye-tracking interaction, participants exhibit reduced reaction time compared to using visual-only display. In addition, the auditory feedback led to lower subjective workload and higher usefulness and system acceptance ratings. Due to the absence of tactile feedback for eye-tracking and other touchless interaction methods, auditory display is shown to be a useful and necessary addition to new interaction concepts for the sterile operating room, reducing reaction times while improving subjective measures, including usefulness, user satisfaction, and cognitive workload.
Children's Auditory Working Memory Performance in Degraded Listening Conditions
ERIC Educational Resources Information Center
Osman, Homira; Sullivan, Jessica R.
2014-01-01
Purpose: The objectives of this study were to determine (a) whether school-age children with typical hearing demonstrate poorer auditory working memory performance in multitalker babble at degraded signal-to-noise ratios than in quiet; and (b) whether the amount of cognitive demand of the task contributed to differences in performance in noise. It…
ERIC Educational Resources Information Center
Megnin-Viggars, Odette; Goswami, Usha
2013-01-01
Visual speech inputs can enhance auditory speech information, particularly in noisy or degraded conditions. The natural statistics of audiovisual speech highlight the temporal correspondence between visual and auditory prosody, with lip, jaw, cheek and head movements conveying information about the speech envelope. Low-frequency spatial and…
The Effect of Modality Shifts on Proactive Interference in Long-Term Memory.
ERIC Educational Resources Information Center
Dean, Raymond S.; And Others
1983-01-01
In Experiment 1, subjects learned a word list under blocked or random forms of auditory/visual modality change. In Experiment 2, subjects high or low in conceptual rigidity read passages under shift or nonshift conditions, the latter presented exclusively in the auditory or visual mode. A shift in modality provided a powerful release from proactive interference. (Author/CM)
Orienting Attention in Audition and between Audition and Vision: Young and Elderly Subjects.
ERIC Educational Resources Information Center
Robin, Donald A.; Rizzo, Matthew
1992-01-01
Thirty young and 10 elderly adults were assessed on orienting auditory attention, in a mixed-modal condition in which stimuli were either auditory or visual. Findings suggest that the mechanisms involved in orienting attention operate in audition and that individuals may allocate their processing resources among multiple sensory pools. (Author/JDD)
Perception of Auditory-Visual Distance Relations by 5-Month-Old Infants.
ERIC Educational Resources Information Center
Pickens, Jeffrey
1994-01-01
Sixty-four infants viewed side-by-side videotapes of toy trains (in four visual conditions) and listened to sounds at increasing or decreasing amplitude designed to match one of the videos. Results suggested that 5-month-olds were sensitive to auditory-visual distance relations and that change in size was an important visual depth cue. (MDM)
Retrosplenial Cortex Is Required for the Retrieval of Remote Memory for Auditory Cues
ERIC Educational Resources Information Center
Todd, Travis P.; Mehlman, Max L.; Keene, Christopher S.; DeAngeli, Nicole E.; Bucci, David J.
2016-01-01
The retrosplenial cortex (RSC) has a well-established role in contextual and spatial learning and memory, consistent with its known connectivity with visuo-spatial association areas. In contrast, RSC appears to have little involvement with delay fear conditioning to an auditory cue. However, all previous studies have examined the contribution of…
Verbal Recall of Auditory and Visual Signals by Normal and Deficient Reading Children.
ERIC Educational Resources Information Center
Levine, Maureen Julianne
Verbal recall of bisensory memory tasks was compared among 48 9- to 12-year-old boys in three groups: normal readers, primary deficit readers, and secondary deficit readers. Auditory and visual stimulus pairs composed of digits, which incorporated variations of intersensory and intrasensory conditions, were administered to Ss through a Bell and…
Motor (but not auditory) attention affects syntactic choice.
Pokhoday, Mikhail; Scheepers, Christoph; Shtyrov, Yury; Myachykov, Andriy
2018-01-01
Understanding the determinants of syntactic choice in sentence production is a salient topic in psycholinguistics. Existing evidence suggests that syntactic choice results from an interplay between linguistic and non-linguistic factors, and a speaker's attention to the elements of a described event represents one such factor. Whereas multimodal accounts of attention suggest a role for different modalities in this process, existing studies examining attention effects in syntactic choice are primarily based on visual cueing paradigms. Hence, it remains unclear whether attentional effects on syntactic choice are limited to the visual modality or are indeed more general. This issue is addressed by the current study. Native English participants viewed and described line drawings of simple transitive events while their attention was directed to the location of the agent or the patient of the depicted event by means of either an auditory (monaural beep) or a motor (unilateral key press) lateral cue. Our results show an effect of cue location, with participants producing more passive-voice descriptions in the patient-cued conditions. Crucially, this cue location effect emerged in the motor-cue but not (or substantially less so) in the auditory-cue condition, as confirmed by a reliable interaction between cue location (agent vs. patient) and cue type (auditory vs. motor). Our data suggest that attentional effects on the speaker's syntactic choices are modality-specific and limited to the visual and motor, but not the auditory, domain.
Binaural auditory beats affect long-term memory.
Garcia-Argibay, Miguel; Santed, Miguel A; Reales, José M
2017-12-08
The presentation of two pure tones to each ear separately with a slight difference in their frequency results in the perception of a single tone that fluctuates in amplitude at a frequency that equals the difference of interaural frequencies. This perceptual phenomenon is known as binaural auditory beats, and it is thought to entrain electrocortical activity and enhance cognition functions such as attention and memory. The aim of this study was to determine the effect of binaural auditory beats on long-term memory. Participants (n = 32) were kept blind to the goal of the study and performed both the free recall and recognition tasks after being exposed to binaural auditory beats, either in the beta (20 Hz) or theta (5 Hz) frequency bands and white noise as a control condition. Exposure to beta-frequency binaural beats yielded a greater proportion of correctly recalled words and a higher sensitivity index d' in recognition tasks, while theta-frequency binaural-beat presentation lessened the number of correctly remembered words and the sensitivity index. On the other hand, we could not find differences in the conditional probability for recall given recognition between beta and theta frequencies and white noise, suggesting that the observed changes in recognition were due to the recollection component. These findings indicate that the presentation of binaural auditory beats can affect long-term memory both positively and negatively, depending on the frequency used.
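As the abstract describes, the perceived beat rate equals the interaural frequency difference, so a 20 Hz (beta) beat results from two carriers 20 Hz apart. The numpy sketch below generates such a stereo stimulus; the 240 Hz carrier is illustrative, since the study specifies only the 20 Hz and 5 Hz beat frequencies.

```python
import numpy as np

def binaural_beat(carrier_hz: float, beat_hz: float,
                  duration_s: float = 1.0, sr: int = 44100) -> np.ndarray:
    """Stereo signal whose left and right channels differ in frequency by
    beat_hz; the listener perceives a single tone fluctuating at beat_hz.
    Returns an array of shape (n_samples, 2) with values in [-1, 1]."""
    t = np.arange(int(duration_s * sr)) / sr
    left = np.sin(2 * np.pi * carrier_hz * t)             # e.g. 240 Hz
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)  # e.g. 260 Hz
    return np.column_stack([left, right])

# 20 Hz difference -> beta-band beat, as in the study's beta condition
sig = binaural_beat(carrier_hz=240.0, beat_hz=20.0)
print(sig.shape)  # (44100, 2)
```

Note that each channel must reach a separate ear (i.e., over headphones): mixed at a loudspeaker, the two tones instead produce an ordinary acoustic beat rather than the centrally generated binaural percept.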
Using multisensory cues to facilitate air traffic management.
Ngo, Mary K; Pierce, Russell S; Spence, Charles
2012-12-01
In the present study, we sought to investigate whether auditory and tactile cuing could be used to facilitate a complex, real-world air traffic management scenario. Auditory and tactile cuing provides an effective means of improving both the speed and accuracy of participants' performance in a variety of laboratory-based visual target detection and identification tasks. A low-fidelity air traffic simulation task was used in which participants monitored and controlled aircraft. The participants had to ensure that the aircraft landed or exited at the correct altitude, speed, and direction and that they maintained a safe separation from all other aircraft and boundaries. The performance measures recorded included en route time, handoff delay, and conflict resolution delay (the performance measure of interest). In a baseline condition, the aircraft in conflict was highlighted in red (visual cue), and in the experimental conditions, this standard visual cue was accompanied by a simultaneously presented auditory, vibrotactile, or audiotactile cue. Participants responded significantly more rapidly, but no less accurately, to conflicts when presented with an additional auditory or audiotactile cue than with either a vibrotactile or visual cue alone. Auditory and audiotactile cues have the potential for improving operator performance by reducing the time it takes to detect and respond to potential visual target events. These results have important implications for the design and use of multisensory cues in air traffic management.
Hoefer, M; Tyll, S; Kanowski, M; Brosch, M; Schoenfeld, M A; Heinze, H-J; Noesselt, T
2013-10-01
Although multisensory integration has been an important area of recent research, most studies have focused on audiovisual integration. Importantly, however, the combination of audition and touch can guide our behavior just as effectively, which we studied here using psychophysics and functional magnetic resonance imaging (fMRI). We tested whether task-irrelevant tactile stimuli would enhance auditory detection, and whether hemispheric asymmetries would modulate these audiotactile benefits using lateralized sounds. Spatially aligned task-irrelevant tactile stimuli could occur either synchronously or asynchronously with the sounds. Auditory detection was enhanced by non-informative synchronous and asynchronous tactile stimuli if presented on the left side. Elevated fMRI signals to left-sided synchronous bimodal stimulation were found in primary auditory cortex (A1). Adjacent regions (planum temporale, PT) expressed enhanced BOLD responses for synchronous and asynchronous left-sided bimodal conditions. Additional connectivity analyses seeded in right-hemispheric A1 and PT for both bimodal conditions showed enhanced connectivity with right-hemispheric thalamic, somatosensory and multisensory areas that scaled with subjects' performance. Our results indicate that functional asymmetries interact with audiotactile interplay, which can be observed for left-lateralized stimulation in the right hemisphere. There, audiotactile interplay recruits a functional network of unisensory cortices, and the strength of these functional network connections is directly related to subjects' perceptual sensitivity. Copyright © 2013 Elsevier Inc. All rights reserved.
Sela, Itamar
2014-01-01
Visual and auditory temporal processing and crossmodal integration are crucial factors in the word decoding process. The speed of processing (SOP) gap (Asynchrony) between these two modalities, which has been suggested as related to the dyslexia phenomenon, is the focus of the current study. Nineteen dyslexic and 17 non-impaired University adult readers were given stimuli in a reaction time (RT) procedure where participants were asked to identify whether the stimulus type was only visual, only auditory or crossmodally integrated. Accuracy, RT, and Event Related Potential (ERP) measures were obtained for each of the three conditions. An algorithm to measure the contribution of the temporal SOP of each modality to the crossmodal integration in each group of participants was developed. Results obtained using this model for the analysis of the current study data, indicated that in the crossmodal integration condition the presence of the auditory modality at the pre-response time frame (between 170 and 240 ms after stimulus presentation), increased processing speed in the visual modality among the non-impaired readers, but not in the dyslexic group. The differences between the temporal SOP of the modalities among the dyslexics and the non-impaired readers give additional support to the theory that an asynchrony between the visual and auditory modalities is a cause of dyslexia. PMID:24959125
Audiovisual integration of emotional signals in voice and face: an event-related fMRI study.
Kreifelts, Benjamin; Ethofer, Thomas; Grodd, Wolfgang; Erb, Michael; Wildgruber, Dirk
2007-10-01
In a natural environment, non-verbal emotional communication is multimodal (i.e. speech melody, facial expression) and multifaceted concerning the variety of expressed emotions. Understanding these communicative signals and integrating them into a common percept is paramount to successful social behaviour. While many previous studies have focused on the neurobiology of emotional communication in the auditory or visual modality alone, far less is known about multimodal integration of auditory and visual non-verbal emotional information. The present study investigated this process using event-related fMRI. Behavioural data revealed that audiovisual presentation of non-verbal emotional information resulted in a significant increase in correctly classified stimuli when compared with visual and auditory stimulation. This behavioural gain was paralleled by enhanced activation in bilateral posterior superior temporal gyrus (pSTG) and right thalamus, when contrasting audiovisual to auditory and visual conditions. Further, a characteristic of these brain regions, substantiating their role in the emotional integration process, is a linear relationship between the gain in classification accuracy and the strength of the BOLD response during the bimodal condition. Additionally, enhanced effective connectivity between audiovisual integration areas and associative auditory and visual cortices was observed during audiovisual stimulation, offering further insight into the neural process accomplishing multimodal integration. Finally, we were able to document an enhanced sensitivity of the putative integration sites to stimuli with emotional non-verbal content as compared to neutral stimuli.
Effects of Hatchery Rearing on the Structure and Function of Salmonid Mechanosensory Systems.
Brown, Andrew D; Sisneros, Joseph A; Jurasin, Tyler; Coffin, Allison B
2016-01-01
This paper reviews recent studies on the effects of hatchery rearing on the auditory and lateral line systems of salmonid fishes. Major conclusions are that (1) hatchery-reared juveniles exhibit abnormal lateral line morphology (relative to wild-origin conspecifics), suggesting that the hatchery environment affects lateral line structure, perhaps due to differences in the hydrodynamic conditions of hatcheries versus natural rearing environments, and (2) hatchery-reared salmonids have a high proportion of abnormal otoliths, a condition associated with reduced auditory sensitivity and suggestive of inner ear dysfunction.
Rosemann, Stephanie; Thiel, Christiane M
2018-07-15
Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input effects audio-visual speech perception in hearing-impaired subjects is largely unknown. We here investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition. 
Our results indicate that even mild to moderate hearing loss impacts audio-visual speech processing, accompanied by changes in brain activation particularly involving frontal areas. These changes are modulated by the extent of hearing loss. Copyright © 2018 Elsevier Inc. All rights reserved.
Grahn, Jessica A.; Rowe, James B.
2009-01-01
Little is known about the underlying neurobiology of rhythm and beat perception, despite its universal cultural importance. Here we used functional magnetic resonance imaging to study rhythm perception in musicians and non-musicians. Three conditions varied in the degree to which external reinforcement versus internal generation of the beat was required. The ‘Volume’ condition strongly externally marked the beat with volume changes, the ‘Duration’ condition marked the beat with weaker accents arising from duration changes, and the ‘Unaccented’ condition required the beat to be entirely internally generated. In all conditions, beat rhythms compared to nonbeat control rhythms revealed putamen activity. The presence of a beat was also associated with greater connectivity between the putamen and the supplementary motor area (SMA), the premotor cortex (PMC) and auditory cortex. In contrast, the type of accent within the beat conditions modulated the coupling between premotor and auditory cortex, with greater modulation for musicians than non-musicians. Importantly, the putamen's response to beat conditions was not due to differences in temporal complexity between the three rhythm conditions. We propose that a cortico-subcortical network including the putamen, SMA, and PMC is engaged for the analysis of temporal sequences and prediction or generation of putative beats, especially under conditions that may require internal generation of the beat. The importance of this system for auditory-motor interaction and development of precisely timed movement is suggested here by its facilitation in musicians. PMID:19515922
Bellis, Teri James; Ross, Jody
2011-09-01
It has been suggested that, in order to validate a diagnosis of (C)APD (central auditory processing disorder), testing using direct cross-modal analogs should be performed to demonstrate that deficits exist solely or primarily in the auditory modality (McFarland and Cacace, 1995; Cacace and McFarland, 2005). This modality-specific viewpoint is controversial and not universally accepted (American Speech-Language-Hearing Association [ASHA], 2005; Musiek et al, 2005). Further, no such analogs have been developed to date, and neither the feasibility of such testing in normally functioning individuals nor the concurrent validity of cross-modal analogs has been established. The purpose of this study was to investigate the feasibility of cross-modal testing by examining the performance of normal adults and children on four tests of central auditory function and their corresponding visual analogs. In addition, this study investigated the degree to which concurrent validity of auditory and visual versions of these tests could be demonstrated. An experimental repeated measures design was employed. Participants consisted of two groups (adults, n=10; children, n=10) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of auditory/otologic, language, learning, neurologic, or related disorders. Visual analogs of four tests in common clinical use for the diagnosis of (C)APD were developed (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; Duration Patterns [Pinheiro and Musiek, 1985]; and the Random Gap Detection Test [RGDT; Keith, 2000]). Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. 
ANOVAs (analyses of variance) were used to examine effects of group, modality, and laterality (for the Dichotic/Dichoptic Digits tests) or response condition (for the auditory and visual Frequency Patterns and Duration Patterns tests). Pearson product-moment correlations were used to investigate relationships between auditory and visual performance. Adults performed significantly better than children on the Dichotic/Dichoptic Digits tests. Results also revealed a significant effect of modality, with auditory better than visual, and a significant modality×laterality interaction, with a right-ear advantage seen for the auditory task and a left-visual-field advantage seen for the visual task. For the Frequency Patterns test and its visual analog, results revealed a significant modality×response condition interaction, with humming better than labeling for the auditory version but the reversed effect for the visual version. For Duration Patterns testing, visual performance was significantly poorer than auditory performance. Due to poor test-retest reliability and ceiling effects for the auditory and visual gap-detection tasks, analyses could not be performed. No cross-modal correlations were observed for any test. Results demonstrated that cross-modal testing is at least feasible using easily accessible computer hardware and software. The lack of any cross-modal correlations suggests independent processing mechanisms for auditory and visual versions of each task. Examination of performance in individuals with central auditory and pan-sensory disorders is needed to determine the utility of cross-modal analogs in the differential diagnosis of (C)APD. American Academy of Audiology.
2010-01-01
Background We investigated the processing of task-irrelevant and unexpected novel sounds and its modulation by working-memory load in children aged 9-10 and in adults. Environmental sounds (novels) were embedded amongst frequently presented standard sounds in an auditory-visual distraction paradigm. Each sound was followed by a visual target. In two conditions, participants evaluated the position of a visual stimulus (0-back, low load) or compared the position of the current stimulus with the one two trials before (2-back, high load). Processing of novel sounds was measured with reaction times, hit rates and the auditory event-related brain potentials (ERPs) Mismatch Negativity (MMN), P3a, Reorienting Negativity (RON) and visual P3b. Results In both memory-load conditions, novels impaired task performance in adults whereas they improved performance in children. Auditory ERPs reflect age-related differences in the time window of the MMN, as children showed a positive ERP deflection to novels whereas adults lacked an MMN. The attention switch towards the task-irrelevant novel (reflected by P3a) was comparable between the age groups. Adults showed more efficient reallocation of attention (reflected by RON) under the load condition than children. Finally, the P3b elicited by the visual target stimuli was reduced in both age groups when the preceding sound was a novel. Conclusion Our results provide new insights into the development of novelty processing as they (1) reveal that task-irrelevant novel sounds can have contrary effects on performance in a visual primary task in children and adults, (2) show a positive ERP deflection to novels rather than an MMN in children, and (3) reveal effects of auditory novels on visual target processing. PMID:20929535
De Paolis, Annalisa; Bikson, Marom; Nelson, Jeremy T; de Ru, J Alexander; Packer, Mark; Cardoso, Luis
2017-06-01
Hearing is an extremely complex phenomenon, involving a large number of interrelated variables that are difficult to measure in vivo. In order to investigate this process under simplified and well-controlled conditions, models of sound transmission have been developed through many decades of research. The value of modeling the hearing system is not only to explain the normal function of the hearing system and account for experimental and clinical observations, but also to simulate a variety of pathological conditions that lead to hearing damage and hearing loss, as well as to support the development of auditory implants, effective ear protection and auditory hazard countermeasures. In this paper, we provide a review of the strategies used to model the auditory function of the external, middle, inner ear, and the micromechanics of the organ of Corti, along with some of the key results obtained from such modeling efforts. Recent analytical and numerical approaches have incorporated the nonlinear behavior of some parameters and structures into their models. Few models of the integrated hearing system exist; in particular, we describe the evolution of the Auditory Hazard Assessment Algorithm for Human (AHAAH) model, used for prediction of hearing damage due to high intensity sound pressure. Unlike the AHAAH model, 3D finite element models of the entire hearing system are not yet able to predict auditory risk and threshold shifts. It is expected that both AHAAH and FE models will evolve towards a more accurate assessment of threshold shifts and hearing loss under a variety of stimulus conditions and pathologies. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.