Auditory Spatial Attention Representations in the Human Cerebral Cortex
Kong, Lingqiang; Michalka, Samantha W.; Rosen, Maya L.; Sheremata, Summer L.; Swisher, Jascha D.; Shinn-Cunningham, Barbara G.; Somers, David C.
2014-01-01
Auditory spatial attention serves important functions in auditory source separation and selection. Although auditory spatial attention mechanisms have been generally investigated, the neural substrates encoding spatial information acted on by attention have not been identified in the human neocortex. We performed functional magnetic resonance imaging experiments to identify cortical regions that support auditory spatial attention and to test 2 hypotheses regarding the coding of auditory spatial attention: 1) auditory spatial attention might recruit the visuospatial maps of the intraparietal sulcus (IPS) to create multimodal spatial attention maps; 2) auditory spatial information might be encoded without explicit cortical maps. We mapped visuotopic IPS regions in individual subjects and measured auditory spatial attention effects within these regions of interest. Contrary to the multimodal map hypothesis, we observed that auditory spatial attentional modulations spared the visuotopic maps of IPS; the parietal regions activated by auditory attention lacked map structure. However, multivoxel pattern analysis revealed that the superior temporal gyrus and the supramarginal gyrus contained significant information about the direction of spatial attention. These findings support the hypothesis that auditory spatial information is coded without a cortical map representation. Our findings suggest that audiospatial and visuospatial attention utilize distinctly different spatial coding schemes. PMID:23180753
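The multivoxel pattern analysis described in this abstract can be illustrated with a minimal, hypothetical sketch: cross-validated classification of the attended direction (left vs. right) from voxel patterns in a region of interest such as the superior temporal gyrus. The data, voxel counts, and classifier choice below are assumptions for illustration, not the study's actual pipeline.

```python
# Minimal MVPA sketch with simulated data: decode the attended direction
# (left vs. right) from voxel patterns in one region of interest.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200                    # hypothetical counts
labels = np.repeat([0, 1], n_trials // 2)       # 0 = attend left, 1 = attend right
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5               # weak signal in a voxel subset

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, patterns, labels, cv=5)   # 5-fold cross-validation
print(f"decoding accuracy: {acc.mean():.2f} (chance = 0.50)")
```

Above-chance cross-validated accuracy in a sketch like this is the same logic by which the study infers that a region carries information about the direction of spatial attention.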
Higher dietary diversity is related to better visual and auditory sustained attention.
Shiraseb, Farideh; Siassi, Fereydoun; Qorbani, Mostafa; Sotoudeh, Gity; Rostami, Reza; Narmaki, Elham; Yavari, Parvaneh; Aghasi, Mohadeseh; Shaibu, Osman Mohammed
2016-04-01
Attention is a complex cognitive function that is necessary for learning, for following social norms of behaviour and for effective performance of responsibilities and duties. It is especially important in sensitive occupations requiring sustained attention. Improvement of dietary diversity (DD) is recognised as an important factor in health promotion, but its association with sustained attention is unknown. The aim of this study was to determine the association between auditory and visual sustained attention and DD. A cross-sectional study was carried out on 400 women aged 20-50 years who attended sports clubs at Tehran Municipality. Sustained attention was evaluated on the basis of the Integrated Visual and Auditory Continuous Performance Test using Integrated Visual and Auditory software. A single 24-h dietary recall questionnaire was used for DD assessment. Dietary diversity scores (DDS) were determined using the FAO guidelines. The mean visual and auditory sustained attention scores were 40·2 (sd 35·2) and 42·5 (sd 38), respectively. The mean DDS was 4·7 (sd 1·5). After adjusting for age, education years, physical activity, energy intake and BMI, mean visual and auditory sustained attention showed a significant increase as the quartiles of DDS increased (P=0·001). In addition, the mean subscales of attention, including auditory consistency and vigilance, visual persistence, visual and auditory focus, speed, comprehension and full attention, increased significantly with increasing DDS (P<0·05). In conclusion, higher DDS is associated with better visual and auditory sustained attention.
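As a rough illustration of how a dietary diversity score can be derived from a single 24-h recall, the sketch below counts the number of distinct food groups in which any item was reported, in the spirit of the FAO guidelines cited above; the nine-group list and the recall items are invented placeholders rather than the study's actual coding scheme.

```python
# Sketch: dietary diversity score (DDS) as a count of food groups consumed.
# Group list and recall data are illustrative, not the study's coding scheme.
FOOD_GROUPS = {
    "starchy staples": {"bread", "rice", "potato"},
    "dark green leafy vegetables": {"spinach"},
    "other vitamin A-rich fruits/vegetables": {"carrot", "mango"},
    "other fruits and vegetables": {"apple", "cucumber"},
    "organ meat": {"liver"},
    "meat and fish": {"chicken", "fish", "beef"},
    "eggs": {"egg"},
    "legumes, nuts and seeds": {"lentils", "walnut"},
    "milk and milk products": {"milk", "yogurt", "cheese"},
}

def dietary_diversity_score(recall_items):
    """Count how many food groups contain at least one reported item."""
    items = {i.lower() for i in recall_items}
    return sum(bool(group & items) for group in FOOD_GROUPS.values())

recall = ["bread", "yogurt", "apple", "chicken", "spinach"]  # one 24-h recall
print(dietary_diversity_score(recall))   # -> 5 of a possible 9
```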
Pérez-Valenzuela, Catherine; Gárate-Pérez, Macarena F.; Sotomayor-Zárate, Ramón; Delano, Paul H.; Dagnino-Subiabre, Alexies
2016-01-01
Chronic stress impairs auditory attention in rats and monoamines regulate neurotransmission in the primary auditory cortex (A1), a brain area that modulates auditory attention. In this context, we hypothesized that norepinephrine (NE) levels in A1 correlate with the auditory attention performance of chronically stressed rats. The first objective of this research was to evaluate whether chronic stress affects monoamine levels in A1. Male Sprague–Dawley rats were subjected to chronic stress (restraint stress) and monoamine levels were measured by high-performance liquid chromatography (HPLC) with electrochemical detection. Chronically stressed rats had lower levels of NE in A1 than did controls, while chronic stress did not affect serotonin (5-HT) and dopamine (DA) levels. The second aim was to determine the effects of reboxetine (a selective inhibitor of NE reuptake) on auditory attention and NE levels in A1. Rats were trained to discriminate between two tones of different frequencies in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance of ≥80% correct trials in the 2-ACT were randomly assigned to control and stress experimental groups. To analyze the effects of chronic stress on the auditory task, trained rats of both groups were subjected to 50 2-ACT trials 1 day before and 1 day after the chronic stress period. A difference score (DS) was determined by subtracting the number of correct trials after the chronic stress protocol from those before. An unexpected result was that vehicle-treated control rats and vehicle-treated chronically stressed rats had similar performances in the attentional task, suggesting that repeated injections with vehicle were stressful for control animals and deteriorated their auditory attention. In this regard, both auditory attention and NE levels in A1 were higher in chronically stressed rats treated with reboxetine than in vehicle-treated animals. These results indicate that NE plays a key role in A1 and in the attention of stressed rats during tone discrimination. PMID:28082872
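The difference score (DS) used in this study is simple arithmetic; the short sketch below, with invented trial counts, shows the computation as described (correct trials after the chronic stress period subtracted from those before), so that a positive DS reflects a post-stress decline in performance.

```python
# Difference score (DS) as described in the abstract: correct trials after
# the chronic stress period subtracted from those before (50 2-ACT trials
# per session). Trial counts below are invented for illustration.
def difference_score(correct_before, correct_after):
    return correct_before - correct_after

rats = {"control-vehicle": (43, 36), "stress-vehicle": (44, 35),
        "stress-reboxetine": (42, 41)}
for group, (before, after) in rats.items():
    print(f"{group}: DS = {difference_score(before, after)}")
```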
Lin, Hung-Yu; Hsieh, Hsieh-Chun; Lee, Posen; Hong, Fu-Yuan; Chang, Wen-Dien; Liu, Kuo-Cheng
2017-08-01
This study explored auditory and visual attention in children with ADHD. In a randomized, two-period crossover design, 50 children with ADHD and 50 age- and sex-matched typically developing peers were assessed with the Test of Variables of Attention (TOVA). The deficit in visual attention was more severe than the deficit in auditory attention in children with ADHD. In the auditory modality, only the deficit of attentional inconsistency is sufficient to explain most cases of ADHD; however, most of the children with ADHD suffered from deficits of sustained attention, response inhibition, and attentional inconsistency in the visual modality. Our results also showed that the deficit of attentional inconsistency is the most important indicator in diagnosing and intervening in ADHD when both auditory and visual modalities are considered. The findings provide strong evidence that the deficits of auditory attention are different from those of visual attention in children with ADHD.
Jacobsen, Leslie K; Slotkin, Theodore A; Mencl, W Einar; Frost, Stephen J; Pugh, Kenneth R
2007-12-01
Prenatal exposure to active maternal tobacco smoking elevates risk of cognitive and auditory processing deficits, and of smoking in offspring. Recent preclinical work has demonstrated a sex-specific pattern of reduction in cortical cholinergic markers following prenatal, adolescent, or combined prenatal and adolescent exposure to nicotine, the primary psychoactive component of tobacco smoke. Given the importance of cortical cholinergic neurotransmission to attentional function, we examined auditory and visual selective and divided attention in 181 male and female adolescent smokers and nonsmokers with and without prenatal exposure to maternal smoking. Groups did not differ in age, educational attainment, symptoms of inattention, or years of parent education. A subset of 63 subjects also underwent functional magnetic resonance imaging while performing an auditory and visual selective and divided attention task. Among females, exposure to tobacco smoke during prenatal or adolescent development was associated with reductions in auditory and visual attention performance accuracy that were greatest in female smokers with prenatal exposure (combined exposure). Among males, combined exposure was associated with marked deficits in auditory attention, suggesting greater vulnerability of neurocircuitry supporting auditory attention to insult stemming from developmental exposure to tobacco smoke in males. Activation of brain regions that support auditory attention was greater in adolescents with prenatal or adolescent exposure to tobacco smoke relative to adolescents with neither prenatal nor adolescent exposure to tobacco smoke. These findings extend earlier preclinical work and suggest that, in humans, prenatal and adolescent exposure to nicotine exerts gender-specific deleterious effects on auditory and visual attention, with concomitant alterations in the efficiency of neurocircuitry supporting auditory attention.
ERIC Educational Resources Information Center
Tillery, Kim L.; Katz, Jack; Keller, Warren D.
2000-01-01
A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…
Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier
2007-08-29
In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as during a cocktail party to follow one particular conversation. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights on the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.
Sport stacking in auditory and visual attention of grade 3 learners.
Mortimer, J; Krysztofiak, J; Custard, S; McKune, A J
2011-08-01
The effect of sport stacking on auditory and visual attention in 32 Grade 3 children was examined using a randomised, cross-over design. Children were randomly assigned to a sport stacking (n=16) or arts/crafts group (n=16), with these activities performed over 3 wk. (12 30-min. sessions, 4 per week). This was followed by a 3-wk. wash-out period, after which the groups crossed over and the 3-wk. intervention was repeated, with the sport stacking group performing arts/crafts and the arts/crafts group performing sport stacking. Performance on the Integrated Visual and Auditory Continuous Performance Test, a measure of auditory and visual attention, was assessed before and after each of the 3-wk. interventions for each group. Comparisons indicated that sport stacking resulted in significant improvement in high demand function and fine motor regulation, while it caused a significant reduction in low demand function. Auditory and visual attention adaptations to sport stacking may be specific to the high demand nature of the task.
Liu, Wen-Long; Zhao, Xu; Tan, Jian-Hui; Wang, Juan
2014-09-01
To explore the attention characteristics of children with different clinical subtypes of attention deficit hyperactivity disorder (ADHD) and to provide a basis for clinical intervention. A total of 345 children diagnosed with ADHD were selected and the subtypes were identified. Attention assessment was performed with the Integrated Visual and Auditory Continuous Performance Test at diagnosis, and the visual and auditory attention characteristics were compared between children with different subtypes. A total of 122 normal children were recruited into the control group and their attention characteristics were compared with those of children with ADHD. The scores of full scale attention quotient (AQ) and full scale response control quotient (RCQ) of children with all three subtypes of ADHD were significantly lower than those of normal children (P<0.01). The score of auditory RCQ was significantly lower than that of visual RCQ in children with ADHD-hyperactive/impulsive subtype (P<0.05). The scores of auditory AQ and speed quotient (SQ) were significantly higher than those of visual AQ and SQ in three subtypes of ADHD children (P<0.01), while the score of visual precaution quotient (PQ) was significantly higher than that of auditory PQ (P<0.01). No significant differences in auditory or visual AQ were observed among the three subtypes of ADHD. The attention function of children with ADHD is worse than that of normal children, and the impairment of visual attention function is more severe than that of auditory attention function. The degree of functional impairment of visual or auditory attention shows no significant differences among the three subtypes of ADHD.
Jongman, Suzanne R; Roelofs, Ardi; Scheper, Annette R; Meyer, Antje S
2017-05-01
Children with specific language impairment (SLI) have problems not only with language performance but also with sustained attention, which is the ability to maintain alertness over an extended period of time. Although there is consensus that this ability is impaired with respect to processing stimuli in the auditory perceptual modality, conflicting evidence exists concerning the visual modality. To address the outstanding issue whether the impairment in sustained attention is limited to the auditory domain, or if it is domain-general. Furthermore, to test whether children's sustained attention ability relates to their word-production skills. Groups of 7-9 year olds with SLI (N = 28) and typically developing (TD) children (N = 22) performed a picture-naming task and two sustained attention tasks, namely auditory and visual continuous performance tasks (CPTs). Children with SLI performed worse than TD children on picture naming and on both the auditory and visual CPTs. Moreover, performance on both the CPTs correlated with picture-naming latencies across developmental groups. These results provide evidence for a deficit in both auditory and visual sustained attention in children with SLI. Moreover, the study indicates there is a relationship between domain-general sustained attention and picture-naming performance in both TD and language-impaired children. Future studies should establish whether this relationship is causal. If attention influences language, training of sustained attention may improve language production in children from both developmental groups. © 2016 Royal College of Speech and Language Therapists.
Petrac, D C; Bedwell, J S; Renk, K; Orem, D M; Sims, V
2009-07-01
There have been relatively few studies on the relationship between recent perceived environmental stress and cognitive performance, and the existing studies do not control for state anxiety during the cognitive testing. The current study addressed this need by examining recent self-reported environmental stress and divided attention performance, while controlling for state anxiety. Fifty-four university undergraduates who self-reported a wide range of perceived recent stress (10-item perceived stress scale) completed both single and dual (simultaneous auditory and visual stimuli) continuous performance tests. Partial correlation analysis showed a statistically significant positive correlation between perceived stress and the auditory omission errors from the dual condition, after controlling for state anxiety and auditory omission errors from the single condition (r = 0.41). This suggests that increased environmental stress relates to decreased divided attention performance in auditory vigilance. In contrast, an increase in state anxiety (controlling for perceived stress) was related to a decrease in auditory omission errors from the dual condition (r = -0.37), which suggests that state anxiety may improve divided attention performance. Results suggest that further examination of the neurobiological consequences of environmental stress on divided attention and other executive functioning tasks is needed.
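The partial correlations reported here (perceived stress vs. dual-condition auditory omission errors, controlling for state anxiety and single-condition errors) can be reproduced in principle by correlating regression residuals; the sketch below uses random placeholder data and generic variable names, not the study's dataset.

```python
# Partial correlation via residuals: correlate x and y after removing the
# linear contribution of the covariates (here: state anxiety and errors in
# the single-task condition). Data are random placeholders.
import numpy as np

def partial_corr(x, y, covars):
    """Pearson correlation of x and y controlling for covariate columns."""
    Z = np.column_stack([np.ones(len(x))] + list(covars))
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
n = 54
anxiety = rng.normal(size=n)
single_errors = rng.normal(size=n)
stress = rng.normal(size=n) + 0.5 * anxiety
dual_errors = rng.normal(size=n) + 0.4 * stress + 0.3 * anxiety
print(round(partial_corr(stress, dual_errors, [anxiety, single_errors]), 2))
```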
Auditory and Visual Sustained Attention in Children with Speech Sound Disorder
Murphy, Cristina F. B.; Pagan-Neves, Luciana O.; Wertzner, Haydée F.; Schochat, Eliane
2014-01-01
Although research has demonstrated that children with specific language impairment (SLI) and reading disorder (RD) exhibit sustained attention deficits, no study has investigated sustained attention in children with speech sound disorder (SSD). Given the overlap of symptoms, such as phonological memory deficits, between these different language disorders (i.e., SLI, SSD and RD) and the relationships between working memory, attention and language processing, it is worthwhile to investigate whether deficits in sustained attention also occur in children with SSD. A total of 55 children (18 diagnosed with SSD (8.11±1.231) and 37 typically developing children (8.76±1.461)) were invited to participate in this study. Auditory and visual sustained-attention tasks were applied. Children with SSD performed worse on these tasks; they committed a greater number of auditory false alarms and exhibited a significant decline in performance over the course of the auditory detection task. The extent to which performance is related to auditory perceptual difficulties and probable working memory deficits is discussed. Further studies are needed to better understand the specific nature of these deficits and their clinical implications. PMID:24675815
Shared and distinct factors driving attention and temporal processing across modalities
Berry, Anne S.; Li, Xu; Lin, Ziyong; Lustig, Cindy
2013-01-01
In addition to the classic finding that “sounds are judged longer than lights,” the timing of auditory stimuli is often more precise and accurate than is the timing of visual stimuli. In cognitive models of temporal processing, these modality differences are explained by positing that auditory stimuli more automatically capture and hold attention, more efficiently closing an attentional switch that allows the accumulation of pulses marking the passage of time (Block & Zakay, 1997; Meck, 1991; Penney, 2003). However, attention is a multifaceted construct, and there has been little attempt to determine which aspects of attention may be related to modality effects. We used visual and auditory versions of the Continuous Temporal Expectancy Task (CTET; O'Connell et al., 2009), a timing task previously linked to behavioral and electrophysiological measures of mind-wandering and attention lapses, and tested participants with or without the presence of a video distractor. Performance in the auditory condition was generally superior to that in the visual condition, replicating standard results in the timing literature. The auditory modality was also less affected by declines in sustained attention, indexed by declines in performance over time. In contrast, distraction had an equivalent impact on performance in the two modalities. Analysis of individual differences in performance revealed further differences between the two modalities: Poor performance in the auditory condition was primarily related to boredom, whereas poor performance in the visual condition was primarily related to distractibility. These results suggest that: 1) challenges to different aspects of attention reveal both modality-specific and nonspecific effects on temporal processing, and 2) different factors drive individual differences when testing across modalities. PMID:23978664
Lalani, Sanam J; Duffield, Tyler C; Trontel, Haley G; Bigler, Erin D; Abildskov, Tracy J; Froehlich, Alyson; Prigge, Molly B D; Travers, Brittany G; Anderson, Jeffrey S; Zielinski, Brandon A; Alexander, Andrew; Lange, Nicholas; Lainhart, Janet E
2018-06-01
Studies have shown that individuals with autism spectrum disorder (ASD) tend to perform significantly below typically developing individuals on standardized measures of attention, even when controlling for IQ. The current study sought to examine within ASD whether anatomical correlates of attention performance differed between those with average to above-average IQ (AIQ group) and those with low-average to borderline ability (LIQ group) as well as in comparison to typically developing controls (TDC). Using automated volumetric analyses, we examined regional volume of classic attention areas including the superior frontal gyrus, anterior cingulate cortex, and precuneus in ASD AIQ (n = 38) and LIQ (n = 18) individuals along with 30 TDC. Auditory attention performance was assessed using subtests of the Test of Memory and Learning (TOMAL), compared among the groups, and then correlated with regional brain volumes. Analyses revealed group differences in attention. The three groups did not differ significantly on any auditory attention-related brain volumes; however, trends toward significant size-attention function interactions were observed. Negative correlations were found between the volume of the precuneus and auditory attention performance for the AIQ ASD group, indicating that larger volume was related to poorer performance. Implications for general attention functioning and dysfunctional neural connectivity in ASD are discussed.
Moreno-García, Inmaculada; Delgado-Pardo, Gracia; Roldán-Blasco, Carmen
2015-03-03
This study assesses attention and response control through visual and auditory stimuli in a primary care pediatric sample. The sample consisted of 191 participants aged between 7 and 13 years. It was divided into 2 groups: (a) 90 children with ADHD, according to diagnostic (DSM-IV-TR) (APA, 2002) and clinical (ADHD Rating Scale-IV) (DuPaul, Power, Anastopoulos, & Reid, 1998) criteria, and (b) 101 children without a history of ADHD. The aims were: (a) to determine and compare the performance of both groups in attention and response control, and (b) to identify attention and response control deficits in the ADHD group. Assessments were carried out using the Integrated Visual and Auditory Continuous Performance Test (IVA/CPT, Sandford & Turner, 2002). Results showed that the ADHD group had visual and auditory attention deficits, F(3, 170) = 14.38; p < .01, deficits in fine motor regulation (Welch's t-test = 44.768; p < .001) and sensory/motor activity (Welch's t-test = 95.683, p < .001; Welch's t-test = 79.537, p < .001). Both groups exhibited a similar performance in response control, F(3, 170) = .93, p = .43. Children with ADHD showed inattention, mental processing speed deficits, and loss of concentration with visual stimuli. Both groups yielded a better performance in attention with auditory stimuli.
Neural dynamics underlying attentional orienting to auditory representations in short-term memory.
Backer, Kristina C; Binns, Malcolm A; Alain, Claude
2015-01-21
Sounds are ephemeral. Thus, coherent auditory perception depends on "hearing" back in time: retrospectively attending that which was lost externally but preserved in short-term memory (STM). Current theories of auditory attention assume that sound features are integrated into a perceptual object, that multiple objects can coexist in STM, and that attention can be deployed to an object in STM. Recording electroencephalography from humans, we tested these assumptions, elucidating feature-general and feature-specific neural correlates of auditory attention to STM. Alpha/beta oscillations and frontal and posterior event-related potentials indexed feature-general top-down attentional control to one of several coexisting auditory representations in STM. Particularly, task performance during attentional orienting was correlated with alpha/low-beta desynchronization (i.e., power suppression). However, attention to one feature could occur without simultaneous processing of the second feature of the representation. Therefore, auditory attention to memory relies on both feature-specific and feature-general neural dynamics. Copyright © 2015 the authors 0270-6474/15/351307-12$15.00/0.
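The alpha/low-beta desynchronization linked to performance in this study is, generically, a drop in band-limited power relative to a baseline window. The sketch below shows one conventional way to compute such a measure for a single simulated channel; the sampling rate, window boundaries, and the 9-18 Hz band are taken as assumptions for illustration, not the authors' analysis settings.

```python
# Sketch: alpha/low-beta (9-18 Hz) power change during retention relative to
# a baseline window, for one simulated EEG channel. Negative values indicate
# desynchronization (power suppression). All parameters are generic.
import numpy as np
from scipy.signal import welch

fs = 250                                   # sampling rate (Hz), assumed
t = np.arange(0, 4, 1 / fs)                # 4 s of simulated data
rng = np.random.default_rng(2)
eeg = rng.normal(size=t.size)
eeg[t < 2] += 2 * np.sin(2 * np.pi * 10 * t[t < 2])   # stronger alpha in baseline

def band_power(x, lo=9, hi=18):
    f, p = welch(x, fs=fs, nperseg=fs)     # 1-s segments -> 1 Hz resolution
    return p[(f >= lo) & (f <= hi)].mean()

baseline = band_power(eeg[t < 2])          # pre-cue window
retention = band_power(eeg[t >= 2])        # post-cue retention window
print(f"relative power change: {(retention - baseline) / baseline:+.2f}")
```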
Pérez, Miguel Ángel; Pérez-Valenzuela, Catherine; Rojas-Thomas, Felipe; Ahumada, Juan; Fuenzalida, Marco; Dagnino-Subiabre, Alexies
2013-08-29
Chronic stress induces dendritic atrophy in the rat primary auditory cortex (A1), a key brain area for auditory attention. The aim of this study was to determine whether repeated restraint stress affects auditory attention and synaptic transmission in A1. Male Sprague-Dawley rats were trained in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance of over 80% correct trials in the 2-ACT were randomly assigned to control and restraint stress experimental groups. To analyze the effects of restraint stress on auditory attention, trained rats of both groups were subjected to 50 2-ACT trials one day before and one day after the stress period. A difference score was determined by subtracting the number of correct trials after from those before the stress protocol. Another set of rats was used to study the synaptic transmission in A1. Restraint stress decreased the number of correct trials by 28% compared to the performance of control animals (p < 0.001). Furthermore, stress reduced the frequency of spontaneous inhibitory postsynaptic currents (sIPSC) and miniature IPSC in A1, whereas glutamatergic efficacy was not affected. Our results demonstrate that restraint stress decreased auditory attention and GABAergic synaptic efficacy in A1. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
Association of blood antioxidants status with visual and auditory sustained attention.
Shiraseb, Farideh; Siassi, Fereydoun; Sotoudeh, Gity; Qorbani, Mostafa; Rostami, Reza; Sadeghi-Firoozabadi, Vahid; Narmaki, Elham
2015-01-01
A low antioxidant status has been shown to result in oxidative stress and cognitive impairment. Because antioxidants can protect the nervous system, it is expected that a better blood antioxidant status might be related to sustained attention. However, the relationship between the blood antioxidant status and visual and auditory sustained attention has not been investigated. The aim of this study was to evaluate the association of fruits and vegetables intake and the blood antioxidant status with visual and auditory sustained attention in women. This cross-sectional study was performed on 400 healthy women (20-50 years) who attended the sports clubs of Tehran Municipality. Sustained attention was evaluated based on the Integrated Visual and Auditory Continuous Performance Test using the Integrated Visual and Auditory (IVA) software. The 24-hour food recall questionnaire was used for estimating fruits and vegetables intake. Serum total antioxidant capacity (TAC), and erythrocyte superoxide dismutase (SOD) and glutathione peroxidase (GPx) activities were measured in 90 participants. After adjusting for energy intake, age, body mass index (BMI), years of education and physical activity, higher reported fruits and vegetables intake was associated with better visual and auditory sustained attention (P < 0.001). A high intake of some subgroups of fruits and vegetables (i.e. berries, cruciferous vegetables, green leafy vegetables, and other vegetables) was also associated with better sustained attention (P < 0.02). Serum TAC, and erythrocyte SOD and GPx activities increased with the increase in the tertiles of visual and auditory sustained attention after adjusting for age, years of education, physical activity, energy, BMI, and caffeine intake (P < 0.05). Improved visual and auditory sustained attention is associated with a better blood antioxidant status. Therefore, improvement of the antioxidant status through an appropriate dietary intake can possibly enhance sustained attention.
Further evidence of auditory extinction in aphasia.
Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim
2013-02-01
Preliminary research (Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Seventeen IWA (M(age) = 53.19 years) and 17 neurologically intact controls (M(age) = 55.18 years) participated. Auditory stimuli were spoken letters presented in a free-field listening environment. Stimuli were presented in single-stimulus stimulation (SSS) or double-simultaneous stimulation (DSS) trials across 5 conditions designed to determine whether extinction is related to binding, inefficient attention resource allocation, or overall deficits in attention. All participants completed all experimental conditions. Significant extinction was demonstrated only by IWA when sounds were different, providing further evidence of auditory extinction. However, binding requirements did not appear to influence the IWA's performance. Results indicate that, for IWA, auditory extinction may not be attributed to a binding deficit or inefficient attention resource allocation because of equivalent performance across all 5 conditions. Rather, overall attentional resources may be influential. Future research in aphasia should explore the effect of the stimulus presentation in addition to the continued study of attention treatment.
Xie, Zilong; Reetzke, Rachel; Chandrasekaran, Bharath
2018-05-24
Increasing visual perceptual load can reduce pre-attentive auditory cortical activity to sounds, a reflection of the limited and shared attentional resources for sensory processing across modalities. Here, we demonstrate that modulating visual perceptual load can impact the early sensory encoding of speech sounds, and that the impact of visual load is highly dependent on the predictability of the incoming speech stream. Participants (n = 20, 9 females) performed a visual search task of high (target similar to distractors) and low (target dissimilar to distractors) perceptual load, while early auditory electrophysiological responses were recorded to native speech sounds. Speech sounds were presented either in a 'repetitive context', or a less predictable 'variable context'. Independent of auditory stimulus context, pre-attentive auditory cortical activity was reduced during high visual load, relative to low visual load. We applied a data-driven machine learning approach to decode speech sounds from the early auditory electrophysiological responses. Decoding performance was found to be poorer under conditions of high (relative to low) visual load, when the incoming acoustic stream was predictable. When the auditory stimulus context was less predictable, decoding performance was substantially greater for the high (relative to low) visual load conditions. Our results provide support for shared attentional resources between visual and auditory modalities that substantially influence the early sensory encoding of speech signals in a context-dependent manner. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.
Terreros, Gonzalo; Jorratt, Pascal; Aedo, Cristian; Elgoyhen, Ana Belén; Delano, Paul H
2016-07-06
During selective attention, subjects voluntarily focus their cognitive resources on a specific stimulus while ignoring others. Top-down filtering of peripheral sensory responses by higher structures of the brain has been proposed as one of the mechanisms responsible for selective attention. A prerequisite to accomplish top-down modulation of the activity of peripheral structures is the presence of corticofugal pathways. The mammalian auditory efferent system is a unique neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear bundle, and it has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear neurons in selective attention paradigms. Here, we trained wild-type and α-9 nicotinic receptor subunit knock-out (KO) mice, which lack cholinergic transmission between medial olivocochlear neurons and outer hair cells, in a two-choice visual discrimination task and studied the behavioral consequences of adding different types of auditory distractors. In addition, we evaluated the effects of contralateral noise on auditory nerve responses as a measure of the individual strength of the olivocochlear reflex. We demonstrate that KO mice have a reduced olivocochlear reflex strength and perform poorly in a visual selective attention paradigm. These results confirm that an intact medial olivocochlear transmission aids in ignoring auditory distraction during selective attention to visual stimuli. The auditory efferent system is a neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear system. It has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear neurons in selective attention paradigms. Here, we studied the behavioral consequences of adding different types of auditory distractors in a visual selective attention task in wild-type and α-9 nicotinic receptor knock-out (KO) mice. We demonstrate that KO mice perform poorly in the selective attention paradigm and that an intact medial olivocochlear transmission aids in ignoring auditory distractors during attention. Copyright © 2016 the authors 0270-6474/16/367198-12$15.00/0.
Operator Performance Measures for Assessing Voice Communication Effectiveness
1989-07-01
Only fragments of this report are recoverable: its operator performance and workload assessment techniques are based on Broadbent's (1958) limited-capacity filter model of human information processing, and its recoverable contents cover auditory information processing (auditory attention, auditory memory) and models of information processing (capacity theories).
Examining age-related differences in auditory attention control using a task-switching procedure.
Lawo, Vera; Koch, Iring
2014-03-01
Using a novel task-switching variant of dichotic selective listening, we examined age-related differences in the ability to intentionally switch auditory attention between 2 speakers defined by their sex. In our task, young (M age = 23.2 years) and older adults (M age = 66.6 years) performed a numerical size categorization on spoken number words. The task-relevant speaker was indicated by a cue prior to auditory stimulus onset. The cuing interval was either short or long and varied randomly trial by trial. We found clear performance costs with instructed attention switches. These auditory attention switch costs decreased with prolonged cue-stimulus interval. Older adults were generally much slower (but not more error prone) than young adults, but switching-related effects did not differ across age groups. These data suggest that the ability to intentionally switch auditory attention in a selective listening task is not compromised in healthy aging. We discuss the role of modality-specific factors in age-related differences.
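Switch costs in cued attention-switching tasks of this kind are conventionally computed as the difference between mean reaction times on switch and repeat trials, separately for each cue-stimulus interval (CSI); the sketch below uses made-up reaction times to show the computation and the expected reduction of the cost at the longer CSI.

```python
# Switch cost = mean RT on attention-switch trials minus mean RT on repeat
# trials, computed separately per cue-stimulus interval (CSI).
# Reaction times (ms) below are invented for illustration.
import statistics as st

rt = {  # (trial type, CSI) -> list of reaction times in ms
    ("repeat", "short"): [610, 640, 655], ("switch", "short"): [720, 735, 750],
    ("repeat", "long"):  [590, 600, 620], ("switch", "long"):  [640, 655, 665],
}
for csi in ("short", "long"):
    cost = st.mean(rt[("switch", csi)]) - st.mean(rt[("repeat", csi)])
    print(f"CSI {csi}: switch cost = {cost:.0f} ms")
```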
Strait, Dana L.; Kraus, Nina
2011-01-01
Even in the quietest of rooms, our senses are perpetually inundated by a barrage of sounds, requiring the auditory system to adapt to a variety of listening conditions in order to extract signals of interest (e.g., one speaker's voice amidst others). Brain networks that promote selective attention are thought to sharpen the neural encoding of a target signal, suppressing competing sounds and enhancing perceptual performance. Here, we ask: does musical training benefit cortical mechanisms that underlie selective attention to speech? To answer this question, we assessed the impact of selective auditory attention on cortical auditory-evoked response variability in musicians and non-musicians. Outcomes indicate strengthened brain networks for selective auditory attention in musicians in that musicians but not non-musicians demonstrate decreased prefrontal response variability with auditory attention. Results are interpreted in the context of previous work documenting perceptual and subcortical advantages in musicians for the hearing and neural encoding of speech in background noise. Musicians’ neural proficiency for selectively engaging and sustaining auditory attention to language indicates a potential benefit of music for auditory training. Given the importance of auditory attention for the development and maintenance of language-related skills, musical training may aid in the prevention, habilitation, and remediation of individuals with a wide range of attention-based language, listening and learning impairments. PMID:21716636
Comparing Monotic and Diotic Selective Auditory Attention Abilities in Children
ERIC Educational Resources Information Center
Cherry, Rochelle; Rubinstein, Adrienne
2006-01-01
Purpose: Some researchers have assessed ear-specific performance of auditory processing ability using speech recognition tasks with normative data based on diotic administration. The present study investigated whether monotic and diotic administrations yield similar results using the Selective Auditory Attention Test. Method: Seventy-two typically…
Pilcher, June J; Jennings, Kristen S; Phillips, Ginger E; McCubbin, James A
2016-11-01
The current study investigated performance on a dual auditory task during a simulated night shift. Night shifts and sleep deprivation negatively affect performance on vigilance-based tasks, but less is known about the effects on complex tasks. Because language processing is necessary for successful work performance, it is important to understand how it is affected by night work and sleep deprivation. Sixty-two participants completed a simulated night shift resulting in 28 hr of total sleep deprivation. Performance on a vigilance task and a dual auditory language task was examined across four testing sessions. The results indicate that working at night negatively impacts vigilance, auditory attention, and comprehension. The effects on the auditory task varied based on the content of the auditory material. When the material was interesting and easy, the participants performed better. Night work had a greater negative effect when the auditory material was less interesting and more difficult. These findings support research that vigilance decreases during the night. The results suggest that auditory comprehension suffers when individuals are required to work at night. Maintaining attention and controlling effort especially on passages that are less interesting or more difficult could improve performance during night shifts. The results from the current study apply to many work environments where decision making is necessary in response to complex auditory information. Better predicting the effects of night work on language processing is important for developing improved means of coping with shiftwork. © 2016, Human Factors and Ergonomics Society.
Paladini, Rebecca E.; Diana, Lorenzo; Zito, Giuseppe A.; Nyffeler, Thomas; Wyss, Patric; Mosimann, Urs P.; Müri, René M.; Nef, Tobias
2018-01-01
Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings, in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention. Thereby, a facilitation has been observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and the right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants’ accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent, as well as a spatially non-informative, auditory cue resulted in lateral asymmetries. Thereby, search times were increased for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition. However, participants’ performance in the congruent condition was modulated by their tone localisation accuracy. The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues, and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield. PMID:29293637
Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L
2017-12-13
Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation (acoustic frequency) might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions. Copyright © 2017 Dick et al.
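One component of the analysis above, the spatial correspondence between tonotopic signal strength and estimated myelination, amounts to a vertex-wise spatial correlation within auditory cortex. The sketch below illustrates that idea with simulated tuning-strength and R1 values; the vertex count, units, and effect size are invented.

```python
# Sketch: spatial correlation between the strength of frequency preference
# (e.g., amplitude of the best-frequency response) and an R1-based myelin
# estimate across vertices of an auditory region. Data are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_vertices = 1000
myelin_r1 = rng.normal(loc=1.0, scale=0.1, size=n_vertices)       # arbitrary units
tuning_strength = 0.5 * myelin_r1 + rng.normal(scale=0.1, size=n_vertices)

r, p = pearsonr(myelin_r1, tuning_strength)
print(f"spatial correlation r = {r:.2f}, p = {p:.1e}")
```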
Attention to sound improves auditory reliability in audio-tactile spatial optimal integration.
Vercillo, Tiziana; Gori, Monica
2015-01-01
The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during integration of multiple sensory signals. In this study, we investigated the impact of attention in combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each one consisting of a speaker and a tactile vibrator independently controllable by external software. We tested participants in an attentional and a non-attentional condition. In the attentional experiment, participants performed a dual task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli (conflicting or non-conflicting sounds and vibrations arranged along the horizontal axis) were presented sequentially. In the primary task participants had to evaluate in a space bisection task the position of the second stimulus (the probe) with respect to the others (the standards). In the secondary task they had to report occasional changes in duration of the second auditory stimulus. In the non-attentional condition participants had only to perform the primary task (space bisection). Our results showed an enhanced auditory precision (and auditory weights) in the auditory attentional condition with respect to the control non-attentional condition. The results of this study support the idea that modality-specific attention modulates multisensory integration.
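The Maximum Likelihood Estimation model referred to above makes a standard quantitative prediction: each cue is weighted by its reliability (inverse variance), and the combined audio-tactile estimate is more precise than either cue alone. The sketch below computes those predictions from hypothetical unisensory localization thresholds; under the study's account, attention to sound would appear as a larger auditory weight than the baseline unisensory reliabilities predict.

```python
# Standard MLE cue-combination predictions from unisensory variances:
#   w_i = (1/sigma_i^2) / sum_j (1/sigma_j^2)
#   sigma_combined^2 = 1 / sum_j (1/sigma_j^2)
# Unisensory sigmas below are hypothetical localization thresholds (cm).
def mle_prediction(sigma_a, sigma_t):
    ra, rt = 1 / sigma_a**2, 1 / sigma_t**2     # cue reliabilities
    w_a = ra / (ra + rt)                        # predicted auditory weight
    sigma_c = (1 / (ra + rt)) ** 0.5            # predicted combined sigma
    return w_a, sigma_c

w_a, sigma_c = mle_prediction(sigma_a=2.0, sigma_t=1.0)
print(f"auditory weight = {w_a:.2f}, combined sigma = {sigma_c:.2f} cm")
```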
Selective Attention to Auditory Memory Neurally Enhances Perceptual Precision.
Lim, Sung-Joo; Wöstmann, Malte; Obleser, Jonas
2015-12-09
Selective attention to a task-relevant stimulus facilitates encoding of that stimulus into a working memory representation. It is less clear whether selective attention also improves the precision of a stimulus already represented in memory. Here, we investigate the behavioral and neural dynamics of selective attention to representations in auditory working memory (i.e., auditory objects) using psychophysical modeling and model-based analysis of electroencephalographic signals. Human listeners performed a syllable pitch discrimination task where two syllables served as to-be-encoded auditory objects. Valid (vs neutral) retroactive cues were presented during retention to allow listeners to selectively attend to the to-be-probed auditory object in memory. Behaviorally, listeners represented auditory objects in memory more precisely (expressed by steeper slopes of a psychometric curve) and made faster perceptual decisions when valid compared to neutral retrocues were presented. Neurally, valid compared to neutral retrocues elicited a larger frontocentral sustained negativity in the evoked potential as well as enhanced parietal alpha/low-beta oscillatory power (9-18 Hz) during memory retention. Critically, individual magnitudes of alpha oscillatory power (7-11 Hz) modulation predicted the degree to which valid retrocues benefitted individuals' behavior. Our results indicate that selective attention to a specific object in auditory memory does benefit human performance not by simply reducing memory load, but by actively engaging complementary neural resources to sharpen the precision of the task-relevant object in memory. Can selective attention improve the representational precision with which objects are held in memory? And if so, what are the neural mechanisms that support such improvement? These issues have been rarely examined within the auditory modality, in which acoustic signals change and vanish on a milliseconds time scale. Introducing a new auditory memory paradigm and using model-based electroencephalography analyses in humans, we thus bridge this gap and reveal behavioral and neural signatures of increased, attention-mediated working memory precision. We further show that the extent of alpha power modulation predicts the degree to which individuals' memory performance benefits from selective attention. Copyright © 2015 the authors 0270-6474/15/3516094-11$15.00/0.
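The behavioral measure of precision in this study is the slope of a psychometric curve. A generic way to obtain such a slope is to fit a logistic function to the proportion of "higher" responses as a function of the pitch difference between the memorized syllable and the probe; the sketch below does this with fabricated data and is not the authors' fitting procedure.

```python
# Fit a logistic psychometric function and report its slope parameter.
# Steeper slope = more precise representation of the memorized pitch.
# Data points are fabricated for illustration.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, mu, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - mu)))

pitch_diff = np.array([-3, -2, -1, 0, 1, 2, 3], float)   # probe minus memory item
p_higher = np.array([0.05, 0.10, 0.30, 0.55, 0.75, 0.90, 0.97])

(mu, slope), _ = curve_fit(logistic, pitch_diff, p_higher, p0=[0.0, 1.0])
print(f"point of subjective equality = {mu:.2f}, slope = {slope:.2f}")
```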
Wahn, Basil; König, Peter
2015-01-01
Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modalities. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs in a pre-attentive processing stage.
ERIC Educational Resources Information Center
Reinertsen, Gloria M.
A study compared performances on a test of selective auditory attention between students educated in open-space versus closed classroom environments. An open-space classroom environment was defined as having no walls separating it from hallways or other classrooms. It was hypothesized that the incidence of auditory figure-ground (ability to focus…
Switching in the Cocktail Party: Exploring Intentional Control of Auditory Selective Attention
ERIC Educational Resources Information Center
Koch, Iring; Lawo, Vera; Fels, Janina; Vorlander, Michael
2011-01-01
Using a novel variant of dichotic selective listening, we examined the control of auditory selective attention. In our task, subjects had to respond selectively to one of two simultaneously presented auditory stimuli (number words), always spoken by a female and a male speaker, by performing a numerical size categorization. The gender of the…
The neural basis of visual dominance in the context of audio-visual object processing.
Schmid, Carmen; Büchel, Christian; Rose, Michael
2011-03-01
Visual dominance refers to the observation that in bimodal environments vision often has an advantage over other senses in humans. A better memory performance for visual compared with, e.g., auditory material is therefore assumed. However, the reason for this preferential processing and its relation to memory formation are largely unknown. In this fMRI experiment, we manipulated cross-modal competition and attention, two factors that both modulate bimodal stimulus processing and can affect memory formation. Pictures and sounds of objects were presented simultaneously in two levels of recognisability, thus manipulating the amount of cross-modal competition. Attention was manipulated via task instruction and directed either to the visual or the auditory modality. The factorial design allowed a direct comparison of the effects between both modalities. The resulting memory performance showed that visual dominance was limited to a distinct task setting. Visual object memory was superior to auditory object memory only when attention was allocated towards the competing modality. During encoding, cross-modal competition and attention towards the opponent domain reduced fMRI signals in both neural systems, but cross-modal competition was more pronounced in the auditory system and only in auditory cortex was this competition further modulated by attention. Furthermore, neural activity reduction in auditory cortex during encoding was closely related to the behavioural auditory memory impairment. These results indicate that visual dominance emerges from a less pronounced vulnerability of the visual system against competition from the auditory domain. Copyright © 2010 Elsevier Inc. All rights reserved.
Aydinli, Fatma Esen; Çak, Tuna; Kirazli, Meltem Çiğdem; Çinar, Betül Çiçek; Pektaş, Alev; Çengel, Ebru Kültür; Aksoy, Songül
Attention deficit hyperactivity disorder is a common impairing neuropsychiatric disorder with onset in early childhood. Almost half of the children with attention deficit hyperactivity disorder also experience a variety of motor-related dysfunctions ranging from fine/gross motor control problems to difficulties in maintaining balance. The main purpose of this study was to investigate the effects of two different auditory distractors, namely relaxing music and white noise, on upright balance performance in children with attention deficit hyperactivity disorder. We compared upright balance performance and the involvement of different sensory systems in the presence of auditory distractors between school-aged children with attention deficit hyperactivity disorder (n=26) and typically developing controls (n=20). The Neurocom SMART Balance Master Dynamic Posturography device was used for the sensory organization test. The sensory organization test was repeated three times for each participant in three different test environments. The balance scores in the silence environment were lower in the attention deficit hyperactivity disorder group, but the differences were not statistically significant. In addition to lower balance scores, the visual and vestibular ratios were also lower. Auditory distractors affected the general balance performance positively for both groups. More challenging conditions, using an unstable platform with distorted somatosensory signals, were more affected. Relaxing music was more effective in the control group, white noise was more effective in the attention deficit hyperactivity disorder group, and the positive effects of white noise became more apparent in challenging conditions. To the best of our knowledge, this is the first study evaluating balance performance in children with attention deficit hyperactivity disorder under the effects of auditory distractors. Although more studies are needed, our results indicate that auditory distractors may have enhancing effects on upright balance performance in children with attention deficit hyperactivity disorder. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Liu, Yung-Ching; Jhuang, Jing-Wun
2012-07-01
A driving simulator study was conducted to evaluate the effects of five in-vehicle warning information displays upon drivers' emergent response and decision performance. These displays included a visual display, auditory displays with and without spatial compatibility, and hybrid visual-auditory displays with and without spatial compatibility. Thirty volunteer drivers were recruited to perform various tasks that involved driving, stimulus-response, divided attention and stress rating. Results show that for single-modality displays, drivers benefited more from the visual display of warning information than from the auditory displays with or without spatial compatibility. However, the auditory display with spatial compatibility significantly improved drivers' performance in reacting to the divided attention task and making accurate S-R task decisions. Drivers' best performance was obtained for the hybrid display with spatial compatibility. Hybrid displays enabled drivers to respond the fastest and achieve the best accuracy in both S-R and divided attention tasks. Copyright © 2011 Elsevier Ltd and The Ergonomics Society. All rights reserved.
The modality effect of ego depletion: Auditory task modality reduces ego depletion.
Li, Qiong; Wang, Zhenhong
2016-08-01
An initial act of self-control that impairs subsequent acts of self-control is called ego depletion. The ego depletion phenomenon has been observed consistently. The modality effect refers to the effect of the presentation modality on the processing of stimuli. The modality effect has also been robustly found in a large body of research. However, no study to date has examined the modality effects of ego depletion. This issue was addressed in the current study. In Experiment 1, after all participants completed a handgrip task, participants in one group completed a visual attention regulation task and participants in the other group completed an auditory attention regulation task, and then all participants again completed a handgrip task. The ego depletion phenomenon was observed in both the visual and the auditory attention regulation task. Moreover, participants who completed the visual task performed worse on the handgrip task than participants who completed the auditory task, indicating greater ego depletion in the visual task condition. In Experiment 2, participants completed an initial task that either did or did not deplete self-control resources, and then they completed a second visual or auditory attention control task. The results indicated that depleted participants performed better on the auditory attention control task than on the visual attention control task. These findings suggest that altering task modality may reduce ego depletion. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.
de Jong, Ritske; Toffanin, Paolo; Harbers, Marten
2010-01-01
Frequency tagging has often been used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations, where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: an attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention. Copyright (c) 2009 Elsevier B.V. All rights reserved.
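As an illustration of the frequency-tagging logic used above, the following sketch estimates steady-state response amplitude at two tag frequencies from a synthetic single-channel EEG trace. The tag frequencies, sampling rate, and signal are placeholders, not the study's parameters or data.

```python
import numpy as np

# Toy frequency-tagging analysis: estimate steady-state response (SSR)
# amplitude at an auditory and a visual tag frequency from one EEG channel.
fs = 500.0                       # sampling rate (Hz), illustrative
f_aud, f_vis = 40.0, 15.0        # hypothetical tag frequencies (Hz)
t = np.arange(0, 10, 1 / fs)     # 10 s of data

rng = np.random.default_rng(1)
eeg = (0.5 * np.sin(2 * np.pi * f_aud * t)
       + 0.8 * np.sin(2 * np.pi * f_vis * t)
       + rng.normal(0, 2.0, t.size))

spectrum = np.abs(np.fft.rfft(eeg * np.hanning(eeg.size)))
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)

def ssr_amplitude(f_target, half_width=0.5):
    """Peak spectral amplitude in a narrow band around the tag frequency."""
    band = (freqs > f_target - half_width) & (freqs < f_target + half_width)
    return spectrum[band].max()

print("auditory SSR amplitude:", ssr_amplitude(f_aud))
print("visual SSR amplitude:  ", ssr_amplitude(f_vis))
```

Comparing these amplitudes across attention conditions is the basic contrast that frequency tagging affords.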
Dividing time: concurrent timing of auditory and visual events by young and elderly adults.
McAuley, J Devin; Miller, Jonathan P; Wang, Mo; Pang, Kevin C H
2010-07-01
This article examines age differences in individuals' ability to produce the durations of learned auditory and visual target events either in isolation (focused attention) or concurrently (divided attention). Young adults produced learned target durations equally well in focused and divided attention conditions. Older adults, in contrast, showed an age-related increase in timing variability in divided attention conditions that tended to be more pronounced for visual targets than for auditory targets. Age-related impairments were associated with a decrease in working memory span; moreover, the relationship between working memory and timing performance was largest for visual targets in divided attention conditions.
Auditory Scene Analysis: An Attention Perspective
2017-01-01
Purpose This review article provides a new perspective on the role of attention in auditory scene analysis. Method A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal hearing are summarized to demonstrate attention effects on auditory perception—from passive processes that organize unattended input to attention effects that act at different levels of the system. Data will show that attention can sharpen stream organization toward behavioral goals, identify auditory events obscured by noise, and limit passive processing capacity. Conclusions A model of attention is provided that illustrates how the auditory system performs multilevel analyses that involve interactions between stimulus-driven input and top-down processes. Overall, these studies show that (a) stream segregation occurs automatically and sets the basis for auditory event formation; (b) attention interacts with automatic processing to facilitate task goals; and (c) information about unattended sounds is not lost when selecting one organization over another. Our results support a neural model that allows multiple sound organizations to be held in memory and accessed simultaneously through a balance of automatic and task-specific processes, allowing flexibility for navigating noisy environments with competing sound sources. Presentation Video http://cred.pubs.asha.org/article.aspx?articleid=2601618 PMID:29049599
Interhemispheric interaction expands attentional capacity in an auditory selective attention task.
Scalf, Paige E; Banich, Marie T; Erickson, Andrew B
2009-04-01
Previous work from our laboratory indicates that interhemispheric interaction (IHI) functionally increases the attentional capacity available to support performance on visual tasks (Banich in The asymmetrical brain, pp 261-302, 2003). Because manipulations of both computational complexity and selection demand alter the benefits of IHI to task performance, we argue that IHI may be a general strategy for meeting increases in attentional demand. Other researchers, however, have suggested that the apparent benefits of IHI to attentional capacity are an epiphenomenon of the organization of the visual system (Fecteau and Enns in Neuropsychologia 43:1412-1428, 2005; Marsolek et al. in Neuropsychologia 40:1983-1999, 2002). In the current experiment, we investigate whether IHI increases attentional capacity outside the visual system by manipulating the selection demands of an auditory temporal pattern-matching task. We find that IHI expands attentional capacity in the auditory system. This suggests that the benefits of requiring IHI derive from a functional increase in attentional capacity rather than the organization of a specific sensory modality.
Oba, Sandra I.; Galvin, John J.; Fu, Qian-Jie
2014-01-01
Auditory training has been shown to significantly improve cochlear implant (CI) users’ speech and music perception. However, it is unclear whether post-training gains in performance were due to improved auditory perception or to generally improved attention, memory and/or cognitive processing. In this study, speech and music perception, as well as auditory and visual memory were assessed in ten CI users before, during, and after training with a non-auditory task. A visual digit span (VDS) task was used for training, in which subjects recalled sequences of digits presented visually. After the VDS training, VDS performance significantly improved. However, there were no significant improvements for most auditory outcome measures (auditory digit span, phoneme recognition, sentence recognition in noise, digit recognition in noise), except for small (but significant) improvements in vocal emotion recognition and melodic contour identification. Post-training gains were much smaller with the non-auditory VDS training than observed in previous auditory training studies with CI users. The results suggest that post-training gains observed in previous studies were not solely attributable to improved attention or memory, and were more likely due to improved auditory perception. The results also suggest that CI users may require targeted auditory training to improve speech and music perception. PMID:23516087
Xia, Jing; Zhang, Wei; Jiang, Yizhou; Li, You; Chen, Qi
2018-05-16
Practice and experiences gradually shape the central nervous system, from the synaptic level to large-scale neural networks. In a natural multisensory environment, even when inundated by streams of information from multiple sensory modalities, our brain does not give equal weight to different modalities. Rather, visual information more frequently receives preferential processing and eventually dominates consciousness and behavior, i.e., visual dominance. It remains unknown, however, how supra-modal and modality-specific practice effects operate during cross-modal selective attention, and moreover whether the practice effect shows the same modality preference as the visual dominance effect in the multisensory environment. To answer the above two questions, we adopted a cross-modal selective attention paradigm in conjunction with a hybrid fMRI design. Behaviorally, visual performance significantly improved while auditory performance remained constant with practice, indicating that visual attention more flexibly adapted behavior with practice than auditory attention. At the neural level, the practice effect was associated with decreasing neural activity in the frontoparietal executive network and increasing activity in the default mode network, which occurred independently of the modality attended, i.e., the supra-modal mechanisms. On the other hand, functional decoupling between the auditory and the visual system was observed with the progress of practice, which varied as a function of the modality attended. The auditory system was functionally decoupled from both the dorsal and ventral visual streams during auditory attention, while it was decoupled only from the ventral visual stream during visual attention. To efficiently suppress the irrelevant visual information with practice, auditory attention needs to additionally decouple the auditory system from the dorsal visual stream. The modality-specific mechanisms, together with the behavioral effect, thus support the visual dominance model in terms of the practice effect during cross-modal selective attention. Copyright © 2018 Elsevier Ltd. All rights reserved.
Wegrzyn, Martin; Herbert, Cornelia; Ethofer, Thomas; Flaisch, Tobias; Kissler, Johanna
2017-11-01
Visually presented emotional words are processed preferentially and effects of emotional content are similar to those of explicit attention deployment in that both amplify visual processing. However, auditory processing of emotional words is less well characterized and interactions between emotional content and task-induced attention have not been fully understood. Here, we investigate auditory processing of emotional words, focussing on how auditory attention to positive and negative words impacts their cerebral processing. A functional magnetic resonance imaging (fMRI) study manipulating word valence and attention allocation was performed. Participants heard negative, positive and neutral words to which they either listened passively or attended by counting negative or positive words, respectively. Regardless of valence, active processing compared to passive listening increased activity in primary auditory cortex, left intraparietal sulcus, and right superior frontal gyrus (SFG). The attended valence elicited stronger activity in left inferior frontal gyrus (IFG) and left SFG, in line with these regions' role in semantic retrieval and evaluative processing. No evidence for valence-specific attentional modulation in auditory regions or distinct valence-specific regional activations (i.e., negative > positive or positive > negative) was obtained. Thus, allocation of auditory attention to positive and negative words can substantially increase their processing in higher-order language and evaluative brain areas without modulating early stages of auditory processing. Inferior and superior frontal brain structures mediate interactions between emotional content, attention, and working memory when prosodically neutral speech is processed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise.
Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P; Ahlfors, Seppo P; Huang, Samantha; Lin, Fa-Hsuan; Raij, Tommi; Sams, Mikko; Vasios, Christos E; Belliveau, John W
2011-03-08
How can we concentrate on relevant sounds in noisy environments? A "gain model" suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A "tuning model" suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for "frequency tagging" of attention effects on maskers. Noise masking reduced early (50-150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50-150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise.
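The notch-filtered masker described in the abstract above can be illustrated with a short stimulus-generation sketch. The sampling rate, duration, and filter order below are assumptions for illustration, not the study's actual stimulus parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def notched_noise(center_hz, duration_s=1.0, fs=44100, order=4, seed=0):
    """White noise with a spectral notch of +/-1/6 octave around center_hz.
    Sampling rate, duration, and filter order are illustrative choices."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, int(duration_s * fs))
    lo = center_hz * 2 ** (-1 / 6)   # notch edge 1/6 octave below center
    hi = center_hz * 2 ** (1 / 6)    # notch edge 1/6 octave above center
    sos = butter(order, [lo, hi], btype="bandstop", fs=fs, output="sos")
    return sosfiltfilt(sos, noise)

masker = notched_noise(1000.0)       # masker notched around a 1 kHz tone
print(masker.shape)
```

Because the notch removes masker energy only immediately around the tone frequency, neurons with narrow tuning escape the masking while broadly tuned neurons do not, which is the contrast the tuning model exploits.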
Wang, Wuyi; Viswanathan, Shivakumar; Lee, Taraz; Grafton, Scott T
2016-01-01
Cortical theta band oscillations (4-8 Hz) in EEG signals have been shown to be important for a variety of different cognitive control operations in visual attention paradigms. However, the synchronization source of these signals, as defined by fMRI BOLD activity, and the extent to which theta oscillations play a role in multimodal attention remain unknown. Here we investigated the extent to which cross-modal visual and auditory attention impacts theta oscillations. Using a simultaneous EEG-fMRI paradigm, healthy human participants performed an attentional vigilance task with six cross-modal conditions using naturalistic stimuli. To assess supramodal mechanisms, modulation of theta oscillation amplitude for attention to either visual or auditory stimuli was correlated with BOLD activity by conjunction analysis. Negative correlations were localized to cortical regions associated with the default mode network, and positive correlations to ventral premotor areas. Modality-associated attention to visual stimuli was marked by a positive correlation of theta and BOLD activity in fronto-parietal areas that was not observed in the auditory condition. A positive correlation of theta and BOLD activity was observed in auditory cortex, while a negative correlation of theta and BOLD activity was observed in visual cortex during auditory attention. The data support a supramodal interaction of theta activity with default mode network (DMN) function, and modality-associated processes within fronto-parietal networks related to top-down, theta-related cognitive control in cross-modal visual attention. On the other hand, in sensory cortices there are opposing effects of theta activity during cross-modal auditory attention.
Aroudi, Ali; Doclo, Simon
2017-07-01
To decode auditory attention from single-trial EEG recordings in an acoustic scenario with two competing speakers, a least-squares method has recently been proposed. This method, however, requires the clean speech signals of both the attended and the unattended speaker to be available as reference signals. Since in practice only the binaural signals consisting of a reverberant mixture of both speakers and background noise are available, in this paper we explore the potential of using these (unprocessed) signals as reference signals for decoding auditory attention in different acoustic conditions (anechoic, reverberant, noisy, and reverberant-noisy). In addition, we investigate whether it is possible to use these signals instead of the clean attended speech signal for filter training. The experimental results show that using the unprocessed binaural signals for filter training and for decoding auditory attention is feasible, with relatively high decoding performance, although for most acoustic conditions the decoding performance is significantly lower than when using the clean speech signals.
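A minimal sketch of a ridge-regularized least-squares backward decoder in the spirit of the approach mentioned above: lagged EEG channels are regressed onto a reference speech envelope, and attention is then decoded by correlating the reconstructed envelope with each speaker's envelope. The data shapes, lag count, and regularization value are assumptions for illustration; the study's actual filter training and binaural reference signals are not reproduced here.

```python
import numpy as np

def lagged(eeg, n_lags):
    """Stack time-lagged copies of EEG (samples x channels) into a
    design matrix of shape (samples, channels * n_lags)."""
    cols = [np.roll(eeg, lag, axis=0) for lag in range(n_lags)]
    X = np.concatenate(cols, axis=1)
    X[:n_lags] = 0.0                 # discard wrapped-around samples
    return X

def train_decoder(eeg, envelope, n_lags=16, ridge=1e3):
    """Ridge-regularized least-squares backward model mapping lagged EEG
    to a reference speech envelope."""
    X = lagged(eeg, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ envelope)

def decode_attention(eeg, env_a, env_b, w, n_lags=16):
    """Reconstruct an envelope from EEG and pick the speaker whose envelope
    correlates best with the reconstruction."""
    recon = lagged(eeg, n_lags) @ w
    r_a = np.corrcoef(recon, env_a)[0, 1]
    r_b = np.corrcoef(recon, env_b)[0, 1]
    return ("speaker A", r_a, r_b) if r_a > r_b else ("speaker B", r_a, r_b)

# Placeholder data: 60 s of 16-channel EEG at 64 Hz and two speech envelopes.
rng = np.random.default_rng(2)
eeg = rng.normal(size=(60 * 64, 16))
env_a = rng.normal(size=eeg.shape[0])
env_b = rng.normal(size=eeg.shape[0])
w = train_decoder(eeg, env_a)        # "filter training" on a reference signal
print(decode_attention(eeg, env_a, env_b, w))
```

The question studied in the paper maps onto which signals are passed in as `envelope`, `env_a`, and `env_b`: clean speech versus the unprocessed binaural mixtures.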
Brain activity associated with selective attention, divided attention and distraction.
Salo, Emma; Salmela, Viljami; Salmi, Juha; Numminen, Jussi; Alho, Kimmo
2017-06-01
Top-down controlled selective or divided attention to sounds and visual objects, as well as bottom-up triggered attention to auditory and visual distractors, has been widely investigated. However, no study has systematically compared brain activations related to all these types of attention. To this end, we used functional magnetic resonance imaging (fMRI) to measure brain activity in participants performing a tone pitch or a foveal grating orientation discrimination task, or both, distracted by novel sounds not sharing frequencies with the tones or by extrafoveal visual textures. To force focusing of attention to tones or gratings, or both, task difficulty was kept constantly high with an adaptive staircase method. A whole brain analysis of variance (ANOVA) revealed fronto-parietal attention networks for both selective auditory and visual attention. A subsequent conjunction analysis indicated partial overlaps of these networks. However, like some previous studies, the present results also suggest segregation of prefrontal areas involved in the control of auditory and visual attention. The ANOVA also suggested, and another conjunction analysis confirmed, an additional activity enhancement in the left middle frontal gyrus related to divided attention supporting the role of this area in top-down integration of dual task performance. Distractors expectedly disrupted task performance. However, contrary to our expectations, activations specifically related to the distractors were found only in the auditory and visual cortices. This suggests gating of the distractors from further processing perhaps due to strictly focused attention in the current demanding discrimination tasks. Copyright © 2017 Elsevier B.V. All rights reserved.
Auditory and visual capture during focused visual attention.
Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan
2009-10-01
It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets presented at a nontarget (invalid) location and possible performance benefits occurring when the target location is (validly) cued. In this study, the authors modulated the degree of attentional focus by presenting endogenous cues with varying reliability and by displaying placeholders indicating the precise areas where the target stimuli could occur. By using not only valid and invalid exogenous cues but also neutral cues that provide temporal but no spatial information, they found performance benefits as well as costs when attention is not strongly focused. The benefits disappear when the attentional focus is increased. These results indicate that there is bottom-up capture of visual attention by irrelevant auditory and visual stimuli that cannot be suppressed by top-down attentional control. PsycINFO Database Record (c) 2009 APA, all rights reserved.
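The benefit/cost logic relative to a neutral cue used above can be made concrete with a small sketch; the reaction times and focus conditions below are invented purely for illustration.

```python
# Hypothetical mean reaction times (ms) for validly, neutrally, and invalidly
# cued targets under weak vs. strong endogenous focus; values are invented.
rt = {
    "weak focus":   {"valid": 480.0, "neutral": 500.0, "invalid": 530.0},
    "strong focus": {"valid": 498.0, "neutral": 500.0, "invalid": 503.0},
}

for focus, means in rt.items():
    benefit = means["neutral"] - means["valid"]    # speeding from a valid cue
    cost = means["invalid"] - means["neutral"]     # slowing from an invalid cue
    print(f"{focus}: benefit = {benefit:.0f} ms, cost = {cost:.0f} ms")
```

Shrinking benefits and costs under strong focus is the pattern the study reports when attention is tightly focused beforehand.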
Contingent capture of involuntary visual attention interferes with detection of auditory stimuli
Kamke, Marc R.; Harris, Jill
2014-01-01
The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality. PMID:24920945
Stevenson, Ryan A; Schlesinger, Joseph J; Wallace, Mark T
2013-02-01
Anesthesiology requires performing visually oriented procedures while monitoring auditory information about a patient's vital signs. A concern in operating room environments is the amount of competing information and the effects that divided attention has on patient monitoring, such as detecting auditory changes in arterial oxygen saturation via pulse oximetry. The authors measured the impact of visual attentional load and auditory background noise on the ability of anesthesia residents to monitor the pulse oximeter auditory display in a laboratory setting. Accuracies and response times were recorded reflecting anesthesiologists' abilities to detect changes in oxygen saturation across three levels of visual attention in quiet and with noise. Results show that visual attentional load substantially affects the ability to detect changes in oxygen saturation concentrations conveyed by auditory cues signaling 99 and 98% saturation. These effects are compounded by auditory noise, producing up to a 17% decline in performance. These deficits are seen in the ability to accurately detect a change in oxygen saturation and in speed of response. Most anesthesia accidents are initiated by small errors that cascade into serious events. Lack of monitor vigilance and inattention are two of the more commonly cited factors. Reducing such errors is thus a priority for improving patient safety. Specifically, efforts to reduce distractors and decrease background noise should be considered during induction and emergence, periods of especially high risk, when anesthesiologists have to attend to many tasks and are thus susceptible to error.
The impact of visual gaze direction on auditory object tracking.
Pomper, Ulrich; Chait, Maria
2017-07-05
Subjective experience suggests that we are able to direct our auditory attention independent of our visual gaze, e.g., when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated both auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention while participants detected targets presented from one of three loudspeakers. We observed increased response times when gaze was directed away from the locus of auditory attention. Further, we found an increase in occipital alpha-band power contralateral to the direction of gaze, indicative of a suppression of distracting input. Finally, this condition also led to stronger central theta-band power, which correlated with the observed effect in response times, indicative of differences in top-down processing. Our data suggest that a misalignment between gaze and auditory attention both reduces behavioural performance and modulates underlying neural processes. The involvement of central theta-band and occipital alpha-band effects is in line with compensatory neural mechanisms such as increased cognitive control and the suppression of task-irrelevant inputs.
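A generic way to quantify occipital alpha and central theta effects like those reported above is band-limited power from the analytic (Hilbert) envelope. The sketch below uses placeholder single-channel traces; the band edges are conventional definitions rather than values taken from the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_power(signal, fs, low, high, order=4):
    """Mean power of the band-limited analytic (Hilbert) envelope."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return np.mean(np.abs(hilbert(sosfiltfilt(sos, signal))) ** 2)

fs = 250.0
rng = np.random.default_rng(3)
occipital = rng.normal(size=int(10 * fs))   # placeholder single-channel traces
central = rng.normal(size=int(10 * fs))

alpha = band_power(occipital, fs, 8, 13)    # occipital alpha band (8-13 Hz)
theta = band_power(central, fs, 4, 7)       # central theta band (4-7 Hz)
print(f"occipital alpha power: {alpha:.3f}, central theta power: {theta:.3f}")
```

Correlating such per-condition power estimates with response times is the kind of brain-behaviour link the abstract describes for the theta effect.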
Auditory-Cortex Short-Term Plasticity Induced by Selective Attention
Jääskeläinen, Iiro P.; Ahveninen, Jyrki
2014-01-01
The ability to concentrate on relevant sounds in the acoustic environment is crucial for everyday function and communication. Converging lines of evidence suggest that transient functional changes in auditory-cortex neurons, “short-term plasticity”, might explain this fundamental function. Under conditions of strongly focused attention, enhanced processing of attended sounds can take place at very early latencies (~50 ms from sound onset) in primary auditory cortex and possibly even at earlier latencies in subcortical structures. More robust selective-attention short-term plasticity is manifested as modulation of responses peaking at ~100 ms from sound onset in functionally specialized nonprimary auditory-cortical areas by way of stimulus-specific reshaping of neuronal receptive fields that supports filtering of selectively attended sound features from task-irrelevant ones. Such effects have been shown to take effect within seconds of shifting the attentional focus. There are findings suggesting that the reshaping of neuronal receptive fields is even stronger at longer auditory-cortex response latencies (~300 ms from sound onset). These longer-latency short-term plasticity effects seem to build up more gradually, within tens of seconds after shifting the focus of attention. Importantly, some of the auditory-cortical short-term plasticity effects observed during selective attention predict enhancements in behaviorally measured sound discrimination performance. PMID:24551458
Crossmodal attention switching: auditory dominance in temporal discrimination tasks.
Lukas, Sarah; Philipp, Andrea M; Koch, Iring
2014-11-01
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.
Rand, Kristina M.; Creem-Regehr, Sarah H.; Thompson, William B.
2015-01-01
The ability to navigate without getting lost is an important aspect of quality of life. In five studies, we evaluated how spatial learning is affected by the increased demands of keeping oneself safe while walking with degraded vision (mobility monitoring). We proposed that safe low-vision mobility requires attentional resources, providing competition for those needed to learn a new environment. In Experiments 1 and 2 participants navigated along paths in a real-world indoor environment with simulated degraded vision or normal vision. Memory for object locations seen along the paths was better with normal compared to degraded vision. With degraded vision, memory was better when participants were guided by an experimenter (low monitoring demands) versus unguided (high monitoring demands). In Experiments 3 and 4, participants walked while performing an auditory task. Auditory task performance was superior with normal compared to degraded vision. With degraded vision, auditory task performance was better when guided compared to unguided. In Experiment 5, participants performed both the spatial learning and auditory tasks under degraded vision. Results showed that attention mediates the relationship between mobility-monitoring demands and spatial learning. These studies suggest that more attention is required and spatial learning is impaired when navigating with degraded viewing. PMID:25706766
Auditory global-local processing: effects of attention and musical experience.
Ouimet, Tia; Foster, Nicholas E V; Hyde, Krista L
2012-10-01
In vision, global (whole) features are typically processed before local (detail) features ("global precedence effect"). However, the distinction between global and local processing is less clear in the auditory domain. The aims of the present study were to investigate: (i) the effects of directed versus divided attention, and (ii) the effect of musical training on auditory global-local processing in 16 adult musicians and 16 non-musicians. Participants were presented with short nine-tone melodies, each comprised of three triplet sequences (three-tone units). In a "directed attention" task, participants were asked to focus on either the global or local pitch pattern and had to determine if the pitch pattern went up or down. In a "divided attention" task, participants judged whether the target pattern (up or down) was present or absent. Overall, global structure was perceived faster and more accurately than local structure. The global precedence effect was observed regardless of whether attention was directed to a specific level or divided between levels. Musicians performed more accurately than non-musicians overall, but non-musicians showed a more pronounced global advantage. This study provides evidence for an auditory global precedence effect across attention tasks, and for differences in auditory global-local processing associated with musical experience.
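The hierarchical melody structure described above (a global contour across triplets and a local contour within each triplet) can be sketched as follows; the semitone step sizes and the MIDI base note are illustrative assumptions, not the study's stimulus parameters.

```python
import numpy as np

def hierarchical_melody(global_dir="up", local_dir="down",
                        base_midi=60, global_step=5, local_step=1):
    """Nine-tone melody (three triplets): the triplet-to-triplet contour
    follows global_dir, the within-triplet contour follows local_dir.
    Step sizes (in semitones) and the base note are illustrative only."""
    g_sign = 1 if global_dir == "up" else -1
    l_sign = 1 if local_dir == "up" else -1
    melody = []
    for triplet in range(3):
        start = base_midi + g_sign * global_step * triplet
        melody.extend(start + l_sign * local_step * k for k in range(3))
    return np.array(melody)

# Congruent (both rising) vs. incongruent (global rising, local falling) melodies.
print(hierarchical_melody("up", "up"))     # [60 61 62 65 66 67 70 71 72]
print(hierarchical_melody("up", "down"))   # [60 59 58 65 64 63 70 69 68]
```

Judging the direction of the global pattern corresponds to comparing triplet starting pitches, while the local judgment concerns the contour inside a single triplet.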
Kamke, Marc R; Van Luyn, Jeanette; Constantinescu, Gabriella; Harris, Jill
2014-01-01
Evidence suggests that deafness-induced changes in visual perception, cognition and attention may compensate for a hearing loss. Such alterations, however, may also negatively influence adaptation to a cochlear implant. This study investigated whether involuntary attentional capture by salient visual stimuli is altered in children who use a cochlear implant. Thirteen experienced implant users (aged 8-16 years) and age-matched normally hearing children were presented with a rapid sequence of simultaneous visual and auditory events. Participants were tasked with detecting numbers presented in a specified color and identifying a change in the tonal frequency whilst ignoring irrelevant visual distractors. Compared to visual distractors that did not possess the target-defining characteristic, target-colored distractors were associated with a decrement in visual performance (response time and accuracy), demonstrating a contingent capture of involuntary attention. Visual distractors did not, however, impair auditory task performance. Importantly, detection performance for the visual and auditory targets did not differ between the groups. These results suggest that proficient cochlear implant users demonstrate normal capture of visuospatial attention by stimuli that match top-down control settings.
The ability to tap to a beat relates to cognitive, linguistic, and perceptual skills
Tierney, Adam T.; Kraus, Nina
2013-01-01
Reading-impaired children have difficulty tapping to a beat. Here we tested whether this relationship between reading ability and synchronized tapping holds in typically-developing adolescents. We also hypothesized that tapping relates to two other abilities. First, since auditory-motor synchronization requires monitoring of the relationship between motor output and auditory input, we predicted that subjects better able to tap to the beat would perform better on attention tests. Second, since auditory-motor synchronization requires fine temporal precision within the auditory system for the extraction of a sound’s onset time, we predicted that subjects better able to tap to the beat would be less affected by backward masking, a measure of temporal precision within the auditory system. As predicted, tapping performance related to reading, attention, and backward masking. These results motivate future research investigating whether beat synchronization training can improve not only reading ability, but potentially executive function and basic auditory processing as well. PMID:23400117
Changes in otoacoustic emissions during selective auditory and visual attention
Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis
2015-01-01
Previous studies have demonstrated that the otoacoustic emissions (OAEs) measured during behavioral tasks can have different magnitudes when subjects are attending selectively or not attending. The implication is that the cognitive and perceptual demands of a task can affect the first neural stage of auditory processing—the sensory receptors themselves. However, the directions of the reported attentional effects have been inconsistent, the magnitudes of the observed differences typically have been small, and comparisons across studies have been made difficult by significant procedural differences. In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring selective auditory attention (dichotic or diotic listening), selective visual attention, or relative inattention. Within subjects, the differences in nSFOAE magnitude between inattention and attention conditions were about 2–3 dB for both auditory and visual modalities, and the effect sizes for the differences typically were large for both nSFOAE magnitude and phase. These results reveal that the cochlear efferent reflex is differentially active during selective attention and inattention, for both auditory and visual tasks, although they do not reveal how attention is improved when efferent activity is greater. PMID:25994703
The role of working memory in auditory selective attention.
Dalton, Polly; Santangelo, Valerio; Spence, Charles
2009-11-01
A growing body of research now demonstrates that working memory plays an important role in controlling the extent to which irrelevant visual distractors are processed during visual selective attention tasks (e.g., Lavie, Hirst, De Fockert, & Viding, 2004). Recently, it has been shown that the successful selection of tactile information also depends on the availability of working memory (Dalton, Lavie, & Spence, 2009). Here, we investigate whether working memory plays a role in auditory selective attention. Participants focused their attention on short continuous bursts of white noise (targets) while attempting to ignore pulsed bursts of noise (distractors). Distractor interference in this auditory task, as measured in terms of the difference in performance between congruent and incongruent distractor trials, increased significantly under high (vs. low) load in a concurrent working-memory task. These results provide the first evidence demonstrating a causal role for working memory in reducing interference by irrelevant auditory distractors.
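The interference measure used above (incongruent minus congruent performance) and its interaction with working-memory load can be written out in a few lines; the reaction times, group size, and load labels below are hypothetical.

```python
import numpy as np

# Hypothetical mean reaction times (ms) per participant for congruent and
# incongruent distractor trials under low and high working-memory load.
rng = np.random.default_rng(4)
n = 24
rt = {
    ("low", "congruent"): rng.normal(520, 30, n),
    ("low", "incongruent"): rng.normal(545, 30, n),
    ("high", "congruent"): rng.normal(560, 30, n),
    ("high", "incongruent"): rng.normal(610, 30, n),
}

# Distractor interference = incongruent minus congruent RT, per load level.
for load in ("low", "high"):
    interference = rt[(load, "incongruent")] - rt[(load, "congruent")]
    print(f"{load:>4} load: mean interference = {interference.mean():.1f} ms")
```

Larger interference under high load is the signature pattern reported for auditory distractors in this study.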
Rinne, Teemu; Muers, Ross S; Salo, Emma; Slater, Heather; Petkov, Christopher I
2017-06-01
The cross-species correspondences and differences in how attention modulates brain responses in humans and animal models are poorly understood. We trained 2 monkeys to perform an audio-visual selective attention task during functional magnetic resonance imaging (fMRI), rewarding them to attend to stimuli in one modality while ignoring those in the other. Monkey fMRI identified regions strongly modulated by auditory or visual attention. Surprisingly, auditory attention-related modulations were much more restricted in monkeys than humans performing the same tasks during fMRI. Further analyses ruled out trivial explanations, suggesting that labile selective-attention performance was associated with inhomogeneous modulations in wide cortical regions in the monkeys. The findings provide initial insights into how audio-visual selective attention modulates the primate brain, identify sources for "lost" attention effects in monkeys, and carry implications for modeling the neurobiology of human cognition with nonhuman animals. © The Author 2017. Published by Oxford University Press.
Attention distributed across sensory modalities enhances perceptual performance
Mishra, Jyoti; Gazzaley, Adam
2012-01-01
This study investigated the interaction between top-down attentional control and multisensory processing in humans. Using semantically congruent and incongruent audiovisual stimulus streams, we found target detection to be consistently improved in the setting of distributed audiovisual attention versus focused visual attention. This performance benefit was manifested as faster reaction times for congruent audiovisual stimuli, and as accuracy improvements for incongruent stimuli, resulting in a resolution of stimulus interference. Electrophysiological recordings revealed that these behavioral enhancements were associated with reduced neural processing of both auditory and visual components of the audiovisual stimuli under distributed vs. focused visual attention. These neural changes were observed at early processing latencies, within 100–300 ms post-stimulus onset, and localized to auditory, visual, and polysensory temporal cortices. These results highlight a novel neural mechanism for top-down driven performance benefits via enhanced efficacy of sensory neural processing during distributed audiovisual attention relative to focused visual attention. PMID:22933811
Transient human auditory cortex activation during volitional attention shifting
Uhlig, Christian Harm; Gutschalk, Alexander
2017-01-01
While strong activation of auditory cortex is generally found for exogenous orienting of attention, endogenous, intra-modal shifting of auditory attention has not yet been demonstrated to evoke transient activation of the auditory cortex. Here, we used fMRI to test if endogenous shifting of attention is also associated with transient activation of the auditory cortex. In contrast to previous studies, attention shifts were completely self-initiated and not cued by transient auditory or visual stimuli. Stimuli were two dichotic, continuous streams of tones, whose perceptual grouping was not ambiguous. Participants were instructed to continuously focus on one of the streams and switch between the two after a while, indicating the time and direction of each attentional shift by pressing one of two response buttons. The BOLD response around the time of the button presses revealed robust activation of the auditory cortex, along with activation of a distributed task network. To test if the transient auditory cortex activation was specifically related to auditory orienting, a self-paced motor task was added, where participants were instructed to ignore the auditory stimulation while they pressed the response buttons in alternation and at a similar pace. Results showed that attentional orienting produced stronger activity in auditory cortex, but auditory cortex activation was also observed for button presses without focused attention to the auditory stimulus. The response related to attention shifting was stronger contralateral to the side where attention was shifted to. Contralateral-dominant activation was also observed in dorsal parietal cortex areas, confirming previous observations for auditory attention shifting in studies that used auditory cues. PMID:28273110
How visual cues for when to listen aid selective auditory attention.
Varghese, Lenny A; Ozmeral, Erol J; Best, Virginia; Shinn-Cunningham, Barbara G
2012-06-01
Visual cues are known to aid auditory processing when they provide direct information about signal content, as in lip reading. However, some studies hint that visual cues also aid auditory perception by guiding attention to the target in a mixture of similar sounds. The current study directly tests this idea for complex, nonspeech auditory signals, using a visual cue providing only timing information about the target. Listeners were asked to identify a target zebra finch bird song played at a random time within a longer, competing masker. Two different maskers were used: noise and a chorus of competing bird songs. On half of all trials, a visual cue indicated the timing of the target within the masker. For the noise masker, the visual cue did not affect performance when target and masker were from the same location, but improved performance when target and masker were in different locations. In contrast, for the chorus masker, visual cues improved performance only when target and masker were perceived as coming from the same direction. These results suggest that simple visual cues for when to listen improve target identification by enhancing sounds near the threshold of audibility when the target is energetically masked and by enhancing segregation when it is difficult to direct selective attention to the target. Visual cues help little when target and masker already differ in attributes that enable listeners to engage selective auditory attention effectively, including differences in spectrotemporal structure and in perceived location.
Characterizing the roles of alpha and theta oscillations in multisensory attention.
Keller, Arielle S; Payne, Lisa; Sekuler, Robert
2017-05-01
Cortical alpha oscillations (8-13Hz) appear to play a role in suppressing distractions when just one sensory modality is being attended, but do they also contribute when attention is distributed over multiple sensory modalities? For an answer, we examined cortical oscillations in human subjects who were dividing attention between auditory and visual sequences. In Experiment 1, subjects performed an oddball task with auditory, visual, or simultaneous audiovisual sequences in separate blocks, while the electroencephalogram was recorded using high-density scalp electrodes. Alpha oscillations were present continuously over posterior regions while subjects were attending to auditory sequences. This supports the idea that the brain suppresses processing of visual input in order to advantage auditory processing. During a divided-attention audiovisual condition, an oddball (a rare, unusual stimulus) occurred in either the auditory or the visual domain, requiring that attention be divided between the two modalities. Fronto-central theta band (4-7Hz) activity was strongest in this audiovisual condition, when subjects monitored auditory and visual sequences simultaneously. Theta oscillations have been associated with both attention and with short-term memory. Experiment 2 sought to distinguish these possible roles of fronto-central theta activity during multisensory divided attention. Using a modified version of the oddball task from Experiment 1, Experiment 2 showed that differences in theta power among conditions were independent of short-term memory load. Ruling out theta's association with short-term memory, we conclude that fronto-central theta activity is likely a marker of multisensory divided attention. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dai, Lengshi; Shinn-Cunningham, Barbara G
2016-01-01
Listeners with normal hearing thresholds (NHTs) differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in the cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to the individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials (ERPs) from the scalp (reflecting cortical responses to sound) and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited due to fine stimulus details vs. due to control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs. These results support the hypothesis that behavioral abilities amongst listeners with NHTs can arise due to both subcortical coding differences and differences in attentional control, depending on stimulus characteristics and task demands.
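A rough sketch of how an envelope-following-response (EFR) strength and a masking-degradation ratio might be computed. The modulation frequency, sampling rate, and synthetic "responses" below are assumptions for illustration, not the study's measurements or exact method.

```python
import numpy as np

def efr_strength(response, fs, mod_freq, half_width=1.0):
    """Spectral amplitude of an averaged response at the envelope modulation
    frequency -- a simple stand-in for an envelope-following response (EFR)."""
    spec = np.abs(np.fft.rfft(response * np.hanning(response.size)))
    freqs = np.fft.rfftfreq(response.size, 1 / fs)
    band = (freqs > mod_freq - half_width) & (freqs < mod_freq + half_width)
    return spec[band].max()

fs, mod_freq = 1000.0, 100.0           # illustrative values only
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(5)
partially_masked = 0.6 * np.sin(2 * np.pi * mod_freq * t) + rng.normal(0, 1, t.size)
fully_masked = 0.2 * np.sin(2 * np.pi * mod_freq * t) + rng.normal(0, 1, t.size)

# "Coding strength" here: how much of the EFR survives full masking relative
# to partial masking (a larger ratio means less degradation).
ratio = efr_strength(fully_masked, fs, mod_freq) / efr_strength(partially_masked, fs, mod_freq)
print(f"EFR retention under full masking: {ratio:.2f}")
```

A per-listener ratio of this kind is the sort of subcortical coding metric that the abstract relates to behavioral performance across the two experiments.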
ERIC Educational Resources Information Center
Parmentier, Fabrice B. R.
2008-01-01
Unexpected auditory stimuli are potent distractors, able to break through selective attention and disrupt performance in an unrelated visual task. This study examined the processing fate of novel sounds, assessing the extent to which their semantic content is analyzed and whether the outcome of this processing can impact on subsequent behavior.…
Control of Auditory Attention in Children With Specific Language Impairment.
Victorino, Kristen R; Schwartz, Richard G
2015-08-01
Children with specific language impairment (SLI) appear to demonstrate deficits in attention and its control. Selective attention involves the cognitive control of attention directed toward a relevant stimulus and simultaneous inhibition of attention toward irrelevant stimuli. The current study examined attention control during a cross-modal word recognition task. Twenty participants with SLI (ages 9-12 years) and 20 age-matched peers with typical language development (TLD) listened to words through headphones and were instructed to attend to the words in 1 ear while ignoring the words in the other ear. They were simultaneously presented with pictures and asked to make a lexical decision about whether the pictures and auditory words were the same or different. Accuracy and reaction time were measured in 5 conditions, in which the stimulus in the unattended channel was manipulated. The groups performed with similar accuracy. Compared with their peers with TLD, children with SLI had slower reaction times overall and different within-group patterns of performance by condition. Children with TLD showed efficient inhibitory control in conditions that required active suppression of competing stimuli. Participants with SLI had difficulty exerting control over their auditory attention in all conditions, with particular difficulty inhibiting distractors of all types.
Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude
2016-06-01
Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory-guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that pathways for auditory attention to memory emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
Selective Attention and Sensory Modality in Aging: Curses and Blessings.
Van Gerven, Pascal W M; Guerreiro, Maria J S
2016-01-01
The notion that selective attention is compromised in older adults as a result of impaired inhibitory control is well established, yet it is based primarily on empirical findings covering the visual modality. Auditory and, especially, cross-modal selective attention remain remarkably underexposed in the literature on aging. In the past 5 years, we have attempted to fill these voids by investigating the performance of younger and older adults on equivalent tasks covering all four combinations of visual or auditory targets with visual or auditory distractors. In doing so, we have demonstrated that older adults are especially impaired in auditory selective attention with visual distraction. This pattern of results was not mirrored by the results of our psychophysiological studies, however, in which both enhancement of target processing and suppression of distractor processing appeared to be age equivalent. We currently conclude that: (1) age-related differences in selective attention are modality dependent; (2) age-related differences in selective attention are limited; and (3) it remains an open question whether modality-specific age differences in selective attention are due to impaired distractor inhibition, impaired target enhancement, or both. These conclusions put the longstanding inhibitory deficit hypothesis of aging in a new perspective.
Berti, Stefan
2013-01-01
Distraction of goal-oriented performance by a sudden change in the auditory environment is an everyday experience. Different types of changes can be distracting, including the sudden onset of a transient sound and a slight deviation in an otherwise regular auditory background. With regard to deviance detection, it is assumed that slight changes in a continuous sequence of auditory stimuli are detected by a predictive coding mechanism, and this mechanism has been shown to be capable of distracting ongoing task performance. In contrast, it remains an open question whether transient detection (which does not rely on predictive coding mechanisms) can trigger behavioral distraction, too. In the present study, the effect of rare auditory changes on visual task performance is tested in an auditory-visual cross-modal distraction paradigm. The rare changes are either embedded within a continuous standard stimulation (triggering deviance detection) or presented within an otherwise silent situation (triggering transient detection). In the event-related brain potentials, deviants elicited the mismatch negativity (MMN) while transients elicited an enhanced N1 component, mirroring pre-attentive change detection in both conditions but on the basis of different neuro-cognitive processes. These sensory components are followed by attention-related ERP components, including the P3a and the reorienting negativity (RON), demonstrating that both types of changes trigger switches of attention. Finally, distraction of task performance is observable, too, but the impact of deviants is greater than that of transients. These findings suggest different routes of distraction, allowing for the automatic processing of a wide range of potentially relevant changes in the environment as a prerequisite for adaptive behavior. PMID:23874278
Merikanto, Ilona; Lahti, Tuuli; Castaneda, Anu E; Tuulio-Henriksson, Annamari; Aalto-Setälä, Terhi; Suvisaari, Jaana; Partonen, Timo
2012-10-01
Seasonal variations in mood and behavior are common among the general population and may have a deteriorating effect on cognitive functions. In this study, the effect of seasonal affective disorder (SAD)-like symptoms on cognitive test performance was evaluated in more detail. The data were derived from the Mental Health in Early Adulthood in Finland study. Participants (n = 481) filled in a modified Seasonal Pattern Assessment Questionnaire (SPAQ) and performed cognitive tests of verbal and visual skills, attention and general intelligence. SAD-like symptoms, especially seasonal variations in weight and appetite, had a significant effect on working memory (Digit Span Backward, P = 0.008) and on auditory attention and short-term memory (Digit Span Forward, P = 0.004). Seasonal variations in sleep duration and mood had an effect on auditory attention and short-term memory (Digit Span Forward, P = 0.02 and P = 0.0002, respectively). Seasonal variations in social activity and energy level had no effect. Seasonal changes in mood, appetite and weight have an impairing effect on auditory attention and processing speed. If performance tests are not to be repeated in different seasons, attention needs to be given to the most appropriate season in which to test.
Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O
2008-09-16
Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.
Jorratt, Pascal; Delano, Paul H; Delgado, Carolina; Dagnino-Subiabre, Alexies; Terreros, Gonzalo
2017-01-01
The auditory efferent system is a neural network that originates in the auditory cortex and projects to the cochlear receptor through olivocochlear (OC) neurons. Medial OC neurons make cholinergic synapses with outer hair cells (OHCs) through nicotinic receptors constituted by α9 and α10 subunits. One of the physiological functions of the α9 nicotinic receptor subunit (α9-nAChR) is the suppression of auditory distractors during selective attention to visual stimuli. In a recent study we demonstrated that the behavioral performance of alpha-9 nicotinic receptor knock-out (KO) mice is altered during selective attention to visual stimuli with auditory distractors: they made fewer correct responses and more omissions than wild-type (WT) mice. As the inhibition of behavioral responses to irrelevant stimuli is an important mechanism of selective attention, behavioral errors are relevant measures that can reflect altered inhibitory control. Errors produced during a cued attention task can be classified as premature, target and perseverative errors. Perseverative responses can be considered an inability to inhibit the repetition of an action already planned, while premature responses can be considered an index of the ability to wait or retain an action. Here, we studied premature, target and perseverative errors during a visual attention task with auditory distractors in WT and KO mice. We found that α9-KO mice make fewer perseverative errors, with longer latencies, than WT mice in the presence of auditory distractors. In addition, although we found no significant difference in the number of target errors between genotypes, KO mice made more short-latency target errors than WT mice during the presentation of auditory distractors. The fewer perseverative errors made by α9-KO mice could be explained by a reduced motivation for reward and an increased impulsivity during decision making with auditory distraction in KO mice.
Tune, Sarah; Wöstmann, Malte; Obleser, Jonas
2018-02-11
In recent years, hemispheric lateralisation of alpha power has emerged as a neural mechanism thought to underpin spatial attention across sensory modalities. Yet, how healthy ageing, beginning in middle adulthood, impacts the modulation of lateralised alpha power supporting auditory attention remains poorly understood. In the current electroencephalography study, middle-aged and older adults (N = 29; ~40-70 years) performed a dichotic listening task that simulates a challenging, multitalker scenario. We examined the extent to which the modulation of 8-12 Hz alpha power would serve as a neural marker of listening success across age. Given the increase in interindividual variability with age, we examined an extensive battery of behavioural, perceptual and neural measures. Similar to findings on younger adults, middle-aged and older listeners' auditory spatial attention induced robust lateralisation of alpha power, which synchronised with the speech rate. Notably, the observed relationship between this alpha lateralisation and task performance did not co-vary with age. Instead, task performance was strongly related to an individual's attentional and working memory capacity. Multivariate analyses revealed a separation of neural and behavioural variables independent of age. Our results suggest that in age-varying samples such as the present one, the lateralisation of alpha power is neither a sufficient nor a necessary neural strategy for an individual's auditory spatial attention, as higher age might come with increased use of alternative, compensatory mechanisms. Our findings emphasise that explaining interindividual variability will be key to understanding the role of alpha oscillations in auditory attention in the ageing listener. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
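Alpha-power lateralisation of the sort analysed above is commonly summarised by an index contrasting attend-left and attend-right trials over left- and right-hemisphere sensors. The sketch below shows one plausible way to compute such an index with plain NumPy; the channel grouping, frequency band, and index definition are illustrative assumptions, not the authors' analysis.

```python
import numpy as np

def bandpower(epochs, fs, band=(8.0, 12.0)):
    """Mean power in `band` (Hz) per channel, averaged over trials.
    epochs: array (n_trials, n_channels, n_samples)."""
    spec = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
    freqs = np.fft.rfftfreq(epochs.shape[-1], d=1.0 / fs)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return spec[..., sel].mean(axis=(-1, 0))           # -> (n_channels,)

def alpha_lateralisation_index(attend_left, attend_right, fs, left_ch, right_ch):
    """Positive when alpha power is relatively higher over the hemisphere
    ipsilateral to the attended side (the classic attention effect)."""
    pl = bandpower(attend_left, fs)                    # attend-left trials
    pr = bandpower(attend_right, fs)                   # attend-right trials
    ipsi = pl[left_ch].mean() + pr[right_ch].mean()
    contra = pl[right_ch].mean() + pr[left_ch].mean()
    return (ipsi - contra) / (ipsi + contra)

# toy usage: 40 trials, 4 channels (0-1 left hemisphere, 2-3 right), 2 s at 250 Hz
rng = np.random.default_rng(1)
al = rng.normal(size=(40, 4, 500))
ar = rng.normal(size=(40, 4, 500))
print(alpha_lateralisation_index(al, ar, fs=250, left_ch=[0, 1], right_ch=[2, 3]))
```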
Rao, Aparna; Rishiq, Dania; Yu, Luodi; Zhang, Yang; Abrams, Harvey
The objectives of this study were to investigate the effects of hearing aid use and the effectiveness of ReadMyQuips (RMQ), an auditory training program, on speech perception performance and auditory selective attention using electrophysiological measures. RMQ is an audiovisual training program designed to improve speech perception in everyday noisy listening environments. Participants were adults with mild to moderate hearing loss who were first-time hearing aid users. After 4 weeks of hearing aid use, the experimental group completed RMQ training in 4 weeks, and the control group received listening practice on audiobooks during the same period. Cortical late event-related potentials (ERPs) and the Hearing in Noise Test (HINT) were administered at prefitting, pretraining, and post-training to assess effects of hearing aid use and RMQ training. An oddball paradigm allowed tracking of changes in P3a and P3b ERPs to distractors and targets, respectively. Behavioral measures were also obtained while ERPs were recorded from participants. After 4 weeks of hearing aid use but before auditory training, HINT results did not show a statistically significant change, but there was a significant P3a reduction. This reduction in P3a was correlated with improvement in d prime (d') in the selective attention task. Increased P3b amplitudes were also correlated with improvement in d' in the selective attention task. After training, this correlation between P3b and d' remained in the experimental group, but not in the control group. Similarly, HINT testing showed improved speech perception post training only in the experimental group. The criterion calculated in the auditory selective attention task showed a reduction only in the experimental group after training. ERP measures in the auditory selective attention task did not show any changes related to training. Hearing aid use was associated with a decrement in involuntary attention switch to distractors in the auditory selective attention task. RMQ training led to gains in speech perception in noise and improved listener confidence in the auditory selective attention task.
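The d' and criterion values reported above are standard equal-variance signal detection measures computed from hit and false-alarm rates. A minimal illustration follows; the 0.5-count correction for extreme rates is an assumption, not necessarily the authors' choice.

```python
from scipy.stats import norm

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Equal-variance signal detection: d' = z(H) - z(FA), c = -(z(H) + z(FA)) / 2."""
    # small-count correction so z() never receives exactly 0 or 1
    h = (hits + 0.5) / (hits + misses + 1.0)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = norm.ppf(h) - norm.ppf(fa)
    criterion = -(norm.ppf(h) + norm.ppf(fa)) / 2.0
    return d_prime, criterion

# e.g. 42 hits, 8 misses, 10 false alarms, 40 correct rejections
print(dprime_and_criterion(42, 8, 10, 40))
```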
Impact of auditory selective attention on verbal short-term memory and vocabulary development.
Majerus, Steve; Heiligenstein, Lucie; Gautherot, Nathalie; Poncelet, Martine; Van der Linden, Martial
2009-05-01
This study investigated the role of auditory selective attention capacities as a possible mediator of the well-established association between verbal short-term memory (STM) and vocabulary development. A total of 47 6- and 7-year-olds were administered verbal immediate serial recall and auditory attention tasks. Both task types probed processing of item and serial order information because recent studies have shown this distinction to be critical when exploring relations between STM and lexical development. Multiple regression and variance partitioning analyses highlighted two variables as determinants of vocabulary development: (a) a serial order processing variable shared by STM order recall and a selective attention task for sequence information and (b) an attentional variable shared by selective attention measures targeting item or sequence information. The current study highlights the need for integrative STM models, accounting for conjoined influences of attentional capacities and serial order processing capacities on STM performance and the establishment of the lexical language network.
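Variance partitioning of the kind described above asks how much vocabulary variance each predictor explains uniquely and how much is shared between them. Below is a compact approximation using ordinary least-squares R² values; the predictor names and data are placeholders, not the study's variables.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

def partition_variance(a, b, y):
    """Unique and shared R^2 for two predictors a and b."""
    r2_a, r2_b = r_squared(a[:, None], y), r_squared(b[:, None], y)
    r2_ab = r_squared(np.column_stack([a, b]), y)
    unique_a = r2_ab - r2_b
    unique_b = r2_ab - r2_a
    shared = r2_ab - unique_a - unique_b
    return {"unique_a": unique_a, "unique_b": unique_b, "shared": shared}

# toy usage with two correlated predictors of an outcome
rng = np.random.default_rng(2)
a = rng.normal(size=100)                      # e.g. serial order processing
b = 0.6 * a + rng.normal(size=100)            # e.g. attentional measure
y = 0.5 * a + 0.3 * b + rng.normal(size=100)  # e.g. vocabulary score
print(partition_variance(a, b, y))
```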
An intervention approach for children with teacher- and parent-identified attentional difficulties.
Semrud-Clikeman, M; Nielsen, K H; Clinton, A; Sylvester, L; Parle, N; Connor, R T
1999-01-01
Using a multimodal and multi-informant method for diagnosis, we selected, by teacher and parent nomination, 33 children with attention and work-completion problems who met DSM-IV criteria for attention-deficit/hyperactivity disorder (ADHD). Of these 33 children, 21 participated in the initial intervention, and 12 were placed in an ADHD control group and received the intervention after pre- and posttesting. A similarly selected group of 21 children without difficulties in attention and work completion served as a control group. Each child was assessed on pre- and posttest measures of visual and auditory attention. After an 18-week intervention period that included attention and problem-solving training, all children in the intervention and control groups were retested on the visual and auditory tasks. Children in both ADHD groups showed significantly poorer initial performance on the visual attention task. Whereas the ADHD intervention group showed performance commensurate with the nondisabled control group after training, the ADHD control group did not show significant improvement over the same period. Auditory attention was initially poorer in both ADHD groups than in the control group and improved only in the ADHD intervention group. These findings are discussed as a possible intervention for children with difficulties in strategy selection in a classroom setting.
Neural effects of cognitive control load on auditory selective attention
Sabri, Merav; Humphries, Colin; Verber, Matthew; Liebenthal, Einat; Binder, Jeffrey R.; Mangalathu, Jain; Desai, Anjali
2014-01-01
Whether and how working memory disrupts or alters auditory selective attention is unclear. We compared simultaneous event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across high and low working memory load in a dichotic-listening paradigm. Participants performed n-back tasks (1-back, 2-back) in one ear (Attend ear) while ignoring task-irrelevant speech sounds in the other ear (Ignore ear). The effects of working memory load on selective attention were observed at 130-210 msec, with higher load resulting in greater irrelevant syllable-related activation in localizer-defined regions in auditory cortex. The interaction between memory load and presence of irrelevant information revealed stronger activations primarily in frontal and parietal areas due to presence of irrelevant information in the higher memory load. Joint independent component analysis of ERP and fMRI data revealed that the ERP component in the N1 time-range is associated with activity in superior temporal gyrus and medial prefrontal cortex. These results demonstrate a dynamic relationship between working memory load and auditory selective attention, in agreement with the load model of attention and the idea of common neural resources for memory and attention. PMID:24946314
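Joint independent component analysis (jICA) of ERP and fMRI data, as used above, typically concatenates each participant's ERP features and fMRI features into a single vector and decomposes the resulting subjects-by-features matrix, yielding components with linked ERP and fMRI parts. A rough sketch with scikit-learn's FastICA; the feature sizes, scaling, and component count are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_subjects, n_erp, n_fmri = 20, 300, 500       # assumed feature sizes

erp = rng.normal(size=(n_subjects, n_erp))     # e.g. ERP waveform per subject
fmri = rng.normal(size=(n_subjects, n_fmri))   # e.g. voxelwise contrast map

# z-score each modality so neither dominates, then concatenate features
erp_z = (erp - erp.mean(0)) / erp.std(0)
fmri_z = (fmri - fmri.mean(0)) / fmri.std(0)
joint = np.hstack([erp_z, fmri_z])             # (n_subjects, n_erp + n_fmri)

ica = FastICA(n_components=5, random_state=0, max_iter=1000)
subject_loadings = ica.fit_transform(joint)    # (n_subjects, n_components)
components = ica.components_                   # (n_components, n_features)

# each joint component splits into a linked ERP part and fMRI part
erp_parts, fmri_parts = components[:, :n_erp], components[:, n_erp:]
print(subject_loadings.shape, erp_parts.shape, fmri_parts.shape)
```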
Can spectro-temporal complexity explain the autistic pattern of performance on auditory tasks?
Samson, Fabienne; Mottron, Laurent; Jemel, Boutheina; Belin, Pascal; Ciocca, Valter
2006-01-01
To test the hypothesis that the level of neural complexity explains the relative level of performance and brain activity in autistic individuals, we reviewed available behavioural, ERP and imaging findings related to the perception of increasingly complex auditory material under various processing tasks in autism. Tasks involving simple material (pure tones) and/or low-level operations (detection, labelling, chord disembedding, detection of pitch changes) show a superior level of performance and shorter ERP latencies. In contrast, tasks involving spectrally and temporally dynamic material and/or complex operations (evaluation, attention) are poorly performed by autistics, or generate inferior ERP activity or brain activation. The neural complexity required to perform auditory tasks may therefore explain the pattern of performance and activation of autistic individuals during auditory tasks.
Role of the right inferior parietal cortex in auditory selective attention: An rTMS study.
Bareham, Corinne A; Georgieva, Stanimira D; Kamke, Marc R; Lloyd, David; Bekinschtein, Tristan A; Mattingley, Jason B
2018-02-01
Selective attention is the process of directing limited capacity resources to behaviourally relevant stimuli while ignoring competing stimuli that are currently irrelevant. Studies in healthy human participants and in individuals with focal brain lesions have suggested that the right parietal cortex is crucial for resolving competition for attention. Following right-hemisphere damage, for example, patients may have difficulty reporting a brief, left-sided stimulus if it occurs with a competitor on the right, even though the same left stimulus is reported normally when it occurs alone. Such "extinction" of contralesional stimuli has been documented for all the major sense modalities, but it remains unclear whether its occurrence reflects involvement of one or more specific subregions of the temporo-parietal cortex. Here we employed repetitive transcranial magnetic stimulation (rTMS) over the right hemisphere to examine the effect of disruption of two candidate regions - the supramarginal gyrus (SMG) and the superior temporal gyrus (STG) - on auditory selective attention. Eighteen neurologically normal, right-handed participants performed an auditory task, in which they had to detect target digits presented within simultaneous dichotic streams of spoken distractor letters in the left and right channels, both before and after 20 min of 1 Hz rTMS over the SMG, STG or a somatosensory control site (S1). Across blocks, participants were asked to report on auditory streams in the left, right, or both channels, which yielded focused and divided attention conditions. Performance was unchanged for the two focused attention conditions, regardless of stimulation site, but was selectively impaired for contralateral left-sided targets in the divided attention condition following stimulation of the right SMG, but not the STG or S1. Our findings suggest a causal role for the right inferior parietal cortex in auditory selective attention. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G
2017-03-01
We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower (that is, auditory sensitivity was improved) for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.
The effects of divided attention on auditory priming.
Mulligan, Neil W; Duke, Marquinn; Cooper, Angela W
2007-09-01
Traditional theorizing stresses the importance of attentional state during encoding for later memory, based primarily on research with explicit memory. Recent research has begun to investigate the role of attention in implicit memory but has focused almost exclusively on priming in the visual modality. The present experiments examined the effect of divided attention on auditory implicit memory, using auditory perceptual identification, word-stem completion and word-fragment completion. Participants heard study words under full attention conditions or while simultaneously carrying out a distractor task (the divided attention condition). In Experiment 1, a distractor task with low response frequency failed to disrupt later auditory priming (but diminished explicit memory as assessed with auditory recognition). In Experiment 2, a distractor task with greater response frequency disrupted priming on all three of the auditory priming tasks as well as the explicit test. These results imply that although auditory priming is less reliant on attention than explicit memory, it is still greatly affected by at least some divided-attention manipulations. These results are consistent with research using visual priming tasks and have relevance for hypotheses regarding attention and auditory priming.
Attention is required for maintenance of feature binding in visual working memory
Heider, Maike; Husain, Masud
2013-01-01
Working memory and attention are intimately connected. However, understanding the relationship between the two is challenging. Currently, there is an important controversy about whether objects in working memory are maintained automatically or require resources that are also deployed for visual or auditory attention. Here we investigated the effects of loading attention resources on precision of visual working memory, specifically on correct maintenance of feature-bound objects, using a dual-task paradigm. Participants were presented with a memory array and were asked to remember either direction of motion of random dot kinematograms of different colour, or orientation of coloured bars. During the maintenance period, they performed a secondary visual or auditory task, with varying levels of load. Following a retention period, they adjusted a coloured probe to match either the motion direction or orientation of stimuli with the same colour in the memory array. This allowed us to examine the effects of an attention-demanding task performed during maintenance on precision of recall on the concurrent working memory task. Systematic increase in attention load during maintenance resulted in a significant decrease in overall working memory performance. Changes in overall performance were specifically accompanied by an increase in feature misbinding errors: erroneous reporting of nontarget motion or orientation. Thus in trials where attention resources were taxed, participants were more likely to respond with nontarget values rather than simply making random responses. Our findings suggest that resources used during attention-demanding visual or auditory tasks also contribute to maintaining feature-bound representations in visual working memory—but not necessarily other aspects of working memory. PMID:24266343
Attention is required for maintenance of feature binding in visual working memory.
Zokaei, Nahid; Heider, Maike; Husain, Masud
2014-01-01
Working memory and attention are intimately connected. However, understanding the relationship between the two is challenging. Currently, there is an important controversy about whether objects in working memory are maintained automatically or require resources that are also deployed for visual or auditory attention. Here we investigated the effects of loading attention resources on precision of visual working memory, specifically on correct maintenance of feature-bound objects, using a dual-task paradigm. Participants were presented with a memory array and were asked to remember either direction of motion of random dot kinematograms of different colour, or orientation of coloured bars. During the maintenance period, they performed a secondary visual or auditory task, with varying levels of load. Following a retention period, they adjusted a coloured probe to match either the motion direction or orientation of stimuli with the same colour in the memory array. This allowed us to examine the effects of an attention-demanding task performed during maintenance on precision of recall on the concurrent working memory task. Systematic increase in attention load during maintenance resulted in a significant decrease in overall working memory performance. Changes in overall performance were specifically accompanied by an increase in feature misbinding errors: erroneous reporting of nontarget motion or orientation. Thus in trials where attention resources were taxed, participants were more likely to respond with nontarget values rather than simply making random responses. Our findings suggest that resources used during attention-demanding visual or auditory tasks also contribute to maintaining feature-bound representations in visual working memory-but not necessarily other aspects of working memory.
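The feature misbinding errors described above are often identified by asking whether a report lies closer, in circular distance, to a nontarget's feature value than to the target's. The toy sketch below illustrates that nearest-item classification; the guess threshold and the rule itself are simplifying assumptions (formal analyses typically fit probabilistic mixture models instead).

```python
import numpy as np

def circ_dist(a, b):
    """Absolute circular distance between angles in radians (0..pi)."""
    return np.abs(np.angle(np.exp(1j * (a - b))))

def classify_responses(responses, targets, nontargets):
    """Label each trial 'target', 'misbinding', or 'other' by the nearest item.

    responses  : (n_trials,) reported angles
    targets    : (n_trials,) target angles
    nontargets : (n_trials, n_nontargets) nontarget angles on the same trial
    """
    d_target = circ_dist(responses, targets)
    d_nontarget = circ_dist(responses[:, None], nontargets).min(axis=1)
    labels = np.where(d_nontarget < d_target, "misbinding", "target")
    # very large errors to every item look more like random guesses
    labels = np.where(np.minimum(d_target, d_nontarget) > np.pi / 3, "other", labels)
    return labels

# toy usage: mostly target-bound reports with Gaussian error
rng = np.random.default_rng(4)
targets = rng.uniform(-np.pi, np.pi, 200)
nontargets = rng.uniform(-np.pi, np.pi, (200, 2))
responses = targets + rng.normal(0, 0.3, 200)
labels = classify_responses(responses, targets, nontargets)
print(dict(zip(*np.unique(labels, return_counts=True))))
```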
Soltanparast, Sanaz; Jafari, Zahra; Sameni, Seyed Jalal; Salehi, Masoud
2014-01-01
The purpose of the present study was to evaluate the psychometric properties (validity and reliability) of the Persian version of the Sustained Auditory Attention Capacity Test in children with attention deficit hyperactivity disorder. The Persian version of the Sustained Auditory Attention Capacity Test was constructed to assess sustained auditory attention using the method provided by Feniman and colleagues (2007). In this test, the child's attentional deficit is assessed by determining inattention and impulsiveness errors, the total score of the sustained auditory attention capacity test, and an attention span reduction index. To determine validity and reliability, both the Rey Auditory Verbal Learning Test and the Persian version of the Sustained Auditory Attention Capacity Test (SAACT) were administered to 46 normal children and 41 children with Attention Deficit Hyperactivity Disorder (ADHD), all right-handed, aged between 7 and 11 years, and of both genders. In determining convergent validity, a significant negative correlation was found between three parts of the Rey Auditory Verbal Learning Test (first, fifth, and immediate recall) and all indicators of the SAACT except attention span reduction. Comparing test scores between the normal and ADHD groups, discriminant validity analysis showed significant differences in all indicators of the test except attention span reduction (p < 0.001). The Persian version of the Sustained Auditory Attention Capacity Test has good validity and reliability, comparable to other established tests, and can be used to identify children with attention deficits and those suspected of having Attention Deficit Hyperactivity Disorder.
Control of Auditory Attention in Children With Specific Language Impairment
Schwartz, Richard G.
2015-01-01
Purpose Children with specific language impairment (SLI) appear to demonstrate deficits in attention and its control. Selective attention involves the cognitive control of attention directed toward a relevant stimulus and simultaneous inhibition of attention toward irrelevant stimuli. The current study examined attention control during a cross-modal word recognition task. Method Twenty participants with SLI (ages 9–12 years) and 20 age-matched peers with typical language development (TLD) listened to words through headphones and were instructed to attend to the words in 1 ear while ignoring the words in the other ear. They were simultaneously presented with pictures and asked to make a lexical decision about whether the pictures and auditory words were the same or different. Accuracy and reaction time were measured in 5 conditions, in which the stimulus in the unattended channel was manipulated. Results The groups performed with similar accuracy. Compared with their peers with TLD, children with SLI had slower reaction times overall and different within-group patterns of performance by condition. Conclusions Children with TLD showed efficient inhibitory control in conditions that required active suppression of competing stimuli. Participants with SLI had difficulty exerting control over their auditory attention in all conditions, with particular difficulty inhibiting distractors of all types. PMID:26262428
The influence of an auditory-memory attention-demanding task on postural control in blind persons.
Melzer, Itshak; Damry, Elad; Landau, Anat; Yagev, Ronit
2011-05-01
In order to evaluate the effect of an auditory-memory attention-demanding task on balance control, nine blind adults were compared to nine age- and gender-matched sighted controls. This issue is particularly relevant for the blind population, in which functional assessment of postural control has to be revealed through "real life" motor and cognitive function. The study aimed to explore whether an auditory-memory attention-demanding cognitive task would influence postural control in blind persons, compared with blindfolded sighted persons. Subjects were instructed to minimize body sway during narrow-base upright standing on a single force platform under two conditions: 1) standing still (single task); 2) as in 1) while performing an auditory-memory attention-demanding cognitive task (dual task). Subjects in both groups were required to stand blindfolded with their eyes closed. Center of Pressure displacement data were collected and analyzed using summary statistics and stabilogram-diffusion analysis. Blind and sighted subjects had similar postural sway in the eyes-closed condition. However, in the dual task compared with the single task, sighted subjects showed a significant decrease in postural sway, while blind subjects did not. The auditory-memory attention-demanding cognitive task had no interference effect on balance control in blind subjects. It seems that sighted individuals used auditory cues to compensate for the momentary loss of vision, whereas blind subjects did not. This may suggest that blind and sighted people use different sensorimotor strategies to achieve stability. Copyright © 2010 Elsevier Ltd. All rights reserved.
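Stabilogram-diffusion analysis, mentioned above, treats the centre-of-pressure (COP) trajectory as a random walk: the mean square displacement is computed as a function of time interval, and short- and long-term diffusion coefficients are read off the slopes of that curve. A bare-bones sketch follows; the sampling rate and fitting windows are illustrative assumptions.

```python
import numpy as np

def mean_square_displacement(cop, fs, max_lag_s=10.0):
    """Mean square planar COP displacement <dr^2(dt)> for lags up to max_lag_s.
    cop: array (n_samples, 2) of anterior-posterior and medio-lateral positions."""
    max_lag = int(max_lag_s * fs)
    lags = np.arange(1, max_lag)
    msd = np.array([np.mean(np.sum((cop[lag:] - cop[:-lag]) ** 2, axis=1))
                    for lag in lags])
    return lags / fs, msd

def diffusion_coefficient(dt, msd, window):
    """Slope / 4 of the MSD curve over `window` (s): planar diffusion coefficient."""
    sel = (dt >= window[0]) & (dt <= window[1])
    slope = np.polyfit(dt[sel], msd[sel], 1)[0]
    return slope / 4.0

# toy usage: 60 s of simulated COP sway sampled at 100 Hz
rng = np.random.default_rng(5)
fs = 100
cop = np.cumsum(rng.normal(0, 0.05, (60 * fs, 2)), axis=0)
dt, msd = mean_square_displacement(cop, fs)
print("short-term D:", diffusion_coefficient(dt, msd, (0.1, 1.0)))
print("long-term D:", diffusion_coefficient(dt, msd, (2.0, 10.0)))
```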
Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin
2018-04-25
Endogenous attention is typically studied by presenting instructive cues in advance of a target stimulus array. For endogenous visual attention, task performance improves as the duration of the cue-target interval increases up to 800 ms. Less is known about how endogenous auditory attention unfolds over time or the mechanisms by which an instructive cue presented in advance of an auditory array improves performance. The current experiment used five cue-target intervals (0, 250, 500, 1,000, and 2,000 ms) to compare four hypotheses for how preparatory attention develops over time in a multi-talker listening task. Young adults were cued to attend to a target talker who spoke in a mixture of three talkers. Visual cues indicated the target talker's spatial location or their gender. Participants directed attention to location and gender simultaneously ("objects") at all cue-target intervals. Participants were consistently faster and more accurate at reporting words spoken by the target talker when the cue-target interval was 2,000 ms than 0 ms. In addition, the latency of correct responses progressively shortened as the duration of the cue-target interval increased from 0 to 2,000 ms. These findings suggest that the mechanisms involved in preparatory auditory attention develop gradually over time, taking at least 2,000 ms to reach optimal configuration, yet providing cumulative improvements in speech intelligibility as the duration of the cue-target interval increases from 0 to 2,000 ms. These results demonstrate an improvement in performance for cue-target intervals longer than those that have been reported previously in the visual or auditory modalities.
Assessing attentional systems in children with Attention Deficit Hyperactivity Disorder.
Casagrande, Maria; Martella, Diana; Ruggiero, Maria Cleonice; Maccari, Lisa; Paloscia, Claudio; Rosa, Caterina; Pasini, Augusto
2012-01-01
The aim of this study was to evaluate the efficiency and interactions of attentional systems in children with Attention Deficit Hyperactivity Disorder (ADHD) by considering the effects of reinforcement and auditory warning on each component of attention. Thirty-six drug-naïve children (18 children with ADHD/18 typically developing children) performed two revised versions of the Attentional Network Test, which assess the efficiency of alerting, orienting, and executive systems. In feedback trials, children received feedback about their accuracy, whereas in the no-feedback trials, feedback was not given. In both conditions, children with ADHD performed more slowly than did typically developing children. They also showed impairments in the ability to disengage attention and in executive functioning, which improved when alertness was increased by administering the auditory warning. The performance of the attentional networks appeared to be modulated by the absence or the presence of reinforcement. We suggest that the observed executive system deficit in children with ADHD could depend on their low level of arousal rather than being an independent disorder. © The Author 2011. Published by Oxford University Press. All rights reserved.
Choi, Inyong; Wang, Le; Bharadwaj, Hari; Shinn-Cunningham, Barbara
2014-01-01
Many studies have shown that attention modulates the cortical representation of an auditory scene, emphasizing an attended source while suppressing competing sources. Yet, individual differences in the strength of this attentional modulation and their relationship with selective attention ability are poorly understood. Here, we ask whether differences in how strongly attention modulates cortical responses reflect differences in normal-hearing listeners' selective auditory attention ability. We asked listeners to attend to one of three competing melodies and identify its pitch contour while we measured cortical electroencephalographic responses. The three melodies were either from widely separated pitch ranges (“easy trials”) or from a narrow, overlapping pitch range (“hard trials”). The melodies started at slightly different times; listeners attended either the leading or the lagging melody. Because of the timing of the onsets, the leading melody drew attention exogenously. In contrast, attending to the lagging melody required listeners to direct top-down attention volitionally. We quantified how attention amplified the auditory N1 response to the attended melody and found large individual differences in this N1 amplification, even though only correctly answered trials were used to quantify the ERP gain. Importantly, listeners with the strongest amplification of the N1 response to the lagging melody in the easy trials were the best performers across the other types of trials. Our results raise the possibility that individual differences in the strength of top-down gain control reflect inherent differences in the ability to control top-down attention. PMID:24821552
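The N1 amplification quantified above is, in essence, the difference in mean ERP amplitude within an N1 time window between attended and unattended conditions. A minimal sketch of that computation; the window limits and the simulated data are assumptions.

```python
import numpy as np

def n1_amplitude(epochs, times, window=(0.09, 0.15)):
    """Mean amplitude in the N1 window (s), averaged over trials and channels.
    epochs: (n_trials, n_channels, n_samples); times: (n_samples,) in seconds."""
    sel = (times >= window[0]) & (times <= window[1])
    return epochs[..., sel].mean()

def n1_attention_gain(attended, unattended, times):
    """Attended-minus-unattended N1 amplitude (more negative = stronger N1)."""
    return n1_amplitude(attended, times) - n1_amplitude(unattended, times)

# toy usage: 100 trials, 32 channels, epochs from -0.1 to 0.5 s at 500 Hz
rng = np.random.default_rng(6)
times = np.arange(-0.1, 0.5, 1 / 500)
n1_shape = np.exp(-((times - 0.12) ** 2) / (2 * 0.02 ** 2))   # N1-like deflection
attended = rng.normal(0, 1, (100, 32, times.size)) - 2.0 * n1_shape
unattended = rng.normal(0, 1, (100, 32, times.size)) - 1.0 * n1_shape
print(n1_attention_gain(attended, unattended, times))
```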
Neural Responses to Complex Auditory Rhythms: The Role of Attending
Chapin, Heather L.; Zanto, Theodore; Jantzen, Kelly J.; Kelso, Scott J. A.; Steinberg, Fred; Large, Edward W.
2010-01-01
The aim of this study was to explore the role of attention in pulse and meter perception using complex rhythms. We used a selective attention paradigm in which participants attended to either a complex auditory rhythm or a visually presented word list. Performance on a reproduction task was used to gauge whether participants were attending to the appropriate stimulus. We hypothesized that attention to complex rhythms – which contain no energy at the pulse frequency – would lead to activations in motor areas involved in pulse perception. Moreover, because multiple repetitions of a complex rhythm are needed to perceive a pulse, activations in pulse-related areas would be seen only after sufficient time had elapsed for pulse perception to develop. Selective attention was also expected to modulate activity in sensory areas specific to the modality. We found that selective attention to rhythms led to increased BOLD responses in basal ganglia, and basal ganglia activity was observed only after the rhythms had cycled enough times for a stable pulse percept to develop. These observations suggest that attention is needed to recruit motor activations associated with the perception of pulse in complex rhythms. Moreover, attention to the auditory stimulus enhanced activity in an attentional sensory network including primary auditory cortex, insula, anterior cingulate, and prefrontal cortex, and suppressed activity in sensory areas associated with attending to the visual stimulus. PMID:21833279
Irsik, Vanessa C; Vanden Bosch der Nederlanden, Christina M; Snyder, Joel S
2016-11-01
Attention and other processing constraints limit the perception of objects in complex scenes, a problem that has been studied extensively in the visual modality. We used a change deafness paradigm to examine how attention to particular objects helps and hurts the ability to notice changes within complex auditory scenes. In a counterbalanced design, we examined how cueing attention to particular objects affected performance in an auditory change-detection task through the use of valid cues, invalid cues, and trials without cues (Experiment 1). We further examined how successful encoding predicted change-detection performance using an object-encoding task, and we addressed whether performing the object-encoding task along with the change-detection task affected performance overall (Experiment 2). Participants made more errors on invalid than on valid and uncued trials, but this effect was reduced in Experiment 2 compared to Experiment 1. When the object-encoding task was present, listeners who completed the uncued condition first had less overall error than those who completed the cued condition first. All participants showed less change deafness when they successfully encoded change-relevant compared to change-irrelevant objects during valid and uncued trials. However, only participants who completed the uncued condition first also showed this effect on invalid-cue trials, suggesting a broader scope of attention. These findings provide converging evidence that attention to change-relevant objects is crucial for successful detection of acoustic changes and that encouraging broad attention to multiple objects is the best way to reduce change deafness. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Harris, Jill; Kamke, Marc R
2014-11-01
Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event-related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normal-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks, attention was directed to either the visual or the auditory modality. For the visual stimuli, attention boosted the early N1 potential, and this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in these later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Entrainment to an auditory signal: Is attention involved?
Kunert, Richard; Jongman, Suzanne R
2017-01-01
Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain is unclear, however. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of rhythmic salience. In support, 2 experiments reported here show reduced response times to visual letter strings shown at auditory rhythm peaks, compared with rhythm troughs. However, we argue that an account invoking the entrainment of general attention should further predict that rhythm entrainment also influences memory for visual stimuli. In 2 pseudoword memory experiments we find evidence against this prediction. Whether a pseudoword is shown during an auditory rhythm peak or not is irrelevant for its later recognition memory in silence. Other attention manipulations, dividing attention and focusing attention, did result in a memory effect. This raises doubts about the suggested attentional nature of rhythm entrainment. We interpret our findings as support for auditory rhythm perception being based on auditory-motor entrainment, not general attention entrainment. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Hartmeyer, Steffen; Grzeschik, Ramona; Wolbers, Thomas; Wiener, Jan M.
2017-01-01
Route learning is a common navigation task affected by cognitive aging. Here we present a novel experimental paradigm to investigate whether age-related declines in executive control of attention contribute to route learning deficits. A young and an older participant group were repeatedly presented with a route through a virtual maze comprising 12 decision points (DPs) and non-decision points (non-DPs). To investigate attentional engagement with the route learning task, participants had to respond to auditory probes at both DPs and non-DPs. Route knowledge was assessed by showing participants screenshots of landmarks from DPs and non-DPs and asking them to indicate the movement direction required to continue the route. Results demonstrate better performance for DPs than for non-DPs and slower responses to auditory probes at DPs compared with non-DPs. As expected, we found slower route learning and slower responses to the auditory probes in the older participant group. Interestingly, differences in response times to the auditory probes between DPs and non-DPs can predict the success of route learning in both age groups and may explain slower knowledge acquisition in the older participant group. PMID:28775689
Eramudugolla, Ranmalee; Mattingley, Jason B
2008-01-01
Patients with unilateral spatial neglect following right hemisphere damage are impaired in detecting contralesional targets in both visual and haptic search tasks, and often show a graded improvement in detection performance for more ipsilesional spatial locations. In audition, multiple simultaneous sounds are most effectively perceived if they are distributed along the frequency dimension. Thus, attention to spectro-temporal features alone can allow detection of a target sound amongst multiple simultaneous distracter sounds, regardless of whether these sounds are spatially separated. Spatial bias in attention associated with neglect should not affect auditory search based on spectro-temporal features of a sound target. We report that a right brain damaged patient with neglect demonstrated a significant gradient favouring the ipsilesional side on a visual search task as well as an auditory search task in which the target was a frequency modulated tone amongst steady distractor tones. No such asymmetry was apparent in the auditory search performance of a control patient with a right hemisphere lesion but no neglect. The results suggest that the spatial bias in attention exhibited by neglect patients affects stimulus processing even when spatial information is irrelevant to the task.
Humes, Larry E; Lee, Jae Hee; Coughlin, Maureen P
2006-11-01
In this study, two experiments were conducted on auditory selective and divided attention in which the listening task involved the identification of words in sentences spoken by one talker while a second talker produced a very similar competing sentence. Ten young normal-hearing (YNH) and 13 elderly hearing-impaired (EHI) listeners participated in each experiment. The type of attention cue used was the main difference between experiments. Across both experiments, several consistent trends were observed. First, in eight of the nine divided-attention tasks across both experiments, the EHI subjects performed significantly worse than the YNH subjects. By comparison, significant differences in performance between age groups were only observed on three of the nine selective-attention tasks. Finally, there were consistent individual differences in performance across both experiments. Correlational analyses performed on the data from the 13 older adults suggested that the individual differences in performance were associated with individual differences in memory (digit span). Among the elderly, differences in age or differences in hearing loss did not contribute to the individual differences observed in either experiment.
Auditory orienting of attention: Effects of cues and verbal workload with children and adults.
Phélip, Marion; Donnot, Julien; Vauclair, Jacques
2016-01-01
The use of tone cues improves the auditory orienting of attention in children as in adults. Verbal cues, by contrast, do not seem to orient attention as efficiently before the age of 9 years. However, several studies have reported inconsistent effects of orienting attention on ear asymmetries. Multiple factors have been put forward, such as the role of verbal workload: the semantic nature of the dichotic pairs and their processing load may explain orienting-of-attention performance. Thus, by controlling for the role of verbal workload, the present experiment aimed to evaluate the development of capacities for the auditory orienting of attention. Right-handed 6- to 12-year-olds and adults completed either a tone-cue or a verbal-cue dichotic listening task requiring the identification of familiar words or nonsense words. A factorial-design analysis of variance showed a significant right-ear advantage for all participants and for all types of stimuli. A major developmental effect was observed in which verbal cues played an important role: they allowed the 6- to 8-year-olds to improve their identification performance in the left ear. These effects were taken as evidence of the implication of top-down processes in cognitive flexibility across development.
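The right-ear advantage reported above is commonly summarised with a laterality index computed from correct identifications in each ear. One common formulation is sketched below; several variants exist, so the exact formula is an assumption.

```python
def laterality_index(right_correct, left_correct):
    """Classic dichotic-listening laterality index, in percent.
    Positive values indicate a right-ear advantage."""
    total = right_correct + left_correct
    if total == 0:
        return 0.0
    return 100.0 * (right_correct - left_correct) / total

# e.g. a child identifying 34 right-ear and 22 left-ear items correctly
print(laterality_index(34, 22))   # ~21.4, i.e. a right-ear advantage
```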
Executive Function, Visual Attention and the Cocktail Party Problem in Musicians and Non-Musicians.
Clayton, Kameron K; Swaminathan, Jayaganesh; Yazdanbakhsh, Arash; Zuk, Jennifer; Patel, Aniruddh D; Kidd, Gerald
2016-01-01
The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, "cocktail-party" like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the "cocktail party problem".
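The stepwise regression reported above amounts to asking whether musicianship and MOT performance jointly predict spatial-hearing performance in a linear model. A simplified two-predictor sketch with statsmodels follows; the variable names and data are placeholders, and a full stepwise procedure would add or drop predictors by a criterion such as p-value or AIC.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 24
musician = np.repeat([0, 1], n // 2)              # group indicator (placeholder)
mot_score = rng.normal(3.0, 0.8, n)               # MOT tracking capacity (placeholder)
spatial_score = 2.0 * musician + 1.5 * mot_score + rng.normal(0, 1, n)

# ordinary least squares with both predictors entered together
X = sm.add_constant(np.column_stack([musician, mot_score]))
model = sm.OLS(spatial_score, X).fit()
print(model.summary())          # coefficients, t-tests, R^2 for the joint model
```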
Executive Function, Visual Attention and the Cocktail Party Problem in Musicians and Non-Musicians
Clayton, Kameron K.; Swaminathan, Jayaganesh; Yazdanbakhsh, Arash; Zuk, Jennifer; Patel, Aniruddh D.; Kidd, Gerald
2016-01-01
The goal of this study was to investigate how cognitive factors influence performance in a multi-talker, “cocktail-party” like environment in musicians and non-musicians. This was achieved by relating performance in a spatial hearing task to cognitive processing abilities assessed using measures of executive function (EF) and visual attention in musicians and non-musicians. For the spatial hearing task, a speech target was presented simultaneously with two intelligible speech maskers that were either colocated with the target (0° azimuth) or were symmetrically separated from the target in azimuth (at ±15°). EF assessment included measures of cognitive flexibility, inhibition control and auditory working memory. Selective attention was assessed in the visual domain using a multiple object tracking task (MOT). For the MOT task, the observers were required to track target dots (n = 1,2,3,4,5) in the presence of interfering distractor dots. Musicians performed significantly better than non-musicians in the spatial hearing task. For the EF measures, musicians showed better performance on measures of auditory working memory compared to non-musicians. Furthermore, across all individuals, a significant correlation was observed between performance on the spatial hearing task and measures of auditory working memory. This result suggests that individual differences in performance in a cocktail party-like environment may depend in part on cognitive factors such as auditory working memory. Performance in the MOT task did not differ between groups. However, across all individuals, a significant correlation was found between performance in the MOT and spatial hearing tasks. A stepwise multiple regression analysis revealed that musicianship and performance on the MOT task significantly predicted performance on the spatial hearing task. Overall, these findings confirm the relationship between musicianship and cognitive factors including domain-general selective attention and working memory in solving the “cocktail party problem”. PMID:27384330
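As an illustration of the kind of analysis reported above (a stepwise regression with musicianship and MOT performance predicting spatial hearing), the sketch below fits an ordinary least-squares model on simulated data; the column names and values are hypothetical, and this is not the authors' analysis script.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 40
df = pd.DataFrame({
    "musician": rng.integers(0, 2, n),          # 1 = musician, 0 = non-musician
    "mot_score": rng.normal(0.75, 0.10, n),     # multiple object tracking accuracy
})
# Simulated spatial-hearing outcome influenced by both predictors
df["spatial_hearing"] = 2.0 * df["musician"] + 8.0 * df["mot_score"] + rng.normal(0, 1.0, n)

# Both predictors entered together; a forward-stepwise procedure would add
# them one at a time based on incremental improvement in model fit.
model = smf.ols("spatial_hearing ~ musician + mot_score", data=df).fit()
print(model.summary())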
Stavrinos, Georgios; Iliadou, Vassiliki-Maria; Edwards, Lindsey; Sirimanna, Tony; Bamiou, Doris-Eva
2018-01-01
Measures of attention have been found to correlate with specific auditory processing tests in samples of children suspected of Auditory Processing Disorder (APD), but these relationships have not been adequately investigated. Despite evidence linking auditory attention and deficits/symptoms of APD, measures of attention are not routinely used in APD diagnostic protocols. The aim of the study was to examine the relationship between auditory and visual attention tests and auditory processing tests in children with APD and to assess whether a proposed diagnostic protocol for APD, including measures of attention, could provide useful information for APD management. A pilot study including 27 children, aged 7–11 years, referred for APD assessment was conducted. The validated test of everyday attention for children, with visual and auditory attention tasks, the listening in spatialized noise sentences test, the children's communication checklist questionnaire and tests from a standard APD diagnostic test battery were administered. Pearson's partial correlation analysis was used to examine the relationships between these tests, and Cochran's Q test was used to compare the proportions of children diagnosed under each proposed battery. Divided auditory and divided auditory-visual attention strongly correlated with the dichotic digits test, r = 0.68, p < 0.05, and r = 0.76, p = 0.01, respectively, in a sample of 20 children with an APD diagnosis. The standard APD battery identified a larger proportion of participants as having APD than the attention battery identified as having Attention Deficits (ADs). The proposed APD battery excluding AD cases did not yield a significantly different diagnosis proportion from the standard APD battery. Finally, the newly proposed diagnostic battery, identifying an inattentive subtype of APD, identified five children who would otherwise have been considered as not having ADs. The findings show that a subgroup of children with APD demonstrates underlying sustained and divided attention deficits. Attention deficits in children with APD appear to be centred around the auditory modality, but further examination of types of attention in both modalities is required. Revising diagnostic criteria to incorporate attention tests and the inattentive type of APD in the test battery provides additional useful data to clinicians to ensure careful interpretation of APD assessments. PMID:29441033
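For readers who want to see how the two analyses named above work mechanically, the sketch below computes a partial correlation (correlating residuals after regressing out a covariate) and Cochran's Q (comparing paired binary diagnostic outcomes). The data are hypothetical; this is not the study's code.

import numpy as np
from scipy import stats

def partial_corr(x, y, z):
    # Pearson correlation between x and y after removing the linear effect of z
    x, y, z = (np.asarray(v, dtype=float) for v in (x, y, z))
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return stats.pearsonr(rx, ry)

def cochrans_q(binary):
    # Cochran's Q for an (n_subjects, k_batteries) array of 0/1 diagnostic outcomes
    binary = np.asarray(binary, dtype=float)
    n, k = binary.shape
    col, row, total = binary.sum(axis=0), binary.sum(axis=1), binary.sum()
    q = (k - 1) * (k * np.sum(col ** 2) - total ** 2) / (k * total - np.sum(row ** 2))
    return q, stats.chi2.sf(q, df=k - 1)

# Hypothetical example: diagnosis (1) or not (0) under three batteries for six children
dx = np.array([[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0], [1, 1, 0], [1, 0, 0]])
print(cochrans_q(dx))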
Selective impairment of auditory selective attention under concurrent cognitive load.
Dittrich, Kerstin; Stahl, Christoph
2012-06-01
Load theory predicts that concurrent cognitive load impairs selective attention. For visual stimuli, it has been shown that this impairment can be selective: Distraction was specifically increased when the stimulus material used in the cognitive load task matches that of the selective attention task. Here, we report four experiments that demonstrate such selective load effects for auditory selective attention. The effect of two different cognitive load tasks on two different auditory Stroop tasks was examined, and selective load effects were observed: Interference in a nonverbal-auditory Stroop task was increased under concurrent nonverbal-auditory cognitive load (compared with a no-load condition), but not under concurrent verbal-auditory cognitive load. By contrast, interference in a verbal-auditory Stroop task was increased under concurrent verbal-auditory cognitive load but not under nonverbal-auditory cognitive load. This double-dissociation pattern suggests the existence of different and separable verbal and nonverbal processing resources in the auditory domain.
The ability for cocaine and cocaine-associated cues to compete for attention
Pitchers, Kyle K.; Wood, Taylor R.; Skrzynski, Cari J.; Robinson, Terry E.; Sarter, Martin
2017-01-01
In humans, reward cues, including drug cues in addicts, are especially effective in biasing attention towards them, so much so they can disrupt ongoing task performance. It is not known, however, whether this happens in rats. To address this question, we developed a behavioral paradigm to assess the capacity of an auditory drug (cocaine) cue to evoke cocaine-seeking behavior, thus distracting thirsty rats from performing a well-learned sustained attention task (SAT) to obtain a water reward. First, it was determined that an auditory cocaine cue (tone-CS) reinstated drug-seeking equally in sign-trackers (STs) and goal-trackers (GTs), which otherwise vary in the propensity to attribute incentive salience to a localizable drug cue. Next, we tested the ability of an auditory cocaine cue to disrupt performance on the SAT in STs and GTs. Rats were trained to self-administer cocaine intravenously using an Intermittent Access self-administration procedure known to produce a progressive increase in motivation for cocaine, escalation of intake, and strong discriminative stimulus control over drug-seeking behavior. When presented alone, the auditory discriminative stimulus elicited cocaine-seeking behavior while rats were performing the SAT, but it was not sufficiently disruptive to impair SAT performance. In contrast, if cocaine was available in the presence of the cue, or when administered non-contingently, SAT performance was severely disrupted. We suggest that performance on a relatively automatic, stimulus-driven task, such as the basic version of the SAT used here, may be difficult to disrupt with a drug cue alone. A task that requires more top-down cognitive control may be needed. PMID:27890441
A Behavioral Study of Distraction by Vibrotactile Novelty
ERIC Educational Resources Information Center
Parmentier, Fabrice B. R.; Ljungberg, Jessica K.; Elsley, Jane V.; Lindkvist, Markus
2011-01-01
Past research has demonstrated that the occurrence of unexpected task-irrelevant changes in the auditory or visual sensory channels captured attention in an obligatory fashion, hindering behavioral performance in ongoing auditory or visual categorization tasks and generating orientation and re-orientation electrophysiological responses. We report…
Wei, Qing; Li, Ling; Zhu, Xiao-Dong; Qin, Ling; Mo, Yan-Lin; Liang, Zheng-You; Deng, Jia-Li; Tao, Su-Ping
2017-01-01
This study evaluated the short-term effects of intensity-modulated radiotherapy (IMRT) and cisplatin concurrent chemo-radiotherapy (CCRT) on attention in patients with nasopharyngeal cancer (NPC). Timely detection and early prevention of cognitive decline are important in cancer patients, because long-term cognitive effects may be permanent and irreversible. Thirty-eight NPC patients treated with IMRT (17/38) or CCRT (21/38) and 38 healthy controls were recruited for the study. Neuropsychological tests were administered to each patient before treatment initiation and within a week after treatment completion. Changes in attention performance over time were evaluated using difference values (D-values). Decreased attention was already observable in patients with NPC prior to treatment. Baseline quotient scores for auditory attention, auditory and visual vigilance, and auditory speed were lower in patients treated with CCRT than in healthy controls (P=0.037, P=0.001, P=0.007, P=0.032, respectively). Auditory stamina D-values were higher in patients treated with IMRT alone (P=0.042), while full-scale response control quotient D-values were lower in patients treated with CCRT (P=0.030) than in healthy controls. Gender, depression, education, and sleep quality were each related to decreased attention and response control. Our results showed that IMRT had no negative acute effects on attention in NPC patients, while CCRT decreased response control. PMID:28947979
ERIC Educational Resources Information Center
Klemen, Jane; Buchel, Christian; Buhler, Mira; Menz, Mareike M.; Rose, Michael
2010-01-01
Attentional interference between tasks performed in parallel is known to have strong and often undesired effects. As yet, however, the mechanisms by which interference operates remain elusive. A better knowledge of these processes may facilitate our understanding of the effects of attention on human performance and the debilitating consequences…
Directed Attention Mediated by Real-Time fMRI Neurofeedback
Sherwood, M. S.
2017-05-05
Self-regulation of auditory cortex hyperactivity via directed attention to the primary auditory cortex (A1), mediated by real-time functional magnetic resonance imaging neurofeedback; presented at the 2017 Radiological Society of North America Conference.
Moisala, Mona; Salmela, Viljami; Salo, Emma; Carlson, Synnöve; Vuontela, Virve; Salonen, Oili; Alho, Kimmo
2015-01-01
Using functional magnetic resonance imaging (fMRI), we measured brain activity of human participants while they performed a sentence congruence judgment task in either the visual or auditory modality separately, or in both modalities simultaneously. Significant performance decrements were observed when attention was divided between the two modalities compared with when one modality was selectively attended. Compared with selective attention (i.e., single tasking), divided attention (i.e., dual-tasking) did not recruit additional cortical regions, but resulted in increased activity in medial and lateral frontal regions which were also activated by the component tasks when performed separately. Areas involved in semantic language processing were revealed predominantly in the left lateral prefrontal cortex by contrasting incongruent with congruent sentences. These areas also showed significant activity increases during divided attention in relation to selective attention. In the sensory cortices, no crossmodal inhibition was observed during divided attention when compared with selective attention to one modality. Our results suggest that the observed performance decrements during dual-tasking are due to interference of the two tasks because they utilize the same part of the cortex. Moreover, semantic dual-tasking did not appear to recruit additional brain areas in comparison with single tasking, and no crossmodal inhibition was observed during intermodal divided attention. PMID:25745395
Getzmann, Stephan; Lewald, Jörg; Falkenstein, Michael
2014-01-01
Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called “cocktail-party” problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments. PMID:25540608
ERIC Educational Resources Information Center
Semanchik, Karen
This teacher's guide presents a course for training hearing-impaired students to listen to, create, and perform music. It emphasizes development of individual skills and group participation, encouraging students to contribute a wide variety of auditory and musical abilities and experiences while developing auditory acuity and attention. A variety…
Narrative abilities, memory and attention in children with a specific language impairment.
Duinmeijer, Iris; de Jong, Jan; Scheper, Annette
2012-01-01
While narrative tasks have proven to be valid measures for detecting language disorders, measuring communicative skills and predicting future academic performance, research into the comparability of different narrative tasks has shown that outcomes are dependent on the type of task used. Although many of the studies detecting task differences touch upon the fact that tasks place differential demands on cognitive abilities like auditory attention and memory, few studies have related specific narrative tasks to these cognitive abilities. Examining this relation is especially warranted for children with specific language impairment (SLI), who are characterized by language problems, but often have problems in other cognitive domains as well. In the current research, a comparison was made between a story retelling task (The Bus Story) and a story generation task (The Frog Story) in a group of children with SLI (n= 34) and a typically developing group (n= 38) from the same age range. In addition to the two narrative tasks, sustained auditory attention (TEA-Ch) and verbal working memory (WISC digit span and the Dutch version of the CVLT-C word list recall) were measured. Correlations were computed between the narrative, the memory and the attention scores. A group comparison showed that the children with SLI scored significantly worse than the typically developing children on several narrative measures as well as on sustained auditory attention and verbal working memory. A within-subjects comparison of the scores on the two narrative tasks showed a contrast between the tasks on several narrative measures. Furthermore, correlational analyses showed that, on the level of plot structure, the story generation task correlated with sustained auditory attention, while the story retelling task correlated with word list recall. Mean length of utterance (MLU) on the other hand correlated with digit span but not with sustained auditory attention. While children with SLI have problems with narratives in general, their performance is also dependent on the specific elicitation task used for research or diagnostics. Various narrative tasks generate different scores and are differentially correlated to cognitive skills like attention and memory, making the selection of a given task crucial in the clinical setting. © 2012 Royal College of Speech and Language Therapists.
Auditory cortical function during verbal episodic memory encoding in Alzheimer's disease.
Dhanjal, Novraj S; Warren, Jane E; Patel, Maneesh C; Wise, Richard J S
2013-02-01
Episodic memory encoding of a verbal message depends upon initial registration, which requires sustained auditory attention followed by deep semantic processing of the message. Motivated by previous data demonstrating modulation of auditory cortical activity during sustained attention to auditory stimuli, we investigated the response of the human auditory cortex during encoding of sentences to episodic memory. Subsequently, we investigated this response in patients with mild cognitive impairment (MCI) and probable Alzheimer's disease (pAD). Using functional magnetic resonance imaging, 31 healthy participants were studied. The response in 18 MCI and 18 pAD patients was then determined, and compared to 18 matched healthy controls. Subjects heard factual sentences, and subsequent retrieval performance indicated successful registration and episodic encoding. The healthy subjects demonstrated that suppression of auditory cortical responses was related to greater success in encoding heard sentences; and that this was also associated with greater activity in the semantic system. In contrast, there was reduced auditory cortical suppression in patients with MCI, and absence of suppression in pAD. Administration of a central cholinesterase inhibitor (ChI) partially restored the suppression in patients with pAD, and this was associated with an improvement in verbal memory. Verbal episodic memory impairment in AD is associated with altered auditory cortical function, reversible with a ChI. Although these results may indicate the direct influence of pathology in auditory cortex, they are also likely to indicate a partially reversible impairment of feedback from neocortical systems responsible for sustained attention and semantic processing. Copyright © 2012 American Neurological Association.
ERIC Educational Resources Information Center
Zelanti, Pierre S.; Droit-Volet, Sylvie
2012-01-01
Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results…
Kauramäki, Jaakko; Jääskeläinen, Iiro P.; Hänninen, Jarno L.; Auranen, Toni; Nummenmaa, Aapo; Lampinen, Jouko; Sams, Mikko
2012-01-01
Selectively attending to task-relevant sounds whilst ignoring background noise is one of the most amazing feats performed by the human brain. Here, we studied the underlying neural mechanisms by recording magnetoencephalographic (MEG) responses of 14 healthy human subjects while they performed a near-threshold auditory discrimination task vs. a visual control task of similar difficulty. The auditory stimuli consisted of notch-filtered continuous noise masker sounds and of 1020-Hz target tones occasionally replacing 1000-Hz standard tones of 300-ms duration that were embedded at the center of the notches, the widths of which were parametrically varied. As a control for masker effects, tone-evoked responses were additionally recorded without the masker sound. Selective attention to tones significantly increased the amplitude of the onset M100 response at 100 ms to the standard tones during the presence of the masker sounds, especially with notches narrower than the critical band. Further, attention modulated the sustained response most clearly in the 300–400 ms time range from sound onset, with narrower notches than in the case of the M100, thus selectively reducing the masker-induced suppression of the tone-evoked response. Our results show evidence of a multiple-stage filtering mechanism of sensory input in the human auditory cortex: 1) one at early (100 ms) latencies bilaterally in posterior parts of the secondary auditory areas, and 2) adaptive filtering of attended sounds from the task-irrelevant background masker at a longer latency (300 ms) in more medial auditory cortical regions, predominantly in the left hemisphere, enhancing processing of near-threshold sounds. PMID:23071654
Karns, Christina M.; Isbell, Elif; Giuliano, Ryan J.; Neville, Helen J.
2015-01-01
Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) in human children across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults using a naturalistic dichotic listening paradigm, characterizing the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. PMID:26002721
Prefrontal Hemodynamics of Physical Activity and Environmental Complexity During Cognitive Work.
McKendrick, Ryan; Mehta, Ranjana; Ayaz, Hasan; Scheldrup, Melissa; Parasuraman, Raja
2017-02-01
The aim of this study was to assess performance and cognitive states during cognitive work in the presence of physical work and in natural settings. Authors of previous studies have examined the interaction between cognitive and physical work, finding performance decrements in working memory. Neuroimaging has revealed increases and decreases in prefrontal oxygenated hemoglobin during the interaction of cognitive and physical work. The effect of environment on cognitive-physical dual tasking has not been previously considered. Thirteen participants were monitored with wireless functional near-infrared spectroscopy (fNIRS) as they performed an auditory 1-back task while sitting, walking indoors, and walking outdoors. Relative to sitting and walking indoors, auditory working memory performance declined when participants were walking outdoors. Sitting during the auditory 1-back task increased oxygenated hemoglobin and decreased deoxygenated hemoglobin in bilateral prefrontal cortex. Walking reduced the total hemoglobin available to bilateral prefrontal cortex. An increase in environmental complexity reduced oxygenated hemoglobin and increased deoxygenated hemoglobin in bilateral prefrontal cortex. Wireless fNIRS is capable of monitoring cognitive states in naturalistic environments. Selective attention and physical work compete with executive processing. During executive processing, loading of selective attention and physical work results in deactivation of bilateral prefrontal cortex and degraded working memory performance, indicating that physical work and concomitant selective attention may supersede executive processing in the distribution of mental resources. This research informs decision-making procedures in work where working memory, physical activity, and attention interact. Where working memory is paramount, precautions should be taken to eliminate competition from physical work and selective attention.
Developmental Trends in Recall of Central and Incidental Auditory
ERIC Educational Resources Information Center
Hallahan, Daniel P.; And Others
1974-01-01
An auditory recall task involving central and incidental stimuli, designed to correspond to processes used in selective attention, was presented to elementary school students. Older children and girls performed better than younger children and boys, especially when animals were the relevant and food the irrelevant stimuli. (DP)
Effects of visual working memory on brain information processing of irrelevant auditory stimuli.
Qu, Jiagui; Rizak, Joshua D; Zhao, Lun; Li, Minghong; Ma, Yuanye
2014-01-01
Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has rarely been investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate selective attention to the auditory stimulus during WM processing and at rest. N100-AEP amplitudes were significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while participants performed a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM and rest states under a low WM load task in any recorded brain region. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM and rest states while participants performed either the high or low WM load task. These findings suggested that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.
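Condition-wise ERP amplitude comparisons of this kind are often run as paired tests across participants. A minimal, hypothetical sketch on simulated amplitudes, not the study's data or analysis:

import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_subjects = 20
amp_rest = rng.normal(-4.0, 1.0, n_subjects)           # mean N100 amplitude at rest (microvolts)
amp_wm = amp_rest + rng.normal(0.8, 0.5, n_subjects)   # simulated attenuation under high WM load

t_stat, p_value = stats.ttest_rel(amp_wm, amp_rest)    # paired comparison across subjects
print(t_stat, p_value)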
L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.
Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour
2016-10-01
The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. In so doing, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words in the full attention condition. Attention manipulation reduced priming magnitude in both experiments in L2. Moreover, L2 word retrieval increased reaction times and reduced accuracy on the simultaneous secondary task to protect its own accuracy and speed.
Short-term plasticity in auditory cognition.
Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko
2007-12-01
Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.
A New Test of Attention in Listening (TAIL) Predicts Auditory Performance
Zhang, Yu-Xuan; Barry, Johanna G.; Moore, David R.; Amitay, Sygal
2012-01-01
Attention modulates auditory perception, but there are currently no simple tests that specifically quantify this modulation. To fill the gap, we developed a new, easy-to-use test of attention in listening (TAIL) based on reaction time. On each trial, two clearly audible tones were presented sequentially, either at the same or different ears. The frequency of the tones was also either the same or different (by at least two critical bands). When the task required same/different frequency judgments, presentation at the same ear significantly speeded responses and reduced errors. A same/different ear (location) judgment was likewise facilitated by keeping tone frequency constant. Perception was thus influenced by involuntary orienting of attention along the task-irrelevant dimension. When information in the two stimulus dimensions was congruent (same-frequency same-ear, or different-frequency different-ear), responses were faster and more accurate than when it was incongruent (same-frequency different-ear, or different-frequency same-ear), suggesting the involvement of executive control to resolve conflicts. In total, the TAIL yielded five independent outcome measures: (1) baseline reaction time, indicating information processing efficiency, (2) involuntary orienting of attention to frequency and (3) location, and (4) conflict resolution for frequency and (5) location. Processing efficiency and conflict resolution accounted for up to 45% of individual variances in the low- and high-threshold variants of three psychoacoustic tasks assessing temporal and spectral processing. Involuntary orientation of attention to the irrelevant dimension did not correlate with perceptual performance on these tasks. Given that TAIL measures are unlikely to be limited by perceptual sensitivity, we suggest that the correlations reflect modulation of perceptual performance by attention. The TAIL thus has the power to identify and separate contributions of different components of attention to auditory perception. PMID:23300934
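As a rough illustration of how reaction-time outcome measures of this general kind can be derived from trial-level data, the sketch below computes a baseline RT, an involuntary-orienting cost, and a conflict-resolution cost for a frequency-judgment task; the column names and data are hypothetical, and this is not the published TAIL scoring code.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 200
trials = pd.DataFrame({
    "rt_ms": rng.normal(550, 80, n),
    "same_ear": rng.integers(0, 2, n).astype(bool),    # task-irrelevant (location) dimension
    "congruent": rng.integers(0, 2, n).astype(bool),   # same-freq/same-ear or diff-freq/diff-ear
})

baseline_rt = trials["rt_ms"].mean()
# Involuntary orienting: slowing when the irrelevant (ear) dimension changes
orienting_cost = (trials.loc[~trials["same_ear"], "rt_ms"].mean()
                  - trials.loc[trials["same_ear"], "rt_ms"].mean())
# Conflict resolution: slowing on incongruent relative to congruent pairings
conflict_cost = (trials.loc[~trials["congruent"], "rt_ms"].mean()
                 - trials.loc[trials["congruent"], "rt_ms"].mean())
print(baseline_rt, orienting_cost, conflict_cost)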
Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun
2015-08-01
Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Effects of total sleep deprivation on divided attention performance.
Chua, Eric Chern-Pin; Fang, Eric; Gooley, Joshua J
2017-01-01
Dividing attention across two tasks performed simultaneously usually results in impaired performance on one or both tasks. Most studies have found no difference in the dual-task cost of dividing attention in rested and sleep-deprived states. We hypothesized that, for a divided attention task that is highly cognitively-demanding, performance would show greater impairment during exposure to sleep deprivation. A group of 30 healthy males aged 21-30 years was exposed to 40 h of continuous wakefulness in a laboratory setting. Every 2 h, subjects completed a divided attention task comprising 3 blocks in which an auditory Go/No-Go task was 1) performed alone (single task); 2) performed simultaneously with a visual Go/No-Go task (dual task); and 3) performed simultaneously with both a visual Go/No-Go task and a visually-guided motor tracking task (triple task). Performance on all tasks showed substantial deterioration during exposure to sleep deprivation. A significant interaction was observed between task load and time since wake on auditory Go/No-Go task performance, with greater impairment in response times and accuracy during extended wakefulness. Our results suggest that the ability to divide attention between multiple tasks is impaired during exposure to sleep deprivation. These findings have potential implications for occupations that require multi-tasking combined with long work hours and exposure to sleep loss.
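The load-by-time interaction reported above is the kind of effect a within-subjects ANOVA can test. A minimal sketch using statsmodels' AnovaRM on simulated long-format data; subject counts, factor levels and effect sizes are hypothetical, not the study's.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
rows = []
for subject in range(30):
    for load_level, load in enumerate(["single", "dual", "triple"]):
        for hours_awake in range(0, 40, 2):
            # Simulated accuracy that declines with time awake, more steeply under higher load
            accuracy = 0.95 - 0.002 * hours_awake * (1 + load_level) + rng.normal(0, 0.02)
            rows.append({"subject": subject, "load": load,
                         "hours_awake": hours_awake, "accuracy": accuracy})
df = pd.DataFrame(rows)

result = AnovaRM(df, depvar="accuracy", subject="subject",
                 within=["load", "hours_awake"]).fit()
print(result.anova_table)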
Tapper, Anthony; Gonzalez, Dave; Roy, Eric; Niechwiej-Szwedo, Ewa
2017-02-01
The purpose of this study was to examine executive functions in team sport athletes with and without a history of concussion. Executive functions comprise many cognitive processes, including working memory, attention and multi-tasking. Past research has shown that concussions cause difficulties in vestibular-visual and vestibular-auditory dual-tasking; however, visual-auditory tasks have rarely been examined. Twenty-nine intercollegiate varsity ice hockey athletes (age = 19.13, SD = 1.56; 15 females) performed an experimental dual-task paradigm that required simultaneously processing visual and auditory information. A brief interview, event description and self-report questionnaires were used to assign participants to each group (concussion, no-concussion). Eighteen athletes had a history of concussion and 11 had no concussion history. The two tests involved visuospatial working memory (i.e., the Corsi block test) and auditory tone discrimination. Participants completed both tasks individually, then simultaneously. Two outcome variables were measured: Corsi block memory span and auditory tone discrimination accuracy. No differences were shown when each task was performed alone; however, athletes with a history of concussion performed significantly worse on the tone discrimination task in the dual-task condition. In conclusion, long-term deficits in executive functions were associated with a prior history of concussion when cognitive resources were stressed. Evaluations of executive functions and divided attention appear to be helpful in discriminating between participants with and without a history of concussion.
A psychophysiological evaluation of the perceived urgency of auditory warning signals
NASA Technical Reports Server (NTRS)
Burt, J. L.; Bartolome, D. S.; Burdette, D. W.; Comstock, J. R. Jr
1995-01-01
One significant concern that pilots have about cockpit auditory warnings is that the signals presently used lack a sense of priority. The relationship between auditory warning sound parameters and perceived urgency is, therefore, an important topic of enquiry in aviation psychology. The present investigation examined the relationship among subjective assessments of urgency, reaction time, and brainwave activity with three auditory warning signals. Subjects performed a tracking task involving automated and manual conditions, and were presented with auditory warnings having various levels of perceived and situational urgency. Subjective assessments revealed that subjects were able to rank warnings on an urgency scale, but rankings were altered after warnings were mapped to a situational urgency scale. Reaction times differed between automated and manual tracking task conditions, and physiological data showed attentional differences in response to perceived and situational warning urgency levels. This study shows that the use of physiological measures sensitive to attention and arousal, in conjunction with behavioural and subjective measures, may lead to the design of auditory warnings that produce a sense of urgency in an operator that matches the urgency of the situation.
Effect of attentional load on audiovisual speech perception: evidence from ERPs
Alsius, Agnès; Möttönen, Riikka; Sams, Mikko E.; Soto-Faraco, Salvador; Tiippana, Kaisa
2014-01-01
Seeing articulatory movements influences perception of auditory speech. This is often reflected in a shortened latency of auditory event-related potentials (ERPs) generated in the auditory cortex. The present study addressed whether this early neural correlate of audiovisual interaction is modulated by attention. We recorded ERPs in 15 subjects while they were presented with auditory, visual, and audiovisual spoken syllables. Audiovisual stimuli consisted of incongruent auditory and visual components known to elicit a McGurk effect, i.e., a visually driven alteration in the auditory speech percept. In a Dual task condition, participants were asked to identify spoken syllables whilst monitoring a rapid visual stream of pictures for targets, i.e., they had to divide their attention. In a Single task condition, participants identified the syllables without any other tasks, i.e., they were asked to ignore the pictures and focus their attention fully on the spoken syllables. The McGurk effect was weaker in the Dual task than in the Single task condition, indicating an effect of attentional load on audiovisual speech perception. Early auditory ERP components, N1 and P2, peaked earlier to audiovisual stimuli than to auditory stimuli when attention was fully focused on syllables, indicating neurophysiological audiovisual interaction. This latency decrement was reduced when attention was loaded, suggesting that attention influences early neural processing of audiovisual speech. We conclude that reduced attention weakens the interaction between vision and audition in speech. PMID:25076922
Wester, Anne E; Verster, Joris C; Volkerts, Edmund R; Böcker, Koen B E; Kenemans, J Leon
2010-09-01
Driving is a complex task and is susceptible to inattention and distraction. Moreover, alcohol has a detrimental effect on driving performance, possibly due to alcohol-induced attention deficits. The aim of the present study was to assess the effects of alcohol on simulated driving performance and attention orienting and allocation, as assessed by event-related potentials (ERPs). Thirty-two participants completed two test runs in the Divided Attention Steering Simulator (DASS) with blood alcohol concentrations (BACs) of 0.00%, 0.02%, 0.05%, 0.08% and 0.10%. Sixteen participants performed the second DASS test run with a passive auditory oddball to assess alcohol effects on involuntary attention shifting. Sixteen other participants performed the second DASS test run with an active auditory oddball to assess alcohol effects on dual-task performance and active attention allocation. Dose-dependent impairments were found for reaction times, the number of misses and steering error, even more so in dual-task conditions, especially in the active oddball group. ERP amplitudes to novel irrelevant events were also attenuated in a dose-dependent manner. The P3b amplitude to deviant target stimuli decreased with blood alcohol concentration only in the dual-task condition. It is concluded that alcohol increases distractibility and interference from secondary task stimuli, as well as reduces attentional capacity and dual-task integrality.
Bevis, Zoe L; Semeraro, Hannah D; van Besouw, Rachel M; Rowan, Daniel; Lineton, Ben; Allsopp, Adrian J
2014-01-01
In order to preserve their operational effectiveness and ultimately their survival, military personnel must be able to detect important acoustic signals and maintain situational awareness. The possession of sufficient hearing ability to perform job-specific auditory tasks is defined as auditory fitness for duty (AFFD). Pure tone audiometry (PTA) is used to assess AFFD in the UK military; however, it is unclear whether PTA is able to accurately predict performance on job-specific auditory tasks. The aim of the current study was to gather information about auditory tasks carried out by infantry personnel on the frontline and the environment these tasks are performed in. The study consisted of 16 focus group interviews with an average of five participants per group. Eighty British army personnel were recruited from five infantry regiments. The focus group guideline included seven open-ended questions designed to elicit information about the auditory tasks performed on operational duty. Content analysis of the data resulted in two main themes: (1) the auditory tasks personnel are expected to perform and (2) situations where personnel felt their hearing ability was reduced. Auditory tasks were divided into subthemes of sound detection, speech communication and sound localization. Reasons for reduced performance included background noise, hearing protection and attention difficulties. The current study provided an important and novel insight to the complex auditory environment experienced by British infantry personnel and identified 17 auditory tasks carried out by personnel on operational duties. These auditory tasks will be used to inform the development of a functional AFFD test for infantry personnel.
Hill, N Jeremy; Moinuddin, Aisha; Häuser, Ann-Katrin; Kienzle, Stephan; Schalk, Gerwin
2012-01-01
Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two simultaneously presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80 and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one's eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.
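Auditory-streaming BCIs of this general kind typically reduce each trial to a feature vector (for example, evoked-response amplitudes time-locked to the left and right streams) and apply a simple linear classifier to infer the attended stream. The sketch below is a heavily simplified, hypothetical illustration on synthetic features, not the authors' implementation.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_features = 120, 16                  # e.g., ERP amplitudes from several channels/windows
X = rng.normal(0, 1, (n_trials, n_features))
y = rng.integers(0, 2, n_trials)                # 0 = attend left stream, 1 = attend right stream
X[y == 1, :4] += 0.8                            # inject a weak class difference for the demo

clf = LinearDiscriminantAnalysis()
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated yes/no decoding accuracy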
Preservation of crossmodal selective attention in healthy aging
Hugenschmidt, Christina E.; Peiffer, Ann M.; McCoy, Thomas P.; Hayasaka, Satoru; Laurienti, Paul J.
2010-01-01
The goal of the present study was to determine if older adults benefited from attention to a specific sensory modality in a voluntary attention task and evidenced changes in voluntary or involuntary attention when compared to younger adults. Suppressing and enhancing effects of voluntary attention were assessed using two cued forced-choice tasks, one that asked participants to localize and one that asked them to categorize visual and auditory targets. Involuntary attention was assessed using the same tasks, but with no attentional cues. The effects of attention were evaluated using traditional comparisons of means and Cox proportional hazards models. All analyses showed that older adults benefited behaviorally from selective attention in both visual and auditory conditions, including robust suppressive effects of attention. Of note, the performance of the older adults was commensurate with that of younger adults in almost all analyses, suggesting that older adults can successfully engage crossmodal attention processes. Thus, age-related increases in distractibility across sensory modalities are likely due to mechanisms other than deficits in attentional processing. PMID:19404621
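The Cox proportional hazards approach mentioned above treats each reaction time as a time-to-event outcome, so that covariates such as age group or cue condition shift the instantaneous "hazard" of responding. A minimal sketch with the lifelines package on simulated data; the column names are assumed for illustration, and this is not the study's analysis code.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 400
df = pd.DataFrame({
    "rt_ms": rng.gamma(shape=9, scale=60, size=n),  # simulated trial reaction times
    "responded": np.ones(n, dtype=int),             # 1 = a response occurred (event observed)
    "older_adult": rng.integers(0, 2, n),           # group indicator
    "cued": rng.integers(0, 2, n),                  # attentional cue present on the trial
})

cph = CoxPHFitter()
cph.fit(df, duration_col="rt_ms", event_col="responded")
cph.print_summary()  # hazard ratios above 1 indicate faster responding for that covariate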
Focused and shifting attention in children with heavy prenatal alcohol exposure.
Mattson, Sarah N; Calarco, Katherine E; Lang, Aimée R
2006-05-01
Attention deficits are a hallmark of the teratogenic effects of alcohol. However, characterization of these deficits remains inconclusive. Children with heavy prenatal alcohol exposure and nonexposed controls were evaluated using a paradigm consisting of three conditions: visual focus, auditory focus, and auditory-visual shift of attention. For the focus conditions, participants responded manually to visual or auditory targets. For the shift condition, participants alternated responses between visual targets and auditory targets. For the visual focus condition, alcohol-exposed children had lower accuracy and slower reaction time for all intertarget intervals (ITIs), while on the auditory focus condition, alcohol-exposed children were less accurate but displayed slower reaction time only on the longest ITI. Finally, for the shift condition, the alcohol-exposed group was accurate but had slowed reaction times. These results indicate that children with heavy prenatal alcohol exposure have pervasive deficits in visual focused attention and deficits in maintaining auditory attention over time. However, no deficits were noted in the ability to disengage and reengage attention when required to shift attention between visual and auditory stimuli, although reaction times to shift were slower. Copyright (c) 2006 APA, all rights reserved.
Predictors of change in life skills in schizophrenia after cognitive remediation.
Kurtz, Matthew M; Seltzer, James C; Fujimoto, Marco; Shagan, Dana S; Wexler, Bruce E
2009-02-01
Few studies have investigated predictors of response to cognitive remediation interventions in patients with schizophrenia. Predictor studies to date have selected treatment outcome measures that were either part of the remediation intervention itself or closely linked to the intervention, with few studies investigating factors that predict generalization to measures of everyday life skills as an index of treatment-related improvement. In the current study, we investigated the relationship of four measures of neurocognitive function (crystallized verbal ability, auditory sustained attention and working memory, verbal learning and memory, and problem-solving), two measures of symptoms (total positive and total negative symptoms), and the process variables of treatment intensity and duration to change on a performance-based measure of everyday life skills after a year of computer-assisted cognitive remediation offered as part of intensive outpatient rehabilitation treatment. Thirty-six patients with schizophrenia or schizoaffective disorder were studied. Results of a linear regression model revealed that auditory attention and working memory predicted a significant amount of the variance in change in performance-based measures of everyday life skills after cognitive remediation, even when variance for all other neurocognitive variables in the model was controlled. Stepwise regression revealed that auditory attention and working memory predicted change in everyday life skills across the trial even when baseline life-skill scores, symptoms and treatment process variables were controlled. These findings emphasize the importance of sustained auditory attention and working memory for benefiting from extended programs of cognitive remediation.
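The regression logic described above (does auditory attention/working memory predict change in life skills once other variables are controlled?) can be sketched as a hierarchical ordinary least squares comparison. The statsmodels-based example below uses simulated data and placeholder variable names; it illustrates the analysis style, not the authors' actual model.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 36  # sample size reported in the abstract
    df = pd.DataFrame({
        "baseline_skills": rng.normal(50, 10, n),
        "symptoms": rng.normal(30, 8, n),
        "attention_wm": rng.normal(0, 1, n),   # auditory attention / working memory composite
    })
    # Simulated outcome: change in everyday life skills over the trial.
    df["skills_change"] = 0.5 * df["attention_wm"] + rng.normal(0, 1, n)

    # Step 1: covariates only; Step 2: add the cognitive predictor of interest.
    m1 = smf.ols("skills_change ~ baseline_skills + symptoms", data=df).fit()
    m2 = smf.ols("skills_change ~ baseline_skills + symptoms + attention_wm", data=df).fit()
    print(f"R^2 gain from attention/working memory: {m2.rsquared - m1.rsquared:.3f}")
    print(m2.summary().tables[1])   # coefficient table for the full model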
Samar, Vincent J.; Berger, Lauren
2017-01-01
Individuals deaf from early age often outperform hearing individuals in the visual periphery on attention-dependent dorsal stream tasks (e.g., spatial localization or movement detection), but sometimes show central visual attention deficits, usually on ventral stream object identification tasks. It has been proposed that early deafness adaptively redirects attentional resources from central to peripheral vision to monitor extrapersonal space in the absence of auditory cues, producing a more evenly distributed attention gradient across visual space. However, little direct evidence exists that peripheral advantages are functionally tied to central deficits, rather than determined by independent mechanisms, and previous studies using several attention tasks typically report peripheral advantages or central deficits, not both. To test the general altered attentional gradient proposal, we employed a novel divided attention paradigm that measured target localization performance along a gradient from parafoveal to peripheral locations, independent of concurrent central object identification performance in prelingually deaf and hearing groups who differed in access to auditory input. Deaf participants without cochlear implants (No-CI), with cochlear implants (CI), and hearing participants identified vehicles presented centrally, and concurrently reported the location of parafoveal (1.4°) and peripheral (13.3°) targets among distractors. No-CI participants but not CI participants showed a central identification accuracy deficit. However, all groups displayed equivalent target localization accuracy at peripheral and parafoveal locations and nearly parallel parafoveal-peripheral gradients. Furthermore, the No-CI group’s central identification deficit remained after statistically controlling peripheral performance; conversely, the parafoveal and peripheral group performance equivalencies remained after controlling central identification accuracy. These results suggest that, in the absence of auditory input, reduced central attentional capacity is not necessarily associated with enhanced peripheral attentional capacity or with flattening of a general attention gradient. Our findings converge with earlier studies suggesting that a general graded trade-off of attentional resources across the visual field does not adequately explain the complex task-dependent spatial distribution of deaf-hearing performance differences reported in the literature. Rather, growing evidence suggests that the spatial distribution of attention-mediated performance in deaf people is determined by sophisticated cross-modal plasticity mechanisms that recruit specific sensory and polymodal cortex to achieve specific compensatory processing goals. PMID:28559861
Neural Biomarkers for Dyslexia, ADHD, and ADD in the Auditory Cortex of Children.
Serrallach, Bettina; Groß, Christine; Bernhofs, Valdis; Engelmann, Dorte; Benner, Jan; Gündert, Nadine; Blatow, Maria; Wengenroth, Martina; Seitz, Angelika; Brunner, Monika; Seither, Stefan; Parncutt, Richard; Schneider, Peter; Seither-Preisler, Annemarie
2016-01-01
Dyslexia, attention deficit hyperactivity disorder (ADHD), and attention deficit disorder (ADD) show distinct clinical profiles that may include auditory and language-related impairments. Currently, an objective brain-based diagnosis of these developmental disorders is still unavailable. We investigated the neuro-auditory systems of dyslexic, ADHD, ADD, and age-matched control children (N = 147) using neuroimaging, magnetoencephalography and psychoacoustics. All disorder subgroups exhibited an oversized left planum temporale and an abnormal interhemispheric asynchrony (10-40 ms) of the primary auditory evoked P1-response. Considering right auditory cortex morphology, bilateral P1 source waveform shapes, and auditory performance, the three disorder subgroups could be reliably differentiated with outstanding accuracies of 89-98%. We therefore for the first time provide differential biomarkers for a brain-based diagnosis of dyslexia, ADHD, and ADD. The method not only allowed for clear discrimination between two subtypes of attentional disorders (ADHD and ADD), a topic controversially discussed for decades in the scientific community, but also revealed the potential for objectively identifying comorbid cases. Notably, in children playing a musical instrument, the observed interhemispheric asynchronies were reduced by about two-thirds after three and a half years of training, suggesting a strong beneficial influence of musical experience on brain development. These findings might have far-reaching implications for both research and practice and enable a profound understanding of the brain-related etiology, diagnosis, and musically based therapy of common auditory-related developmental disorders and learning disabilities. PMID:27471442
Happiness increases distraction by auditory deviant stimuli.
Pacheco-Unguetti, Antonia Pilar; Parmentier, Fabrice B R
2016-08-01
Rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards) capture attention and impair behavioural performance in an ongoing visual task. Recent evidence indicates that this effect is increased by sadness in a task involving neutral stimuli. We tested the hypothesis that such an effect may not be limited to negative emotions but may instead reflect a general depletion of attentional resources, by examining whether a positive emotion (happiness) would increase deviance distraction too. Prior to performing an auditory-visual oddball task, happiness or a neutral mood was induced in participants by means of exposure to music and the recollection of an autobiographical event. Results from the oddball task showed significantly larger deviance distraction following the induction of happiness. Interestingly, the small amount of distraction typically observed on the standard trial following a deviant trial (post-deviance distraction) was not increased by happiness. We speculate that happiness might interfere with the disengagement of attention from the deviant sound back towards the target stimulus (through the depletion of cognitive resources and/or mind wandering) but help subsequent cognitive control to recover from distraction. © 2015 The British Psychological Society.
Menashe, Shay
2017-01-01
The main aim of the present study was to determine whether adult dyslexic readers demonstrate the "Asynchrony Theory" (Breznitz [Reading Fluency: Synchronization of Processes, Lawrence Erlbaum and Associates, Mahwah, NJ, USA, 2006]) when selective attention is studied. Event-related potentials (ERPs) and behavioral parameters were collected from a group of nonimpaired readers and a group of dyslexic readers performing alphabetic and nonalphabetic tasks. The dyslexic readers were found to demonstrate asynchrony between the auditory and the visual modalities when processing alphabetic stimuli. These findings held for both behavioral and ERP parameters. Unlike the dyslexic readers, the nonimpaired readers showed synchronized speed of processing in the auditory and the visual modalities while processing alphabetic stimuli. The current study suggests that established reading is dependent on a synchronization between the auditory and the visual modalities even when it comes to selective attention.
Feature-Selective Attention Adaptively Shifts Noise Correlations in Primary Auditory Cortex.
Downer, Joshua D; Rapone, Brittany; Verhein, Jessica; O'Connor, Kevin N; Sutter, Mitchell L
2017-05-24
Sensory environments often contain an overwhelming amount of information, with both relevant and irrelevant information competing for neural resources. Feature attention mediates this competition by selecting the sensory features needed to form a coherent percept. How attention affects the activity of populations of neurons to support this process is poorly understood because population coding is typically studied through simulations in which one sensory feature is encoded without competition. Therefore, to study the effects of feature attention on population-based neural coding, investigations must be extended to include stimuli with both relevant and irrelevant features. We measured noise correlations (r_noise) within small neural populations in primary auditory cortex while rhesus macaques performed a novel feature-selective attention task. We found that the effect of feature-selective attention on r_noise depended not only on the population tuning to the attended feature, but also on the tuning to the distractor feature. To attempt to explain how these observed effects might support enhanced perceptual performance, we propose an extension of a simple and influential model in which shifts in r_noise can simultaneously enhance the representation of the attended feature while suppressing the distractor. These findings present a novel mechanism by which attention modulates neural populations to support sensory processing in cluttered environments. SIGNIFICANCE STATEMENT Although feature-selective attention constitutes one of the building blocks of listening in natural environments, its neural bases remain obscure. To address this, we developed a novel auditory feature-selective attention task and measured noise correlations (r_noise) in rhesus macaque A1 during task performance. Unlike previous studies showing that the effect of attention on r_noise depends on population tuning to the attended feature, we show that the effect of attention depends on the tuning to the distractor feature as well. We suggest that these effects represent an efficient process by which sensory cortex simultaneously enhances relevant information and suppresses irrelevant information. Copyright © 2017 the authors. PMID:28432139
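Noise correlations of the kind measured above are typically computed by removing each unit's mean response to a given stimulus/attention condition and correlating the remaining trial-to-trial fluctuations across pairs of units. The sketch below shows that basic computation on simulated spike counts; it is a generic illustration, not the authors' analysis pipeline.

    import numpy as np

    def noise_correlations(counts):
        """Pairwise noise correlations for one stimulus/attention condition.

        counts: array of shape (n_trials, n_neurons) holding spike counts to
        repeated presentations of the same stimulus. Subtracting the condition
        mean leaves only trial-to-trial (noise) fluctuations.
        """
        residuals = counts - counts.mean(axis=0, keepdims=True)
        r = np.corrcoef(residuals, rowvar=False)        # neurons x neurons
        upper = np.triu_indices_from(r, k=1)
        return r[upper]                                 # unique pairs only

    rng = np.random.default_rng(1)
    shared = rng.normal(size=(200, 1))                  # shared trial-by-trial fluctuation
    counts = 10 + 1.5 * shared + rng.normal(size=(200, 4))
    print("mean r_noise:", noise_correlations(counts).mean().round(3))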
The Influence of Selective and Divided Attention on Audiovisual Integration in Children.
Yang, Weiping; Ren, Yanna; Yang, Dan Ou; Yuan, Xue; Wu, Jinglong
2016-01-24
This article aims to investigate whether there is a difference in audiovisual integration in school-aged children (aged 6 to 13 years; mean age = 9.9 years) between the selective attention condition and the divided attention condition. We designed a visual and/or auditory detection task that included three blocks (divided attention, visual-selective attention, and auditory-selective attention). The results showed that the response to bimodal audiovisual stimuli was faster than to unimodal auditory or visual stimuli under both divided attention and auditory-selective attention conditions. However, in the visual-selective attention condition, no significant difference was found between the unimodal visual and bimodal audiovisual stimuli in response speed. Moreover, audiovisual behavioral facilitation effects were compared between divided attention and selective attention (auditory or visual attention). In doing so, we found that audiovisual behavioral facilitation differed significantly between divided attention and selective attention. The results indicated that audiovisual integration was stronger in the divided attention condition than in the selective attention condition in children. Our findings objectively support the notion that attention can modulate audiovisual integration in school-aged children. Our study might offer a new perspective for identifying children with conditions that are associated with sustained attention deficit, such as attention-deficit hyperactivity disorder. © The Author(s) 2016.
Garg, Arun; Schwartz, Daniel; Stevens, Alexander A.
2007-01-01
What happens in vision related cortical areas when congenitally blind (CB) individuals orient attention to spatial locations? Previous neuroimaging of sighted individuals has found overlapping activation in a network of frontoparietal areas including frontal eye-fields (FEF), during both overt (with eye movement) and covert (without eye movement) shifts of spatial attention. Since voluntary eye movement planning seems irrelevant in CB, their FEF neurons should be recruited for alternative functions if their attentional role in sighted individuals is only due to eye movement planning. Recent neuroimaging of the blind has also reported activation in medial occipital areas, normally associated with visual processing, during a diverse set of non-visual tasks, but their response to attentional shifts remains poorly understood. Here, we used event-related fMRI to explore FEF and medial occipital areas in CB individuals and sighted controls with eyes closed (SC) performing a covert attention orienting task, using endogenous verbal cues and spatialized auditory targets. We found robust stimulus-locked FEF activation of all CB subjects, similar but stronger than in SC, suggesting that FEF plays a role in endogenous orienting of covert spatial attention even in individuals in whom voluntary eye movements are irrelevant. We also found robust activation in bilateral medial occipital cortex in CB but not in SC subjects. The response decreased below baseline following endogenous verbal cues but increased following auditory targets, suggesting that the medial occipital area in CB does not directly engage during cued orienting of attention but may be recruited for processing of spatialized auditory targets. PMID:17397882
Visual and auditory steady-state responses in attention-deficit/hyperactivity disorder.
Khaleghi, Ali; Zarafshan, Hadi; Mohammadi, Mohammad Reza
2018-05-22
We designed a study to investigate the patterns of the steady-state visual evoked potential (SSVEP) and auditory steady-state response (ASSR) in adolescents with attention-deficit/hyperactivity disorder (ADHD) when performing a motor response inhibition task. Thirty 12- to 18-year-old adolescents with ADHD and 30 healthy control adolescents underwent an electroencephalogram (EEG) examination during steady-state stimuli when performing a stop-signal task. Then, we calculated the amplitude and phase of the steady-state responses in both visual and auditory modalities. Results showed that adolescents with ADHD had a significantly poorer performance in the stop-signal task during both visual and auditory stimuli. The SSVEP amplitude of the ADHD group was larger than that of the healthy control group in most regions of the brain, whereas the ASSR amplitude of the ADHD group was smaller than that of the healthy control group in some brain regions (e.g., right hemisphere). In conclusion, poorer task performance (especially inattention) and neurophysiological results in ADHD demonstrate a possible impairment in the interconnection of the association cortices in the parietal and temporal lobes and the prefrontal cortex. Also, the motor control problems in ADHD may arise from neural deficits in the frontoparietal and occipitoparietal systems and other brain structures such as cerebellum.
Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis
2014-01-01
In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring, or not requiring, selective auditory attention. Appended to each stimulus presentation, and included in the calculation of each nSFOAE response, was a 30-ms silent period that was used to estimate the level of the inherent physiological noise in the ear canals of our subjects during each behavioral condition. Physiological-noise magnitudes were higher (noisier) for all subjects in the inattention task, and lower (quieter) in the selective auditory-attention tasks. These noise measures initially were made at the frequency of our nSFOAE probe tone (4.0 kHz), but the same attention effects also were observed across a wide range of frequencies. We attribute the observed differences in physiological-noise magnitudes between the inattention and attention conditions to different levels of efferent activation associated with the differing attentional demands of the behavioral tasks. One hypothesis is that when the attentional demand is relatively great, efferent activation is relatively high, and a decrease in the gain of the cochlear amplifier leads to lower-amplitude cochlear activity, and thus a smaller measure of noise from the ear. PMID:24732069
Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan
2016-10-01
Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.
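The N2ac described above is obtained by subtracting the ERP for right-side targets from the ERP for left-side targets and quantifying the resulting difference wave in a latency window around its peak. A minimal numpy sketch of that contrast on simulated averages follows; the sampling rate, electrode count, and 400-600 ms window are assumptions chosen only to illustrate the computation.

    import numpy as np

    # Hypothetical single-subject ERP averages: time points x 2 anterior electrodes,
    # one average per target side. Sampling rate and epoch limits are assumed.
    fs = 500
    t = np.arange(-0.2, 1.0, 1.0 / fs)
    rng = np.random.default_rng(2)
    erp_target_left = rng.normal(0.0, 0.2, (t.size, 2))
    erp_target_right = rng.normal(0.0, 0.2, (t.size, 2))

    # N2ac-style contrast: difference waveform for left-target minus right-target
    # trials; the anterior contralateral subcomponent emerges in this subtraction.
    n2ac_wave = (erp_target_left - erp_target_right).mean(axis=1)

    # Mean amplitude in a window around the reported ~500 ms peak (window assumed).
    window = (t >= 0.4) & (t <= 0.6)
    print("N2ac mean amplitude (a.u.):", n2ac_wave[window].mean().round(3))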
Cornell Kärnekull, Stina; Arshamian, Artin; Nilsson, Mats E.; Larsson, Maria
2016-01-01
Although evidence is mixed, studies have shown that blind individuals perform better than sighted individuals at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests in absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated a blind-over-sighted advantage in auditory episodic recognition that was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of an overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in episodic auditory recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity. PMID:27729884
Ruggles, Dorea; Shinn-Cunningham, Barbara
2011-06-01
Listeners can selectively attend to a desired target by directing attention to known target source features, such as location or pitch. Reverberation, however, reduces the reliability of the cues that allow a target source to be segregated and selected from a sound mixture. Given this, it is likely that reverberant energy interferes with selective auditory attention. Anecdotal reports suggest that the ability to focus spatial auditory attention degrades even with early aging, yet there is little evidence that middle-aged listeners have behavioral deficits on tasks requiring selective auditory attention. The current study was designed to look for individual differences in selective attention ability and to see if any such differences correlate with age. Normal-hearing adults, ranging in age from 18 to 55 years, were asked to report a stream of digits located directly ahead in a simulated rectangular room. Simultaneous, competing masker digit streams were simulated at locations 15° left and right of center. The level of reverberation was varied to alter task difficulty by interfering with localization cues (increasing localization blur). Overall, performance was best in the anechoic condition and worst in the high-reverberation condition. Listeners nearly always reported a digit from one of the three competing streams, showing that reverberation did not render the digits unintelligible. Importantly, inter-subject differences were extremely large. These differences, however, were not significantly correlated with age, memory span, or hearing status. These results show that listeners with audiometrically normal pure tone thresholds differ in their ability to selectively attend to a desired source, a task important in everyday communication. Further work is necessary to determine if these differences arise from differences in peripheral auditory function or in more central function.
The effect of divided attention on novices and experts in laparoscopic task performance.
Ghazanfar, Mudassar Ali; Cook, Malcolm; Tang, Benjie; Tait, Iain; Alijani, Afshin
2015-03-01
Attention is important for the skilful execution of surgery. The surgeon's attention during surgery is divided between surgery and outside distractions. The effect of this divided attention has not been well studied previously. We aimed to compare the effect of dividing attention of novices and experts on a laparoscopic task performance. Following ethical approval, 25 novices and 9 expert surgeons performed a standardised peg transfer task in a laboratory setup under three randomly assigned conditions: silent as control condition and two standardised auditory distracting tasks requiring response (easy and difficult) as study conditions. Human reliability assessment was used for surgical task analysis. Primary outcome measures were correct auditory responses, task time, number of surgical errors and instrument movements. Secondary outcome measures included error rate, error probability and hand-specific differences. Non-parametric statistics were used for data analysis. 21109 movements and 9036 total errors were analysed. Novices had increased mean task completion time in seconds (mean ± SD: 171 ± 44 vs. 149 ± 34, p < 0.05), number of total movements (227 ± 27 vs. 213 ± 26, p < 0.05) and number of errors (127 ± 51 vs. 96 ± 28, p < 0.05) during difficult study conditions compared to control. The correct responses to auditory stimuli were less frequent in experts (68%) compared to novices (80%). There was a positive correlation between error rate and error probability in novices (r² = 0.533, p < 0.05) but not in experts (r² = 0.346, p > 0.05). Divided attention conditions in the theatre environment require careful consideration during surgical training, as junior surgeons are less able to focus their attention during these conditions.
Auditory skills of figure-ground and closure in air traffic controllers.
Villar, Anna Carolina Nascimento Waack Braga; Pereira, Liliane Desgualdo
2017-12-04
To investigate the auditory skills of closure and figure-ground and factors associated with health, communication, and attention in air traffic controllers, and compare these variables with those of other civil and military servants. Study participants were sixty adults with normal audiometric thresholds divided into two groups matched for age and gender: a study group (SG), comprising 30 air traffic controllers, and a control group (CG), composed of 30 other military and civil servants. All participants were asked a number of questions regarding their health, communication, and attention, and underwent the Speech-in-Noise Test (SIN) to assess their closure skills and the Synthetic Sentence Identification Test - Ipsilateral Competitive Message (SSI-ICM) in monotic listening to evaluate their figure-ground abilities. Data were compared using nonparametric statistical tests and logistic regression analysis. More individuals in the SG reported fatigue and/or burnout and work-related stress, and the SG showed better figure-ground performance than the CG. Both groups performed similarly and satisfactorily in the other hearing tests. In the logistic regression, the odds ratios for belonging to the SG were 5.59 for work-related stress and 1.24 for SSI-ICM (right ear). Results for the variables auditory closure, self-reported health, attention, and communication were similar in both groups. The SG presented significantly better performance in auditory figure-ground compared with that of the CG. Self-reported stress and right-ear SSI-ICM were significant predictors of individuals belonging to the SG.
Yasuda, Kazuhiro; Iimura, Naoyuki; Iwata, Hiroyasu
2014-01-01
The objective of the present study was to determine whether increased attentional demands influence the assessment of ankle joint proprioceptive ability in young adults. We used a dual-task condition, in which participants performed an ankle ipsilateral position-matching task with and without a secondary serial auditory subtraction task during target angle encoding. Two experiments were performed with two different cohorts: one in which the auditory subtraction task was easy (experiment 1a) and one in which it was difficult (experiment 1b). The results showed that, compared with the single-task condition, participants had higher absolute error under dual-task conditions in experiment 1b. The reduction in position-matching accuracy with an attentionally demanding cognitive task suggests that allocation of attentional resources toward a difficult second task can lead to compromised ankle proprioceptive performance. Therefore, these findings indicate that the difficulty level of the cognitive task may be the critical factor that decreased the accuracy of the position-matching task. We conclude that increased attentional demand from a difficult cognitive task does influence the assessment of ankle joint proprioceptive ability in young adults when measured using an ankle ipsilateral position-matching task. PMID:24523966
Dutke, Stephan; Jaitner, Thomas; Berse, Timo; Barenberg, Jonathan
2014-02-01
Research on effects of acute physical exercise on performance in a concurrent cognitive task has generated equivocal evidence. Processing efficiency theory predicts that concurrent physical exercise can increase resource requirements for sustaining cognitive performance even when the level of performance is unaffected. This hypothesis was tested in a dual-task experiment. Sixty young adults worked on a primary auditory attention task and a secondary interval production task while cycling on a bicycle ergometer. Physical load (cycling) and cognitive load of the primary task were manipulated. Neither physical nor cognitive load affected primary task performance, but both factors interacted on secondary task performance. Sustaining primary task performance under increased physical and/or cognitive load increased resource consumption as indicated by decreased secondary task performance. Results demonstrated that physical exercise effects on cognition might be underestimated when only single task performance is the focus.
Skills for Academic Improvement: A Guide for How-to-Study Counselors.
1982-06-01
…auditory neurological system beyond the ear. Auditory perception consists of essentially eight components, which are: 1. Auditory attention… "Daydreaming" or difficulty following lectures in different classes may be an indication of problems with auditory attention. 2. Sound localization… The counselor must listen not only attentively to what the cadet says, but must learn to listen perceptively for what the cadet really means…
Curtindale, Lori; Laurie-Rose, Cynthia; Bennett-Murphy, Laura; Hull, Sarah
2007-05-01
Applying optimal stimulation theory, the present study explored the development of sustained attention as a dynamic process. It examined the interaction of modality and temperament over time in children and adults. Second-grade children and college-aged adults performed auditory and visual vigilance tasks. Using the Carey temperament questionnaires (S. C. McDevitt & W. B. Carey, 1995), the authors classified participants according to temperament composites of reactivity and task orientation. In a preliminary study, tasks were equated across age and modality using d' matching procedures. In the main experiment, 48 children and 48 adults performed these calibrated tasks. The auditory task proved more difficult for both children and adults. Intermodal relations changed with age: Performance across modality was significantly correlated for children but not for adults. Although temperament did not significantly predict performance in adults, it did for children. The temperament effects observed in children--specifically in those with the composite of reactivity--occurred in connection with the auditory task and in a manner consistent with theoretical predictions derived from optimal stimulation theory. Copyright (c) 2007 APA, all rights reserved.
Modality-specificity of Selective Attention Networks.
Stewart, Hannah J; Amitay, Sygal
2015-01-01
To establish the modality specificity and generality of selective attention networks. Forty-eight young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. The analysis yielded a four-component solution. The first component comprised a range of measures from the TEA and was labeled "general attention." The third component was labeled "auditory attention," as it only contained measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled as "spatial orienting" and "spatial conflict," respectively; they comprised orienting and conflict resolution measures from the vANT, aANT, and TAiL attend-location task, all tasks based upon spatial judgments (e.g., the direction of a target arrow or sound location). These results do not support our a priori hypothesis that attention networks are either modality specific or supramodal. Auditory attention separated into selectively attending to spatial and non-spatial features, with the auditory spatial attention loading onto the same factor as visual spatial attention, suggesting spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research will be required to ascertain whether non-spatial attention is modality-specific.
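An exploratory factor analysis of this kind reduces a subjects-by-measures score matrix to a small number of latent components and inspects the loadings. The sketch below uses scikit-learn's FactorAnalysis on simulated scores; the measure count, the four-component choice, and the lack of rotation are simplifying assumptions, not the authors' exact procedure.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(3)
    n_subjects, n_measures = 48, 8          # 48 participants, 8 attention scores (assumed)
    scores = rng.normal(size=(n_subjects, n_measures))

    # Standardize the measures, then extract four latent components, mirroring the
    # four-component solution reported above (factor rotation is omitted here).
    z = StandardScaler().fit_transform(scores)
    fa = FactorAnalysis(n_components=4, random_state=0).fit(z)
    loadings = fa.components_.T             # rows = measures, columns = components
    print(np.round(loadings, 2))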
Visual selective attention in amnestic mild cognitive impairment.
McLaughlin, Paula M; Anderson, Nicole D; Rich, Jill B; Chertkow, Howard; Murtha, Susan J E
2014-11-01
Subtle deficits in visual selective attention have been found in amnestic mild cognitive impairment (aMCI). However, few studies have explored performance on visual search paradigms or the Simon task, which are known to be sensitive to disease severity in Alzheimer's patients. Furthermore, there is limited research investigating how deficiencies can be ameliorated with exogenous support (auditory cues). Sixteen individuals with aMCI and 14 control participants completed 3 experimental tasks that varied in demand and cue availability: visual search-alerting, visual search-orienting, and Simon task. Visual selective attention was influenced by aMCI, auditory cues, and task characteristics. Visual search abilities were relatively consistent across groups. The aMCI participants were impaired on the Simon task when working memory was required, but conflict resolution was similar to controls. Spatially informative orienting cues improved response times, whereas spatially neutral alerting cues did not influence performance. Finally, spatially informative auditory cues benefited the aMCI group more than controls in the visual search task, specifically at the largest array size where orienting demands were greatest. These findings suggest that individuals with aMCI have working memory deficits and subtle deficiencies in orienting attention and rely on exogenous information to guide attention. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Sustained Selective Attention to Competing Amplitude-Modulations in Human Auditory Cortex
Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander
2014-01-01
Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control. PMID:25259525
Patterns of language and auditory dysfunction in 6-year-old children with epilepsy.
Selassie, Gunilla Rejnö-Habte; Olsson, Ingrid; Jennische, Margareta
2009-01-01
In a previous study we reported difficulty with expressive language and visuoperceptual ability in preschool children with epilepsy and otherwise normal development. The present study analysed speech and language dysfunction for each individual in relation to epilepsy variables, ear preference, and intelligence in these children and described their auditory function. Twenty 6-year-old children with epilepsy (14 females, 6 males; mean age 6:5 y, range 6 y-6 y 11 mo) and 30 reference children without epilepsy (18 females, 12 males; mean age 6:5 y, range 6 y-6 y 11 mo) were assessed for language and auditory ability. Low scores for the children with epilepsy were analysed with respect to speech-language domains, type of epilepsy, site of epileptiform activity, intelligence, and language laterality. Auditory attention, perception, discrimination, and ear preference were measured with a dichotic listening test, and group comparisons were performed. Children with left-sided partial epilepsy had extensive language dysfunction. Most children with partial epilepsy had phonological dysfunction. Language dysfunction was also found in children with generalized and unclassified epilepsies. The children with epilepsy performed significantly worse than the reference children in auditory attention, perception of vowels and discrimination of consonants for the right ear and had more left ear advantage for vowels, indicating undeveloped language laterality.
Pearson, Deborah A.; Santos, Cynthia W.; Casat, Charles D.; Lane, David M.; Jerger, Susan W.; Roache, John D.; Loveland, Katherine A.; Lachar, David; Faria, Laura P.; Payne, Christa D.; Cleveland, Lynne A.
2004-01-01
Objective: Cognitive effects of stimulant medication were investigated in children with mental retardation (MR) and attention-deficit/hyperactivity disorder (ADHD). Method: Performance on tasks tapping sustained attention, visual and auditory selective attention, inhibition, and immediate memory was assessed for 24 children (mean age 10.9 years)…
Tracking the voluntary control of auditory spatial attention with event-related brain potentials.
Störmer, Viola S; Green, Jessica J; McDonald, John J
2009-03-01
A lateralized event-related potential (ERP) component elicited by attention-directing cues (ADAN) has been linked to frontal-lobe control but is often absent when spatial attention is deployed in the auditory modality. Here, we tested the hypothesis that ERP activity associated with frontal-lobe control of auditory spatial attention is distributed bilaterally by comparing ERPs elicited by attention-directing cues and neutral cues in a unimodal auditory task. This revealed an initial ERP positivity over the anterior scalp and a later ERP negativity over the parietal scalp. Distributed source analysis indicated that the anterior positivity was generated primarily in bilateral prefrontal cortices, whereas the more posterior negativity was generated in parietal and temporal cortices. The anterior ERP positivity likely reflects frontal-lobe attentional control, whereas the subsequent ERP negativity likely reflects anticipatory biasing of activity in auditory cortex.
Auditory processing during deep propofol sedation and recovery from unconsciousness.
Koelsch, Stefan; Heinke, Wolfgang; Sammler, Daniela; Olthoff, Derk
2006-08-01
Using evoked potentials, this study investigated effects of deep propofol sedation, and effects of recovery from unconsciousness, on the processing of auditory information with stimuli suited to elicit a physical MMN (mismatch negativity) and a music-syntactic ERAN (early right anterior negativity). Levels of sedation were assessed using the Bispectral Index (BIS) and the Modified Observer's Assessment of Alertness and Sedation Scale (MOAAS). EEG measurements were performed during wakefulness, deep propofol sedation (MOAAS 2-3, mean BIS=68), and a recovery period. Between deep sedation and the recovery period, the infusion rate of propofol was increased to achieve unconsciousness (MOAAS 0-1, mean BIS=35); EEG measurements of the recovery period were performed after subjects regained consciousness. During deep sedation, the physical MMN was markedly reduced, but still significant. No ERAN was observed at this level. A clear P3a was elicited during deep sedation by those deviants that were task-relevant during the awake state. As soon as subjects regained consciousness during the recovery period, a normal MMN was elicited. By contrast, the P3a was absent in the recovery period, and the P3b was markedly reduced. Results indicate that auditory sensory memory (as indexed by the physical MMN) is still active, although strongly reduced, during deep sedation (MOAAS 2-3). The presence of the P3a indicates that attention-related processes are still operating at this level. Processes of syntactic analysis appear to be abolished during deep sedation. After propofol-induced anesthesia, auditory sensory memory appears to operate normally as soon as subjects regain consciousness, whereas the attention-related processes indexed by P3a and P3b are markedly impaired. These results inform about the effects of sedative drugs on auditory and attention-related mechanisms. The findings are important because these mechanisms are prerequisites for auditory awareness, auditory learning and memory, as well as language perception during anesthesia.
Rizzo, John-Ross; Raghavan, Preeti; McCrery, J R; Oh-Park, Mooyeon; Verghese, Joe
2015-04-01
Objective: To evaluate the effect of a novel divided attention task (walking under auditory constraints) on gait performance in older adults and to determine whether this effect was moderated by cognitive status. Design: Validation cohort. Setting: General community. Participants: Ambulatory older adults without dementia (N=104). Interventions: Not applicable. In this pilot study, we evaluated walking under auditory constraints in 104 older adults who completed 3 pairs of walking trials on a gait mat under 1 of 3 randomly assigned conditions: 1 pair without auditory stimulation and 2 pairs with emotionally charged auditory stimulation with happy or sad sounds. The mean age of subjects was 80.6±4.9 years, and 63% (n=66) were women. The mean velocity during normal walking was 97.9±20.6 cm/s, and the mean cadence was 105.1±9.9 steps/min. The effect of walking under auditory constraints on gait characteristics was analyzed using a 2-factorial analysis of variance with a 1-between factor (cognitively intact and minimal cognitive impairment groups) and a 1-within factor (type of auditory stimuli). In both happy and sad auditory stimulation trials, cognitively intact older adults (n=96) showed an average increase of 2.68 cm/s in gait velocity (F(1.86,191.71)=3.99; P=.02) and an average increase of 2.41 steps/min in cadence (F(1.75,180.42)=10.12; P<.001) as compared with trials without auditory stimulation. In contrast, older adults with minimal cognitive impairment (Blessed test score, 5-10; n=8) showed an average reduction of 5.45 cm/s in gait velocity (F(1.87,190.83)=5.62; P=.005) and an average reduction of 3.88 steps/min in cadence (F(1.79,183.10)=8.21; P=.001) under both auditory stimulation conditions. Neither baseline fall history nor performance of activities of daily living accounted for these differences. Our results provide preliminary evidence of the differentiating effect of emotionally charged auditory stimuli on gait performance in older individuals with minimal cognitive impairment compared with those without minimal cognitive impairment. A divided attention task using emotionally charged auditory stimuli might be able to elicit compensatory improvement in gait performance in cognitively intact older individuals, but lead to decompensation in those with minimal cognitive impairment. Further investigation is needed to compare gait performance under this task to gait on other dual-task paradigms and to separately examine the effect of physiological aging versus cognitive impairment on gait during walking under auditory constraints. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
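The 2-factorial design above (one between-subject factor, cognitive status; one within-subject factor, auditory condition) corresponds to a standard mixed-design ANOVA. Below is a sketch using the pingouin package on simulated long-format data; the library choice, column names, and effect sizes are assumptions made only to illustrate the analysis structure.

    import numpy as np
    import pandas as pd
    import pingouin as pg

    rng = np.random.default_rng(4)
    rows = []
    for subject in range(30):
        impaired = int(subject < 5)            # hypothetical group assignment
        base_velocity = rng.normal(98, 15)     # cm/s, roughly matching the reported mean
        for condition in ["silent", "happy", "sad"]:
            shift = 0.0 if condition == "silent" else (-5.0 if impaired else 3.0)
            rows.append({"subject": subject, "group": impaired, "condition": condition,
                         "velocity": base_velocity + shift + rng.normal(0, 3)})
    gait = pd.DataFrame(rows)

    # Mixed-design ANOVA: 1 between-subject factor (group) x 1 within-subject
    # factor (auditory condition), mirroring the analysis described in the abstract.
    aov = pg.mixed_anova(data=gait, dv="velocity", within="condition",
                         between="group", subject="subject")
    print(aov[["Source", "F", "p-unc"]])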
Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z.; Shamma, Shihab A.; Babadi, Behtash
2015-01-01
The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy. PMID:26436490
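The decoder described above is a state-space model fit by EM with MAP estimation, which is beyond a short example. As a much simpler stand-in for the same idea (tracking which of two speech streams the neural response follows over time), the sketch below labels each sliding window by which speech envelope correlates more strongly with a simulated neural envelope measure; all signals and parameters are fabricated for illustration and this is not the authors' decoder.

    import numpy as np

    def track_attention(neural, env_a, env_b, fs, win_s=4.0, step_s=1.0):
        """Crude attention tracker: in each window, report the stream whose
        envelope correlates more strongly with the neural envelope measure."""
        win, step = int(win_s * fs), int(step_s * fs)
        labels = []
        for start in range(0, len(neural) - win, step):
            segment = slice(start, start + win)
            r_a = np.corrcoef(neural[segment], env_a[segment])[0, 1]
            r_b = np.corrcoef(neural[segment], env_b[segment])[0, 1]
            labels.append("A" if r_a > r_b else "B")
        return labels

    fs = 100                                    # envelope sampling rate (Hz), assumed
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(5)
    env_a, env_b = rng.random(t.size), rng.random(t.size)
    neural = 0.6 * env_a + 0.4 * rng.random(t.size)   # simulated response following stream A
    print(track_attention(neural, env_a, env_b, fs)[:10])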
Auditory Selective Attention to Speech Modulates Activity in the Visual Word Form Area
Yoncheva, Yuliya N.; Zevin, Jason D.; Maurer, Urs
2010-01-01
Selective attention to speech versus nonspeech signals in complex auditory input could produce top-down modulation of cortical regions previously linked to perception of spoken, and even visual, words. To isolate such top-down attentional effects, we contrasted 2 equally challenging active listening tasks, performed on the same complex auditory stimuli (words overlaid with a series of 3 tones). Instructions required selectively attending to either the speech signals (in service of rhyme judgment) or the melodic signals (tone-triplet matching). Selective attention to speech, relative to attention to melody, was associated with blood oxygenation level–dependent (BOLD) increases during functional magnetic resonance imaging (fMRI) in left inferior frontal gyrus, temporal regions, and the visual word form area (VWFA). Further investigation of the activity in visual regions revealed overall deactivation relative to baseline rest for both attention conditions. Topographic analysis demonstrated that while attending to melody drove deactivation equivalently across all fusiform regions of interest examined, attending to speech produced a regionally specific modulation: deactivation of all fusiform regions, except the VWFA. Results indicate that selective attention to speech can topographically tune extrastriate cortex, leading to increased activity in VWFA relative to surrounding regions, in line with the well-established connectivity between areas related to spoken and visual word perception in skilled readers. PMID:19571269
Sanju, Himanshu Kumar; Kumar, Prawin
2016-10-01
Introduction: Mismatch negativity is a negative component of the event-related potential (ERP) elicited by any discriminable change in auditory stimulation. Objective: The present study aimed to assess pre-attentive auditory discrimination skill with fine and gross differences between auditory stimuli. Method: Seventeen normal-hearing individuals participated in the study; all gave informed consent. To assess pre-attentive auditory discrimination skill with a fine difference between auditory stimuli, we recorded mismatch negativity (MMN) with a pair of pure-tone stimuli, /1000 Hz/ and /1010 Hz/, with /1000 Hz/ as the frequent stimulus and /1010 Hz/ as the infrequent stimulus. Similarly, we used /1000 Hz/ and /1100 Hz/, with /1000 Hz/ as the frequent stimulus and /1100 Hz/ as the infrequent stimulus, to assess pre-attentive auditory discrimination skill with a gross difference between auditory stimuli. We analyzed MMN for onset latency, offset latency, peak latency, peak amplitude, and area under the curve. Result: MMN was present in only 64% of the individuals in both conditions. Further, multivariate analysis of variance (MANOVA) showed no significant difference in any measure of MMN (onset latency, offset latency, peak latency, peak amplitude, and area under the curve) between the two conditions. Conclusion: The present study showed similar pre-attentive skills for both conditions, fine (1000 Hz and 1010 Hz) and gross (1000 Hz and 1100 Hz) differences in auditory stimuli, at a higher (endogenous) level of the auditory system.
Zhang, Dan; Hong, Bo; Gao, Shangkai; Röder, Brigitte
2017-05-01
While the behavioral dynamics as well as the functional networks of sustained and transient attention have been studied extensively, their underlying neural mechanisms have most often been investigated in separate experiments. In the present study, participants were instructed to perform an audio-visual spatial attention task. They were asked to attend to either the left or the right hemifield and to respond to transient deviant stimuli in either the auditory or the visual modality. Steady-state visual evoked potentials (SSVEPs) elicited by two task-irrelevant pattern-reversing checkerboards flickering at 10 and 15 Hz in the left and the right hemifields, respectively, were used to continuously monitor the locus of spatial attention. The amplitude and phase of the SSVEPs were extracted for single trials and were analyzed separately. Sustained attention to one hemifield (spatial attention) as well as to the auditory modality (intermodal attention) increased the inter-trial phase locking of the SSVEP responses, whereas briefly presented visual and auditory stimuli decreased the single-trial SSVEP amplitude between 200 and 500 ms post-stimulus. This transient change of the single-trial amplitude was restricted to the SSVEPs elicited by the reversing checkerboard in the spatially attended hemifield and thus might reflect a transient re-orienting of attention towards the brief stimuli. Thus, the present results demonstrate independent, but interacting, neural mechanisms of sustained and transient attentional orienting.
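Single-trial SSVEP amplitude and phase at the two tagging frequencies, and the inter-trial phase locking derived from them, can be sketched with a plain FFT. The example below simulates epochs containing a 10 Hz response; the sampling rate, epoch length, and trial count are assumptions, while the 10 and 15 Hz tagging frequencies are taken from the abstract.

    import numpy as np

    fs, n_trials, epoch_s = 250, 40, 2.0
    t = np.arange(0, epoch_s, 1 / fs)
    rng = np.random.default_rng(6)

    # Simulated epochs containing a 10 Hz SSVEP with partially consistent phase plus noise.
    phases = rng.uniform(0, 0.5 * np.pi, n_trials)
    epochs = np.array([np.sin(2 * np.pi * 10 * t + p) + rng.normal(0, 1, t.size)
                       for p in phases])

    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    spectra = np.fft.rfft(epochs, axis=1)

    for f_tag in (10.0, 15.0):                              # tagging frequencies from the study
        idx = np.argmin(np.abs(freqs - f_tag))
        amplitude = np.abs(spectra[:, idx]) * 2 / t.size    # single-trial amplitude
        plv = np.abs(np.mean(np.exp(1j * np.angle(spectra[:, idx]))))  # inter-trial phase locking
        print(f"{f_tag:4.0f} Hz: mean amplitude {amplitude.mean():.2f}, phase locking {plv:.2f}")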
The Effect of Visual Perceptual Load on Auditory Awareness in Autism Spectrum Disorder
ERIC Educational Resources Information Center
Tillmann, Julian; Olguin, Andrea; Tuomainen, Jyrki; Swettenham, John
2015-01-01
Recent work on visual selective attention has shown that individuals with Autism Spectrum Disorder (ASD) demonstrate an increased perceptual capacity. The current study examined whether increasing visual perceptual load also has less of an effect on auditory awareness in children with ASD. Participants performed either a high- or low load version…
Sensory coding and cognitive processing of sound in Veterans with blast exposure
Bressler, Scott; Goldberg, Hannah; Shinn-Cunningham, Barbara
2017-01-01
Recent anecdotal reports from VA audiology clinics as well as a few published studies have identified a sub-population of Service Members seeking treatment for problems communicating in everyday, noisy listening environments despite having normal to near-normal hearing thresholds. Because of their increased risk of exposure to dangerous levels of prolonged noise and transient explosive blast events, communication problems in these soldiers could be due either to hearing loss (traditional or “hidden”) in the auditory sensory periphery or to blast-induced injury to cortical networks associated with attention. We found that out of the 14 blast-exposed Service Members recruited for this study, 12 had hearing thresholds in the normal to near-normal range. A majority of these participants reported having problems specifically related to failures with selective attention. Envelope following responses (EFRs) measuring neural coding fidelity of the auditory brainstem to suprathreshold sounds were similar between blast-exposed and non-blast controls. Blast-exposed subjects performed substantially worse than non-blast controls in an auditory selective attention task in which listeners classified the melodic contour (rising, falling, or “zig-zagging”) of one of three simultaneous, competing tone sequences. Salient pitch and spatial differences made for easy segregation of the three concurrent melodies. Poor performance in the blast-exposed subjects was associated with weaker evoked response potentials (ERPs) in frontal EEG channels, as well as a failure of attention to enhance the neural responses evoked by a sequence when it was the target compared to when it was a distractor. These results suggest that communication problems in these listeners cannot be explained by compromised sensory representations in the auditory periphery, but rather point to lingering blast-induced damage to cortical networks implicated in the control of attention. Because all study participants also suffered from post-traumatic stress disorder (PTSD), follow-up studies are required to tease apart the contributions of PTSD and blast-induced injury to cognitive performance. PMID:27815131
Attentional capture by taboo words: A functional view of auditory distraction.
Röer, Jan P; Körner, Ulrike; Buchner, Axel; Bell, Raoul
2017-06-01
It is well established that task-irrelevant, to-be-ignored speech adversely affects serial short-term memory (STM) for visually presented items compared with a quiet control condition. However, there is an ongoing debate about whether the semantic content of the speech has the capacity to capture attention and to disrupt memory performance. In the present article, we tested whether taboo words are more difficult to ignore than neutral words. Taboo words or neutral words were presented as (a) steady state sequences in which the same distractor word was repeated, (b) changing state sequences in which different distractor words were presented, and (c) auditory deviant sequences in which a single distractor word deviated from a sequence of repeated words. Experiments 1 and 2 showed that taboo words disrupted performance more than neutral words. This taboo effect did not habituate and it did not differ between individuals with high and low working memory capacity. In Experiments 3 and 4, in which only a single deviant taboo word was presented, no taboo effect was obtained. These results do not support the idea that the processing of the auditory distractors' semantic content is the result of occasional attention switches to the auditory modality. Instead, the overall pattern of results is more in line with a functional view of auditory distraction, according to which the to-be-ignored modality is routinely monitored for potentially important stimuli (e.g., self-relevant or threatening information), the detection of which draws processing resources away from the primary task. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Generality and specificity in the effects of musical expertise on perception and cognition.
Carey, Daniel; Rosen, Stuart; Krishnan, Saloni; Pearce, Marcus T; Shepherd, Alex; Aydelott, Jennifer; Dick, Frederic
2015-04-01
Performing musicians invest thousands of hours becoming experts in a range of perceptual, attentional, and cognitive skills. The duration and intensity of musicians' training - far greater than that of most educational or rehabilitation programs - provides a useful model to test the extent to which skills acquired in one particular context (music) generalize to different domains. Here, we asked whether the instrument-specific and more instrument-general skills acquired during professional violinists' and pianists' training would generalize to superior performance on a wide range of analogous (largely non-musical) skills, when compared to closely matched non-musicians. Violinists and pianists outperformed non-musicians on fine-grained auditory psychophysical measures, but surprisingly did not differ from each other, despite the different demands of their instruments. Musician groups did differ on a tuning system perception task: violinists showed clearest biases towards the tuning system specific to their instrument, suggesting that long-term experience leads to selective perceptual benefits given a training-relevant context. However, we found only weak evidence of group differences in non-musical skills, with musicians differing marginally in one measure of sustained auditory attention, but not significantly on auditory scene analysis or multi-modal sequencing measures. Further, regression analyses showed that this sustained auditory attention metric predicted more variance in one auditory psychophysical measure than did musical expertise. Our findings suggest that specific musical expertise may yield distinct perceptual outcomes within contexts close to the area of training. Generalization of expertise to relevant cognitive domains may be less clear, particularly where the task context is non-musical. Copyright © 2014 Elsevier B.V. All rights reserved.
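One simple way to ask, in the spirit of the regression analyses mentioned above, whether a sustained-attention score explains more variance in a psychophysical threshold than musical expertise does is to compare R-squared across small OLS models. The sketch below uses placeholder data and variable names and is not the authors' analysis.

```python
import numpy as np
import statsmodels.api as sm

# Placeholder per-participant variables (hypothetical; not the study's data):
# an auditory psychophysical threshold, a sustained-auditory-attention score,
# and a musician / non-musician indicator.
rng = np.random.default_rng(1)
n = 60
attention = rng.normal(size=n)
musician = rng.integers(0, 2, size=n)
threshold = 0.5 * attention + 0.1 * musician + rng.normal(scale=1.0, size=n)

def r_squared(predictors):
    # Fit an ordinary least squares model and return its R-squared.
    X = sm.add_constant(np.column_stack(predictors))
    return sm.OLS(threshold, X).fit().rsquared

print("R^2 attention only :", round(r_squared([attention]), 3))
print("R^2 expertise only :", round(r_squared([musician]), 3))
print("R^2 both           :", round(r_squared([attention, musician]), 3))
```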
Age-equivalent top-down modulation during cross-modal selective attention.
Guerreiro, Maria J S; Anguera, Joaquin A; Mishra, Jyoti; Van Gerven, Pascal W M; Gazzaley, Adam
2014-12-01
Selective attention involves top-down modulation of sensory cortical areas, such that responses to relevant information are enhanced whereas responses to irrelevant information are suppressed. Suppression of irrelevant information, unlike enhancement of relevant information, has been shown to be deficient in aging. Although these attentional mechanisms have been well characterized within the visual modality, little is known about these mechanisms when attention is selectively allocated across sensory modalities. The present EEG study addressed this issue by testing younger and older participants in three different tasks: Participants attended to the visual modality and ignored the auditory modality, attended to the auditory modality and ignored the visual modality, or passively perceived information presented through either modality. We found overall modulation of visual and auditory processing during cross-modal selective attention in both age groups. Top-down modulation of visual processing was observed as a trend toward enhancement of visual information in the setting of auditory distraction, but no significant suppression of visual distraction when auditory information was relevant. Top-down modulation of auditory processing, on the other hand, was observed as suppression of auditory distraction when visual stimuli were relevant, but no significant enhancement of auditory information in the setting of visual distraction. In addition, greater visual enhancement was associated with better recognition of relevant visual information, and greater auditory distractor suppression was associated with a better ability to ignore auditory distraction. There were no age differences in these effects, suggesting that when relevant and irrelevant information are presented through different sensory modalities, selective attention remains intact in older age.
Modality-specificity of Selective Attention Networks
Stewart, Hannah J.; Amitay, Sygal
2015-01-01
Objective: To establish the modality specificity and generality of selective attention networks. Method: Forty-eight young adults completed a battery of four auditory and visual selective attention tests based upon the Attention Network framework: the visual and auditory Attention Network Tests (vANT, aANT), the Test of Everyday Attention (TEA), and the Test of Attention in Listening (TAiL). These provided independent measures for auditory and visual alerting, orienting, and conflict resolution networks. The measures were subjected to an exploratory factor analysis to assess underlying attention constructs. Results: The analysis yielded a four-component solution. The first component comprised a range of measures from the TEA and was labeled “general attention.” The third component was labeled “auditory attention,” as it only contained measures from the TAiL using pitch as the attended stimulus feature. The second and fourth components were labeled “spatial orienting” and “spatial conflict,” respectively—they comprised orienting and conflict-resolution measures from the vANT, aANT, and TAiL attend-location task—all tasks based upon spatial judgments (e.g., the direction of a target arrow or sound location). Conclusions: These results do not support our a priori hypothesis that attention networks are either modality specific or supramodal. Auditory attention separated into selectively attending to spatial and non-spatial features, with auditory spatial attention loading onto the same factor as visual spatial attention, suggesting that spatial attention is supramodal. However, since our study did not include a non-spatial measure of visual attention, further research will be required to ascertain whether non-spatial attention is modality-specific. PMID:26635709
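An exploratory factor analysis of a participants-by-measures matrix, as described above, can be sketched as follows; the data are placeholders, and the four-factor varimax settings are illustrative assumptions rather than the extraction and rotation choices actually used in the study.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# scores: (n_participants, n_measures) matrix of attention measures drawn from the
# vANT, aANT, TEA and TAiL (hypothetical placeholder array, 48 participants).
rng = np.random.default_rng(0)
scores = rng.normal(size=(48, 12))

X = StandardScaler().fit_transform(scores)     # standardise each measure
fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
fa.fit(X)

loadings = fa.components_.T                    # (n_measures, 4) loading pattern
print(np.round(loadings, 2))                   # inspect which measures load on which component
```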
Modalities of memory: is reading lips like hearing voices?
Maidment, David W; Macken, Bill; Jones, Dylan M
2013-12-01
Functional similarities in verbal memory performance across presentation modalities (written, heard, lipread) are often taken to point to a common underlying representational form upon which the modalities converge. We show here instead that the pattern of performance depends critically on presentation modality and different mechanisms give rise to superficially similar effects across modalities. Lipread recency is underpinned by different mechanisms to auditory recency, and while the effect of an auditory suffix on an auditory list is due to the perceptual grouping of the suffix with the list, the corresponding effect with lipread speech is due to misidentification of the lexical content of the lipread suffix. Further, while a lipread suffix does not disrupt auditory recency, an auditory suffix does disrupt recency for lipread lists. However, this effect is due to attentional capture ensuing from the presentation of an unexpected auditory event, and is evident both with verbal and nonverbal auditory suffixes. These findings add to a growing body of evidence that short-term verbal memory performance is determined by modality-specific perceptual and motor processes, rather than by the storage and manipulation of phonological representations. Copyright © 2013 Elsevier B.V. All rights reserved.
Alderson, R Matt; Kasper, Lisa J; Patros, Connor H G; Hudec, Kristen L; Tarle, Stephanie J; Lea, Sarah E
2015-01-01
The episodic buffer component of working memory was examined in children with attention deficit/hyperactivity disorder (ADHD) and typically developing peers (TD). Thirty-two children (ADHD = 16, TD = 16) completed three versions of a phonological working memory task that varied with regard to stimulus presentation modality (auditory, visual, or dual auditory and visual), as well as a visuospatial task. Children with ADHD experienced the largest magnitude working memory deficits when phonological stimuli were presented via a unimodal, auditory format. Their performance improved during visual and dual modality conditions but remained significantly below the performance of children in the TD group. In contrast, the TD group did not exhibit performance differences between the auditory- and visual-phonological conditions but recalled significantly more stimuli during the dual-phonological condition. Furthermore, relative to TD children, children with ADHD recalled disproportionately fewer phonological stimuli as set sizes increased, regardless of presentation modality. Finally, an examination of working memory components indicated that the largest magnitude between-group difference was associated with the central executive. Collectively, these findings suggest that ADHD-related working memory deficits reflect a combination of impaired central executive and phonological storage/rehearsal processes, as well as an impaired ability to benefit from bound multimodal information processed by the episodic buffer.
Stojmenova, Kristina; Sodnik, Jaka
2018-07-04
There are 3 standardized versions of the Detection Response Task (DRT): 2 using visual stimuli (remote DRT and head-mounted DRT) and one using tactile stimuli. In this article, we present a study that proposes and validates a type of auditory signal to be used as a DRT stimulus and evaluates the proposed auditory version of the method by comparing it with the standardized visual and tactile versions. This was a within-subject design study performed in a driving simulator with 24 participants. Each participant performed 8 2-min-long driving sessions in which they had to perform 3 different tasks: driving, responding to DRT stimuli, and performing a cognitive task (n-back task). Presence of additional cognitive load and type of DRT stimulus were defined as independent variables. DRT response times and hit rates, n-back task performance, and pupil size were observed as dependent variables. Significant changes in pupil size for trials with a cognitive task compared to trials without showed that cognitive load was induced properly. Each DRT version showed a significant increase in response times and a decrease in hit rates for trials with a secondary cognitive task compared to trials without. The auditory and tactile versions yielded results similar to each other and significantly larger response-time and hit-rate differences than the visual version. There were no significant differences in n-back performance between trials with and without DRT stimuli, or among trials with different DRT stimulus modalities. The results from this study show that the auditory DRT version, using the signal implementation suggested in this article, is sensitive to the effects of cognitive load on driver attention and performs significantly better than the remote visual and tactile versions for auditory-vocal cognitive (n-back) secondary tasks.
Impact of Auditory Selective Attention on Verbal Short-Term Memory and Vocabulary Development
ERIC Educational Resources Information Center
Majerus, Steve; Heiligenstein, Lucie; Gautherot, Nathalie; Poncelet, Martine; Van der Linden, Martial
2009-01-01
This study investigated the role of auditory selective attention capacities as a possible mediator of the well-established association between verbal short-term memory (STM) and vocabulary development. A total of 47 6- and 7-year-olds were administered verbal immediate serial recall and auditory attention tasks. Both task types probed processing…
Music training relates to the development of neural mechanisms of selective auditory attention.
Strait, Dana L; Slater, Jessica; O'Connell, Samantha; Kraus, Nina
2015-04-01
Selective attention decreases trial-to-trial variability in cortical auditory-evoked activity. This effect increases over the course of maturation, potentially reflecting the gradual development of selective attention and inhibitory control. Work in adults indicates that music training may alter the development of this neural response characteristic, especially over brain regions associated with executive control: in adult musicians, attention decreases variability in auditory-evoked responses recorded over prefrontal cortex to a greater extent than in nonmusicians. We aimed to determine whether this musician-associated effect emerges during childhood, when selective attention and inhibitory control are under development. We compared cortical auditory-evoked variability to attended and ignored speech streams in musicians and nonmusicians across three age groups: preschoolers, school-aged children and young adults. Results reveal that childhood music training is associated with reduced auditory-evoked response variability recorded over prefrontal cortex during selective auditory attention in school-aged child and adult musicians. Preschoolers, on the other hand, demonstrate no impact of selective attention on cortical response variability and no musician distinctions. This finding is consistent with the gradual emergence of attention during this period and may suggest no pre-existing differences in this attention-related cortical metric between children who undergo music training and those who do not. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Oron, Anna; Szymaszek, Aneta; Szelag, Elzbieta
2015-01-01
Temporal information processing (TIP) underlies many aspects of cognitive functions such as language, motor control, learning, memory and attention. Millisecond timing may be assessed by sequencing abilities, e.g. the perception of event order. It may be measured with an auditory temporal-order threshold (TOT), i.e. the minimum time gap separating two successive stimuli necessary for a subject to report their temporal order (the relation 'before-after') correctly. Neuropsychological evidence has indicated elevated TOT values (corresponding to deteriorated time perception) in different clinical groups, such as aphasic patients, dyslexic subjects or children with specific language impairment. The aim of this study was to test relationships between elevated TOT and declines in cognitive function in brain-injured patients suffering from post-stroke aphasia. We tested 30 aphasic patients (13 male, 17 female), aged between 50 and 81 years. TIP comprised assessment of TOT. Auditory comprehension was assessed with selected language tests, i.e. the Token Test, the Phoneme Discrimination Test (PDT) and the Voice-Onset-Time (VOT) Test, while two aspects of attentional resources (i.e. alertness and vigilance) were measured using the Test of Attentional Performance (TAP) battery. Significant correlations were found between elevated values of TOT and deteriorated performance on all applied language tests. Moreover, significant correlations were found between elevated TOT and alertness. Finally, positive correlations were found between particular language tests, i.e. (1) Token Test and PDT; (2) Token Test and VOT Test; and (3) PDT and VOT Test, as well as between the PDT and both attentional tasks. These results provide further clinical evidence supporting the thesis that TIP constitutes the core process incorporated in both language and attentional resources. The novel contribution of the present study is that it demonstrates, for the first time in Slavic-language users, a clear coexistence of the 'timing-auditory comprehension-attention' relationships. © 2015 Royal College of Speech and Language Therapists.
Takegata, Rika; Brattico, Elvira; Tervaniemi, Mari; Varyagina, Olga; Näätänen, Risto; Winkler, István
2005-09-01
The role of attention in conjoining features of an object has been a topic of much debate. Studies using the mismatch negativity (MMN), an index of detecting acoustic deviance, suggested that the conjunctions of auditory features are preattentively represented in the brain. These studies, however, used sequentially presented sounds and thus are not directly comparable with visual studies of feature integration. Therefore, the current study presented an array of spatially distributed sounds to determine whether the auditory features of concurrent sounds are correctly conjoined without focal attention directed to the sounds. Two types of sounds differing from each other in timbre and pitch were repeatedly presented together while subjects were engaged in a visual n-back working-memory task and ignored the sounds. Occasional reversals of the frequent pitch-timbre combinations elicited MMNs of a very similar amplitude and latency irrespective of the task load. This result suggested preattentive integration of auditory features. However, performance in a subsequent target-search task with the same stimuli indicated the occurrence of illusory conjunctions. The discrepancy between the results obtained with and without focal attention suggests that illusory conjunctions may occur during voluntary access to the preattentively encoded object representations.
Attentional Demand of a Virtual Reality-Based Reaching Task in Nondisabled Older Adults
Chen, Yi-An; Chung, Yu-Chen; Proffitt, Rachel; Wade, Eric; Winstein, Carolee
2015-01-01
Attention during exercise is known to affect performance; however, the attentional demand inherent to virtual reality (VR)-based exercise is not well understood. We used a dual-task paradigm to compare the attentional demands of VR-based and non-VR-based (conventional, real-world) exercise: 22 non-disabled older adults performed a primary reaching task to virtual and real targets in a counterbalanced block order while verbally responding to an unanticipated auditory tone in one third of the trials. The attentional demand of the primary reaching task was inferred from the voice response time (VRT) to the auditory tone. Participants' engagement level and task experience were also obtained using questionnaires. The virtual target condition was more attention demanding (significantly longer VRT) than the real target condition. Secondary analyses revealed a significant interaction between engagement level and target condition on attentional demand. For participants who were highly engaged, attentional demand was high and independent of target condition. However, for those who were less engaged, attentional demand was low and depended on target condition (i.e., virtual > real). These findings add important knowledge to the growing body of research pertaining to the development and application of technology-enhanced exercise for elders and for rehabilitation purposes. PMID:27004233
Disbergen, Niels R.; Valente, Giancarlo; Formisano, Elia; Zatorre, Robert J.
2018-01-01
Polyphonic music listening well exemplifies processes typically involved in daily auditory scene analysis situations, relying on an interactive interplay between bottom-up and top-down processes. Most studies investigating scene analysis have used elementary auditory scenes; however, real-world scene analysis is far more complex. In particular, music, contrary to most other natural auditory scenes, can be perceived by either integrating or, under attentive control, segregating sound streams, often carried by different instruments. One of the prominent bottom-up cues contributing to multi-instrument music perception is the timbre difference between instruments. In this work, we introduce and validate a novel paradigm designed to investigate, within naturalistic musical auditory scenes, attentive modulation as well as its interaction with bottom-up processes. Two psychophysical experiments are described, employing custom-composed two-voice polyphonic music pieces within a framework implementing a behavioral performance metric to validate listener instructions requiring either integration or segregation of scene elements. In Experiment 1, the listeners' locus of attention was switched between individual instruments or the aggregate (i.e., both instruments together), via a task requiring the detection of temporal modulations (i.e., triplets) incorporated within or across instruments. Subjects reported post-stimulus whether triplets had been present in the to-be-attended instrument(s). Experiment 2 introduced the bottom-up manipulation by adding a three-level morphing of instrument timbre distance to the attentional framework. The task was designed to be used within neuroimaging paradigms; Experiment 2 was additionally validated behaviorally in the functional magnetic resonance imaging (fMRI) environment. Experiment 1 subjects (N = 29, non-musicians) completed the task at high levels of accuracy, showing no differences between any of the experimental conditions. Nineteen listeners also participated in Experiment 2, showing a main effect of instrument timbre distance, even though within attention-condition timbre-distance contrasts did not demonstrate any timbre effect. Correlation of overall scores with morph-distance effects, computed by subtracting the largest from the smallest timbre distance scores, showed an influence of general task difficulty on the timbre distance effect. Comparison of laboratory and fMRI data showed scanner noise had no adverse effect on task performance. These experimental paradigms enable the study of both bottom-up and top-down contributions to auditory stream segregation and integration within psychophysical and neuroimaging experiments. PMID:29563861
An fMRI study of multimodal selective attention in schizophrenia
Mayer, Andrew R.; Hanlon, Faith M.; Teshiba, Terri M.; Klimaj, Stefan D.; Ling, Josef M.; Dodd, Andrew B.; Calhoun, Vince D.; Bustillo, Juan R.; Toulouse, Trent
2015-01-01
Background: Studies have produced conflicting evidence regarding whether cognitive control deficits in patients with schizophrenia result from dysfunction within the cognitive control network (CCN; top-down) and/or unisensory cortex (bottom-up). Aims: To investigate CCN and sensory cortex involvement during multisensory cognitive control in patients with schizophrenia. Method: Patients with schizophrenia and healthy controls underwent functional magnetic resonance imaging while performing a multisensory Stroop task involving auditory and visual distracters. Results: Patients with schizophrenia exhibited an overall pattern of response slowing, and these behavioural deficits were associated with a pattern of patient hyperactivation within auditory, sensorimotor and posterior parietal cortex. In contrast, there were no group differences in functional activation within prefrontal nodes of the CCN, with small effect sizes observed (incongruent–congruent trials). Patients with schizophrenia also failed to upregulate auditory cortex with concomitant increased attentional demands. Conclusions: Results suggest a prominent role for dysfunction within auditory, sensorimotor and parietal areas relative to prefrontal CCN nodes during multisensory cognitive control. PMID:26382953
Prescriptive Teaching from the DTLA.
ERIC Educational Resources Information Center
Banas, Norma; Wills, I. H.
1979-01-01
The article (Part 2 of a series) discusses the Auditory Attention Span for Unrelated Words and the Visual Attention Span for Objects subtests of the Detroit Tests of Learning Aptitude. Skills measured and related factors influencing performance are among aspects considered. Suggestions for remediating deficits and capitalizing on strengths are…
Seibold, Julia C; Nolden, Sophie; Oberem, Josefa; Fels, Janina; Koch, Iring
2018-06-01
In an auditory attention-switching paradigm, participants heard two simultaneously spoken number-words, each presented to one ear, and decided whether the target number was smaller or larger than 5 by pressing a left or right key. An instructional cue in each trial indicated which feature had to be used to identify the target number (e.g., female voice). Auditory attention-switch costs were found when this feature changed compared to when it repeated in two consecutive trials. Earlier studies employing this paradigm showed mixed results when they examined whether such cued auditory attention-switches can be prepared actively during the cue-stimulus interval. This study systematically assessed which preconditions are necessary for the advance preparation of auditory attention-switches. Three experiments were conducted that controlled for cue-repetition benefits, modality switches between cue and stimuli, as well as for predictability of the switch-sequence. Only in the third experiment, in which predictability for an attention-switch was maximal due to a pre-instructed switch-sequence and predictable stimulus onsets, active switch-specific preparation was found. These results suggest that the cognitive system can prepare auditory attention-switches, and this preparation seems to be triggered primarily by the memorised switching-sequence and valid expectations about the time of target onset.
Bottom-up influences of voice continuity in focusing selective auditory attention
Bressler, Scott; Masud, Salwa; Bharadwaj, Hari; Shinn-Cunningham, Barbara
2015-01-01
Selective auditory attention causes a relative enhancement of the neural representation of important information and suppression of the neural representation of distracting sound, which enables a listener to analyze and interpret information of interest. Some studies suggest that in both vision and in audition, the “unit” on which attention operates is an object: an estimate of the information coming from a particular external source out in the world. In this view, which object ends up in the attentional foreground depends on the interplay of top-down, volitional attention and stimulus-driven, involuntary attention. Here, we test the idea that auditory attention is object based by exploring whether continuity of a non-spatial feature (talker identity, a feature that helps acoustic elements bind into one perceptual object) also influences selective attention performance. In Experiment 1, we show that perceptual continuity of target talker voice helps listeners report a sequence of spoken target digits embedded in competing reversed digits spoken by different talkers. In Experiment 2, we provide evidence that this benefit of voice continuity is obligatory and automatic, as if voice continuity biases listeners by making it easier to focus on a subsequent target digit when it is perceptually linked to what was already in the attentional foreground. Our results support the idea that feature continuity enhances streaming automatically, thereby influencing the dynamic processes that allow listeners to successfully attend to objects through time in the cacophony that assails our ears in many everyday settings. PMID:24633644
Wetzel, Nicole; Widmann, Andreas; Berti, Stefan; Schröger, Erich
2006-10-01
This study investigated auditory involuntary and voluntary attention in children aged 6-8 years, children aged 10-12 years, and young adults. The strength of distracting stimuli (20% and 5% pitch changes) and the amount of attention allocated were varied. In an auditory distraction paradigm, event-related potentials (ERPs) and behavioral data were measured from subjects either performing a sound duration discrimination task or watching a silent video. Pitch-changed sounds caused prolonged reaction times and decreased hit rates in all age groups. Larger distractors (20%) caused stronger distraction in children, but not in adults. The amplitudes of mismatch negativity (MMN), P3a, and reorienting negativity (RON) were modulated by age and by voluntary attention. P3a was additionally affected by distractor strength. Maturational changes were also observed in the amplitudes of P1 (decreasing with age) and N1 (increasing with age). P2 modulation by voluntary attention was opposite in young children and adults. Results suggest quantitative and qualitative changes in auditory voluntary and involuntary attention and distraction during development. The processing steps involved in distraction (pre-attentive change detection, attention switch, reorienting) are functional in children aged 6-8 years but show characteristic differences from those of young adults. In general, distractibility as indicated by behavioral and ERP measures decreases from childhood to adulthood. Behavioral and ERP markers for different processing stages involved in voluntary and involuntary attention reveal characteristic developmental changes from childhood to young adulthood.
Thalamic and parietal brain morphology predicts auditory category learning.
Scharinger, Mathias; Henry, Molly J; Erb, Julia; Meyer, Lars; Obleser, Jonas
2014-01-01
Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties. © 2013 Published by Elsevier Ltd.
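The listening-strategy analysis described above rests on relating each listener's categorization responses to the spectral and duration cues, for example with a logistic regression. A minimal sketch of that kind of cue-weighting fit is given below; the function and variable names are assumptions, and the study's actual regression specification may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cue_weights(spectral_cue, duration_cue, responses):
    """Per-listener logistic regression of category choices on the two cues (sketch).

    spectral_cue, duration_cue : 1-D arrays of trial-wise cue values (standardised)
    responses                  : 1-D array of binary category choices (0/1)
    A larger |duration| relative to |spectral| weight after spectral degradation
    would indicate the kind of strategy switch described above.
    (Illustrative only; not the study's exact model.)
    """
    X = np.column_stack([spectral_cue, duration_cue])
    model = LogisticRegression().fit(X, responses)
    spectral_w, duration_w = model.coef_[0]
    return spectral_w, duration_w
```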
Krizman, Jennifer; Skoe, Erika; Marian, Viorica; Kraus, Nina
2014-01-01
Auditory processing is presumed to be influenced by cognitive processes – including attentional control – in a top-down manner. In bilinguals, activation of both languages during daily communication hones inhibitory skills, which subsequently bolster attentional control. We hypothesize that the heightened attentional demands of bilingual communication strengthens connections between cognitive (i.e., attentional control) and auditory processing, leading to greater across-trial consistency in the auditory evoked response (i.e., neural consistency) in bilinguals. To assess this, we collected passively-elicited auditory evoked responses to the syllable [da] and separately obtained measures of attentional control and language ability in adolescent Spanish-English bilinguals and English monolinguals. Bilinguals demonstrated enhanced attentional control and more consistent brainstem and cortical responses. In bilinguals, but not monolinguals, brainstem consistency tracked with language proficiency and attentional control. We interpret these enhancements in neural consistency as the outcome of strengthened attentional control that emerged from experience communicating in two languages. PMID:24413593
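Across-trial response consistency of the sort reported above is often quantified by correlating sub-averages of randomly split trials. The sketch below shows one such split-half approach under assumed array shapes; it is illustrative and not necessarily the exact metric used in the study.

```python
import numpy as np

def response_consistency(trials, n_splits=300, rng=None):
    """Across-trial consistency of an evoked response (illustrative sketch).

    trials : array (n_trials, n_samples) of single-trial responses to [da]
    Repeatedly split the trials into random halves, average each half, and
    correlate the two sub-averages; the mean correlation indexes neural
    consistency (higher = more consistent). The study's procedure may differ.
    """
    rng = np.random.default_rng(rng)
    n_trials = trials.shape[0]
    rs = []
    for _ in range(n_splits):
        order = rng.permutation(n_trials)
        half_a = trials[order[: n_trials // 2]].mean(axis=0)
        half_b = trials[order[n_trials // 2 :]].mean(axis=0)
        rs.append(np.corrcoef(half_a, half_b)[0, 1])
    return float(np.mean(rs))
```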
Auditory Attentional Control and Selection during Cocktail Party Listening
Hill, Kevin T.
2010-01-01
In realistic auditory environments, people rely on both attentional control and attentional selection to extract intelligible signals from a cluttered background. We used functional magnetic resonance imaging to examine auditory attention to natural speech under such high processing-load conditions. Participants attended to a single talker in a group of 3, identified by the target talker's pitch or spatial location. A catch-trial design allowed us to distinguish activity due to top-down control of attention versus attentional selection of bottom-up information in both the spatial and spectral (pitch) feature domains. For attentional control, we found a left-dominant fronto-parietal network with a bias toward spatial processing in dorsal precentral sulcus and superior parietal lobule, and a bias toward pitch in inferior frontal gyrus. During selection of the talker, attention modulated activity in left intraparietal sulcus when using talker location and in bilateral but right-dominant superior temporal sulcus when using talker pitch. We argue that these networks represent the sources and targets of selective attention in rich auditory environments. PMID:19574393
Reaction time and accuracy in individuals with aphasia during auditory vigilance tasks.
Laures, Jacqueline S
2005-11-01
Research indicates that attentional deficits exist in aphasic individuals. However, relatively little is known about auditory vigilance performance in individuals with aphasia. The current study explores reaction time (RT) and accuracy in 10 aphasic participants and 10 non-brain-damaged controls during linguistic and nonlinguistic auditory vigilance tasks. Findings indicate that the aphasic group was less accurate during both tasks than the control group, but was not slower in its accurate responses. Further examination of the data revealed variability in the aphasic participants' RTs that contributed to the lower accuracy scores.
Auditory attention strategy depends on target linguistic properties and spatial configuration
McCloy, Daniel R.; Lee, Adrian K. C.
2015-01-01
Whether crossing a busy intersection or attending a large dinner party, listeners sometimes need to attend to multiple spatially distributed sound sources or streams concurrently. How they achieve this is not clear—some studies suggest that listeners cannot truly simultaneously attend to separate streams, but instead combine attention switching with short-term memory to achieve something resembling divided attention. This paper presents two oddball detection experiments designed to investigate whether directing attention to phonetic versus semantic properties of the attended speech impacts listeners' ability to divide their auditory attention across spatial locations. Each experiment uses four spatially distinct streams of monosyllabic words, variation in cue type (providing phonetic or semantic information), and requiring attention to one or two locations. A rapid button-press response paradigm is employed to minimize the role of short-term memory in performing the task. Results show that differences in the spatial configuration of attended and unattended streams interact with linguistic properties of the speech streams to impact performance. Additionally, listeners may leverage phonetic information to make oddball detection judgments even when oddballs are semantically defined. Both of these effects appear to be mediated by the overall complexity of the acoustic scene. PMID:26233011
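Oddball detection performance in a button-press paradigm like the one above is typically summarized with hit and false-alarm rates or a sensitivity index. A minimal d-prime sketch follows; the log-linear correction and scoring rule are assumptions, not the paper's exact analysis.

```python
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') for oddball detection (illustrative sketch).

    Counts come from one listener and condition. A log-linear correction is
    applied so that perfect or empty cells do not produce infinite z-scores;
    whether and how the study corrected extreme rates is not specified here.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 18 hits, 2 misses, 3 false alarms, 37 correct rejections.
print(round(dprime(18, 2, 3, 37), 2))
```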
Attention and Vigilance in Children with Down Syndrome
ERIC Educational Resources Information Center
Trezise, Kim L.; Gray, Kylie M.; Sheppard, Dianne M.
2008-01-01
Background: Down syndrome (DS) has been the focus of much cognitive and developmental research; however, there is a gap in knowledge regarding sustained attention, particularly across different sensory domains. This research examined the hypothesis that children with DS would demonstrate superior visual rather than auditory performance on a…
Li, Chunlin; Chen, Kewei; Han, Hongbin; Chui, Dehua; Wu, Jinglong
2012-01-01
Top-down attention to spatial and temporal cues has been thoroughly studied in the visual domain. However, because the neural systems that are important for auditory top-down temporal attention (i.e., attention based on time interval cues) remain undefined, the differences in brain activity between attention directed to auditory spatial locations and attention directed to time intervals are unclear. Using functional magnetic resonance imaging (fMRI), we measured the activations elicited by cue-target paradigms in which visual cues directed attention to an auditory target within a spatial or temporal domain. Imaging results showed that the dorsal frontoparietal network (dFPN), which consists of the bilateral intraparietal sulcus and the frontal eye field (FEF), responded to spatial orienting of attention, but activity was absent in the bilateral FEF during temporal orienting of attention. Furthermore, the fMRI results indicated that activity in the right ventrolateral prefrontal cortex (VLPFC) was significantly stronger during spatial orienting of attention than during temporal orienting of attention, while the dorsolateral prefrontal cortex (DLPFC) showed no significant differences between the two processes. We conclude that the bilateral dFPN and the right VLPFC contribute to auditory spatial orienting of attention. Furthermore, specific activations related to temporal cognition were confirmed within the superior occipital gyrus, tegmentum, motor area, thalamus and putamen. PMID:23166800
Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.
Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo
2013-02-16
We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common to all three tasks within each modality, or to interactions between the processing of task-relevant features and of varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might reflect suppression of the processing of irrelevant speech that would presumably have distracted from the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.
Visser, Eelke; Zwiers, Marcel P; Kan, Cornelis C; Hoekstra, Liesbeth; van Opstal, A John; Buitelaar, Jan K
2013-11-01
Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs.
Harrison, Neil R; Woodhouse, Rob
2016-05-01
Previous research has demonstrated that threatening pictures, compared to neutral ones, can bias attention towards non-emotional auditory targets. Here we investigated which subcomponents of attention contributed to the influence of emotional visual stimuli on auditory spatial attention. Participants indicated the location of an auditory target after brief (250 ms) presentation of a spatially non-predictive peripheral visual cue. Responses to targets were faster at the location of the preceding visual cue than at the opposite location (cue validity effect). The cue validity effect was larger for targets following pleasant and unpleasant cues than for those following neutral cues, for right-sided targets. For unpleasant cues, the crossmodal cue validity effect was driven by delayed attentional disengagement, and for pleasant cues, it was driven by enhanced engagement. We conclude that both pleasant and unpleasant visual cues influence the distribution of attention across modalities and that the associated attentional mechanisms depend on the valence of the visual cue.
Auditory memory function in expert chess players
Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona
2015-01-01
Background: Chess is a game that involves many aspects of high level cognition such as memory, attention, focus and problem solving. Long term practice of chess can improve cognition performances and behavioral skills. Auditory memory, as a kind of memory, can be influenced by strengthening processes following long term chess playing like other behavioral skills because of common processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of dichotic auditory-verbal memory test. Methods: The Persian version of dichotic auditory-verbal memory test was performed for 30 expert chess players aged 20-35 years and 30 non chess players who were matched by different conditions; the participants in both groups were randomly selected. The performance of the two groups was compared by independent samples t-test using SPSS version 21. Results: The mean score of dichotic auditory-verbal memory test between the two groups, expert chess players and non-chess players, revealed a significant difference (p≤ 0.001). The difference between the ears scores for expert chess players (p= 0.023) and non-chess players (p= 0.013) was significant. Gender had no effect on the test results. Conclusion: Auditory memory function in expert chess players was significantly better compared to non-chess players. It seems that increased auditory memory function is related to strengthening cognitive performances due to playing chess for a long time. PMID:26793666
Nikouei Mahani, Mohammad-Ali; Haghgoo, Hojjat Allah; Azizi, Solmaz; Nili Ahmadabadi, Majid
2016-01-01
In our daily life, we continually exploit already learned multisensory associations and form new ones when facing novel situations. Improving associative learning results in higher cognitive capabilities. We experimentally and computationally studied the learning performance of healthy subjects in a visual-auditory sensory associative learning task across active learning, attention-cueing learning, and passive learning modes. According to our results, the learning mode had no significant effect on learning the association of congruent pairs. In addition, subjects' performance in learning congruent samples was not correlated with their vigilance score. Nevertheless, vigilance score was significantly correlated with learning performance for the non-congruent pairs. Moreover, in the last block of the passive learning mode, subjects made significantly more mistakes by judging non-congruent pairs to be associated, and consciously reported lower confidence. These results indicate that attention and activity equally enhanced visual-auditory associative learning for non-congruent pairs, while the false alarm rate in the passive learning mode did not decrease after the second block. We investigated the cause of the higher false alarm rate in the passive learning mode by using a computational model composed of a reinforcement learning module and a memory-decay module. The results suggest that a higher rate of memory decay is the source of the additional mistakes and the lower reported confidence for non-congruent pairs in the passive learning mode.
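A toy version of the kind of model described above, pairing a delta-rule (reinforcement-learning) update with trial-wise memory decay, is sketched below; the parameter values, update rule and decision threshold are illustrative assumptions, not the authors' model specification. A higher decay rate makes learned non-congruent rejections fade, producing more false alarms, in line with the interpretation above.

```python
def simulate_association(trials, alpha=0.2, decay=0.05, threshold=0.5):
    """Toy reinforcement-learning + memory-decay account of associative learning.

    trials : list of (pair_id, feedback) tuples, feedback = 1 if the visual-auditory
             pair was presented as associated on that trial, else 0.
    alpha  : learning rate; decay : per-trial forgetting rate (assumed values).
    Returns the trial-wise decision (1 = "associated") for each presented pair.
    """
    strength = {}
    decisions = []
    for pair_id, feedback in trials:
        # All stored associations fade a little on every trial (memory decay).
        for k in strength:
            strength[k] *= (1.0 - decay)
        v = strength.get(pair_id, 0.0)
        decisions.append(1 if v > threshold else 0)      # respond before updating
        strength[pair_id] = v + alpha * (feedback - v)   # delta-rule update
    return decisions

# Example: a congruent pair reinforced on every trial vs. a non-congruent pair never reinforced.
print(simulate_association([("AB", 1), ("CD", 0), ("AB", 1), ("CD", 0), ("AB", 1)]))
```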
Useful field of view predicts driving in the presence of distracters.
Wood, Joanne M; Chaparro, Alex; Lacherez, Philippe; Hickson, Louise
2012-04-01
The Useful Field of View (UFOV) test has been shown to be highly effective in predicting crash risk among older adults. An important question which we examined in this study is whether this association is due to the ability of the UFOV to predict difficulties in attention-demanding driving situations that involve either visual or auditory distracters. Participants included 92 community-living adults (mean age 73.6 ± 5.4 years; range 65-88 years) who completed all three subtests of the UFOV involving assessment of visual processing speed (subtest 1), divided attention (subtest 2), and selective attention (subtest 3); driving safety risk was also classified using the UFOV scoring system. Driving performance was assessed separately on a closed-road circuit while driving under three conditions: no distracters, visual distracters, and auditory distracters. Driving outcome measures included road sign recognition, hazard detection, gap perception, time to complete the course, and performance on the distracter tasks. Those rated as safe on the UFOV (safety rating categories 1 and 2), as well as those responding faster than the recommended cut-off on the selective attention subtest (350 msec), performed significantly better in terms of overall driving performance and also experienced less interference from distracters. Of the three UFOV subtests, the selective attention subtest best predicted overall driving performance in the presence of distracters. Older adults who were rated as higher risk on the UFOV, particularly on the selective attention subtest, demonstrated poorest driving performance in the presence of distracters. This finding suggests that the selective attention subtest of the UFOV may be differentially more effective in predicting driving difficulties in situations of divided attention which are commonly associated with crashes.
Neurocognitive screening of lead-exposed Andean adolescents and young adults.
Counter, S Allen; Buchanan, Leo H; Ortega, Fernando
2009-01-01
This study was designed to assess the utility of two psychometric tests with putative minimal cultural bias for use in field screening of lead (Pb)-exposed Ecuadorian Andean workers. Specifically, the study evaluated the effectiveness in Pb-exposed adolescents and young adults of a nonverbal reasoning test standardized for younger children, and compared the findings with performance on a test of auditory memory. The Raven Coloured Progressive Matrices (RCPM) was used as a test of nonverbal intelligence, and the Digit Span subtest of the Wechsler IV intelligence scale was used to assess auditory memory/attention. The participants were 35 chronically Pb-exposed Pb-glazing workers, aged 12-21 yr. Blood lead (PbB) levels for the study group ranged from 3 to 86 microg/dl, with 65.7% of the group at and above 10 microg/dl. Zinc protoporphyrin heme ratios (ZPP/heme) ranged from 38 to 380 micromol/mol, with 57.1% of the participants showing abnormal ZPP/heme (>69 micromol/mol). ZPP/heme was significantly correlated with PbB levels, suggesting chronic Pb exposure. Performance on the RCPM was less than average on the U.S., British, and Puerto Rican norms, but average on the Peruvian norms. Significant inverse associations between PbB/ZPP concentrations and RCPM standard scores using the U.S., Puerto Rican, and Peruvian norms were observed, indicating decreasing RCPM test performance with increasing PbB and ZPP levels. RCPM scores were significantly correlated with performance on the Digit Span test for auditory memory. Mean Digit Span scale score was less than average, suggesting auditory memory/attention deficits. In conclusion, both the RCPM and Digit Span tests were found to be effective instruments for field screening of visual-spatial reasoning and auditory memory abilities, respectively, in Pb-exposed Andean adolescents and young adults.
Tuning In to Sound: Frequency-Selective Attentional Filter in Human Primary Auditory Cortex
Da Costa, Sandra; van der Zwaag, Wietske; Miller, Lee M.; Clarke, Stephanie
2013-01-01
Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency-content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency-tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low (250 Hz)- and high (4000 Hz)-frequency streams of tones presented at the same time (dual-stream) and were instructed to focus attention onto one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high, and when attention switched the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand. PMID:23365225
Attentional Gain Control of Ongoing Cortical Speech Representations in a “Cocktail Party”
Kerlin, Jess R.; Shahin, Antoine J.; Miller, Lee M.
2010-01-01
Normal listeners possess the remarkable perceptual ability to select a single speech stream among many competing talkers. However, few studies of selective attention have addressed the unique nature of speech as a temporally extended and complex auditory object. We hypothesized that sustained selective attention to speech in a multi-talker environment would act as gain control on the early auditory cortical representations of speech. Using high-density electroencephalography and a template-matching analysis method, we found selective gain to the continuous speech content of an attended talker, greatest at a frequency of 4–8 Hz, in auditory cortex. In addition, the difference in alpha power (8–12 Hz) at parietal sites across hemispheres indicated the direction of auditory attention to speech, as has been previously found in visual tasks. The strength of this hemispheric alpha lateralization, in turn, predicted an individual’s attentional gain of the cortical speech signal. These results support a model of spatial speech stream segregation, mediated by a supramodal attention mechanism, enabling selection of the attended representation in auditory cortex. PMID:20071526
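A rough illustration of the hemispheric alpha measure described above: the following Python sketch computes a normalized right-minus-left alpha-power (8-12 Hz) difference from two parietal EEG channels. The channel pairing, band limits, and index definition are assumptions rather than the study's exact pipeline.

import numpy as np
from scipy.signal import welch

def alpha_power(x, fs, band=(8.0, 12.0)):
    """Mean power spectral density of one channel within the alpha band."""
    freqs, psd = welch(x, fs=fs, nperseg=int(2 * fs))
    return psd[(freqs >= band[0]) & (freqs <= band[1])].mean()

def alpha_lateralization(left_parietal, right_parietal, fs=250.0):
    """Normalized right-minus-left alpha power; the sign indexes the attended side."""
    p_l = alpha_power(left_parietal, fs)
    p_r = alpha_power(right_parietal, fs)
    return (p_r - p_l) / (p_r + p_l)

# Synthetic example: stronger 10-Hz activity over the left parietal channel
fs = 250.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
left = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
right = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
print(alpha_lateralization(left, right, fs))   # negative: more alpha on the left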
Valéry, Benoît; Scannella, Sébastien; Peysakhovich, Vsevolod; Barone, Pascal; Causse, Mickaël
2017-07-01
In the aeronautics field, some authors have suggested that an aircraft's attitude sonification could be used by pilots to cope with spatial disorientation situations. Such a system is currently used by blind pilots to control the attitude of their aircraft. However, given the suspected higher auditory attentional capacities of blind people, the possibility for sighted individuals to use this system remains an open question. For example, its introduction may overload the auditory channel, which may in turn alter the responsiveness of pilots to infrequent but critical auditory warnings. In this study, two groups of pilots (blind versus sighted) performed a simulated flight experiment consisting of successive aircraft maneuvers on the sole basis of an aircraft sonification. Maneuver difficulty was varied while we assessed flight performance along with subjective and electroencephalographic (EEG) measures of workload. The results showed that both groups of participants reached target attitudes with good accuracy. However, more complex maneuvers increased subjective workload and impaired brain responsiveness to unexpected auditory stimuli, as demonstrated by lower N1 and P3 amplitudes. Although the EEG signal showed a clear reorganization of the brain in the blind participants (higher alpha power), brain responsiveness to unexpected auditory stimuli was not significantly different between the two groups. The results suggest that an auditory display might provide useful additional information to spatially disoriented pilots with normal vision. However, its use should be restricted to critical situations and simple recovery or guidance maneuvers. Copyright © 2017 Elsevier Ltd. All rights reserved.
Cross-modal perceptual load: the impact of modality and individual differences.
Sandhu, Rajwant; Dyson, Benjamin James
2016-05-01
Visual distractor processing tends to be more pronounced when the perceptual load (PL) of a task is low compared to when it is high [perceptual load theory (PLT); Lavie in J Exp Psychol Hum Percept Perform 21(3):451-468, 1995]. While PLT is well established in the visual domain, application to cross-modal processing has produced mixed results, and the current study was designed in an attempt to improve previous methodologies. First, we assessed PLT using response competition, a typical metric from the uni-modal domain. Second, we looked at the impact of auditory load on visual distractors, and of visual load on auditory distractors, within the same individual. Third, we compared individual uni- and cross-modal selective attention abilities, by correlating performance with the visual Attentional Network Test (ANT). Fourth, we obtained a measure of the relative processing efficiency between vision and audition, to investigate whether processing ease influences the extent of distractor processing. Although distractor processing was evident during both attend auditory and attend visual conditions, we found that PL did not modulate processing of either visual or auditory distractors. We also found support for a correlation between the uni-modal (visual) ANT and our cross-modal task but only when the distractors were visual. Finally, although auditory processing was more impacted by visual distractors, our measure of processing efficiency only accounted for this asymmetry in the auditory high-load condition. The results are discussed with respect to the continued debate regarding the shared or separate nature of processing resources across modalities.
Emotional context enhances auditory novelty processing in superior temporal gyrus.
Domínguez-Borràs, Judith; Trautmann, Sina-Alexa; Erhard, Peter; Fehr, Thorsten; Herrmann, Manfred; Escera, Carles
2009-07-01
Visualizing emotionally loaded pictures intensifies peripheral reflexes toward sudden auditory stimuli, suggesting that the emotional context may potentiate responses elicited by novel events in the acoustic environment. However, psychophysiological results have reported that attentional resources available to sounds become depleted, as attention allocation to emotional pictures increases. These findings have raised the challenging question of whether an emotional context actually enhances or attenuates auditory novelty processing at a central level in the brain. To solve this issue, we used functional magnetic resonance imaging to first identify brain activations induced by novel sounds (NOV) when participants made a color decision on visual stimuli containing both negative (NEG) and neutral (NEU) facial expressions. We then measured modulation of these auditory responses by the emotional load of the task. Contrary to what was assumed, activation induced by NOV in superior temporal gyrus (STG) was enhanced when subjects responded to faces with a NEG emotional expression compared with NEU ones. Accordingly, NOV yielded stronger behavioral disruption on subjects' performance in the NEG context. These results demonstrate that the emotional context modulates the excitability of auditory and possibly multimodal novelty cerebral regions, enhancing acoustic novelty processing in a potentially harming environment.
Effects of smoking marijuana on focal attention and brain blood flow.
O'Leary, Daniel S; Block, Robert I; Koeppel, Julie A; Schultz, Susan K; Magnotta, Vincent A; Ponto, Laura Boles; Watkins, G Leonard; Hichwa, Richard D
2007-04-01
Using an attention task to control cognitive state, we previously found that smoking marijuana changes regional cerebral blood flow (rCBF). The present study measured rCBF during tasks requiring attention to left and right ears in different conditions. Twelve occasional marijuana users (mean age 23.5 years) were imaged with PET using [15O]water after smoking marijuana or placebo cigarettes as they performed a reaction time (RT) baseline task, and a dichotic listening task with attend-right- and attend-left-ear instructions. Smoking marijuana, but not placebo, resulted in increased normalized rCBF in orbital frontal cortex, anterior cingulate, temporal pole, insula, and cerebellum. RCBF was reduced in visual and auditory cortices. These changes occurred in all three tasks and replicated our earlier studies. They appear to reflect the direct effects of marijuana on the brain. Smoking marijuana lowered rCBF in auditory cortices compared to placebo but did not alter the normal pattern of attention-related rCBF asymmetry (i.e., greater rCBF in the temporal lobe contralateral to the direction of attention) that was also observed after placebo. These data indicate that marijuana has dramatic direct effects on rCBF, but causes relatively little change in the normal pattern of task-related rCBF on this auditory focused attention task. Copyright 2007 John Wiley & Sons, Ltd.
Interaction of attentional and motor control processes in handwriting.
Brown, T L; Donnenwirth, E E
1990-01-01
The interaction between attentional capacity, motor control processes, and strategic adaptations to changing task demands was investigated in handwriting, a continuous (rather than discrete) skilled performance. Twenty-four subjects completed 12 two-minute handwriting samples under instructions stressing speeded handwriting, normal handwriting, or highly legible handwriting. For half of the writing samples, a concurrent auditory monitoring task was imposed. Subjects copied either familiar (English) or unfamiliar (Latin) passages. Writing speed, legibility ratings, errors in writing and in the secondary auditory task, and a derived measure of the average number of characters held in short-term memory during each sample ("planning unit size") were the dependent variables. The results indicated that the ability to adapt to instructions stressing speed or legibility was substantially constrained by the concurrent listening task and by text familiarity. Interactions between instructions, task concurrence, and text familiarity in the legibility ratings, combined with further analyses of planning unit size, indicated that information throughput from temporary storage mechanisms to motor processes mediated the loss of flexibility effect. Overall, the results suggest that strategic adaptations of a skilled performance to changing task circumstances are sensitive to concurrent attentional demands and that departures from "normal" or "modal" performance require attention.
Auditory and Visual Capture during Focused Visual Attention
ERIC Educational Resources Information Center
Koelewijn, Thomas; Bronkhorst, Adelbert; Theeuwes, Jan
2009-01-01
It is well known that auditory and visual onsets presented at a particular location can capture a person's visual attention. However, the question of whether such attentional capture disappears when attention is focused endogenously beforehand has not yet been answered. Moreover, previous studies have not differentiated between capture by onsets…
Neuronal correlates of visual and auditory alertness in the DMT and ketamine model of psychosis.
Daumann, J; Wagner, D; Heekeren, K; Neukirch, A; Thiel, C M; Gouzoulis-Mayfrank, E
2010-10-01
Deficits in attentional functions belong to the core cognitive symptoms in schizophrenic patients. Alertness is a nonselective attention component that refers to a state of general readiness that improves stimulus processing and response initiation. The main goal of the present study was to investigate cerebral correlates of alertness in the human 5HT(2A) agonist and N-methyl-D-aspartic acid (NMDA) antagonist model of psychosis. Fourteen healthy volunteers participated in a randomized double-blind, cross-over event-related functional magnetic resonance imaging (fMRI) study with dimethyltryptamine (DMT) and S-ketamine. A target detection task with cued and uncued trials in both the visual and the auditory modality was used. Administration of DMT led to decreased blood oxygenation level-dependent response during performance of an alertness task, particularly in extrastriate regions during visual alerting and in temporal regions during auditory alerting. In general, the effects for the visual modality were more pronounced. In contrast, administration of S-ketamine led to increased cortical activation in the left insula and precentral gyrus in the auditory modality. The results of the present study might deliver more insight into potential differences and overlapping pathomechanisms in schizophrenia. These conclusions must remain preliminary and should be explored by further fMRI studies with schizophrenic patients performing modality-specific alertness tasks.
Cortical response variability as a developmental index of selective auditory attention
Strait, Dana L.; Slater, Jessica; Abecassis, Victor; Kraus, Nina
2014-01-01
Attention induces synchronicity in neuronal firing for the encoding of a given stimulus at the exclusion of others. Recently, we reported decreased variability in scalp-recorded cortical evoked potentials to attended compared with ignored speech in adults. Here we aimed to determine the developmental time course for this neural index of auditory attention. We compared cortical auditory-evoked variability with attention across three age groups: preschoolers, school-aged children and young adults. Results reveal an increased impact of selective auditory attention on cortical response variability with development. Although all three age groups have equivalent response variability to attended speech, only school-aged children and adults have a distinction between attend and ignore conditions. Preschoolers, on the other hand, demonstrate no impact of attention on cortical responses, which we argue reflects the gradual emergence of attention within this age range. Outcomes are interpreted in the context of the behavioral relevance of cortical response variability and its potential to serve as a developmental index of cognitive skill. PMID:24267508
Cross-Modal Attention-Switching Is Impaired in Autism Spectrum Disorders
ERIC Educational Resources Information Center
Reed, Phil; McCarthy, Julia
2012-01-01
This investigation aimed to determine if children with ASD are impaired in their ability to switch attention between different tasks, and whether performance is further impaired when required to switch across two separate modalities (visual and auditory). Eighteen children with ASD (9-13 years old) were compared with 18 typically-developing…
Söderlund, Göran B W; Björk, Christer; Gustafsson, Peik
2016-01-01
Recent research has shown that acoustic white noise (80 dB) can improve task performance in people with attention deficits and/or Attention Deficit Hyperactivity Disorder (ADHD). This is attributed to the phenomenon of stochastic resonance, in which a certain amount of noise can improve performance in a brain that is not working at its optimum. We compare here the effect of noise exposure with the effect of stimulant medication on cognitive task performance in ADHD. The aim of the present study was to compare the effects of auditory noise exposure and stimulant medication on a cognitive test battery in children with ADHD. A group of typically developed children (TDC) took the same tests as a comparison. Twenty children with ADHD of combined or inattentive subtypes and twenty TDC matched for age and gender performed three different tests (word recall, spanboard and n-back task) during exposure to white noise (80 dB) and in a silent condition. The ADHD children were tested with and without central stimulant medication. In the spanboard and word recall tasks, but not in the 2-back task, white noise exposure led to significant improvements for both non-medicated and medicated ADHD children. No significant effects of medication were found on any of the three tasks. This pilot study shows that exposure to white noise resulted in a task improvement that was larger than that obtained with stimulant medication, thus opening up the possibility of using auditory noise as an alternative, non-pharmacological treatment of cognitive ADHD symptoms.
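The stochastic-resonance account invoked above can be illustrated with a toy threshold detector: a subthreshold signal is transmitted best at an intermediate noise level, while too little or too much noise degrades the output. A hedged Python sketch with arbitrary parameter values, offered only to illustrate the phenomenon, not the study's tasks:

import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 1.0, 1000)
signal = 0.4 * np.sin(2 * np.pi * 5 * t)   # subthreshold: never crosses 0.5 on its own
threshold = 0.5

def output_fidelity(noise_sd, n_trials=100):
    """Mean correlation between the thresholded (all-or-none) output and the
    underlying signal; it peaks at an intermediate noise level."""
    corrs = []
    for _ in range(n_trials):
        noisy = signal + rng.normal(0.0, noise_sd, t.size)
        spikes = (noisy > threshold).astype(float)
        corrs.append(0.0 if spikes.std() == 0 else np.corrcoef(spikes, signal)[0, 1])
    return float(np.mean(corrs))

for sd in (0.01, 0.1, 0.3, 1.0, 3.0):
    print(f"noise sd = {sd}: output fidelity = {output_fidelity(sd):.3f}")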
Boerma, Tessel; Leseman, Paul; Wijnen, Frank; Blom, Elma
2017-01-01
Background: The language profiles of children with language impairment (LI) and bilingual children can show partial, and possibly temporary, overlap. The current study examined the persistence of this overlap over time. Furthermore, we aimed to better understand why the language profiles of these two groups show resemblance, testing the hypothesis that the language difficulties of children with LI reflect a weakened ability to maintain attention to the stream of linguistic information. Consequent incomplete processing of language input may lead to delays that are similar to those originating from reductions in input frequency. Methods: Monolingual and bilingual children with and without LI (N = 128), aged 5–8 years old, participated in this study. Dutch receptive vocabulary and grammatical morphology were assessed at three waves. In addition, auditory and visual sustained attention were tested at wave 1. Mediation analyses were performed to examine relationships between LI, sustained attention, and language skills. Results: Children with LI and bilingual children were outperformed by their typically developing (TD) and monolingual peers, respectively, on vocabulary and morphology at all three waves. The vocabulary difference between monolinguals and bilinguals decreased over time. In addition, children with LI had weaker auditory and visual sustained attention skills relative to TD children, while no differences between monolinguals and bilinguals emerged. Auditory sustained attention mediated the effect of LI on vocabulary and morphology in both the monolingual and bilingual groups of children. Visual sustained attention only acted as a mediator in the bilingual group. Conclusion: The findings from the present study indicate that the overlap between the language profiles of children with LI and bilingual children is particularly large for vocabulary in early (pre)school years and reduces over time. Results furthermore suggest that the overlap may be explained by the weakened ability of children with LI to sustain their attention to auditory stimuli, interfering with how well incoming language is processed. PMID:28785235
Larson, Eric; Lee, Adrian K C
2014-01-01
Switching attention between different stimuli of interest based on particular task demands is important in many everyday settings. In audition in particular, switching attention between different speakers of interest that are talking concurrently is often necessary for effective communication. Recently, it has been shown by multiple studies that auditory selective attention suppresses the representation of unwanted streams in auditory cortical areas in favor of the target stream of interest. However, the neural processing that guides this selective attention process is not well understood. Here we investigated the cortical mechanisms involved in switching attention based on two different types of auditory features. By combining magneto- and electro-encephalography (M-EEG) with an anatomical MRI constraint, we examined the cortical dynamics involved in switching auditory attention based on either spatial or pitch features. We designed a paradigm where listeners were cued in the beginning of each trial to switch or maintain attention halfway through the presentation of concurrent target and masker streams. By allowing listeners time to switch during a gap in the continuous target and masker stimuli, we were able to isolate the mechanisms involved in endogenous, top-down attention switching. Our results show a double dissociation between the involvement of right temporoparietal junction (RTPJ) and the left inferior parietal supramarginal part (LIPSP) in tasks requiring listeners to switch attention based on space and pitch features, respectively, suggesting that switching attention based on these features involves at least partially separate processes or behavioral strategies. © 2013 Elsevier Inc. All rights reserved.
Gonzalez, Jose; Soma, Hirokazu; Sekine, Masashi; Yu, Wenwei
2012-06-09
Prosthetic hand users have to rely extensively on visual feedback in order to manipulate their prosthetic devices, which seems to impose a high conscious burden on the users. Indirect methods (electro-cutaneous, vibrotactile, auditory cues) have been used to convey information from the artificial limb to the amputee, but the usability and advantages of these feedback methods were explored mainly by looking at performance results, without taking into account measurements of the user's mental effort, attention, and emotions. The main objective of this study was to explore the feasibility of using psycho-physiological measurements to assess cognitive effort when manipulating a robot hand with and without a sensory substitution system based on auditory feedback, and to examine how these psycho-physiological recordings relate to temporal and grasping performance in a static setting. Ten male subjects (26+/- years old) participated in this study and were asked to come on 2 consecutive days. On the first day the experiment objective, tasks, and experimental setting were explained, after which the subjects completed a 30-minute guided training session. On the second day each subject was tested in 3 different modalities: Auditory Feedback only control (AF), Visual Feedback only control (VF), and Audiovisual Feedback control (AVF). For each modality they were asked to perform 10 trials. At the end of each test, the subject answered the NASA TLX questionnaire, and during the test the subject's EEG, ECG, electro-dermal activity (EDA), and respiration rate were measured. The results show that a higher mental effort is needed when the subjects rely only on their vision, and that this effort seems to be reduced when auditory feedback is added to the human-machine interaction (multimodal feedback). Furthermore, better temporal and grasping performance was obtained in the audiovisual modality. The performance improvements when using auditory cues along with vision (multimodal feedback) can be attributed to a reduced attentional demand during the task, which may reflect a visual "pop-out" or enhancement effect. The NASA TLX, the EEG alpha and beta bands, and heart rate could be used to further evaluate sensory feedback systems in prosthetic applications.
The Effect of Lexical Content on Dichotic Speech Recognition in Older Adults.
Findlen, Ursula M; Roup, Christina M
2016-01-01
Age-related auditory processing deficits have been shown to negatively affect speech recognition for older adult listeners. In contrast, older adults gain benefit from their ability to make use of semantic and lexical content of the speech signal (i.e., top-down processing), particularly in complex listening situations. Assessment of auditory processing abilities among aging adults should take into consideration semantic and lexical content of the speech signal. The purpose of this study was to examine the effects of lexical and attentional factors on dichotic speech recognition performance characteristics for older adult listeners. A repeated measures design was used to examine differences in dichotic word recognition as a function of lexical and attentional factors. Thirty-five older adults (61-85 yr) with sensorineural hearing loss participated in this study. Dichotic speech recognition was evaluated using consonant-vowel-consonant (CVC) word and nonsense CVC syllable stimuli administered in the free recall, directed recall right, and directed recall left response conditions. Dichotic speech recognition performance for nonsense CVC syllables was significantly poorer than performance for CVC words. Dichotic recognition performance varied across response condition for both stimulus types, which is consistent with previous studies on dichotic speech recognition. Inspection of individual results revealed that five listeners demonstrated an auditory-based left ear deficit for one or both stimulus types. Lexical content of stimulus materials affects performance characteristics for dichotic speech recognition tasks in the older adult population. The use of nonsense CVC syllable material may provide a way to assess dichotic speech recognition performance while potentially lessening the effects of lexical content on performance (i.e., measuring bottom-up auditory function both with and without top-down processing). American Academy of Audiology.
Liao, Hsin-I; Yoneya, Makoto; Kidani, Shunsuke; Kashino, Makio; Furukawa, Shigeto
2016-01-01
A unique sound that deviates from a repetitive background sound induces signature neural responses, such as mismatch negativity and the novelty P3 response in electro-encephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residual attention on to-be-ignored oddballs due to their concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR appears to be independent of attention. PMID:26924959
Neuronal effects of nicotine during auditory selective attention in schizophrenia.
Smucny, Jason; Olincy, Ann; Rojas, Donald C; Tregellas, Jason R
2016-01-01
Although nicotine has been shown to improve attention deficits in schizophrenia, the neurobiological mechanisms underlying this effect are poorly understood. We hypothesized that nicotine would modulate attention-associated neuronal response in schizophrenia patients in the ventral parietal cortex (VPC), hippocampus, and anterior cingulate based on previous findings in control subjects. To test this hypothesis, the present study examined response in these regions in a cohort of nonsmoking patients and healthy control subjects using an auditory selective attention task with environmental noise distractors during placebo and nicotine administration. In agreement with our hypothesis, significant diagnosis (Control vs. Patient) X drug (Placebo vs. Nicotine) interactions were observed in the VPC and hippocampus. The interaction was driven by task-associated hyperactivity in patients (relative to healthy controls) during placebo administration, and decreased hyperactivity in patients after nicotine administration (relative to placebo). No significant interaction was observed in the anterior cingulate. Task-associated hyperactivity of the VPC predicted poor task performance in patients during placebo. Poor task performance also predicted symptoms in patients as measured by the Brief Psychiatric Rating Scale. These results are the first to suggest that nicotine may modulate brain activity in a selective attention-dependent manner in schizophrenia. © 2015 Wiley Periodicals, Inc.
Kornilov, Sergey A; Landi, Nicole; Rakhlin, Natalia; Fang, Shin-Yi; Grigorenko, Elena L; Magnuson, James S
2014-01-01
We examined neural indices of pre-attentive phonological and attentional auditory discrimination in children with developmental language disorder (DLD, n = 23) and typically developing (n = 16) peers from a geographically isolated Russian-speaking population with an elevated prevalence of DLD. Pre-attentive phonological MMN components were robust and did not differ in two groups. Children with DLD showed attenuated P3 and atypically distributed P2 components in the attentional auditory discrimination task; P2 and P3 amplitudes were linked to working memory capacity, development of complex syntax, and vocabulary. The results corroborate findings of reduced processing capacity in DLD and support a multifactorial view of the disorder.
Yoncheva, Yuliya; Maurer, Urs; Zevin, Jason D; McCandliss, Bruce D
2014-08-15
Selective attention to phonology, i.e., the ability to attend to sub-syllabic units within spoken words, is a critical precursor to literacy acquisition. Recent functional magnetic resonance imaging evidence has demonstrated that a left-lateralized network of frontal, temporal, and posterior language regions, including the visual word form area, supports this skill. The current event-related potential (ERP) study investigated the temporal dynamics of selective attention to phonology during spoken word perception. We tested the hypothesis that selective attention to phonology dynamically modulates stimulus encoding by recruiting left-lateralized processes specifically while the information critical for performance is unfolding. Selective attention to phonology was captured by manipulating listening goals: skilled adult readers attended to either rhyme or melody within auditory stimulus pairs. Each pair superimposed rhyming and melodic information ensuring identical sensory stimulation. Selective attention to phonology produced distinct early and late topographic ERP effects during stimulus encoding. Data-driven source localization analyses revealed that selective attention to phonology led to significantly greater recruitment of left-lateralized posterior and extensive temporal regions, which was notably concurrent with the rhyme-relevant information within the word. Furthermore, selective attention effects were specific to auditory stimulus encoding and not observed in response to cues, arguing against the notion that they reflect sustained task setting. Collectively, these results demonstrate that selective attention to phonology dynamically engages a left-lateralized network during the critical time-period of perception for achieving phonological analysis goals. These findings suggest a key role for selective attention in on-line phonological computations. Furthermore, these findings motivate future research on the role that neural mechanisms of attention may play in phonological awareness impairments thought to underlie developmental reading disabilities. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Tang, Xiaoyu; Li, Chunlin; Li, Qi; Gao, Yulin; Yang, Weiping; Yang, Jingjing; Ishikawa, Soushirou; Wu, Jinglong
2013-10-11
Utilizing the high temporal resolution of event-related potentials (ERPs), we examined how visual spatial or temporal cues modulated auditory stimulus processing. The visual spatial cue (VSC) induces orienting of attention to spatial locations; the visual temporal cue (VTC) induces orienting of attention to temporal intervals. Participants were instructed to respond to auditory targets. Behavioral responses to auditory stimuli following the VSC were faster and more accurate than those following the VTC. VSC and VTC had the same effect on the auditory N1 (150-170 ms after stimulus onset). The mean amplitude of the auditory P1 (90-110 ms) in the VSC condition was larger than that in the VTC condition, and the mean amplitude of the late positivity (300-420 ms) in the VTC condition was larger than that in the VSC condition. These findings suggest that the modulation of auditory stimulus processing by visually induced spatial or temporal orienting of attention was different, but partially overlapping. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
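The component measures referred to above (P1 at 90-110 ms, N1 at 150-170 ms, late positivity at 300-420 ms) are mean amplitudes within fixed post-stimulus windows. A brief illustrative Python sketch, with the sampling rate and epoch layout assumed rather than taken from the paper:

import numpy as np

def mean_amplitude(erp, fs, window_ms, epoch_start_ms=-100.0):
    """Mean amplitude of a single-channel ERP epoch within window_ms = (lo, hi),
    where time is measured relative to stimulus onset."""
    times_ms = epoch_start_ms + 1000.0 * np.arange(erp.size) / fs
    mask = (times_ms >= window_ms[0]) & (times_ms <= window_ms[1])
    return erp[mask].mean()

# Example with a synthetic 800-ms epoch (-100 to 700 ms) sampled at 500 Hz
fs = 500
epoch = np.random.default_rng(0).standard_normal(400)
for name, window in [("P1", (90, 110)), ("N1", (150, 170)), ("LP", (300, 420))]:
    print(name, mean_amplitude(epoch, fs, window))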
Enhanced attention-dependent activity in the auditory cortex of older musicians.
Zendel, Benjamin Rich; Alain, Claude
2014-01-01
Musical training improves auditory processing abilities, which correlates with neuro-plastic changes in exogenous (input-driven) and endogenous (attention-dependent) components of auditory event-related potentials (ERPs). Evidence suggests that musicians, compared to non-musicians, experience less age-related decline in auditory processing abilities. Here, we investigated whether lifelong musicianship mitigates exogenous or endogenous processing by measuring auditory ERPs in younger and older musicians and non-musicians while they either attended to auditory stimuli or watched a muted subtitled movie of their choice. Both age and musical training-related differences were observed in the exogenous components; however, the differences between musicians and non-musicians were similar across the lifespan. These results suggest that exogenous auditory ERPs are enhanced in musicians, but decline with age at the same rate. On the other hand, attention-related activity, modeled in the right auditory cortex using a discrete spatiotemporal source analysis, was selectively enhanced in older musicians. This suggests that older musicians use a compensatory strategy to overcome age-related decline in peripheral and exogenous processing of acoustic information. Copyright © 2014 Elsevier Inc. All rights reserved.
Development of the auditory system
Litovsky, Ruth
2015-01-01
Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262
ERIC Educational Resources Information Center
Dalebout, Susan D.; And Others
1991-01-01
Twelve children (ages 7-8) with attention deficit hyperactivity disorder were administered the Selective Auditory Attention Test twice: after the administration of methylphenidate and after the administration of a placebo. Results revealed no simple drug effect but a strong order effect. (Author/JDD)
Visser, Eelke; Zwiers, Marcel P.; Kan, Cornelis C.; Hoekstra, Liesbeth; van Opstal, A. John; Buitelaar, Jan K.
2013-01-01
Background Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. Methods We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Results Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. Limitations The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Conclusion Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs. PMID:24148845
Porges, Stephen W; Macellaio, Matthew; Stanfill, Shannon D; McCue, Kimberly; Lewis, Gregory F; Harden, Emily R; Handelman, Mika; Denver, John; Bazhenova, Olga V; Heilman, Keri J
2013-06-01
The current study evaluated processes underlying two common symptoms (i.e., state regulation problems and deficits in auditory processing) associated with a diagnosis of autism spectrum disorders. Although these symptoms have been treated in the literature as unrelated, when informed by the Polyvagal Theory, these symptoms may be viewed as the predictable consequences of depressed neural regulation of an integrated social engagement system, in which there is down regulation of neural influences to the heart (i.e., via the vagus) and to the middle ear muscles (i.e., via the facial and trigeminal cranial nerves). Respiratory sinus arrhythmia (RSA) and heart period were monitored to evaluate state regulation during a baseline and two auditory processing tasks (i.e., the SCAN tests for Filtered Words and Competing Words), which were used to evaluate auditory processing performance. Children with a diagnosis of autism spectrum disorders (ASD) were contrasted with aged matched typically developing children. The current study identified three features that distinguished the ASD group from a group of typically developing children: 1) baseline RSA, 2) direction of RSA reactivity, and 3) auditory processing performance. In the ASD group, the pattern of change in RSA during the attention demanding SCAN tests moderated the relation between performance on the Competing Words test and IQ. In addition, in a subset of ASD participants, auditory processing performance improved and RSA increased following an intervention designed to improve auditory processing. Copyright © 2012 Elsevier B.V. All rights reserved.
Liebel, Spencer W; Nelson, Jason M
2017-12-01
We investigated the auditory and visual working memory functioning in college students with attention-deficit/hyperactivity disorder, learning disabilities, and clinical controls. We examined the role attention-deficit/hyperactivity disorder subtype status played in working memory functioning. The unique influence that both domains of working memory have on reading and math abilities was investigated. A sample of 268 individuals seeking postsecondary education comprise four groups of the present study: 110 had an attention-deficit/hyperactivity disorder diagnosis only, 72 had a learning disability diagnosis only, 35 had comorbid attention-deficit/hyperactivity disorder and learning disability diagnoses, and 60 individuals without either of these disorders comprise a clinical control group. Participants underwent a comprehensive neuropsychological evaluation, and licensed psychologists employed a multi-informant, multi-method approach in obtaining diagnoses. In the attention-deficit/hyperactivity disorder only group, there was no difference between auditory and visual working memory functioning, t(100) = -1.57, p = .12. In the learning disability group, however, auditory working memory functioning was significantly weaker compared with visual working memory, t(71) = -6.19, p < .001, d = -0.85. Within the attention-deficit/hyperactivity disorder only group, there were no auditory or visual working memory functioning differences between participants with either a predominantly inattentive type or a combined type diagnosis. Visual working memory did not incrementally contribute to the prediction of academic achievement skills. Individuals with attention-deficit/hyperactivity disorder did not demonstrate significant working memory differences compared with clinical controls. Individuals with a learning disability demonstrated weaker auditory working memory than individuals in either the attention-deficit/hyperactivity or clinical control groups. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Forte, Antonio Elia; Etard, Octave; Reichenbach, Tobias
2017-10-10
Humans excel at selectively listening to a target speaker in background noise such as competing voices. While the encoding of speech in the auditory cortex is modulated by selective attention, it remains debated whether such modulation occurs already in subcortical auditory structures. Investigating the contribution of the human brainstem to attention has, in particular, been hindered by the tiny amplitude of the brainstem response. Its measurement normally requires a large number of repetitions of the same short sound stimuli, which may lead to a loss of attention and to neural adaptation. Here we develop a mathematical method to measure the auditory brainstem response to running speech, an acoustic stimulus that does not repeat and that has a high ecological validity. We employ this method to assess the brainstem's activity when a subject listens to one of two competing speakers, and show that the brainstem response is consistently modulated by attention.
Visual input enhances selective speech envelope tracking in auditory cortex at a "cocktail party".
Zion Golumbic, Elana; Cogan, Gregory B; Schroeder, Charles E; Poeppel, David
2013-01-23
Our ability to selectively attend to one auditory signal amid competing input streams, epitomized by the "Cocktail Party" problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared with responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker's face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a Cocktail Party setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive.
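One generic way to quantify the envelope tracking described above is to extract the temporal speech envelope with a Hilbert transform, downsample it to the neural sampling rate, and take the peak lagged correlation with a neural channel. The Python sketch below is such a generic estimate, not necessarily either of the two complementary methods used in the study; the sampling rates and lag range are assumptions.

import numpy as np
from scipy.signal import hilbert, resample

def speech_envelope(audio, fs_audio, fs_neural):
    """Broadband amplitude envelope of the speech waveform, resampled to the
    neural sampling rate."""
    env = np.abs(hilbert(audio))
    n_out = int(round(len(audio) * fs_neural / fs_audio))
    return resample(env, n_out)

def tracking_score(envelope, neural, max_lag=50):
    """Peak Pearson correlation over lags of 0..max_lag samples, with the
    neural signal lagging the envelope."""
    scores = []
    for lag in range(max_lag + 1):
        e = envelope[: len(envelope) - lag]
        n = neural[lag: lag + len(e)]
        scores.append(np.corrcoef(e, n)[0, 1])
    return max(scores)

# Synthetic check: a "neural" signal built from a delayed, noisy copy of the envelope
rng = np.random.default_rng(0)
audio = rng.standard_normal(44100)            # 1 s of stand-in audio at 44.1 kHz
env = speech_envelope(audio, 44100, 100)      # envelope at a 100-Hz neural rate
neural = np.roll(env, 10) + 0.5 * rng.standard_normal(env.size)
print(tracking_score(env, neural))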
Franceschini, Sandro; Trevisan, Piergiorgio; Ronconi, Luca; Bertoni, Sara; Colmar, Susan; Double, Kit; Facoetti, Andrea; Gori, Simone
2017-07-19
Dyslexia is characterized by difficulties in learning to read and there is some evidence that action video games (AVG), without any direct phonological or orthographic stimulation, improve reading efficiency in Italian children with dyslexia. However, the cognitive mechanism underlying this improvement and the extent to which the benefits of AVG training would generalize to deep English orthography, remain two critical questions. During reading acquisition, children have to integrate written letters with speech sounds, rapidly shifting their attention from visual to auditory modality. In our study, we tested reading skills and phonological working memory, visuo-spatial attention, auditory, visual and audio-visual stimuli localization, and cross-sensory attentional shifting in two matched groups of English-speaking children with dyslexia before and after they played AVG or non-action video games. The speed of words recognition and phonological decoding increased after playing AVG, but not non-action video games. Furthermore, focused visuo-spatial attention and visual-to-auditory attentional shifting also improved only after AVG training. This unconventional reading remediation program also increased phonological short-term memory and phoneme blending skills. Our report shows that an enhancement of visuo-spatial attention and phonological working memory, and an acceleration of visual-to-auditory attentional shifting can directly translate into better reading in English-speaking children with dyslexia.
2010-01-01
Background We investigated the processing of task-irrelevant and unexpected novel sounds and its modulation by working-memory load in children aged 9-10 and in adults. Environmental sounds (novels) were embedded amongst frequently presented standard sounds in an auditory-visual distraction paradigm. Each sound was followed by a visual target. In two conditions, participants evaluated the position of a visual stimulus (0-back, low load) or compared the position of the current stimulus with the one two trials before (2-back, high load). Processing of novel sounds were measured with reaction times, hit rates and the auditory event-related brain potentials (ERPs) Mismatch Negativity (MMN), P3a, Reorienting Negativity (RON) and visual P3b. Results In both memory load conditions novels impaired task performance in adults whereas they improved performance in children. Auditory ERPs reflect age-related differences in the time-window of the MMN as children showed a positive ERP deflection to novels whereas adults lack an MMN. The attention switch towards the task irrelevant novel (reflected by P3a) was comparable between the age groups. Adults showed more efficient reallocation of attention (reflected by RON) under load condition than children. Finally, the P3b elicited by the visual target stimuli was reduced in both age groups when the preceding sound was a novel. Conclusion Our results give new insights in the development of novelty processing as they (1) reveal that task-irrelevant novel sounds can result in contrary effects on the performance in a visual primary task in children and adults, (2) show a positive ERP deflection to novels rather than an MMN in children, and (3) reveal effects of auditory novels on visual target processing. PMID:20929535
Exploring auditory neglect: Anatomo-clinical correlations of auditory extinction.
Tissieres, Isabel; Crottaz-Herbette, Sonia; Clarke, Stephanie
2018-05-23
The key symptoms of auditory neglect include left extinction on tasks of dichotic and/or diotic listening and a rightward shift in locating sounds. The anatomical correlates of the latter are relatively well understood, but no systematic studies have examined auditory extinction. Here, we performed a systematic study of anatomo-clinical correlates of extinction by using dichotic and/or diotic listening tasks. In total, 20 patients with right hemispheric damage (RHD) and 19 with left hemispheric damage (LHD) performed dichotic and diotic listening tasks. Each task consists of the simultaneous presentation of word pairs; in the dichotic task, 1 word is presented to each ear, and in the diotic task, each word is lateralized by means of interaural time differences and presented to one side. RHD was associated with exclusively contralesional extinction in dichotic or diotic listening, whereas in selected cases, LHD led to contra- or ipsilesional extinction. Bilateral symmetrical extinction occurred in RHD or LHD, with dichotic or diotic listening. The anatomical correlates of these extinction profiles offer an insight into the organisation of the auditory and attentional systems. First, left extinction in dichotic versus diotic listening involves different parts of the right hemisphere, which explains the double dissociation between these 2 neglect symptoms. Second, contralesional extinction in the dichotic task relies on homologous regions in either hemisphere. Third, ipsilesional extinction in dichotic listening after LHD was associated with lesions of the intrahemispheric white matter, interrupting callosal fibres outside their midsagittal or periventricular trajectory. Fourth, bilateral symmetrical extinction was associated with large parieto-fronto-temporal LHD or smaller parieto-temporal RHD, which suggests that divided attention, supported by the right hemisphere, and auditory streaming, supported by the left, likely play a critical role. Copyright © 2018. Published by Elsevier Masson SAS.
Pomplun, M; Reingold, E M; Shen, J
2001-09-01
In three experiments, participants' visual span was measured in a comparative visual search task in which they had to detect a local match or mismatch between two displays presented side by side. Experiment 1 manipulated the difficulty of the comparative visual search task by contrasting a mismatch detection task with a substantially more difficult match detection task. In Experiment 2, participants were tested in a single-task condition involving only the visual task and a dual-task condition in which they concurrently performed an auditory task. Finally, in Experiment 3, participants performed two dual-task conditions, which differed in the difficulty of the concurrent auditory task. Both the comparative search task difficulty (Experiment 1) and the divided attention manipulation (Experiments 2 and 3) produced strong effects on visual span size.
Giuliano, Ryan J; Karns, Christina M; Neville, Helen J; Hillyard, Steven A
2014-12-01
A growing body of research suggests that the predictive power of working memory (WM) capacity for measures of intellectual aptitude is due to the ability to control attention and select relevant information. Crucially, attentional mechanisms implicated in controlling access to WM are assumed to be domain-general, yet reports of enhanced attentional abilities in individuals with larger WM capacities are primarily within the visual domain. Here, we directly test the link between WM capacity and early attentional gating across sensory domains, hypothesizing that measures of visual WM capacity should predict an individual's capacity to allocate auditory selective attention. To address this question, auditory ERPs were recorded in a linguistic dichotic listening task, and individual differences in ERP modulations by attention were correlated with estimates of WM capacity obtained in a separate visual change detection task. Auditory selective attention enhanced ERP amplitudes at an early latency (ca. 70-90 msec), with larger P1 components elicited by linguistic probes embedded in an attended narrative. Moreover, this effect was associated with greater individual estimates of visual WM capacity. These findings support the view that domain-general attentional control mechanisms underlie the wide variation of WM capacity across individuals.
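Visual working-memory capacity in change-detection tasks is commonly estimated with Cowan's K = set size × (hit rate − false-alarm rate); whether this particular estimator was used in the study above is an assumption. A brief Python sketch relating such capacity estimates to a per-subject ERP attention effect, using entirely synthetic data:

import numpy as np

def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Estimated number of items held in visual working memory."""
    return set_size * (hit_rate - false_alarm_rate)

# Synthetic across-subject example: capacity vs. attended-minus-ignored P1 amplitude
rng = np.random.default_rng(1)
hits = rng.uniform(0.6, 0.95, 20)
fas = rng.uniform(0.05, 0.30, 20)
k = cowan_k(4, hits, fas)                        # a set size of 4 is assumed
p1_effect = 0.4 * k + rng.normal(0.0, 0.3, 20)   # fabricated purely for illustration
print(np.corrcoef(k, p1_effect)[0, 1])           # across-subject correlation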
Development of Auditory Selective Attention: Why Children Struggle to Hear in Noisy Environments
ERIC Educational Resources Information Center
Jones, Pete R.; Moore, David R.; Amitay, Sygal
2015-01-01
Children's hearing deteriorates markedly in the presence of unpredictable noise. To explore why, 187 school-age children (4-11 years) and 15 adults performed a tone-in-noise detection task, in which the masking noise varied randomly between every presentation. Selective attention was evaluated by measuring the degree to which listeners were…
ERIC Educational Resources Information Center
Jongman, Suzanne R.; Roelofs, Ardi; Scheper, Annette R.; Meyer, Antje S.
2017-01-01
Background: Children with specific language impairment (SLI) have problems not only with language performance but also with sustained attention, which is the ability to maintain alertness over an extended period of time. Although there is consensus that this ability is impaired with respect to processing stimuli in the auditory perceptual…
ERIC Educational Resources Information Center
Curtindale, Lori; Laurie-Rose, Cynthia; Bennett-Murphy, Laura; Hull, Sarah
2007-01-01
Applying optimal stimulation theory, the present study explored the development of sustained attention as a dynamic process. It examined the interaction of modality and temperament over time in children and adults. Second-grade children and college-aged adults performed auditory and visual vigilance tasks. Using the Carey temperament…
Evoked potential correlates of selective attention with multi-channel auditory inputs
NASA Technical Reports Server (NTRS)
Schwent, V. L.; Hillyard, S. A.
1975-01-01
Ten subjects were presented with random, rapid sequences of four auditory tones which were separated in pitch and apparent spatial position. The N1 component of the auditory vertex evoked potential (EP) measured relative to a baseline was observed to increase with attention. It was concluded that the N1 enhancement reflects a finely tuned selective attention to one stimulus channel among several concurrent, competing channels. This EP enhancement probably increases with increased information load on the subject.
A novel hybrid auditory BCI paradigm combining ASSR and P300.
Kaongoen, Netiwit; Jo, Sungho
2017-03-01
Brain-computer interface (BCI) is a technology that provides an alternative means of communication by translating brain activity into digital commands. Because vision-dependent BCIs cannot be used by patients with visual impairment, auditory stimuli have been used to substitute for the conventional visual stimuli. This paper introduces a hybrid auditory BCI that combines the auditory steady-state response (ASSR) and a spatial-auditory P300 BCI to improve the performance of the auditory BCI system. The system works by simultaneously presenting auditory stimuli with different pitches and amplitude modulation (AM) frequencies to the user, with beep sounds occurring randomly among all sound sources. Attention to different auditory stimuli yields different ASSRs, and beep sounds trigger the P300 response when they occur in the target channel, so the system can use both features for classification. The proposed ASSR/P300-hybrid auditory BCI system achieves 85.33% accuracy with a 9.11 bits/min information transfer rate (ITR) in a binary classification problem. The proposed system outperformed the P300 BCI system (74.58% accuracy with 4.18 bits/min ITR) and the ASSR BCI system (66.68% accuracy with 2.01 bits/min ITR) on the binary-class problem. The system is completely vision-independent. This work demonstrates that combining ASSR and P300 BCIs into a hybrid system can yield better performance and could aid the development of future auditory BCIs.
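For readers unfamiliar with the ITR figures quoted above, the following Python sketch applies the standard Wolpaw information-transfer-rate formula to the reported binary accuracy. The formula is a conventional assumption (the abstract does not specify how ITR was computed), and the selection rate used to convert to bits/min is an illustrative guess, since trial duration is not given.

```python
import math

def wolpaw_itr_bits_per_selection(n_classes: int, accuracy: float) -> float:
    """Bits per selection: log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1))."""
    p = accuracy
    if p >= 1.0:
        return math.log2(n_classes)
    return (math.log2(n_classes)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_classes - 1)))

# Binary classification at 85.33% accuracy, as reported for the hybrid system.
bits = wolpaw_itr_bits_per_selection(2, 0.8533)

# Converting to bits/min requires the selection rate, which the abstract does
# not state; 23 selections/min is an illustrative assumption only.
print(round(bits, 3), round(bits * 23, 2))
```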
Is cross-modal integration of emotional expressions independent of attentional resources?
Vroomen, J; Driver, J; de Gelder, B
2001-12-01
In this study, we examined whether integration of visual and auditory information about emotions requires limited attentional resources. Subjects judged whether a voice expressed happiness or fear, while trying to ignore a concurrently presented static facial expression. As an additional task, the subjects had to add two numbers together rapidly (Experiment 1), count the occurrences of a target digit in a rapid serial visual presentation (Experiment 2), or judge the pitch of a tone as high or low (Experiment 3). The visible face had an impact on judgments of the emotion of the heard voice in all the experiments. This cross-modal effect was independent of whether or not the subjects performed a demanding additional task. This suggests that integration of visual and auditory information about emotions may be a mandatory process, unconstrained by attentional resources.
Auditory Scene Analysis: An Attention Perspective
ERIC Educational Resources Information Center
Sussman, Elyse S.
2017-01-01
Purpose: This review article provides a new perspective on the role of attention in auditory scene analysis. Method: A framework for understanding how attention interacts with stimulus-driven processes to facilitate task goals is presented. Previously reported data obtained through behavioral and electrophysiological measures in adults with normal…
Reproduction of auditory and visual standards in monochannel cochlear implant users.
Kanabus, Magdalena; Szelag, Elzbieta; Kolodziejczyk, Iwona; Szuchnik, Joanna
2004-01-01
The temporal reproduction of standard durations ranging from 1 to 9 seconds was investigated in monochannel cochlear implant (CI) users and in normally hearing subjects for the auditory and visual modalities. The results showed that the pattern of performance in patients depended on their level of auditory comprehension. Results for CI users who displayed relatively good auditory comprehension did not differ from those of normally hearing subjects in either modality. Patients with poor auditory comprehension significantly overestimated shorter auditory standards (1, 1.5 and 2.5 s), compared to both patients with good comprehension and controls. For the visual modality the between-group comparisons were not significant. These deficits in the reproduction of auditory standards were explained in accordance with both the attentional-gate model and the role of working memory in prospective time judgment. The impairments described above can influence the functioning of the temporal integration mechanism that is crucial for auditory speech comprehension at the level of words and phrases. We postulate that the deficits in time reproduction of short standards may be one of the possible reasons for poor speech understanding in monochannel CI users.
Linde, L; Edland, A; Bergström, M
1999-05-01
One purpose of this study was to compare attention in the evening (22:00 h), in the late night (04:00 h), in the morning (10:00 h) and in the afternoon (16:00 h) during a period of complete wakefulness beginning at 08:00 h with a mean daytime performance without sleep deprivation. Another purpose was to investigate sleep deprivation effects on a multi-attribute decision-making task with and without time pressure. Twelve sleep-deprived male students were compared with 12 male non-sleep-deprived students. Both groups were tested five times with an auditory attention and a symbol coding task. Significant declines (p < 0.05) in mean level of performance on the auditory attention task were found at 04:00, 10:00 and 16:00 h for the subjects who remained awake throughout the vigil. However, the effect of the sleep deprivation manifested itself even more in increased between-subject dispersions. There were no differences between time pressure and no time pressure on the decision-making task and no significant differences between sleep-deprived and non-sleep-deprived subjects in decision strategies. In fact, the pattern of decision strategies among the sleep-deprived subjects was more similar to a pattern of decision strategies typical for non-stressful conditions than the pattern of decision strategies among the non-sleep-deprived subjects. This result may have been due to the sleep loss acting as a de-arouser. Here too, however, the variances differed significantly between sleep-deprived and non-sleep-deprived subjects, indicating that the sleep-deprived subjects were more variable in their decision strategy pattern than the control group.
Talk, Andrew C.; Grasby, Katrina L.; Rawson, Tim; Ebejer, Jane L.
2016-01-01
Loss of function of the hippocampus or frontal cortex is associated with reduced performance on memory tasks, in which subjects are incidentally exposed to cues at specific places in the environment and are subsequently asked to recollect the location at which the cue was experienced. Here, we examined the roles of the rodent hippocampus and frontal cortex in cue-directed attention during encoding of memory for the location of a single incidentally experienced cue. During a spatial sensory preconditioning task, rats explored an elevated platform while an auditory cue was incidentally presented at one corner. The opposite corner acted as an unpaired control location. The rats demonstrated recollection of location by avoiding the paired corner after the auditory cue was in turn paired with shock. Damage to either the dorsal hippocampus or the frontal cortex impaired this memory ability. However, we also found that hippocampal lesions enhanced attention directed towards the cue during the encoding phase, while frontal cortical lesions reduced cue-directed attention. These results suggest that the deficit in spatial sensory preconditioning caused by frontal cortical damage may be mediated by inattention to the location of cues during the latent encoding phase, while deficits following hippocampal damage must be related to other mechanisms such as generation of neural plasticity. PMID:27999366
Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis
2014-01-01
Human subjects performed in several behavioral conditions requiring, or not requiring, selective attention to visual stimuli. Specifically, the attentional task was to recognize strings of digits that had been presented visually. A nonlinear version of the stimulus-frequency otoacoustic emission (SFOAE), called the nSFOAE, was collected during the visual presentation of the digits. The segment of the physiological response discussed here occurred during brief silent periods immediately following the SFOAE-evoking stimuli. For all subjects tested, the physiological-noise magnitudes were substantially weaker (less noisy) during the tasks requiring the most visual attention. Effect sizes for the differences were >2.0. Our interpretation is that cortico-olivo influences adjusted the magnitude of efferent activation during the SFOAE-evoking stimulation depending upon the attention task in effect, and then that magnitude of efferent activation persisted throughout the silent period where it also modulated the physiological noise present. Because the results were highly similar to those obtained when the behavioral conditions involved auditory attention, similar mechanisms appear to operate both across modalities and within modalities. Supplementary measurements revealed that the efferent activation was spectrally global, as it was for auditory attention. PMID:24732070
EEG phase reset due to auditory attention: an inverse time-scale approach.
Low, Yin Fen; Strauss, Daniel J
2009-08-01
We propose a novel tool to evaluate the electroencephalographic (EEG) phase reset due to auditory attention by utilizing, for the first time, an inverse analysis of the instantaneous phase. EEGs were acquired through auditory attention experiments with a maximum entropy stimulation paradigm. We examined single sweeps of the auditory late response (ALR) with the complex continuous wavelet transform. The phase in the frequency band that is associated with auditory attention (6-10 Hz, termed the theta-alpha border) was reset to the mean phase of the averaged EEGs. The inverse transform was applied to reconstruct the phase-modified signal. We found significant enhancement of the N100 wave in the reconstructed signal. Analysis of the phase noise shows the effects of phase jittering on the generation of the N100 wave, implying that a preferred phase is necessary to generate the event-related potential (ERP). Power spectrum analysis shows a remarkable increase of evoked power but little change of total power after stabilizing the phase of the EEGs. Furthermore, resetting the phase at the theta-alpha border of the no-attention data to the mean phase of the attention data yields a result that resembles the attention data. These results show strong connections between the EEG and the ERP; in particular, we suggest that the presentation of an auditory stimulus triggers the phase reset process at the theta-alpha border which leads to the emergence of the N100 wave. It is concluded that our study reinforces other studies on the importance of the EEG in ERP genesis.
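As a rough, hedged approximation of the phase-manipulation idea described above (not the authors' exact wavelet inverse pipeline), the Python sketch below band-limits a single synthetic sweep to 6-10 Hz, extracts its instantaneous phase with the Hilbert transform, rotates the phase at stimulus onset to a common target value, and reconstructs the signal. The sampling rate, filter settings, and data are assumptions.

```python
# Rough sketch of phase realignment in the 6-10 Hz "theta-alpha border" band,
# approximated with a band-pass filter plus Hilbert transform rather than the
# complex continuous wavelet transform used by the authors.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 500  # Hz, assumed sampling rate

def phase_align_sweep(sweep, target_phase, onset_idx, fs=FS, band=(6.0, 10.0)):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    narrow = filtfilt(b, a, sweep)           # 6-10 Hz component of the sweep
    residual = sweep - narrow                # everything outside the band
    analytic = hilbert(narrow)
    amp, phase = np.abs(analytic), np.angle(analytic)
    shift = target_phase - phase[onset_idx]  # rotate onset phase to the target
    realigned = amp * np.cos(phase + shift)
    return residual + realigned

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(0, 1.0, 1 / FS)
    # Synthetic single sweep: 8 Hz oscillation plus noise.
    sweep = np.sin(2 * np.pi * 8 * t + 1.3) + 0.5 * rng.standard_normal(t.size)
    aligned = phase_align_sweep(sweep, target_phase=0.0, onset_idx=100)
    print(aligned.shape)
```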
Yoncheva; Maurer, Urs; Zevin, Jason; McCandliss, Bruce
2015-01-01
Selective attention to phonology, i.e., the ability to attend to sub-syllabic units within spoken words, is a critical precursor to literacy acquisition. Recent functional magnetic resonance imaging evidence has demonstrated that a left-lateralized network of frontal, temporal, and posterior language regions, including the visual word form area, supports this skill. The current event-related potential (ERP) study investigated the temporal dynamics of selective attention to phonology during spoken word perception. We tested the hypothesis that selective attention to phonology dynamically modulates stimulus encoding by recruiting left-lateralized processes specifically while the information critical for performance is unfolding. Selective attention to phonology was captured by manipulating listening goals: skilled adult readers attended to either rhyme or melody within auditory stimulus pairs. Each pair superimposed rhyming and melodic information ensuring identical sensory stimulation. Selective attention to phonology produced distinct early and late topographic ERP effects during stimulus encoding. Data-driven source localization analyses revealed that selective attention to phonology led to significantly greater recruitment of left-lateralized posterior and extensive temporal regions, which was notably concurrent with the rhyme-relevant information within the word. Furthermore, selective attention effects were specific to auditory stimulus encoding and not observed in response to cues, arguing against the notion that they reflect sustained task setting. Collectively, these results demonstrate that selective attention to phonology dynamically engages a left-lateralized network during the critical time-period of perception for achieving phonological analysis goals. These findings support the key role of selective attention to phonology in the development of literacy and motivate future research on the neural bases of the interaction between phonological awareness and literacy, deemed central to both typical and atypical reading development. PMID:24746955
Dynamic oscillatory processes governing cued orienting and allocation of auditory attention
Ahveninen, Jyrki; Huang, Samantha; Belliveau, John W.; Chang, Wei-Tang; Hämäläinen, Matti
2013-01-01
In everyday listening situations, we need to constantly switch between alternative sound sources and engage attention according to cues that match our goals and expectations. The exact neuronal bases of these processes are poorly understood. We investigated oscillatory brain networks controlling auditory attention using cortically constrained fMRI-weighted magnetoencephalography/electroencephalography (MEG/EEG) source estimates. During consecutive trials, subjects were instructed to shift attention based on a cue, presented in the ear where a target was likely to follow. To promote audiospatial attention effects, the targets were embedded in streams of dichotically presented standard tones. Occasionally, an unexpected novel sound occurred opposite to the cued ear, to trigger involuntary orienting. According to our cortical power correlation analyses, increased frontoparietal/temporal 30–100 Hz gamma activity at 200–1400 ms after cued orienting predicted fast and accurate discrimination of subsequent targets. This sustained correlation effect, possibly reflecting voluntary engagement of attention after the initial cue-driven orienting, spread from the temporoparietal junction, anterior insula, and inferior frontal (IFC) cortices to the right frontal eye fields. Engagement of attention to one ear resulted in a significantly stronger increase of 7.5–15 Hz alpha in the ipsilateral than contralateral parieto-occipital cortices 200–600 ms after the cue onset, possibly reflecting crossmodal modulation of the dorsal visual pathway during audiospatial attention. Comparisons of cortical power patterns also revealed significant increases of sustained right medial frontal cortex theta power, right dorsolateral prefrontal cortex and anterior insula/IFC beta power, and medial parietal cortex and posterior cingulate cortex gamma activity after cued vs. novelty-triggered orienting (600–1400 ms). Our results reveal sustained oscillatory patterns associated with voluntary engagement of auditory spatial attention, with the frontoparietal and temporal gamma increases being the best predictors of subsequent behavioral performance. PMID:23915050
Seymour, Jenessa L; Low, Kathy A; Maclin, Edward L; Chiarelli, Antonio M; Mathewson, Kyle E; Fabiani, Monica; Gratton, Gabriele; Dye, Matthew W G
2017-01-01
Theories of brain plasticity propose that, in the absence of input from the preferred sensory modality, some specialized brain areas may be recruited when processing information from other modalities, which may result in improved performance. The Useful Field of View task has previously been used to demonstrate that early deafness positively impacts peripheral visual attention. The current study sought to determine the neural changes associated with those deafness-related enhancements in visual performance. Based on previous findings, we hypothesized that recruitment of posterior portions of Brodmann area 22, a brain region most commonly associated with auditory processing, would be correlated with peripheral selective attention as measured using the Useful Field of View task. We report data from severe to profoundly deaf adults and normal-hearing controls who performed the Useful Field of View task while cortical activity was recorded using the event-related optical signal. Behavioral performance, obtained in a separate session, showed that deaf subjects had lower thresholds (i.e., better performance) on the Useful Field of View task. The event-related optical data indicated greater activity for the deaf adults than for the normal-hearing controls during the task in the posterior portion of Brodmann area 22 in the right hemisphere. Furthermore, the behavioral thresholds correlated significantly with this neural activity. This work provides further support for the hypothesis that cross-modal plasticity in deaf individuals appears in higher-order auditory cortices, whereas no similar evidence was obtained for primary auditory areas. It is also the only neuroimaging study to date that has linked deaf-related changes in the right temporal lobe to visual task performance outside of the imaging environment. The event-related optical signal is a valuable technique for studying cross-modal plasticity in deaf humans. The non-invasive and relatively quiet characteristics of this technique have great potential utility in research with clinical populations such as deaf children and adults who have received cochlear or auditory brainstem implants.
Aurally aided visual search performance in a dynamic environment
NASA Astrophysics Data System (ADS)
McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.
2008-04-01
Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.
Attention Training with Auditory Hallucinations: A Case Study
ERIC Educational Resources Information Center
Valmaggia, Lucia R.; Bouman, Theo K.; Schuurman, Laura
2007-01-01
The case presented in this paper illustrates how Attention Training (ATT; [Wells, A. (1990). "Panic disorder in association with relaxation induced anxiety: An attentional training approach to treatment." "Behavior Therapy," 21, 273-280.]) can be applied in an outpatient setting in the treatment of auditory hallucinations. The 25-year-old male…
Tillmann, Julian; Swettenham, John
2017-02-01
Previous studies examining selective attention in individuals with autism spectrum disorder (ASD) have yielded conflicting results, some suggesting superior focused attention (e.g., on visual search tasks), others demonstrating greater distractibility. This pattern could be accounted for by the proposal (derived by applying the Load theory of attention, e.g., Lavie, 2005) that ASD is characterized by an increased perceptual capacity (Remington, Swettenham, Campbell, & Coleman, 2009). Recent studies in the visual domain support this proposal. Here we hypothesize that ASD involves an enhanced perceptual capacity that also operates across sensory modalities, and test this prediction, for the first time using a signal detection paradigm. Seventeen neurotypical (NT) and 15 ASD adolescents performed a visual search task under varying levels of visual perceptual load while simultaneously detecting presence/absence of an auditory tone embedded in noise. Detection sensitivity (d') for the auditory stimulus was similarly high for both groups in the low visual perceptual load condition (e.g., 2 items: p = .391, d = 0.31, 95% confidence interval [CI] [-0.39, 1.00]). However, at a higher level of visual load, auditory d' reduced for the NT group but not the ASD group, leading to a group difference (p = .002, d = 1.2, 95% CI [0.44, 1.96]). As predicted, when visual perceptual load was highest, both groups then showed a similarly low auditory d' (p = .9, d = 0.05, 95% CI [-0.65, 0.74]). These findings demonstrate that increased perceptual capacity in ASD operates across modalities.
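The detection-sensitivity values reported above are standard signal-detection d' scores; as a minimal sketch (with hypothetical hit and false-alarm rates, not the study's data), d' can be computed as follows.

```python
from scipy.stats import norm

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Standard signal-detection sensitivity: d' = z(H) - z(FA).
    Extreme rates (0 or 1) would need a correction before use."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical listener: 85% hits, 20% false alarms on the tone-in-noise task.
print(round(d_prime(0.85, 0.20), 2))  # roughly 1.88
```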
Söderlund, Göran B. W.; Jobs, Elisabeth Nilsson
2016-01-01
The most common neuropsychiatric condition in children is attention deficit hyperactivity disorder (ADHD), affecting ∼6–9% of the population. ADHD is distinguished by inattention and hyperactive, impulsive behaviors as well as poor performance in various cognitive tasks, often leading to failures at school. Sensory and perceptual dysfunctions have also been noted. Prior research has mainly focused on limitations in executive functioning, where differences are often explained by deficits in prefrontal cortex activation. Less notice has been given to sensory perception and subcortical functioning in ADHD. Recent research has shown that children with an ADHD diagnosis have a deviant auditory brainstem response compared to healthy controls. The aim of the present study was to investigate whether the speech recognition threshold differs between attentive children and children with ADHD symptoms in two environmental sound conditions, with and without external noise. Previous research has shown that children with attention deficits can benefit from white noise exposure during cognitive tasks, and here we investigate whether a noise benefit is present during an auditory perceptual task. For this purpose we used a modified Hagerman’s speech recognition test in which children with and without attention deficits performed a binaural speech recognition task to assess the speech recognition threshold in no-noise and noise (65 dB) conditions. Results showed that the inattentive group displayed a higher speech recognition threshold than typically developing children and that this difference disappeared when noise was presented at a suprathreshold level. From this we conclude that inattention can partly be explained by sensory-perceptual limitations that can possibly be ameliorated through noise exposure. PMID:26858679
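A heavily hedged sketch of how a speech recognition threshold of this kind is commonly estimated: the toy Python staircase below lowers the SNR after correct responses and raises it after errors, converging near 50% intelligibility. The step size, starting level, scoring rule, and simulated listener are assumptions and do not reproduce the modified Hagerman procedure itself.

```python
# Toy adaptive track for estimating a speech recognition threshold (SRT):
# the SNR is lowered after a correct response and raised after an error,
# so the track converges near 50% intelligibility. All parameters and the
# simulated listener are illustrative assumptions only.
import random

def run_track(true_srt_db, n_trials=40, start_snr=10.0, step=2.0, seed=1):
    random.seed(seed)
    snr, reversal_snrs, last_dir = start_snr, [], None
    for _ in range(n_trials):
        # Simulated listener: correct more often when SNR exceeds the true SRT.
        p_correct = 1 / (1 + 10 ** (-(snr - true_srt_db) / 4))
        correct = random.random() < p_correct
        direction = -1 if correct else +1      # down if correct, up if wrong
        if last_dir is not None and direction != last_dir:
            reversal_snrs.append(snr)          # record SNR at each reversal
        snr += direction * step
        last_dir = direction
    # Threshold estimate: mean SNR over the last few reversals.
    tail = reversal_snrs[-6:]
    return sum(tail) / max(len(tail), 1)

print(round(run_track(true_srt_db=-3.0), 1))
```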
Psychoacoustical Measures in Individuals with Congenital Visual Impairment.
Kumar, Kaushlendra; Thomas, Teenu; Bhat, Jayashree S; Ranjan, Rajesh
2017-12-01
In individuals with congenital visual impairment, one sensory modality (vision) is impaired, and this impairment is compensated for by the other sensory modalities. There is evidence that visually impaired individuals outperform normally sighted individuals on a range of auditory tasks, including localization, auditory memory, verbal memory, auditory attention, and other behavioural tasks. The current study aimed to compare temporal resolution, frequency resolution, and speech perception in noise between individuals with congenital visual impairment and normally sighted individuals. Temporal resolution, frequency resolution, and speech perception in noise were measured using MDT, GDT, DDT, SRDT, and SNR50, respectively. Twelve participants with congenital visual impairment, aged 18 to 40 years, were recruited along with an equal number of normally sighted participants. All participants had normal hearing sensitivity and normal middle ear functioning. Individuals with visual impairment showed superior thresholds on MDT, SRDT, and SNR50 compared with normally sighted individuals. This may be due to the complexity of these tasks; MDT, SRDT, and SNR50 are more complex tasks than GDT and DDT. Thus, individuals with visual impairment showed superior auditory processing and speech perception on the more complex auditory perceptual tasks.
Impact of Spatial and Verbal Short-Term Memory Load on Auditory Spatial Attention Gradients.
Golob, Edward J; Winston, Jenna; Mock, Jeffrey R
2017-01-01
Short-term memory load can impair attentional control, but prior work shows that the extent of the effect ranges from being very general to very specific. One factor for the mixed results may be reliance on point estimates of memory load effects on attention. Here we used auditory attention gradients as an analog measure to map-out the impact of short-term memory load over space. Verbal or spatial information was maintained during an auditory spatial attention task and compared to no-load. Stimuli were presented from five virtual locations in the frontal azimuth plane, and subjects focused on the midline. Reaction times progressively increased for lateral stimuli, indicating an attention gradient. Spatial load further slowed responses at lateral locations, particularly in the left hemispace, but had little effect at midline. Verbal memory load had no (Experiment 1), or a minimal (Experiment 2) influence on reaction times. Spatial and verbal load increased switch costs between memory encoding and attention tasks relative to the no load condition. The findings show that short-term memory influences the distribution of auditory attention over space; and that the specific pattern depends on the type of information in short-term memory.
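As a hedged illustration of using the gradient as an analog measure (hypothetical numbers, not the study's data), each condition can be summarized by the slope of reaction time against absolute stimulus eccentricity, as in the sketch below.

```python
import numpy as np

# Virtual source azimuths (deg) and hypothetical mean RTs (ms) per location.
azimuth = np.array([-90, -45, 0, 45, 90])
rt_no_load = np.array([520, 495, 470, 500, 525])
rt_spatial_load = np.array([560, 520, 475, 515, 545])

def gradient_slope(rt, az=azimuth):
    # Slope of RT vs. absolute eccentricity indexes the steepness of the gradient.
    return np.polyfit(np.abs(az), rt, 1)[0]   # ms per degree

print(round(gradient_slope(rt_no_load), 2),
      round(gradient_slope(rt_spatial_load), 2))
```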
ERIC Educational Resources Information Center
Zeamer, Charlotte; Fox Tree, Jean E.
2013-01-01
Literature on auditory distraction has generally focused on the effects of particular kinds of sounds on attention to target stimuli. In support of extensive previous findings that have demonstrated the special role of language as an auditory distractor, we found that a concurrent speech stream impaired recall of a short lecture, especially for…
Kaya, Emine Merve
2017-01-01
Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information—a phenomenon referred to as the ‘cocktail party problem’. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by ‘bottom-up’ sensory-driven factors, as well as ‘top-down’ task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape; with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044012
Seeing tones and hearing rectangles - Attending to simultaneous auditory and visual events
NASA Technical Reports Server (NTRS)
Casper, Patricia A.; Kantowitz, Barry H.
1985-01-01
The allocation of attention in dual-task situations depends on both the overall and the momentary demands associated with both tasks. Subjects in an inclusive- or reaction-time task responded to changes in simultaneous sequences of discrete auditory and visual stimuli. Performance on individual trials was affected by (1) the ratio of stimuli in the two tasks, (2) response demands of the two tasks, and (3) patterns inherent in the demands of one task.
Motor (but not auditory) attention affects syntactic choice.
Pokhoday, Mikhail; Scheepers, Christoph; Shtyrov, Yury; Myachykov, Andriy
2018-01-01
Understanding the determinants of syntactic choice in sentence production is a salient topic in psycholinguistics. Existing evidence suggests that syntactic choice results from an interplay between linguistic and non-linguistic factors, and a speaker's attention to the elements of a described event represents one such factor. Whereas multimodal accounts of attention suggest a role for different modalities in this process, existing studies examining attention effects in syntactic choice are primarily based on visual cueing paradigms. Hence, it remains unclear whether attentional effects on syntactic choice are limited to the visual modality or are indeed more general. This issue is addressed by the current study. Native English participants viewed and described line drawings of simple transitive events while their attention was directed to the location of the agent or the patient of the depicted event by means of either an auditory (monaural beep) or a motor (unilateral key press) lateral cue. Our results show an effect of cue location, with participants producing more passive-voice descriptions in the patient-cued conditions. Crucially, this cue location effect emerged in the motor-cue but not (or substantially less so) in the auditory-cue condition, as confirmed by a reliable interaction between cue location (agent vs. patient) and cue type (auditory vs. motor). Our data suggest that attentional effects on the speaker's syntactic choices are modality-specific and limited to the visual and motor, but not the auditory, domain.
Auditory-Motor Control of Vocal Production during Divided Attention: Behavioral and ERP Correlates.
Liu, Ying; Fan, Hao; Li, Jingting; Jones, Jeffery A; Liu, Peng; Zhang, Baofeng; Liu, Hanjun
2018-01-01
When people hear unexpected perturbations in auditory feedback, they produce rapid compensatory adjustments of their vocal behavior. Recent evidence has shown enhanced vocal compensations and cortical event-related potentials (ERPs) in response to attended pitch feedback perturbations, suggesting that this reflex-like behavior is influenced by selective attention. Less is known, however, about auditory-motor integration for voice control during divided attention. The present cross-modal study investigated the behavioral and ERP correlates of auditory feedback control of vocal pitch production during divided attention. During the production of sustained vowels, 32 young adults were instructed to simultaneously attend to both pitch feedback perturbations they heard and flashing red lights they saw. The presentation rate of the visual stimuli was varied to produce a low, intermediate, and high attentional load. The behavioral results showed that the low-load condition elicited significantly smaller vocal compensations for pitch perturbations than the intermediate-load and high-load conditions. As well, the cortical processing of vocal pitch feedback was also modulated as a function of divided attention. When compared to the low-load and intermediate-load conditions, the high-load condition elicited significantly larger N1 responses and smaller P2 responses to pitch perturbations. These findings provide the first neurobehavioral evidence that divided attention can modulate auditory feedback control of vocal pitch production.
Emergent selectivity for task-relevant stimuli in higher-order auditory cortex
Atiani, Serin; David, Stephen V.; Elgueda, Diego; Locastro, Michael; Radtke-Schuller, Susanne; Shamma, Shihab A.; Fritz, Jonathan B.
2014-01-01
A variety of attention-related effects have been demonstrated in primary auditory cortex (A1). However, an understanding of the functional role of higher auditory cortical areas in guiding attention to acoustic stimuli has been elusive. We recorded from neurons in two tonotopic cortical belt areas in the dorsal posterior ectosylvian gyrus (dPEG) of ferrets trained on a simple auditory discrimination task. Neurons in dPEG showed similar basic auditory tuning properties to A1, but during behavior we observed marked differences between these areas. In the belt areas, changes in neuronal firing rate and response dynamics greatly enhanced responses to target stimuli relative to distractors, allowing for greater attentional selection during active listening. Consistent with existing anatomical evidence, the pattern of sensory tuning and behavioral modulation in auditory belt cortex links the spectro-temporal representation of the whole acoustic scene in A1 to a more abstracted representation of task-relevant stimuli observed in frontal cortex. PMID:24742467
Potts, Geoffrey F; Wood, Susan M; Kothmann, Delia; Martin, Laura E
2008-10-21
Attention directs limited-capacity information processing resources to a subset of available perceptual representations. The mechanisms by which attention selects task-relevant representations for preferential processing are not fully known. Treisman and Gelade's [Treisman, A., Gelade, G., 1980. A feature integration theory of attention. Cognit. Psychol. 12, 97-136.] influential attention model posits that simple features are processed preattentively, in parallel, but that attention is required to serially conjoin multiple features into an object representation. Event-related potentials have provided evidence for this model showing parallel processing of perceptual features in the posterior Selection Negativity (SN) and serial, hierarchic processing of feature conjunctions in the Frontal Selection Positivity (FSP). Most prior studies have been done on conjunctions within one sensory modality while many real-world objects have multimodal features. It is not known if the same neural systems of posterior parallel processing of simple features and frontal serial processing of feature conjunctions seen within a sensory modality also operate on conjunctions between modalities. The current study used ERPs and simultaneously presented auditory and visual stimuli in three task conditions: Attend Auditory (auditory feature determines the target, visual features are irrelevant), Attend Visual (visual features relevant, auditory irrelevant), and Attend Conjunction (target defined by the co-occurrence of an auditory and a visual feature). In the Attend Conjunction condition when the auditory but not the visual feature was a target there was an SN over auditory cortex, when the visual but not auditory stimulus was a target there was an SN over visual cortex, and when both auditory and visual stimuli were targets (i.e. conjunction target) there were SNs over both auditory and visual cortex, indicating parallel processing of the simple features within each modality. In contrast, an FSP was present when either the visual only or both auditory and visual features were targets, but not when only the auditory stimulus was a target, indicating that the conjunction target determination was evaluated serially and hierarchically with visual information taking precedence. This indicates that the detection of a target defined by audio-visual conjunction is achieved via the same mechanism as within a single perceptual modality, through separate, parallel processing of the auditory and visual features and serial processing of the feature conjunction elements, rather than by evaluation of a fused multimodal percept.
Lawo, Vera; Fels, Janina; Oberem, Josefa; Koch, Iring
2014-10-01
Using an auditory variant of task switching, we examined the ability to intentionally switch attention in a dichotic-listening task. In our study, participants responded selectively to one of two simultaneously presented auditory number words (spoken by a female and a male, one for each ear) by categorizing its numerical magnitude. The mapping of gender (female vs. male) and ear (left vs. right) was unpredictable. The to-be-attended feature for gender or ear, respectively, was indicated by a visual selection cue prior to auditory stimulus onset. In Experiment 1, explicitly cued switches of the relevant feature dimension (e.g., from gender to ear) and switches of the relevant feature within a dimension (e.g., from male to female) occurred in an unpredictable manner. We found large performance costs when the relevant feature switched, but switches of the relevant feature dimension incurred only small additional costs. The feature-switch costs were larger in ear-relevant than in gender-relevant trials. In Experiment 2, we replicated these findings using a simplified design (i.e., only within-dimension switches with blocked dimensions). In Experiment 3, we examined preparation effects by manipulating the cueing interval and found a preparation benefit only when ear was cued. Together, our data suggest that the largest part of attentional switch costs arises from reconfiguration at the level of relevant auditory features (e.g., left vs. right) rather than feature dimensions (ear vs. gender). Additionally, our findings suggest that ear-based target selection benefits more from preparation time (i.e., time to direct attention to one ear) than gender-based target selection.
Impairing the useful field of view in natural scenes: Tunnel vision versus general interference.
Ringer, Ryan V; Throneburg, Zachary; Johnson, Aaron P; Kramer, Arthur F; Loschky, Lester C
2016-01-01
A fundamental issue in visual attention is the relationship between the useful field of view (UFOV), the region of visual space where information is encoded within a single fixation, and eccentricity. A common assumption is that impairing attentional resources reduces the size of the UFOV (i.e., tunnel vision). However, most research has not accounted for eccentricity-dependent changes in spatial resolution, potentially conflating fixed visual properties with flexible changes in visual attention. Williams (1988, 1989) argued that foveal loads are necessary to reduce the size of the UFOV, producing tunnel vision. Without a foveal load, it is argued that the attentional decrement is constant across the visual field (i.e., general interference). However, other research asserts that auditory working memory (WM) loads produce tunnel vision. To date, foveal versus auditory WM loads have not been compared to determine if they differentially change the size of the UFOV. In two experiments, we tested the effects of a foveal (rotated L vs. T discrimination) task and an auditory WM (N-back) task on an extrafoveal (Gabor) discrimination task. Gabor patches were scaled for size and processing time to produce equal performance across the visual field under single-task conditions, thus removing the confound of eccentricity-dependent differences in visual sensitivity. The results showed that although both foveal and auditory loads reduced Gabor orientation sensitivity, only the foveal load interacted with retinal eccentricity to produce tunnel vision, clearly demonstrating task-specific changes to the form of the UFOV. This has theoretical implications for understanding the UFOV.
Tokoro, Kazuhiko; Sato, Hironobu; Yamamoto, Mayumi; Nagai, Yoshiko
2015-12-01
Attention is the process by which information is selected, and the thalamus plays an important role in the selective attention of visual and auditory information. Selective attention can be a conscious effort; however, it also occurs subconsciously. The lateral geniculate body (LGB) filters visual information before it reaches the cortex (bottom-up attention). The thalamic reticular nucleus (TRN) provides a strong inhibitory input to both the LGB and the pulvinar. This regulation involves focusing a spotlight on important information as well as inhibiting unnecessary background information. Behavioral contexts more strongly modulate activity of the TRN and pulvinar, influencing feedforward and feedback information transmission between the frontal, temporal, parietal, and occipital cortical areas (top-down attention). The medial geniculate body (MGB) filters auditory information, and the TRN inhibits the MGB. Attentional modulation occurring in the auditory pathway among the cochlea, cochlear nucleus, superior olivary complex, and inferior colliculus is more important than that of the MGB and TRN. We also discuss the attentional consequences of thalamic hemorrhage.
Abikoff, H; Courtney, M E; Szeibel, P J; Koplewicz, H S
1996-05-01
This study evaluated the impact of extra-task stimulation on the academic task performance of children with attention-deficit/hyperactivity disorder (ADHD). Twenty boys with ADHD and 20 nondisabled boys worked on an arithmetic task during high stimulation (music), low stimulation (speech), and no stimulation (silence). The music "distractors" were individualized for each child, and the arithmetic problems were at each child's ability level. A significant Group x Condition interaction was found for number of correct answers. Specifically, the nondisabled youngsters performed similarly under all three auditory conditions. In contrast, the children with ADHD did significantly better under the music condition than speech or silence conditions. However, a significant Group x Order interaction indicated that arithmetic performance was enhanced only for those children with ADHD who received music as the first condition. The facilitative effects of salient auditory stimulation on the arithmetic performance of the children with ADHD provide some support for the underarousal/optimal stimulation theory of ADHD.
Soskey, Laura N; Allen, Paul D; Bennetto, Loisa
2017-08-01
One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416.
de Heering, Adélaïde; Dormal, Giulia; Pelland, Maxime; Lewis, Terri; Maurer, Daphne; Collignon, Olivier
2016-11-21
Is a short and transient period of visual deprivation early in life sufficient to induce lifelong changes in how we attend to, and integrate, simple visual and auditory information [1, 2]? This question is of crucial importance given the recent demonstration in both animals and humans that a period of blindness early in life permanently affects the brain networks dedicated to visual, auditory, and multisensory processing [1-16]. To address this issue, we compared a group of adults who had been treated for congenital bilateral cataracts during early infancy with a group of normally sighted controls on a task requiring simple detection of lateralized visual and auditory targets, presented alone or in combination. Redundancy gains obtained from the audiovisual conditions were similar between groups and surpassed the reaction time distribution predicted by Miller's race model. However, in comparison to controls, cataract-reversal patients were faster at processing simple auditory targets and showed differences in how they shifted attention across modalities. Specifically, they were faster at switching attention from visual to auditory inputs than in the reverse situation, while an opposite pattern was observed for controls. Overall, these results reveal that the absence of visual input during the first months of life does not prevent the development of audiovisual integration but enhances the salience of simple auditory inputs, leading to a different crossmodal distribution of attentional resources between auditory and visual stimuli.
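The race-model comparison mentioned above follows Miller's (1982) inequality, F_AV(t) <= F_A(t) + F_V(t). The sketch below, with hypothetical reaction times, shows the standard empirical test of whether the redundant-target CDF exceeds that bound.

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative distribution of reaction times on a time grid."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

# Hypothetical reaction times (ms) for unimodal and redundant (AV) targets.
rt_a = np.random.default_rng(0).normal(350, 40, 200)
rt_v = np.random.default_rng(1).normal(370, 45, 200)
rt_av = np.random.default_rng(2).normal(300, 35, 200)

t = np.linspace(200, 500, 61)
# Miller's race-model bound: sum of the unimodal CDFs, capped at 1.
bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
violation = ecdf(rt_av, t) - bound
print("max violation:", round(float(violation.max()), 3))  # > 0 implies coactivation
```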
Seither-Preisler, Annemarie; Parncutt, Richard; Schneider, Peter
2014-08-13
Playing a musical instrument is associated with numerous neural processes that continuously modify the human brain and may facilitate characteristic auditory skills. In a longitudinal study, we investigated the auditory and neural plasticity of musical learning in 111 young children (aged 7-9 y) as a function of the intensity of instrumental practice and musical aptitude. Because of the frequent co-occurrence of central auditory processing disorders and attentional deficits, we also tested 21 children with attention deficit (hyperactivity) disorder [AD(H)D]. Magnetic resonance imaging and magnetoencephalography revealed enlarged Heschl's gyri and enhanced right-left hemispheric synchronization of the primary evoked response (P1) to harmonic complex sounds in children who spent more time practicing a musical instrument. The anatomical characteristics were positively correlated with frequency discrimination, reading, and spelling skills. Conversely, AD(H)D children showed reduced volumes of Heschl's gyri and enhanced volumes of the plana temporalia that were associated with a distinct bilateral P1 asynchrony. This may indicate a risk for central auditory processing disorders that are often associated with attentional and literacy problems. The longitudinal comparisons revealed a very high stability of auditory cortex morphology and gray matter volumes, suggesting that the combined anatomical and functional parameters are neural markers of musicality and attention deficits. Educational and clinical implications are considered.
Bellis, Teri James; Billiet, Cassie; Ross, Jody
2011-09-01
Cacace and McFarland (2005) have suggested that the addition of cross-modal analogs will improve the diagnostic specificity of (C)APD (central auditory processing disorder) by ensuring that deficits observed are due to the auditory nature of the stimulus and not to supra-modal or other confounds. Others (e.g., Musiek et al, 2005) have expressed concern about the use of such analogs in diagnosing (C)APD given the uncertainty as to the degree to which cross-modal measures truly are analogous and emphasize the nonmodularity of the CANS (central auditory nervous system) and its function, which precludes modality specificity of (C)APD. To date, no studies have examined the clinical utility of cross-modal (e.g., visual) analogs of central auditory tests in the differential diagnosis of (C)APD. This study investigated performance of children diagnosed with (C)APD, children diagnosed with ADHD (attention deficit hyperactivity disorder), and typically developing children on three diagnostic tests of central auditory function and their corresponding visual analogs. The study sought to determine whether deficits observed in the (C)APD group were restricted to the auditory modality and the degree to which the addition of visual analogs aids in the ability to differentiate among groups. An experimental repeated measures design was employed. Participants consisted of three groups of right-handed children (normal control, n=10; ADHD, n=10; (C)APD, n=7) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of disorders unrelated to their primary diagnosis. Participants in Groups 2 and 3 met current diagnostic criteria for ADHD and (C)APD. Visual analogs of three tests in common clinical use for the diagnosis of (C)APD were used (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; and Duration Patterns [Pinheiro and Musiek, 1985]). Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. ANCOVAs (analyses of covariance) were used to examine effects of group, modality, and laterality (Dichotic/Dichoptic Digits) or response condition (auditory and visual patterning). In addition, planned univariate ANCOVAs were used to examine effects of group on intratest comparison measures (REA, HLD [Humming-Labeling Differential]). Children with both ADHD and (C)APD performed more poorly overall than typically developing children on all tasks, with the (C)APD group exhibiting the poorest performance on the auditory and visual patterns tests but the ADHD and (C)APD groups performing similarly on the Dichotic/Dichoptic Digits task. However, each of the auditory and visual intratest comparison measures, when taken individually, was able to distinguish the (C)APD group from both the normal control and ADHD groups, whose performance did not differ from one another. Results underscore the importance of intratest comparison measures in the interpretation of central auditory tests (American Speech-Language-Hearing Association [ASHA], 2005; American Academy of Audiology [AAA], 2010). Results also support the "non-modular" view of (C)APD in which cross-modal deficits would be predicted based on shared neuroanatomical substrates.
Finally, this study demonstrates that auditory tests alone are sufficient to distinguish (C)APD from supra-modal disorders, with cross-modal analogs adding little if anything to the differential diagnostic process.
Stress improves selective attention towards emotionally neutral left ear stimuli.
Hoskin, Robert; Hunter, M D; Woodruff, P W R
2014-09-01
Research concerning the impact of psychological stress on visual selective attention has produced mixed results. The current paper describes two experiments which utilise a novel auditory oddball paradigm to test the impact of psychological stress on auditory selective attention. Participants had to report the location of emotionally-neutral auditory stimuli, while ignoring task-irrelevant changes in their content. The results of the first experiment, in which speech stimuli were presented, suggested that stress improves the ability to selectively attend to left, but not right ear stimuli. When this experiment was repeated using tonal stimuli the same result was evident, but only for female participants. Females were also found to experience greater levels of distraction in general across the two experiments. These findings support the goal-shielding theory which suggests that stress improves selective attention by reducing the attentional resources available to process task-irrelevant information. The study also demonstrates, for the first time, that this goal-shielding effect extends to auditory perception.
Focal Suppression of Distractor Sounds by Selective Attention in Auditory Cortex.
Schwartz, Zachary P; David, Stephen V
2018-01-01
Auditory selective attention is required for parsing crowded acoustic environments, but cortical systems mediating the influence of behavioral state on auditory perception are not well characterized. Previous neurophysiological studies suggest that attention produces a general enhancement of neural responses to important target sounds versus irrelevant distractors. However, behavioral studies suggest that in the presence of masking noise, attention provides a focal suppression of distractors that compete with targets. Here, we compared effects of attention on cortical responses to masking versus non-masking distractors, controlling for effects of listening effort and general task engagement. We recorded single-unit activity from primary auditory cortex (A1) of ferrets during behavior and found that selective attention decreased responses to distractors masking targets in the same spectral band, compared with spectrally distinct distractors. This suppression enhanced neural target detection thresholds, suggesting that limited attention resources serve to focally suppress responses to distractors that interfere with target detection. Changing effort by manipulating target salience consistently modulated spontaneous but not evoked activity. Task engagement and changing effort tended to affect the same neurons, while attention affected an independent population, suggesting that distinct feedback circuits mediate effects of attention and effort in A1.
Language-Specific Attention Treatment for Aphasia: Description and Preliminary Findings.
Peach, Richard K; Nathan, Meghana R; Beck, Katherine M
2017-02-01
The need for a specific, language-based treatment approach to aphasic impairments associated with attentional deficits is well documented. We describe language-specific attention treatment, a specific skill-based approach for aphasia that exploits increasingly complex linguistic tasks that focus attention. The program consists of eight tasks, some with multiple phases, to assess and treat lexical and sentence processing. Validation results demonstrate that these tasks load on six attentional domains: (1) executive attention; (2) attentional switching; (3) visual selective attention/processing speed; (4) sustained attention; (5) auditory-verbal working memory; and (6) auditory processing speed. The program demonstrates excellent inter- and intrarater reliability and adequate test-retest reliability. Two of four people with aphasia exposed to this program demonstrated good language recovery whereas three of the four participants showed improvements in auditory-verbal working memory. The results provide support for this treatment program in patients with aphasia having no greater than a moderate degree of attentional impairment.
ERIC Educational Resources Information Center
Passow, Susanne; Müller, Maike; Westerhausen, René; Hugdahl, Kenneth; Wartenburger, Isabell; Heekeren, Hauke R.; Lindenberger, Ulman; Li, Shu-Chen
2013-01-01
Multitalker situations confront listeners with a plethora of competing auditory inputs, and hence require selective attention to relevant information, especially when the perceptual saliency of distracting inputs is high. This study augmented the classical forced-attention dichotic listening paradigm by adding an interaural intensity manipulation…
Attentional Capture by Deviant Sounds: A Noncontingent Form of Auditory Distraction?
ERIC Educational Resources Information Center
Vachon, François; Labonté, Katherine; Marsh, John E.
2017-01-01
The occurrence of an unexpected, infrequent sound in an otherwise homogeneous auditory background tends to disrupt the ongoing cognitive task. This "deviation effect" is typically explained in terms of attentional capture whereby the deviant sound draws attention away from the focal activity, regardless of the nature of this activity.…
Auditory Attention to Frequency and Time: An Analogy to Visual Local-Global Stimuli
ERIC Educational Resources Information Center
Justus, Timothy; List, Alexandra
2005-01-01
Two priming experiments demonstrated exogenous attentional persistence to the fundamental auditory dimensions of frequency (Experiment 1) and time (Experiment 2). In a divided-attention task, participants responded to an independent dimension, the identification of three-tone sequence patterns, for both prime and probe stimuli. The stimuli were…
Michalka, Samantha W; Kong, Lingqiang; Rosen, Maya L; Shinn-Cunningham, Barbara G; Somers, David C
2015-08-19
The frontal lobes control wide-ranging cognitive functions; however, functional subdivisions of human frontal cortex are only coarsely mapped. Here, functional magnetic resonance imaging reveals two distinct visual-biased attention regions in lateral frontal cortex, superior precentral sulcus (sPCS) and inferior precentral sulcus (iPCS), anatomically interdigitated with two auditory-biased attention regions, transverse gyrus intersecting precentral sulcus (tgPCS) and caudal inferior frontal sulcus (cIFS). Intrinsic functional connectivity analysis demonstrates that sPCS and iPCS fall within a broad visual-attention network, while tgPCS and cIFS fall within a broad auditory-attention network. Interestingly, we observe that spatial and temporal short-term memory (STM), respectively, recruit visual and auditory attention networks in the frontal lobe, independent of sensory modality. These findings not only demonstrate that both sensory modality and information domain influence frontal lobe functional organization, they also demonstrate that spatial processing co-localizes with visual processing and that temporal processing co-localizes with auditory processing in lateral frontal cortex. Copyright © 2015 Elsevier Inc. All rights reserved.
Countering the Consequences of Ego Depletion: The Effects of Self-Talk on Selective Attention.
Gregersen, Jón; Hatzigeorgiadis, Antonis; Galanis, Evangelos; Comoutos, Nikos; Papaioannou, Athanasios
2017-06-01
This study examined the effects of a self-talk intervention on selective attention in a state of ego depletion. Participants were 62 undergraduate students with a mean age of 20.02 years (SD = 1.17). The experiment was conducted in four consecutive sessions. Following baseline assessment, participants were randomly assigned into experimental and control groups. A two-session training was conducted for the two groups, with the experimental group using self-talk. In the final assessment, participants performed a selective attention test, including visual and auditory components, following a task inducing a state of ego depletion. The analysis showed that participants of the experimental group achieved a higher percentage of correct responses on the visual test and produced faster reaction times in both the visual and the auditory test compared with participants of the control group. The results of this study suggest that the use of self-talk can benefit selective attention for participants in states of ego depletion.
Development of Auditory Selective Attention: Why Children Struggle to Hear in Noisy Environments
2015-01-01
Children’s hearing deteriorates markedly in the presence of unpredictable noise. To explore why, 187 school-age children (4–11 years) and 15 adults performed a tone-in-noise detection task, in which the masking noise varied randomly between every presentation. Selective attention was evaluated by measuring the degree to which listeners were influenced by (i.e., gave weight to) each spectral region of the stimulus. Psychometric fits were also used to estimate levels of internal noise and bias. Levels of masking were found to decrease with age, becoming adult-like by 9–11 years. This change was explained by improvements in selective attention alone, with older listeners better able to ignore noise similar in frequency to the target. Consistent with this, age-related differences in masking were abolished when the noise was made more distant in frequency to the target. This work offers novel evidence that improvements in selective attention are critical for the normal development of auditory judgments. PMID:25706591
Sustained attention in children with primary language impairment: a meta-analysis.
Ebert, Kerry Danahy; Kohnert, Kathryn
2011-10-01
This study provides a meta-analysis of the difference between children with primary or specific language impairment (LI) and their typically developing peers on tasks of sustained attention. The meta-analysis seeks to determine whether children with LI demonstrate subclinical deficits in sustained attention and, if so, under what conditions. Articles that reported empirical data from the performance of children with LI, in comparison to typically developing peers, on a task assessing sustained attention were considered for inclusion. Twenty-eight effect sizes were included in the meta-analysis. Two moderator analyses addressed the effects of stimulus modality and attention-deficit/hyperactivity disorder exclusion. In addition, reaction time outcomes and the effects of task variables were summarized qualitatively. The meta-analysis supports the existence of sustained attention deficits in children with LI in both auditory and visual modalities, as demonstrated by reduced accuracy compared with typically developing peers. Larger effect sizes are found in tasks that use auditory-linguistic stimuli than in studies that use visual stimuli. Future research should consider the role that sustained attention weaknesses play in LI as well as the implications for clinical and research assessment tasks. Methodological recommendations are summarized.
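To make the quantitative machinery behind a meta-analysis like this one concrete, the following is a minimal Python sketch of how standardized mean differences (Hedges' g) from individual studies can be pooled with a DerSimonian-Laird random-effects model. The per-study numbers are invented for illustration and are not taken from the meta-analysis itself.

```python
# Minimal sketch (not the authors' analysis): pooling bias-corrected standardized
# mean differences across hypothetical studies with a DerSimonian-Laird model.
import numpy as np

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Bias-corrected standardized mean difference and its approximate variance."""
    s_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    j = 1.0 - 3.0 / (4.0 * (n1 + n2 - 2) - 1.0)            # small-sample correction
    g = j * d
    v = (n1 + n2) / (n1 * n2) + g**2 / (2.0 * (n1 + n2))   # approximate variance of g
    return g, v

def random_effects_pool(gs, vs):
    """DerSimonian-Laird pooled effect and 95% confidence interval."""
    gs, vs = np.asarray(gs, float), np.asarray(vs, float)
    w = 1.0 / vs                                  # fixed-effect weights
    g_fixed = np.sum(w * gs) / np.sum(w)
    q = np.sum(w * (gs - g_fixed) ** 2)           # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(gs) - 1)) / c)      # between-study variance
    w_star = 1.0 / (vs + tau2)
    g_pooled = np.sum(w_star * gs) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return g_pooled, (g_pooled - 1.96 * se, g_pooled + 1.96 * se)

# Hypothetical per-study accuracy data: (mean_TD, mean_LI, sd_TD, sd_LI, n_TD, n_LI)
studies = [(0.92, 0.85, 0.06, 0.09, 25, 24), (0.88, 0.79, 0.08, 0.11, 18, 18)]
effects = [hedges_g(*s) for s in studies]
print(random_effects_pool([e[0] for e in effects], [e[1] for e in effects]))
```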
The spectrotemporal filter mechanism of auditory selective attention
Lakatos, Peter; Musacchia, Gabriella; O’Connell, Monica N.; Falchier, Arnaud Y.; Javitt, Daniel C.; Schroeder, Charles E.
2013-01-01
While we have convincing evidence that attention to auditory stimuli modulates neuronal responses at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms are unknown. We found that attending to rhythmic auditory streams resulted in the entrainment of ongoing oscillatory activity reflecting rhythmic excitability fluctuations in A1. Strikingly, while the rhythm of the entrained oscillations in A1 neuronal ensembles reflected the temporal structure of the attended stream, the phase depended on the attended frequency content. Counter-phase entrainment across differently tuned A1 regions resulted in both the amplification and sharpening of responses at attended time points, in essence acting as a spectrotemporal filter mechanism. Our data suggest that selective attention generates a dynamically evolving model of attended auditory stimulus streams in the form of modulatory subthreshold oscillations across tonotopically organized neuronal ensembles in A1 that enhances the representation of attended stimuli. PMID:23439126
Buchan, Julie N; Munhall, Kevin G
2011-01-01
Conflicting visual speech information can influence the perception of acoustic speech, causing an illusory percept of a sound not present in the actual acoustic speech (the McGurk effect). We examined whether participants can voluntarily selectively attend to either the auditory or visual modality by instructing participants to pay attention to the information in one modality and to ignore competing information from the other modality. We also examined how performance under these instructions was affected by weakening the influence of the visual information by manipulating the temporal offset between the audio and video channels (experiment 1), and the spatial frequency information present in the video (experiment 2). Gaze behaviour was also monitored to examine whether attentional instructions influenced the gathering of visual information. While task instructions did have an influence on the observed integration of auditory and visual speech information, participants were unable to completely ignore conflicting information, particularly information from the visual stream. Manipulating temporal offset had a more pronounced interaction with task instructions than manipulating the amount of visual information. Participants' gaze behaviour suggests that the attended modality influences the gathering of visual information in audiovisual speech perception.
Birkett, Emma E; Talcott, Joel B
2012-01-01
Motor timing tasks have been employed in studies of neurodevelopmental disorders such as developmental dyslexia and ADHD, where they provide an index of temporal processing ability. Investigations of these disorders have used different stimulus parameters within the motor timing tasks that are likely to affect performance measures. Here we assessed the effect of auditory and visual pacing stimuli on synchronised motor timing performance and its relationship with cognitive and behavioural predictors that are commonly used in the diagnosis of these highly prevalent developmental disorders. Twenty-one children (mean age 9.6 years) completed a finger tapping task in two stimulus conditions, together with additional psychometric measures. As anticipated, synchronisation to the beat (ISI 329 ms) was less accurate in the visually paced condition. Decomposition of timing variance indicated that this effect resulted from differences in the way that visual and auditory paced tasks are processed by central timekeeping and associated peripheral implementation systems. The ability to utilise an efficient processing strategy on the visual task correlated with both reading and sustained attention skills. Dissociations between these patterns of relationship across task modality suggest that not all timing tasks are equivalent.
Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a ‘Cocktail Party’
Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David
2013-01-01
Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218
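The envelope-tracking measure described above can be approximated, under simplifying assumptions, by correlating the speech amplitude envelope with a low-pass-filtered neural channel at a range of lags. The sketch below is illustrative Python, not the authors' MEG analysis; the signals, sampling rate, and 8 Hz cutoff are assumptions.

```python
# Minimal sketch (assumptions, not the authors' pipeline): peak lagged correlation
# between the speech amplitude envelope and a low-pass-filtered neural signal.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def lowpass(x, fs, cutoff=8.0, order=4):
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, x)

def envelope_tracking(speech, neural, fs, max_lag_s=0.3):
    """Return the peak Pearson correlation and the lag (s) at which it occurs."""
    env = np.abs(hilbert(speech))      # amplitude envelope of the speech waveform
    env = lowpass(env, fs)             # keep slow (< ~8 Hz) envelope fluctuations
    sig = lowpass(neural, fs)
    env = (env - env.mean()) / env.std()
    sig = (sig - sig.mean()) / sig.std()
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    r = [np.corrcoef(env[max(0, -l):len(env) - max(0, l)],
                     sig[max(0, l):len(sig) - max(0, -l)])[0, 1] for l in lags]
    best = int(np.argmax(r))
    return r[best], lags[best] / fs

# Usage with made-up signals sampled at 200 Hz (neural signal lags speech by 0.1 s):
fs = 200
speech = np.random.randn(10 * fs)
neural = np.roll(np.abs(hilbert(speech)), int(0.1 * fs)) + 0.5 * np.random.randn(10 * fs)
print(envelope_tracking(speech, neural, fs))
```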
Karimi, D; Mondor, T A; Mann, D D
2008-01-01
The operation of agricultural vehicles is a multitask activity that requires proper distribution of attentional resources. Human factors theories suggest that proper utilization of the operator's sensory capacities under such conditions can improve the operator's performance and reduce the operator's workload. Using a tractor driving simulator, this study investigated whether auditory cues can be used to improve performance of the operator of an agricultural vehicle. Steering of a vehicle was simulated in visual mode (where driving error was shown to the subject using a lightbar) and in auditory mode (where a pair of speakers were used to convey the driving error direction and/or magnitude). A secondary task was also introduced in order to simulate the monitoring of an attached machine. This task included monitoring of two identical displays, which were placed behind the simulator, and responding to them, when needed, using a joystick. This task was also implemented in auditory mode (in which a beep signaled the subject to push the proper button when a response was needed) and in visual mode (in which there was no beep and visual monitoring of the displays was necessary). Two levels of difficulty of the monitoring task were used. Deviation of the simulated vehicle from a desired straight line was used as the measure of performance in the steering task, and reaction time to the displays was used as the measure of performance in the monitoring task. Results of the experiments showed that steering performance was significantly better when steering was a visual task (driving errors were 40% to 60% of the driving errors in auditory mode), although subjective evaluations showed that auditory steering could be easier, depending on the implementation. Performance in the monitoring task was significantly better for auditory implementation (reaction time was approximately 6 times shorter), and this result was strongly supported by subjective ratings. The majority of the subjects preferred the combination of visual mode for the steering task and auditory mode for the monitoring task.
Jacoby, Oscar; Hall, Sarah E; Mattingley, Jason B
2012-07-16
Mechanisms of attention are required to prioritise goal-relevant sensory events under conditions of stimulus competition. According to the perceptual load model of attention, the extent to which task-irrelevant inputs are processed is determined by the relative demands of discriminating the target: the more perceptually demanding the target task, the less unattended stimuli will be processed. Although much evidence supports the perceptual load model for competing stimuli within a single sensory modality, the effects of perceptual load in one modality on distractor processing in another is less clear. Here we used steady-state evoked potentials (SSEPs) to measure neural responses to irrelevant visual checkerboard stimuli while participants performed either a visual or auditory task that varied in perceptual load. Consistent with perceptual load theory, increasing visual task load suppressed SSEPs to the ignored visual checkerboards. In contrast, increasing auditory task load enhanced SSEPs to the ignored visual checkerboards. This enhanced neural response to irrelevant visual stimuli under auditory load suggests that exhausting capacity within one modality selectively compromises inhibitory processes required for filtering stimuli in another. Copyright © 2012 Elsevier Inc. All rights reserved.
Speech-Processing Fatigue in Children: Auditory Event-Related Potential and Behavioral Measures
Gustafson, Samantha J.; Rentmeester, Lindsey; Hornsby, Benjamin W. Y.; Bess, Fred H.
2017-01-01
Purpose Fatigue related to speech processing is an understudied area that may have significant negative effects, especially in children who spend the majority of their school days listening to classroom instruction. Method This study examined the feasibility of using auditory P300 responses and behavioral indices (lapses of attention and self-report) to measure fatigue resulting from sustained listening demands in 27 children (M = 9.28 years). Results Consistent with predictions, increased lapses of attention, longer reaction times, reduced P300 amplitudes to infrequent target stimuli, and self-report of greater fatigue were observed after the completion of a series of demanding listening tasks compared with the baseline values. The event-related potential responses correlated with the behavioral measures of performance. Conclusion These findings suggest that neural and behavioral responses indexing attention and processing resources show promise as effective markers of fatigue in children. PMID:28595261
Evaluation of artifact-corrected electroencephalographic (EEG) training: a pilot study.
La Marca, Jeffry P; Cruz, Daniel; Fandino, Jennifer; Cacciaguerra, Fabiana R; Fresco, Joseph J; Guerra, Austin T
2018-07-01
This double-blind study examined the effect of electromyographical (EMG) artifacts, which contaminate electroencephalographical (EEG) signals, by comparing artifact-corrected (AC) and non-artifact-corrected (NAC) neurofeedback (NF) training procedures. 14 unmedicated college students were randomly assigned to two groups: AC (n = 7) or NAC (n = 7). Both groups received 12 sessions of NF and were trained using identical NF treatment protocols to reduce their theta/beta power ratios (TBPR). Outcomes on a continuous performance test revealed that the AC group had statistically significant increases across measures of auditory and visual attention. The NAC group showed smaller gains that only reached statistical significance on measures of visual attention. Only the AC group reduced their TBPR, the NAC group did not. AC NF appears to play an important role during training that leads to improvements in both auditory and visual attention. Additional research is required to confirm the results of this study.
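For readers unfamiliar with the trained quantity, the theta/beta power ratio (TBPR) can be estimated from a single EEG channel with Welch's method, as in the hedged Python sketch below. The band edges (4-8 Hz theta, 13-21 Hz beta) and windowing are assumptions and do not reproduce the study's clinical neurofeedback software.

```python
# Minimal sketch (assumed band edges, not the study's software): estimate the
# theta/beta power ratio (TBPR) from one EEG channel using Welch's method.
import numpy as np
from scipy.signal import welch

def band_power(freqs, psd, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])   # integrate the PSD over the band

def theta_beta_ratio(eeg, fs, theta=(4.0, 8.0), beta=(13.0, 21.0)):
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))   # 2-s analysis windows
    return band_power(freqs, psd, *theta) / band_power(freqs, psd, *beta)

# Usage with a made-up 30-s recording sampled at 256 Hz:
fs = 256
eeg = np.random.randn(30 * fs)
print(theta_beta_ratio(eeg, fs))
```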
2012-01-01
Background Prosthetic hand users have to rely extensively on visual feedback, which seems to lead to a high conscious burden for the users, in order to manipulate their prosthetic devices. Indirect methods (electro-cutaneous, vibrotactile, auditory cues) have been used to convey information from the artificial limb to the amputee, but the usability and advantages of these feedback methods were explored mainly by looking at the performance results, not taking into account measurements of the user’s mental effort, attention, and emotions. The main objective of this study was to explore the feasibility of using psycho-physiological measurements to assess cognitive effort when manipulating a robot hand with and without the usage of a sensory substitution system based on auditory feedback, and how these psycho-physiological recordings relate to temporal and grasping performance in a static setting. Methods 10 male subjects (26+/-years old), participated in this study and were asked to come for 2 consecutive days. On the first day the experiment objective, tasks, and experiment setting was explained. Then, they completed a 30 minutes guided training. On the second day each subject was tested in 3 different modalities: Auditory Feedback only control (AF), Visual Feedback only control (VF), and Audiovisual Feedback control (AVF). For each modality they were asked to perform 10 trials. At the end of each test, the subject had to answer the NASA TLX questionnaire. Also, during the test the subject’s EEG, ECG, electro-dermal activity (EDA), and respiration rate were measured. Results The results show that a higher mental effort is needed when the subjects rely only on their vision, and that this effort seems to be reduced when auditory feedback is added to the human-machine interaction (multimodal feedback). Furthermore, better temporal performance and better grasping performance was obtained in the audiovisual modality. Conclusions The performance improvements when using auditory cues, along with vision (multimodal feedback), can be attributed to a reduced attentional demand during the task, which can be attributed to a visual “pop-out” or enhance effect. Also, the NASA TLX, the EEG’s Alpha and Beta band, and the Heart Rate could be used to further evaluate sensory feedback systems in prosthetic applications. PMID:22682425
Montefinese, Maria; Semenza, Carlo
2018-05-17
It is widely accepted that different number-related tasks, including solving simple addition and subtraction, may induce attentional shifts on the so-called mental number line, which represents larger numbers on the right and smaller numbers on the left. Recently, it has been shown that different number-related tasks also employ spatial attention shifts along with general cognitive processes. Here we investigated for the first time whether number line estimation and complex mental arithmetic recruit a common mechanism in healthy adults. Participants' performance in two-digit mental additions and subtractions using visual stimuli was compared with their performance in a mental bisection task using auditory numerical intervals. Results showed significant correlations between participants' performance in number line bisection and that in two-digit mental arithmetic operations, especially in additions, providing a first proof of a shared cognitive mechanism (or multiple shared cognitive mechanisms) between auditory number bisection and complex mental calculation.
Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention.
Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei
2016-01-13
An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamical facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distribution of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features.
Spatiotemporal dynamics of auditory attention synchronize with speech
Wöstmann, Malte; Herrmann, Björn; Maess, Burkhard
2016-01-01
Attention plays a fundamental role in selectively processing stimuli in our environment despite distraction. Spatial attention induces increasing and decreasing power of neural alpha oscillations (8–12 Hz) in brain regions ipsilateral and contralateral to the locus of attention, respectively. This study tested whether the hemispheric lateralization of alpha power codes not just the spatial location but also the temporal structure of the stimulus. Participants attended to spoken digits presented to one ear and ignored tightly synchronized distracting digits presented to the other ear. In the magnetoencephalogram, spatial attention induced lateralization of alpha power in parietal, but notably also in auditory cortical regions. This alpha power lateralization was not maintained steadily but fluctuated in synchrony with the speech rate and lagged the time course of low-frequency (1–5 Hz) sensory synchronization. Higher amplitude of alpha power modulation at the speech rate was predictive of a listener’s enhanced performance of stream-specific speech comprehension. Our findings demonstrate that alpha power lateralization is modulated in tune with the sensory input and acts as a spatiotemporal filter controlling the read-out of sensory content. PMID:27001861
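A common way to quantify the alpha-power lateralization described above is an index of the form (ipsilateral - contralateral) / (ipsilateral + contralateral) computed on 8-12 Hz power envelopes. The Python sketch below illustrates that computation under assumed channel groupings and sampling rate; it is not the authors' MEG pipeline.

```python
# Minimal sketch (not the authors' pipeline): a time-resolved alpha lateralization
# index from band-limited power envelopes, with assumed hemisphere channel groups.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_power_envelope(x, fs, band=(8.0, 12.0), order=4):
    b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x))) ** 2   # instantaneous alpha power

def lateralization_index(left_chans, right_chans, fs, attend="left"):
    """left_chans, right_chans: arrays of shape (n_channels, n_samples)."""
    p_left = np.mean([alpha_power_envelope(ch, fs) for ch in left_chans], axis=0)
    p_right = np.mean([alpha_power_envelope(ch, fs) for ch in right_chans], axis=0)
    ipsi, contra = (p_left, p_right) if attend == "left" else (p_right, p_left)
    return (ipsi - contra) / (ipsi + contra)          # time-resolved index

# Usage with made-up data: two channels per hemisphere, 10 s at 500 Hz.
fs = 500
data = np.random.randn(4, 10 * fs)
ali = lateralization_index(data[:2], data[2:], fs, attend="left")
print(ali.mean())
```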
The division of attention and the human auditory evoked potential
NASA Technical Reports Server (NTRS)
Hink, R. F.; Van Voorhis, S. T.; Hillyard, S. A.; Smith, T. S.
1977-01-01
The sensitivity of the scalp-recorded, auditory evoked potential to selective attention was examined while subjects responded to stimuli presented to one ear (focused attention) and to both ears (divided attention). The amplitude of the N1 component was found to be largest to stimuli in the ear upon which attention was to be focused, smallest to stimuli in the ear to be ignored, and intermediate to stimuli in both ears when attention was divided. The results are interpreted as supporting a capacity model of attention.
Simone, Ashley N; Bédard, Anne-Claude V; Marks, David J; Halperin, Jeffrey M
2016-01-01
The aim of this study was to examine working memory (WM) modalities (visual-spatial and auditory-verbal) and processes (maintenance and manipulation) in children with and without attention-deficit/hyperactivity disorder (ADHD). The sample consisted of 63 8-year-old children with ADHD and an age- and sex-matched non-ADHD comparison group (N=51). Auditory-verbal and visual-spatial WM were assessed using the Digit Span and Spatial Span subtests from the Wechsler Intelligence Scale for Children Integrated - Fourth Edition. WM maintenance and manipulation were assessed via forward and backward span indices, respectively. Data were analyzed using a 3-way Group (ADHD vs. non-ADHD)×Modality (Auditory-Verbal vs. Visual-Spatial)×Condition (Forward vs. Backward) Analysis of Variance (ANOVA). Secondary analyses examined differences between Combined and Predominantly Inattentive ADHD presentations. Significant Group×Condition (p=.02) and Group×Modality (p=.03) interactions indicated differentially poorer performance by those with ADHD on backward relative to forward and visual-spatial relative to auditory-verbal tasks, respectively. The 3-way interaction was not significant. Analyses targeting ADHD presentations yielded a significant Group×Condition interaction (p=.009) such that children with ADHD-Predominantly Inattentive Presentation performed differentially poorer on backward relative to forward tasks compared to the children with ADHD-Combined Presentation. Findings indicate a specific pattern of WM weaknesses (i.e., WM manipulation and visual-spatial tasks) for children with ADHD. Furthermore, differential patterns of WM performance were found for children with ADHD-Predominantly Inattentive versus Combined Presentations. (JINS, 2016, 22, 1-11).
Dunlop, William A.; Enticott, Peter G.; Rajan, Ramesh
2016-01-01
Autism Spectrum Disorder (ASD), characterized by impaired communication skills and repetitive behaviors, can also result in differences in sensory perception. Individuals with ASD often perform normally in simple auditory tasks but poorly compared to typically developed (TD) individuals on complex auditory tasks like discriminating speech from complex background noise. A common trait of individuals with ASD is hypersensitivity to auditory stimulation. No studies to our knowledge consider whether hypersensitivity to sounds is related to differences in speech-in-noise discrimination. We provide novel evidence that individuals with high-functioning ASD show poor performance compared to TD individuals in a speech-in-noise discrimination task with an attentionally demanding background noise, but not in a purely energetic noise. Further, we demonstrate in our small sample that speech-hypersensitivity does not appear to predict performance in the speech-in-noise task. The findings support the argument that an attentional deficit, rather than a perceptual deficit, affects the ability of individuals with ASD to discriminate speech from background noise. Finally, we piloted a novel questionnaire that measures difficulty hearing in noisy environments, and sensitivity to non-verbal and verbal sounds. Psychometric analysis using 128 TD participants provided novel evidence for a difference in sensitivity to non-verbal and verbal sounds, and these findings were reinforced by participants with ASD who also completed the questionnaire. The study was limited by a small and high-functioning sample of participants with ASD. Future work could test larger sample sizes and include lower-functioning ASD participants. PMID:27555814
Wronkiewicz, Mark; Larson, Eric; Lee, Adrian Kc
2016-10-01
Brain-computer interface (BCI) technology allows users to generate actions based solely on their brain signals. However, current non-invasive BCIs generally classify brain activity recorded from surface electroencephalography (EEG) electrodes, which can hinder the application of findings from modern neuroscience research. In this study, we use source imaging, a neuroimaging technique that projects EEG signals onto the surface of the brain, in a BCI classification framework. This allowed us to incorporate prior research from functional neuroimaging to target activity from a cortical region involved in auditory attention. Classifiers trained to detect attention switches performed better with source imaging projections than with EEG sensor signals. Within source imaging, including subject-specific anatomical MRI information (instead of using a generic head model) further improved classification performance. This source-based strategy also reduced accuracy variability across three dimensionality reduction techniques, a major design choice in most BCIs. Our work shows that source imaging provides clear quantitative and qualitative advantages to BCIs and highlights the value of incorporating modern neuroscience knowledge and methods into BCI systems.
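The classification strategy described above can be illustrated, very roughly, by projecting sensor-space epochs through a precomputed linear inverse operator, averaging source activity within a region of interest, and training a standard classifier. The Python sketch below uses random data, an invented inverse matrix, and an arbitrary ROI purely for illustration; it is not the authors' implementation.

```python
# Minimal sketch (hypothetical inverse operator, labels, and ROI): source-space
# projection of single-trial EEG followed by a simple attention-switch classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_samples, n_sources = 120, 64, 100, 500

epochs = rng.standard_normal((n_trials, n_sensors, n_samples))   # sensor-space EEG
labels = rng.integers(0, 2, n_trials)                            # 1 = attention switch
inverse = rng.standard_normal((n_sources, n_sensors)) * 0.1      # assumed inverse operator
roi = np.arange(40)                                              # assumed ROI source indices

# Source projection per trial: (n_sources, n_samples) = inverse @ (n_sensors, n_samples)
source = np.einsum("ks,tsn->tkn", inverse, epochs)
features = source[:, roi, :].mean(axis=1)     # ROI-averaged time course per trial

clf = LinearDiscriminantAnalysis()
print(cross_val_score(clf, features, labels, cv=5).mean())
```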
Value-driven attentional capture in the auditory domain.
Anderson, Brian A
2016-01-01
It is now well established that the visual attention system is shaped by reward learning. When visual features are associated with a reward outcome, they acquire high priority and can automatically capture visual attention. To date, evidence for value-driven attentional capture has been limited entirely to the visual system. In the present study, I demonstrate that previously reward-associated sounds also capture attention, interfering more strongly with the performance of a visual task. This finding suggests that value-driven attention reflects a broad principle of information processing that can be extended to other sensory modalities and that value-driven attention can bias cross-modal stimulus competition.
Broken Expectations: Violation of Expectancies, Not Novelty, Captures Auditory Attention
ERIC Educational Resources Information Center
Vachon, Francois; Hughes, Robert W.; Jones, Dylan M.
2012-01-01
The role of memory in behavioral distraction by auditory attentional capture was investigated: We examined whether capture is a product of the novelty of the capturing event (i.e., the absence of a recent memory for the event) or its violation of learned expectancies on the basis of a memory for an event structure. Attentional capture--indicated…
Orienting Attention in Audition and between Audition and Vision: Young and Elderly Subjects.
ERIC Educational Resources Information Center
Robin, Donald A.; Rizzo, Matthew
1992-01-01
Thirty young and 10 elderly adults were assessed on orienting auditory attention, in a mixed-modal condition in which stimuli were either auditory or visual. Findings suggest that the mechanisms involved in orienting attention operate in audition and that individuals may allocate their processing resources among multiple sensory pools. (Author/JDD)
Identifying auditory attention with ear-EEG: cEEGrid versus high-density cap-EEG comparison
NASA Astrophysics Data System (ADS)
Bleichner, Martin G.; Mirkovic, Bojana; Debener, Stefan
2016-12-01
Objective. This study presents a direct comparison of a classical EEG cap setup with a new around-the-ear electrode array (cEEGrid) to gain a better understanding of the potential of ear-centered EEG. Approach. Concurrent EEG was recorded from a classical scalp EEG cap and two cEEGrids that were placed around the left and the right ear. Twenty participants performed a spatial auditory attention task in which three sound streams were presented simultaneously. The sound streams were three seconds long and differed in the direction of origin (front, left, right) and the number of beats (3, 4, 5 respectively), as well as the timbre and pitch. The participants had to attend to either the left or the right sound stream. Main results. We found clear attention modulated ERP effects reflecting the attended sound stream for both electrode setups, which agreed in morphology and effect size. A single-trial template matching classification showed that the direction of attention could be decoded significantly above chance (50%) for at least 16 out of 20 participants for both systems. The comparably high classification results of the single trial analysis underline the quality of the signal recorded with the cEEGrids. Significance. These findings are further evidence for the feasibility of around-the-ear EEG recordings and demonstrate that well-described ERPs can be measured. We conclude that concealed behind-the-ear EEG recordings can be an alternative to classical cap EEG acquisition for auditory attention monitoring.
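The single-trial template matching classification mentioned above can be sketched as follows: average the training trials for each attended direction to form templates, then assign each test trial to the template with which it correlates most strongly. The Python below is a hedged illustration with invented data shapes, not the authors' exact classifier.

```python
# Minimal sketch (assumed data shapes): single-trial template matching for decoding
# the attended direction (0 = left, 1 = right) by correlation with class templates.
import numpy as np

def fit_templates(trials, labels):
    """trials: (n_trials, n_channels, n_samples); labels: 0 = left, 1 = right."""
    return {c: trials[labels == c].mean(axis=0) for c in (0, 1)}

def classify(trial, templates):
    corrs = {c: np.corrcoef(trial.ravel(), tmpl.ravel())[0, 1]
             for c, tmpl in templates.items()}
    return max(corrs, key=corrs.get)   # class whose template correlates best

# Usage with made-up data: 40 training and 10 test trials, 18 channels, 300 samples.
rng = np.random.default_rng(1)
train = rng.standard_normal((40, 18, 300))
train_labels = np.repeat([0, 1], 20)
templates = fit_templates(train, train_labels)
test = rng.standard_normal((10, 18, 300))
print([classify(tr, templates) for tr in test])
```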
Top-down modulation of visual and auditory cortical processing in aging.
Guerreiro, Maria J S; Eck, Judith; Moerel, Michelle; Evers, Elisabeth A T; Van Gerven, Pascal W M
2015-02-01
Age-related cognitive decline has been accounted for by an age-related deficit in top-down attentional modulation of sensory cortical processing. In light of recent behavioral findings showing that age-related differences in selective attention are modality dependent, our goal was to investigate the role of sensory modality in age-related differences in top-down modulation of sensory cortical processing. This question was addressed by testing younger and older individuals in several memory tasks while undergoing fMRI. Throughout these tasks, perceptual features were kept constant while attentional instructions were varied, allowing us to devise all combinations of relevant and irrelevant, visual and auditory information. We found no top-down modulation of auditory sensory cortical processing in either age group. In contrast, we found top-down modulation of visual cortical processing in both age groups, and this effect did not differ between age groups. That is, older adults enhanced cortical processing of relevant visual information and suppressed cortical processing of visual distractors during auditory attention to the same extent as younger adults. The present results indicate that older adults are capable of suppressing irrelevant visual information in the context of cross-modal auditory attention, and thereby challenge the view that age-related attentional and cognitive decline is due to a general deficit in the ability to suppress irrelevant information. Copyright © 2014 Elsevier B.V. All rights reserved.
Hausfeld, Lars; Riecke, Lars; Formisano, Elia
2018-06-01
Often, in everyday life, we encounter auditory scenes comprising multiple simultaneous sounds and succeed in selectively attending to only one sound, typically the most relevant for ongoing behavior. Studies using basic sounds and two-talker stimuli have shown that auditory selective attention aids this by enhancing the neural representations of the attended sound in auditory cortex. It remains unknown, however, whether and how this selective attention mechanism operates on representations of auditory scenes containing natural sounds of different categories. In this high-field fMRI study we presented participants with simultaneous voices and musical instruments while manipulating their focus of attention. We found an attentional enhancement of neural sound representations in temporal cortex, as defined by spatial activation patterns, at locations that depended on the attended category (i.e., voices or instruments). In contrast, we found that in frontal cortex the site of enhancement was independent of the attended category and the same regions could flexibly represent any attended sound regardless of its category. These results are relevant for elucidating the interacting mechanisms of bottom-up and top-down processing when listening to real-life scenes comprised of multiple sound categories. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Auditory selective attention in adolescents with major depression: An event-related potential study.
Greimel, E; Trinkl, M; Bartling, J; Bakos, S; Grossheinrich, N; Schulte-Körne, G
2015-02-01
Major depression (MD) is associated with deficits in selective attention. Previous studies in adults with MD using event-related potentials (ERPs) reported abnormalities in the neurophysiological correlates of auditory selective attention. However, it is yet unclear whether these findings can be generalized to MD in adolescence. Thus, the aim of the present ERP study was to explore the neural mechanisms of auditory selective attention in adolescents with MD. 24 male and female unmedicated adolescents with MD and 21 control subjects were included in the study. ERPs were collected during an auditory oddball paradigm. Depressive adolescents tended to show a longer N100 latency to target and non-target tones. Moreover, MD subjects showed a prolonged latency of the P200 component to targets. Across groups, longer P200 latency was associated with a decreased tendency of disinhibited behavior as assessed by a behavioral questionnaire. To be able to draw more precise conclusions about differences between the neural bases of selective attention in adolescents vs. adults with MD, future studies should include both age groups and apply the same experimental setting across all subjects. The study provides strong support for abnormalities in the neurophysiological bases of selective attention in adolescents with MD at early stages of auditory information processing. Absent group differences in later ERP components reflecting voluntary attentional processes stand in contrast to results reported in adults with MD and may suggest that adolescents with MD possess mechanisms to compensate for abnormalities in the early stages of selective attention. Copyright © 2014 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Kercood, Suneeta; Grskovic, Janice A.
2010-01-01
Two exploratory studies assessed the effects of an intervention on the math problem solving of students with Attention Deficit Hyperactivity Disorder (ADHD). In the first study, students were assessed on a visual task in a high stimulation classroom analog setting with and without the use of a fine motor activity. Results showed that the fine…
Cha, Yuri; Kim, Young; Hwang, Sujin; Chung, Yijung
2014-01-01
Motor relearning protocols should involve task-oriented movement, focused attention, and repetition of desired movements. This study investigated the effect of intensive gait training with rhythmic auditory stimulation on postural control and gait performance in individuals with chronic hemiparetic stroke. Twenty patients with chronic hemiparetic stroke participated in this study. Subjects in the Rhythmic auditory stimulation training group (10 subjects) underwent intensive gait training with rhythmic auditory stimulation for a period of 6 weeks (30 min/day, five days/week), while those in the control group (10 subjects) underwent intensive gait training for the same duration. Two clinical measures, the Berg balance scale and the stroke specific quality of life scale, and a 2-dimensional gait analysis system were used as outcome measures. To provide rhythmic auditory stimulation during gait training, the MIDI Cuebase musical instrument digital interface program and KM Player version 3.3 were utilized for this study. Intensive gait training with rhythmic auditory stimulation resulted in significant improvement in scores on the Berg balance scale, gait velocity, cadence, stride length and double support period on the affected side, and stroke specific quality of life scale compared with the control group after training. Findings of this study suggest that intensive gait training with rhythmic auditory stimulation improves balance and gait performance as well as quality of life in individuals with chronic hemiparetic stroke.
Selective attention in normal and impaired hearing.
Shinn-Cunningham, Barbara G; Best, Virginia
2008-12-01
A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.
Pre-attentive, context-specific representation of fear memory in the auditory cortex of rat.
Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu
2013-01-01
Neural representation in the auditory cortex is rapidly modulated by both top-down attention and bottom-up stimulus properties, in order to improve perception in a given context. Learning-induced, pre-attentive map plasticity has also been studied in the anesthetized cortex; however, little attention has been paid to rapid, context-dependent modulation. We hypothesize that context-specific learning leads to pre-attentively modulated, multiplex representation in the auditory cortex. Here, we investigate map plasticity in the auditory cortices of anesthetized rats conditioned in a context-dependent manner, such that a conditioned stimulus (CS) of a 20-kHz tone and an unconditioned stimulus (US) of a mild electrical shock were associated only under a noisy auditory context, but not in silence. After the conditioning, although no distinct plasticity was found in the tonotopic map, tone-evoked responses were more noise-resistant than before conditioning. Yet, the conditioned group showed a reduced spread of activation to each tone with noise, but not with silence, associated with a sharpening of frequency tuning. The encoding accuracy index of neurons showed that conditioning deteriorated the accuracy of tone-frequency representations in noisy conditions at off-CS regions, but not at CS regions, suggesting that arbitrary tones around the frequency of the CS were more likely perceived as the CS in a specific context, where the CS was associated with the US. These results together demonstrate that learning-induced plasticity in the auditory cortex occurs in a context-dependent manner.
Phonological Processing in Human Auditory Cortical Fields
Woods, David L.; Herron, Timothy J.; Cate, Anthony D.; Kang, Xiaojian; Yund, E. W.
2011-01-01
We used population-based cortical-surface analysis of functional magnetic resonance imaging data to characterize the processing of consonant–vowel–consonant syllables (CVCs) and spectrally matched amplitude-modulated noise bursts (AMNBs) in human auditory cortex as subjects attended to auditory or visual stimuli in an intermodal selective attention paradigm. Average auditory cortical field (ACF) locations were defined using tonotopic mapping in a previous study. Activations in auditory cortex were defined by two stimulus-preference gradients: (1) Medial belt ACFs preferred AMNBs and lateral belt and parabelt fields preferred CVCs. This preference extended into core ACFs with medial regions of primary auditory cortex (A1) and the rostral field preferring AMNBs and lateral regions preferring CVCs. (2) Anterior ACFs showed smaller activations but more clearly defined stimulus preferences than did posterior ACFs. Stimulus preference gradients were unaffected by auditory attention, suggesting that ACF preferences reflect the automatic processing of different spectrotemporal sound features. PMID:21541252
Wild-Wall, Nele; Falkenstein, Michael
2010-01-01
By using event-related potentials (ERPs) the present study examines if age-related differences in preparation and processing especially emerge during divided attention. Binaurally presented auditory cues called for focused (valid and invalid) or divided attention to one or both ears. Responses were required to subsequent monaurally presented valid targets (vowels), but had to be suppressed to non-target vowels or invalidly cued vowels. Middle-aged participants were more impaired under divided attention than young ones, likely due to an age-related decline in preparatory attention following cues as was reflected in a decreased CNV. Under divided attention, target processing was increased in the middle-aged, likely reflecting compensatory effort to fulfill task requirements in the difficult condition. Additionally, middle-aged participants processed invalidly cued stimuli more intensely as was reflected by stimulus ERPs. The results suggest an age-related impairment in attentional preparation after auditory cues especially under divided attention and latent difficulties to suppress irrelevant information.
Sustained Attention in Children with Primary Language Impairment: A Meta-Analysis
Ebert, Kerry Danahy; Kohnert, Kathryn
2014-01-01
Purpose This study provides a meta-analysis of the difference between children with primary or specific language impairment (LI) and their typically developing peers on tasks of sustained attention. The meta-analysis seeks to determine if children with LI demonstrate subclinical deficits in sustained attention and, if so, under what conditions. Methods Articles that reported empirical data from the performance of children with LI, in comparison to typically developing peers, on a task assessing sustained attention were considered for inclusion. Twenty-eight effect sizes were included in the meta-analysis. Two moderator analyses addressed the effects of stimulus modality and ADHD exclusion. In addition, reaction time outcomes and the effects of task variables were summarized qualitatively. Results The meta-analysis supports the existence of sustained attention deficits in children with LI in both auditory and visual modalities, as demonstrated by reduced accuracy compared to typically developing peers. Larger effect sizes are found in tasks that use auditory and linguistic stimuli than in studies that use visual stimuli. Conclusions Future research should consider the role that sustained attention weaknesses play in LI, as well as the implications for clinical and research assessment tasks. Methodological recommendations are summarized. PMID:21646419
Odors Bias Time Perception in Visual and Auditory Modalities
Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang
2016-01-01
Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of attentional deployment between the inducers (odors) and emotionally neutral stimuli (visual dots and sound beeps). PMID:27148143
Distraction and Facilitation--Two Faces of the Same Coin?
ERIC Educational Resources Information Center
Wetzel, Nicole; Widmann, Andreas; Schroger, Erich
2012-01-01
Unexpected and task-irrelevant sounds can capture our attention and may cause distraction effects reflected by impaired performance in a primary task unrelated to the perturbing sound. The present auditory-visual oddball study examines the effect of the informational content of a sound on the performance in a visual discrimination task. The…
Pre-Attentive Auditory Processing of Lexicality
ERIC Educational Resources Information Center
Jacobsen, Thomas; Horvath, Janos; Schroger, Erich; Lattner, Sonja; Widmann, Andreas; Winkler, Istvan
2004-01-01
The effects of lexicality on auditory change detection based on auditory sensory memory representations were investigated by presenting oddball sequences of repeatedly presented stimuli, while participants ignored the auditory stimuli. In a cross-linguistic study of Hungarian and German participants, stimulus sequences were composed of words that…
Yeend, Ingrid; Beach, Elizabeth Francis; Sharma, Mridula; Dillon, Harvey
2017-09-01
Recent animal research has shown that exposure to single episodes of intense noise causes cochlear synaptopathy without affecting hearing thresholds. It has been suggested that the same may occur in humans. If so, it is hypothesized that this would result in impaired encoding of sound and lead to difficulties hearing at suprathreshold levels, particularly in challenging listening environments. The primary aim of this study was to investigate the effect of noise exposure on auditory processing, including the perception of speech in noise, in adult humans. A secondary aim was to explore whether musical training might improve some aspects of auditory processing and thus counteract or ameliorate any negative impacts of noise exposure. In a sample of 122 participants (63 female) aged 30-57 years with normal or near-normal hearing thresholds, we conducted audiometric tests, including tympanometry, audiometry, acoustic reflexes, otoacoustic emissions and medial olivocochlear responses. We also assessed temporal and spectral processing by determining thresholds for detection of amplitude modulation and temporal fine structure. We assessed speech-in-noise perception, and conducted tests of attention, memory and sentence closure. We also calculated participants' accumulated lifetime noise exposure and administered questionnaires to assess self-reported listening difficulty and musical training. The results showed no clear link between participants' lifetime noise exposure and performance on any of the auditory processing or speech-in-noise tasks. Musical training was associated with better performance on the auditory processing tasks, but not on the speech-in-noise perception tasks. The results indicate that sentence closure skills, working memory, attention, extended high frequency hearing thresholds and medial olivocochlear suppression strength are important factors that are related to the ability to process speech in noise. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
Neural basis of processing threatening voices in a crowded auditory world
Mothes-Lasch, Martin; Becker, Michael P. I.; Miltner, Wolfgang H. R.
2016-01-01
In real world situations, we typically listen to voice prosody against a background crowded with auditory stimuli. Voices and background can both contain behaviorally relevant features and both can be selectively in the focus of attention. Adequate responses to threat-related voices under such conditions require that the brain unmixes reciprocally masked features depending on variable cognitive resources. It is unknown which brain systems instantiate the extraction of behaviorally relevant prosodic features under varying combinations of prosody valence, auditory background complexity and attentional focus. Here, we used event-related functional magnetic resonance imaging to investigate the effects of high background sound complexity and attentional focus on brain activation to angry and neutral prosody in humans. Results show that prosody effects in mid superior temporal cortex were gated by background complexity but not attention, while prosody effects in the amygdala and anterior superior temporal cortex were gated by attention but not background complexity, suggesting distinct emotional prosody processing limitations in different regions. Crucially, if attention was focused on the highly complex background, the differential processing of emotional prosody was prevented in all brain regions, suggesting that in a distracting, complex auditory world even threatening voices may go unnoticed. PMID:26884543
Investigating the role of visual and auditory search in reading and developmental dyslexia
Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane
2013-01-01
It has been suggested that auditory and visual sequential processing deficits contribute to phonological disorders in developmental dyslexia. As an alternative explanation to a phonological deficit as the proximal cause for reading disorders, the visual attention span hypothesis (VA Span) suggests that difficulties in processing visual elements simultaneously lead to dyslexia, regardless of the presence of a phonological disorder. In this study, we assessed whether deficits in processing simultaneously displayed visual or auditory elements is linked to dyslexia associated with a VA Span impairment. Sixteen children with developmental dyslexia and 16 age-matched skilled readers were assessed on visual and auditory search tasks. Participants were asked to detect a target presented simultaneously with 3, 9, or 15 distracters. In the visual modality, target detection was slower in the dyslexic children than in the control group on a “serial” search condition only: the intercepts (but not the slopes) of the search functions were higher in the dyslexic group than in the control group. In the auditory modality, although no group difference was observed, search performance was influenced by the number of distracters in the control group only. Within the dyslexic group, not only poor visual search (high reaction times and intercepts) but also low auditory search performance (d′) strongly correlated with poor irregular word reading accuracy. Moreover, both visual and auditory search performance was associated with the VA Span abilities of dyslexic participants but not with their phonological skills. The present data suggests that some visual mechanisms engaged in “serial” search contribute to reading and orthographic knowledge via VA Span skills regardless of phonological skills. The present results further open the question of the role of auditory simultaneous processing in reading as well as its link with VA Span skills. PMID:24093014
Binaural auditory beats affect vigilance performance and mood.
Lane, J D; Kasian, S J; Owens, J E; Marsh, G R
1998-01-01
When two tones of slightly different frequency are presented separately to the left and right ears the listener perceives a single tone that varies in amplitude at a frequency equal to the frequency difference between the two tones, a perceptual phenomenon known as the binaural auditory beat. Anecdotal reports suggest that binaural auditory beats within the electroencephalograph frequency range can entrain EEG activity and may affect states of consciousness, although few scientific studies have been published. This study compared the effects of binaural auditory beats in the EEG beta and EEG theta/delta frequency ranges on mood and on performance of a vigilance task to investigate their effects on subjective and objective measures of arousal. Participants (n = 29) performed a 30-min visual vigilance task on three different days while listening to pink noise containing simple tones or binaural beats either in the beta range (16 and 24 Hz) or the theta/delta range (1.5 and 4 Hz). However, participants were kept blind to the presence of binaural beats to control expectation effects. Presentation of beta-frequency binaural beats yielded more correct target detections and fewer false alarms than presentation of theta/delta frequency binaural beats. In addition, the beta-frequency beats were associated with less negative mood. Results suggest that the presentation of binaural auditory beats can affect psychomotor performance and mood. This technology may have applications for the control of attention and arousal and the enhancement of human performance.
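For readers unfamiliar with the stimulus, the following minimal sketch (not taken from the study; the carrier frequency, duration and exact beat value are illustrative assumptions) generates a stereo binaural-beat signal in which the two ears differ by the desired beat frequency, here 16 Hz, which falls in the beta range used in the experiment.

import numpy as np

fs = 44100                    # audio sampling rate (Hz)
duration = 5.0                # seconds (illustrative)
carrier = 240.0               # left-ear tone frequency (Hz, illustrative)
beat = 16.0                   # desired beat frequency (Hz); 16 Hz is a beta-range beat
t = np.arange(int(fs * duration)) / fs

left = np.sin(2 * np.pi * carrier * t)             # tone delivered to the left ear
right = np.sin(2 * np.pi * (carrier + beat) * t)   # right ear differs by `beat` Hz
stereo = np.stack([left, right], axis=1)           # dichotic stimulus: one tone per ear

# Presented dichotically, listeners report a single tone whose loudness fluctuates
# at |f_right - f_left| = 16 Hz, i.e. the binaural beat.
print(stereo.shape, "perceived beat frequency:", beat, "Hz")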
Multisensory Integration Strategy for Modality-Specific Loss of Inhibition Control in Older Adults.
Lee, Ahreum; Ryu, Hokyoung; Kim, Jae-Kwan; Jeong, Eunju
2018-04-11
Older adults are known to have lesser cognitive control capability and greater susceptibility to distraction than young adults. Previous studies have reported age-related problems in selective attention and inhibitory control, yielding mixed results depending on the modality and context in which stimuli and tasks were presented. The purpose of the study was to empirically demonstrate a modality-specific loss of inhibitory control in processing audio-visual information with ageing. A group of 30 young adults (mean age = 25.23, standard deviation (SD) = 1.86) and 22 older adults (mean age = 55.91, SD = 4.92) performed the audio-visual contour identification task (AV-CIT). We compared performance of visual/auditory identification (Uni-V, Uni-A) with that of visual/auditory identification in the presence of distraction in the counterpart modality (Multi-V, Multi-A). The findings showed a modality-specific effect on inhibitory control. Uni-V performance was significantly better than Multi-V, indicating that auditory distraction significantly hampered visual target identification. However, Multi-A performance was significantly enhanced compared to Uni-A, indicating that auditory target performance was significantly enhanced by visual distraction. Additional analysis showed an age-specific effect on enhancement between Uni-A and Multi-A depending on the level of visual inhibition. Together, our findings indicated that the loss of visual inhibitory control was beneficial for auditory target identification presented in a multimodal context in older adults. A likely multisensory information processing strategy in older adults was further discussed in relation to aged cognition.
Analysis of MEG Auditory 40-Hz Response by Event-Related Coherence
NASA Astrophysics Data System (ADS)
Tanaka, Keita; Kawakatsu, Masaki; Yunokuchi, Kazutomo
We examined the event-related coherence of magnetoencephalography (MEG) signals (the auditory 40-Hz response) while the subjects were presented with click acoustic stimuli at a repetition rate of 40 Hz in the 'Attend' and 'Reading' conditions. MEG signals were recorded from 5 healthy males using the whole-head SQUID system. The event-related coherence was used to provide a measurement of the short synchronization which occurs in response to a stimulus. The results showed that the peak value of coherence in the auditory 40-Hz response between the right and left temporal regions was significantly larger when subjects paid attention to the stimuli ('Attend' condition) than when they ignored them ('Reading' condition). Moreover, the latency of coherence in the auditory 40-Hz response was significantly shorter when the subjects paid attention to the stimuli ('Attend' condition). These results suggest that phase synchronization between the right and left temporal regions in the auditory 40-Hz response correlates closely with selective attention.
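By way of orientation only, the snippet below illustrates ordinary magnitude-squared coherence between two simulated channels that share a 40-Hz component, computed with scipy.signal.coherence; the authors' event-related coherence measure is computed trial by trial and is more specific than this simplified sketch.

import numpy as np
from scipy.signal import coherence

fs = 1000                                  # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)               # 10 s of simulated data
rng = np.random.default_rng(0)

shared = np.sin(2 * np.pi * 40 * t)        # common 40-Hz component shared by both channels
left = shared + 0.5 * rng.standard_normal(t.size)    # simulated left temporal channel
right = shared + 0.5 * rng.standard_normal(t.size)   # simulated right temporal channel

f, cxy = coherence(left, right, fs=fs, nperseg=1024) # magnitude-squared coherence spectrum
print(f"coherence near 40 Hz: {cxy[np.argmin(np.abs(f - 40))]:.2f}")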
Distributional Learning of Lexical Tones: A Comparison of Attended vs. Unattended Listening
Ong, Jia Hoong; Burnham, Denis; Escudero, Paola
2015-01-01
This study examines whether non-tone language listeners can acquire lexical tone categories distributionally and whether attention in the training phase modulates the effect of distributional learning. Native Australian English listeners were trained on a Thai lexical tone minimal pair and their performance was assessed using a discrimination task before and after training. During Training, participants either heard a Unimodal distribution that would induce a single central category, which should hinder their discrimination of that minimal pair, or a Bimodal distribution that would induce two separate categories that should facilitate their discrimination. The participants either heard the distribution passively (Experiments 1A and 1B) or performed a cover task during training designed to encourage auditory attention to the entire distribution (Experiment 2). In passive listening (Experiments 1A and 1B), results indicated no effect of distributional learning: the Bimodal group did not outperform the Unimodal group in discriminating the Thai tone minimal pairs. Moreover, both Unimodal and Bimodal groups improved above chance on most test aspects from Pretest to Posttest. However, when participants’ auditory attention was encouraged using the cover task (Experiment 2), distributional learning was found: the Bimodal group outperformed the Unimodal group on a novel test syllable minimal pair at Posttest relative to at Pretest. Furthermore, the Bimodal group showed above-chance improvement from Pretest to Posttest on three test aspects, while the Unimodal group only showed above-chance improvement on one test aspect. These results suggest that non-tone language listeners are able to learn lexical tones distributionally but only when auditory attention is encouraged in the acquisition phase. This implies that distributional learning of lexical tones is more readily induced when participants attend carefully during training, presumably because they are better able to compute the relevant statistics of the distribution. PMID:26214002
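As a hypothetical illustration of the training manipulation described above (the continuum steps and token counts are invented for this sketch, not the study's), a unimodal exposure distribution concentrates tokens at the center of the acoustic continuum, whereas a bimodal distribution concentrates them near the two category poles.

import numpy as np

steps = np.arange(1, 9)                          # 8-step continuum between two lexical tones (hypothetical)
unimodal = np.array([1, 2, 4, 8, 8, 4, 2, 1])    # single central peak: should induce one category
bimodal = np.array([1, 4, 8, 2, 2, 8, 4, 1])     # two peaks: should induce two categories

def training_sequence(counts, rng):
    # Repeat each continuum step according to its token count, then shuffle the order.
    tokens = np.repeat(steps, counts)
    rng.shuffle(tokens)
    return tokens

rng = np.random.default_rng(1)
print("unimodal exposure:", training_sequence(unimodal, rng)[:10])
print("bimodal exposure:", training_sequence(bimodal, rng)[:10])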
Action-related auditory ERP attenuation: Paradigms and hypotheses.
Horváth, János
2015-11-11
A number of studies have shown that the auditory N1 event-related potential (ERP) is attenuated when elicited by self-induced or self-generated sounds. Because N1 is a correlate of auditory feature- and event-detection, it was generally assumed that N1 attenuation reflected the cancellation of auditory re-afference, enabled by the internal forward modeling of the predictable sensory consequences of the given action. Focusing on paradigms utilizing non-speech actions, the present review summarizes recent progress on action-related auditory attenuation. Following a critical analysis of the most widely used, contingent paradigm, two further hypotheses on the possible causes of action-related auditory ERP attenuation are presented. The attention hypotheses suggest that auditory ERP attenuation is brought about by a temporary division of attention between the action and the auditory stimulation. The pre-activation hypothesis suggests that the attenuation is caused by the activation of a sensory template during the initiation of the action, which interferes with the incoming stimulation. Although each hypothesis can account for a number of findings, none of them can accommodate the whole spectrum of results. It is suggested that a better understanding of auditory ERP attenuation phenomena could be achieved by systematic investigations of the types of actions, the degree of action-effect contingency, and the temporal characteristics of action-effect contingency representation buildup and deactivation. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2015. Published by Elsevier B.V.
Isbell, Elif; Wray, Amanda Hampton; Neville, Helen J
2016-11-01
Selective attention, the ability to enhance the processing of particular input while suppressing the information from other concurrent sources, has been postulated to be a foundational skill for learning and academic achievement. The neural mechanisms of this foundational ability are both vulnerable and enhanceable in children from lower socioeconomic status (SES) families. In the current study, we assessed individual differences in neural mechanisms of this malleable brain function in children from lower SES families. Specifically, we investigated the extent to which individual differences in neural mechanisms of selective auditory attention accounted for variability in nonverbal cognitive abilities in lower SES preschoolers. We recorded event-related potentials (ERPs) during a dichotic listening task and administered nonverbal IQ tasks to 124 lower SES children (77 females) between the ages of 40 and 67 months. The attention effect, i.e., the difference in ERP mean amplitudes elicited by identical probes embedded in stories when attended versus unattended, was significantly correlated with nonverbal IQ scores. Larger, more positive attention effects over the anterior and central electrode locations were associated with superior nonverbal IQ performance. Our findings provide initial evidence for prominent individual differences in neural indices of selective attention in lower SES children. Furthermore, our results indicate a noteworthy relationship between neural mechanisms of selective attention and nonverbal IQ performance in lower SES preschoolers. These findings provide the basis for future research to identify the factors that contribute to such individual differences in neural mechanisms of selective attention. © 2015 John Wiley & Sons Ltd.
Dai, Lengshi; Best, Virginia; Shinn-Cunningham, Barbara G.
2018-01-01
Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing. PMID:29555752
Edgar, J. Christopher; Hunter, Michael A.; Huang, Mingxiong; Smith, Ashley K.; Chen, Yuhan; Sadek, Joseph; Lu, Brett Y; Miller, Gregory A.; Cañive, José M.
2012-01-01
Background: Although gray matter (GM) abnormalities are frequently observed in individuals with schizophrenia (SCZ), the functional consequences of these structural abnormalities are not yet understood. The present study sought to better understand GM abnormalities in SCZ by examining associations between GM and two putative functional SCZ biomarkers: weak 100 ms (M100) auditory responses and impairment on tests of attention. Methods: Data were available from 103 subjects (healthy controls=52, SCZ=51). GM cortical thickness measures were obtained for the superior temporal gyrus (STG) and prefrontal cortex (PFC). Magnetoencephalography (MEG) provided measures of left and right STG M100 source strength. Subjects were administered the Trail Making Test A and the Conners' Continuous Performance Test to assess attention. Results: A strong trend indicated less GM cortical thickness in SCZ than controls in both regions and in both hemispheres (p=0.06). Individuals with SCZ had weaker M100 responses than controls bilaterally, and individuals with SCZ performed more poorly than controls on tests of attention. Across groups, left STG GM was positively associated with left M100 source strength. In SCZ only, less left and right STG and PFC GM predicted poorer performance on tests of attention. After removing variance in attention associated with age, associations between GM and attention remained significant only in left and right STG. Conclusions: Reduced GM cortical thickness may serve as a common substrate for multiple functional abnormalities in SCZ, with structural-functional abnormalities in STG GM especially prominent. As suggested by others, functional abnormalities in SCZ may be a consequence of elimination of the neuropil (dendritic arbors and associated synaptic infrastructure) between neuron bodies. PMID:22766129
Pauletti, C; Mannarelli, D; Locuratolo, N; Vanacore, N; De Lucia, M C; Fattapposta, F
2014-04-01
To investigate whether pre-attentive auditory discrimination is impaired in patients with essential tremor (ET) and to evaluate the role of age at onset in this function. Seventeen non-demented patients with ET and seventeen age- and sex-matched healthy controls underwent an EEG recording during a classical auditory mismatch negativity (MMN) paradigm. MMN latency was significantly prolonged in patients with elderly-onset ET (>65 years) (p=0.046), while no differences emerged in either latency or amplitude between young-onset ET patients and controls. This study provides a tentative indication of a dysfunction of auditory automatic change detection in elderly-onset ET patients, pointing to a selective attentive deficit in this subgroup of ET patients. The delay in pre-attentive auditory discrimination, which affects elderly-onset ET patients alone, further supports the hypothesis that ET represents a heterogeneous family of diseases united by tremor; these diseases are characterized by cognitive differences that may range from a disturbance in a selective cognitive function, such as the automatic part of the orienting response, to more widespread and complex cognitive dysfunctions. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Concentration: The Neural Underpinnings of How Cognitive Load Shields Against Distraction
Sörqvist, Patrik; Dahlström, Örjan; Karlsson, Thomas; Rönnberg, Jerker
2016-01-01
Whether cognitive load—and other aspects of task difficulty—increases or decreases distractibility is a subject of much debate in contemporary psychology. One camp argues that cognitive load usurps executive resources, which otherwise could be used for attentional control, and therefore cognitive load increases distraction. The other camp argues that cognitive load demands high levels of concentration (focal-task engagement), which suppresses peripheral processing and therefore decreases distraction. In this article, we employed a functional magnetic resonance imaging (fMRI) protocol to explore whether higher cognitive load in a visually presented task suppresses task-irrelevant auditory processing in cortical and subcortical areas. The results show that selectively attending to an auditory stimulus facilitates its neural processing in the auditory cortex, and switching the locus of attention to the visual modality decreases the neural response in the auditory cortex. When the cognitive load of the task presented in the visual modality increases, the neural response to the auditory stimulus is further suppressed, along with increased activity in networks related to effortful attention. Taken together, the results suggest that higher cognitive load decreases peripheral processing of task-irrelevant information—which decreases distractibility—as a side effect of the increased activity in a focused-attention network. PMID:27242485
Long Term Memory for Noise: Evidence of Robust Encoding of Very Short Temporal Acoustic Patterns
Viswanathan, Jayalakshmi; Rémy, Florence; Bacon-Macé, Nadège; Thorpe, Simon J.
2016-01-01
Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participant's discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities. Significance Statement: Meaningless auditory patterns could be implicitly encoded and stored in long-term memory. Acoustic transformations of learned meaningless patterns could be implicitly recognized after 4 weeks. Implicit long-term memories can be formed for meaningless auditory features as short as 10 ms. Successful encoding and long-term implicit recognition of meaningless patterns may strongly depend on individual attention and auditory discrimination abilities. PMID:27932941
Sex-specific cognitive abnormalities in early-onset psychosis.
Ruiz-Veguilla, Miguel; Moreno-Granados, Josefa; Salcedo-Marin, Maria D; Barrigon, Maria L; Blanco-Morales, Maria J; Igunza, Evelio; Cañabate, Anselmo; Garcia, Maria D; Guijarro, Teresa; Diaz-Atienza, Francisco; Ferrin, Maite
2017-01-01
Brain maturation differs depending on the area of the brain and sex. Girls show an earlier peak in maturation of the prefrontal cortex. Although differences between adult females and males with schizophrenia have been widely studied, there has been less research in girls and boys with psychosis. The purpose of this study was to examine differences in verbal and visual memory, verbal working memory, auditory attention, processing speed, and cognitive flexibility between boys and girls. We compared a group of 80 boys and girls with first-episode psychosis to a group of controls. We found interactions between group and sex in verbal working memory (p = 0.04) and auditory attention (p = 0.01). The female controls showed better working memory (p = 0.01) and auditory attention (p = 0.001) than males. However, we did not find any sex differences in working memory (p = 0.91) or auditory attention (p = 0.93) in the psychosis group. These results are consistent with the presence of sex-modulated cognitive profiles at first presentation of early-onset psychosis.
Sörqvist, Patrik; Stenfelt, Stefan; Rönnberg, Jerker
2012-11-01
Two fundamental research questions have driven attention research in the past: one concerns whether selection of relevant information among competing, irrelevant information takes place at an early or at a late processing stage; the other concerns whether the capacity of attention is limited by a central, domain-general pool of resources or by independent, modality-specific pools. In this article, we contribute to these debates by showing that the auditory-evoked brainstem response (an early stage of auditory processing) to task-irrelevant sound decreases as a function of central working memory load (manipulated with a visual-verbal version of the n-back task). Furthermore, individual differences in central/domain-general working memory capacity modulated the magnitude of the auditory-evoked brainstem response, but only in the high working memory load condition. The results support a unified view of attention whereby the capacity of a late/central mechanism (working memory) modulates early precortical sensory processing.
Acute marijuana effects on rCBF and cognition: a PET study.
O'Leary, D S; Block, R I; Flaum, M; Schultz, S K; Boles Ponto, L L; Watkins, G L; Hurtig, R R; Andreasen, N C; Hichwa, R D
2000-11-27
The effects of smoking marijuana on cognition and brain function were assessed with PET using H2(15)O. Regional cerebral blood flow (rCBF) was measured in five recreational users before and after smoking a marijuana cigarette, as they repeatedly performed an auditory attention task. Blood flow increased following smoking in a number of paralimbic brain regions (e.g. orbital frontal lobes, insula, temporal poles) and in anterior cingulate and cerebellum. Large reductions in rCBF were observed in temporal lobe regions that are sensitive to auditory attention effects. Brain regions showing increased rCBF may mediate the intoxicating and mood-related effects of smoking marijuana, whereas reduction of task-related rCBF in temporal lobe cortices may account for the impaired cognitive functions associated with acute intoxication.
Stevens, Courtney; Paulsen, David; Yasen, Alia; Neville, Helen
2015-02-01
Previous neuroimaging studies indicate that lower socio-economic status (SES) is associated with reduced effects of selective attention on auditory processing. Here, we investigated whether lower SES is also associated with differences in a stimulus-driven aspect of auditory processing: the neural refractory period, or reduced amplitude response at faster rates of stimulus presentation. Thirty-two children aged 3 to 8 years participated, and were divided into two SES groups based on maternal education. Event-related brain potentials were recorded to probe stimuli presented at interstimulus intervals (ISIs) of 200, 500, or 1000 ms. These probes were superimposed on story narratives when attended and ignored, permitting a simultaneous experimental manipulation of selective attention. Results indicated that group differences in refractory periods differed as a function of attention condition. Children from higher SES backgrounds showed full neural recovery by 500 ms for attended stimuli, but required at least 1000 ms for unattended stimuli. In contrast, children from lower SES backgrounds showed similar refractory effects to attended and unattended stimuli, with full neural recovery by 500 ms. Thus, in higher SES children only, one functional consequence of selective attention is attenuation of the response to unattended stimuli, particularly at rapid ISIs, altering basic properties of the auditory refractory period. Together, these data indicate that differences in selective attention impact basic aspects of auditory processing in children from lower SES backgrounds. Copyright © 2013 Elsevier B.V. All rights reserved.
Comprehensive evaluation of a child with an auditory brainstem implant.
Eisenberg, Laurie S; Johnson, Karen C; Martinez, Amy S; DesJardin, Jean L; Stika, Carren J; Dzubak, Danielle; Mahalak, Mandy Lutz; Rector, Emily P
2008-02-01
We had an opportunity to evaluate an American child whose family traveled to Italy to receive an auditory brainstem implant (ABI). The goal of this evaluation was to obtain insight into possible benefits derived from the ABI and to begin developing assessment protocols for pediatric clinical trials. Case study. Tertiary referral center. Pediatric ABI Patient 1 was born with auditory nerve agenesis. Auditory brainstem implant surgery was performed in December, 2005, in Verona, Italy. The child was assessed at the House Ear Institute, Los Angeles, in July 2006 at the age of 3 years 11 months. Follow-up assessment has continued at the HEAR Center in Birmingham, Alabama. Auditory brainstem implant. Performance was assessed for the domains of audition, speech and language, intelligence and behavior, quality of life, and parental factors. Patient 1 demonstrated detection of sound, speech pattern perception with visual cues, and inconsistent auditory-only vowel discrimination. Language age with signs was approximately 2 years, and vocalizations were increasing. Of normal intelligence, he exhibited attention deficits with difficulty completing structured tasks. Twelve months later, this child was able to identify speech patterns consistently; closed-set word identification was emerging. These results were within the range of performance for a small sample of similarly aged pediatric cochlear implant users. Pediatric ABI assessment with a group of well-selected children is needed to examine risk versus benefit in this population and to analyze whether open-set speech recognition is achievable.
Processing Resources in Attention, Dual Task Performance, and Workload Assessment.
1981-07-01
At some levels of processing, discrete attention switching is clearly an identifiable phenomenon (LaBerge, Van Gelder, & Yellott, 1971; Kristofferson, 1967).
Comorbidity of Auditory Processing, Language, and Reading Disorders
ERIC Educational Resources Information Center
Sharma, Mridula; Purdy, Suzanne C.; Kelly, Andrea S.
2009-01-01
Purpose: The authors assessed comorbidity of auditory processing disorder (APD), language impairment (LI), and reading disorder (RD) in school-age children. Method: Children (N = 68) with suspected APD and nonverbal IQ standard scores of 80 or more were assessed using auditory, language, reading, attention, and memory measures. Auditory processing…
ERIC Educational Resources Information Center
Ramirez, Luz Angela; Arenas, Angela Maria; Henao, Gloria Cecilia
2005-01-01
Introduction: This investigation describes and compares characteristics of visual, semantic and auditory memory in a group of children diagnosed with combined-type attention deficit with hyperactivity, attention deficit predominating, and a control group. Method: 107 boys and girls were selected, from 7 to 11 years of age, all residents in the…
Making every word count for nonresponsive patients.
Naci, Lorina; Owen, Adrian M
2013-10-01
Despite the apparent absence of external signs of consciousness, a small but significant proportion of patients with disorders of consciousness can respond to commands by willfully modulating their brain activity, and can even answer yes-or-no questions, by performing mental imagery tasks. However, little is known about the mental life of such responsive patients, for example, with regard to whether they can have coherent thoughts or selectively maintain attention to specific events in their environment. The ability to selectively pay attention would provide evidence of a patient's preserved cognition and a method for brain-based communication, thus far untested with functional magnetic resonance imaging in this patient group. To test whether selective auditory attention can be used to detect conscious awareness and communicate with behaviorally nonresponsive patients. Case study performed in 3 patients with severe brain injury, 2 diagnosed as being in a minimally conscious state and 1 as being in a vegetative state. The patients constituted a convenience sample. Functional magnetic resonance imaging data were acquired as the patients were asked to selectively attend to auditory stimuli, thereby conveying their ability to follow commands and communicate. All patients demonstrated command following according to instructions. Two patients (1 in a minimally conscious state and 1 in a vegetative state) were also able to guide their attention to repeatedly communicate correct answers to binary (yes or no) questions. To our knowledge, we show for the first time with functional magnetic resonance imaging that behaviorally nonresponsive patients can use selective auditory attention to convey their ability to follow commands and communicate. One patient in a minimally conscious state was able to use attention to establish functional communication in the scanner, despite his inability to produce any communication responses in repeated bedside examinations. More important, 1 patient, who had been in a vegetative state for 12 years before the scanning and subsequent to it, was able to use attention to correctly communicate answers to several binary questions. The technique may be useful in establishing basic communication with patients who appear unresponsive to bedside examinations and cannot respond with existing neuroimaging methods.
Blais, Mélody; Martin, Elodie; Albaret, Jean-Michel; Tallet, Jessica
2014-12-15
Despite the apparent age-related decline in perceptual-motor performance, recent studies suggest that elderly people can improve their reaction times when relevant sensory information is available. However, little is known about which sensory information may improve motor behaviour itself. Using a synchronization task, the present study investigates how visual and/or auditory stimulations could increase the accuracy and stability of three bimanual coordination modes produced by elderly and young adults. Neurophysiological activations were recorded with electroencephalography (EEG) to explore the neural mechanisms underlying behavioural effects. Results reveal that the elderly stabilize all coordination modes when auditory or audio-visual stimulations are available, compared to visual stimulation alone. This suggests that auditory stimulations are sufficient to improve the temporal stability of rhythmic coordination, even more so in the elderly. This behavioural effect is primarily associated with increased attentional and sensorimotor-related neural activations in the elderly but similar perceptual-related activations in elderly and young adults. This suggests that, despite a degradation of attentional and sensorimotor neural processes, perceptual integration of auditory stimulations is preserved in the elderly. These results suggest that perceptual-related brain plasticity is, at least partially, conserved in normal aging. Copyright © 2014 Elsevier B.V. All rights reserved.
Combined evoked potentials in co-occurring attention deficit hyperactivity disorder and epilepsy.
Major, Zoltán Zsigmond
2011-07-30
Evoked potentials, both stimulus related and event related, show disturbances in both attention deficit-hyperactivity disorder (ADHD) and epilepsy. This study was designed to evaluate whether these potentials are characteristically influenced by the presence of the two diseases individually and in the case of co-occurrence. Forty children were included, and four groups were formed: a control group, an ADHD group, an epilepsy group and a group with comorbid epilepsy and ADHD. Epilepsy patients were under proper antiepileptic treatment; ADHD patients were free of specific therapy. Brainstem auditory evoked potentials, visual evoked potentials and auditory P300 evaluations were performed. The latency of the P100 and N135 visual evoked potential components was significantly extended by the presence of epilepsy; if ADHD was concomitantly present, this effect was attenuated. Brainstem auditory evoked potential components were prolonged in the presence of the comorbidity, considering the waves elicited in the brainstem. P300 latencies were prolonged by the presence of co-occurring ADHD and epilepsy. Feedback parameters showed an overall reduction of the tested cognitive performances in the ADHD group. The disturbances produced by ADHD-epilepsy comorbidity hypothetically point to a linked pathophysiological path for both diseases and offer an approach of possible diagnostic importance: combined evoked potential recordings.
Hecht, Marcus; Thiemann, Ulf; Freitag, Christine M; Bender, Stephan
2016-01-15
Post-perceptual cues can enhance visual short term memory encoding even after the offset of the visual stimulus. However, both the mechanisms by which the sensory stimulus characteristics are buffered as well as the mechanisms by which post-perceptual selective attention enhances short term memory encoding remain unclear. We analyzed late post-perceptual event-related potentials (ERPs) in visual change detection tasks (100 ms stimulus duration) by high-resolution ERP analysis to elucidate these mechanisms. The effects of early and late auditory post-cues (300 ms or 850 ms after visual stimulus onset) as well as the effects of a visual interference stimulus were examined in 27 healthy right-handed adults. Focusing attention with post-perceptual cues at both latencies significantly improved memory performance, i.e. sensory stimulus characteristics were available for up to 850 ms after stimulus presentation. Passive watching of the visual stimuli without auditory cue presentation evoked a slow negative wave (N700) over occipito-temporal visual areas. N700 was strongly reduced by a visual interference stimulus which impeded memory maintenance. In contrast, contralateral delay activity (CDA) still developed in this condition after the application of auditory post-cues and was thereby dissociated from N700. CDA and N700 seem to represent two different processes involved in short term memory encoding. While N700 could reflect visual post processing by automatic attention attraction, CDA may reflect the top-down process of searching selectively for the required information through post-perceptual attention. Copyright © 2015 Elsevier Inc. All rights reserved.
Attention, Awareness, and the Perception of Auditory Scenes
Snyder, Joel S.; Gregg, Melissa K.; Weintraub, David M.; Alain, Claude
2011-01-01
Auditory perception and cognition entail both low-level and high-level processes, which are likely to interact with each other to create our rich conscious experience of soundscapes. Recent research that we review has revealed numerous influences of high-level factors, such as attention, intention, and prior experience, on conscious auditory perception. And recently, studies have shown that auditory scene analysis tasks can exhibit multistability in a manner very similar to ambiguous visual stimuli, presenting a unique opportunity to study neural correlates of auditory awareness and the extent to which mechanisms of perception are shared across sensory modalities. Research has also led to a growing number of techniques through which auditory perception can be manipulated and even completely suppressed. Such findings have important consequences for our understanding of the mechanisms of perception and should also allow scientists to precisely distinguish the effects of different higher-level influences. PMID:22347201
Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya
2013-09-01
Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can implicitly strengthen automatic change detection from an early stage in a cross-sensory manner, at least in the vision to audition direction.
Auditory Attentional Capture: Effects of Singleton Distractor Sounds
ERIC Educational Resources Information Center
Dalton, Polly; Lavie, Nilli
2004-01-01
The phenomenon of attentional capture by a unique yet irrelevant singleton distractor has typically been studied in visual search. In this article, the authors examine whether a similar phenomenon occurs in the auditory domain. Participants searched sequences of sounds for targets defined by frequency, intensity, or duration. The presence of a…
Kansas Center for Research in Early Childhood Education Annual Report, FY 1973.
ERIC Educational Resources Information Center
Horowitz, Frances D.
This monograph is a collection of papers describing a series of loosely related studies of visual attention, auditory stimulation, and language discrimination in young infants. Titles include: (1) Infant Attention and Discrimination: Methodological and Substantive Issues; (2) The Addition of Auditory Stimulation (Music) and an Interspersed…
Distinct Effects of Trial-Driven and Task Set-Related Control in Primary Visual Cortex
Vaden, Ryan J.; Visscher, Kristina M.
2015-01-01
Task sets are task-specific configurations of cognitive processes that facilitate task-appropriate reactions to stimuli. While it is established that the trial-by-trial deployment of visual attention to expected stimuli influences neural responses in primary visual cortex (V1) in a retinotopically specific manner, it is not clear whether the mechanisms that help maintain a task set over many trials also operate with similar retinotopic specificity. Here, we address this question by using BOLD fMRI to characterize how portions of V1 that are specialized for different eccentricities respond during distinct components of an attention-demanding discrimination task: cue-driven preparation for a trial, trial-driven processing, task-initiation at the beginning of a block of trials, and task-maintenance throughout a block of trials. Tasks required either unimodal attention to an auditory or a visual stimulus or selective intermodal attention to the visual or auditory component of simultaneously presented visual and auditory stimuli. We found that while the retinotopic patterns of trial-driven and cue-driven activity depended on the attended stimulus, the retinotopic patterns of task-initiation and task-maintenance activity did not. Further, only the retinotopic patterns of trial-driven activity were found to depend on the presence of intermodal distraction. Participants who performed well on the intermodal selective attention tasks showed strong task-specific modulations of both trial-driven and task-maintenance activity. Importantly, task-related modulations of trial-driven and task-maintenance activity were in opposite directions. Together, these results confirm that there are (at least) two different processes for top-down control of V1: One, working trial-by-trial, differently modulates activity across different eccentricity sectors—portions of V1 corresponding to different visual eccentricities. The second process works across longer epochs of task performance, and does not differ among eccentricity sectors. These results are discussed in the context of previous literature examining top-down control of visual cortical areas. PMID:26163806
Black, Emily; Stevenson, Jennifer L; Bish, Joel P
2017-08-01
The global precedence effect is a phenomenon in which global aspects of visual and auditory stimuli are processed before local aspects. Individuals with musical experience perform better on all aspects of auditory tasks compared with individuals with less musical experience. The hemispheric lateralization of this auditory processing is less well-defined. The present study aimed to replicate the global precedence effect with auditory stimuli and to explore the lateralization of global and local auditory processing in individuals with differing levels of musical experience. A total of 38 college students completed an auditory-directed attention task while electroencephalography was recorded. Individuals with low musical experience responded significantly faster and more accurately in global trials than in local trials regardless of condition, and significantly faster and more accurately when pitches traveled in the same direction (compatible condition) than when pitches traveled in two different directions (incompatible condition) consistent with a global precedence effect. In contrast, individuals with high musical experience showed less of a global precedence effect with regards to accuracy, but not in terms of reaction time, suggesting an increased ability to overcome global bias. Further, a difference in P300 latency between hemispheres was observed. These findings provide a preliminary neurological framework for auditory processing of individuals with differing degrees of musical experience.
Selective Audiovisual Semantic Integration Enabled by Feature-Selective Attention
Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Li, Peijun; Fang, Fang; Sun, Pei
2016-01-01
An audiovisual object may contain multiple semantic features, such as the gender and emotional features of the speaker. Feature-selective attention and audiovisual semantic integration are two brain functions involved in the recognition of audiovisual objects. Humans often selectively attend to one or several features while ignoring the other features of an audiovisual object. Meanwhile, the human brain integrates semantic information from the visual and auditory modalities. However, how these two brain functions correlate with each other remains to be elucidated. In this functional magnetic resonance imaging (fMRI) study, we explored the neural mechanism by which feature-selective attention modulates audiovisual semantic integration. During the fMRI experiment, the subjects were presented with visual-only, auditory-only, or audiovisual dynamic facial stimuli and performed several feature-selective attention tasks. Our results revealed that a distributed set of areas, including heteromodal areas and brain areas encoding attended features, may be involved in audiovisual semantic integration. Through feature-selective attention, the human brain may selectively integrate audiovisual semantic information from attended features by enhancing functional connectivity and thus regulating information flows from heteromodal areas to brain areas encoding the attended features. PMID:26759193
Neuropsychological implications of selective attentional functioning in psychopathic offenders.
Mayer, Andrew R; Kosson, David S; Bedrick, Edward J
2006-09-01
Several core characteristics of the psychopathic personality disorder (i.e., impulsivity, failure to attend to interpersonal cues) suggest that psychopaths suffer from disordered attention. However, there is mixed evidence from the cognitive literature as to whether they exhibit superior or deficient selective attention, which has led to the formation of several distinct theories of attentional functioning in psychopathy. The present experiment investigated participants' abilities to purposely allocate attentional resources on the basis of auditory or visual linguistic information and directly tested both theories of deficient or superior selective attention in psychopathy. Specifically, 91 male inmates at a county jail were presented with either auditory or visual linguistic cues (with and without distractors) that correctly indicated the position of an upcoming visual target in 75% of the trials. The results indicated that psychopaths did not exhibit evidence of superior selective attention in any of the conditions but were generally less efficient in shifting attention on the basis of linguistic cues, especially in regard to auditory information. Implications for understanding psychopaths' cognitive functioning and possible neuropsychological deficits are addressed. ((c) 2006 APA, all rights reserved).
Improving visual spatial working memory in younger and older adults: effects of cross-modal cues.
Curtis, Ashley F; Turner, Gary R; Park, Norman W; Murtha, Susan J E
2017-11-06
Spatially informative auditory and vibrotactile (cross-modal) cues can facilitate attention but little is known about how similar cues influence visual spatial working memory (WM) across the adult lifespan. We investigated the effects of cues (spatially informative or alerting pre-cues vs. no cues), cue modality (auditory vs. vibrotactile vs. visual), memory array size (four vs. six items), and maintenance delay (900 vs. 1800 ms) on visual spatial location WM recognition accuracy in younger adults (YA) and older adults (OA). We observed a significant interaction between spatially informative pre-cue type, array size, and delay. OA and YA benefitted equally from spatially informative pre-cues, suggesting that attentional orienting prior to WM encoding, regardless of cue modality, is preserved with age. Contrary to predictions, alerting pre-cues generally impaired performance in both age groups, suggesting that maintaining a vigilant state of arousal by facilitating the alerting attention system does not help visual spatial location WM.
Markevych, Vladlena; Asbjørnsen, Arve E; Lind, Ola; Plante, Elena; Cone, Barbara
2011-07-01
The present study investigated a possible connection between speech processing and cochlear function. Twenty-two subjects aged 18 to 39 years, balanced for gender, with normal hearing and no known neurological condition, were tested with the dichotic listening (DL) test, in which listeners were asked to identify CV-syllables in a non-forced condition as well as in attention-right and attention-left conditions. Transient evoked otoacoustic emissions (TEOAEs) were recorded for both ears, with and without the presentation of contralateral broadband noise. The main finding was a strong negative correlation between language laterality as measured with the dichotic listening task and the laterality of the TEOAE responses. The findings support a hypothesis of shared variance between central and peripheral auditory lateralities, and contribute to the attentional theory of auditory lateralization. The results have implications for the understanding of the cortico-fugal efferent control of cochlear activity. 2011 Elsevier Inc. All rights reserved.
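As a point of reference for how such a laterality relationship can be quantified, the sketch below computes a conventional dichotic-listening laterality index and correlates it with an ear-asymmetry measure of TEOAE amplitude. This is a generic illustration, not the authors' analysis; the variable names, the simulated data, and the (R - L)/(R + L) index formula are assumptions, commonly used but not taken from the paper.

import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-subject data: correct right-ear and left-ear syllable reports,
# and TEOAE response amplitudes (dB) for each ear.
rng = np.random.default_rng(42)
right_correct = rng.integers(10, 30, size=22)
left_correct = rng.integers(10, 30, size=22)
teoae_right = rng.normal(10, 2, size=22)
teoae_left = rng.normal(10, 2, size=22)

# Laterality index in percent: positive = right-ear (left-hemisphere) advantage
li_dl = 100 * (right_correct - left_correct) / (right_correct + left_correct)
li_teoae = 100 * (teoae_right - teoae_left) / (teoae_right + teoae_left)

r, p = pearsonr(li_dl, li_teoae)
print(f"r = {r:.2f}, p = {p:.3f}")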
Giraudet, L; Imbert, J-P; Bérenger, M; Tremblay, S; Causse, M
2015-11-01
The Air Traffic Control (ATC) environment is complex and safety-critical. Whilst exchanging information with pilots, controllers must also be alert to visual notifications displayed on the radar screen (e.g., a warning that indicates a loss of minimum separation between aircraft). Under the assumption that attentional resources are shared between vision and hearing, the visual interface design may also impact the ability to process these auditory stimuli. Using a simulated ATC task, we compared the behavioral and neural responses to two different visual notification designs: the operational alarm, in which blinking colored "ALRT" text is displayed around the label of the notified plane ("Color-Blink"), and the more salient alarm involving the same blinking text plus four moving yellow chevrons ("Box-Animation"). Participants performed a concurrent auditory task with the requirement to react to rare pitch tones. The P300 elicited by these tones was taken as an indicator of remaining attentional resources. Participants who were presented with the more salient visual design showed better accuracy than the group with the suboptimal operational design. On a physiological level, auditory P300 amplitude in the former group was greater than that observed in the latter group. One potential explanation is that the enhanced visual design freed up attentional resources which, in turn, improved the cerebral processing of the auditory stimuli. These results suggest that P300 amplitude can be used as a valid estimation of the efficiency of interface designs, and of cognitive load more generally. Copyright © 2015 Elsevier B.V. All rights reserved.
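For readers unfamiliar with how a P300 amplitude of this kind is typically quantified, the sketch below averages epoched single-channel EEG over rare-tone trials and takes the mean voltage in a post-stimulus window. It is a generic illustration under assumed parameters (window, sampling rate, simulated data), not the pipeline used in this study.

import numpy as np

def p300_amplitude(epochs, times, window=(0.30, 0.50)):
    """epochs: (n_trials, n_samples) baseline-corrected single-channel EEG in microvolts;
    times: (n_samples,) in seconds relative to stimulus onset."""
    erp = epochs.mean(axis=0)                       # average over rare-tone trials
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()                         # mean amplitude in the P300 window

# Hypothetical epochs sampled at 250 Hz, -0.2 to 0.8 s around rare-tone onsets
fs = 250
times = np.arange(-0.2, 0.8, 1 / fs)
epochs = np.random.default_rng(0).normal(0, 5, (80, times.size))
print(f"P300 mean amplitude: {p300_amplitude(epochs, times):.2f} uV")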
Bidet-Caulet, Aurélie; Buchanan, Kelly G; Viswanath, Humsini; Black, Jessica; Scabini, Donatella; Bonnet-Brilhault, Frédérique; Knight, Robert T
2015-11-01
There is growing evidence that auditory selective attention operates via distinct facilitatory and inhibitory mechanisms enabling selective enhancement and suppression of sound processing, respectively. The lateral prefrontal cortex (LPFC) plays a crucial role in the top-down control of selective attention. However, whether the LPFC controls facilitatory, inhibitory, or both attentional mechanisms is unclear. Facilitatory and inhibitory mechanisms were assessed, in patients with LPFC damage, by comparing event-related potentials (ERPs) to attended and ignored sounds with ERPs to these same sounds when attention was equally distributed to all sounds. In control subjects, we observed 2 late frontally distributed ERP components: a transient facilitatory component occurring from 150 to 250 ms after sound onset; and an inhibitory component onsetting at 250 ms. Only the facilitatory component was affected in patients with LPFC damage: this component was absent when attending to sounds delivered in the ear contralateral to the lesion, with the most prominent decreases observed over the damaged brain regions. These findings have 2 important implications: (i) they provide evidence for functionally distinct facilitatory and inhibitory mechanisms supporting late auditory selective attention; (ii) they show that the LPFC is involved in the control of the facilitatory mechanisms of auditory attention. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin
2017-07-01
Restoring normal hearing requires knowledge of how peripheral and central auditory processes are affected by hearing loss. Previous research has focussed primarily on peripheral changes following sensorineural hearing loss, whereas consequences for central auditory processing have received less attention. We examined the ability of hearing-impaired children to direct auditory attention to a voice of interest (based on the talker's spatial location or gender) in the presence of a common form of background noise: the voices of competing talkers (i.e. during multi-talker, or "Cocktail Party" listening). We measured brain activity using electro-encephalography (EEG) when children prepared to direct attention to the spatial location or gender of an upcoming target talker who spoke in a mixture of three talkers. Compared to normally-hearing children, hearing-impaired children showed significantly less evidence of preparatory brain activity when required to direct spatial attention. This finding is consistent with the idea that hearing-impaired children have a reduced ability to prepare spatial attention for an upcoming talker. Moreover, preparatory brain activity was not restored when hearing-impaired children listened with their acoustic hearing aids. An implication of these findings is that steps to improve auditory attention alongside acoustic hearing aids may be required to improve the ability of hearing-impaired children to understand speech in the presence of competing talkers. Copyright © 2017 Elsevier B.V. All rights reserved.
Motor contributions to the temporal precision of auditory attention
Morillon, Benjamin; Schroeder, Charles E.; Wyart, Valentin
2014-01-01
In temporal—or dynamic—attending theory, it is proposed that motor activity helps to synchronize temporal fluctuations of attention with the timing of events in a task-relevant stream, thus facilitating sensory selection. Here we develop a mechanistic behavioural account for this theory by asking human participants to track a slow reference beat, by noiseless finger pressing, while extracting auditory target tones delivered on-beat and interleaved with distractors. We find that overt rhythmic motor activity improves the segmentation of auditory information by enhancing sensitivity to target tones while actively suppressing distractor tones. This effect is triggered by cyclic fluctuations in sensory gain locked to individual motor acts, scales parametrically with the temporal predictability of sensory events and depends on the temporal alignment between motor and attention fluctuations. Together, these findings reveal how top-down influences associated with a rhythmic motor routine sharpen sensory representations, enacting auditory ‘active sensing’. PMID:25314898
Krabbe, David; Ellbin, Susanne; Nilsson, Michael; Jonsdottir, Ingibjörg H; Samuelsson, Hans
2017-07-01
Cognitive impairment has frequently been shown in patients who seek medical care for stress-related mental health problems. This study aims to extend the current knowledge of cognitive impairments in these patients by focusing on perceived fatigue and effects of distraction during cognitive testing. Executive function and attention were tested in a group of patients with stress-related exhaustion (n = 25) and compared with healthy controls (n = 25). Perceived fatigue was measured before, during and after the test session, and some of the tests were administered with and without standardized auditory distraction. Executive function and complex attention performance were poorer among the patients compared to controls. Interestingly, their performance was not significantly affected by auditory distraction but, in contrast to the controls, they reported a clear-cut increase in mental tiredness during and after the test session. Thus, patients with stress-related exhaustion managed to perform under distraction, but this was achieved at a great cost. These findings are discussed in terms of a possible tendency to adopt a high-effort approach despite cognitive impairments and the likelihood that such an approach will require increased levels of effort, which can result in increased fatigue. We tentatively conclude that increased fatigue during cognitive tasks is a challenge for patients with stress-related exhaustion and plausibly of major importance when returning to work demanding high cognitive performance.
Kornysheva, Katja; Schubotz, Ricarda I.
2011-01-01
Integrating auditory and motor information often requires precise timing, as in speech and music. In humans, the position of the ventral premotor cortex (PMv) in the dorsal auditory stream renders this area a node for auditory-motor integration. Yet, it remains unknown whether the PMv is critical for auditory-motor timing and which activity increases help to preserve task performance following its disruption. Sixteen healthy volunteers participated in two sessions with fMRI measured at baseline and following repetitive transcranial magnetic stimulation (rTMS) of either the left PMv or a control region. Subjects synchronized left or right finger tapping to sub-second beat rates of auditory rhythms in the experimental task, and produced self-paced tapping during spectrally matched auditory stimuli in the control task. Left PMv rTMS impaired auditory-motor synchronization accuracy in the first sub-block following stimulation (p<0.01, Bonferroni corrected), but spared motor timing and attention to task. Task-related activity increased in the homologue right PMv, but did not predict the behavioral effect of rTMS. In contrast, the anterior midline cerebellum showed the most pronounced activity increase in less impaired subjects. The present findings suggest a critical role of the left PMv in feed-forward computations enabling accurate auditory-motor timing, which can be compensated by activity modulations in the cerebellum, but not in the homologue region contralateral to stimulation. PMID:21738657
Out of focus - brain attention control deficits in adult ADHD.
Salmi, Juha; Salmela, Viljami; Salo, Emma; Mikkola, Katri; Leppämäki, Sami; Tani, Pekka; Hokkanen, Laura; Laasonen, Marja; Numminen, Jussi; Alho, Kimmo
2018-04-24
Modern environments are full of information and place high demands on the attention control mechanisms that allow us to select information from one (focused attention) or multiple (divided attention) sources, react to changes in a given situation (stimulus-driven attention), and allocate effort according to demands (task-positive and task-negative activity). We aimed to reveal how attention deficit hyperactivity disorder (ADHD) affects the brain functions associated with these attention control processes in constantly demanding tasks. Sixteen adults with ADHD and 17 controls performed adaptive visual and auditory discrimination tasks during functional magnetic resonance imaging (fMRI). Overlapping brain activity in frontoparietal saliency and default-mode networks, as well as in the somato-motor, cerebellar, and striatal areas, was observed in all participants. In the ADHD participants, we observed exclusive activity enhancement in brain areas typically considered to be primarily involved in other attention control functions: during auditory-focused attention, we observed higher activation in the sensory cortical areas of the irrelevant modality and in the default-mode network (DMN). DMN activity also increased during divided attention in the ADHD group, in turn decreasing during a simple button-press task. Adding irrelevant stimulation resulted in enhanced activity in the salience network. Finally, the irrelevant distractors that capture attention in a stimulus-driven manner activated dorsal attention networks and the cerebellum. Our findings suggest that attention control deficits in ADHD involve activation of the irrelevant sensory modality and problems in regulating the level of attention on demand, and may encumber top-down processing when irrelevant information is present. Copyright © 2018. Published by Elsevier B.V.
Bender, Stephan; Behringer, Stephanie; Freitag, Christine M; Resch, Franz; Weisbrod, Matthias
2010-12-01
To elucidate the contributions of modality-dependent post-processing in auditory, motor and visual cortical areas to short-term memory. We compared late negative waves (N700) during the post-processing of single lateralized stimuli which were separated by long intertrial intervals across the auditory, motor and visual modalities. Tasks either required or competed with attention to post-processing of preceding events, i.e. active short-term memory maintenance. N700 indicated that cortical post-processing exceeded short movements as well as short auditory or visual stimuli for over half a second without intentional short-term memory maintenance. Modality-specific topographies pointed towards sensory (respectively motor) generators with comparable time-courses across the different modalities. Lateralization and amplitude of auditory/motor/visual N700 were enhanced by active short-term memory maintenance compared to attention to current perceptions or passive stimulation. The memory-related N700 increase followed the characteristic time-course and modality-specific topography of the N700 without intentional memory-maintenance. Memory-maintenance-related lateralized negative potentials may be related to a less lateralised modality-dependent post-processing N700 component which occurs also without intentional memory maintenance (automatic memory trace or effortless attraction of attention). Encoding to short-term memory may involve controlled attention to modality-dependent post-processing. Similar short-term memory processes may exist in the auditory, motor and visual systems. Copyright © 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Solís-Marcos, Ignacio; Galvao-Carmona, Alejandro; Kircher, Katja
2017-01-01
Research on partially automated driving has revealed relevant problems with driving performance, particularly when drivers’ intervention is required (e.g., take-over when automation fails). Mental fatigue has commonly been proposed to explain these effects after prolonged automated drives. However, performance problems have also been reported after just a few minutes of automated driving, indicating that other factors may also be involved. We hypothesize that, besides mental fatigue, an underload effect of partial automation may also affect driver attention. In this study, such potential effect was investigated during short periods of partially automated and manual driving and at different speeds. Subjective measures of mental demand and vigilance, together with performance on a secondary task (an auditory oddball task), were used to assess driver attention. Additionally, modulations of some specific attention-related event-related potentials (ERPs, N1 and P3 components) were investigated. The mental fatigue effects associated with the time on task were also evaluated by using the same measurements. Twenty participants drove in a fixed-base simulator while performing an auditory oddball task that elicited the ERPs. Six conditions were presented (5–6 min each) combining three speed levels (low, comfortable and high) and two automation levels (manual and partially automated). The results showed that, when driving partially automated, scores in subjective mental demand and P3 amplitudes were lower than in the manual conditions. Similarly, P3 amplitude and self-reported vigilance levels decreased with the time on task. Based on previous studies, these findings might reflect a reduction in drivers’ attention resource allocation, presumably due to the underload effects of partial automation and to the mental fatigue associated with the time on task. Particularly, such underload effects on attention could explain the performance decrements after short periods of automated driving reported in other studies. However, further studies are needed to investigate this relationship in partial automation and in other automation levels. PMID:29163112
Cognitive effects of rhythmic auditory stimulation in Parkinson's disease: A P300 study.
Lei, Juan; Conradi, Nadine; Abel, Cornelius; Frisch, Stefan; Brodski-Guerniero, Alla; Hildner, Marcel; Kell, Christian A; Kaiser, Jochen; Schmidt-Kassow, Maren
2018-05-16
Rhythmic auditory stimulation (RAS) may compensate for dysfunctions of the basal ganglia (BG), which are involved in the intrinsic evaluation of temporal intervals and in action initiation or continuation. In the cognitive domain, RAS containing periodically presented tones facilitates young healthy participants' attention allocation to anticipated time points, indicated by better performance and larger P300 amplitudes to periodic compared to random stimuli. Additionally, active auditory-motor synchronization (AMS) leads to a more precise temporal encoding of stimuli via embodied timing encoding than stimulus presentation adapted to the participants' actual movements. Here we investigated the effect of RAS and AMS in Parkinson's disease (PD). Twenty-three PD patients and 23 healthy age-matched controls underwent an auditory oddball task. We manipulated the timing (periodic/random/adaptive) and setting (pedaling/sitting still) of stimulation. While patients showed a general timing effect, i.e., larger P300 amplitudes for periodic versus random tones in both the sitting and pedaling conditions, controls showed a timing effect only in the sitting but not the pedaling condition. However, a correlation between P300 amplitudes and motor variability in the periodic pedaling condition was obtained in control participants only. We conclude that RAS facilitates attentional processing of temporally predictable external events in PD patients as well as healthy controls, but embodied timing encoding via body movement does not affect stimulus processing due to BG impairment in patients. Moreover, even with intact embodied timing encoding, as in healthy elderly adults, the effect of AMS depends on the degree of movement synchronization performance, which was very low in the current study. Copyright © 2018 Elsevier B.V. All rights reserved.
Multisensory Integration Strategy for Modality-Specific Loss of Inhibition Control in Older Adults
Ryu, Hokyoung; Kim, Jae-Kwan; Jeong, Eunju
2018-01-01
Older adults are known to have lesser cognitive control capability and greater susceptibility to distraction than young adults. Previous studies have reported age-related problems in selective attention and inhibitory control, yielding mixed results depending on modality and context in which stimuli and tasks were presented. The purpose of the study was to empirically demonstrate a modality-specific loss of inhibitory control in processing audio-visual information with ageing. A group of 30 young adults (mean age = 25.23, Standard Deviation (SD) = 1.86) and 22 older adults (mean age = 55.91, SD = 4.92) performed the audio-visual contour identification task (AV-CIT). We compared performance of visual/auditory identification (Uni-V, Uni-A) with that of visual/auditory identification in the presence of distraction in counterpart modality (Multi-V, Multi-A). The findings showed a modality-specific effect on inhibitory control. Uni-V performance was significantly better than Multi-V, indicating that auditory distraction significantly hampered visual target identification. However, Multi-A performance was significantly enhanced compared to Uni-A, indicating that auditory target performance was significantly enhanced by visual distraction. Additional analysis showed an age-specific effect on enhancement between Uni-A and Multi-A depending on the level of visual inhibition. Together, our findings indicated that the loss of visual inhibitory control was beneficial for the auditory target identification presented in a multimodal context in older adults. A likely multisensory information processing strategy in the older adults was further discussed in relation to aged cognition. PMID:29641462
An auditory attention task: a note on the processing of verbal information.
Linde, L
1994-04-01
On an auditory attention task subjects were required to reproduce spatial relationships between letters from auditorily presented verbal information containing the prepositions "before" or "after." It was assumed that propositions containing "after" induce a conflict between temporal, and semantically implied, spatial order between letters. Data from 36 subjects showing that propositions with "after" are more difficult to process are presented. A significant, general training effect appeared. 200 mg caffeine had a certain beneficial effect on performance of 18 subjects who had been awake for about 22 hours and were tested at 6 a.m.; however, the beneficial effect was not related to amount of conflict but concerned items without and with conflict. On the other hand, the effect of caffeine for 18 subjects tested at 4 p.m. after normal sleep was slightly negative.
Selective Auditory Attention in Adults: Effects of Rhythmic Structure of the Competing Language
ERIC Educational Resources Information Center
Reel, Leigh Ann; Hicks, Candace Bourland
2012-01-01
Purpose: The authors assessed adult selective auditory attention to determine effects of (a) differences between the vocal/speaking characteristics of different mixed-gender pairs of masking talkers and (b) the rhythmic structure of the language of the competing speech. Method: Reception thresholds for English sentences were measured for 50…
ERIC Educational Resources Information Center
Foster, Nicholas E. V.; Ouimet, Tia; Tryfon, Ana; Doyle-Thomas, Krissy; Anagnostou, Evdokia; Hyde, Krista L.
2016-01-01
In vision, typically-developing (TD) individuals perceive "global" (whole) before "local" (detailed) features, whereas individuals with autism spectrum disorder (ASD) exhibit a local bias. However, auditory global-local distinctions are less clear in ASD, particularly in terms of age and attention effects. To these aims, here…
Selective attention and the auditory vertex potential. 1: Effects of stimulus delivery rate
NASA Technical Reports Server (NTRS)
Schwent, V. L.; Hillyard, S. A.; Galambos, R.
1975-01-01
Enhancement of the auditory vertex potentials with selective attention to dichotically presented tone pips was found to be critically sensitive to the range of inter-stimulus intervals in use. Only at the shortest intervals was a clear-cut enhancement of the vertex potential observed for stimuli delivered to the attended ear.
Audience gaze while appreciating a multipart musical performance.
Kawase, Satoshi; Obata, Satoshi
2016-11-01
Visual information has been observed to be crucial for audience members during musical performances. The present study used an eye tracker to investigate audience members' gazes while appreciating an audiovisual musical ensemble performance, based on evidence that the musical part dominates auditory attention when listening to multipart music containing different melody lines, and on the joint-attention theory of gaze. We presented singing performances by a female duo. The main findings were as follows: (1) the melody part (soprano) attracted more visual attention than the accompaniment part (alto) throughout the piece, (2) joint attention emerged when the singers shifted their gazes toward their co-performer, suggesting that inter-performer gazing interactions that play a spotlight role mediated performer-audience visual interaction, and (3) musical part (melody or accompaniment) strongly influenced the total duration of gazes among audiences, while the spotlight effect of gaze was limited to just after the singers' gaze shifts. Copyright © 2016. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Mulligan, B. E.; Goodman, L. S.; McBride, D. K.; Mitchell, T. M.; Crosby, T. N.
1984-08-01
This work reviews the areas of auditory attention, recognition, memory and auditory perception of patterns, pitch, and loudness. The review was written from the perspective of human engineering and focuses primarily on auditory processing of information contained in acoustic signals. The impetus for this effort was to establish a data base to be utilized in the design and evaluation of acoustic displays.
Scheich, Henning; Brechmann, André; Brosch, Michael; Budinger, Eike; Ohl, Frank W; Selezneva, Elena; Stark, Holger; Tischmeyer, Wolfgang; Wetzel, Wolfram
2011-01-01
Two phenomena of auditory cortex activity have recently attracted attention, namely that the primary field can show different types of learning-related changes of sound representation and that during learning even this early auditory cortex is under strong multimodal influence. Based on neuronal recordings in animal auditory cortex during instrumental tasks, in this review we put forward the hypothesis that these two phenomena serve to derive the task-specific meaning of sounds by associative learning. To understand the implications of this tenet, it is helpful to realize how a behavioral meaning is usually derived for novel environmental sounds. For this purpose, associations with other sensory, e.g. visual, information are mandatory to develop a connection between a sound and its behaviorally relevant cause and/or the context of sound occurrence. This makes it plausible that in instrumental tasks various non-auditory sensory and procedural contingencies of sound generation become co-represented by neuronal firing in auditory cortex. Information related to reward or to avoidance of discomfort during task learning, which is essentially non-auditory, is also co-represented. The reinforcement influence points to the dopaminergic internal reward system, the local role of which for memory consolidation in auditory cortex is well-established. Thus, during a trial of task performance, the neuronal responses to the sounds are embedded in a sequence of representations of such non-auditory information. The embedded auditory responses show task-related modulations falling into types that correspond to three basic logical classifications that may be performed on a perceptual item, i.e. simple detection, discrimination, and categorization. This hierarchy of classifications determines the semantic "same-different" relationships among sounds. Different cognitive classifications appear to be a consequence of the learning task and lead to a recruitment of different excitatory and inhibitory mechanisms and to distinct spatiotemporal metrics of map activation to represent a sound. The described non-auditory firing and modulations of auditory responses suggest that auditory cortex, by collecting all necessary information, functions as a "semantic processor" deducing the task-specific meaning of sounds by learning. © 2010. Published by Elsevier B.V.
Paltoglou, Aspasia E; Sumner, Christian J; Hall, Deborah A
2011-01-01
Feature-specific enhancement refers to the process by which selectively attending to a particular stimulus feature specifically increases the response in the same region of the brain that codes that stimulus property. Whereas there are many demonstrations of this mechanism in the visual system, the evidence is less clear in the auditory system. The present functional magnetic resonance imaging (fMRI) study examined this process for two complex sound features, namely frequency modulation (FM) and spatial motion. The experimental design enabled us to investigate whether selectively attending to FM and spatial motion enhanced activity in those auditory cortical areas that were sensitive to the two features. To control for attentional effort, the difficulty of the target-detection tasks was matched as closely as possible within listeners. Locations of FM-related and motion-related activation were broadly compatible with previous research. The results also confirmed a general enhancement across the auditory cortex when either feature was being attended to, as compared with passive listening. The feature-specific effects of selective attention revealed the novel finding of enhancement for the nonspatial (FM) feature, but not for the spatial (motion) feature. However, attention to spatial features also recruited several areas outside the auditory cortex. Further analyses led us to conclude that feature-specific effects of selective attention are not statistically robust, and appear to be sensitive to the choice of fMRI experimental design and localizer contrast. PMID:21447093
Pre-attentive auditory discrimination skill in Indian classical vocal musicians and non-musicians.
Sanju, Himanshu Kumar; Kumar, Prawin
2016-09-01
To test for pre-attentive auditory discrimination skills in Indian classical vocal musicians and non-musicians. Mismatch negativity (MMN) was recorded to test for pre-attentive auditory discrimination skills with a pair of stimuli (/1000 Hz/ and /1100 Hz/), with /1000 Hz/ as the frequent stimulus and /1100 Hz/ as the infrequent stimulus. Onset, offset and peak latencies were the latency parameters considered, whereas peak amplitude and area under the curve were considered for amplitude analysis. Fifty participants were included in the study: the experimental group comprised 25 adult Indian classical vocal musicians, and 25 age-matched non-musicians served as the control group. Experimental group participants had a minimum of 10 years of professional experience in Indian classical vocal music, whereas control group participants did not have any formal training in music. Descriptive statistics showed better waveform morphology in the experimental group as compared to the control group. MANOVA showed significantly better onset latency, peak amplitude and area under the curve in the experimental group but no significant difference in the offset and peak latencies between the two groups. The present study probably points towards the enhancement of pre-attentive auditory discrimination skills in Indian classical vocal musicians compared to non-musicians. It indicates that Indian classical musical training enhances pre-attentive auditory discrimination skills in musicians, leading to higher peak amplitude and a greater area under the curve compared to non-musicians.
Lebedeva, I S; Akhadov, T A; Petriaĭkin, A V; Kaleda, V G; Barkhatova, A N; Golubev, S A; Rumiantseva, E E; Vdovenko, A M; Fufaeva, E A; Semenova, N A
2011-01-01
Six patients in the state of remission after the first episode of juvenile schizophrenia and seven sex- and age-matched mentally healthy subjects were examined by fMRI and ERP methods. The auditory oddball paradigm was applied. Differences in P300 parameters did not reach the level of significance; however, a significantly higher hemodynamic response to target stimuli was found in patients bilaterally in the supramarginal gyrus and in the right medial frontal gyrus, which points to a pathology of these brain areas in the support of auditory selective attention.
Okazaki, Kosuke; Yamamuro, Kazuhiko; Iida, Junzo; Ota, Toyosaku; Nakanishi, Yoko; Matsuura, Hiroki; Uratani, Mitsuhiro; Sawada, Satomi; Azechi, Takahiro; Kishimoto, Naoko; Kishimoto, Toshifumi
2018-06-01
Attention deficit is commonly observed in several psychiatric conditions. In particular, patients with attention deficit hyperactivity disorder exhibit not only attention deficit, but also intra-individual variability in response times (IIV-RT) during the performance of cognitive tasks related to attention span and sustained attention. Although obsessive compulsive disorder (OCD) is commonly observed across childhood, little is known about abnormalities in IIV-RT during the auditory odd-ball task, and how these changes relate to event-related potential (ERP) components. In the present study, we compared the ERPs of 15 adolescent and pediatric patients with OCD with 15 healthy age-, sex-, and IQ-matched controls. We found that tau of IIV-RT was not significantly different between the OCD group and controls, whereas the OCD group exhibited lower mu and sigma compared to controls. Furthermore, we revealed that P300 amplitude was significantly attenuated in the OCD group at Fz, C3, and C4, compared with controls. The present study thereby provided the first evidence that individuals with pediatric or adolescent OCD exhibit lower intra-individual variability in reaction times during an auditory odd-ball task than controls. These results suggest that there are no impairments in attention span and sustained attention in pediatric and adolescent patients with OCD. Copyright © 2018 Elsevier B.V. All rights reserved.
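The mu, sigma, and tau parameters reported here are commonly obtained by fitting an ex-Gaussian distribution to the reaction-time data. The sketch below shows one way such a fit can be done with SciPy's exponentially modified normal distribution; the mapping to mu, sigma, and tau via its K parameter and the simulated reaction times are assumptions, not the authors' analysis code.

import numpy as np
from scipy import stats

# Hypothetical reaction times in ms: Gaussian component plus exponential tail
rng = np.random.default_rng(0)
rts = rng.normal(420, 45, 300) + rng.exponential(110, 300)

# scipy.stats.exponnorm is parameterized by (K, loc, scale), where K = tau / sigma
K, loc, scale = stats.exponnorm.fit(rts)
mu, sigma, tau = loc, scale, K * scale
print(f"mu = {mu:.1f} ms, sigma = {sigma:.1f} ms, tau = {tau:.1f} ms")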
Yang, Ming-Tao; Hsu, Chun-Hsien; Yeh, Pei-Wen; Lee, Wang-Tso; Liang, Jao-Shwann; Fu, Wen-Mei; Lee, Chia-Ying
2015-01-01
Inattention (IA) has been a major problem in children with attention deficit/hyperactivity disorder (ADHD), accounting for their behavioral and cognitive dysfunctions. However, there are at least three processing steps underlying attentional control for auditory change detection, namely pre-attentive change detection, involuntary attention orienting, and attention reorienting for further evaluation. This study aimed to examine whether children with ADHD would show deficits in any of these subcomponents by using mismatch negativity (MMN), P3a, and late discriminative negativity (LDN) as event-related potential (ERP) markers, under the passive auditory oddball paradigm. Two types of stimuli (pure tones and Mandarin lexical tones) were used to examine whether the deficits were general across linguistic and non-linguistic domains. Participants included 15 native Mandarin-speaking children with ADHD and 16 age-matched controls (across groups, age ranged between 6 and 15 years). Two passive auditory oddball paradigms (lexical tones and pure tones) were applied. The pure tone oddball paradigm included a standard stimulus (1000 Hz, 80%) and two deviant stimuli (1015 and 1090 Hz, 10% each). In the Mandarin lexical tone oddball paradigm, the standard stimulus was /yi3/ (80%) and the two deviant stimuli were /yi1/ and /yi2/ (10% each). The results showed no MMN difference, but did show attenuated P3a and enhanced LDN to the large deviants for both pure and lexical tone changes in the ADHD group. Correlation analysis showed that children with higher ADHD tendency, as indexed by parents' and teachers' ratings on ADHD symptoms, showed less positive P3a amplitudes when responding to large lexical tone deviants. Thus, children with ADHD showed impaired auditory change detection for both pure tones and lexical tones in both involuntary attention switching and attention reorienting for further evaluation. These ERP markers may therefore be used for the evaluation of anti-ADHD drugs that aim to alleviate these dysfunctions.
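As a concrete illustration of how MMN-type difference waves such as those analyzed here are commonly derived, the sketch below subtracts the average standard ERP from the average deviant ERP and reads out the mean amplitude in an assumed latency window. The channel, window, sampling rate, and simulated data are placeholders, not values from this study.

import numpy as np

def difference_wave(deviant_epochs, standard_epochs):
    """Each input: (n_trials, n_samples) baseline-corrected single-channel EEG (uV)."""
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

def mean_amplitude(wave, times, window):
    mask = (times >= window[0]) & (times <= window[1])
    return wave[mask].mean()

fs = 500
times = np.arange(-0.1, 0.5, 1 / fs)
rng = np.random.default_rng(3)
std = rng.normal(0, 4, (400, times.size))   # hypothetical standard-tone epochs
dev = rng.normal(0, 4, (50, times.size))    # hypothetical deviant-tone epochs

mmn_wave = difference_wave(dev, std)
print(f"MMN mean amplitude (100-250 ms): {mean_amplitude(mmn_wave, times, (0.10, 0.25)):.2f} uV")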
1981-07-10
Pohlmann, L. D. Some models of observer behavior in two-channel auditory signal detection. Perception and Psychophysics, 1973, 14, 101-109. Spelke ... spatial), and processing modalities (auditory versus visual input, vocal versus manual response). If validated, this configuration has both theoretical ... conclusion that auditory and visual processes will compete, as will spatial and verbal (albeit to a lesser extent than auditory-auditory, visual-visual
The effect of postsurgical pain on attentional processing in horses.
Dodds, Louise; Knight, Laura; Allen, Kate; Murrell, Joanna
2017-07-01
To investigate the effect of postsurgical pain on the performance of horses in a novel object and auditory startle task. Prospective clinical study. Twenty horses undergoing different types of surgery and 16 control horses that did not undergo surgery. The interaction of 36 horses with novel objects and their response to an auditory stimulus were measured at two time points: the day before surgery (T1) and the day after surgery (T2) for surgical horses (G1), and at a similar time interval for control horses (G2). Pain and sedation were measured using simple descriptive scales at the time the tests were carried out. Total time or score attributed to each of the behavioural categories was compared between groups (G1 and G2) for each test and between tests (T1 and T2) for each group. The median (range) time spent interacting with novel objects was reduced in G1 from 58 (6-367) seconds in T1 to 12 (0-495) seconds in T2 (p=0.0005). In G2 the change in interaction time between T1 and T2 was not statistically significant. Median (range) total auditory score was 7 (3-12) and 10 (1-12) in G1 and G2, respectively, at T1, decreasing to 6 (0-10) in G1 after surgery and 9.5 (1-12) in G2 (p=0.0003 and p=0.94, respectively). There was a difference in total auditory score between G1 and G2 at T2 (p=0.0169), with the score being lower in G1 than G2. Postsurgical pain negatively impacts attention towards novel objects and causes a decreased responsiveness to an auditory startle test. In horses, tasks demanding attention may be useful as a biomarker of pain. Copyright © 2017 Association of Veterinary Anaesthetists and American College of Veterinary Anesthesia and Analgesia. All rights reserved.
Fritz, Jonathan B; Elhilali, Mounya; David, Stephen V; Shamma, Shihab A
2007-07-01
Acoustic filter properties of A1 neurons can dynamically adapt to stimulus statistics, classical conditioning, instrumental learning and the changing auditory attentional focus. We have recently developed an experimental paradigm that allows us to view cortical receptive field plasticity on-line as the animal meets different behavioral challenges by attending to salient acoustic cues and changing its cortical filters to enhance performance. We propose that attention is the key trigger that initiates a cascade of events leading to the dynamic receptive field changes that we observe. In our paradigm, ferrets were initially trained, using conditioned avoidance training techniques, to discriminate between background noise stimuli (temporally orthogonal ripple combinations) and foreground tonal target stimuli. They learned to generalize the task for a wide variety of distinct background and foreground target stimuli. We recorded cortical activity in the awake behaving animal and computed on-line spectrotemporal receptive fields (STRFs) of single neurons in A1. We observed clear, predictable task-related changes in STRF shape while the animal performed spectral tasks (including single tone and multi-tone detection, and two-tone discrimination) with different tonal targets. A different set of task-related changes occurred when the animal performed temporal tasks (including gap detection and click-rate discrimination). Distinctive cortical STRF changes may constitute a "task-specific signature". These spectral and temporal changes in cortical filters occur quite rapidly, within 2 min of task onset, and fade just as quickly after task completion or, in some cases, persist for hours. The same cell could multiplex by differentially changing its receptive field in different task conditions. On-line dynamic task-related changes, as well as persistent plastic changes, were observed at a single-unit, multi-unit and population level. Auditory attention is likely to be pivotal in mediating these task-related changes since the magnitude of STRF changes correlated with behavioral performance on tasks with novel targets. Overall, these results suggest the presence of an attention-triggered plasticity algorithm in A1 that can swiftly change STRF shape by transforming receptive fields to enhance figure/ground separation, by using a contrast matched filter to filter out the background, while simultaneously enhancing the salient acoustic target in the foreground. These results favor the view of a nimble, dynamic, attentive and adaptive brain that can quickly reshape its sensory filter properties and sensori-motor links on a moment-to-moment basis, depending upon the current challenges the animal faces. In this review, we summarize our results in the context of a broader survey of the field of auditory attention, and then consider neuronal networks that could give rise to this phenomenon of attention-driven receptive field plasticity in A1.
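To make the STRF concept concrete, the sketch below estimates a spectrotemporal receptive field by regularized reverse correlation, modeling firing rate as a linear function of the recent stimulus spectrogram. This is a minimal generic estimator under assumed shapes, lags, and regularization; it is not the on-line method used in the study, and all names and data are illustrative.

import numpy as np

def estimate_strf(spectrogram, rate, n_lags, ridge=1e2):
    """spectrogram: (n_freq, n_time); rate: (n_time,) spike rate. Returns (n_freq, n_lags) STRF."""
    n_freq, n_time = spectrogram.shape
    # Build a design matrix of lagged spectrogram frames
    X = np.zeros((n_time - n_lags, n_freq * n_lags))
    for t in range(n_lags, n_time):
        X[t - n_lags] = spectrogram[:, t - n_lags:t].ravel()
    y = rate[n_lags:]
    # Ridge-regularized least squares (reverse correlation with regularization)
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
    return w.reshape(n_freq, n_lags)

# Example with synthetic data: the "neuron" responds to one frequency channel
rng = np.random.default_rng(1)
spec = rng.standard_normal((32, 2000))
rate = np.convolve(spec[10], np.ones(5) / 5, mode="same") + 0.1 * rng.standard_normal(2000)
strf = estimate_strf(spec, rate, n_lags=20)
print(strf.shape)  # (32, 20)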
Tsunoda, Naoko; Hashimoto, Mamoru; Ishikawa, Tomohisa; Fukuhara, Ryuji; Yuki, Seiji; Tanaka, Hibiki; Hatada, Yutaka; Miyagawa, Yusuke; Ikeda, Manabu
2018-05-08
Auditory hallucinations are an important symptom for diagnosing dementia with Lewy bodies (DLB), yet they have received less attention than visual hallucinations. We investigated the clinical features of auditory hallucinations and the possible mechanisms by which they arise in patients with DLB. We recruited 124 consecutive patients with probable DLB (diagnosis based on the DLB International Workshop 2005 criteria; study period: June 2007-January 2015) from the dementia referral center of Kumamoto University Hospital. We used the Neuropsychiatric Inventory to assess the presence of auditory hallucinations, visual hallucinations, and other neuropsychiatric symptoms. We reviewed all available clinical records of patients with auditory hallucinations to assess their clinical features. We performed multiple logistic regression analysis to identify significant independent predictors of auditory hallucinations. Of the 124 patients, 44 (35.5%) had auditory hallucinations and 75 (60.5%) had visual hallucinations. The majority of patients (90.9%) with auditory hallucinations also had visual hallucinations. Auditory hallucinations consisted mostly of human voices, and 90% of patients described them as like hearing a soundtrack of the scene. Multiple logistic regression showed that the presence of auditory hallucinations was significantly associated with female sex (P = .04) and hearing impairment (P = .004). The analysis also revealed independent correlations between the presence of auditory hallucinations and visual hallucinations (P < .001), phantom boarder delusions (P = .001), and depression (P = .038). Auditory hallucinations are common neuropsychiatric symptoms in DLB and usually appear as a background soundtrack accompanying visual hallucinations. Auditory hallucinations in patients with DLB are more likely to occur in women and those with impaired hearing, depression, delusions, or visual hallucinations. © Copyright 2018 Physicians Postgraduate Press, Inc.
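To make the reported analysis concrete, here is a minimal sketch of a multiple logistic regression of the kind described (presence of auditory hallucinations predicted by sex, hearing impairment, visual hallucinations, phantom boarder delusions, and depression), using statsmodels with entirely hypothetical column names and simulated data; it is not the authors' dataset or code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated patient-level data with hypothetical binary variables
rng = np.random.default_rng(7)
n = 124
df = pd.DataFrame({
    "auditory_hallucinations": rng.integers(0, 2, n),
    "female": rng.integers(0, 2, n),
    "hearing_impairment": rng.integers(0, 2, n),
    "visual_hallucinations": rng.integers(0, 2, n),
    "phantom_boarder": rng.integers(0, 2, n),
    "depression": rng.integers(0, 2, n),
})

model = smf.logit(
    "auditory_hallucinations ~ female + hearing_impairment + "
    "visual_hallucinations + phantom_boarder + depression",
    data=df,
).fit(disp=False)
print(model.summary())
print(np.exp(model.params))  # coefficients expressed as odds ratios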
ERIC Educational Resources Information Center
Lallier, Marie; Donnadieu, Sophie; Valdois, Sylviane
2013-01-01
The simultaneous auditory processing skills of 17 dyslexic children and 17 skilled readers were measured using a dichotic listening task. Results showed that the dyslexic children exhibited difficulties reporting syllabic material when presented simultaneously. As a measure of simultaneous visual processing, visual attention span skills were…
The Development of Visual and Auditory Selective Attention Using the Central-Incidental Paradigm.
ERIC Educational Resources Information Center
Conroy, Robert L.; Weener, Paul
Analogous auditory and visual central-incidental learning tasks were administered to 24 students from each of the second, fourth, and sixth grades. The visual tasks served as another modification of Hagen's central-incidental learning paradigm, with the interpretation that focal attention processes continue to develop until the age of 12 or 13…
Goebl, Werner
2015-01-01
Nonverbal auditory and visual communication helps ensemble musicians predict each other’s intentions and coordinate their actions. When structural characteristics of the music make predicting co-performers’ intentions difficult (e.g., following long pauses or during ritardandi), reliance on incoming auditory and visual signals may change. This study tested whether attention to visual cues during piano–piano and piano–violin duet performance increases in such situations. Pianists performed the secondo part to three duets, synchronizing with recordings of violinists or pianists playing the primo parts. Secondos’ access to incoming audio and visual signals and to their own auditory feedback was manipulated. Synchronization was most successful when primo audio was available, deteriorating when primo audio was removed and only cues from primo visual signals were available. Visual cues were used effectively following long pauses in the music, however, even in the absence of primo audio. Synchronization was unaffected by the removal of secondos’ own auditory feedback. Differences were observed in how successfully piano–piano and piano–violin duos synchronized, but these effects of instrument pairing were not consistent across pieces. Pianists’ success at synchronizing with violinists and other pianists is likely moderated by piece characteristics and individual differences in the clarity of cueing gestures used. PMID:26279610
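As an illustration of one common way synchronization success in duet studies like this can be quantified (not necessarily the measure used here), the sketch below pairs each secondo note onset with the nearest primo onset and reports the mean absolute asynchrony; the onset times and the nearest-neighbor pairing rule are assumptions.

import numpy as np

def mean_abs_asynchrony(primo_onsets, secondo_onsets):
    """Pair each secondo onset with the nearest primo onset; return mean |asynchrony| in ms."""
    primo = np.asarray(primo_onsets)
    async_ms = [1000 * np.min(np.abs(primo - t)) for t in secondo_onsets]
    return float(np.mean(async_ms))

# Hypothetical onset times (s): the secondo lags the primo slightly and jitters
primo = np.arange(0, 10, 0.5)
rng = np.random.default_rng(5)
secondo = primo + 0.02 + rng.normal(0, 0.01, primo.size)
print(f"mean absolute asynchrony: {mean_abs_asynchrony(primo, secondo):.1f} ms")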
Incidental category learning and cognitive load in a multisensory environment across childhood.
Broadbent, H J; Osborne, T; Rea, M; Peng, A; Mareschal, D; Kirkham, N Z
2018-06-01
Multisensory information has been shown to facilitate learning (Bahrick & Lickliter, 2000; Broadbent, White, Mareschal, & Kirkham, 2017; Jordan & Baker, 2011; Shams & Seitz, 2008). However, although research has examined the modulating effect of unisensory and multisensory distractors on multisensory processing, the extent to which a concurrent unisensory or multisensory cognitive load task would interfere with or support multisensory learning remains unclear. This study examined the role of concurrent task modality on incidental category learning in 6- to 10-year-olds. Participants were engaged in a multisensory learning task while also performing either a unisensory (visual or auditory only) or multisensory (audiovisual) concurrent task (CT). We found that engaging in an auditory CT led to poorer performance on incidental category learning compared with an audiovisual or visual CT, across groups. In 6-year-olds, category test performance was at chance in the auditory-only CT condition, suggesting auditory concurrent tasks may interfere with learning in younger children, but the addition of visual information may serve to focus attention. These findings provide novel insight into the use of multisensory concurrent information on incidental learning. Implications for the deployment of multisensory learning tasks within education across development and developmental changes in modality dominance and ability to switch flexibly across modalities are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Treating attention in mild aphasia: evaluation of attention process training-II.
Murray, Laura L; Keeton, R Jessica; Karcher, Laura
2006-01-01
This study examined whether attention processing training-II [Sohlberg, M. M., Johnson, L., Paule, L., Raskin, S. A., & Mateer, C. A. (2001). Attention Process Training-II: A program to address attentional deficits for persons with mild cognitive dysfunction (2nd ed.). Wake Forest, NC: Lash & Associates.; APT-II], when applied in the context of a multiple baseline ABA design, would improve the attention abilities of RW, a patient with mild conduction aphasia and concomitant attention and working memory deficits. We also explored whether APT-II training would enhance RW's auditory comprehension, other cognitive abilities such as memory, and his and his spouse's perceptions of his daily attention and communication difficulties. With treatment, RW improved on trained attention tasks and made modest gains on standardized tests and probes that evaluated cognitive skills related to treatment activities. Nominal change in auditory comprehension and untrained attention and memory functions was observed, and neither RW nor his spouse reported noticeable improvements in his daily attention or communication abilities. These and previous findings indicate that structured attention retraining may enhance specific attention skills, but that positive changes in broader attention and untrained functions are less likely. As a result of reading this article, the participant will be able to: (1) summarize the previous literature regarding attention impairments and treatment approaches for patients with aphasia. (2) describe how Attention Processing Training-II affected the attention, auditory comprehension, and other cognitive abilities of the patient in this study.
Brain Networks of Novelty-Driven Involuntary and Cued Voluntary Auditory Attention Shifting
Huang, Samantha; Belliveau, John W.; Tengshe, Chinmayi; Ahveninen, Jyrki
2012-01-01
In everyday life, we need a capacity to flexibly shift attention between alternative sound sources. However, relatively little work has been done to elucidate the mechanisms of attention shifting in the auditory domain. Here, we used a mixed event-related/sparse-sampling fMRI approach to investigate this essential cognitive function. In each 10-sec trial, subjects were instructed to wait for an auditory “cue” signaling the location where a subsequent “target” sound was likely to be presented. The target was occasionally replaced by an unexpected “novel” sound in the uncued ear, to trigger involuntary attention shifting. To maximize the attention effects, cues, targets, and novels were embedded within dichotic 800-Hz vs. 1500-Hz pure-tone “standard” trains. The sound of clustered fMRI acquisition (starting at t = 7.82 sec) served as a controlled trial-end signal. Our approach revealed notable activation differences between the conditions. Cued voluntary attention shifting activated the superior intraparietal sulcus (IPS), whereas novelty-triggered involuntary orienting activated the inferior IPS and certain subareas of the precuneus. Clearly more widespread activations were observed during voluntary than involuntary orienting in the premotor cortex, including the frontal eye fields. Moreover, we found evidence for a frontoinsular-cingular attentional control network, consisting of the anterior insula, inferior frontal cortex, and medial frontal cortices, which were activated during both target discrimination and voluntary attention shifting. Finally, novels and targets activated much wider areas of superior temporal auditory cortices than shifting cues. PMID:22937153
Aliakbaryhosseinabadi, Susan; Kamavuako, Ernest Nlandu; Jiang, Ning; Farina, Dario; Mrachacz-Kersting, Natalie
2017-11-01
Dual tasking is defined as performing two tasks concurrently and has been shown to have a significant effect on attention directed to the performance of the main task. In this study, an attention diversion task with two different levels was administered while participants had to complete a cue-based motor task consisting of foot dorsiflexion. An auditory oddball task with two levels of complexity was implemented to divert the user's attention. Electroencephalographic (EEG) recordings were made from nine single channels. Event-related potentials (ERPs) confirmed that the oddball task of counting a sequence of two tones decreased the auditory P300 amplitude more than the oddball task of counting one target tone among three different tones. Pre-movement features quantified from the movement-related cortical potential (MRCP) differed significantly between single- and dual-task conditions in motor and fronto-central channels. There was a significant delay in movement detection for single-tone counting in two motor channels only (237.1-247.4 ms). For the task of sequence counting, motor cortex and frontal channels showed a significant delay in MRCP detection (232.1-250.5 ms). This study investigated the effect of attention diversion in dual-task conditions by analysing both ERPs and MRCPs in single channels. The higher level of attention diversion led to a significant reduction in specific MRCP features of the motor task. These results suggest that attention division in dual-tasking situations plays an important role in movement execution and detection. This has important implications for the design of real-time brain-computer interface systems. Copyright © 2017 Elsevier B.V. All rights reserved.
Long-term memory biases auditory spatial attention.
Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude
2017-10-01
Long-term memory (LTM) has been shown to bias attention to a previously learned visual target location. Here, we examined whether memory-predicted spatial location can facilitate the detection of a faint pure tone target embedded in real world audio clips (e.g., soundtrack of a restaurant). During an initial familiarization task, participants heard audio clips, some of which included a lateralized target (p = 50%). On each trial participants indicated whether the target was presented from the left, right, or was absent. Following a 1 hr retention interval, participants were presented with the same audio clips, which now all included a target. In Experiment 1, participants showed memory-based gains in response time and d'. Experiment 2 showed that temporal expectations modulate attention, with greater memory-guided attention effects on performance when temporal context was reinstated from learning (i.e., when timing of the target within audio clips was not changed from initially learned timing). Experiment 3 showed that while conscious recall of target locations was modulated by exposure to target-context associations during learning (i.e., better recall with higher number of learning blocks), the influence of LTM associations on spatial attention was not reduced (i.e., number of learning blocks did not affect memory-guided attention). Both Experiments 2 and 3 showed gains in performance related to target-context associations, even for associations that were not explicitly remembered. Together, these findings indicate that memory for audio clips is acquired quickly and is surprisingly robust; both implicit and explicit LTM for the location of a faint target tone modulated auditory spatial attention. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
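The memory-related gains in d' reported in Experiment 1 follow the standard signal-detection calculation from hit and false-alarm rates. The following is a minimal sketch of that computation, not the authors' own analysis code; the function name, the log-linear correction, and the example counts are illustrative assumptions.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction keeps rates away from 0 and 1 so the
    z-transform stays finite (one common convention; the original
    study may have used another).
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one listener in one condition
print(d_prime(hits=38, misses=12, false_alarms=9, correct_rejections=41))
```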
Van Vleet, Thomas M; DeGutis, Joseph M
2013-03-01
Prominent deficits in spatial attention evident in patients with hemispatial neglect are often accompanied by equally prominent deficits in non-spatial attention (e.g., poor sustained and selective attention, pronounced vigilance decrement). A number of studies now show that deficits in non-spatial attention influence spatial attention. Treatment strategies focused on improving vigilance or sustained attention may effectively remediate neglect. For example, a recent study employing Tonic and Phasic Alertness Training (TAPAT), a task that requires monitoring a constant stream of hundreds of novel scenes, demonstrated group-level (n=12) improvements after training, compared to a test-retest control group or an active treatment control condition, on measures of visual search, midpoint estimation and working memory (DeGutis and Van Vleet, 2010). To determine whether the modality of treatment or stimulus novelty are key factors in improving hemispatial neglect, we designed a similar continuous performance training task in which eight patients with chronic and moderate to severe neglect were challenged to rapidly and continuously discriminate a limited set of centrally presented auditory tones once a day for 9 days (36 min/day). All patients demonstrated significant improvement in several untrained measures of spatial and non-spatial visual attention, and as a group failed to demonstrate a lateralized attention deficit 24 h post-training, compared to a control group of chronic neglect patients who simply waited during the training period. The results indicate that TAPAT-related improvements in hemispatial neglect are likely due to improvements in the intrinsic regulation of supramodal, non-spatial attentional resources. Published by Elsevier Ltd.
Investigating three types of continuous auditory feedback in visuo-manual tracking.
Boyer, Éric O; Bevilacqua, Frédéric; Susini, Patrick; Hanneton, Sylvain
2017-03-01
The use of continuous auditory feedback for motor control and learning is still understudied and deserves more attention regarding fundamental mechanisms and applications. This paper presents the results of three experiments studying the contribution of task-, error-, and user-related sonification to visuo-manual tracking and assessing its benefits on sensorimotor learning. Initial results show that sonification can help decrease the tracking error and increase the energy in participants' movements. In the second experiment, in which feedback presence was alternated, the user-related sonification did not show feedback-dependency effects, contrary to the error- and task-related feedback. In the third experiment, reducing feedback exposure by 50% diminished the positive effect of sonification on performance, whereas the increase in average energy with sound remained significant. In a retention test performed on the next day without auditory feedback, movement energy was still superior for the groups previously trained with the feedback. Although performance was not affected by sound, a learning effect was measurable in both sessions, and the user-related group also improved its performance in the retention test. These results confirm that continuous auditory feedback can be beneficial for movement training and also show an interesting effect of sonification on movement energy. User-related sonification can prevent feedback dependency and increase retention. Consequently, sonification of the user's own motion appears to be a promising solution to support movement learning with interactive feedback.
Rhythmic Haptic Stimuli Improve Short-Term Attention.
Zhang, Shusheng; Wang, Dangxiao; Afzal, Naqash; Zhang, Yuru; Wu, Ruilin
2016-01-01
Brainwave entrainment using rhythmic visual and/or auditory stimulation has shown its efficacy in modulating neural activities and cognitive ability. In the present study, we aimed to investigate whether rhythmic haptic stimulation could enhance short-term attention. An experiment using a sensorimotor rhythm (SMR)-increasing protocol was performed in which participants were presented with a 15-Hz sinusoidal vibrotactile stimulus on the palm. The Test of Variables of Attention (T.O.V.A.) was performed before and after the stimulation session. Electroencephalography (EEG) was recorded across the stimulation session and the two attention test sessions. SMR band power showed a significant increase after stimulation. Results of the T.O.V.A. tests indicated an improvement in the attention of participants who had received the stimulation compared to the control group who had not. The d-prime score of the T.O.V.A. revealed that participants performed better in perceptual sensitivity and sustained attention compared to their baseline performance before the stimulation session. These findings highlight the potential value of using haptics-based brainwave entrainment for cognitive training.
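The pre- versus post-stimulation comparison of SMR band power described above reduces to estimating EEG power in a narrow band (the SMR range is conventionally about 12-15 Hz). Below is a hedged sketch of one way to do this with Welch's method; the sampling rate, window length, band edges, and the random placeholder signals are assumptions, not details taken from the study.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band=(12.0, 15.0)):
    """Total power within a frequency band for one EEG channel.

    eeg  : 1-D array of samples
    fs   : sampling rate in Hz
    band : (low, high) band edges in Hz, here the SMR range
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))  # 2-s windows
    mask = (freqs >= band[0]) & (freqs <= band[1])
    df = freqs[1] - freqs[0]
    return psd[mask].sum() * df  # integrate the PSD over the band

# Hypothetical pre- and post-stimulation segments (random stand-ins)
fs = 250
pre = np.random.randn(fs * 60)
post = np.random.randn(fs * 60)
print(band_power(pre, fs), band_power(post, fs))
```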
Assessment of Auditory Functioning of Deaf-Blind Multihandicapped Children.
ERIC Educational Resources Information Center
Kukla, Deborah; Connolly, Theresa Thomas
The manual describes a procedure to assess to what extent a deaf-blind multiply handicapped student uses his residual hearing in the classroom. Six levels of auditory functioning (awareness/reflexive, attention/alerting, localization, auditory discrimination, recognition, and comprehension) are analyzed, and assessment activities are detailed for…
Hisagi, Miwako; Shafer, Valerie L.; Strange, Winifred; Sussman, Elyse S.
2015-01-01
This study examined automaticity of discrimination of a Japanese length contrast for consonants (miʃi vs. miʃʃi) in native (Japanese) and non-native (American-English) listeners using behavioral measures and the event-related potential (ERP) mismatch negativity (MMN). Attention was directed either away from the auditory input via a visual oddball task (Visual Attend) or toward it by asking the listeners to count auditory deviants (Auditory Attend). Results showed a larger MMN when attention was focused on the consonant contrast than away from it for both groups. The MMN was larger for consonant duration increments than decrements. No difference in MMN between the language groups was observed, but the Japanese listeners did show better behavioral discrimination than the American English listeners. In addition, behavioral responses showed a weak but significant correlation with MMN amplitude. These findings suggest that both acoustic-phonetic properties and phonological experience affect automaticity of speech processing. PMID:26119918
Effects of auditory selective attention on chirp evoked auditory steady state responses.
Bohr, Andreas; Bernarding, Corinna; Strauss, Daniel J; Corona-Strauss, Farah I
2011-01-01
Auditory steady state responses (ASSRs) are frequently used to assess auditory function. Recently, interest in the effects of attention on ASSRs has increased. In this paper, we investigated for the first time possible effects of attention on ASSRs evoked by amplitude-modulated and frequency-modulated chirp paradigms. Different paradigms were designed using chirps with low and high frequency content, and the stimulation was presented in monaural and dichotic modalities. A total of 10 young subjects participated in the study; they were instructed to ignore the stimuli and, after a second repetition, to detect a deviant stimulus. In the time domain analysis, we found enhanced amplitudes for the attended conditions. Furthermore, we observed higher amplitude values for the condition using frequency-modulated low-frequency chirps under monaural stimulation. The largest difference between the attended and unattended conditions was observed in the dichotic case of the amplitude-modulated condition using chirps with low-frequency content.
Effects of smoking marijuana on brain perfusion and cognition.
O'Leary, Daniel S; Block, Robert I; Koeppel, Julie A; Flaum, Michael; Schultz, Susan K; Andreasen, Nancy C; Ponto, Laura Boles; Watkins, G Leonard; Hurtig, Richard R; Hichwa, Richard D
2002-06-01
The effects of smoking marijuana on regional cerebral blood flow (rCBF) and cognitive performance were assessed in 12 recreational users in a double-blinded, placebo-controlled study. PET with [(15)Oxygen]-labeled water ([(15)O]H(2)O) was used to measure rCBF before and after smoking of marijuana and placebo cigarettes, as subjects repeatedly performed an auditory attention task. Smoking marijuana resulted in intoxication, as assessed by a behavioral rating scale, but did not significantly alter mean behavioral performance on the attention task. Heart rate and blood pressure increased dramatically following smoking of marijuana but not placebo cigarettes. However, mean global CBF did not change significantly. Increased rCBF was observed in orbital and mesial frontal lobes, insula, temporal poles, anterior cingulate, as well as in the cerebellum. The increases in rCBF in anterior brain regions were predominantly in "paralimbic" regions and may be related to marijuana's mood-related effects. Reduced rCBF was observed in temporal lobe auditory regions, in visual cortex, and in brain regions that may be part of an attentional network (parietal lobe, frontal lobe and thalamus). These rCBF decreases may be the neural basis of perceptual and cognitive alterations that occur with acute marijuana intoxication. There was no significant rCBF change in the nucleus accumbens or other reward-related brain regions, nor in basal ganglia or hippocampus, which have a high density of cannabinoid receptors.
Gainotti, Guido
2010-02-01
The aim of the present survey was to review scientific articles dealing with the non-visual (auditory and tactile) forms of neglect to determine: (a) whether behavioural patterns similar to those observed in the visual modality can also be observed in the non-visual modalities; (b) whether a different severity of neglect can be found in the visual and in the auditory and tactile modalities; (c) the reasons for the possible differences between the visual and non-visual modalities. Data pointing to a contralesional orienting of attention in the auditory and the tactile modalities in visual neglect patients were separately reviewed. Results showed: (a) that in patients with right brain damage manifestations of neglect for the contralesional side of space can be found not only in the visual but also in the auditory and tactile modalities; (b) that the severity of neglect is greater in the visual than in the non-visual modalities. This asymmetry in the severity of neglect across modalities seems attributable to the greater role that the automatic capture of attention by irrelevant ipsilesional stimuli plays in the visual modality. Copyright 2009 Elsevier Srl. All rights reserved.
Beeghly, Marjorie; Rose-Jacobs, Ruth; Martin, Brett M.; Cabral, Howard J.; Heeren, Timothy C.; Frank, Deborah A.
2014-01-01
Neuropsychological processes such as attention and memory contribute to children's higher-level cognitive and language functioning and predict academic achievement. The goal of this analysis was to evaluate whether level of intrauterine cocaine exposure (IUCE) alters multiple aspects of preadolescents' neuropsychological functioning assessed using a single age-referenced instrument, the NEPSY: A Developmental Neuropsychological Assessment (NEPSY) [71], after controlling for relevant covariates. Participants included 137 term 9.5-year-old children from low-income urban backgrounds (51% male, 90% African American/Caribbean) from an ongoing prospective longitudinal study. Level of IUCE was assessed in the newborn period using infant meconium and maternal report. 52% of the children had IUCE (65% with lighter IUCE, and 35% with heavier IUCE), and 48% were unexposed. Infants with Fetal Alcohol Syndrome, HIV seropositivity, or intrauterine exposure to illicit substances other than cocaine and marijuana were excluded. At the 9.5-year follow-up visit, trained examiners masked to IUCE and background variables evaluated children's neuropsychological functioning using the NEPSY. The association between level of IUCE and NEPSY outcomes was evaluated in a series of linear regressions controlling for intrauterine exposure to other substances and relevant child, caregiver, and demographic variables. Results indicated that level of IUCE was associated with lower scores on the Auditory Attention and Narrative Memory tasks, both of which require auditory information processing and sustained attention for successful performance. However, results did not follow the expected ordinal, dose-dependent pattern. Children's neuropsychological test scores were also altered by a variety of other biological and psychosocial factors. PMID:24978115
Long-term exposure to noise impairs cortical sound processing and attention control.
Kujala, Teija; Shtyrov, Yury; Winkler, Istvan; Saher, Marieke; Tervaniemi, Mari; Sallinen, Mikael; Teder-Sälejärvi, Wolfgang; Alho, Kimmo; Reinikainen, Kalevi; Näätänen, Risto
2004-11-01
Long-term exposure to noise impairs human health, causing pathological changes in the inner ear as well as other anatomical and physiological deficits. Numerous individuals are exposed to excessive noise on a daily basis. However, there is a lack of systematic research on the effects of noise on cortical function. Here we report data showing that long-term exposure to noise has a persistent effect on central auditory processing and leads to concurrent behavioral deficits. We found that speech-sound discrimination was impaired in noise-exposed individuals, as indicated by behavioral responses and the mismatch negativity brain response. Furthermore, irrelevant sounds increased the distractibility of the noise-exposed subjects, which was shown by increased interference in task performance and aberrant brain responses. These results demonstrate that long-term exposure to noise has long-lasting detrimental effects on central auditory processing and attention control.
Auditory interfaces: The human perceiver
NASA Technical Reports Server (NTRS)
Colburn, H. Steven
1991-01-01
A brief introduction is presented to the basic auditory abilities of the human perceiver, with particular attention to issues that may be important for the design of auditory interfaces. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.
ERIC Educational Resources Information Center
Erbay, Filiz
2013-01-01
The aim of present research was to describe the relation of six-year-old children's attention and reading readiness skills (general knowledge, word comprehension, sentences, and matching) with their auditory reasoning and processing skills. This was a quantitative study based on scanning model. Research sampling consisted of 204 kindergarten…
Auditory Stream Segregation Improves Infants' Selective Attention to Target Tones Amid Distracters
ERIC Educational Resources Information Center
Smith, Nicholas A.; Trainor, Laurel J.
2011-01-01
This study examined the role of auditory stream segregation in the selective attention to target tones in infancy. Using a task adapted from Bregman and Rudnicky's 1975 study and implemented in a conditioned head-turn procedure, infant and adult listeners had to discriminate the temporal order of 2,200 and 2,400 Hz target tones presented alone,…
Molina, S J; Miceli, M; Guelman, L R
2016-07-01
Noise from urban traffic, household appliances or discotheques might be as hazardous to the health of exposed people as occupational noise, because it may likewise cause hearing loss, changes in the hormonal, cardiovascular and immune systems, and behavioral alterations. In addition, noise can affect sleep, work performance and productivity as well as communication skills. Moreover, exposure to noise can trigger an oxidative imbalance between reactive oxygen species (ROS) and the activity of antioxidant enzymes in different structures, which can contribute to tissue damage. In this review we systematized the information from reports concerning noise effects on cell oxidative balance in different tissues, focusing on auditory and non-auditory structures. We paid specific attention to in vivo studies, including results obtained in adult and developing subjects. Finally, we discussed the pharmacological strategies tested by different authors aimed at minimizing the damaging effects of noise on living beings. Copyright © 2015 Elsevier Ltd. All rights reserved.
Mood Modulates Auditory Laterality of Hemodynamic Mismatch Responses during Dichotic Listening
Schock, Lisa; Dyck, Miriam; Demenescu, Liliana R.; Edgar, J. Christopher; Hertrich, Ingo; Sturm, Walter; Mathiak, Klaus
2012-01-01
Hemodynamic mismatch responses can be elicited by deviant stimuli in a sequence of standard stimuli even during cognitively demanding tasks. Emotional context is known to modulate lateralized processing. Right-hemispheric negative emotion processing may bias attention to the right and enhance processing of right-ear stimuli. The present study examined the influence of induced mood on lateralized pre-attentive auditory processing of dichotic stimuli using functional magnetic resonance imaging (fMRI). Faces expressing emotions (sad/happy/neutral) were presented in a blocked design while a dichotic oddball sequence with consonant-vowel (CV) syllables in an event-related design was simultaneously administered. Twenty healthy participants were instructed to feel the emotion perceived on the images and to ignore the syllables. Deviant sounds reliably activated bilateral auditory cortices and confirmed attention effects by modulation of visual activity. Sad mood induction activated visual, limbic and right prefrontal areas. A lateralization effect of emotion-attention interaction was reflected in a stronger response to right-ear deviants in the right auditory cortex during sad mood. This imbalance of resources may be a neurophysiological correlate of laterality in sad mood and depression. Conceivably, the compensatory right-hemispheric enhancement of resources elicits increased ipsilateral processing. PMID:22384105
Further Evidence of Auditory Extinction in Aphasia
ERIC Educational Resources Information Center
Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim
2013-01-01
Purpose: Preliminary research (Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Method: Seventeen IWA (mean age = 53.19 years)…
Research and Studies Directory for Manpower, Personnel, and Training
1989-05-01
Directory excerpt (partially garbled) listing research projects and sponsors, including: control of biosonar behavior by the auditory cortex (Tangney, Air Force Office of Scientific Research); a model for visual attention; auditory perception of complex sounds; and eye movements and spatial pattern vision.
Children's Auditory Perception of Movement of Traffic Sounds.
ERIC Educational Resources Information Center
Pfeffer, K.; Barnecutt, P.
1996-01-01
Examined children's auditory perception of traffic sounds, focusing on identification of vehicle movement. Subjects were 60 children of 5, 8, and 11 years. Results indicated that the auditory perception of movement was a problem area for children, especially five-year olds. Discussed the role of attention-demanding characteristics of some traffic…
Lanzetta-Valdo, Bianca Pinheiro; Oliveira, Giselle Alves de; Ferreira, Jane Tagarro Correa; Palacios, Ester Miyuki Nakamura
2016-01-01
Introduction Children with Attention Deficit Hyperactivity Disorder can present Auditory Processing (AP) Disorder. Objective The study examined AP in ADHD children compared with non-ADHD children, and before and after 3 and 6 months of methylphenidate (MPH) treatment in ADHD children. Methods Drug-naive children diagnosed with ADHD combined subtype, aged between 7 and 11 years, coming from public and private outpatient services or public and private schools, and age- and gender-matched non-ADHD children, participated in an open, non-randomized study from February 2013 to December 2013. They were submitted to a behavioral battery of AP tests comprising Speech with white Noise (SN), Dichotic Digits (DD), and Pitch Pattern Sequence (PPS) and were compared with non-ADHD children. They were followed for 3 and 6 months of MPH treatment (0.5 mg/kg/day). Results ADHD children presented a larger number of errors in DD (p < 0.01), and fewer correct responses in the PPS (p < 0.0001) and in the SN (p < 0.05) tests when compared with non-ADHD children. The treatment with MPH, especially over 6 months, significantly decreased the mean errors in the DD (p < 0.01) and increased the correct responses in the PPS (p < 0.001) and SN (p < 0.01) tests when compared with the performance before MPH treatment. Conclusions ADHD children show inefficient AP in the selected behavioral auditory battery, suggesting impairments in auditory closure, binaural integration, and temporal ordering. Treatment with MPH gradually improved these deficiencies and completely reversed them, with performance similar to non-ADHD children at 6 months of treatment. PMID:28050211
Prenatal Nicotine Exposure Disrupts Infant Neural Markers of Orienting.
King, Erin; Campbell, Alana; Belger, Aysenil; Grewen, Karen
2018-06-07
Prenatal nicotine exposure (PNE) from maternal cigarette smoking is linked to developmental deficits, including impaired auditory processing, language, generalized intelligence, attention, and sleep. Fetal brain undergoes massive growth, organization, and connectivity during gestation, making it particularly vulnerable to neurotoxic insult. Nicotine binds to nicotinic acetylcholine receptors, which are extensively involved in growth, connectivity, and function of developing neural circuitry and neurotransmitter systems. Thus, PNE may have long-term impact on neurobehavioral development. The purpose of this study was to compare the auditory K-complex, an event-related potential reflective of auditory gating, sleep preservation and memory consolidation during sleep, in infants with and without PNE and to relate these neural correlates to neurobehavioral development. We compared brain responses to an auditory paired-click paradigm in 3- to 5-month-old infants during Stage 2 sleep, when the K-complex is best observed. We measured component amplitude and delta activity during the K-complex. Infants with PNE demonstrated significantly smaller amplitude of the N550 component and reduced delta-band power within elicited K-complexes compared to nonexposed infants and also were less likely to orient with a head turn to a novel auditory stimulus (bell ring) when awake. PNE may impair auditory sensory gating, which may contribute to disrupted sleep and to reduced auditory discrimination and learning, attention re-orienting, and/or arousal during wakefulness reported in other studies. Links between PNE and reduced K-complex amplitude and delta power may represent altered cholinergic and GABAergic synaptic programming and possibly reflect early neural bases for PNE-linked disruptions in sleep quality and auditory processing. These may pose significant disadvantage for language acquisition, attention, and social interaction necessary for academic and social success.
Jaeger, Manuela; Bleichner, Martin G; Bauer, Anna-Katharina R; Mirkovic, Bojana; Debener, Stefan
2018-02-27
The acoustic envelope of human speech correlates with the syllabic rate (4-8 Hz) and carries important information for intelligibility, which is typically compromised in multi-talker, noisy environments. In order to better understand the dynamics of selective auditory attention to low-frequency modulated sound sources, we conducted a two-stream auditory steady-state response (ASSR) selective attention electroencephalogram (EEG) study. The two streams consisted of 4 and 7 Hz amplitude- and frequency-modulated sounds presented from the left and right side. One of the two streams had to be attended while the other had to be ignored. The attended stream always contained a target, allowing for behavioral confirmation of the attention manipulation. EEG ASSR power analysis revealed a significant increase in 7 Hz power for the attend compared to the ignore conditions. There was no significant difference in 4 Hz power when the 4 Hz stream had to be attended compared to when it had to be ignored. This lack of 4 Hz attention modulation could be explained by a distracting effect of a third frequency at 3 Hz (the beat frequency) perceivable when the 4 and 7 Hz streams are presented simultaneously. Taken together, our results show that low-frequency modulations at syllabic rate are modulated by selective spatial attention. Whether attention effects act as enhancement of the attended stream or suppression of the to-be-ignored stream may depend on how well auditory streams can be segregated.
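The ASSR power analysis reported here amounts to estimating narrow-band EEG power at the 4 Hz and 7 Hz tagging frequencies separately for attend and ignore conditions. A minimal FFT-based sketch follows; it is not the authors' pipeline, and the epoch length, sampling rate, and averaging scheme are assumptions.

```python
import numpy as np

def assr_power(epochs, fs, target_freq):
    """Power at a tagging frequency from averaged epochs.

    epochs      : array of shape (n_epochs, n_samples) for one channel
    fs          : sampling rate in Hz
    target_freq : stimulation frequency of interest (e.g., 4 or 7 Hz)

    Averaging epochs before the FFT emphasizes activity that is
    phase-locked to the stimulus, the defining property of a
    steady-state response.
    """
    evoked = epochs.mean(axis=0)
    spectrum = np.fft.rfft(evoked)
    freqs = np.fft.rfftfreq(evoked.size, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - target_freq))
    return np.abs(spectrum[idx]) ** 2

# Hypothetical data: 40 epochs of 4 s at 500 Hz (random stand-ins)
fs = 500
epochs = np.random.randn(40, fs * 4)
print(assr_power(epochs, fs, 4.0), assr_power(epochs, fs, 7.0))
```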
Vanniasegaram, Iyngaram; Cohen, Mazal; Rosen, Stuart
2004-12-01
To compare the auditory function of normal-hearing children attending mainstream schools who were referred for an auditory evaluation because of listening/hearing problems (suspected auditory processing disorders [susAPD]) with that of normal-hearing control children. Sixty-five children with a normal standard audiometric evaluation, ages 6-14 yr (32 of whom were referred for susAPD, with the rest age-matched control children), completed a battery of four auditory tests: a dichotic test of competing sentences; a simple discrimination of short tone pairs differing in fundamental frequency at varying interstimulus intervals (TDT); a discrimination task using consonant cluster minimal pairs of real words (CCMP), and an adaptive threshold task for detecting a brief tone presented either simultaneously with a masker (simultaneous masking) or immediately preceding it (backward masking). Regression analyses, including age as a covariate, were performed to determine the extent to which the performance of the two groups differed on each task. Age-corrected z-scores were calculated to evaluate the effectiveness of the complete battery in discriminating the groups. The performance of the susAPD group was significantly poorer than the control group on all but the masking tasks, which failed to differentiate the two groups. The CCMP discriminated the groups most effectively, as it yielded the lowest number of control children with abnormal scores, and performance in both groups was independent of age. By contrast, the proportion of control children who performed poorly on the competing sentences test was unacceptably high. Together, the CCMP (verbal) and TDT (nonverbal) tasks detected impaired listening skills in 56% of the children who were referred to the clinic, compared with 6% of the control children. Performance on the two tasks was not correlated. Two of the four tests evaluated, the CCMP and TDT, proved effective in differentiating the two groups of children of this study. The application of both tests increased the proportion of susAPD children who performed poorly compared with the application of each test alone, while reducing the proportion of control subjects who performed poorly. The findings highlight the importance of carrying out a complete auditory evaluation in children referred for medical attention, even if their standard audiometric evaluation is unremarkable.
Generalization of Auditory Sensory and Cognitive Learning in Typically Developing Children.
Murphy, Cristina F B; Moore, David R; Schochat, Eliane
2015-01-01
Despite the well-established involvement of both sensory ("bottom-up") and cognitive ("top-down") processes in literacy, the extent to which auditory or cognitive (memory or attention) learning transfers to phonological and reading skills remains unclear. Most research has demonstrated learning of the trained task or even learning transfer to a closely related task. However, few studies have reported "far-transfer" to a different domain, such as the improvement of phonological and reading skills following auditory or cognitive training. This study assessed the effectiveness of auditory, memory or attention training on far-transfer measures involving phonological and reading skills in typically developing children. Mid-transfer was also assessed through untrained auditory, attention and memory tasks. Sixty 5- to 8-year-old children with normal hearing were quasi-randomly assigned to one of five training groups: attention group (AG), memory group (MG), auditory sensory group (SG), placebo group (PG; drawing, painting), and a control, untrained group (CG). Compliance, mid-transfer and far-transfer measures were evaluated before and after training. All trained groups received 12 x 45-min training sessions over 12 weeks. The CG did not receive any intervention. All trained groups, especially older children, exhibited significant learning of the trained task. On pre- to post-training measures (test-retest), most groups exhibited improvements on most tasks. There was significant mid-transfer for a visual digit span task, with highest span in the MG, relative to other groups. These results show that both sensory and cognitive (memory or attention) training can lead to learning in the trained task and to mid-transfer learning on a task (visual digit span) within the same domain as the trained tasks. However, learning did not transfer to measures of language (reading and phonological awareness), as the PG and CG improved as much as the other trained groups. Further research is required to investigate the effects of various stimuli and lengths of training on the generalization of sensory and cognitive learning to literacy skills.
Hwang, Chung-Feng; Ko, Hui-Chen; Tsou, Yung-Ting; Chan, Kai-Chieh; Fang, Hsuan-Yeh; Wu, Che-Ming
2016-01-01
Objectives. We evaluated the causes, hearing, and speech performance before and after cochlear implant reimplantation in Mandarin-speaking users. Methods. In total, 589 patients who underwent cochlear implantation in our medical center between 1999 and 2014 were reviewed retrospectively. Data related to demographics, etiologies, implant-related information, complications, and hearing and speech performance were collected. Results. In total, 22 (3.74%) cases were found to have major complications. Infection (n = 12) and hard failure of the device (n = 8) were the most common major complications. Among them, 13 were reimplanted in our hospital. The mean scores of the Categorical Auditory Performance (CAP) and the Speech Intelligibility Rating (SIR) obtained before and after reimplantation were 5.5 versus 5.8 and 3.7 versus 4.3, respectively. The SIR score after reimplantation was significantly better than preoperation. Conclusions. Cochlear implantation is a safe procedure with low rates of postsurgical revisions and device failures. The Mandarin-speaking patients in this study who received reimplantation had restored auditory performance and speech intelligibility after surgery. Device soft failure was rare in our series, calling attention to Mandarin-speaking CI users requiring revision of their implants due to undesirable symptoms or decreasing performance of uncertain cause. PMID:27413753
Talebi, Hossein; Moossavi, Abdollah; Faghihzadeh, Soghrat
2014-01-01
Older adults with cerebrovascular accident (CVA) show evidence of auditory and speech perception problems. In the present study, we examined whether these problems are due to impairments of concurrent auditory segregation, the basic level of auditory scene analysis and auditory organization in scenes with competing sounds. Concurrent auditory segregation was assessed using the competing sentence test (CST) and the dichotic digits test (DDT) and compared in 30 male older adults (15 normal and 15 cases with right-hemisphere CVA) in the same age range (60-75 years). For the CST, participants were presented with a target message in one ear and a competing message in the other. The task was to listen to the target sentence and repeat it back without attending to the competing sentence. For the DDT, auditory stimuli were monosyllabic digits presented dichotically, and the task was to repeat them. Comparison of mean CST and DDT scores between CVA patients with right-hemisphere impairment and normal participants showed statistically significant differences (p=0.001 for CST and p<0.0001 for DDT). The present study revealed that the abnormal CST and DDT scores of participants with right-hemisphere CVA could be related to concurrent segregation difficulties. These findings suggest that low-level segregation mechanisms and/or high-level attention mechanisms might contribute to the problems.
Auditory mismatch impairments are characterized by core neural dysfunctions in schizophrenia
Gaebler, Arnim Johannes; Mathiak, Klaus; Koten, Jan Willem; König, Andrea Anna; Koush, Yury; Weyer, David; Depner, Conny; Matentzoglu, Simeon; Edgar, James Christopher; Willmes, Klaus; Zvyagintsev, Mikhail
2015-01-01
Major theories on the neural basis of schizophrenic core symptoms highlight aberrant salience network activity (insula and anterior cingulate cortex), prefrontal hypoactivation, sensory processing deficits as well as an impaired connectivity between temporal and prefrontal cortices. The mismatch negativity is a potential biomarker of schizophrenia and its reduction might be a consequence of each of these mechanisms. In contrast to the previous electroencephalographic studies, functional magnetic resonance imaging may disentangle the involved brain networks at high spatial resolution and determine contributions from localized brain responses and functional connectivity to the schizophrenic impairments. Twenty-four patients and 24 matched control subjects underwent functional magnetic resonance imaging during an optimized auditory mismatch task. Haemodynamic responses and functional connectivity were compared between groups. These data sets further entered a diagnostic classification analysis to assess impairments on the individual patient level. In the control group, mismatch responses were detected in the auditory cortex, prefrontal cortex and the salience network (insula and anterior cingulate cortex). Furthermore, mismatch processing was associated with a deactivation of the visual system and the dorsal attention network indicating a shift of resources from the visual to the auditory domain. The patients exhibited reduced activation in all of the respective systems (right auditory cortex, prefrontal cortex, and the salience network) as well as reduced deactivation of the visual system and the dorsal attention network. Group differences were most prominent in the anterior cingulate cortex and adjacent prefrontal areas. The latter regions also exhibited a reduced functional connectivity with the auditory cortex in the patients. In the classification analysis, haemodynamic responses yielded a maximal accuracy of 83% based on four features; functional connectivity data performed similarly or worse for up to about 10 features. However, connectivity data yielded a better performance when including more than 10 features yielding up to 90% accuracy. Among others, the most discriminating features represented functional connections between the auditory cortex and the anterior cingulate cortex as well as adjacent prefrontal areas. Auditory mismatch impairments incorporate major neural dysfunctions in schizophrenia. Our data suggest synergistic effects of sensory processing deficits, aberrant salience attribution, prefrontal hypoactivation as well as a disrupted connectivity between temporal and prefrontal cortices. These deficits are associated with subsequent disturbances in modality-specific resource allocation. Capturing different schizophrenic core dysfunctions, functional magnetic resonance imaging during this optimized mismatch paradigm reveals processing impairments on the individual patient level, rendering it a potential biomarker of schizophrenia. PMID:25743635
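The diagnostic classification analysis described above, accuracy as a function of the number of haemodynamic or connectivity features, can be illustrated with a cross-validated linear classifier in which feature selection is nested inside the cross-validation. This is a generic sketch of that kind of analysis, not the authors' pipeline; the feature matrix, labels, classifier, and fold counts are placeholders.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(48, 200))   # 48 subjects x 200 candidate features (placeholder)
y = np.repeat([0, 1], 24)        # 24 patients, 24 controls (placeholder labels)

# Classification accuracy as a function of the number of selected features,
# with univariate feature selection kept inside each cross-validation fold
# to avoid leakage.
for k in (4, 10, 20):
    clf = make_pipeline(StandardScaler(),
                        SelectKBest(f_classif, k=k),
                        LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, y, cv=8)
    print(k, scores.mean())
```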
A Model of Auditory-Cognitive Processing and Relevance to Clinical Applicability.
Edwards, Brent
2016-01-01
Hearing loss and cognitive function interact in both a bottom-up and top-down relationship. Listening effort is tied to these interactions, and models have been developed to explain their relationship. The Ease of Language Understanding model in particular has gained considerable attention in its explanation of the effect of signal distortion on speech understanding. Signal distortion can also affect auditory scene analysis ability, however, resulting in a distorted auditory scene that can affect cognitive function, listening effort, and the allocation of cognitive resources. These effects are explained through an addition to the Ease of Language Understanding model. This model can be generalized to apply to all sounds, not only speech, representing the increased effort required for auditory environmental awareness and other nonspeech auditory tasks. While the authors have measures of speech understanding and cognitive load to quantify these interactions, they lack measures of the effect of hearing aid technology on auditory scene analysis ability and of how effort and attention vary with the quality of an auditory scene. Additionally, the clinical relevance of hearing aid technology for cognitive function and the application of cognitive measures in hearing aid fittings will be limited until effectiveness is demonstrated in real-world situations.
Auditory salience using natural soundscapes.
Huang, Nicholas; Elhilali, Mounya
2017-03-01
Salience describes the phenomenon by which an object stands out from a scene. While its underlying processes are extensively studied in vision, mechanisms of auditory salience remain largely unknown. Previous studies have used well-controlled auditory scenes to shed light on some of the acoustic attributes that drive the salience of sound events. Unfortunately, the use of constrained stimuli in addition to a lack of well-established benchmarks of salience judgments hampers the development of comprehensive theories of sensory-driven auditory attention. The present study explores auditory salience in a set of dynamic natural scenes. A behavioral measure of salience is collected by having human volunteers listen to two concurrent scenes and indicate continuously which one attracts their attention. By using natural scenes, the study takes a data-driven rather than experimenter-driven approach to exploring the parameters of auditory salience. The findings indicate that the space of auditory salience is multidimensional (spanning loudness, pitch, spectral shape, as well as other acoustic attributes), nonlinear and highly context-dependent. Importantly, the results indicate that contextual information about the entire scene over both short and long scales needs to be considered in order to properly account for perceptual judgments of salience.
Romero, Ana Carla Leite; Alfaya, Lívia Marangoni; Gonçales, Alina Sanches; Frizzo, Ana Claudia Figueiredo; Isaac, Myriam de Lima
2016-01-01
Introduction The auditory system of HIV-positive children may have deficits at various levels, such as a high incidence of middle-ear problems that can cause hearing loss. Objective The objective of this study is to characterize the development of children infected by the Human Immunodeficiency Virus (HIV) in the Simplified Auditory Processing Test (SAPT) and the Staggered Spondaic Word Test. Methods We performed behavioral tests composed of the Simplified Auditory Processing Test and the Portuguese version of the Staggered Spondaic Word Test (SSW). The participants were 15 children infected by HIV, all using antiretroviral medication. Results The children had abnormal auditory processing as verified by the Simplified Auditory Processing Test and the Portuguese version of the SSW. In the Simplified Auditory Processing Test, 60% of the children presented hearing impairment. In the SAPT, the memory test for verbal sounds showed the most errors (53.33%), whereas in the SSW, 86.67% of the children showed deficiencies indicating deficits in figure-ground, attention, and auditory memory skills. Furthermore, there were more errors under background-noise conditions in both age groups, with most errors in the left ear in the group of 8-year-olds and similar results for the group aged 9 years. Conclusion The high incidence of hearing loss in children with HIV and its comorbidity with several biological and environmental factors indicate the need for: 1) family and professional awareness of the impact of auditory alterations on the development and learning of children with HIV, and 2) access to educational plans and follow-up with multidisciplinary teams as early as possible to minimize the damage caused by auditory deficits. PMID:28050213
Feature-based and object-based attention orientation during short-term memory maintenance.
Ku, Yixuan
2015-12-01
Top-down attention biases the short-term memory (STM) processing at multiple stages. Orienting attention during the maintenance period of STM by a retrospective cue (retro-cue) strengthens the representation of the cued item and improves the subsequent STM performance. In a recent article, Backer et al. (Backer KC, Binns MA, Alain C. J Neurosci 35: 1307-1318, 2015) extended these findings from the visual to the auditory domain and combined electroencephalography to dissociate neural mechanisms underlying feature-based and object-based attention orientation. Both event-related potentials and neural oscillations explained the behavioral benefits of retro-cues and favored the theory that feature-based and object-based attention orientation were independent. Copyright © 2015 the American Physiological Society.
Doneva, Silviya; Davis, Steve; Cavenagh, Penny
2018-08-01
Compelling findings on the relationship between stuttering and attentional ability have started to emerge, with some child and adult studies indicating poorer attentional ability among people who stutter (PWS). The purpose of the present research was to provide a more complete picture of the attentional abilities of PWS, as well as to gather insights into their individual attentional performance. We compared the attentional ability of PWS to that of people who do not stutter (PWNS) by using the Test of Everyday Attention (TEA). The TEA is a clinical assessment battery with very good validity and reliability, comprising eight subtests that pose differential demands on sustained attention, selective attention, attentional switching, and divided attention. Fifty age- and gender-matched PWS and PWNS (aged 19-77 years) took part in the study. Importantly, we also examined stuttering severity in the PWS group. PWS performed significantly worse on tasks tapping into visual selective and divided attentional resources. Furthermore, despite failing to reach statistical significance, the results also revealed an interesting trend for stuttering to be associated with poorer performance on two subtests measuring attentional switching and one tapping into auditory selective attention. Moreover, as hypothesized, there was also a negative association between stuttering severity and performance on two TEA subtests measuring visual selective attention. Finally, the type of TEA test variant produced no significant effect on performance. We interpret these results as indicating that stuttering is associated with poorer performance on tasks measuring certain attentional abilities. These findings tie in well with theoretical models identifying speech production as particularly attention-demanding in stuttering or approaches placing attentional dysfunction at the heart of the condition. The present research also has practical implications for the use of attentional training to improve fluency.
Aberrant interference of auditory negative words on attention in patients with schizophrenia.
Iwashiro, Norichika; Yahata, Noriaki; Kawamuro, Yu; Kasai, Kiyoto; Yamasue, Hidenori
2013-01-01
Previous research suggests that deficits in attention-emotion interaction are implicated in schizophrenia symptoms. Although disruption in auditory processing is crucial in the pathophysiology of schizophrenia, deficits in the interaction between emotional processing of auditorily presented language stimuli and auditory attention have not yet been clarified. To address this issue, the current study used a dichotic listening task to examine 22 patients with schizophrenia and 24 age-, sex-, parental socioeconomic background-, handedness-, dexterous ear-, and intelligence quotient-matched healthy controls. The participants completed a word recognition task on the attended side in which a word with emotionally valenced content (negative/positive/neutral) was presented to one ear and a different neutral word was presented to the other ear. Participants selectively attended to either ear. In the control subjects, presentation of negative but not positive word stimuli provoked a significantly prolonged reaction time compared with presentation of neutral word stimuli. This interference effect for negative words existed whether or not subjects directed attention to the negative words. This interference effect was significantly smaller in the patients with schizophrenia than in the healthy controls. Furthermore, the smaller interference effect was significantly correlated with severe positive symptoms and delusional behavior in the patients with schizophrenia. The present findings suggest that aberrant interaction between semantic processing of negative emotional content and auditory attention plays a role in the production of positive symptoms in schizophrenia.
Ceponiene, R; Westerfield, M; Torki, M; Townsend, J
2008-06-18
Major accounts of aging implicate changes in processing external stimulus information. Little is known about differential effects of auditory and visual sensory aging, and the mechanisms of sensory aging are still poorly understood. Using event-related potentials (ERPs) elicited by unattended stimuli in younger (M=25.5 yrs) and older (M=71.3 yrs) subjects, this study examined mechanisms of sensory aging under minimized attention conditions. Auditory and visual modalities were examined to address modality-specificity vs. generality of sensory aging. Between-modality differences were robust. The earlier-latency responses (P1, N1) were unaffected in the auditory modality but were diminished in the visual modality. The auditory N2 and early visual N2 were diminished. Two similarities between the modalities were age-related enhancements in the late P2 range and a positive correlation between behavior and the early N2, the latter suggesting that the N2 may reflect long-latency inhibition of irrelevant stimuli. Since there is no evidence for salient differences in neurobiological aging between the two sensory regions, the observed between-modality differences are best explained by the differential reliance of auditory and visual systems on attention. Visual sensory processing relies on facilitation by visuo-spatial attention, withdrawal of which appears to be more disadvantageous in older populations. In contrast, auditory processing is equipped with powerful inhibitory capacities. However, when the whole auditory modality is unattended, thalamo-cortical gating deficits may not manifest in the elderly. In contrast, ERP indices of longer-latency, stimulus-level inhibitory modulation appear to diminish with age.
A Dual-Process Account of Auditory Change Detection
ERIC Educational Resources Information Center
McAnally, Ken I.; Martin, Russell L.; Eramudugolla, Ranmalee; Stuart, Geoffrey W.; Irvine, Dexter R. F.; Mattingley, Jason B.
2010-01-01
Listeners can be "deaf" to a substantial change in a scene comprising multiple auditory objects unless their attention has been directed to the changed object. It is unclear whether auditory change detection relies on identification of the objects in pre- and post-change scenes. We compared the rates at which listeners correctly identify changed…
Failure to detect critical auditory alerts in the cockpit: evidence for inattentional deafness.
Dehais, Frédéric; Causse, Mickaël; Vachon, François; Régis, Nicolas; Menant, Eric; Tremblay, Sébastien
2014-06-01
The aim of this study was to test whether inattentional deafness to critical alarms would be observed in a simulated cockpit. The inability of pilots to detect unexpected changes in their auditory environment (e.g., alarms) is a major safety problem in aeronautics. In aviation, the lack of response to alarms is usually not attributed to attentional limitations, but rather to pilots choosing to ignore such warnings due to decision biases, hearing issues, or conscious risk taking. Twenty-eight general aviation pilots performed two landings in a flight simulator. In one scenario an auditory alert was triggered alone, whereas in the other the auditory alert occurred while the pilots dealt with a critical windshear. In the windshear scenario, 11 pilots (39.3%) did not report or react appropriately to the alarm, whereas all the pilots perceived the auditory warning in the no-windshear scenario. Also, of those pilots who were first exposed to the no-windshear scenario and detected the alarm, only three suffered from inattentional deafness in the subsequent windshear scenario. These findings establish inattentional deafness as a cognitive phenomenon that is critical for air safety. Pre-exposure to a critical event triggering an auditory alarm can enhance alarm detection when a similar event is encountered subsequently. Case-based learning is a solution to mitigate auditory alarm misperception.
Techer, Franck; Jallais, Christophe; Corson, Yves; Moreau, Fabien; Ndiaye, Daniel; Piechnick, Bruno; Fort, Alexandra
2017-01-01
Driver internal state, including emotion, can have negative impacts on road safety. Studies have shown that an anger state can provoke aggressive behavior and impair driving performance. Apart from driving, anger can also influence attentional processing and increase the benefit gained from auditory alerts. However, to our knowledge, no prior event-related potential study has assessed this impact on attention during simulated driving. Therefore, the aim of this study was to investigate the impact of anger on attentional processing and its consequences on driving performance. For this purpose, 33 participants completed a simulated driving scenario once in an anger state and once during a control session. Results indicated that anger impacted driving performance and attention, provoking an increase in lateral variations while reducing the amplitude of the visual N1 peak. The observed effects were discussed as a result of high arousal and mind-wandering associated with anger. This kind of physiological data may be used to monitor a driver's internal state and provide specific assistance corresponding to their current needs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Petranovich, Christine L; Walz, Nicolay Chertkoff; Staat, Mary Allen; Chiu, Chung-Yiu Peter; Wade, Shari L
2015-01-01
The aim of this study was to investigate the association of neurocognitive functioning with internalizing and externalizing problems and school and social competence in children adopted internationally. Participants included girls between the ages of 6-12 years who were internationally adopted from China (n = 32) or Eastern Europe (n = 25) and a control group of never-adopted girls (n = 25). Children completed the Vocabulary and Matrix Reasoning subtests from the Wechsler Abbreviated Scale of Intelligence and the Score! and Sky Search subtests from the Test of Everyday Attention for Children. Parents completed the Child Behavior Checklist and the Home and Community Social Behavior Scales. Compared to the controls, the Eastern European group evidenced significantly more problems with externalizing behaviors and school and social competence and poorer performance on measures of verbal intelligence, perceptual reasoning, and auditory attention. More internalizing problems were reported in the Chinese group compared to the controls. Using generalized linear regression, interaction terms were examined to determine whether the associations of neurocognitive functioning with behavior varied across groups. Eastern European group status was associated with more externalizing problems and poorer school and social competence, irrespective of neurocognitive test performance. In the Chinese group, poorer auditory attention was associated with more problems with social competence. Neurocognitive functioning may be related to behavior in children adopted internationally. Knowledge about neurocognitive functioning may further our understanding of the impact of early institutionalization on post-adoption behavior.
Sonic morphology: Aesthetic dimensional auditory spatial awareness
NASA Astrophysics Data System (ADS)
Whitehouse, Martha M.
The sound and ceramic sculpture installation, " Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrete, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.
Subcortical processing of speech regularities underlies reading and music aptitude in children.
Strait, Dana L; Hornickel, Jane; Kraus, Nina
2011-10-17
Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to regularities in auditory input. Definition of common biological underpinnings for music and reading supports the usefulness of music for promoting child literacy, with the potential to improve reading remediation.
Anderson, Nathaniel E; Maurer, J Michael; Steele, Vaughn R; Kiehl, Kent A
2018-06-01
Psychopathy is a personality disorder accompanied by abnormalities in emotional processing and attention. Recent theoretical applications of network-based models of cognition have been used to explain the diverse range of abnormalities apparent in psychopathy. Still, the physiological basis for these abnormalities is not well understood. A significant body of work has examined psychopathy-related abnormalities in simple attention-based tasks, but these studies have largely been performed using electrocortical measures, such as event-related potentials (ERPs), and they often have been carried out among individuals with low levels of psychopathic traits. In this study, we examined neural activity with functional magnetic resonance imaging (fMRI) during a simple auditory target detection (oddball) task among 168 incarcerated adult males, with psychopathic traits assessed via the Hare Psychopathy Checklist-Revised (PCL-R). Event-related contrasts demonstrated that the largest psychopathy-related effects were apparent between the frequent standard stimulus condition and a task-off, implicit baseline. Negative correlations with interpersonal-affective dimensions (Factor 1) of the PCL-R were apparent in regions comprising default mode and salience networks. These findings support models of psychopathy describing impaired integration across functional networks. They additionally corroborate reports which have implicated failures of efficient transition between default mode and task-positive networks. Finally, they demonstrate a neurophysiological basis for abnormal mobilization of attention and reduced engagement with stimuli that have little motivational significance among those with high psychopathic traits.
Cross-modal links among vision, audition, and touch in complex environments.
Ferris, Thomas K; Sarter, Nadine B
2008-02-01
This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.
Lu, Xi; Siu, Ka-Chun; Fu, Siu N; Hui-Chan, Christina W Y; Tsang, William W N
2013-08-01
To compare the performance of older experienced Tai Chi practitioners and healthy controls in dual-task versus single-task paradigms, namely stepping down with and without performing an auditory response task, a cross-sectional study was conducted in the Center for East-meets-West in Rehabilitation Sciences at The Hong Kong Polytechnic University, Hong Kong. Twenty-eight Tai Chi practitioners (73.6 ± 4.2 years) and 30 healthy control subjects (72.4 ± 6.1 years) were recruited. Participants were asked to step down from a 19-cm-high platform and maintain a single-leg stance for 10 s with and without a concurrent cognitive task. The cognitive task was an auditory Stroop test in which the participants were required to respond to different tones of voices regardless of their word meanings. Postural stability after stepping down under single- and dual-task paradigms, in terms of excursion of the subject's center of pressure (COP) and cognitive performance, was measured for comparison between the two groups. Our findings demonstrated significant between-group differences in more outcome measures during dual-task than single-task performance. Thus, the auditory Stroop test showed that Tai Chi practitioners achieved not only a significantly lower error rate in the single-task condition but also significantly faster reaction times in the dual-task condition, when compared with healthy controls similar in age and other relevant demographics. Similarly, the stepping-down task showed that Tai Chi practitioners displayed not only a significantly smaller COP sway area in the single-task condition but also a significantly shorter COP sway path than healthy controls in the dual-task condition. These results showed that Tai Chi practitioners achieved better postural stability after stepping down as well as better performance in the auditory response task than healthy controls. The improvement, which was magnified under dual motor-cognitive task conditions, may point to the benefits of Tai Chi as a mind-and-body exercise.
Metacognition of Multi-Tasking: How Well Do We Predict the Costs of Divided Attention?
Finley, Jason R.; Benjamin, Aaron S.; McCarley, Jason S.
2014-01-01
Risky multi-tasking, such as texting while driving, may occur because people misestimate the costs of divided attention. In two experiments, participants performed a computerized visual-manual tracking task in which they attempted to keep a mouse cursor within a small target that moved erratically around a circular track. They then separately performed an auditory n-back task. After practicing both tasks separately, participants received feedback on their single-task tracking performance and predicted their dual-task tracking performance before finally performing the two tasks simultaneously. Most participants correctly predicted reductions in tracking performance under dual-task conditions, with a majority overestimating the costs of dual-tasking. However, the between-subjects correlation between predicted and actual performance decrements was near zero. This combination of results suggests that people do anticipate costs of multi-tasking, but have little metacognitive insight on the extent to which they are personally vulnerable to the risks of divided attention, relative to other people. PMID:24490818
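As an illustration of the between-subjects analysis described above, the following minimal Python sketch computes predicted and actual dual-task tracking decrements and their correlation across participants. All values are simulated and hypothetical; this is not the authors' dataset or analysis code.

# Sketch: between-subjects correlation of predicted vs. actual dual-task costs.
# Hypothetical data; not the authors' dataset or analysis code.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 60  # hypothetical number of participants

single = rng.uniform(0.6, 0.9, n)                    # single-task tracking accuracy
predicted_dual = single - rng.uniform(0.1, 0.4, n)   # participants' predictions
actual_dual = single - rng.uniform(0.05, 0.25, n)    # observed dual-task accuracy

predicted_cost = single - predicted_dual             # predicted decrement
actual_cost = single - actual_dual                   # actual decrement

r, p = pearsonr(predicted_cost, actual_cost)
print(f"Mean predicted cost: {predicted_cost.mean():.3f}")
print(f"Mean actual cost:    {actual_cost.mean():.3f}")
print(f"Between-subjects r = {r:.2f} (p = {p:.3f})")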
Li, Yuanqing; Long, Jinyi; Huang, Biao; Yu, Tianyou; Wu, Wei; Liu, Yongjian; Liang, Changhong; Sun, Pei
2015-02-01
Previous studies have shown that audiovisual integration improves identification performance and enhances neural activity in heteromodal brain areas, for example, the posterior superior temporal sulcus/middle temporal gyrus (pSTS/MTG). Furthermore, it has also been demonstrated that attention plays an important role in crossmodal integration. In this study, we considered crossmodal integration in audiovisual facial perception and explored its effect on the neural representation of features. The audiovisual stimuli in the experiment consisted of facial movie clips that could be classified into 2 gender categories (male vs. female) or 2 emotion categories (crying vs. laughing). The visual/auditory-only stimuli were created from these movie clips by removing the auditory/visual contents. The subjects needed to make a judgment about the gender/emotion category for each movie clip in the audiovisual, visual-only, or auditory-only stimulus condition as functional magnetic resonance imaging (fMRI) signals were recorded. The neural representation of the gender/emotion feature was assessed using the decoding accuracy and the brain pattern-related reproducibility indices, obtained by a multivariate pattern analysis method from the fMRI data. In comparison to the visual-only and auditory-only stimulus conditions, we found that audiovisual integration enhanced the neural representation of task-relevant features and that feature-selective attention might play a role of modulation in the audiovisual integration. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
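For readers unfamiliar with multivariate pattern analysis, the sketch below shows the general form of a cross-validated decoding-accuracy estimate for a binary category (e.g., male vs. female) from voxel response patterns, using scikit-learn on simulated data. It illustrates the technique named above only; it is not the authors' pipeline, which additionally used brain pattern-related reproducibility indices.

# Sketch: cross-validated decoding accuracy for a binary category from
# multi-voxel fMRI patterns. Simulated data; not the study's pipeline.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(1)
n_trials, n_voxels = 120, 500
X = rng.standard_normal((n_trials, n_voxels))   # trial-by-voxel response patterns
y = rng.integers(0, 2, n_trials)                # 0 = male, 1 = female (hypothetical labels)

clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"Decoding accuracy: {scores.mean():.2f} ± {scores.std():.2f}")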
P300 Event-Related Potential as an Indicator of Inattentional Deafness?
Giraudet, Louise; St-Louis, Marie-Eve; Scannella, Sébastien; Causse, Mickaël
2015-01-01
An analysis of airplane accidents reveals that pilots sometimes simply fail to react to critical auditory alerts. This inability of an auditory stimulus to reach consciousness has been termed inattentional deafness. Recent data from the literature suggest that tasks involving high cognitive load consume most of the attentional capacities, leaving little or no capacity for processing any unexpected information. In addition, there is a growing body of evidence for a shared attentional capacity between vision and hearing. In this context, the abundant information in modern cockpits is likely to produce inattentional deafness. We investigated this hypothesis by combining electroencephalographic (EEG) measurements with an ecological aviation task performed under contextual variation of the cognitive load (high or low), including an alarm detection task. Two different audio tones were played: standard tones and deviant tones. Participants were instructed to ignore standard tones and to report deviant tones using a response pad. More than 31% of the deviant tones were not detected in the high load condition. Analysis of the EEG measurements showed a drastic diminution of the auditory P300 amplitude concomitant with this behavioral effect, whereas the N100 component was not affected. We suggest that these behavioral and electrophysiological results provide new insights into why pilots sometimes fail to react to critical auditory information. Relevant applications concern the prevention of missed alarms, mental workload measurements and enhanced warning designs. PMID:25714746
Marsh, John E; Yang, Jingqi; Qualter, Pamela; Richardson, Cassandra; Perham, Nick; Vachon, François; Hughes, Robert W
2018-06-01
Task-irrelevant speech impairs short-term serial recall appreciably. On the interference-by-process account, the processing of physical (i.e., precategorical) changes in speech yields order cues that conflict with the serial-ordering process deployed to perform the serial recall task. In this view, the postcategorical properties (e.g., phonology, meaning) of speech play no role. The present study reassessed the implications of recent demonstrations of auditory postcategorical distraction in serial recall that have been taken as support for an alternative, attentional-diversion, account of the irrelevant speech effect. Focusing on the disruptive effect of emotionally valent compared with neutral words on serial recall, we show that the distracter-valence effect is eliminated under conditions (high task-encoding load) thought to shield against attentional diversion, whereas the general effect of speech (neutral words compared with quiet) remains unaffected (Experiment 1). Furthermore, the distracter-valence effect generalizes to a task that does not require the processing of serial order (the missing-item task), whereas the effect of speech per se is attenuated in this task (Experiment 2). We conclude that postcategorical auditory distraction phenomena in serial short-term memory (STM) are incidental: they are observable in such a setting but, unlike the acoustically driven irrelevant speech effect, are not integral to it. As such, the findings support a duplex-mechanism account over a unitary view of auditory distraction. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Neurophysiological Effects of Meditation Based on Evoked and Event Related Potential Recordings
Singh, Nilkamal; Telles, Shirley
2015-01-01
Evoked potentials (EPs) are a relatively noninvasive method to assess the integrity of sensory pathways. As the neural generators for most of the components are relatively well worked out, EPs have been used to understand the changes occurring during meditation. Event-related potentials (ERPs) yield useful information about the response to tasks, usually assessing attention. A brief review of the literature yielded eleven studies on EPs and seventeen on ERPs from 1978 to 2014. The EP studies covered short, mid, and long latency EPs, using both auditory and visual modalities. ERP studies reported the effects of meditation on tasks such as the auditory oddball paradigm, the attentional blink task, mismatch negativity, and affective picture viewing among others. Both EPs and ERPs were recorded in several meditation practices detailed in the review. Maximum changes occurred in mid-latency (auditory) EPs, suggesting that maximum changes occur in the corresponding neural generators in the thalamus, thalamic radiations, and primary auditory cortical areas. ERP studies showed meditation can increase attention and enhance efficiency of brain resource allocation with greater emotional control. PMID:26137479
Selective entrainment of brain oscillations drives auditory perceptual organization.
Costa-Faidella, Jordi; Sussman, Elyse S; Escera, Carles
2017-10-01
Perceptual sound organization supports our ability to make sense of the complex acoustic environment, to understand speech and to enjoy music. However, the neuronal mechanisms underlying the subjective experience of perceiving univocal auditory patterns that can be listened to, despite hearing all sounds in a scene, are poorly understood. Here, we investigated the manner in which competing sound organizations are simultaneously represented by specific brain activity patterns and the way attention and task demands prime the internal model generating the current percept. Using a selective attention task on ambiguous auditory stimulation coupled with EEG recordings, we found that the phase of low-frequency oscillatory activity dynamically tracks multiple sound organizations concurrently. However, whereas the representation of ignored sound patterns is circumscribed to auditory regions, large-scale oscillatory entrainment in auditory, sensory-motor and executive-control network areas reflects the active perceptual organization, thereby giving rise to the subjective experience of a unitary percept. Copyright © 2017 Elsevier Inc. All rights reserved.
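One common way to quantify how the phase of low-frequency activity tracks a sound sequence is inter-trial phase coherence (ITC). The Python sketch below computes ITC from simulated epochs via band-pass filtering and the Hilbert transform; the frequency band, filter settings and data are illustrative assumptions, not the analysis used in the study above.

# Sketch: inter-trial phase coherence (ITC) of low-frequency EEG activity, one
# common way to quantify entrainment. Simulated data; not the cited pipeline.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 250.0                       # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)    # 2-s epochs
n_trials = 40
rng = np.random.default_rng(2)

# Simulated trials: a 2-Hz component with consistent phase plus noise.
trials = np.sin(2 * np.pi * 2.0 * t) + rng.standard_normal((n_trials, t.size))

# Band-pass around the assumed presentation rate (here 1-3 Hz).
sos = butter(4, [1.0, 3.0], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, trials, axis=1)

# Instantaneous phase per trial, then ITC = magnitude of the mean unit phasor.
phase = np.angle(hilbert(filtered, axis=1))
itc = np.abs(np.mean(np.exp(1j * phase), axis=0))   # one value per time point
print(f"Mean ITC across the epoch: {itc.mean():.2f}")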
Visual attention modulates brain activation to angry voices.
Mothes-Lasch, Martin; Mentzel, Hans-Joachim; Miltner, Wolfgang H R; Straube, Thomas
2011-06-29
In accordance with influential models proposing prioritized processing of threat, previous studies have shown automatic brain responses to angry prosody in the amygdala and the auditory cortex under auditory distraction conditions. However, it is unknown whether the automatic processing of angry prosody is also observed during cross-modal distraction. The current fMRI study investigated brain responses to angry versus neutral prosodic stimuli during visual distraction. During scanning, participants were exposed to angry or neutral prosodic stimuli while visual symbols were displayed simultaneously. By means of task requirements, participants either attended to the voices or to the visual stimuli. While the auditory task revealed pronounced activation in the auditory cortex and amygdala to angry versus neutral prosody, this effect was absent during the visual task. Thus, our results show a limitation of the automaticity of the activation of the amygdala and auditory cortex to angry prosody. The activation of these areas to threat-related voices depends on modality-specific attention.
Tavakoli, Paniz; Campbell, Kenneth
2016-10-01
A rare, highly relevant auditory stimulus occurring outside the current focus of attention can cause attention to switch. Such attention capture is often studied in oddball paradigms consisting of a frequently occurring "standard" stimulus which is changed at odd times to form a "deviant". The deviant may capture attention. An auditory ERP, the P3a, is often associated with this process. Collecting a sufficient amount of data is, however, very time-consuming. A multi-feature "optimal" paradigm has been proposed, but it is not known whether it is appropriate for the study of attention capture. An optimal paradigm was run in which 6 different rare deviants (p=.08) were separated by a standard stimulus (p=.50), and the results were compared with those from 4 separate oddball paradigms. A large P3a was elicited by some of the deviants in the optimal paradigm but not by others. However, very similar results were observed when separate oddball paradigms were run. The present study indicates that the optimal paradigm provides a very time-saving method to study attention capture and the P3a. Copyright © 2016 Elsevier B.V. All rights reserved.
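A minimal sketch of how a stimulus sequence for such a multi-feature paradigm could be generated, with every other stimulus being the standard (p = .50) and the intervening stimuli drawn from six deviant types (p ≈ .08 each). The randomisation rule and labels are illustrative assumptions, not the exact procedure of the study above.

# Sketch: sequence generation for a multi-feature ("optimal") paradigm in which
# standards alternate with one of 6 deviant types. Illustrative only.
import random

DEVIANT_TYPES = ["dev1", "dev2", "dev3", "dev4", "dev5", "dev6"]  # placeholder labels

def optimal_sequence(n_stimuli, seed=0):
    rng = random.Random(seed)
    seq = []
    for i in range(n_stimuli):
        if i % 2 == 0:
            seq.append("standard")        # every other stimulus is the standard
        else:
            seq.append(rng.choice(DEVIANT_TYPES))  # one of 6 deviants in between
    return seq

seq = optimal_sequence(20)
print(seq)
print("standard proportion:", seq.count("standard") / len(seq))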
Music-induced positive mood broadens the scope of auditory attention
Makkonen, Tommi; Eerola, Tuomas
2017-01-01
Previous studies indicate that positive mood broadens the scope of visual attention, which can manifest as heightened distractibility. We used event-related potentials (ERP) to investigate whether music-induced positive mood has comparable effects on selective attention in the auditory domain. Subjects listened to experimenter-selected happy, neutral or sad instrumental music and afterwards participated in a dichotic listening task. Distractor sounds in the unattended channel elicited responses related to early sound encoding (N1/MMN) and bottom-up attention capture (P3a) while target sounds in the attended channel elicited a response related to top-down-controlled processing of task-relevant stimuli (P3b). For the subjects in a happy mood, the N1/MMN responses to the distractor sounds were enlarged while the P3b elicited by the target sounds was diminished. Behaviorally, these subjects tended to show heightened error rates on target trials following the distractor sounds. Thus, the ERP and behavioral results indicate that the subjects in a happy mood allocated their attentional resources more diffusely across the attended and the to-be-ignored channels. Therefore, the current study extends previous research on the effects of mood on visual attention and indicates that even unfamiliar instrumental music can broaden the scope of auditory attention via its effects on mood. PMID:28460035
Effect of voluntary attention on auditory processing during REM sleep.
Takahara, Madoka; Nittono, Hiroshi; Hori, Tadao
2006-07-01
The study investigates whether there is an effect of voluntary attention to external auditory stimuli during rapid eye movement (REM) sleep in humans by measuring event-related potentials (ERPs). Using a 2-tone auditory-discrimination task, a standard 1000-Hz tone and a deviant 2000-Hz tone were presented to participants when awake and during sleep. In the ATTENTIVE condition, participants were requested to detect the deviant stimuli during their sleep whenever possible. In the PASSIVE sleep condition, participants were only exposed to the tones. ERPs were measured during REM sleep and compared between the 2 conditions. All experiments were conducted at the sleep laboratory of Hiroshima University with 20 healthy university student volunteers. In the tonic period of REM sleep (the period without rapid eye movements), P200 and P400 were elicited by deviant stimuli, with scalp distributions maximal at central and occipital sites, respectively. The P400 in REM sleep showed larger amplitudes in the ATTENTIVE condition, whereas the P200 amplitude did not differ between the 2 conditions. No effects on ERPs due to attention were observed during stage 2 sleep. The instruction to pay attention to external stimuli during REM sleep influenced the late positive potentials. Thus electrophysiologic evidence of voluntary attention during REM sleep has been demonstrated.
The role of the primary auditory cortex in the neural mechanism of auditory verbal hallucinations
Kompus, Kristiina; Falkenberg, Liv E.; Bless, Josef J.; Johnsen, Erik; Kroken, Rune A.; Kråkvik, Bodil; Larøi, Frank; Løberg, Else-Marie; Vedul-Kjelsås, Einar; Westerhausen, René; Hugdahl, Kenneth
2013-01-01
Auditory verbal hallucinations (AVHs) are a subjective experience of “hearing voices” in the absence of corresponding physical stimulation in the environment. The most remarkable feature of AVHs is their perceptual quality, that is, the experience is subjectively often as vivid as hearing an actual voice, as opposed to mental imagery or auditory memories. This has led to propositions that dysregulation of the primary auditory cortex (PAC) is a crucial component of the neural mechanism of AVHs. One possible mechanism by which the PAC could give rise to the experience of hallucinations is aberrant patterns of neuronal activity whereby the PAC is overly sensitive to activation arising from internal processing, while being less responsive to external stimulation. In this paper, we review recent research relevant to the role of the PAC in the generation of AVHs. We present new data from a functional magnetic resonance imaging (fMRI) study, examining the responsivity of the left and right PAC to parametric modulation of the intensity of auditory verbal stimulation, and corresponding attentional top-down control in non-clinical participants with AVHs, and non-clinical participants with no AVHs. Non-clinical hallucinators showed reduced activation to speech sounds but intact attentional modulation in the right PAC. Additionally, we present data from a group of schizophrenia patients with AVHs, who do not show attentional modulation of left or right PAC. The context-appropriate modulation of the PAC may be a protective factor in non-clinical hallucinations. PMID:23630479
Olivetti Belardinelli, Marta; Santangelo, Valerio
2005-07-08
This paper examines the characteristics of spatial attention orienting in situations of visual impairment. Two groups of subjects, respectively schizophrenic and blind, with different degrees of visual spatial information impairment, were tested. In Experiment 1, the schizophrenic subjects were instructed to detect an auditory target, which was preceded by a visual cue. The cue could appear in the same location as the target or in a location separated from it by the vertical visual meridian (VM), the vertical head-centered meridian (HCM), or another meridian. Similarly to normal subjects tested with the same paradigm (Ferlazzo, Couyoumdjian, Padovani, and Olivetti Belardinelli, 2002), schizophrenic subjects showed slower reaction times (RTs) when the cue and target locations were on opposite sides of the HCM. This HCM effect strengthens the assumption that different auditory and visual spatial maps underlie the representation of attention orienting mechanisms. In Experiment 2, blind subjects were asked to detect an auditory target, which had been preceded by an auditory cue, while staring at an imaginary point. The point was located either to the left or to the right, in order to control for ocular movements and maintain the dissociation between the HCM and the VM. Differences between crossing and no-crossing conditions of the HCM were not found. Therefore it is possible to consider the HCM effect as a consequence of the interaction between visual and auditory modalities. Related theoretical issues are also discussed.
Neural correlates of auditory scene analysis and perception
Cohen, Yale E.
2014-01-01
The auditory system is designed to transform acoustic information from low-level sensory representations into perceptual representations. These perceptual representations are the computational result of the auditory system's ability to group and segregate spectral, spatial and temporal regularities in the acoustic environment into stable perceptual units (i.e., sounds or auditory objects). Current evidence suggests that the cortex (specifically, the ventral auditory pathway) is responsible for the computations most closely related to perceptual representations. Here, we discuss how the transformations along the ventral auditory pathway relate to auditory percepts, with special attention paid to the processing of vocalizations and categorization, and explore recent models of how these areas may carry out these computations. PMID:24681354
Beeghly, Marjorie; Rose-Jacobs, Ruth; Martin, Brett M; Cabral, Howard J; Heeren, Timothy C; Frank, Deborah A
2014-01-01
Neuropsychological processes such as attention and memory contribute to children's higher-level cognitive and language functioning and predict academic achievement. The goal of this analysis was to evaluate whether level of intrauterine cocaine exposure (IUCE) alters multiple aspects of preadolescents' neuropsychological functioning assessed using a single age-referenced instrument, the NEPSY: A Developmental Neuropsychological Assessment (NEPSY) (Korkman et al., 1998), after controlling for relevant covariates. Participants included 137 term 9.5-year-old children from low-income urban backgrounds (51% male, 90% African American/Caribbean) from an ongoing prospective longitudinal study. Level of IUCE was assessed in the newborn period using infant meconium and maternal report. 52% of the children had IUCE (65% with lighter IUCE, and 35% with heavier IUCE), and 48% were unexposed. Infants with Fetal Alcohol Syndrome, HIV seropositivity, or intrauterine exposure to illicit substances other than cocaine and marijuana were excluded. At the 9.5-year follow-up visit, trained examiners masked to IUCE and background variables evaluated children's neuropsychological functioning using the NEPSY. The association between level of IUCE and NEPSY outcomes was evaluated in a series of linear regressions controlling for intrauterine exposure to other substances and relevant child, caregiver, and demographic variables. Results indicated that level of IUCE was associated with lower scores on the Auditory Attention and Narrative Memory tasks, both of which require auditory information processing and sustained attention for successful performance. However, results did not follow the expected ordinal, dose-dependent pattern. Children's neuropsychological test scores were also altered by a variety of other biological and psychosocial factors. Copyright © 2014 Elsevier Inc. All rights reserved.
Using neuropsychological profiles to classify neglected children with or without physical abuse.
Nolin, Pierre; Ethier, Louise
2007-06-01
The aim of this study is twofold: First, to investigate whether cognitive functions can contribute to differentiating neglected children with or without physical abuse compared to comparison participants; second, to demonstrate the detrimental impact of children being victimized by a combination of different types of maltreatment. Seventy-nine children aged 6-12 years and currently receiving Child Protection Services because of one of two types of maltreatment (neglect with physical abuse, n=56; neglect without physical abuse, n=28) were compared with a control group of 53 children matched for age, gender, and annual family income. The neuropsychological assessment focused on motor performance, attention, memory and learning, visual-motor integration, language, frontal/executive functions, and intelligence. Discriminant analysis identified auditory attention and response set, and visual-motor integration (Function 1), and problem solving, abstraction, and planning (Function 2) as the two sets of variables that most distinguished the groups. Discriminant analysis predicted group membership in 80% of the cases. Children who were neglected with physical abuse showed cognitive deficits in auditory attention and response set, and visual-motor integration (Function 1) and problem solving, abstraction, and planning (Function 2). Children who were neglected without physical abuse differed from the control group in that they obtained lower scores in auditory attention and response set, and visual-motor integration (Function 1). Surprisingly, these same children demonstrated a greater capacity for problem solving, abstraction, and planning (Function 2) than the physically abused neglected and control children. The present study underscores the relevance of neuropsychology to maltreatment research. The results support the heterogeneity of cognitive deficits in children based on different types of maltreatment and the fact that neglect with physical abuse is more harmful than neglect alone.
Effects of Training Auditory Sequential Memory and Attention on Reading.
ERIC Educational Resources Information Center
Klein, Pnina S.; Schwartz, Allen A.
To determine if auditory sequential memory (ASM) in young children can be improved through training and to discover the effects of such training on the reading scores of children with reading problems, a study was conducted involving 92 second and third graders. For purposes of this study, auditory sequential memory was defined as the ability to…
Correa-Jaraba, Kenia S.; Cid-Fernández, Susana; Lindín, Mónica; Díaz, Fernando
2016-01-01
The main aim of this study was to examine the effects of aging on event-related brain potentials (ERPs) associated with the automatic detection of unattended infrequent deviant and novel auditory stimuli (Mismatch Negativity, MMN) and with the orienting to these stimuli (P3a component), as well as the effects on ERPs associated with reorienting to relevant visual stimuli (Reorienting Negativity, RON). Participants were divided into three age groups: (1) Young: 21–29 years old; (2) Middle-aged: 51–64 years old; and (3) Old: 65–84 years old. They performed an auditory-visual distraction-attention task in which they were asked to attend to visual stimuli (Go, NoGo) and to ignore auditory stimuli (S: standard, D: deviant, N: novel). Reaction times (RTs) to Go visual stimuli were longer in old and middle-aged than in young participants. In addition, in all three age groups, longer RTs were found when Go visual stimuli were preceded by novel relative to deviant and standard auditory stimuli, indicating a distraction effect provoked by novel stimuli. ERP components were identified in the Novel minus Standard (N-S) and Deviant minus Standard (D-S) difference waveforms. In the N-S condition, MMN latency was significantly longer in middle-aged and old participants than in young participants, indicating a slowing of automatic detection of changes. The following results were observed in both difference waveforms: (1) the P3a component comprised two consecutive phases in all three age groups—an early-P3a (e-P3a) that may reflect the orienting response toward the irrelevant stimulation and a late-P3a (l-P3a) that may be a correlate of subsequent evaluation of the infrequent unexpected novel or deviant stimuli; (2) the e-P3a, l-P3a, and RON latencies were significantly longer in the Middle-aged and Old groups than in the Young group, indicating delay in the orienting response to and the subsequent evaluation of unattended auditory stimuli, and in the reorienting of attention to relevant (Go) visual stimuli, respectively; and (3) a significantly smaller e-P3a amplitude in Middle-aged and Old groups, indicating a deficit in the orienting response to irrelevant novel and deviant auditory stimuli. PMID:27065004
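The difference-waveform logic described above (Novel minus Standard and Deviant minus Standard, followed by peak picking in a latency window) can be written compactly. The sketch below uses simulated condition averages and an assumed 100-250 ms MMN search window, so the numbers are illustrative only and do not come from the study above.

# Sketch: D-S and N-S difference waveforms and MMN peak latency on simulated ERPs.
import numpy as np

fs = 500.0                               # sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)         # epoch: -100 to 500 ms

def gaussian_peak(center_s, width_s, amp):
    # Simple Gaussian bump used to simulate an ERP component.
    return amp * np.exp(-((t - center_s) ** 2) / (2 * width_s ** 2))

# Simulated condition averages (microvolts) at one electrode.
erp_standard = gaussian_peak(0.10, 0.02, 2.0)
erp_deviant = erp_standard + gaussian_peak(0.15, 0.03, -3.0)   # MMN-like negativity
erp_novel = erp_standard + gaussian_peak(0.16, 0.03, -3.5)

d_minus_s = erp_deviant - erp_standard
n_minus_s = erp_novel - erp_standard

# MMN peak latency: most negative point within an assumed 100-250 ms window.
window = (t >= 0.10) & (t <= 0.25)
for name, wave in [("D-S", d_minus_s), ("N-S", n_minus_s)]:
    idx = np.argmin(wave[window])
    latency_ms = t[window][idx] * 1000
    print(f"{name}: MMN peak at {latency_ms:.0f} ms, amplitude {wave[window][idx]:.2f} µV")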
Data of ERPs and spectral alpha power when attention is engaged on visual or verbal/auditory imagery
Villena-González, Mario; López, Vladimir; Rodríguez, Eugenio
2016-01-01
This article provides data from statistical analysis of event-related brain potentials (ERPs) and spectral power from 20 participants during three attentional conditions. Specifically, the P1, N1 and P300 ERP amplitudes were compared when participants' attention was oriented to an external task, to visual imagery, and to inner speech. Spectral power in the alpha band was also compared across these three attentional conditions. These data are related to the research article entitled “Orienting attention to visual or verbal/auditory imagery differentially impairs the processing of visual stimuli” (Villena-Gonzalez et al., 2016) [1], in which sensory processing of external information was compared during these three conditions. PMID:27077090
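For readers who want a concrete sense of the alpha-power measure mentioned above, the sketch below estimates 8-12 Hz band power from a single simulated EEG channel with Welch's method. The signal, sampling rate and window length are assumptions for illustration; this is not the published analysis code.

# Sketch: alpha-band (8-12 Hz) power from one simulated EEG channel.
import numpy as np
from scipy.signal import welch

fs = 250.0
rng = np.random.default_rng(3)
t = np.arange(0, 10.0, 1 / fs)
eeg = np.sin(2 * np.pi * 10.0 * t) + rng.standard_normal(t.size)  # 10-Hz rhythm + noise

freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))   # 2-s windows
alpha = (freqs >= 8) & (freqs <= 12)
alpha_power = psd[alpha].mean()                       # mean PSD over the alpha band
print(f"Mean alpha-band power: {alpha_power:.3f} (arbitrary units)")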
The effect of sleep deprivation on BOLD activity elicited by a divided attention task.
Jackson, Melinda L; Hughes, Matthew E; Croft, Rodney J; Howard, Mark E; Crewther, David; Kennedy, Gerard A; Owens, Katherine; Pierce, Rob J; O'Donoghue, Fergal J; Johnston, Patrick
2011-06-01
Sleep loss, widespread in today's society and associated with a number of clinical conditions, has a detrimental effect on a variety of cognitive domains including attention. This study examined the sequelae of sleep deprivation upon BOLD fMRI activation during divided attention. Twelve healthy males completed two randomized sessions; one after 27 h of sleep deprivation and one after a normal night of sleep. During each session, BOLD fMRI was measured while subjects completed a cross-modal divided attention task (visual and auditory). After normal sleep, increased BOLD activation was observed bilaterally in the superior frontal gyrus and the inferior parietal lobe during divided attention performance. Subjects reported feeling significantly more sleepy in the sleep deprivation session, and there was a trend towards poorer divided attention task performance. Sleep deprivation led to a down regulation of activation in the left superior frontal gyrus, possibly reflecting an attenuation of top-down control mechanisms on the attentional system. These findings have implications for understanding the neural correlates of divided attention and the neurofunctional changes that occur in individuals who are sleep deprived.
Cognitive mechanisms associated with auditory sensory gating
Jones, L.A.; Hills, P.J.; Dick, K.M.; Jones, S.P.; Bright, P.
2016-01-01
Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification. PMID:26716891
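Sensory gating in paired-stimulus paradigms is conventionally summarised as the ratio of the P50 amplitude to the second click over the first (S2/S1), with smaller ratios indicating stronger gating. The sketch below computes that index and correlates it with a cognitive score on simulated data; the specific metric and all values are assumptions, not taken from the study above.

# Sketch: a conventional P50 gating index (S2/S1 ratio) and its correlation
# with a cognitive-task score. Hypothetical data only.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n = 60                                        # hypothetical participants
p50_s1 = rng.uniform(2.0, 6.0, n)             # P50 amplitude to the first click (µV)
p50_s2 = p50_s1 * rng.uniform(0.2, 1.0, n)    # attenuated response to the second click

gating_ratio = p50_s2 / p50_s1                # S2/S1; lower values = stronger gating
latent_inhibition = rng.standard_normal(n)    # hypothetical cognitive-task score

r, p = pearsonr(gating_ratio, latent_inhibition)
print(f"Mean gating ratio: {gating_ratio.mean():.2f}")
print(f"Correlation with cognitive score: r = {r:.2f}, p = {p:.3f}")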
Effect of dual task activity on reaction time in males and females.
Kaur, Manjinder; Nagpal, Sangeeta; Singh, Harpreet; Suhalka, M L
2014-01-01
The present study was designed to compare the auditory and visual reaction times on an Audiovisual Reaction Time Machine with the concomitant use of mobile phones in 52 women and 30 men in the age group of 18-40 years. Males showed significantly (p < 0.05) shorter reaction times, both auditory and visual, than females during both single-task and multi-task performance. However, the percentage increase from the respective baseline auditory reaction times was greater in men than in women during multitasking, in both the hand-held (24.38% and 18.70%, respectively) and hands-free (36.40% and 18.40%, respectively) modes of cell phone use. Visual reaction time increased non-significantly during multitasking in both groups. Nevertheless, multitasking per se had a detrimental effect on reaction times in both groups studied. Hence, it should best be avoided in crucial, high-attention-demanding tasks such as driving.
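The percentage-increase measure referred to above is simply the change from baseline expressed relative to baseline. A minimal Python sketch, with placeholder reaction times rather than the study's data:

# Sketch: percentage increase in reaction time relative to a single-task baseline.
def percent_increase(rt_baseline_ms, rt_dual_ms):
    # 100 * (dual-task RT - baseline RT) / baseline RT
    return 100.0 * (rt_dual_ms - rt_baseline_ms) / rt_baseline_ms

baseline_art = 220.0    # hypothetical baseline auditory reaction time (ms)
dual_task_art = 274.0   # hypothetical auditory reaction time while using a phone (ms)
print(f"Percentage increase in ART: {percent_increase(baseline_art, dual_task_art):.1f}%")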
Talebi, Hossein; Moossavi, Abdollah; Faghihzadeh, Soghrat
2014-01-01
Background: Older adults with cerebrovascular accident (CVA) show evidence of auditory and speech perception problems. In the present study, we examined whether these problems are due to impairments of concurrent auditory segregation, the basic level of auditory scene analysis and auditory organization in scenes with competing sounds. Methods: Concurrent auditory segregation was assessed using the competing sentence test (CST) and the dichotic digits test (DDT) in 30 male older adults (15 normal and 15 with right hemisphere CVA) in the same age range (60-75 years). For the CST, participants were presented with a target message in one ear and a competing message in the other. The task was to listen to the target sentence and repeat it back without attending to the competing sentence. For the DDT, auditory stimuli were monosyllabic digits presented dichotically, and the task was to repeat them. Results: Comparing the mean CST and DDT scores between CVA patients with right hemisphere impairment and normal participants showed a statistically significant difference (p=0.001 for CST and p<0.0001 for DDT). Conclusion: The present study revealed that abnormal CST and DDT scores of participants with right hemisphere CVA could be related to concurrent segregation difficulties. These findings suggest that low-level segregation mechanisms and/or high-level attention mechanisms might contribute to the problems. PMID:25679009
Control of Auditory Attention in Children with Specific Language Impairment
ERIC Educational Resources Information Center
Victorino, Kristen R.; Schwartz, Richard G.
2015-01-01
Purpose: Children with specific language impairment (SLI) appear to demonstrate deficits in attention and its control. Selective attention involves the cognitive control of attention directed toward a relevant stimulus and simultaneous inhibition of attention toward irrelevant stimuli. The current study examined attention control during a…
Effect of perceptual load on semantic access by speech in children.
Jerger, Susan; Damian, Markus F; Mills, Candice; Bartlett, James; Tye-Murray, Nancy; Abdi, Hervé
2013-04-01
To examine whether semantic access by speech requires attention in children. Children (N = 200) named pictures and ignored distractors on a cross-modal (distractors: auditory-no face) or multimodal (distractors: auditory-static face and audiovisual-dynamic face) picture word task. The cross-modal task had a low load, and the multimodal task had a high load (i.e., respectively naming pictures displayed on a blank screen vs. below the talker's face on his T-shirt). Semantic content of distractors was manipulated to be related vs. unrelated to the picture (e.g., picture "dog" with distractors "bear" vs. "cheese"). If irrelevant semantic content manipulation influences naming times on both tasks despite variations in loads, Lavie's (2005) perceptual load model proposes that semantic access is independent of capacity-limited attentional resources; if, however, irrelevant content influences naming only on the cross-modal task (low load), the perceptual load model proposes that semantic access is dependent on attentional resources exhausted by the higher load task. Irrelevant semantic content affected performance for both tasks in 6- to 9-year-olds but only on the cross-modal task in 4- to 5-year-olds. The addition of visual speech did not influence results on the multimodal task. Younger and older children differ in dependence on attentional resources for semantic access by speech.
Frequency-specific attentional modulation in human primary auditory cortex and midbrain.
Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Poser, Benedikt A; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina
2018-07-01
Paying selective attention to an audio frequency selectively enhances activity within primary auditory cortex (PAC) at the tonotopic site (frequency channel) representing that frequency. Animal PAC neurons achieve this 'frequency-specific attentional spotlight' by adapting their frequency tuning, yet comparable evidence in humans is scarce. Moreover, whether the spotlight operates in human midbrain is unknown. To address these issues, we studied the spectral tuning of frequency channels in human PAC and inferior colliculus (IC), using 7-T functional magnetic resonance imaging (FMRI) and frequency mapping, while participants focused on different frequency-specific sounds. We found that shifts in frequency-specific attention alter the response gain, but not tuning profile, of PAC frequency channels. The gain modulation was strongest in low-frequency channels and varied near-monotonically across the tonotopic axis, giving rise to the attentional spotlight. We observed less prominent, non-tonotopic spatial patterns of attentional modulation in IC. These results indicate that the frequency-specific attentional spotlight in human PAC as measured with FMRI arises primarily from tonotopic gain modulation, rather than adapted frequency tuning. Moreover, frequency-specific attentional modulation of afferent sound processing in human IC seems to be considerably weaker, suggesting that the spotlight diminishes toward this lower-order processing stage. Our study sheds light on how the human auditory pathway adapts to the different demands of selective hearing. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
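The distinction tested above, response-gain modulation versus a change in tuning profile, can be illustrated with a toy model of a single frequency channel: a Gaussian tuning curve on a log-frequency axis whose amplitude is scaled by attention while its centre and width stay fixed. All parameter values below are illustrative assumptions, not estimates from the study.

# Sketch: multiplicative gain modulation of a Gaussian frequency-tuning curve.
import numpy as np

def channel_response(freq_hz, best_freq_hz=500.0, tuning_width_oct=0.5, gain=1.0):
    # Gaussian tuning on a log-frequency (octave) axis; 'gain' scales the whole curve.
    octaves = np.log2(freq_hz / best_freq_hz)
    return gain * np.exp(-(octaves ** 2) / (2 * tuning_width_oct ** 2))

freqs = np.array([250.0, 500.0, 1000.0, 2000.0])
unattended = channel_response(freqs, gain=1.0)
attended = channel_response(freqs, gain=1.4)      # gain change only; same centre and width

# A pure gain change leaves the attended/unattended ratio constant across frequencies,
# i.e., the tuning profile is unchanged.
print("Ratio attended/unattended:", attended / unattended)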
Roman, Adrienne S.; Pisoni, David B.; Kronenberger, William G.; Faulkner, Kathleen F.
2016-01-01
Objectives Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral-degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings reported by Eisenberg et al. (2002) who investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention and response set, talker discrimination and verbal and nonverbal short-term working memory. Design Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (PPVT-4 and EVT-2) and measures of auditory attention (NEPSY Auditory Attention (AA) and Response Set (RS) and a talker discrimination task (TD)) and short-term memory (visual digit and symbol spans). Results Consistent with the findings reported in the original Eisenberg et al. (2002) study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the PPVT-4 using language quotients to control for age effects. However, children who scored higher on the EVT-2 recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of auditory attention and short-term memory capacity were significantly correlated with a child’s ability to perceive noise-vocoded isolated words and sentences. Conclusions First, we successfully replicated the major findings from the Eisenberg et al. (2002) study. Because familiarity, phonological distinctiveness and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally-degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children’s ability to recognize words and sentences noise-vocoded to four spectral channels. These findings extend earlier research suggesting that perception of spectrally-degraded speech reflects early peripheral auditory processes as well as additional contributions of executive function, specifically, selective attention and short-term memory processes in spoken word recognition. The present findings suggest that auditory attention and short-term memory support robust spoken word recognition in children with NH even under compromised and challenging listening conditions. 
These results are relevant to research carried out with listeners who have hearing loss, since they are routinely required to encode, process and understand spectrally-degraded acoustic signals. PMID:28045787
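Noise vocoding of the general kind described above can be sketched in a few lines: the signal is split into logarithmically spaced bands, each band's temporal envelope is extracted and used to modulate band-limited noise, and the channels are summed. The band edges, filter orders and envelope cutoff below are common choices, not necessarily those used in the cited studies.

# Sketch: a 4-channel noise vocoder on a dummy signal. Illustrative parameters only.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(speech, fs, n_channels=4, lo=100.0, hi=7000.0, env_cutoff=30.0, seed=0):
    rng = np.random.default_rng(seed)
    edges = np.geomspace(lo, hi, n_channels + 1)     # logarithmically spaced band edges
    sos_env = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    out = np.zeros_like(speech, dtype=float)
    for k in range(n_channels):
        sos = butter(4, [edges[k], edges[k + 1]], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)                        # band-limited speech
        envelope = sosfiltfilt(sos_env, np.abs(band))          # rectify + low-pass = envelope
        noise = sosfiltfilt(sos, rng.standard_normal(speech.size))  # band-limited noise
        out += envelope * noise                                # envelope-modulated carrier
    return out

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
dummy_speech = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
vocoded = noise_vocode(dummy_speech, fs)
print(vocoded.shape)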
Using neuroimaging to understand the cortical mechanisms of auditory selective attention
Lee, Adrian KC; Larson, Eric; Maddox, Ross K; Shinn-Cunningham, Barbara G
2013-01-01
Over the last four decades, a range of different neuroimaging tools have been used to study human auditory attention, spanning from classic event-related potential studies using electroencephalography to modern multimodal imaging approaches (e.g., combining anatomical information based on magnetic resonance imaging with magneto- and electroencephalography). This review begins by exploring the different strengths and limitations inherent to different neuroimaging methods, and then outlines some common behavioral paradigms that have been adopted to study auditory attention. We argue that in order to design a neuroimaging experiment that produces interpretable, unambiguous results, the experimenter must not only have a deep appreciation of the imaging technique employed, but also a sophisticated understanding of perception and behavior. Only with the proper caveats in mind can one begin to infer how the cortex supports a human in solving the “cocktail party” problem. PMID:23850664
2013-01-01
Background There is an accumulating body of evidence indicating that neuronal functional specificity to basic sensory stimulation is mutable and subject to experience. Although fMRI experiments have investigated changes in brain activity after, relative to before, perceptual learning, brain activity during perceptual learning has not been explored. This work investigated brain activity related to auditory frequency discrimination learning using a variational Bayesian approach for source localization, during simultaneous EEG and fMRI recording. We investigated whether the practice effects are determined solely by activity in stimulus-driven mechanisms or whether high-level attentional mechanisms, which are linked to the perceptual task, control the learning process. Results The fMRI analyses revealed significant attention- and learning-related activity in the left and right superior temporal gyrus (STG) as well as the left inferior frontal gyrus (IFG). Current source localization of simultaneously recorded EEG data was estimated using a variational Bayesian method. Analysis of current localized to the left inferior frontal gyrus and the right superior temporal gyrus revealed gamma band activity correlated with behavioral performance. Conclusions Rapid improvement in task performance is accompanied by plastic changes in the sensory cortex as well as superior areas gated by selective attention. Together the fMRI and EEG results suggest that gamma band activity in the right STG and left IFG plays an important role during perceptual learning. PMID:23316957
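As a rough illustration of the analysis strategy described above, relating source-level gamma-band activity to behavioral performance, the following minimal sketch band-passes a placeholder source time course, computes per-block gamma power, and correlates it with discrimination accuracy. The sampling rate, block structure, and data are hypothetical; the variational Bayesian source localization itself is not reproduced here.

```python
# Sketch: correlate gamma-band (30-48 Hz) power of a localized source time
# course with behavioral accuracy across blocks. All data are placeholders.
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import pearsonr

fs = 250                                   # sampling rate in Hz (assumed)
n_blocks, n_samples = 12, fs * 60          # e.g. 12 one-minute blocks (assumed)

rng = np.random.default_rng(0)
source_tc = rng.standard_normal((n_blocks, n_samples))   # source time courses (placeholder)
accuracy = rng.uniform(0.5, 1.0, n_blocks)                # per-block accuracy (placeholder)

# Band-pass filter in the gamma range and take mean power per block
b, a = butter(4, [30, 48], btype="bandpass", fs=fs)
gamma = filtfilt(b, a, source_tc, axis=1)
gamma_power = (gamma ** 2).mean(axis=1)

r, p = pearsonr(gamma_power, accuracy)
print(f"gamma power vs. accuracy: r = {r:.2f}, p = {p:.3f}")
```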
ERIC Educational Resources Information Center
Hughes, Robert W.; Vachon, Francois; Jones, Dylan M.
2007-01-01
The disruption of short-term memory by to-be-ignored auditory sequences (the changing-state effect) has often been characterized as attentional capture by deviant events (deviation effect). However, the present study demonstrates that changing-state and deviation effects are functionally distinct forms of auditory distraction: The disruption of…
NASA Technical Reports Server (NTRS)
Wortz, E. C.; Saur, A. J.; Nowlis, D. P.; Kendall, M. P.
1974-01-01
Results are presented of an initial experiment in a research program designed to develop objective techniques for psychological assessment of individuals and groups participating in long-duration space flights. Specifically examined is the rationale for utilizing measures of attention as an objective assessment technique. Subjects participating in the experiment performed various tasks (e.g., playing matrix games which appeared on a display screen along with auditory stimuli). The psychophysiological reactions of the subjects were measured and are reported. Previous research on various performance and psychophysiological methods of measuring attention is also discussed. The experimental design (independent and dependent variables) and apparatus (computers and display devices) are described and illustrated. Conclusions and recommendations are presented.
Bertels, Julie; Kolinsky, Régine; Pietrons, Elise; Morais, José
2011-02-01
Using an auditory adaptation of the emotional and taboo Stroop tasks, the authors compared the effects of negative and taboo spoken words in mixed and blocked designs. Both types of words elicited carryover effects with mixed presentations and interference with blocked presentations, suggesting similar long-lasting attentional effects. Both were also relatively resilient to the long-lasting influence of the preceding emotional word. Hence, contrary to what has been assumed (Schmidt & Saari, 2007), negative and taboo words do not seem to differ in terms of the temporal dynamics of the interdimensional shifting, at least in the auditory modality. PsycINFO Database Record (c) 2011 APA, all rights reserved.
Wöstmann, Malte; Vosskuhl, Johannes; Obleser, Jonas; Herrmann, Christoph S
2018-04-06
Spatial attention relatively increases the power of neural 10-Hz alpha oscillations in the hemisphere ipsilateral to attention, and decreases alpha power in the contralateral hemisphere. For gamma oscillations (>40 Hz), the opposite effect has been observed. The functional roles of lateralised oscillations for attention are currently unclear. If lateralised oscillations are functionally relevant for attention, transcranial stimulation of alpha versus gamma oscillations in one hemisphere should differentially modulate the accuracy of spatial attention to the ipsi- versus contralateral side. Twenty human participants performed a dichotic listening task under continuous transcranial alternating current stimulation (tACS, vs sham) at alpha (10 Hz) or gamma (47 Hz) frequency. On each trial, participants attended to four spoken numbers in the left or right ear, while ignoring numbers in the other ear. In order to stimulate a left temporo-parietal cortex region, which is known to show marked modulations of alpha power during auditory spatial attention, tACS (1 mA peak-to-peak amplitude) was applied at electrode positions TP7 and FC5 over the left hemisphere. As predicted, unihemispheric alpha-tACS relatively decreased the recall of targets contralateral to stimulation, but increased recall of ipsilateral targets. Importantly, this spatial pattern of results was reversed for gamma-tACS. Results provide a proof of concept that transcranially stimulated oscillations can enhance spatial attention and facilitate attentional selection of speech. Furthermore, opposite effects of alpha versus gamma stimulation support the view that states of high alpha are incommensurate with active neural processing as reflected by states of high gamma. Copyright © 2018 Elsevier Inc. All rights reserved.
Hill, N J; Schölkopf, B
2012-01-01
We report on the development and online testing of an EEG-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects’ modulation of N1 and P3 ERP components measured during single 5-second stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare “oddball” stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly-known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention-modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject’s attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology. PMID:22333135
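The classification principle described here, contrasting ERP responses to attended versus unattended streams on every trial rather than relying on rare oddballs, can be sketched with scikit-learn. The feature set (N1/P3-window amplitudes per stream) and the linear classifier below are illustrative assumptions, not the authors' actual pipeline.

```python
# Sketch of single-trial left/right classification from ERP features.
# Each trial is summarized by N1/P3-window mean amplitudes for the left- and
# right-ear streams; a linear classifier predicts the attended side.
# Data, epoching, and feature windows are placeholders, not the study's pipeline.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 200
# Hypothetical features: [left N1, left P3, right N1, right P3] mean amplitudes (µV)
X = rng.standard_normal((n_trials, 4))
y = rng.integers(0, 2, n_trials)           # 0 = attend left, 1 = attend right
# Inject a simple attention effect so the example is non-trivial:
X[y == 1, 3] += 1.0                        # larger P3 to the attended right-ear stream
X[y == 0, 1] += 1.0                        # larger P3 to the attended left-ear stream

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```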
Callahan, Brandy L; Belleville, Sylvie; Ferland, Guylaine; Potvin, Olivier; Tremblay, Marie-Pier; Hudon, Carol; Macoir, Joël
2014-01-01
The Brown-Peterson task is used to assess verbal short-term memory as well as divided attention. In its auditory three-consonant version, trigrams are presented to participants who must recall the items in correct order after variable delays, during which an interference task is performed. The present study aimed to establish normative data for this test in the elderly French-Quebec population based on cross-sectional data from a retrospective, multi-center convenience sample. A total of 595 elderly native French-speakers from the province of Quebec performed the Memoria version of the auditory three-consonant Brown-Peterson test. For both series and item-by-item scoring methods, age, education, and, in most cases, recall after a 0-second interval were found to be significantly associated with recall performance after 10-second, 20-second, and 30-second interference intervals. Based on regression model results, equations to calculate Z scores are presented for the 10-second, 20-second and 30-second intervals and for each scoring method to allow estimation of expected performance based on participants' individual characteristics. As an important ceiling effect was observed at the 0-second interval, norms for this interference interval are presented in percentiles.
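The regression-based norms described above follow a standard logic: an observed score is compared with the score predicted from the demographic regression equation, and the difference is scaled by the standard deviation of the residuals. A minimal sketch with invented coefficients (the published tables supply the real equations per interval and scoring method):

```python
# Z score from a normative regression equation:
#   Z = (observed - predicted) / SD_residuals,
# where predicted = b0 + b1*age + b2*education + b3*baseline_recall.
# The coefficients below are invented for illustration only.
def normative_z(observed, age, education, baseline,
                b0=10.0, b_age=-0.05, b_edu=0.15, b_base=0.4, sd_resid=2.5):
    predicted = b0 + b_age * age + b_edu * education + b_base * baseline
    return (observed - predicted) / sd_resid

# Example: a 72-year-old with 12 years of education and baseline recall of 11
# who recalls 8 items after a 30-second interference interval.
print(round(normative_z(observed=8, age=72, education=12, baseline=11), 2))
```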
Longitudinal auditory learning facilitates auditory cognition as revealed by microstate analysis.
Giroud, Nathalie; Lemke, Ulrike; Reich, Philip; Matthes, Katarina L; Meyer, Martin
2017-02-01
The current study investigates cognitive processes as reflected in late auditory-evoked potentials as a function of longitudinal auditory learning. A normal-hearing adult sample (n=15) performed an active oddball task at three consecutive time points (TPs) arranged at two-week intervals, during which EEG was recorded. The stimuli consisted of syllables with a natural fricative (/sh/, /s/, /f/) embedded between two /a/ sounds, as well as morphed transitions between these syllables, which served as deviants. Perceptual and cognitive modulations, as reflected in the onset and the mean global field power (GFP) of N2b- and P3b-related microstates, were investigated across the four weeks. We found that the onset of P3b-like microstates, but not N2b-like microstates, decreased across TPs, more strongly for difficult deviants, leading to similar onsets for difficult and easy stimuli after repeated exposure. The mean GFP of all N2b-like and P3b-like microstates increased more for spectrally strong deviants than for weak deviants, leading to a distinctive activation for each stimulus after learning. Our results indicate that auditory-related cognitive mechanisms such as stimulus categorization, attention, and memory updating are an indispensable part of successful longitudinal auditory learning. This suggests that future studies should focus on the potential benefits of cognitive processes in auditory training. Copyright © 2016 Elsevier B.V. All rights reserved.
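Global field power (GFP), the measure used here to quantify microstate strength, is conventionally computed as the standard deviation of the scalp voltage across all electrodes at each time point of the average-referenced signal. A minimal sketch on synthetic data (channel count, sampling rate, and the P3b window are assumptions):

```python
# Global field power: the spatial standard deviation across electrodes at each
# time point of the average-referenced ERP. Synthetic data for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_times = 64, 700
erp = rng.standard_normal((n_channels, n_times))      # channels x time (µV)

erp_avg_ref = erp - erp.mean(axis=0, keepdims=True)   # average reference
gfp = erp_avg_ref.std(axis=0)                         # GFP over time

# Mean GFP within an assumed P3b window (300-600 ms at 1000 Hz sampling)
print(gfp[300:600].mean())
```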
Getzmann, Stephan; Jasny, Julian; Falkenstein, Michael
2017-02-01
Verbal communication in a "cocktail-party situation" is a major challenge for the auditory system. In particular, changes in target speaker usually result in declined speech perception. Here, we investigated whether speech cues indicating a subsequent change in target speaker reduce the costs of switching in younger and older adults. We employed event-related potential (ERP) measures and a speech perception task in which sequences of short words were presented simultaneously by four speakers. Changes in target speaker were either unpredictable or semantically cued by a word within the target stream. Cued changes resulted in a smaller decline in performance than uncued changes in both age groups. The ERP analysis revealed shorter latencies in the change-related N400 and late positive complex (LPC) after cued changes, suggesting an acceleration in context updating and attention switching. Thus, both younger and older listeners used semantic cues to prepare for changes in the speaker setting. Copyright © 2016 Elsevier Inc. All rights reserved.
Selective attention: PSI performance in children with learning disabilities.
Garcia, Vera Lúcia; Pereira, Liliane Desgualdo; Fukuda, Yotaka
2007-01-01
Selective attention is essential for learning how to write and read. The objective of this study was to examine the process of selective auditory attention in children with learning disabilities. Group I included 40 subjects aged between 9 years 6 months and 10 years 11 months who had a low risk of altered hearing, language, and learning development. Group II included 20 subjects aged between 9 years 5 months and 11 years 10 months who presented with learning disabilities. A prospective study was carried out using the Pediatric Speech Intelligibility Test (PSI). Right-ear PSI with an ipsilateral competing message at speech/noise ratios of 0 and -10 was sufficient to differentiate Group I from Group II. Special attention should be given to the performance of Group II on the first tested ear, which may provide important indications of improvement in performance and rehabilitation. The PSI-MCI of the right ear at speech/noise ratios of 0 and -10 was appropriate for differentiating Groups I and II. There was an association with the group that presented learning disabilities: this group showed problems in selective attention.
Bashiri, Azadeh; Shahmoradi, Leila; Beigy, Hamid; Savareh, Behrouz A; Nosratabadi, Masood; N Kalhori, Sharareh R; Ghazisaeedi, Marjan
2018-06-01
Quantitative EEG provides valuable information in the clinical evaluation of psychological disorders. The purpose of the present study was to identify the most prominent quantitative electroencephalography (QEEG) features that affect attention and response control parameters in children with attention deficit hyperactivity disorder. The QEEG features and the Integrated Visual and Auditory Continuous Performance Test (IVA-CPT) results of 95 subjects with attention deficit hyperactivity disorder were preprocessed using the Independent Evaluation Criterion for Binary Classification. The importance of the selected features in classifying the desired outputs was then evaluated using an artificial neural network. The findings identified the highest-ranking QEEG features for each IVA-CPT parameter related to attention and response control. The designed model could help therapists determine the presence or absence of deficits in attention and response control based on QEEG.
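The analysis pipeline described here, feature selection over QEEG measures followed by a neural-network evaluation of feature importance, can be sketched generically. The univariate selection step and multilayer perceptron below are standard stand-ins for the study's specific criterion and network; the data and feature count are placeholders.

```python
# Generic sketch of a feature-selection + neural-network pipeline:
# univariate selection of QEEG features followed by an MLP classifier.
# Standard scikit-learn components substitute for the study's specific
# criterion and network; the data are random placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((95, 40))       # 95 subjects x 40 QEEG features (placeholder)
y = rng.integers(0, 2, 95)              # e.g. impaired vs. normal attention (placeholder)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),       # keep the 10 most discriminative features
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
)
print(cross_val_score(model, X, y, cv=5).mean())
```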
Oei, Adam C.; Patterson, Michael D.
2015-01-01
Despite increasing evidence that shows action video game play improves perceptual and cognitive skills, the mechanisms of transfer are not well-understood. In line with previous work, we suggest that transfer is dependent upon common demands between the game and transfer task. In the current study, participants played one of four action games with varying speed, visual, and attentional demands for 20 h. We examined whether training enhanced performance for attentional blink, selective attention, attending to multiple items, visual search and auditory detection. Non-gamers who played the game (Modern Combat) with the highest demands showed transfer to tasks of attentional blink and attending to multiple items. The game (MGS Touch) with fewer attentional demands also decreased attentional blink, but to a lesser degree. Other games failed to show transfer, despite having many action game characteristics but at a reduced intensity. The results support the common demands hypothesis. PMID:25713551
The effect of pain on involuntary and voluntary capture of attention.
Troche, S J; Houlihan, M E; Connolly, J F; Dick, B D; McGrath, P J; Finley, G A; Stroink, G
2015-03-01
There is converging evidence for the notion that pain affects a broad range of attentional domains. This study investigated the influence of pain on the involuntary capture of attention as indexed by the P3a component of the event-related potential derived from the electroencephalogram. Participants performed an auditory oddball task in a pain-free and a pain condition, during which they submerged a hand in cold water. Novel, infrequent and unexpected auditory stimuli were presented randomly within a series of frequent standard and infrequent target tones. P3a and P3b amplitudes were measured in response to novel, unexpected stimuli and to target stimuli, respectively. Both electrophysiological components showed reduced amplitudes in the pain condition compared with the pain-free condition. Hit rate and reaction time to target stimuli did not differ between the two conditions, presumably because the experimental task was not difficult enough to exceed attentional capacity under pain. These results indicate that voluntary attention serving the maintenance and control of ongoing information processing (reflected by the P3b amplitude) is impaired by pain. In addition, the involuntary capture of attention and orientation to novel, unexpected information (measured by the P3a) is also impaired by pain. Thus, the neurophysiological measures examined in this study support theoretical positions proposing that pain can reduce attentional processing capacity. These findings have potentially important implications at the theoretical level for our understanding of the interplay of pain and cognition, and at the therapeutic level for the clinical treatment of individuals experiencing ongoing pain. © 2014 European Pain Federation - EFIC®
A right-ear bias of auditory selective attention is evident in alpha oscillations.
Payne, Lisa; Rogers, Chad S; Wingfield, Arthur; Sekuler, Robert
2017-04-01
Auditory selective attention makes it possible to pick out one speech stream that is embedded in a multispeaker environment. We adapted a cued dichotic listening task to examine suppression of a speech stream lateralized to the nonattended ear, and to evaluate the effects of attention on the right ear's well-known advantage in the perception of linguistic stimuli. After being cued to attend to input from either their left or right ear, participants heard two different four-word streams presented simultaneously to the separate ears. Following each dichotic presentation, participants judged whether a spoken probe word had been in the attended ear's stream. We used EEG signals to track participants' spatial lateralization of auditory attention, which is marked by interhemispheric differences in EEG alpha (8-14 Hz) power. A right-ear advantage (REA) was evident in faster response times and greater sensitivity in distinguishing attended from unattended words. Consistent with the REA, we found strongest parietal and right frontotemporal alpha modulation during the attend-right condition. These findings provide evidence for a link between selective attention and the REA during directed dichotic listening. © 2016 Society for Psychophysiological Research.
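The interhemispheric alpha-power difference used to track the locus of auditory attention in this and similar studies is often summarized as a lateralization index, (right - left)/(right + left) alpha power over corresponding electrodes of the two hemispheres. A minimal sketch on synthetic data (electrode grouping, epoch length, and band edges are assumptions):

```python
# Alpha (8-14 Hz) lateralization index from left vs. right parietal channels:
#   ALI = (P_right - P_left) / (P_right + P_left)
# Positive values indicate relatively more alpha over the right hemisphere.
# Synthetic data; electrode grouping and band edges are assumptions.
import numpy as np
from scipy.signal import welch

fs = 500
rng = np.random.default_rng(0)
left_chans = rng.standard_normal((4, fs * 4))    # e.g. P3, P5, P7, CP5 (assumed)
right_chans = rng.standard_normal((4, fs * 4))   # e.g. P4, P6, P8, CP6 (assumed)

def alpha_power(data):
    f, psd = welch(data, fs=fs, nperseg=fs)
    band = (f >= 8) & (f <= 14)
    return psd[:, band].mean()

p_left, p_right = alpha_power(left_chans), alpha_power(right_chans)
ali = (p_right - p_left) / (p_right + p_left)
print(f"alpha lateralization index: {ali:+.3f}")
```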
Nonverbal spatially selective attention in 4- and 5-year-old children.
Sanders, Lisa D; Zobel, Benjamin H
2012-07-01
Under some conditions 4- and 5-year-old children can differentially process sounds from attended and unattended locations. In fact, the latency of spatially selective attention effects on auditory processing as measured with event-related potentials (ERPs) is quite similar in young children and adults. However, it is not clear if developmental differences in the polarity, distribution, and duration of attention effects are best attributed to acoustic characteristics, availability of non-spatial attention cues, task demands, or domain. In the current study adults and children were instructed to attend to one of two simultaneously presented soundscapes (e.g., city sounds or night sounds) to detect targets (e.g., car horn or owl hoot) in the attended channel only. Probes presented from the same location as the attended soundscape elicited a larger negativity by 80 ms after onset in both adults and children. This initial negative difference (Nd) was followed by a larger positivity for attended probes in adults and another negativity for attended probes in children. The results indicate that the neural systems by which attention modulates early auditory processing are available for young children even when presented with nonverbal sounds. They also suggest important interactions between attention, acoustic characteristics, and maturity on auditory evoked potentials. Copyright © 2012 Elsevier Ltd. All rights reserved.
Neuronal effects of nicotine during auditory selective attention.
Smucny, Jason; Olincy, Ann; Eichman, Lindsay S; Tregellas, Jason R
2015-06-01
Although the attention-enhancing effects of nicotine have been well documented behaviorally and neurophysiologically, its localized functional effects during selective attention are poorly understood. In this study, we examined the neuronal effects of nicotine during auditory selective attention in healthy human nonsmokers. We hypothesized that we would observe significant effects of nicotine in attention-associated brain areas, driven by nicotine-induced increases in activity as a function of increasing task demands. A single-blind, prospective, randomized crossover design was used to examine neuronal response associated with a go/no-go task after administration of a 7 mg nicotine or placebo patch in 20 individuals who underwent functional magnetic resonance imaging at 3T. The task design included two levels of difficulty (ordered vs. random stimuli) and two levels of auditory distraction (silence vs. noise). Significant treatment × difficulty × distraction interaction effects on neuronal response were observed in the hippocampus, ventral parietal cortex, and anterior cingulate. In contrast to our hypothesis, U-shaped and inverted-U-shaped relationships between the effects of nicotine on response and task demands were observed, depending on the brain area. These results suggest that nicotine may differentially affect neuronal response depending on task conditions. They have important theoretical implications for understanding how cholinergic tone may influence the neurobiology of selective attention.