Sample records for auditory duration discrimination

  1. Audition dominates vision in duration perception irrespective of salience, attention, and temporal discriminability

    PubMed Central

    Ortega, Laura; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2014-01-01

    Whereas the visual modality tends to dominate over the auditory modality in bimodal spatial perception, the auditory modality tends to dominate over the visual modality in bimodal temporal perception. Recent results suggest that the visual modality dominates bimodal spatial perception because spatial discriminability is typically greater for the visual than auditory modality; accordingly, visual dominance is eliminated or reversed when visual-spatial discriminability is reduced by degrading visual stimuli to be equivalent or inferior to auditory spatial discriminability. Thus, for spatial perception, the modality that provides greater discriminability dominates. Here we ask whether auditory dominance in duration perception is similarly explained by factors that influence the relative quality of auditory and visual signals. In contrast to the spatial results, the auditory modality dominated over the visual modality in bimodal duration perception even when the auditory signal was clearly weaker, when the auditory signal was ignored (i.e., the visual signal was selectively attended), and when the temporal discriminability was equivalent for the auditory and visual signals. Thus, unlike spatial perception where the modality carrying more discriminable signals dominates, duration perception seems to be mandatorily linked to auditory processing under most circumstances. PMID:24806403

  2. Summary statistics in auditory perception.

    PubMed

    McDermott, Josh H; Schemitsch, Michael; Simoncelli, Eero P

    2013-04-01

    Sensory signals are transduced at high resolution, but their structure must be stored in a more compact format. Here we provide evidence that the auditory system summarizes the temporal details of sounds using time-averaged statistics. We measured discrimination of 'sound textures' that were characterized by particular statistical properties, as normally result from the superposition of many acoustic features in auditory scenes. When listeners discriminated examples of different textures, performance improved with excerpt duration. In contrast, when listeners discriminated different examples of the same texture, performance declined with duration, a paradoxical result given that the information available for discrimination grows with duration. These results indicate that once these sounds are of moderate length, the brain's representation is limited to time-averaged statistics, which, for different examples of the same texture, converge to the same values with increasing duration. Such statistical representations produce good categorical discrimination, but limit the ability to discern temporal detail.
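
    The convergence argument above can be made concrete with a toy simulation. The sketch below is not the authors' texture-synthesis model; it simply draws two independent excerpts from the same stochastic process (amplitude-modulated noise with made-up parameters) and shows that their time-averaged summary statistics tend to move closer together as excerpt duration grows, which is why excerpt-specific detail becomes indistinguishable in a purely statistical representation.

    ```python
    # Toy illustration (not the authors' model): time-averaged statistics of two
    # independent excerpts of the same stochastic "texture" converge as the
    # excerpt duration grows, so excerpt-specific detail is lost in the summary.
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 8000  # sample rate in Hz (arbitrary for this sketch)

    def texture_excerpt(duration_s):
        """Amplitude-modulated noise with fixed generative statistics."""
        n = int(duration_s * fs)
        env = 1.0 + 0.5 * np.sin(2 * np.pi * 4 * np.arange(n) / fs + rng.uniform(0, 2 * np.pi))
        return env * rng.standard_normal(n)

    def summary_stats(x):
        """Time-averaged statistics: mean power and coefficient of variation of |x|."""
        return np.array([np.mean(x ** 2), np.std(np.abs(x)) / np.mean(np.abs(x))])

    for duration in (0.1, 0.5, 2.0, 8.0):
        a, b = texture_excerpt(duration), texture_excerpt(duration)
        gap = np.abs(summary_stats(a) - summary_stats(b))
        print(f"{duration:4.1f} s  |difference in summary statistics| = {gap}")
    ```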

  3. Human sensitivity to differences in the rate of auditory cue change.

    PubMed

    Maloff, Erin S; Grantham, D Wesley; Ashmead, Daniel H

    2013-05-01

    Measurement of sensitivity to differences in the rate of change of auditory signal parameters is complicated by confounds among duration, extent, and velocity of the changing signal. Dooley and Moore [(1988) J. Acoust. Soc. Am. 84(4), 1332-1337] proposed a method for measuring sensitivity to rate of change using a duration discrimination task. They reported improved duration discrimination when an additional intensity or frequency change cue was present. The current experiments were an attempt to use this method to measure sensitivity to the rate of change in intensity and spatial position. Experiment 1 investigated whether duration discrimination was enhanced when additional cues of rate of intensity change, rate of spatial position change, or both were provided. Experiment 2 determined whether participant listening experience or the testing environment influenced duration discrimination task performance. Experiment 3 assessed whether duration discrimination could be used to measure sensitivity to rates of changes in intensity and spatial position for stimuli with lower rates of change, as well as emphasizing the constancy of the velocity cue. Results of these experiments showed that duration discrimination was impaired rather than enhanced by the additional velocity cues. The findings are discussed in terms of the demands of listening to concurrent changes along multiple auditory dimensions.

  4. Behavioral and subcortical signatures of musical expertise in Mandarin Chinese speakers

    PubMed Central

    Tervaniemi, Mari; Aalto, Daniel

    2018-01-01

    Both musical training and native language have been shown to have experience-based plastic effects on auditory processing. However, the combined effects within individuals are unclear. Recent research suggests that musical training and tone language speaking are not clearly additive in their effects on processing of auditory features and that there may be a disconnect between perceptual and neural signatures of auditory feature processing. The literature has only recently begun to investigate the effects of musical expertise on basic auditory processing for different linguistic groups. This work provides a profile of primary auditory feature discrimination for Mandarin speaking musicians and nonmusicians. The musicians showed enhanced perceptual discrimination for both frequency and duration as well as enhanced duration discrimination in a multifeature discrimination task, compared to nonmusicians. However, there were no differences between the groups in duration processing of nonspeech sounds at a subcortical level or in subcortical frequency representation of a nonnative tone contour, for F0 or for the first or second formant region. The results indicate that musical expertise provides a cognitive, but not subcortical, advantage in a population of Mandarin speakers. PMID:29300756

  5. Rise time and formant transition duration in the discrimination of speech sounds: the Ba-Wa distinction in developmental dyslexia.

    PubMed

    Goswami, Usha; Fosker, Tim; Huss, Martina; Mead, Natasha; Szucs, Dénes

    2011-01-01

    Across languages, children with developmental dyslexia have a specific difficulty with the neural representation of the sound structure (phonological structure) of speech. One likely cause of their difficulties with phonology is a perceptual difficulty in auditory temporal processing (Tallal, 1980). Tallal (1980) proposed that basic auditory processing of brief, rapidly successive acoustic changes is compromised in dyslexia, thereby affecting phonetic discrimination (e.g. discriminating /b/ from /d/) via impaired discrimination of formant transitions (rapid acoustic changes in frequency and intensity). However, an alternative auditory temporal hypothesis is that the basic auditory processing of the slower amplitude modulation cues in speech is compromised (Goswami et al., 2002). Here, we contrast children's perception of a synthetic speech contrast (ba/wa) when it is based on the speed of the rate of change of frequency information (formant transition duration) versus the speed of the rate of change of amplitude modulation (rise time). We show that children with dyslexia have excellent phonetic discrimination based on formant transition duration, but poor phonetic discrimination based on envelope cues. The results explain why phonetic discrimination may be allophonic in developmental dyslexia (Serniclaes et al., 2004), and suggest new avenues for the remediation of developmental dyslexia. © 2010 Blackwell Publishing Ltd.

  6. The Relationship between Auditory Processing and Restricted, Repetitive Behaviors in Adults with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Kargas, Niko; López, Beatriz; Reddy, Vasudevi; Morris, Paul

    2015-01-01

    Current views suggest that autism spectrum disorders (ASDs) are characterised by enhanced low-level auditory discrimination abilities. Little is known, however, about whether enhanced abilities are universal in ASD and how they relate to symptomatology. We tested auditory discrimination for intensity, frequency and duration in 21 adults with ASD…

  7. Auditory Discrimination Learning: Role of Working Memory.

    PubMed

    Zhang, Yu-Xuan; Moore, David R; Guiraud, Jeanne; Molloy, Katharine; Yan, Ting-Ting; Amitay, Sygal

    2016-01-01

    Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretic framework for interactions between cognitive and sensory plasticity during perceptual experience.
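
    The working-memory measure named above is a tone n-back task. As a hedged illustration (the record gives no task parameters), the snippet below scores a generic 2-back run: a trial is a target when the current tone matches the one two positions back, and sensitivity is summarized as d' from hit and false-alarm rates.

    ```python
    # Generic n-back scoring sketch (parameters assumed, not from the paper).
    import numpy as np
    from scipy.stats import norm

    def nback_dprime(tones, responses, n=2):
        """tones: sequence of tone labels; responses: True where the listener said 'match'."""
        targets = [i for i in range(n, len(tones)) if tones[i] == tones[i - n]]
        nontargets = [i for i in range(n, len(tones)) if tones[i] != tones[i - n]]
        hit = np.mean([responses[i] for i in targets])
        fa = np.mean([responses[i] for i in nontargets])
        hit, fa = np.clip([hit, fa], 0.01, 0.99)  # avoid infinite z-scores
        return norm.ppf(hit) - norm.ppf(fa)

    tones = list("ABABBCBCAA")
    responses = [False, False, True, True, False, False, True, False, True, False]
    print(nback_dprime(tones, responses, n=2))  # d' for this short made-up run
    ```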

  8. Auditory Discrimination Learning: Role of Working Memory

    PubMed Central

    Zhang, Yu-Xuan; Moore, David R.; Guiraud, Jeanne; Molloy, Katharine; Yan, Ting-Ting; Amitay, Sygal

    2016-01-01

    Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretic framework for interactions between cognitive and sensory plasticity during perceptual experience. PMID:26799068

  9. Auditory/visual Duration Bisection in Patients with Left or Right Medial-Temporal Lobe Resection

    ERIC Educational Resources Information Center

    Melgire, Manuela; Ragot, Richard; Samson, Severine; Penney, Trevor B.; Meck, Warren H.; Pouthas, Viviane

    2005-01-01

    Patients with unilateral (left or right) medial temporal lobe lesions and normal control (NC) volunteers participated in two experiments, both using a duration bisection procedure. Experiment 1 assessed discrimination of auditory and visual signal durations ranging from 2 to 8 s, in the same test session. Patients and NC participants judged…

  10. Auditory Pattern Recognition and Brief Tone Discrimination of Children with Reading Disorders

    ERIC Educational Resources Information Center

    Walker, Marianna M.; Givens, Gregg D.; Cranford, Jerry L.; Holbert, Don; Walker, Letitia

    2006-01-01

    Auditory pattern recognition skills in children with reading disorders were investigated using perceptual tests involving discrimination of frequency and duration tonal patterns. A behavioral test battery involving recognition of the pattern of presentation of tone triads was used in which individual components differed in either frequency or…

  11. Atypical central auditory speech-sound discrimination in children who stutter as indexed by the mismatch negativity.

    PubMed

    Jansson-Verkasalo, Eira; Eggers, Kurt; Järvenpää, Anu; Suominen, Kalervo; Van den Bergh, Bea; De Nil, Luc; Kujala, Teija

    2014-09-01

    Recent theoretical conceptualizations suggest that disfluencies in stuttering may arise from several factors, one of them being atypical auditory processing. The main purpose of the present study was to investigate whether speech sound encoding and central auditory discrimination, are affected in children who stutter (CWS). Participants were 10 CWS, and 12 typically developing children with fluent speech (TDC). Event-related potentials (ERPs) for syllables and syllable changes [consonant, vowel, vowel-duration, frequency (F0), and intensity changes], critical in speech perception and language development of CWS were compared to those of TDC. There were no significant group differences in the amplitudes or latencies of the P1 or N2 responses elicited by the standard stimuli. However, the Mismatch Negativity (MMN) amplitude was significantly smaller in CWS than in TDC. For TDC all deviants of the linguistic multifeature paradigm elicited significant MMN amplitudes, comparable with the results found earlier with the same paradigm in 6-year-old children. In contrast, only the duration change elicited a significant MMN in CWS. The results showed that central auditory speech-sound processing was typical at the level of sound encoding in CWS. In contrast, central speech-sound discrimination, as indexed by the MMN for multiple sound features (both phonetic and prosodic), was atypical in the group of CWS. Findings were linked to existing conceptualizations on stuttering etiology. The reader will be able (a) to describe recent findings on central auditory speech-sound processing in individuals who stutter, (b) to describe the measurement of auditory reception and central auditory speech-sound discrimination, (c) to describe the findings of central auditory speech-sound discrimination, as indexed by the mismatch negativity (MMN), in children who stutter. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Neural Correlates of Temporal Auditory Processing in Developmental Dyslexia during German Vowel Length Discrimination: An fMRI Study

    ERIC Educational Resources Information Center

    Steinbrink, Claudia; Groth, Katarina; Lachmann, Thomas; Riecker, Axel

    2012-01-01

    This fMRI study investigated phonological vs. auditory temporal processing in developmental dyslexia by means of a German vowel length discrimination paradigm (Groth, Lachmann, Riecker, Muthmann, & Steinbrink, 2011). Behavioral and fMRI data were collected from dyslexics and controls while performing same-different judgments of vowel duration in…

  13. Laterality of basic auditory perception.

    PubMed

    Sininger, Yvonne S; Bhatara, Anjali

    2012-01-01

    Laterality (left-right ear differences) of auditory processing was assessed using basic auditory skills: (1) gap detection, (2) frequency discrimination, and (3) intensity discrimination. Stimuli included tones (500, 1000, and 4000 Hz) and wide-band noise presented monaurally to each ear of typical adult listeners. The hypothesis tested was that processing of tonal stimuli would be enhanced by left ear (LE) stimulation and noise by right ear (RE) presentations. To investigate the limits of laterality by (1) spectral width, a narrow-band noise (NBN) of 450-Hz bandwidth was evaluated using intensity discrimination, and (2) stimulus duration, 200, 500, and 1000 ms duration tones were evaluated using frequency discrimination. A left ear advantage (LEA) was demonstrated with tonal stimuli in all experiments, but an expected REA for noise stimuli was not found. The NBN stimulus demonstrated no LEA and was characterised as a noise. No change in laterality was found with changes in stimulus durations. The LEA for tonal stimuli is felt to be due to more direct connections between the left ear and the right auditory cortex, which has been shown to be primary for spectral analysis and tonal processing. The lack of a REA for noise stimuli is unexplained. Sex differences in laterality for noise stimuli were noted but were not statistically significant. This study did establish a subtle but clear pattern of LEA for processing of tonal stimuli.

  14. Laterality of Basic Auditory Perception

    PubMed Central

    Sininger, Yvonne S.; Bhatara, Anjali

    2010-01-01

    Laterality (left-right ear differences) of auditory processing was assessed using basic auditory skills: (1) gap detection, (2) frequency discrimination, and (3) intensity discrimination. Stimuli included tones (500, 1000 and 4000 Hz) and wide-band noise presented monaurally to each ear of typical adult listeners. The hypothesis tested was: processing of tonal stimuli would be enhanced by left ear (LE) stimulation and noise by right ear (RE) presentations. To investigate the limits of laterality by (1) spectral width, a narrow band noise (NBN) of 450 Hz bandwidth was evaluated using intensity discrimination and (2) stimulus duration, 200, 500 and 1000 ms duration tones were evaluated using frequency discrimination. Results: A left ear advantage (LEA) was demonstrated with tonal stimuli in all experiments but an expected REA for noise stimuli was not found. The NBN stimulus demonstrated no LEA and was characterized as a noise. No change in laterality was found with changes in stimulus durations. The LEA for tonal stimuli is felt to be due to more direct connections between the left ear and the right auditory cortex which has been shown to be primary for spectral analysis and tonal processing. The lack of a REA for noise stimuli is unexplained. Sex differences in laterality for noise stimuli were noted but were not statistically significant. This study did establish a subtle but clear pattern of LEA for processing of tonal stimuli. PMID:22385138

  15. Musical Sophistication and the Effect of Complexity on Auditory Discrimination in Finnish Speakers.

    PubMed

    Dawson, Caitlin; Aalto, Daniel; Šimko, Juraj; Vainio, Martti; Tervaniemi, Mari

    2017-01-01

    Musical experiences and native language are both known to affect auditory processing. The present work aims to disentangle the influences of native language phonology and musicality on behavioral and subcortical sound feature processing in a population of musically diverse Finnish speakers as well as to investigate the specificity of enhancement from musical training. Finnish speakers are highly sensitive to duration cues since in Finnish, vowel and consonant duration determine word meaning. Using a correlational approach with a set of behavioral sound feature discrimination tasks, brainstem recordings, and a musical sophistication questionnaire, we find no evidence for an association between musical sophistication and more precise duration processing in Finnish speakers either in the auditory brainstem response or in behavioral tasks, but they do show an enhanced pitch discrimination compared to Finnish speakers with less musical experience and show greater duration modulation in a complex task. These results are consistent with a ceiling effect set for certain sound features which corresponds to the phonology of the native language, leaving an opportunity for music experience-based enhancement of sound features not explicitly encoded in the language (such as pitch, which is not explicitly encoded in Finnish). Finally, the pattern of duration modulation in more musically sophisticated Finnish speakers suggests integrated feature processing for greater efficiency in a real world musical situation. These results have implications for research into the specificity of plasticity in the auditory system as well as to the effects of interaction of specific language features with musical experiences.

  16. Musical Sophistication and the Effect of Complexity on Auditory Discrimination in Finnish Speakers

    PubMed Central

    Dawson, Caitlin; Aalto, Daniel; Šimko, Juraj; Vainio, Martti; Tervaniemi, Mari

    2017-01-01

    Musical experiences and native language are both known to affect auditory processing. The present work aims to disentangle the influences of native language phonology and musicality on behavioral and subcortical sound feature processing in a population of musically diverse Finnish speakers as well as to investigate the specificity of enhancement from musical training. Finnish speakers are highly sensitive to duration cues since in Finnish, vowel and consonant duration determine word meaning. Using a correlational approach with a set of behavioral sound feature discrimination tasks, brainstem recordings, and a musical sophistication questionnaire, we find no evidence for an association between musical sophistication and more precise duration processing in Finnish speakers either in the auditory brainstem response or in behavioral tasks, but they do show an enhanced pitch discrimination compared to Finnish speakers with less musical experience and show greater duration modulation in a complex task. These results are consistent with a ceiling effect set for certain sound features which corresponds to the phonology of the native language, leaving an opportunity for music experience-based enhancement of sound features not explicitly encoded in the language (such as pitch, which is not explicitly encoded in Finnish). Finally, the pattern of duration modulation in more musically sophisticated Finnish speakers suggests integrated feature processing for greater efficiency in a real world musical situation. These results have implications for research into the specificity of plasticity in the auditory system as well as to the effects of interaction of specific language features with musical experiences. PMID:28450829

  17. The Development of Mental Models for Auditory Events: Relational Complexity and Discrimination of Pitch and Duration

    ERIC Educational Resources Information Center

    Stevens, Catherine; Gallagher, Melinda

    2004-01-01

    This experiment investigated relational complexity and relational shift in judgments of auditory patterns. Pitch and duration values were used to construct two-note perceptually similar sequences (unary relations) and four-note relationally similar sequences (binary relations). It was hypothesized that 5-, 8- and 11-year-old children would perform…

  18. The perception of prosody and associated auditory cues in early-implanted children: the role of auditory working memory and musical activities.

    PubMed

    Torppa, Ritva; Faulkner, Andrew; Huotilainen, Minna; Järvikivi, Juhani; Lipsanen, Jari; Laasonen, Marja; Vainio, Martti

    2014-03-01

    The aim was to study prosodic perception in early-implanted children in relation to auditory discrimination, auditory working memory, and exposure to music. Word and sentence stress perception, discrimination of fundamental frequency (F0), intensity and duration, and forward digit span were measured twice over approximately 16 months. Musical activities were assessed by questionnaire. Participants were twenty-one early-implanted and age-matched normal-hearing (NH) children (4-13 years). Children with cochlear implants (CIs) exposed to music performed better than others in stress perception and F0 discrimination. Only this subgroup of implanted children improved with age in word stress perception and intensity discrimination, and improved over time in digit span. Prosodic perception, F0 discrimination and forward digit span in implanted children exposed to music was equivalent to the NH group, but other implanted children performed more poorly. For children with CIs, word stress perception was linked to digit span and intensity discrimination; sentence stress perception was additionally linked to F0 discrimination. Prosodic perception in children with CIs is linked to auditory working memory and aspects of auditory discrimination. Engagement in music was linked to better performance across a range of measures, suggesting that music is a valuable tool in the rehabilitation of implanted children.

  19. Atypical pattern of discriminating sound features in adults with Asperger syndrome as reflected by the mismatch negativity.

    PubMed

    Kujala, T; Aho, E; Lepistö, T; Jansson-Verkasalo, E; Nieminen-von Wendt, T; von Wendt, L; Näätänen, R

    2007-04-01

    Asperger syndrome, which belongs to the autistic spectrum of disorders, is characterized by deficits of social interaction and abnormal perception, like hypo- or hypersensitivity in reacting to sounds and discriminating certain sound features. We determined auditory feature discrimination in adults with Asperger syndrome with the mismatch negativity (MMN), a neural response which is an index of cortical change detection. We recorded MMN for five different sound features (duration, frequency, intensity, location, and gap). Our results suggest hypersensitive auditory change detection in Asperger syndrome, as reflected in the enhanced MMN for deviant sounds with a gap or shorter duration, and speeded MMN elicitation for frequency changes.

  20. Coding of auditory temporal and pitch information by hippocampal individual cells and cell assemblies in the rat.

    PubMed

    Sakurai, Y

    2002-01-01

    This study reports how hippocampal individual cells and cell assemblies cooperate for neural coding of pitch and temporal information in memory processes for auditory stimuli. Each rat performed two tasks, one requiring discrimination of auditory pitch (high or low) and the other requiring discrimination of their duration (long or short). Some CA1 and CA3 complex-spike neurons showed task-related differential activity between the high and low tones in only the pitch-discrimination task. However, without exception, neurons which showed task-related differential activity between the long and short tones in the duration-discrimination task were always task-related neurons in the pitch-discrimination task. These results suggest that temporal information (long or short), in contrast to pitch information (high or low), cannot be coded independently by specific neurons. The results also indicate that the two different behavioral tasks cannot be fully differentiated by the task-related single neurons alone and suggest a model of cell-assembly coding of the tasks. Cross-correlation analysis among activities of simultaneously recorded multiple neurons supported the suggested cell-assembly model. Considering those results, this study concludes that dual coding by hippocampal single neurons and cell assemblies is working in memory processing of pitch and temporal information of auditory stimuli. The single neurons encode both auditory pitches and their temporal lengths and the cell assemblies encode types of tasks (contexts or situations) in which the pitch and the temporal information are processed.

  1. Auditory Perception, Suprasegmental Speech Processing, and Vocabulary Development in Chinese Preschoolers.

    PubMed

    Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu

    2016-10-01

    The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners, specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese. Also it provided significant contributions to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody. © The Author(s) 2016.

  2. Effect of duration and inter-stimulus interval on auditory temporal order discrimination in young normal-hearing and elderly hearing-impaired listeners

    NASA Astrophysics Data System (ADS)

    Narendran, Mini M.; Humes, Larry E.

    2003-04-01

    Increasing the rate of presentation can have a deleterious effect on auditory processing, especially among the elderly. Rate can be manipulated by changing the duration of individual components of a sequence of sounds, by changing the inter-stimulus interval (ISI) between components, or both. Consequently, when age-related deficits in performance appear to be attributable to rate of stimulus presentation, it is often the case that alternative explanations in terms of the effects of stimulus duration or ISI are also possible. In this study, the independent effects of duration and ISI on the discrimination of temporal order for four-tone sequences were investigated in a group of young normal-hearing and elderly hearing-impaired listeners. It was found that discrimination performance was driven by the rate of presentation, rather than stimulus duration or ISI alone, for both groups of listeners. The performance of the two groups of listeners differed significantly for the fastest presentation rates, but was similar for the slower rates. Slowing the rate of presentation seemed to improve performance, regardless of whether this was done by increasing stimulus duration or increasing ISI, and this was observed for both groups of listeners. [Work supported, in part, by NIA.]
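
    The confound described above is simple arithmetic: presentation rate is fixed by the sum of component duration and ISI, so the same rate can be produced by trading one against the other. The values below are illustrative, not taken from the study.

    ```python
    # Illustration of the duration/ISI/rate confound: the same presentation rate
    # can be reached by trading tone duration against inter-stimulus interval.
    def rate_per_second(tone_ms, isi_ms):
        return 1000.0 / (tone_ms + isi_ms)

    for tone_ms, isi_ms in [(40, 10), (10, 40), (100, 150), (150, 100)]:
        rate = rate_per_second(tone_ms, isi_ms)
        print(f"duration {tone_ms:3d} ms, ISI {isi_ms:3d} ms -> {rate:.1f} tones/s")
    ```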

  3. Basic Auditory Processing Deficits in Dyslexia: Systematic Review of the Behavioral and Event-Related Potential/Field Evidence

    ERIC Educational Resources Information Center

    Hämäläinen, Jarmo A.; Salminen, Hanne K.; Leppänen, Paavo H. T.

    2013-01-01

    A review of research that uses behavioral, electroencephalographic, and/or magnetoencephalographic methods to investigate auditory processing deficits in individuals with dyslexia is presented. Findings show that measures of frequency, rise time, and duration discrimination as well as amplitude modulation and frequency modulation detection were…

  4. Time-order errors and standard-position effects in duration discrimination: An experimental study and an analysis by the sensation-weighting model.

    PubMed

    Hellström, Åke; Rammsayer, Thomas H

    2015-10-01

    Studies have shown that the discriminability of successive time intervals depends on the presentation order of the standard (St) and the comparison (Co) stimuli. Also, this order affects the point of subjective equality. The first effect is here called the standard-position effect (SPE); the latter is known as the time-order error. In the present study, we investigated how these two effects vary across interval types and standard durations, using Hellström's sensation-weighting model to describe the results and relate them to stimulus comparison mechanisms. In Experiment 1, four modes of interval presentation were used, factorially combining interval type (filled, empty) and sensory modality (auditory, visual). For each mode, two presentation orders (St-Co, Co-St) and two standard durations (100 ms, 1,000 ms) were used; half of the participants received correctness feedback, and half of them did not. The interstimulus interval was 900 ms. The SPEs were negative (i.e., a smaller difference limen for St-Co than for Co-St), except for the filled-auditory and empty-visual 100-ms standards, for which a positive effect was obtained. In Experiment 2, duration discrimination was investigated for filled auditory intervals with four standards between 100 and 1,000 ms, an interstimulus interval of 900 ms, and no feedback. Standard duration interacted with presentation order, here yielding SPEs that were negative for standards of 100 and 1,000 ms, but positive for 215 and 464 ms. Our findings indicate that the SPE can be positive as well as negative, depending on the interval type and standard duration, reflecting the relative weighting of the stimulus information, as is described by the sensation-weighting model.
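
    For readers unfamiliar with the two effects, the point of subjective equality (PSE) and the difference limen (DL) are typically read off a psychometric function. The sketch below is a generic illustration with invented data, not the authors' sensation-weighting analysis: it fits a cumulative Gaussian to "comparison judged longer" proportions, takes the PSE as the fitted mean (its shift from the standard reflects a time-order error) and the DL as half the 25%-75% spread (a DL difference between St-Co and Co-St orders is the standard-position effect).

    ```python
    # Generic psychophysics sketch with made-up data (not the authors' analysis):
    # estimate PSE and DL from a fitted cumulative-Gaussian psychometric function.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    standard_ms = 100.0
    comparison_ms = np.array([70, 80, 90, 100, 110, 120, 130], dtype=float)
    p_longer = np.array([0.05, 0.15, 0.30, 0.55, 0.80, 0.92, 0.98])  # invented proportions

    def psychometric(x, mu, sigma):
        return norm.cdf(x, loc=mu, scale=sigma)

    (mu, sigma), _ = curve_fit(psychometric, comparison_ms, p_longer, p0=[standard_ms, 10.0])
    pse = mu
    dl = sigma * norm.ppf(0.75)  # half the 25%-75% spread of the fitted curve
    print(f"PSE = {pse:.1f} ms (time-order error = {pse - standard_ms:+.1f} ms), DL = {dl:.1f} ms")
    ```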

  5. Neural Measures of a Japanese Consonant Length Discrimination by Japanese and American English Listeners: Effects of Attention

    PubMed Central

    Hisagi, Miwako; Shafer, Valerie L.; Strange, Winifred; Sussman, Elyse S.

    2015-01-01

    This study examined automaticity of discrimination of a Japanese length contrast for consonants (miʃi vs. miʃʃi) in native (Japanese) and non-native (American-English) listeners using behavioral measures and the event-related potential (ERP) mismatch negativity (MMN). Attention to the auditory input was manipulated either away from the auditory input via a visual oddball task (Visual Attend), or to the input by asking the listeners to count auditory deviants (Auditory Attend). Results showed a larger MMN when attention was focused on the consonant contrast than away from it for both groups. The MMN was larger for consonant duration increments than decrements. No difference in MMN between the language groups was observed, but the Japanese listeners did show better behavioral discrimination than the American English listeners. In addition, behavioral responses showed a weak, but significant correlation with MMN amplitude. These findings suggest that both acoustic-phonetic properties and phonological experience affect automaticity of speech processing. PMID:26119918

  6. Representations of temporal information in short-term memory: Are they modality-specific?

    PubMed

    Bratzke, Daniel; Quinn, Katrina R; Ulrich, Rolf; Bausenhart, Karin M

    2016-10-01

    Rattat and Picard (2012) reported that the coding of temporal information in short-term memory is modality-specific, that is, temporal information received via the visual (auditory) modality is stored as a visual (auditory) code. This conclusion was supported by modality-specific interference effects on visual and auditory duration discrimination, which were induced by secondary tasks (visual tracking or articulatory suppression), presented during a retention interval. The present study assessed the stability of these modality-specific interference effects. Our study did not replicate the selective interference pattern but rather indicated that articulatory suppression not only impairs short-term memory for auditory but also for visual durations. This result pattern supports a crossmodal or an abstract view of temporal encoding. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Spatial localization deficits and auditory cortical dysfunction in schizophrenia

    PubMed Central

    Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.

    2014-01-01

    Background: Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods: The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees separation. Results: For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion: Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608

  8. Interplay between singing and cortical processing of music: a longitudinal study in children with cochlear implants.

    PubMed

    Torppa, Ritva; Huotilainen, Minna; Leminen, Miika; Lipsanen, Jari; Tervaniemi, Mari

    2014-01-01

    Informal music activities such as singing may lead to augmented auditory perception and attention. In order to study the accuracy and development of music-related sound change detection in children with cochlear implants (CIs) and normal hearing (NH) aged 4-13 years, we recorded their auditory event-related potentials twice (at T1 and T2, 14-17 months apart). We compared their MMN (preattentive discrimination) and P3a (attention toward salient sounds) to changes in piano tone pitch, timbre, duration, and gaps. Of particular interest was to determine whether singing can facilitate auditory perception and attention of CI children. It was found that, compared to the NH group, the CI group had smaller and later timbre P3a and later pitch P3a, implying degraded discrimination and attention shift. Duration MMN became larger from T1 to T2 only in the NH group. The development of response patterns for duration and gap changes were not similar in the CI and NH groups. Importantly, CI singers had enhanced or rapidly developing P3a or P3a-like responses over all change types. In contrast, CI non-singers had rapidly enlarging pitch MMN without enlargement of P3a, and their timbre P3a became smaller and later over time. These novel results show interplay between MMN, P3a, brain development, cochlear implantation, and singing. They imply an augmented development of neural networks for attention and more accurate neural discrimination associated with singing. In future studies, differential development of P3a between CI and NH children should be taken into account in comparisons of these groups. Moreover, further studies are needed to assess whether singing enhances auditory perception and attention of children with CIs.

  9. Development of a Pitch Discrimination Screening Test for Preschool Children.

    PubMed

    Abramson, Maria Kulick; Lloyd, Peter J

    2016-04-01

    There is a critical need for tests of auditory discrimination for young children as this skill plays a fundamental role in the development of speaking, prereading, reading, language, and more complex auditory processes. Frequency discrimination is important with regard to basic sensory processing affecting phonological processing, dyslexia, measurements of intelligence, auditory memory, Asperger syndrome, and specific language impairment. This study was performed to determine the clinical feasibility of the Pitch Discrimination Test (PDT) to screen the preschool child's ability to discriminate some of the acoustic demands of speech perception, primarily pitch discrimination, without linguistic content. The PDT used brief speech frequency tones to gather normative data from preschool children aged 3 to 5 yrs. A cross-sectional study was used to gather data regarding the pitch discrimination abilities of a sample of typically developing preschool children, between 3 and 5 yrs of age. The PDT consists of ten trials using two pure tones of 100-msec duration each, and was administered in an AA or AB forced-choice response format. Data from 90 typically developing preschool children between the ages of 3 and 5 yrs were used to provide normative data. Nonparametric Mann-Whitney U-testing was used to examine the effects of age as a continuous variable on pitch discrimination. The Kruskal-Wallis test was used to determine the significance of age on performance on the PDT. Spearman rank was used to determine the correlation of age and performance on the PDT. Pitch discrimination of brief tones improved significantly from age 3 yrs to age 4 yrs, as well as from age 3 yrs to the age 4- and 5-yrs group. Results indicated that between ages 3 and 4 yrs, children's auditory discrimination of pitch improved on the PDT. The data showed that children can be screened for auditory discrimination of pitch beginning with age 4 yrs. The PDT proved to be a time efficient, feasible tool for a simple form of frequency discrimination screening in the preschool population before the age where other diagnostic tests of auditory processing disorders can be used. American Academy of Audiology.
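
    As a stimulus-generation sketch only (the record does not give the tone frequencies or the interval between tones, so the values below are illustrative), an AA or AB trial of the kind described can be assembled from two 100-ms pure tones with onset/offset ramps.

    ```python
    # Illustrative stimulus sketch for an AA/AB tone-pair trial; frequencies,
    # ramp length, and inter-stimulus interval are assumptions, not PDT values.
    import numpy as np

    fs = 44100  # Hz

    def tone(freq_hz, dur_ms=100, ramp_ms=10):
        """Pure tone with raised-cosine onset/offset ramps to avoid clicks."""
        t = np.arange(int(fs * dur_ms / 1000)) / fs
        y = np.sin(2 * np.pi * freq_hz * t)
        n_ramp = int(fs * ramp_ms / 1000)
        ramp = 0.5 * (1 - np.cos(np.pi * np.arange(n_ramp) / n_ramp))
        y[:n_ramp] *= ramp
        y[-n_ramp:] *= ramp[::-1]
        return y

    def trial(f1, f2, isi_ms=500):
        silence = np.zeros(int(fs * isi_ms / 1000))
        return np.concatenate([tone(f1), silence, tone(f2)])

    ab_trial = trial(500, 550)   # "different" (AB) trial
    aa_trial = trial(500, 500)   # "same" (AA) trial
    print(ab_trial.shape, aa_trial.shape)
    ```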

  10. Human auditory event-related potentials predict duration judgments.

    PubMed

    Bendixen, Alexandra; Grimm, Sabine; Schröger, Erich

    2005-08-05

    Internal clock models postulate a pulse accumulation process underlying timing activities, with more accumulated pulses resulting in longer perceived durations. We investigated whether this accumulation is reflected in the amplitude of event-related brain potentials (ERPs) elicited by auditory stimuli with durations of 400-600 ms. In a duration discrimination paradigm, we found more negative amplitudes to physically identical stimuli when they were judged as longer than the memorized standard duration (500 ms) as compared to being classified as shorter. This sustained negativity was already developing during the first 100 ms after stimulus onset. It could not be explained as a bias to respond with a particular hand (lateralized readiness potential), but rather reflects a processing difference between the tones to be judged as shorter or longer. Our results are in line with models of time processing which assume that higher numbers of accumulated pulses of a temporal processor result in an increase in perceived duration.
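
    The pacemaker-accumulator idea invoked above can be simulated in a few lines. The pulse rate and decision rule below are assumptions for illustration, not parameters from the paper: pulses arrive as a Poisson process during the stimulus, and a physically identical 500-ms tone is judged "longer" on trials where the accumulated count happens to exceed the memorized standard count.

    ```python
    # Minimal pacemaker-accumulator sketch (pulse rate and criterion are assumed,
    # not taken from the paper): trial-to-trial variability in the accumulated
    # pulse count makes identical 500-ms stimuli sometimes seem longer, sometimes shorter.
    import numpy as np

    rng = np.random.default_rng(1)
    pulse_rate_hz = 200.0                   # assumed pacemaker rate
    standard_count = 0.5 * pulse_rate_hz    # memorized count for the 500-ms standard

    def judge(duration_s):
        count = rng.poisson(pulse_rate_hz * duration_s)
        return "longer" if count > standard_count else "shorter"

    judgments = [judge(0.5) for _ in range(1000)]   # physically identical 500-ms stimuli
    print("judged longer on", judgments.count("longer"), "of 1000 trials")
    ```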

  11. Perception of non-verbal auditory stimuli in Italian dyslexic children.

    PubMed

    Cantiani, Chiara; Lorusso, Maria Luisa; Valnegri, Camilla; Molteni, Massimo

    2010-01-01

    Auditory temporal processing deficits have been proposed as the underlying cause of phonological difficulties in Developmental Dyslexia. The hypothesis was tested in a sample of 20 Italian dyslexic children aged 8-14, and 20 matched control children. Three tasks of auditory processing of non-verbal stimuli, involving discrimination and reproduction of sequences of rapidly presented short sounds were expressly created. Dyslexic subjects performed more poorly than control children, suggesting the presence of a deficit only partially influenced by the duration of the stimuli and of inter-stimulus intervals (ISIs).

  12. Hearing shapes our perception of time: temporal discrimination of tactile stimuli in deaf people.

    PubMed

    Bolognini, Nadia; Cecchetto, Carlo; Geraci, Carlo; Maravita, Angelo; Pascual-Leone, Alvaro; Papagno, Costanza

    2012-02-01

    Confronted with the loss of one type of sensory input, we compensate using information conveyed by other senses. However, losing one type of sensory information at specific developmental times may lead to deficits across all sensory modalities. We addressed the effect of auditory deprivation on the development of tactile abilities, taking into account changes occurring at the behavioral and cortical level. Congenitally deaf and hearing individuals performed two tactile tasks, the first requiring the discrimination of the temporal duration of touches and the second requiring the discrimination of their spatial length. Compared with hearing individuals, deaf individuals were impaired only in tactile temporal processing. To explore the neural substrate of this difference, we ran a TMS experiment. In deaf individuals, the auditory association cortex was involved in temporal and spatial tactile processing, with the same chronometry as the primary somatosensory cortex. In hearing participants, the involvement of auditory association cortex occurred at a later stage and selectively for temporal discrimination. The different chronometry in the recruitment of the auditory cortex in deaf individuals correlated with the tactile temporal impairment. Thus, early hearing experience seems to be crucial to develop an efficient temporal processing across modalities, suggesting that plasticity does not necessarily result in behavioral compensation.

  13. A locus for an auditory processing deficit and language impairment in an extended pedigree maps to 12p13.31-q14.3

    PubMed Central

    Addis, L; Friederici, A D; Kotz, S A; Sabisch, B; Barry, J; Richter, N; Ludwig, A A; Rübsamen, R; Albert, F W; Pääbo, S; Newbury, D F; Monaco, A P

    2010-01-01

    Despite the apparent robustness of language learning in humans, a large number of children still fail to develop appropriate language skills despite adequate means and opportunity. Most cases of language impairment have a complex etiology, with genetic and environmental influences. In contrast, we describe a three-generation German family who present with an apparently simple segregation of language impairment. Investigations of the family indicate auditory processing difficulties as a core deficit. Affected members performed poorly on a nonword repetition task and present with communication impairments. The brain activation pattern for syllable duration as measured by event-related brain potentials showed clear differences between affected family members and controls, with only affected members displaying a late discrimination negativity. In conjunction with psychoacoustic data showing deficiencies in auditory duration discrimination, the present results indicate increased processing demands in discriminating syllables of different duration. This, we argue, forms the cognitive basis of the observed language impairment in this family. Genome-wide linkage analysis showed a haplotype in the central region of chromosome 12 which reaches the maximum possible logarithm of odds ratio (LOD) score and fully co-segregates with the language impairment, consistent with an autosomal dominant, fully penetrant mode of inheritance. Whole genome analysis yielded no novel inherited copy number variants strengthening the case for a simple inheritance pattern. Several genes in this region of chromosome 12 which are potentially implicated in language impairment did not contain polymorphisms likely to be the causative mutation, which is as yet unknown. PMID:20345892

  14. Auditory cortical change detection in adults with Asperger syndrome.

    PubMed

    Lepistö, Tuulia; Nieminen-von Wendt, Taina; von Wendt, Lennart; Näätänen, Risto; Kujala, Teija

    2007-03-06

    The present study investigated whether auditory deficits reported in children with Asperger syndrome (AS) are also present in adulthood. To this end, event-related potentials (ERPs) were recorded from adults with AS for duration, pitch, and phonetic changes in vowels, and for acoustically matched non-speech stimuli. These subjects had enhanced mismatch negativity (MMN) amplitudes particularly for pitch and duration deviants, indicating enhanced sound-discrimination abilities. Furthermore, as reflected by the P3a, their involuntary orienting was enhanced for changes in non-speech sounds, but tended to be deficient for changes in speech sounds. The results are consistent with those reported earlier in children with AS, except for the duration-MMN, which was diminished in children and enhanced in adults.

  15. Top-down modulation of auditory processing: effects of sound context, musical expertise and attentional focus.

    PubMed

    Tervaniemi, M; Kruck, S; De Baene, W; Schröger, E; Alter, K; Friederici, A D

    2009-10-01

    By recording auditory electrical brain potentials, we investigated whether the basic sound parameters (frequency, duration and intensity) are differentially encoded among speech vs. music sounds by musicians and non-musicians during different attentional demands. To this end, a pseudoword and an instrumental sound of comparable frequency and duration were presented. The accuracy of neural discrimination was tested by manipulations of frequency, duration and intensity. Additionally, the subjects' attentional focus was manipulated by instructions to ignore the sounds while watching a silent movie or to attentively discriminate the different sounds. In both musicians and non-musicians, the pre-attentively evoked mismatch negativity (MMN) component was larger to slight changes in music than in speech sounds. The MMN was also larger to intensity changes in music sounds and to duration changes in speech sounds. During attentional listening, all subjects more readily discriminated changes among speech sounds than among music sounds as indexed by the N2b response strength. Furthermore, during attentional listening, musicians displayed larger MMN and N2b than non-musicians for both music and speech sounds. Taken together, the data indicate that the discriminative abilities in human audition differ between music and speech sounds as a function of the sound-change context and the subjective familiarity of the sound parameters. These findings provide clear evidence for top-down modulatory effects in audition. In other words, the processing of sounds is realized by a dynamically adapting network considering type of sound, expertise and attentional demands, rather than by a strictly modularly organized stimulus-driven system.

  16. Discrimination of nonlinear frequency glides.

    PubMed

    Thyer, Nick; Mahar, Doug

    2006-05-01

    Discrimination thresholds for short duration nonlinear tone glides that differed in glide rate were measured in order to determine whether cues related to rate of frequency change alone were sufficient for discrimination. Thresholds for rising and falling nonlinear glides of 50-ms and 400-ms duration, spanning three frequency excursions (0.5, 1, and 2 ERBs) at three center frequencies (0.5, 2.0, and 6.0 kHz) were measured. Results showed that glide discrimination was possible when duration and initial and final frequencies were identical. Thresholds were of a different order to those found in previous studies using linear frequency glides where endpoint frequency or duration information is available as added cues. The pattern of results was suggestive of a mechanism sensitive to spectral changes in time. Thresholds increased as the rate of transition span increased, particularly above spans of 1 ERB. The Weber fraction associated with these changes was 0.6-0.7. Overall, the results were consistent with an excitation pattern model of nonlinear glide detection that has difficulty in tracking signals with rapid frequency changes that exceed the width of an auditory filter and are of short duration.
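
    For orientation, the ERB units above can be converted to Hz with the standard Glasberg and Moore (1990) auditory-filter formula (assumed here; the record does not state which formula the study used), and the reported Weber fraction of roughly 0.6 can be applied to a nominal transition span to get an approximate just-noticeable change.

    ```python
    # Sketch: express a glide excursion in ERB units via the Glasberg & Moore (1990)
    # formula (an assumption for this illustration) and apply the reported Weber
    # fraction of ~0.6 to a nominal transition span.
    def erb_hz(f_hz):
        """Equivalent rectangular bandwidth of the auditory filter centred at f_hz."""
        return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

    centre = 2000.0                     # Hz, one of the centre frequencies used
    excursion = 1.0 * erb_hz(centre)    # a 1-ERB excursion at this centre frequency
    weber = 0.6                         # Weber fraction reported for span changes
    jnd = weber * excursion             # approximate just-noticeable change in span
    print(f"1 ERB at {centre:.0f} Hz = {excursion:.0f} Hz; approximate JND in span = {jnd:.0f} Hz")
    ```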

  17. Low-level neural auditory discrimination dysfunctions in specific language impairment-A review on mismatch negativity findings.

    PubMed

    Kujala, Teija; Leminen, Miika

    2017-12-01

    In specific language impairment (SLI), there is a delay in the child's oral language skills when compared with nonverbal cognitive abilities. The problems typically relate to phonological and morphological processing and word learning. This article reviews studies which have used mismatch negativity (MMN) in investigating low-level neural auditory dysfunctions in this disorder. With MMN, it is possible to tap the accuracy of neural sound discrimination and sensory memory functions. These studies have found smaller response amplitudes and longer latencies for speech and non-speech sound changes in children with SLI than in typically developing children, suggesting impaired and slow auditory discrimination in SLI. Furthermore, they suggest shortened sensory memory duration and vulnerability of the sensory memory to masking effects. Importantly, some studies reported associations between MMN parameters and language test measures. In addition, it was found that language intervention can influence the abnormal MMN in children with SLI, enhancing its amplitude. These results suggest that the MMN can shed light on the neural basis of various auditory and memory impairments in SLI, which are likely to influence speech perception. Copyright © 2017. Published by Elsevier Ltd.

  18. Interplay between singing and cortical processing of music: a longitudinal study in children with cochlear implants

    PubMed Central

    Torppa, Ritva; Huotilainen, Minna; Leminen, Miika; Lipsanen, Jari; Tervaniemi, Mari

    2014-01-01

    Informal music activities such as singing may lead to augmented auditory perception and attention. In order to study the accuracy and development of music-related sound change detection in children with cochlear implants (CIs) and normal hearing (NH) aged 4–13 years, we recorded their auditory event-related potentials twice (at T1 and T2, 14–17 months apart). We compared their MMN (preattentive discrimination) and P3a (attention toward salient sounds) to changes in piano tone pitch, timbre, duration, and gaps. Of particular interest was to determine whether singing can facilitate auditory perception and attention of CI children. It was found that, compared to the NH group, the CI group had smaller and later timbre P3a and later pitch P3a, implying degraded discrimination and attention shift. Duration MMN became larger from T1 to T2 only in the NH group. The development of response patterns for duration and gap changes were not similar in the CI and NH groups. Importantly, CI singers had enhanced or rapidly developing P3a or P3a-like responses over all change types. In contrast, CI non-singers had rapidly enlarging pitch MMN without enlargement of P3a, and their timbre P3a became smaller and later over time. These novel results show interplay between MMN, P3a, brain development, cochlear implantation, and singing. They imply an augmented development of neural networks for attention and more accurate neural discrimination associated with singing. In future studies, differential development of P3a between CI and NH children should be taken into account in comparisons of these groups. Moreover, further studies are needed to assess whether singing enhances auditory perception and attention of children with CIs. PMID:25540628

  19. The perception of FM sweeps by Chinese and English listeners.

    PubMed

    Luo, Huan; Boemio, Anthony; Gordon, Michael; Poeppel, David

    2007-02-01

    Frequency-modulated (FM) signals are an integral acoustic component of ecologically natural sounds and are analyzed effectively in the auditory systems of humans and animals. Linearly frequency-modulated tone sweeps were used here to evaluate two questions. First, how rapid a sweep can listeners accurately perceive? Second, is there an effect of native language insofar as the language (phonology) is differentially associated with processing of FM signals? Speakers of English and Mandarin Chinese were tested to evaluate whether being a speaker of a tone language altered the perceptual identification of non-speech tone sweeps. In two psychophysical studies, we demonstrate that Chinese subjects perform better than English subjects in FM direction identification, but not in an FM discrimination task, in which English and Chinese speakers show similar detection thresholds of approximately 20 ms duration. We suggest that the better FM direction identification in Chinese subjects is related to their experience with FM direction analysis in the tone-language environment, even though supra-segmental tonal variation occurs over a longer time scale. Furthermore, the observed common discrimination temporal threshold across two language groups supports the conjecture that processing auditory signals at durations of approximately 20 ms constitutes a fundamental auditory perceptual threshold.
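
    A linearly frequency-modulated tone sweep of the kind used here is straightforward to synthesize by integrating the instantaneous frequency; the start and end frequencies below are illustrative, and the 20-ms duration matches the reported discrimination threshold.

    ```python
    # Sketch of a short linear FM tone sweep (start/end frequencies are illustrative):
    # the waveform phase is the cumulative integral of the instantaneous frequency.
    import numpy as np

    fs = 44100                        # Hz
    dur_s = 0.020                     # ~20 ms, near the reported threshold duration
    f_start, f_end = 1000.0, 2000.0   # rising sweep; swap the two for a falling sweep

    t = np.arange(int(fs * dur_s)) / fs
    inst_freq = f_start + (f_end - f_start) * t / dur_s   # linear frequency trajectory
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs          # integrate frequency -> phase
    sweep = np.sin(phase)
    print(sweep.shape)
    ```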

  20. Auditory Pattern Memory

    DTIC Science & Technology

    1990-10-31

    Only fragments of the abstract survive; they cite prior work on duration discrimination of single marked intervals (Abel, 1972a,b; Creelman, 1962; Getty, 1975; Divenyi and Danner, 1977; Divenyi and Sachs, 1978; Espinoza-Varas and Jamieson) and of tonal sequences, along with a jitter-detection paradigm (1982), and include Creelman, C. D. (1962), "Human discrimination of auditory duration," and "Discrimination of tonal sequences," J. Acoust. Soc. Am. 82, 1218-1226.

  1. Auditory perception and attention as reflected by the brain event-related potentials in children with Asperger syndrome.

    PubMed

    Lepistö, T; Silokallio, S; Nieminen-von Wendt, T; Alku, P; Näätänen, R; Kujala, T

    2006-10-01

    Language development is delayed and deviant in individuals with autism, but proceeds quite normally in those with Asperger syndrome (AS). We investigated auditory-discrimination and orienting in children with AS using an event-related potential (ERP) paradigm that was previously applied to children with autism. ERPs were measured to pitch, duration, and phonetic changes in vowels and to corresponding changes in non-speech sounds. Active sound discrimination was evaluated with a sound-identification task. The mismatch negativity (MMN), indexing sound-discrimination accuracy, showed right-hemisphere dominance in the AS group, but not in the controls. Furthermore, the children with AS had diminished MMN-amplitudes and decreased hit rates for duration changes. In contrast, their MMN to speech pitch changes was parietally enhanced. The P3a, reflecting involuntary orienting to changes, was diminished in the children with AS for speech pitch and phoneme changes, but not for the corresponding non-speech changes. The children with AS differ from controls with respect to their sound-discrimination and orienting abilities. The results of the children with AS are relatively similar to those earlier obtained from children with autism using the same paradigm, although these clinical groups differ markedly in their language development.

  2. Children's discrimination of vowel sequences

    NASA Astrophysics Data System (ADS)

    Coady, Jeffry A.; Kluender, Keith R.; Evans, Julia

    2003-10-01

    Children's ability to discriminate sequences of steady-state vowels was investigated. Vowels (as in ``beet,'' ``bat,'' ``bought,'' and ``boot'') were synthesized at durations of 40, 80, 160, 320, 640, and 1280 ms. Four different vowel sequences were created by concatenating different orders of vowels for each duration, separated by 10-ms intervening silence. Thus, sequences differed in vowel order and duration (rate). Sequences were 12 s in duration, with amplitude ramped linearly over the first and last 2 s. Sequence pairs included both same trials (identical sequences) and different trials (sequences with vowels in different orders). Sequences with vowels of equal duration were presented on individual trials. Children aged 7;0 to 10;6 listened to pairs of sequences (with 100 ms between sequences) and responded whether sequences sounded the same or different. Results indicate that children are best able to discriminate sequences of intermediate-duration vowels, typical of conversational speaking rate. Children were less accurate with both shorter and longer vowels. Results are discussed in terms of auditory processing (shortest vowels) and memory (longest vowels). [Research supported by NIDCD DC-05263, DC-04072, and DC-005650.]
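
    A minimal Python sketch of the sequence construction described above (equal-duration tokens, 10-ms silences, 12-s sequences with 2-s linear onset/offset ramps); the placeholder sine tones standing in for synthesized vowels, and the sampling rate, are assumptions for illustration only.

      import numpy as np

      FS = 44100

      def build_sequence(tokens, vowel_dur, total_dur=12.0, gap=0.010, fs=FS):
          """Cycle equal-duration vowel tokens separated by 10-ms silences into a
          sequence of roughly total_dur seconds, then ramp the first and last 2 s."""
          gap_samples = np.zeros(int(gap * fs))
          pieces, t, i = [], 0.0, 0
          while t < total_dur:
              pieces += [tokens[i % len(tokens)][:int(vowel_dur * fs)], gap_samples]
              t += vowel_dur + gap
              i += 1
          seq = np.concatenate(pieces)[:int(total_dur * fs)]
          ramp = np.ones(len(seq))
          n = int(2.0 * fs)
          ramp[:n] = np.linspace(0.0, 1.0, n)
          ramp[-n:] = np.linspace(1.0, 0.0, n)
          return seq * ramp

      # Placeholder "vowels": steady tones standing in for four synthesized vowels.
      vowels = [np.sin(2 * np.pi * f * np.arange(int(1.28 * FS)) / FS)
                for f in (300, 500, 700, 350)]
      sequence_160ms = build_sequence(vowels, vowel_dur=0.160)   # one of the tested durations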

  3. Speaker-Sex Discrimination for Voiced and Whispered Vowels at Short Durations.

    PubMed

    Smith, David R R

    2016-01-01

    Whispered vowels, produced with no vocal fold vibration, lack the periodic temporal fine structure which in voiced vowels underlies the perceptual attribute of pitch (a salient auditory cue to speaker sex). Voiced vowels possess no temporal fine structure at very short durations (below two glottal cycles). The prediction was that speaker-sex discrimination performance for whispered and voiced vowels would be similar for very short durations but, as stimulus duration increases, voiced vowel performance would improve relative to whispered vowel performance as pitch information becomes available. This pattern of results was shown for women's but not for men's voices. A whispered vowel needs to have a duration three times longer than a voiced vowel before listeners can reliably tell whether it's spoken by a man or woman (∼30 ms vs. ∼10 ms). Listeners were half as sensitive to information about speaker-sex when it is carried by whispered compared with voiced vowels.

  4. Research on Frequency Transposition for Hearing Aids. Final Report.

    ERIC Educational Resources Information Center

    Gengel, Roy W.; Pickett, J. M.

    Reported were studies measuring residual auditory capacities of deaf persons and investigating hearing aids that transpose speech to lower frequencies, where deaf persons may have better hearing. Studies on temporal and frequency discrimination indicated that the duration of a signal may have a differential effect on its detectability by…

  5. Selective auditory grouping by zebra finches: testing the iambic-trochaic law.

    PubMed

    Spierings, Michelle; Hubert, Jeroen; Ten Cate, Carel

    2017-07-01

    Humans have a strong tendency to spontaneously group visual or auditory stimuli together in larger patterns. One of these perceptual grouping biases is formulated as the iambic/trochaic law, where humans group successive tones alternating in pitch and intensity as trochees (high-low and loud-soft) and alternating in duration as iambs (short-long). The grouping of alternations in pitch and intensity into trochees is a human universal and is also present in one non-human animal species, rats. The perceptual grouping of sounds alternating in duration seems to be affected by native language in humans and has so far not been found among animals. In the current study, we explore to what extent these perceptual biases are present in a songbird, the zebra finch. Zebra finches were trained to discriminate between short strings of pure tones organized as iambs and as trochees. One group received tones that alternated in pitch, a second group heard tones alternating in duration, and for a third group, tones alternated in intensity. Those zebra finches that showed sustained correct discrimination were next tested with longer, ambiguous strings of alternating sounds. The zebra finches in the pitch condition categorized ambiguous strings of alternating tones as trochees, similar to humans. However, most of the zebra finches in the duration and intensity conditions did not learn to discriminate between training stimuli organized as iambs and trochees. This study shows that the perceptual bias to group tones alternating in pitch as trochees is not specific to humans and rats, but may be more widespread among animals.
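
    A minimal Python sketch of training strings organized as trochees or iambs along one dimension (pitch, duration, or intensity); all frequencies, durations, and amplitudes are illustrative assumptions rather than the study's stimulus values.

      import numpy as np

      FS = 44100

      def tone(freq, dur, amp=1.0, fs=FS):
          t = np.arange(int(dur * fs)) / fs
          return amp * np.sin(2 * np.pi * freq * t)

      def alternating_string(n_tones, dimension, grouping, gap=0.05, fs=FS):
          """Tones alternating along one dimension; 'trochee' starts with the strong
          element (high/long/loud), 'iamb' with the weak one."""
          strong = {'pitch': (880, 0.2, 1.0), 'duration': (440, 0.4, 1.0), 'intensity': (440, 0.2, 1.0)}
          weak = {'pitch': (440, 0.2, 1.0), 'duration': (440, 0.2, 1.0), 'intensity': (440, 0.2, 0.3)}
          order = [strong[dimension], weak[dimension]]
          if grouping == 'iamb':
              order.reverse()
          pieces = []
          for i in range(n_tones):
              f, d, a = order[i % 2]
              pieces += [tone(f, d, a), np.zeros(int(gap * fs))]
          return np.concatenate(pieces)

      trochaic_pitch_string = alternating_string(6, 'pitch', 'trochee')
      iambic_duration_string = alternating_string(6, 'duration', 'iamb')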

  6. Importance of the left auditory areas in chord discrimination in music experts as demonstrated by MEG.

    PubMed

    Tervaniemi, Mari; Sannemann, Christian; Noyranen, Maiju; Salonen, Johanna; Pihko, Elina

    2011-08-01

    The brain basis behind musical competence in its various forms is not yet known. To determine the pattern of hemispheric lateralization during sound-change discrimination, we recorded the magnetic counterpart of the electrical mismatch negativity (MMNm) responses in professional musicians, musical participants (with high scores in the musicality tests but without professional training in music) and non-musicians. While watching a silenced video, they were presented with short sounds with frequency and duration deviants and C major chords with C minor chords as deviants. MMNm to chord deviants was stronger in both musicians and musical participants than in non-musicians, particularly in their left hemisphere. No group differences were obtained in the MMNm strength in the right hemisphere in any of the conditions or in the left hemisphere in the case of frequency or duration deviants. Thus, in addition to professional training in music, musical aptitude (combined with lower-level musical training) is also reflected in brain functioning related to sound discrimination. The present magnetoencephalographic evidence therefore indicates that the sound discrimination abilities may be differentially distributed in the brain in musically competent and naïve participants, especially in a musical context established by chord stimuli: the higher forms of musical competence engage both auditory cortices in an integrative manner. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  7. Short-term visual deprivation reduces interference effects of task-irrelevant facial expressions on affective prosody judgments

    PubMed Central

    Fengler, Ineke; Nava, Elena; Röder, Brigitte

    2015-01-01

    Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 h and tested them before and after visual deprivation (i.e., after 8 h on average and at 4 week follow-up) on an audio–visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio–visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination may emerge as a consequence of the exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks (i.e., tactile) unrelated to the later-tested abilities. The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination and in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, which may persist for longer durations. PMID:25954166

  8. Impaired encoding of rapid pitch information underlies perception and memory deficits in congenital amusia.

    PubMed

    Albouy, Philippe; Cousineau, Marion; Caclin, Anne; Tillmann, Barbara; Peretz, Isabelle

    2016-01-06

    Recent theories suggest that the basis of neurodevelopmental auditory disorders such as dyslexia or specific language impairment might be a low-level sensory dysfunction. In the present study we test this hypothesis in congenital amusia, a neurodevelopmental disorder characterized by severe deficits in the processing of pitch-based material. We manipulated the temporal characteristics of auditory stimuli and investigated the influence of the time given to encode pitch information on participants' performance in discrimination and short-term memory. Our results show that amusics' performance in such tasks scales with the duration available to encode acoustic information. This suggests that in auditory neurodevelopmental disorders, abnormalities in early steps of auditory processing can underlie the high-level deficits (here, musical disabilities). The observation that slowing down the temporal dynamics improves amusics' pitch abilities suggests that this approach could serve as a tool for remediation in developmental auditory disorders.

  9. Inservice Training Packet: Auditory Discrimination Listening Skills.

    ERIC Educational Resources Information Center

    Florida Learning Resources System/CROWN, Jacksonville.

    Intended to be used as the basis for a brief inservice workshop, the auditory discrimination/listening skills packet provides information on ideas, materials, and resources for remediating auditory discrimination and listening skill deficits. Included are a sample prescription form, tests of auditory discrimination, and a list of auditory…

  10. Musically cued gait-training improves both perceptual and motor timing in Parkinson's disease.

    PubMed

    Benoit, Charles-Etienne; Dalla Bella, Simone; Farrugia, Nicolas; Obrig, Hellmuth; Mainka, Stefan; Kotz, Sonja A

    2014-01-01

    It is well established that auditory cueing improves gait in patients with idiopathic Parkinson's disease (IPD). Disease-related reductions in speed and step length can be improved by providing rhythmical auditory cues via a metronome or music. However, effects on cognitive aspects of motor control have yet to be thoroughly investigated. If synchronization of movement to an auditory cue relies on a supramodal timing system involved in perceptual, motor, and sensorimotor integration, auditory cueing can be expected to affect both motor and perceptual timing. Here, we tested this hypothesis by assessing perceptual and motor timing in 15 IPD patients before and after a 4-week music training program with rhythmic auditory cueing. Long-term effects were assessed 1 month after the end of the training. Perceptual and motor timing was evaluated with a battery for the assessment of auditory sensorimotor and timing abilities and compared to that of age-, gender-, and education-matched healthy controls. Prior to training, IPD patients exhibited impaired perceptual and motor timing. Training improved patients' performance in tasks requiring synchronization with isochronous sequences, and enhanced their ability to adapt to durational changes in a sequence in hand tapping tasks. Benefits of cueing extended to time perception (duration discrimination and detection of misaligned beats in musical excerpts). The current results demonstrate that auditory cueing leads to benefits beyond gait and support the idea that coupling gait to rhythmic auditory cues in IPD patients relies on a neuronal network engaged in both perceptual and motor timing.

  11. On pure word deafness, temporal processing, and the left hemisphere.

    PubMed

    Stefanatos, Gerry A; Gershkoff, Arthur; Madigan, Sean

    2005-07-01

    Pure word deafness (PWD) is a rare neurological syndrome characterized by severe difficulties in understanding and reproducing spoken language, with sparing of written language comprehension and speech production. The pathognomonic disturbance of auditory comprehension appears to be associated with a breakdown in processes involved in mapping auditory input to lexical representations of words, but the functional locus of this disturbance and the localization of the responsible lesion have long been disputed. We report here on a woman with PWD resulting from a circumscribed unilateral infarct involving the left superior temporal lobe who demonstrated significant problems processing transitional spectrotemporal cues in both speech and nonspeech sounds. On speech discrimination tasks, she exhibited poor differentiation of stop consonant-vowel syllables distinguished by voicing onset and brief formant frequency transitions. Isolated formant transitions could be reliably discriminated only at very long durations (> 200 ms). By contrast, click fusion threshold, which depends on millisecond-level resolution of brief auditory events, was normal. These results suggest that the problems with speech analysis in this case were not secondary to general constraints on auditory temporal resolution. Rather, they point to a disturbance of left hemisphere auditory mechanisms that preferentially analyze rapid spectrotemporal variations in frequency. The findings have important implications for our conceptualization of PWD and its subtypes.

  12. Auditory Phoneme Discrimination in Illiterates: Mismatch Negativity--A Question of Literacy?

    ERIC Educational Resources Information Center

    Schaadt, Gesa; Pannekamp, Ann; van der Meer, Elke

    2013-01-01

    These days, illiteracy is still a major problem. There is empirical evidence that auditory phoneme discrimination is one of the factors contributing to written language acquisition. The current study investigated auditory phoneme discrimination in participants who did not acquire written language sufficiently. Auditory phoneme discrimination was…

  13. Auditory and tactile gap discrimination by observers with normal and impaired hearing.

    PubMed

    Desloge, Joseph G; Reed, Charlotte M; Braida, Louis D; Perez, Zachary D; Delhorne, Lorraine A; Villabona, Timothy J

    2014-02-01

    Temporal processing ability for the senses of hearing and touch was examined through the measurement of gap-duration discrimination thresholds (GDDTs) employing the same low-frequency sinusoidal stimuli in both modalities. GDDTs were measured in three groups of observers (normal-hearing, hearing-impaired, and normal-hearing with simulated hearing loss) covering an age range of 21-69 yr. GDDTs for a baseline gap of 6 ms were measured for four different combinations of 100-ms leading and trailing markers (250-250, 250-400, 400-250, and 400-400 Hz). Auditory measurements were obtained for monaural presentation over headphones and tactile measurements were obtained using sinusoidal vibrations presented to the left middle finger. The auditory GDDTs of the hearing-impaired listeners, which were larger than those of the normal-hearing observers, were well-reproduced in the listeners with simulated loss. The magnitude of the GDDT was generally independent of modality and showed effects of age in both modalities. The use of different-frequency compared to same-frequency markers led to a greater deterioration in auditory GDDTs compared to tactile GDDTs and may reflect differences in bandwidth properties between the two sensory systems.
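
    A minimal Python sketch of one gap-discrimination stimulus as described above: two 100-ms sinusoidal markers separated by a silent gap, with same- or different-frequency markers. The marker duration, marker frequencies, and 6-ms baseline gap follow the abstract; the comparison-gap increment and sampling rate are illustrative assumptions.

      import numpy as np

      def gap_stimulus(lead_hz, trail_hz, gap_ms, marker_ms=100, fs=44100):
          """Leading and trailing 100-ms markers around a silent gap of gap_ms."""
          def sine(freq, dur_ms):
              t = np.arange(int(dur_ms / 1000 * fs)) / fs
              return np.sin(2 * np.pi * freq * t)
          gap = np.zeros(int(gap_ms / 1000 * fs))
          return np.concatenate([sine(lead_hz, marker_ms), gap, sine(trail_hz, marker_ms)])

      # Baseline 6-ms gap vs. a longer comparison gap, with different-frequency markers.
      standard = gap_stimulus(250, 400, gap_ms=6)
      comparison = gap_stimulus(250, 400, gap_ms=10)   # increment size is illustrative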

  14. Speaker-Sex Discrimination for Voiced and Whispered Vowels at Short Durations

    PubMed Central

    2016-01-01

    Whispered vowels, produced with no vocal fold vibration, lack the periodic temporal fine structure which in voiced vowels underlies the perceptual attribute of pitch (a salient auditory cue to speaker sex). Voiced vowels possess no temporal fine structure at very short durations (below two glottal cycles). The prediction was that speaker-sex discrimination performance for whispered and voiced vowels would be similar for very short durations but, as stimulus duration increases, voiced vowel performance would improve relative to whispered vowel performance as pitch information becomes available. This pattern of results was shown for women’s but not for men’s voices. A whispered vowel needs to have a duration three times longer than a voiced vowel before listeners can reliably tell whether it’s spoken by a man or woman (∼30 ms vs. ∼10 ms). Listeners were half as sensitive to information about speaker-sex when it is carried by whispered compared with voiced vowels. PMID:27757218

  15. Human Factors Engineering Bibliographic Series. Volume 2: 1960-1964 Literature

    DTIC Science & Technology

    1966-10-01

    Index-term fragments from the bibliography: flutter discrimination (melodic and temporal); binaural vs. monaural; equipment and methods (e.g., anechoic chambers, audiometric devices, communication...); stimulus attributes (brightness, duration, timbre, vocality); stimulus mixtures (e.g., harmonics, beats, combination tones, modulations); thresholds; training, nonverbal (see Training... scales and aids); Beats (see Audition, stimulus mixtures); Bells (see Auditory displays, nonverbal); Belts, Harnesses, and other Restraining Devices (see...).

  16. Rise Time and Formant Transition Duration in the Discrimination of Speech Sounds: The Ba-Wa Distinction in Developmental Dyslexia

    ERIC Educational Resources Information Center

    Goswami, Usha; Fosker, Tim; Huss, Martina; Mead, Natasha; Szucs, Denes

    2011-01-01

    Across languages, children with developmental dyslexia have a specific difficulty with the neural representation of the sound structure (phonological structure) of speech. One likely cause of their difficulties with phonology is a perceptual difficulty in auditory temporal processing (Tallal, 1980). Tallal (1980) proposed that basic auditory…

  17. Present and past: Can writing abilities in school children be associated with their auditory discrimination capacities in infancy?

    PubMed

    Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Oberecker, Regine; Friederici, Angela D

    2015-12-01

    Literacy acquisition is highly associated with auditory processing abilities, such as auditory discrimination. The event-related potential Mismatch Response (MMR) is an indicator of cortical auditory discrimination abilities, and it has been found to be reduced in individuals with reading and writing impairments and also in infants at risk for these impairments. The goal of the present study was to analyze the relationship between auditory speech discrimination in infancy and writing abilities at school age within subjects, and to determine when auditory speech discrimination differences, relevant for later writing abilities, start to develop. We analyzed the MMR registered in response to natural syllables in German children with and without writing problems at two points during development, that is, at school age and in infancy, namely at ages 1 month and 5 months. We observed MMR-related auditory discrimination differences between infants with and without later writing problems, starting to develop at age 5 months, an age when infants begin to establish language-specific phoneme representations. At school age, these children with and without writing problems also showed auditory discrimination differences, reflected in the MMR, confirming a relationship between writing and auditory speech processing skills. Thus, writing problems at school age are at least partly grounded in auditory discrimination problems developing already during the first months of life. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Mismatch negativity (MMN) reveals inefficient auditory ventral stream function in chronic auditory comprehension impairments.

    PubMed

    Robson, Holly; Cloutman, Lauren; Keidel, James L; Sage, Karen; Drakesmith, Mark; Welbourne, Stephen

    2014-10-01

    Auditory discrimination is significantly impaired in Wernicke's aphasia (WA) and thought to be causatively related to the language comprehension impairment which characterises the condition. This study used mismatch negativity (MMN) to investigate the neural responses corresponding to successful and impaired auditory discrimination in WA. Behavioural auditory discrimination thresholds of consonant-vowel-consonant (CVC) syllables and pure tones (PTs) were measured in WA (n = 7) and control (n = 7) participants. Threshold results were used to develop multiple deviant MMN oddball paradigms containing deviants which were either perceptibly or non-perceptibly different from the standard stimuli. MMN analysis investigated differences associated with group, condition and perceptibility as well as the relationship between MMN responses and comprehension (within which behavioural auditory discrimination profiles were examined). MMN waveforms were observable to both perceptible and non-perceptible auditory changes. Perceptibility was only distinguished by MMN amplitude in the PT condition. The WA group could be distinguished from controls by an increase in MMN response latency to CVC stimuli change. Correlation analyses displayed a relationship between behavioural CVC discrimination and MMN amplitude in the control group, where greater amplitude corresponded to better discrimination. The WA group displayed the inverse effect; both discrimination accuracy and auditory comprehension scores were reduced with increased MMN amplitude. In the WA group, a further correlation was observed between the lateralisation of MMN response and CVC discrimination accuracy; the greater the bilateral involvement the better the discrimination accuracy. The results from this study provide further evidence for the nature of auditory comprehension impairment in WA and indicate that the auditory discrimination deficit is grounded in a reduced ability to engage in efficient hierarchical processing and the construction of invariant auditory objects. Correlation results suggest that people with chronic WA may rely on an inefficient, noisy right hemisphere auditory stream when attempting to process speech stimuli.

  19. Filling the blanks in temporal intervals: the type of filling influences perceived duration and discrimination performance

    PubMed Central

    Horr, Ninja K.; Di Luca, Massimiliano

    2015-01-01

    In this work we investigate how judgments of perceived duration are influenced by the properties of the signals that define the intervals. Participants compared two auditory intervals that could be any combination of the following four types: intervals filled with continuous tones (filled intervals), intervals filled with regularly-timed short tones (isochronous intervals), intervals filled with irregularly-timed short tones (anisochronous intervals), and intervals demarcated by two short tones (empty intervals). Results indicate that the type of intervals to be compared affects discrimination performance and induces distortions in perceived duration. In particular, we find that duration judgments are most precise when comparing two isochronous and two continuous intervals, while the comparison of two anisochronous intervals leads to the worst performance. Moreover, we determined that the magnitude of the distortions in perceived duration (an effect akin to the filled duration illusion) is higher for tone sequences (no matter whether isochronous or anisochronous) than for continuous tones. Further analysis of how duration distortions depend on the type of filling suggests that distortions are not only due to the perceived duration of the two individual intervals, but they may also be due to the comparison of two different filling types. PMID:25717310
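
    A minimal Python sketch of the four interval fillings compared above (filled, isochronous, anisochronous, and empty); the tone frequency, marker duration, and number of filler tones are illustrative assumptions, not the study's parameters.

      import numpy as np

      FS = 44100

      def tone(dur, freq=1000, fs=FS):
          t = np.arange(int(dur * fs)) / fs
          return np.sin(2 * np.pi * freq * t)

      def silence(dur, fs=FS):
          return np.zeros(int(dur * fs))

      def filled(interval):                          # continuous tone spanning the interval
          return tone(interval)

      def empty(interval, marker=0.01):              # two short tones demarcating the interval
          return np.concatenate([tone(marker), silence(interval - 2 * marker), tone(marker)])

      def isochronous(interval, n=5, marker=0.01):   # regularly timed short tones
          period = interval / n
          return np.concatenate([np.concatenate([tone(marker), silence(period - marker)])
                                 for _ in range(n)])

      def anisochronous(interval, n=5, marker=0.01, rng=np.random.default_rng(0)):
          # irregularly timed short tones: random gaps that still sum to the interval
          gaps = rng.dirichlet(np.ones(n)) * (interval - n * marker)
          return np.concatenate([np.concatenate([tone(marker), silence(g)]) for g in gaps])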

  20. Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony

    2009-01-01

    It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…

  1. Change detection and difference detection of tone duration discrimination.

    PubMed

    Okazaki, Shuntaro; Kanoh, Shin'ichiro; Takaura, Kana; Tsukada, Minoru; Oka, Kotaro

    2006-03-20

    An event-related potential called mismatch negativity is known to provide physiological evidence of sensory memory. Mismatch negativity is believed to reflect complex neuronal mechanisms in a variety of animals and in humans. We employed the auditory oddball paradigm varying sound durations and observed two types of duration mismatch negativity in anesthetized guinea pigs. One was a duration mismatch negativity whose increase in peak amplitude occurred immediately after the onset of the stimulus difference in a decrement oddball paradigm. The other exhibited a peak amplitude increase closer to the offset of the longer stimulus in an increment oddball paradigm. These results demonstrate a mechanism for perceiving changes in duration and reveal the importance of the stimulus offset for this perception.
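
    A minimal Python sketch of a duration-oddball sequence of the general kind used above, in which deviants are shorter (decrement) or longer (increment) than the standards; the tone frequency, durations, deviant probability, and stimulus-onset asynchrony are illustrative assumptions.

      import numpy as np

      def duration_oddball(std_dur, dev_dur, n_trials=200, p_dev=0.1,
                           freq=1000, soa=0.5, fs=44100, seed=0):
          """Concatenate mostly-standard tones with occasional duration deviants.
          dev_dur < std_dur gives a decrement paradigm; dev_dur > std_dur an increment."""
          rng = np.random.default_rng(seed)
          is_deviant = rng.random(n_trials) < p_dev
          trials = []
          for deviant in is_deviant:
              dur = dev_dur if deviant else std_dur
              t = np.arange(int(dur * fs)) / fs
              burst = np.sin(2 * np.pi * freq * t)
              trials.append(np.concatenate([burst, np.zeros(int(soa * fs) - len(burst))]))
          return np.concatenate(trials), is_deviant

      decrement_seq, decrement_markers = duration_oddball(std_dur=0.150, dev_dur=0.100)
      increment_seq, increment_markers = duration_oddball(std_dur=0.100, dev_dur=0.150)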

  2. Isolating spectral cues in amplitude and quasi-frequency modulation discrimination by reducing stimulus duration.

    PubMed

    Borucki, Ewa; Berg, Bruce G

    2017-05-01

    This study investigated the psychophysical effects of distortion products in a listening task traditionally used to estimate the bandwidth of phase sensitivity. For a 2000 Hz carrier, estimates of the modulation depth necessary to discriminate amplitude-modulated (AM) tones from quasi-frequency-modulated (QFM) tones were measured in a two-interval forced-choice task as a function of modulation frequency. Temporal modulation transfer functions were often non-monotonic at modulation frequencies above 300 Hz. This was likely due to a spectral cue arising from the interaction of auditory distortion products and the lower sideband of the stimulus complex. When the stimulus duration was decreased from 200 ms to 20 ms, thresholds for low-frequency modulators rose to near-chance levels, whereas thresholds in the region of the non-monotonicities were less affected. The decrease in stimulus duration appears to hinder the listener's ability to use temporal cues to discriminate between AM and QFM, whereas spectral information derived from distortion-product cues appears more resilient. Copyright © 2017. Published by Elsevier B.V.
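
    A minimal Python sketch of AM and QFM tones, which share the same three spectral components (carrier plus two sidebands) but differ in sideband phase, so that AM has a fluctuating envelope and QFM a nearly flat one; the 2000-Hz carrier follows the abstract, while the modulation frequency, depth, and sampling rate are illustrative assumptions.

      import numpy as np

      def am_and_qfm(fc=2000, fm=400, m=0.5, dur=0.200, fs=44100):
          """Return an amplitude-modulated tone and its quasi-FM counterpart
          (same components at fc and fc +/- fm, with the sidebands phase-shifted)."""
          t = np.arange(int(dur * fs)) / fs
          am = (1 + m * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)
          qfm = np.cos(2 * np.pi * fc * t) - m * np.sin(2 * np.pi * fm * t) * np.sin(2 * np.pi * fc * t)
          return am, qfm

      am_200ms, qfm_200ms = am_and_qfm(dur=0.200)   # long-duration condition
      am_20ms, qfm_20ms = am_and_qfm(dur=0.020)     # short-duration condition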

  3. Central auditory processing and migraine: a controlled study.

    PubMed

    Agessi, Larissa Mendonça; Villa, Thaís Rodrigues; Dias, Karin Ziliotto; Carvalho, Deusvenir de Souza; Pereira, Liliane Desgualdo

    2014-11-08

    This study aimed to verify and compare central auditory processing (CAP) performance in migraine patients with and without aura and in healthy controls. Forty-one volunteers of both genders, aged between 18 and 40 years, diagnosed with migraine with or without aura by the criteria of "The International Classification of Headache Disorders" (ICHD-3 beta), and a control group of the same age range with no headache history were included. The Gaps-in-Noise (GIN), Duration Pattern Test (DPT), and Dichotic Digits Test (DDT) were used to assess central auditory processing performance. The volunteers were divided into 3 groups: migraine with aura (11), migraine without aura (15), and control group (15), matched by age and schooling. Subjects with and without aura performed significantly worse in the GIN test for the right ear (p = .006) and left ear (p = .005) and in the DPT (p < .001) when compared with controls without headache; however, no significant differences were found in the DDT for the right ear (p = .362) or the left ear (p = .190). Subjects with migraine performed worse in auditory gap detection and in the discrimination of short and long durations. They also presented impairment in the physiological mechanisms of temporal processing, especially temporal resolution and temporal ordering, when compared with controls. Migraine could be related to impaired central auditory processing. Research Ethics Committee (CEP 0480.10) - UNIFESP.

  4. Crossmodal attention switching: auditory dominance in temporal discrimination tasks.

    PubMed

    Lukas, Sarah; Philipp, Andrea M; Koch, Iring

    2014-11-01

    Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this "visual dominance", earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual-auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, so that processing of visual stimuli is favored in the dimension of space, whereas processing auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual-auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Is Auditory Discrimination Mature by Middle Childhood? A Study Using Time-Frequency Analysis of Mismatch Responses from 7 Years to Adulthood

    ERIC Educational Resources Information Center

    Bishop, Dorothy V. M.; Hardiman, Mervyn J.; Barry, Johanna G.

    2011-01-01

    Behavioural and electrophysiological studies give differing impressions of when auditory discrimination is mature. Ability to discriminate frequency and speech contrasts reaches adult levels only around 12 years of age, yet an electrophysiological index of auditory discrimination, the mismatch negativity (MMN), is reported to be as large in…

  6. Examination of the Relation between an Assessment of Skills and Performance on Auditory-Visual Conditional Discriminations for Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Kodak, Tiffany; Clements, Andrea; Paden, Amber R.; LeBlanc, Brittany; Mintz, Joslyn; Toussaint, Karen A.

    2015-01-01

    The current investigation evaluated repertoires that may be related to performance on auditory-to-visual conditional discrimination training with 9 students who had been diagnosed with autism spectrum disorder. The skills included in the assessment were matching, imitation, scanning, an auditory discrimination, and a visual discrimination. The…

  7. Discrimination of sound source velocity in human listeners

    NASA Astrophysics Data System (ADS)

    Carlile, Simon; Best, Virginia

    2002-02-01

    The ability of six human subjects to discriminate the velocity of moving sound sources was examined using broadband stimuli presented in virtual auditory space. Subjects were presented with two successive stimuli moving in the frontal horizontal plane level with the ears, and were required to judge which moved the fastest. Discrimination thresholds were calculated for reference velocities of 15, 30, and 60 degrees/s under three stimulus conditions. In one condition, stimuli were centered on 0° azimuth and their duration varied randomly to prevent subjects from using displacement as an indicator of velocity. Performance varied between subjects giving median thresholds of 5.5, 9.1, and 14.8 degrees/s for the three reference velocities, respectively. In a second condition, pairs of stimuli were presented for a constant duration and subjects would have been able to use displacement to assist their judgment as faster stimuli traveled further. It was found that thresholds decreased significantly for all velocities (3.8, 7.1, and 9.8 degrees/s), suggesting that the subjects were using the additional displacement cue. The third condition differed from the second in that the stimuli were ``anchored'' on the same starting location rather than centered on the midline, thus doubling the spatial offset between stimulus endpoints. Subjects showed the lowest thresholds in this condition (2.9, 4.0, and 7.0 degrees/s). The results suggested that the auditory system is sensitive to velocity per se, but velocity comparisons are greatly aided if displacement cues are present.
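
    A minimal Python sketch of the trajectory geometry in the three conditions described above, showing how randomizing duration removes displacement (velocity times duration) as a cue, while fixed-duration and anchored trajectories reintroduce it; the velocity, duration, and azimuth values are illustrative assumptions.

      import numpy as np

      def endpoints(velocity, duration, condition):
          """Start and end azimuths (deg) for a source moving at `velocity` deg/s
          for `duration` s in the frontal horizontal plane."""
          extent = velocity * duration
          if condition == 'centered':      # trajectory centered on 0 deg azimuth
              return -extent / 2.0, extent / 2.0
          if condition == 'anchored':      # all trajectories share the same start point
              return 0.0, extent
          raise ValueError(condition)

      rng = np.random.default_rng(0)
      # Condition 1: random duration, so displacement is uninformative about velocity.
      print(endpoints(30, rng.uniform(0.5, 1.5), 'centered'))
      # Conditions 2 and 3: fixed duration, so faster sources also travel further.
      print(endpoints(30, 1.0, 'centered'), endpoints(60, 1.0, 'centered'))
      print(endpoints(30, 1.0, 'anchored'), endpoints(60, 1.0, 'anchored'))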

  8. Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.

    PubMed

    Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M

    1991-06-01

    An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.

  9. Representation of complex vocalizations in the Lusitanian toadfish auditory system: evidence of fine temporal, frequency and amplitude discrimination

    PubMed Central

    Vasconcelos, Raquel O.; Fonseca, Paulo J.; Amorim, M. Clara P.; Ladich, Friedrich

    2011-01-01

    Many fishes rely on their auditory skills to interpret crucial information about predators and prey, and to communicate intraspecifically. Few studies, however, have examined how complex natural sounds are perceived in fishes. We investigated the representation of conspecific mating and agonistic calls in the auditory system of the Lusitanian toadfish Halobatrachus didactylus, and analysed auditory responses to heterospecific signals from ecologically relevant species: a sympatric vocal fish (meagre Argyrosomus regius) and a potential predator (dolphin Tursiops truncatus). Using auditory evoked potential (AEP) recordings, we showed that both sexes can resolve fine features of conspecific calls. The toadfish auditory system was most sensitive to frequencies well represented in the conspecific vocalizations (namely the mating boatwhistle), and revealed a fine representation of duration and pulsed structure of agonistic and mating calls. Stimuli and corresponding AEP amplitudes were highly correlated, indicating an accurate encoding of amplitude modulation. Moreover, Lusitanian toadfish were able to detect T. truncatus foraging sounds and A. regius calls, although at higher amplitudes. We provide strong evidence that the auditory system of a vocal fish, lacking accessory hearing structures, is capable of resolving fine features of complex vocalizations that are probably important for intraspecific communication and other relevant stimuli from the auditory scene. PMID:20861044

  10. Mismatch negativity results from bilateral asymmetric dipole sources in the frontal and temporal lobes.

    PubMed

    Jemel, Boutheina; Achenbach, Christiane; Müller, Bernhard W; Röpcke, Bernd; Oades, Robert D

    2002-01-01

    The event-related potential (ERP) reflecting auditory change detection (mismatch negativity, MMN) registers automatic selective processing of a deviant sound with respect to a working memory template resulting from a series of standard sounds. Controversy remains whether MMN can be generated in the frontal as well as the temporal cortex. Our aim was to see if frontal as well as temporal lobe dipoles could explain MMN recorded after pitch-deviants (Pd-MMN) and duration deviants (Dd-MMN). EEG recordings were taken from 32 sites in 14 healthy subjects during a passive 3-tone oddball presented during a simple visual discrimination and an active auditory discrimination condition. Both conditions were repeated after one month. The Pd-MMN was larger, peaked earlier and correlated better between sessions than the Dd-MMN. Two dipoles in the auditory cortex and two in the frontal lobe (left cingulate and right inferior frontal cortex) were found to be similarly placed for Pd- and Dd-MMN, and were well replicated on retest. This study confirms interactions between activity generated in the frontal and auditory temporal cortices in automatic attention-like processes that resemble initial brain imaging reports of unconscious visual change detection. The lack of interference between sessions shows that the situation is likely to be sensitive to treatment or illness effects on fronto-temporal interactions involving repeated measures.

  11. Effects of frequency discrimination training on tinnitus: results from two randomised controlled trials.

    PubMed

    Hoare, Derek J; Kowalkowski, Victoria L; Hall, Deborah A

    2012-08-01

    That auditory perceptual training may alleviate tinnitus draws on two observations: (1) tinnitus probably arises from altered activity within the central auditory system following hearing loss and (2) sound-based training can change central auditory activity. Training that provides sound enrichment across hearing loss frequencies has therefore been hypothesised to alleviate tinnitus. We tested this prediction with two randomised trials of frequency discrimination training involving a total of 70 participants with chronic subjective tinnitus. Participants trained on either (1) a pure-tone standard at a frequency within their region of normal hearing, (2) a pure-tone standard within the region of hearing loss or (3) a high-pass harmonic complex tone spanning a region of hearing loss. Analysis of the primary outcome measure revealed an overall reduction in self-reported tinnitus handicap after training that was maintained at a 1-month follow-up assessment, but there were no significant differences between groups. Secondary analyses also report the effects of different domains of tinnitus handicap on the psychoacoustical characteristics of the tinnitus percept (sensation level, bandwidth and pitch) and on duration of training. Our overall findings and conclusions cast doubt on the superiority of a purely acoustic mechanism to underpin tinnitus remediation. Rather, the nonspecific patterns of improvement suggest that auditory perceptual training acts on a contributory mechanism such as selective attention or emotional state.

  12. Enhanced Development of Auditory Change Detection in Musically Trained School-Aged Children: A Longitudinal Event-Related Potential Study

    ERIC Educational Resources Information Center

    Putkinen, Vesa; Tervaniemi, Mari; Saarikivi, Katri; Ojala, Pauliina; Huotilainen, Minna

    2014-01-01

    Adult musicians show superior auditory discrimination skills when compared to non-musicians. The enhanced auditory skills of musicians are reflected in the augmented amplitudes of their auditory event-related potential (ERP) responses. In the current study, we investigated longitudinally the development of auditory discrimination skills in…

  13. Examination of the relation between an assessment of skills and performance on auditory-visual conditional discriminations for children with autism spectrum disorder.

    PubMed

    Kodak, Tiffany; Clements, Andrea; Paden, Amber R; LeBlanc, Brittany; Mintz, Joslyn; Toussaint, Karen A

    2015-01-01

    The current investigation evaluated repertoires that may be related to performance on auditory-to-visual conditional discrimination training with 9 students who had been diagnosed with autism spectrum disorder. The skills included in the assessment were matching, imitation, scanning, an auditory discrimination, and a visual discrimination. The results of the skills assessment showed that 4 participants failed to demonstrate mastery of at least 1 of the skills. We compared the outcomes of the assessment to the results of auditory-visual conditional discrimination training and found that training outcomes were related to the assessment outcomes for 7 of the 9 participants. One participant who did not demonstrate mastery of all assessment skills subsequently learned several conditional discriminations when blocked training trials were conducted. Another participant who did not demonstrate mastery of the auditory discrimination skill subsequently acquired conditional discriminations in 1 of the training conditions. We discuss the implications of the assessment for practice and suggest additional areas of research on this topic. © Society for the Experimental Analysis of Behavior.

  14. Central auditory processing and migraine: a controlled study

    PubMed Central

    2014-01-01

    Background This study aimed to verify and compare central auditory processing (CAP) performance in migraine patients with and without aura and in healthy controls. Methods Forty-one volunteers of both genders, aged between 18 and 40 years, diagnosed with migraine with or without aura by the criteria of “The International Classification of Headache Disorders” (ICHD-3 beta), and a control group of the same age range with no headache history were included. The Gaps-in-Noise (GIN), Duration Pattern Test (DPT), and Dichotic Digits Test (DDT) were used to assess central auditory processing performance. Results The volunteers were divided into 3 groups: migraine with aura (11), migraine without aura (15), and control group (15), matched by age and schooling. Subjects with and without aura performed significantly worse in the GIN test for the right ear (p = .006) and left ear (p = .005) and in the DPT (p < .001) when compared with controls without headache; however, no significant differences were found in the DDT for the right ear (p = .362) or the left ear (p = .190). Conclusions Subjects with migraine performed worse in auditory gap detection and in the discrimination of short and long durations. They also presented impairment in the physiological mechanisms of temporal processing, especially temporal resolution and temporal ordering, when compared with controls. Migraine could be related to impaired central auditory processing. Clinical trial registration Research Ethics Committee (CEP 0480.10) – UNIFESP PMID:25380661

  15. Pilocarpine Seizures Cause Age-Dependent Impairment in Auditory Location Discrimination

    ERIC Educational Resources Information Center

    Neill, John C.; Liu, Zhao; Mikati, Mohammad; Holmes, Gregory L.

    2005-01-01

    Children who have status epilepticus have continuous or rapidly repeating seizures that may be life-threatening and may cause life-long changes in brain and behavior. The extent to which status epilepticus causes deficits in auditory discrimination is unknown. A naturalistic auditory location discrimination method was used to evaluate this…

  16. Assessing Auditory Discrimination Skill of Malay Children Using Computer-based Method.

    PubMed

    Ting, H; Yunus, J; Mohd Nordin, M Z

    2005-01-01

    The purpose of this paper is to investigate the auditory discrimination skill of Malay children using a computer-based method. Currently, most auditory discrimination assessments are conducted manually by a speech-language pathologist. These conventional tests are general tests of sound discrimination, which do not reflect the client's specific speech sound errors. Thus, we propose a computer-based Malay auditory discrimination test to automate the whole process of assessment as well as to customize the test according to the specific speech error sounds of the client. The ability to discriminate voiced and unvoiced Malay speech sounds was studied in Malay children aged between 7 and 10 years. The study showed no major difficulty for the children in discriminating the Malay speech sounds except for differentiating the /g/-/k/ sounds. On average, the 7-year-old children failed to discriminate the /g/-/k/ sounds.

  17. Auditory spatial processing in Alzheimer’s disease

    PubMed Central

    Golden, Hannah L.; Nicholas, Jennifer M.; Yong, Keir X. X.; Downey, Laura E.; Schott, Jonathan M.; Mummery, Catherine J.; Crutch, Sebastian J.

    2015-01-01

    The location and motion of sounds in space are important cues for encoding the auditory world. Spatial processing is a core component of auditory scene analysis, a cognitively demanding function that is vulnerable in Alzheimer’s disease. Here we designed a novel neuropsychological battery based on a virtual space paradigm to assess auditory spatial processing in patient cohorts with clinically typical Alzheimer’s disease (n = 20) and its major variant syndrome, posterior cortical atrophy (n = 12) in relation to healthy older controls (n = 26). We assessed three dimensions of auditory spatial function: externalized versus non-externalized sound discrimination, moving versus stationary sound discrimination and stationary auditory spatial position discrimination, together with non-spatial auditory and visual spatial control tasks. Neuroanatomical correlates of auditory spatial processing were assessed using voxel-based morphometry. Relative to healthy older controls, both patient groups exhibited impairments in detection of auditory motion, and stationary sound position discrimination. The posterior cortical atrophy group showed greater impairment for auditory motion processing and the processing of a non-spatial control complex auditory property (timbre) than the typical Alzheimer’s disease group. Voxel-based morphometry in the patient cohort revealed grey matter correlates of auditory motion detection and spatial position discrimination in right inferior parietal cortex and precuneus, respectively. These findings delineate auditory spatial processing deficits in typical and posterior Alzheimer’s disease phenotypes that are related to posterior cortical regions involved in both syndromic variants and modulated by the syndromic profile of brain degeneration. Auditory spatial deficits contribute to impaired spatial awareness in Alzheimer’s disease and may constitute a novel perceptual model for probing brain network disintegration across the Alzheimer’s disease syndromic spectrum. PMID:25468732

  18. Comparison of Pre-Attentive Auditory Discrimination at Gross and Fine Difference between Auditory Stimuli.

    PubMed

    Sanju, Himanshu Kumar; Kumar, Prawin

    2016-10-01

    Introduction  Mismatch negativity (MMN) is a negative component of the event-related potential (ERP) elicited by any discriminable change in auditory stimulation. Objective  The present study aimed to assess pre-attentive auditory discrimination skill with fine and gross differences between auditory stimuli. Method  Seventeen normal-hearing individuals participated in the study after giving informed consent. To assess pre-attentive auditory discrimination skill with a fine difference between auditory stimuli, we recorded MMN with a pair of pure tones, using /1000 Hz/ as the frequent stimulus and /1010 Hz/ as the infrequent stimulus. Similarly, we used /1000 Hz/ as the frequent stimulus and /1100 Hz/ as the infrequent stimulus to assess pre-attentive auditory discrimination skill with a gross difference between auditory stimuli. We analyzed the MMN for onset latency, offset latency, peak latency, peak amplitude, and area under the curve. Result  MMN was present in only 64% of the individuals in both conditions. Further, multivariate analysis of variance (MANOVA) showed no significant difference in any measure of MMN (onset latency, offset latency, peak latency, peak amplitude, and area under the curve) between the two conditions. Conclusion  The present study showed similar pre-attentive skills for both conditions, fine (1000 Hz and 1010 Hz) and gross (1000 Hz and 1100 Hz) differences in auditory stimuli, at a higher (endogenous) level of the auditory system.

  19. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    PubMed Central

    Zaltz, Yael; Globerson, Eitan; Amir, Noam

    2017-01-01

    The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement of perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks, in which musicians had been found superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in the native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task, the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant discrimination, demonstrating highly specific effects of auditory linguistic experience as well. Overall, the results suggest that auditory superiority is associated with the specific auditory exposure. PMID:29238318

  20. The Chronometry of Mental Ability: An Event-Related Potential Analysis of an Auditory Oddball Discrimination Task

    ERIC Educational Resources Information Center

    Beauchamp, Chris M.; Stelmack, Robert M.

    2006-01-01

    The relation between intelligence and speed of auditory discrimination was investigated during an auditory oddball task with backward masking. In target discrimination conditions that varied in the interval between the target and the masking stimuli and in the tonal frequency of the target and masking stimuli, higher ability participants (HA)…

  1. Noise Equally Degrades Central Auditory Processing in 2- and 4-Year-Old Children.

    PubMed

    Niemitalo-Haapola, Elina; Haapala, Sini; Kujala, Teija; Raappana, Antti; Kujala, Tiia; Jansson-Verkasalo, Eira

    2017-08-16

    The aim of this study was to investigate developmental and noise-induced changes in central auditory processing indexed by event-related potentials in typically developing children. P1, N2, and N4 responses, as well as mismatch negativities (MMNs), were recorded for standard syllables and for consonant, frequency, intensity, vowel, and vowel duration changes in silent and noisy conditions in the same 14 children at the ages of 2 and 4 years. The P1 and N2 latencies decreased and the N2, N4, and MMN amplitudes increased with the children's development. The amplitude changes were strongest at frontal electrodes. At both ages, background noise decreased the P1 amplitude, increased the N2 amplitude, and shortened the N4 latency. The noise-induced amplitude changes of P1, N2, and N4 were strongest frontally. Furthermore, background noise degraded the MMN. At both ages, the MMN was significantly elicited only by the consonant change and, at the age of 4 years, also by the vowel duration change during noise. Developmental changes indexing maturation of central auditory processing were found in every response studied. Noise degraded sound encoding and echoic memory and impaired auditory discrimination at both ages. The older children were as vulnerable to the impact of noise as the younger children. https://doi.org/10.23641/asha.5233939.

  2. Deficits of congenital amusia beyond pitch: Evidence from impaired categorical perception of vowels in Cantonese-speaking congenital amusics

    PubMed Central

    Shao, Jing; Huang, Xunan

    2017-01-01

    Congenital amusia is a lifelong disorder of fine-grained pitch processing in music and speech. However, it remains unclear whether amusia is a pitch-specific deficit, or whether it affects frequency/spectral processing more broadly, such as the perception of formant frequency in vowels, apart from pitch. In this study, in order to illuminate the scope of the deficits, we compared the performance of 15 Cantonese-speaking amusics and 15 matched controls on the categorical perception of sound continua in four stimulus contexts: lexical tone, pure tone, vowel, and voice onset time (VOT). Whereas lexical tone, pure tone and vowel continua rely on frequency/spectral processing, the VOT continuum depends on duration/temporal processing. We found that the amusic participants performed similarly to controls in identification across all stimulus contexts, in terms of the across-category boundary location and boundary width. However, the amusic participants performed systematically worse than controls in discriminating stimuli in those three contexts that depended on frequency/spectral processing (lexical tone, pure tone and vowel), whereas they performed normally when discriminating duration differences (VOT). These findings suggest that the deficit of amusia is probably not pitch-specific, but affects frequency/spectral processing more broadly. Furthermore, there appeared to be differences in the impairment of frequency/spectral discrimination in speech and nonspeech contexts. The amusic participants exhibited less benefit in between-category discriminations than controls in speech contexts (lexical tone and vowel), suggesting reduced categorical perception; on the other hand, they performed worse than controls across the board, for both between- and within-category discriminations, in the nonspeech context (pure tone), suggesting impaired general auditory processing. These differences imply that the frequency/spectral-processing deficit might be manifested differentially in speech and nonspeech contexts in amusics—it is manifested as a deficit of higher-level phonological processing in speech sounds, and as a deficit of lower-level auditory processing in nonspeech sounds. PMID:28829808

  3. Vocal Accuracy and Neural Plasticity Following Micromelody-Discrimination Training

    PubMed Central

    Zarate, Jean Mary; Delhommeau, Karine; Wood, Sean; Zatorre, Robert J.

    2010-01-01

    Background Recent behavioral studies report correlational evidence to suggest that non-musicians with good pitch discrimination sing more accurately than those with poorer auditory skills. However, other studies have reported a dissociation between perceptual and vocal production skills. In order to elucidate the relationship between auditory discrimination skills and vocal accuracy, we administered an auditory-discrimination training paradigm to a group of non-musicians to determine whether training-enhanced auditory discrimination would specifically result in improved vocal accuracy. Methodology/Principal Findings We utilized micromelodies (i.e., melodies with seven different interval scales, each smaller than a semitone) as the main stimuli for auditory discrimination training and testing, and we used single-note and melodic singing tasks to assess vocal accuracy in two groups of non-musicians (experimental and control). To determine if any training-induced improvements in vocal accuracy would be accompanied by related modulations in cortical activity during singing, the experimental group of non-musicians also performed the singing tasks while undergoing functional magnetic resonance imaging (fMRI). Following training, the experimental group exhibited significant enhancements in micromelody discrimination compared to controls. However, we did not observe a correlated improvement in vocal accuracy during single-note or melodic singing, nor did we detect any training-induced changes in activity within brain regions associated with singing. Conclusions/Significance Given the observations from our auditory training regimen, we therefore conclude that perceptual discrimination training alone is not sufficient to improve vocal accuracy in non-musicians, supporting the suggested dissociation between auditory perception and vocal production. PMID:20567521

  4. Human Engineer’s Guide to Auditory Displays. Volume 2. Elements of Signal Reception and Resolution Affecting Auditory Displays.

    DTIC Science & Technology

    1984-08-01

    This work reviews the areas of monaural and binaural signal detection, auditory discrimination and localization, and reaction times to auditory signals, pertaining to the major areas of auditory processing in humans relevant to auditory displays.

  5. Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions.

    PubMed

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2016-12-01

    Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Automatic processing of tones and speech stimuli in children with specific language impairment.

    PubMed

    Uwer, Ruth; Albrecht, Ronald; von Suchodoletz, W

    2002-08-01

    It is well known from behavioural experiments that children with specific language impairment (SLI) have difficulties discriminating consonant-vowel (CV) syllables such as /ba/, /da/, and /ga/. Mismatch negativity (MMN) is an auditory event-related potential component that represents the outcome of an automatic comparison process. It could, therefore, be a promising tool for assessing central auditory processing deficits for speech and non-speech stimuli in children with SLI. MMN is typically evoked by occasionally occurring 'deviant' stimuli in a sequence of identical 'standard' sounds. In this study, MMN was elicited by simple tone stimuli, which differed in frequency (1000 versus 1200 Hz) and duration (175 versus 100 ms), and by digitized CV syllables, which differed in place of articulation (/ba/, /da/, and /ga/), in children with expressive and receptive SLI and in healthy control children (n=21 in each group, 46 males and 17 females; age range 5 to 10 years). Mean MMN amplitudes were compared between groups. Additionally, the behavioural discrimination performance was assessed. Children with SLI had attenuated MMN amplitudes to speech stimuli, but there was no significant difference between the two diagnostic subgroups. MMN to tone stimuli did not differ between the groups. Children with SLI made more errors in the discrimination task, but discrimination scores did not correlate with MMN amplitudes. The present data suggest that children with SLI show a specific deficit in automatic discrimination of CV syllables differing in place of articulation, whereas the processing of simple tone differences seems to be unimpaired.
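
    The group comparison in this record rests on mean MMN amplitudes, which are conventionally obtained by subtracting the averaged standard response from the averaged deviant response and averaging the resulting difference wave over a latency window. The NumPy sketch below illustrates that computation on made-up, baseline-corrected single-channel epochs; the sampling rate, trial counts, and the 100-250 ms analysis window are assumptions for illustration only.

        import numpy as np

        fs = 500                                   # assumed sampling rate (Hz)
        times = np.arange(-0.1, 0.5, 1.0 / fs)     # epoch from -100 to +500 ms

        # Hypothetical single-channel epochs: (n_trials, n_samples), baseline-corrected
        standard_epochs = np.random.randn(300, times.size) * 2.0
        deviant_epochs = np.random.randn(60, times.size) * 2.0

        # MMN difference wave = average deviant response minus average standard response
        difference_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

        # Mean amplitude in an assumed 100-250 ms analysis window
        window = (times >= 0.100) & (times <= 0.250)
        mmn_amplitude = difference_wave[window].mean()
        print(f"MMN mean amplitude, 100-250 ms: {mmn_amplitude:.2f} (arbitrary units)")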

  7. Distortions of Subjective Time Perception Within and Across Senses

    PubMed Central

    van Wassenhove, Virginie; Buonomano, Dean V.; Shimojo, Shinsuke; Shams, Ladan

    2008-01-01

    Background The ability to estimate the passage of time is of fundamental importance for perceptual and cognitive processes. One experience of time is the perception of duration, which is not isomorphic to physical duration and can be distorted by a number of factors. Yet, the critical features generating these perceptual shifts in subjective duration are not understood. Methodology/Findings We used prospective duration judgments within and across sensory modalities to examine the effect of stimulus predictability and feature change on the perception of duration. First, we found robust distortions of perceived duration in auditory, visual and auditory-visual presentations despite the predictability of the feature changes in the stimuli. For example, a looming disc embedded in a series of steady discs led to time dilation, whereas a steady disc embedded in a series of looming discs led to time compression. Second, we addressed whether visual (auditory) inputs could alter the perception of duration of auditory (visual) inputs. When participants were presented with incongruent audio-visual stimuli, the perceived duration of auditory events could be shortened or lengthened by the presence of conflicting visual information; however, the perceived duration of visual events was seldom distorted by the presence of auditory information and was never perceived shorter than their actual durations. Conclusions/Significance These results support the existence of multisensory interactions in the perception of duration and, importantly, suggest that vision can modify auditory temporal perception in a pure timing task. Insofar as distortions in subjective duration can neither be accounted for by the unpredictability of an auditory, visual or auditory-visual event, we propose that it is the intrinsic features of the stimulus that critically affect subjective time distortions. PMID:18197248

  8. Bidirectional Regulation of Innate and Learned Behaviors That Rely on Frequency Discrimination by Cortical Inhibitory Neurons

    PubMed Central

    Aizenberg, Mark; Mwilambwe-Tshilobo, Laetitia; Briguglio, John J.; Natan, Ryan G.; Geffen, Maria N.

    2015-01-01

    The ability to discriminate tones of different frequencies is fundamentally important for everyday hearing. While neurons in the primary auditory cortex (AC) respond differentially to tones of different frequencies, whether and how AC regulates auditory behaviors that rely on frequency discrimination remains poorly understood. Here, we find that the level of activity of inhibitory neurons in AC controls frequency specificity in innate and learned auditory behaviors that rely on frequency discrimination. Photoactivation of parvalbumin-positive interneurons (PVs) improved the ability of the mouse to detect a shift in tone frequency, whereas photosuppression of PVs impaired the performance. Furthermore, photosuppression of PVs during discriminative auditory fear conditioning increased generalization of conditioned response across tone frequencies, whereas PV photoactivation preserved normal specificity of learning. The observed changes in behavioral performance were correlated with bidirectional changes in the magnitude of tone-evoked responses, consistent with predictions of a model of a coupled excitatory-inhibitory cortical network. Direct photoactivation of excitatory neurons, which did not change tone-evoked response magnitude, did not affect behavioral performance in either task. Our results identify a new function for inhibition in the auditory cortex, demonstrating that it can improve or impair acuity of innate and learned auditory behaviors that rely on frequency discrimination. PMID:26629746

  9. Auditory temporal perceptual learning and transfer in Chinese-speaking children with developmental dyslexia.

    PubMed

    Zhang, Manli; Xie, Weiyi; Xu, Yanzhi; Meng, Xiangzhi

    2018-03-01

    Perceptual learning refers to the improvement of perceptual performance as a function of training. Recent studies found that auditory perceptual learning may improve phonological skills in individuals with developmental dyslexia in alphabetic writing systems. However, whether auditory perceptual learning could also benefit the reading skills of those learning the Chinese logographic writing system is, as yet, unknown. The current study aimed to investigate the remediation effect of auditory temporal perceptual learning on Mandarin-speaking school children with developmental dyslexia. Thirty children with dyslexia were screened from a large pool of students in the 3rd to 5th grades. They completed a series of pretests and then were assigned to either a non-training control group or a training group. The training group worked on a pure tone duration discrimination task for 7 sessions over 2 weeks, 30 minutes per session. Post-tests immediately after training and a follow-up test 2 months later were conducted. Analyses revealed a significant training effect in the training group relative to the non-training group, as well as near transfer to the temporal interval discrimination task and far transfer to phonological awareness, character recognition and reading fluency. Importantly, the training effect and all the transfer effects were stable at the 2-month follow-up session. Further analyses found that a significant correlation between character recognition performance and learning rate mainly existed in the slow learning phase, the consolidation stage of perceptual learning, and this effect was modulated by an individual's executive function. These findings indicate that adaptive auditory temporal perceptual learning can lead to learning and transfer effects on reading performance, and shed further light on the potential role of basic perceptual learning in the remediation and prevention of developmental dyslexia. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Encoding of Discriminative Fear Memory by Input-Specific LTP in the Amygdala.

    PubMed

    Kim, Woong Bin; Cho, Jun-Hyeong

    2017-08-30

    In auditory fear conditioning, experimental subjects learn to associate an auditory conditioned stimulus (CS) with an aversive unconditioned stimulus. With sufficient training, animals fear conditioned to an auditory CS show fear response to the CS, but not to irrelevant auditory stimuli. Although long-term potentiation (LTP) in the lateral amygdala (LA) plays an essential role in auditory fear conditioning, it is unknown whether LTP is induced selectively in the neural pathways conveying specific CS information to the LA in discriminative fear learning. Here, we show that postsynaptically expressed LTP is induced selectively in the CS-specific auditory pathways to the LA in a mouse model of auditory discriminative fear conditioning. Moreover, optogenetically induced depotentiation of the CS-specific auditory pathways to the LA suppressed conditioned fear responses to the CS. Our results suggest that input-specific LTP in the LA contributes to fear memory specificity, enabling adaptive fear responses only to the relevant sensory cue. VIDEO ABSTRACT. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Speech training alters consonant and vowel responses in multiple auditory cortex fields

    PubMed Central

    Engineer, Crystal T.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Fink, Melyssa K.; Kilgard, Michael P.

    2015-01-01

    Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. PMID:25827927

  12. Auditory-prosodic processing in bipolar disorder; from sensory perception to emotion.

    PubMed

    Van Rheenen, Tamsyn E; Rossell, Susan L

    2013-12-01

    Accurate emotion processing is critical to understanding the social world. Despite growing evidence of facial emotion processing impairments in patients with bipolar disorder (BD), comprehensive investigations of emotional prosodic processing are limited. The existing (albeit sparse) literature is inconsistent at best, and confounded by failures to control for the effects of gender or low-level sensory-perceptual impairments. The present study sought to address this paucity of research by utilizing a novel behavioural battery to comprehensively investigate the auditory-prosodic profile of BD. Fifty BD patients and 52 healthy controls completed tasks assessing emotional and linguistic prosody, and sensitivity for discriminating tones that deviate in amplitude, duration and pitch. BD patients were less sensitive than their control counterparts in discriminating amplitude and durational cues but not pitch cues or linguistic prosody. They also demonstrated impaired ability to recognize happy intonations, although this was specific to males with the disorder. The recognition of happy in the patient group was correlated with pitch and amplitude sensitivity in female patients only. The small sample size of patients after stratification by current mood state prevented us from conducting subgroup comparisons between symptomatic, euthymic and control participants to explicitly examine the effects of mood. Our findings indicate the existence of a female advantage for the processing of emotional prosody in BD, specifically for the processing of happy. Although male BD patients were impaired in their ability to recognize happy prosody, this was unrelated to reduced tone discrimination sensitivity. This study indicates the importance of examining both gender and low-order sensory-perceptual capacity when examining emotional prosody. © 2013 Elsevier B.V. All rights reserved.

  13. Prediction of cognitive outcome based on the progression of auditory discrimination during coma.

    PubMed

    Juan, Elsa; De Lucia, Marzia; Tzovara, Athina; Beaud, Valérie; Oddo, Mauro; Clarke, Stephanie; Rossetti, Andrea O

    2016-09-01

    To date, no clinical test is able to predict cognitive and functional outcome of cardiac arrest survivors. Improvement of auditory discrimination in acute coma indicates survival with high specificity. Whether the degree of this improvement is indicative of recovery remains unknown. Here we investigated if progression of auditory discrimination can predict cognitive and functional outcome. We prospectively recorded electroencephalography responses to auditory stimuli of post-anoxic comatose patients on the first and second day after admission. For each recording, auditory discrimination was quantified and its evolution over the two recordings was used to classify survivors as "predicted" when it increased vs. "other" if not. Cognitive functions were tested on awakening and functional outcome was assessed at 3 months using the Cerebral Performance Categories (CPC) scale. Thirty-two patients were included, 14 "predicted survivors" and 18 "other survivors". "Predicted survivors" were more likely to recover basic cognitive functions shortly after awakening (ability to follow a standardized neuropsychological battery: 86% vs. 44%; p=0.03 (Fisher)) and to show a very good functional outcome at 3 months (CPC 1: 86% vs. 33%; p=0.004 (Fisher)). Moreover, progression of auditory discrimination during coma was strongly correlated with cognitive performance on awakening (phonemic verbal fluency: rs=0.48; p=0.009 (Spearman)). Progression of auditory discrimination during coma provides early indication of future recovery of cognitive functions. The degree of improvement is informative of the degree of functional impairment. If confirmed in a larger cohort, this test would be the first to predict detailed outcome at the single-patient level. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
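
    The group comparisons and the correlation reported here (Fisher's exact tests, Spearman's rho) can be reproduced in outline with scipy.stats. The 2x2 counts below are approximate reconstructions from the reported percentages (86% of 14 "predicted survivors" versus 44% and 33% of 18 "other survivors") and the correlation data are placeholders; both are shown only to illustrate the tests, not to reproduce the study's results.

        import numpy as np
        from scipy import stats

        # Ability to follow the neuropsychological battery on awakening (approximate counts)
        #                     could follow   could not
        # predicted (n=14)         12             2
        # other     (n=18)          8            10
        odds_ratio, p_battery = stats.fisher_exact([[12, 2], [8, 10]])
        print(f"Battery on awakening: p = {p_battery:.3f}")

        # Very good functional outcome (CPC 1) at 3 months (approximate counts)
        _, p_cpc = stats.fisher_exact([[12, 2], [6, 12]])
        print(f"CPC 1 at 3 months:    p = {p_cpc:.3f}")

        # Spearman correlation between discrimination progression and verbal fluency
        # (placeholder values, not the study data)
        progression = np.random.rand(32)
        verbal_fluency = progression * 10 + np.random.randn(32)
        rho, p_rho = stats.spearmanr(progression, verbal_fluency)
        print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3f}")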

  14. EXEL; Experience for Children in Learning. Parent-Directed Activities to Develop: Oral Expression, Visual Discrimination, Auditory Discrimination, Motor Coordination.

    ERIC Educational Resources Information Center

    Behrmann, Polly; Millman, Joan

    The activities collected in this handbook are planned for parents to use with their children in a learning experience. They can also be used in the classroom. Sections contain games designed to develop visual discrimination, auditory discrimination, motor coordination and oral expression. An objective is given for each game, and directions for…

  15. English Auditory Discrimination Skills of Spanish-Speaking Children.

    ERIC Educational Resources Information Center

    Kramer, Virginia Reyes; Schell, Leo M.

    1982-01-01

    Eighteen Mexican American pupils in grades 1-3 from two urban Kansas schools were tested, using 18 pairs of sound contrasts, for auditory discrimination problems related to their language-different background. Results showed v-b, ch-sh, and s-sp contrasts were the most difficult for subjects to discriminate. (LC)

  16. Convergent-Discriminant Validity of the Jewish Employment Vocational System (JEVS).

    ERIC Educational Resources Information Center

    Tryjankowski, Elaine M.

    This study investigated the construct validity of five perceptual traits (auditory discrimination, visual discrimination, visual memory, visual-motor coordination, and auditory to visual-motor coordination) with five simulated work samples (union assembly, resistor reading, budgette assembly, lock assembly, and nail and screw sort) from the Jewish…

  17. Working memory resources are shared across sensory modalities.

    PubMed

    Salmela, V R; Moisala, M; Alho, K

    2014-10-01

    A common assumption in the working memory literature is that the visual and auditory modalities have separate and independent memory stores. Recent evidence on visual working memory has suggested that resources are shared between representations, and that the precision of representations sets the limit for memory performance. We tested whether memory resources are also shared across sensory modalities. Memory precision for two visual (spatial frequency and orientation) and two auditory (pitch and tone duration) features was measured separately for each feature and for all possible feature combinations. Thus, only the memory load was varied, from one to four features, while keeping the stimuli similar. In Experiment 1, two gratings and two tones, each containing two varying features, were presented simultaneously. In Experiment 2, two gratings and two tones, each containing only one varying feature, were presented sequentially. The memory precision (delayed discrimination threshold) for a single feature was close to the perceptual threshold. However, as the number of features to be remembered was increased, the discrimination thresholds increased more than twofold. Importantly, the decrease in memory precision did not depend on the modality of the other feature(s), or on whether the features were in the same or in separate objects. Hence, simultaneously storing one visual and one auditory feature had an effect on memory precision equal to that of simultaneously storing two visual or two auditory features. The results show that working memory is limited by the precision of the stored representations, and that working memory can be described as a resource pool that is shared across modalities.

  18. Revisiting the "enigma" of musicians with dyslexia: Auditory sequencing and speech abilities.

    PubMed

    Zuk, Jennifer; Bishop-Liebler, Paula; Ozernov-Palchik, Ola; Moore, Emma; Overy, Katie; Welch, Graham; Gaab, Nadine

    2017-04-01

    Previous research has suggested a link between musical training and auditory processing skills. Musicians have shown enhanced perception of auditory features critical to both music and speech, suggesting that this link extends beyond basic auditory processing. It remains unclear to what extent musicians who also have dyslexia show these specialized abilities, considering often-observed persistent deficits that coincide with reading impairments. The present study evaluated auditory sequencing and speech discrimination in 52 adults comprised of musicians with dyslexia, nonmusicians with dyslexia, and typical musicians. An auditory sequencing task measuring perceptual acuity for tone sequences of increasing length was administered. Furthermore, subjects were asked to discriminate synthesized syllable continua varying in acoustic components of speech necessary for intraphonemic discrimination, which included spectral (formant frequency) and temporal (voice onset time [VOT] and amplitude envelope) features. Results indicate that musicians with dyslexia did not significantly differ from typical musicians and performed better than nonmusicians with dyslexia for auditory sequencing as well as discrimination of spectral and VOT cues within syllable continua. However, typical musicians demonstrated superior performance relative to both groups with dyslexia for discrimination of syllables varying in amplitude information. These findings suggest a distinct profile of speech processing abilities in musicians with dyslexia, with specific weaknesses in discerning amplitude cues within speech. Because these difficulties seem to remain persistent in adults with dyslexia despite musical training, this study only partly supports the potential for musical training to enhance the auditory processing skills known to be crucial for literacy in individuals with dyslexia. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Discrimination of communication vocalizations by single neurons and groups of neurons in the auditory midbrain.

    PubMed

    Schneider, David M; Woolley, Sarah M N

    2010-06-01

    Many social animals including songbirds use communication vocalizations for individual recognition. The perception of vocalizations depends on the encoding of complex sounds by neurons in the ascending auditory system, each of which is tuned to a particular subset of acoustic features. Here, we examined how well the responses of single auditory neurons could be used to discriminate among bird songs and we compared discriminability to spectrotemporal tuning. We then used biologically realistic models of pooled neural responses to test whether the responses of groups of neurons discriminated among songs better than the responses of single neurons and whether discrimination by groups of neurons was related to spectrotemporal tuning and trial-to-trial response variability. The responses of single auditory midbrain neurons could be used to discriminate among vocalizations with a wide range of abilities, ranging from chance to 100%. The ability to discriminate among songs using single neuron responses was not correlated with spectrotemporal tuning. Pooling the responses of pairs of neurons generally led to better discrimination than the average of the two inputs and the most discriminating input. Pooling the responses of three to five single neurons continued to improve neural discrimination. The increase in discriminability was largest for groups of neurons with similar spectrotemporal tuning. Further, we found that groups of neurons with correlated spike trains achieved the largest gains in discriminability. We simulated neurons with varying levels of temporal precision and measured the discriminability of responses from single simulated neurons and groups of simulated neurons. Simulated neurons with biologically observed levels of temporal precision benefited more from pooling correlated inputs than did neurons with highly precise or imprecise spike trains. These findings suggest that pooling correlated neural responses with the levels of precision observed in the auditory midbrain increases neural discrimination of complex vocalizations.
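
    One simple way to ask whether pooled responses discriminate songs better than single neurons, in the spirit of this record, is a leave-one-out nearest-centroid (template-matching) classifier applied first to individual neurons and then to their summed responses. The sketch below is a schematic illustration on synthetic spike-count responses, not the authors' analysis pipeline; the response model, trial counts, and pooling by simple summation are all assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n_songs, n_trials, n_bins = 4, 10, 20

        def fake_responses(noise_sd=3.0):
            """Synthetic trial-by-trial binned spike counts of one neuron to each song."""
            templates = rng.poisson(5, size=(n_songs, n_bins)).astype(float)
            return templates[:, None, :] + rng.normal(0, noise_sd, size=(n_songs, n_trials, n_bins))

        def percent_correct(responses):
            """Leave-one-out nearest-centroid classification of song identity."""
            correct = 0
            for s in range(n_songs):
                for t in range(n_trials):
                    trial = responses[s, t]
                    centroids = []
                    for s2 in range(n_songs):
                        keep = [i for i in range(n_trials) if not (s2 == s and i == t)]
                        centroids.append(responses[s2, keep].mean(axis=0))
                    guess = int(np.argmin([np.linalg.norm(trial - c) for c in centroids]))
                    correct += (guess == s)
            return 100.0 * correct / (n_songs * n_trials)

        neuron_a, neuron_b = fake_responses(), fake_responses()
        pooled = neuron_a + neuron_b      # summed ("pooled") responses of the pair
        for name, resp in [("neuron A", neuron_a), ("neuron B", neuron_b), ("pooled pair", pooled)]:
            print(f"{name:12s}: {percent_correct(resp):.1f}% correct")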

  20. Thresholds for linear amplitude change of a continuous pure tone.

    PubMed

    Jerlvall, L B; Arlinger, S D; Holmgren, E C

    1978-01-01

    Human auditory sensitivity in detecting linear amplitude change of a continuous pure tone has been studied in normal-hearing subjects. It is shown that for short glide durations (less than 100 ms) the duration of the following plateau exerts a significant influence on the DLI. The average DLI at 1 kHz and 60 dB HL was found to be of the order of 0.8 dB when the intensity glide had a duration of 10 ms and was followed by a much longer plateau. For longer glide durations (greater than or equal to 200 ms) the DLI increased significantly as compared with shorter durations. There was no significant difference between increasing and decreasing intensity change. Significantly larger DLIs were found at 250 and 500 Hz than at 1, 2 and 4 kHz. The sound level was found to have a significant influence on the DLI: at levels of 40 dB HL and below, the increase in the DLI was highly significant. A falling exponential function offers a mathematical description of the relationship with good fit. It is concluded that an integrating mechanism with an integration time of approx. 200 ms could explain the auditory ability to detect linear amplitude glides of a continuous tone. The results are discussed in relation to previous intensity discrimination data, where pulse pairs, continuous intensity modulation or intensity glides were used as stimuli.
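
    The "falling exponential function" mentioned here can be fit with a standard nonlinear least-squares routine. The parameterisation DLI(L) = a * exp(-L / tau) + c and the data points below are assumptions chosen only to mirror the described trend (larger DLIs at low sound levels, flattening at higher levels); the study's raw values are not given in this record.

        import numpy as np
        from scipy.optimize import curve_fit

        def falling_exponential(level_db, a, tau, c):
            """One plausible parameterisation: DLI(L) = a * exp(-L / tau) + c."""
            return a * np.exp(-level_db / tau) + c

        # Hypothetical (sound level in dB HL, DLI in dB) pairs shaped like the reported trend
        levels = np.array([20, 30, 40, 50, 60, 70, 80], dtype=float)
        dli = np.array([3.0, 2.0, 1.4, 1.0, 0.85, 0.8, 0.78])

        (a, tau, c), _ = curve_fit(falling_exponential, levels, dli, p0=(5.0, 15.0, 0.8))
        print(f"fit: DLI(L) = {a:.2f} * exp(-L / {tau:.1f} dB) + {c:.2f} dB")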

  1. Single electrode micro-stimulation of rat auditory cortex: an evaluation of behavioral performance.

    PubMed

    Rousche, Patrick J; Otto, Kevin J; Reilly, Mark P; Kipke, Daryl R

    2003-05-01

    A combination of electrophysiological mapping, behavioral analysis and cortical micro-stimulation was used to explore the interrelation between the auditory cortex and behavior in the adult rat. Auditory discriminations were evaluated in eight rats trained to discriminate the presence or absence of a 75 dB pure tone stimulus. A probe trial technique was used to obtain intensity generalization gradients that described response probabilities to mid-level tones between 0 and 75 dB. The same rats were then chronically implanted in the auditory cortex with a 16 or 32 channel tungsten microwire electrode array. Implanted animals were then trained to discriminate the presence of single electrode micro-stimulation of magnitude 90 microA (22.5 nC/phase). Intensity generalization gradients were created to obtain the response probabilities to mid-level current magnitudes ranging from 0 to 90 microA on 36 different electrodes in six of the eight rats. The 50% point (the current level resulting in 50% detections) varied from 16.7 to 69.2 microA, with an overall mean of 42.4 (+/-8.1) microA across all single electrodes. Cortical micro-stimulation induced sensory-evoked behavior with similar characteristics as normal auditory stimuli. The results highlight the importance of the auditory cortex in a discrimination task and suggest that micro-stimulation of the auditory cortex might be an effective means for a graded information transfer of auditory information directly to the brain as part of a cortical auditory prosthesis.
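
    The "50% point" of an intensity generalization gradient is the stimulus magnitude at which the fitted detection function crosses 0.5. A common way to extract it is to fit a psychometric function to the gradient; in the sketch below the logistic form and the probe values are assumptions for illustration, not the study's data or stated method.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(x, x50, slope):
            """Psychometric function: detection probability as a function of current."""
            return 1.0 / (1.0 + np.exp(-(x - x50) / slope))

        # Hypothetical generalization gradient for one electrode: probe currents (microamps)
        # and observed detection probabilities (illustrative values only)
        currents = np.array([0, 15, 30, 45, 60, 75, 90], dtype=float)
        p_detect = np.array([0.02, 0.10, 0.35, 0.62, 0.85, 0.95, 0.99])

        (x50, slope), _ = curve_fit(logistic, currents, p_detect, p0=(45.0, 10.0))
        print(f"50% point: {x50:.1f} microamps (slope parameter {slope:.1f})")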

  2. Perceptual learning in temporal discrimination: asymmetric cross-modal transfer from audition to vision.

    PubMed

    Bratzke, Daniel; Seifried, Tanja; Ulrich, Rolf

    2012-08-01

    This study assessed possible cross-modal transfer effects of training in a temporal discrimination task from vision to audition as well as from audition to vision. We employed a pretest-training-post-test design including a control group that performed only the pretest and the post-test. Trained participants showed better discrimination performance with their trained interval than the control group. This training effect transferred to the other modality only for those participants who had been trained with auditory stimuli. The present study thus demonstrates for the first time that training on temporal discrimination within the auditory modality can transfer to the visual modality but not vice versa. This finding represents a novel illustration of auditory dominance in temporal processing and is consistent with the notion that time is primarily encoded in the auditory system.

  3. Visual processing affects the neural basis of auditory discrimination.

    PubMed

    Kislyuk, Daniel S; Möttönen, Riikka; Sams, Mikko

    2008-12-01

    The interaction between auditory and visual speech streams is a seamless and surprisingly effective process. An intriguing example is the "McGurk effect": The acoustic syllable /ba/ presented simultaneously with a mouth articulating /ga/ is typically heard as /da/ [McGurk, H., & MacDonald, J. Hearing lips and seeing voices. Nature, 264, 746-748, 1976]. Previous studies have demonstrated the interaction of auditory and visual streams at the auditory cortex level, but the importance of these interactions for the qualitative perception change remained unclear because the change could result from interactions at higher processing levels as well. In our electroencephalogram experiment, we combined the McGurk effect with mismatch negativity (MMN), a response that is elicited in the auditory cortex at a latency of 100-250 msec by any above-threshold change in a sequence of repetitive sounds. An "odd-ball" sequence of acoustic stimuli consisting of frequent /va/ syllables (standards) and infrequent /ba/ syllables (deviants) was presented to 11 participants. Deviant stimuli in the unisensory acoustic stimulus sequence elicited a typical MMN, reflecting discrimination of acoustic features in the auditory cortex. When the acoustic stimuli were dubbed onto a video of a mouth constantly articulating /va/, the deviant acoustic /ba/ was heard as /va/ due to the McGurk effect and was indistinguishable from the standards. Importantly, such deviants did not elicit MMN, indicating that the auditory cortex failed to discriminate between the acoustic stimuli. Our findings show that visual stream can qualitatively change the auditory percept at the auditory cortex level, profoundly influencing the auditory cortex mechanisms underlying early sound discrimination.

  4. Auditory processing deficits in bipolar disorder with and without a history of psychotic features.

    PubMed

    Zenisek, RyAnna; Thaler, Nicholas S; Sutton, Griffin P; Ringdahl, Erik N; Snyder, Joel S; Allen, Daniel N

    2015-11-01

    Auditory perception deficits have been identified in schizophrenia (SZ) and linked to dysfunction in the auditory cortex. Given that psychotic symptoms, including auditory hallucinations, are also seen in bipolar disorder (BD), it may be that individuals with BD who also exhibit psychotic symptoms demonstrate a similar impairment in auditory perception. Fifty individuals with SZ, 30 individuals with bipolar I disorder with a history of psychosis (BD+), 28 individuals with bipolar I disorder with no history of psychotic features (BD-), and 29 normal controls (NC) were administered a tone discrimination task and an emotion recognition task. Mixed-model analyses of covariance with planned comparisons indicated that individuals with BD+ performed at a level that was intermediate between those with BD- and those with SZ on the more difficult condition of the tone discrimination task and on the auditory condition of the emotion recognition task. There were no differences between the BD+ and BD- groups on the visual or auditory-visual affect recognition conditions. Regression analyses indicated that performance on the tone discrimination task predicted performance on all conditions of the emotion recognition task. Auditory hallucinations in BD+ were not related to performance on either task. Our findings suggested that, although deficits in frequency discrimination and emotion recognition are more severe in SZ, these impairments extend to BD+. Although our results did not support the idea that auditory hallucinations may be related to these deficits, they indicated that basic auditory deficits may be a marker for psychosis, regardless of SZ or BD diagnosis. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  5. The effect of auditory memory load on intensity resolution in individuals with Parkinson's disease

    NASA Astrophysics Data System (ADS)

    Richardson, Kelly C.

    Purpose: The purpose of the current study was to investigate the effect of auditory memory load on intensity resolution in individuals with Parkinson's disease (PD) as compared to two groups of listeners without PD. Methods: Nineteen individuals with Parkinson's disease, ten healthy age- and hearing-matched adults, and ten healthy young adults were studied. All listeners participated in two intensity discrimination tasks differing in auditory memory load: a lower-memory-load 4IAX task and a higher-memory-load ABX task. Intensity discrimination performance was assessed using a bias-free measure of signal detectability known as d' (d-prime). Listeners further participated in a continuous loudness scaling task where they were instructed to rate the loudness level of each signal intensity using a computerized 150 mm visual analogue scale. Results: Group discrimination functions indicated significantly lower intensity discrimination sensitivity (d') across tasks for the individuals with PD, as compared to the older and younger controls. No significant effect of aging on intensity discrimination was observed for either task. All three listener groups demonstrated significantly lower intensity discrimination sensitivity for the higher-memory-load ABX task than for the lower-memory-load 4IAX task. Furthermore, a significant effect of aging was identified for the loudness scaling condition. The younger controls were found to rate most stimuli along the continuum as significantly louder than the older controls and the individuals with PD. Conclusions: The persons with PD showed evidence of impaired auditory perception for intensity information, as compared to the older and younger controls. The significant effect of aging on loudness perception may indicate peripheral and/or central auditory involvement.
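
    The sensitivity index d' used in this record is computed from the hit and false-alarm rates as the difference of their z-transforms. A brief sketch (with placeholder trial counts, not the study's data) is:

        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            """d' = z(hit rate) - z(false-alarm rate), with a standard 0.5 adjustment
            to the counts so that rates of exactly 0 or 1 are avoided."""
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # Placeholder counts for one listener in one intensity-discrimination condition
        print(f"d' = {d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38):.2f}")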

  6. Auditory Learning. Dimensions in Early Learning Series.

    ERIC Educational Resources Information Center

    Zigmond, Naomi K.; Cicci, Regina

    The monograph discusses the psycho-physiological operations for processing of auditory information, the structure and function of the ear, the development of auditory processes from fetal responses through discrimination, language comprehension, auditory memory, and auditory processes related to written language. Disorders of auditory learning…

  7. Cross-modal reorganization in cochlear implant users: Auditory cortex contributes to visual face processing.

    PubMed

    Stropahl, Maren; Plotz, Karsten; Schönfeld, Rüdiger; Lenarz, Thomas; Sandmann, Pascale; Yovel, Galit; De Vos, Maarten; Debener, Stefan

    2015-11-01

    There is converging evidence that the auditory cortex takes over visual functions during a period of auditory deprivation. A residual pattern of cross-modal take-over may prevent the auditory cortex from adapting to restored sensory input as delivered by a cochlear implant (CI) and limit speech intelligibility with a CI. The aim of the present study was to investigate whether visual face processing in CI users activates auditory cortex and whether this has adaptive or maladaptive consequences. High-density electroencephalogram data were recorded from CI users (n=21) and age-matched normal-hearing (NH) controls (n=21) performing a face versus house discrimination task. Lip reading and face recognition abilities were measured as well as speech intelligibility. Evaluation of event-related potential (ERP) topographies revealed significant group differences over occipito-temporal scalp regions. Distributed source analysis identified significantly higher activation in the right auditory cortex for CI users compared to NH controls, confirming visual take-over. Lip reading skills were significantly enhanced in the CI group and appeared to be particularly better after a longer duration of deafness, while face recognition was not significantly different between groups. However, auditory cortex activation in CI users was positively related to face recognition abilities. Our results confirm a cross-modal reorganization for ecologically valid visual stimuli in CI users. Furthermore, they suggest that residual take-over, which can persist even after adaptation to a CI, is not necessarily maladaptive. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Sustained attention in language production: an individual differences investigation.

    PubMed

    Jongman, Suzanne R; Roelofs, Ardi; Meyer, Antje S

    2015-01-01

    Whereas it has long been assumed that most linguistic processes underlying language production happen automatically, accumulating evidence suggests that these processes do require some form of attention. Here we investigated the contribution of sustained attention: the ability to maintain alertness over time. In Experiment 1, participants' sustained attention ability was measured using auditory and visual continuous performance tasks. Subsequently, employing a dual-task procedure, participants described pictures using simple noun phrases and performed an arrow-discrimination task while their vocal and manual response times (RTs) and the durations of their gazes to the pictures were measured. Earlier research has demonstrated that gaze duration reflects language planning processes up to and including phonological encoding. The speakers' sustained attention ability correlated with the magnitude of the tail of the vocal RT distribution, reflecting the proportion of very slow responses, but not with individual differences in gaze duration. This suggests that sustained attention was most important after phonological encoding. Experiment 2 showed that the involvement of sustained attention was significantly stronger in a dual-task situation (picture naming and arrow discrimination) than in simple naming. Thus, individual differences in maintaining attention on the production processes become especially apparent when a simultaneous second task also requires attentional resources.

  9. Visual Speech Fills in Both Discrimination and Identification of Non-Intact Auditory Speech in Children

    ERIC Educational Resources Information Center

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve

    2018-01-01

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…

  10. Speech perception in individuals with auditory dys-synchrony.

    PubMed

    Kumar, U A; Jayaram, M

    2011-03-01

    This study aimed to evaluate the effect of lengthening the transition duration of selected speech segments upon the perception of those segments in individuals with auditory dys-synchrony. Thirty individuals with auditory dys-synchrony participated in the study, along with 30 age-matched normal hearing listeners. Eight consonant-vowel syllables were used as auditory stimuli. Two experiments were conducted. Experiment one measured the 'just noticeable difference' time: the smallest prolongation of the speech sound transition duration which was noticeable by the subject. In experiment two, speech sounds were modified by lengthening the transition duration by multiples of the just noticeable difference time, and subjects' speech identification scores for the modified speech sounds were assessed. Subjects with auditory dys-synchrony demonstrated poor processing of temporal auditory information. Lengthening of speech sound transition duration improved these subjects' perception of both the placement and voicing features of the speech syllables used. These results suggest that innovative speech processing strategies which enhance temporal cues may benefit individuals with auditory dys-synchrony.

  11. Sound Sequence Discrimination Learning Motivated by Reward Requires Dopaminergic D2 Receptor Activation in the Rat Auditory Cortex

    ERIC Educational Resources Information Center

    Kudoh, Masaharu; Shibuki, Katsuei

    2006-01-01

    We have previously reported that sound sequence discrimination learning requires cholinergic inputs to the auditory cortex (AC) in rats. In that study, reward was used for motivating discrimination behavior in rats. Therefore, dopaminergic inputs mediating reward signals may have an important role in the learning. We tested the possibility in the…

  12. Genetic pleiotropy explains associations between musical auditory discrimination and intelligence.

    PubMed

    Mosing, Miriam A; Pedersen, Nancy L; Madison, Guy; Ullén, Fredrik

    2014-01-01

    Musical aptitude is commonly measured using tasks that involve discrimination of different types of musical auditory stimuli. Performance on such different discrimination tasks correlates positively with each other and with intelligence. However, no study to date has explored these associations using a genetically informative sample to estimate underlying genetic and environmental influences. In the present study, a large sample of Swedish twins (N = 10,500) was used to investigate the genetic architecture of the associations between intelligence and performance on three musical auditory discrimination tasks (rhythm, melody and pitch). Phenotypic correlations between the tasks ranged between 0.23 and 0.42 (Pearson r values). Genetic modelling showed that the covariation between the variables could be explained by shared genetic influences. Neither shared, nor non-shared environment had a significant effect on the associations. Good fit was obtained with a two-factor model where one underlying shared genetic factor explained all the covariation between the musical discrimination tasks and IQ, and a second genetic factor explained variance exclusively shared among the discrimination tasks. The results suggest that positive correlations among musical aptitudes result from both genes with broad effects on cognition, and genes with potentially more specific influences on auditory functions.

  13. Genetic Pleiotropy Explains Associations between Musical Auditory Discrimination and Intelligence

    PubMed Central

    Mosing, Miriam A.; Pedersen, Nancy L.; Madison, Guy; Ullén, Fredrik

    2014-01-01

    Musical aptitude is commonly measured using tasks that involve discrimination of different types of musical auditory stimuli. Performance on such different discrimination tasks correlates positively with each other and with intelligence. However, no study to date has explored these associations using a genetically informative sample to estimate underlying genetic and environmental influences. In the present study, a large sample of Swedish twins (N = 10,500) was used to investigate the genetic architecture of the associations between intelligence and performance on three musical auditory discrimination tasks (rhythm, melody and pitch). Phenotypic correlations between the tasks ranged between 0.23 and 0.42 (Pearson r values). Genetic modelling showed that the covariation between the variables could be explained by shared genetic influences. Neither shared, nor non-shared environment had a significant effect on the associations. Good fit was obtained with a two-factor model where one underlying shared genetic factor explained all the covariation between the musical discrimination tasks and IQ, and a second genetic factor explained variance exclusively shared among the discrimination tasks. The results suggest that positive correlations among musical aptitudes result from both genes with broad effects on cognition, and genes with potentially more specific influences on auditory functions. PMID:25419664

  14. Auditory training improves auditory performance in cochlear implanted children.

    PubMed

    Roman, Stephane; Rochette, Françoise; Triglia, Jean-Michel; Schön, Daniele; Bigand, Emmanuel

    2016-07-01

    While the positive benefits of pediatric cochlear implantation on language perception skills are now proven, the heterogeneity of outcomes remains high. Understanding this heterogeneity, and finding strategies to minimize it, is of utmost importance. Our aim here was to test the effects of an auditory training strategy, "Sound in Hands", which uses playful tasks grounded in the theoretical and empirical findings of cognitive science. Indeed, several basic auditory operations, such as auditory scene analysis (ASA), are not trained in the usual therapeutic interventions in deaf children. However, as they constitute a fundamental basis of auditory cognition, their development should yield general benefits in auditory processing and in turn enhance speech perception. The purpose of the present study was to determine whether cochlear implanted children could improve auditory performance in trained tasks and whether they could develop a transfer of learning to a phonetic discrimination test. Nineteen prelingually deafened children with unilateral cochlear implants and without additional handicap (4-10 years old) were recruited. The four main auditory cognitive processes (identification, discrimination, ASA and auditory memory) were stimulated and trained in the Experimental Group (EG) using Sound in Hands. The EG followed 20 weekly training sessions of 30 min, and the untrained group served as the control group (CG). Two measures were taken for both groups: before training (T1) and after training (T2). EG showed a significant improvement in the identification, discrimination and auditory memory tasks. The improvement in the ASA task did not reach significance. CG did not show any significant improvement in any of the tasks assessed. Most importantly, improvement was visible in the phonetic discrimination test for EG only. Moreover, younger children benefited more from the auditory training program in developing their phonetic abilities than older children, supporting the idea that rehabilitative care is most efficient when it takes place early in childhood. These results are important for pinpointing the auditory deficits in CI children and for gathering a better understanding of the links between basic auditory skills and speech perception, which will in turn allow more efficient rehabilitative programs. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. A Perceptuo-Cognitive-Motor Approach to the Special Child.

    ERIC Educational Resources Information Center

    Kornblum, Rena Beth

    A movement therapist reviews ways in which a perceptuo-cognitive approach can help handicapped children in learning and in social adjustment. She identifies specific auditory problems (hearing loss, sound-ground confusion, auditory discrimination, auditory localization, auditory memory, auditory sequencing), visual problems (visual acuity,…

  16. Auditory agnosia as a clinical symptom of childhood adrenoleukodystrophy.

    PubMed

    Furushima, Wakana; Kaga, Makiko; Nakamura, Masako; Gunji, Atsuko; Inagaki, Masumi

    2015-08-01

    To investigate detailed auditory features in patients with auditory impairment as the first clinical symptom of childhood adrenoleukodystrophy (CSALD). Three patients who had hearing difficulty as the first clinical sign and/or symptom of ALD were studied. Precise examination of the clinical characteristics of hearing and auditory function was performed, including assessments of pure tone audiometry, verbal sound discrimination, otoacoustic emission (OAE), and auditory brainstem response (ABR), as well as an environmental sound discrimination test, a sound lateralization test, and a dichotic listening test (DLT). The auditory pathway was evaluated by MRI in each patient. Poor response to calling was detected in all patients. Two patients were not aware of their hearing difficulty, and had at first been diagnosed with normal hearing by otolaryngologists. Pure-tone audiometry disclosed normal hearing in all patients. All patients showed a normal wave V ABR threshold. All three patients showed obvious difficulty in discriminating verbal sounds and environmental sounds and in sound lateralization, as well as strong left-ear suppression in the dichotic listening test. However, once they discriminated verbal sounds, they correctly understood the meaning. Two patients showed elongation of the I-V and III-V interwave intervals in ABR, but one showed no abnormality. MRIs of these three patients revealed signal changes in the auditory radiation as well as in other subcortical areas. The hearing features of these subjects were diagnosed as auditory agnosia and not aphasia. It should be emphasized that when patients are suspected to have hearing impairment but have no abnormalities in pure tone audiometry and/or ABR, this should not be diagnosed immediately as a psychogenic response or pathomimesis, but auditory agnosia must also be considered. Copyright © 2014 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  17. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.

    PubMed

    Reimers, Stian; Stewart, Neil

    2016-09-01

    Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specifications, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.

  18. Behavioural benefits of multisensory processing in ferrets.

    PubMed

    Hammond-Kenny, Amy; Bajo, Victoria M; King, Andrew J; Nodal, Fernando R

    2017-01-01

    Enhanced detection and discrimination, along with faster reaction times, are the most typical behavioural manifestations of the brain's capacity to integrate multisensory signals arising from the same object. In this study, we examined whether multisensory behavioural gains are observable across different components of the localization response that are potentially under the command of distinct brain regions. We measured the ability of ferrets to localize unisensory (auditory or visual) and spatiotemporally coincident auditory-visual stimuli of different durations that were presented from one of seven locations spanning the frontal hemifield. During the localization task, we recorded the head movements made following stimulus presentation, as a metric for assessing the initial orienting response of the ferrets, as well as the subsequent choice of which target location to approach to receive a reward. Head-orienting responses to auditory-visual stimuli were more accurate and faster than those made to visual but not auditory targets, suggesting that these movements were guided principally by sound alone. In contrast, approach-to-target localization responses were more accurate and faster to spatially congruent auditory-visual stimuli throughout the frontal hemifield than to either visual or auditory stimuli alone. Race model inequality analysis of head-orienting reaction times and approach-to-target response times indicates that different processes, probability summation and neural integration, respectively, are likely to be responsible for the effects of multisensory stimulation on these two measures of localization behaviour. © 2016 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
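    The race model inequality analysis referred to above can be illustrated with a small numeric sketch. The Python snippet below uses simulated reaction times (placeholders, not the authors' data) and checks Miller's bound: under probability summation the cumulative distribution of audiovisual reaction times should never exceed the sum of the two unisensory cumulative distributions, so time points where the observed distribution does exceed that sum are violations consistent with neural integration.

```python
# Sketch of a race model inequality check on reaction times (hypothetical data).
# The race model (probability summation) predicts:
#   P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t)  for every t.
# Violations suggest genuine multisensory integration rather than statistical facilitation.
import numpy as np

rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 200)    # auditory-only reaction times (ms), simulated
rt_v = rng.normal(360, 50, 200)    # visual-only reaction times (ms), simulated
rt_av = rng.normal(290, 35, 200)   # audiovisual reaction times (ms), simulated

def ecdf(samples, t):
    """Empirical cumulative distribution of `samples`, evaluated at times t."""
    return np.mean(samples[:, None] <= t[None, :], axis=0)

t_grid = np.linspace(200, 500, 61)                 # probe times in ms
bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
observed = ecdf(rt_av, t_grid)

violation = observed - bound                       # positive values violate the race model
print(f"max violation: {violation.max():.3f} at t = {t_grid[np.argmax(violation)]:.0f} ms")
```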

  19. The effects of context and musical training on auditory temporal-interval discrimination.

    PubMed

    Banai, Karen; Fisher, Shirley; Ganot, Ron

    2012-02-01

    Nonsensory factors such as stimulus context and musical experience are known to influence auditory frequency discrimination, but whether the context effect extends to auditory temporal processing remains unknown. Whether individual experiences such as musical training alter the context effect is also unknown. The goal of the present study was therefore to investigate the effects of stimulus context and musical experience on auditory temporal-interval discrimination. In experiment 1, temporal-interval discrimination was compared between fixed context conditions in which a single base temporal interval was presented repeatedly across all trials and variable context conditions in which one of two base intervals was randomly presented on each trial. Discrimination was significantly better in the fixed than in the variable context conditions. In experiment 2, temporal discrimination thresholds of musicians and non-musicians were compared across three conditions: a fixed context condition in which the target interval was presented repeatedly across trials, and two variable context conditions differing in the frequencies used for the tones marking the temporal intervals. Musicians outperformed non-musicians in all three conditions, but the effects of context were similar for the two groups. Overall, it appears that, like frequency discrimination, temporal-interval discrimination benefits from having a fixed reference. Musical experience, while improving performance, did not alter the context effect, suggesting that improved discrimination skills among musicians are probably not an outcome of more sensitive contextual facilitation or predictive coding mechanisms. Copyright © 2011 Elsevier B.V. All rights reserved.
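    The exact psychophysical procedure is not spelled out in this record, so the following Python sketch is only a loose illustration of the fixed versus variable (roving) context manipulation: a simulated listener with a fixed amount of internal timing noise is run through a 2-down/1-up adaptive staircase, with the base interval either held constant or drawn at random from two values on each trial. All parameters are made up, and this simple observer is deliberately context-insensitive, so the sketch illustrates the trial logic rather than the reported context effect.

```python
# Toy simulation of a temporal-interval discrimination staircase under a fixed
# vs. a variable (roving) base-interval context (illustrative only, not the authors' code).
import numpy as np

rng = np.random.default_rng(1)

def run_staircase(base_intervals_ms, jnd_noise_ms=8.0, n_trials=120):
    delta = 40.0                     # starting increment (ms)
    step = 4.0                       # staircase step size (ms)
    correct_streak = 0
    reversals = []
    last_direction = None
    for _ in range(n_trials):
        base = rng.choice(base_intervals_ms)          # one entry = fixed context, two = variable
        # Internal interval estimates are corrupted by Gaussian noise; the trial is
        # correct if the physically longer interval is also perceived as longer.
        est_base = base + rng.normal(0, jnd_noise_ms)
        est_cmp = base + delta + rng.normal(0, jnd_noise_ms)
        if est_cmp > est_base:
            correct_streak += 1
            if correct_streak == 2:                   # 2-down: make the task harder
                correct_streak = 0
                direction = "down"
                delta = max(delta - step, 1.0)
            else:
                direction = last_direction
        else:
            correct_streak = 0
            direction = "up"                          # 1-up: make the task easier
            delta += step
        if last_direction is not None and direction is not None and direction != last_direction:
            reversals.append(delta)
        last_direction = direction
    # Threshold estimate: mean increment at the last few reversals.
    return float(np.mean(reversals[-6:])) if reversals else delta

print("fixed context threshold (ms)   :", round(run_staircase([100.0]), 1))
print("variable context threshold (ms):", round(run_staircase([100.0, 200.0]), 1))
```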

  20. Effects of physiological aging on mismatch negativity: a meta-analysis.

    PubMed

    Cheng, Chia-Hsiung; Hsu, Wan-Yu; Lin, Yung-Yang

    2013-11-01

    Mismatch negativity (MMN) is a promising window on how the functional integrity of auditory sensory memory and change discrimination is modulated by age and relevant clinical conditions. However, the effects of aging on MMN have remained somewhat elusive, particularly at short interstimulus intervals (ISIs). We performed a meta-analysis of peer-reviewed MMN studies that had targeted both young and elderly adults to estimate the mean effect size. Nine studies, consisting of 29 individual investigations, were included, and the final total study population consisted of 182 young and 165 elderly subjects. The effects of deviant type and ISI duration on the effect size were assessed. The overall mean effect size was 0.63 (95% CI: 0.43-0.82). The effect sizes for long ISIs (>2 s; effect size 0.68, 95% CI: 0.31-1.06) and short ISIs (<2 s; effect size 0.61, 95% CI: 0.39-0.84) were both considered moderate. A further analysis showed a prominent aging-related decrease in MMN responses to duration and frequency changes at short ISIs. It was also interesting to note that the effect size was about 25% larger for the duration deviant condition than for the frequency deviant condition. In conclusion, a reduced MMN response to duration and frequency deviants is a robust feature among older adults, suggesting a decline in the functional integrity of central auditory processing in this population. © 2013.
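    The pooled effect size and confidence interval reported above are the kind of quantities produced by inverse-variance aggregation. The sketch below shows a minimal fixed-effect version of that computation; the per-study effect sizes and variances are made-up placeholders, and the actual meta-analysis may have used a different (e.g., random-effects) model.

```python
# Minimal inverse-variance (fixed-effect) aggregation of standardized effect sizes,
# illustrating how a pooled effect and its 95% CI are obtained in a meta-analysis.
# The effect sizes and variances below are made-up placeholders, not the reviewed studies.
import numpy as np

d = np.array([0.45, 0.80, 0.55, 0.70, 0.60])      # per-study standardized mean differences
var = np.array([0.05, 0.08, 0.04, 0.06, 0.05])    # per-study sampling variances

w = 1.0 / var                                     # inverse-variance weights
d_pooled = np.sum(w * d) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))
ci_low, ci_high = d_pooled - 1.96 * se_pooled, d_pooled + 1.96 * se_pooled

print(f"pooled effect size = {d_pooled:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```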

  1. Enhanced pure-tone pitch discrimination among persons with autism but not Asperger syndrome.

    PubMed

    Bonnel, Anna; McAdams, Stephen; Smith, Bennett; Berthiaume, Claude; Bertone, Armando; Ciocca, Valter; Burack, Jacob A; Mottron, Laurent

    2010-07-01

    Persons with autism spectrum disorders (ASD) display atypical perceptual processing in visual and auditory tasks. In vision, Bertone, Mottron, Jelenic, and Faubert (2005) found that enhanced and diminished visual processing is linked to the level of neural complexity required to process stimuli, as proposed in the neural complexity hypothesis. Based on these findings, Samson, Mottron, Jemel, Belin, and Ciocca (2006) proposed to extend the neural complexity hypothesis to the auditory modality. They hypothesized that persons with ASD should display enhanced performance for simple tones that are processed in primary auditory cortical regions, but diminished performance for complex tones that require additional processing in associative auditory regions, in comparison to typically developing individuals. To assess this hypothesis, we designed four auditory discrimination experiments targeting pitch, non-vocal and vocal timbre, and loudness. Stimuli consisted of spectro-temporally simple and complex tones. The participants were adolescents and young adults with autism, Asperger syndrome, and typical developmental histories, all with IQs in the normal range. Consistent with the neural complexity hypothesis and enhanced perceptual functioning model of ASD (Mottron, Dawson, Soulières, Hubert, & Burack, 2006), the participants with autism, but not with Asperger syndrome, displayed enhanced pitch discrimination for simple tones. However, no discrimination-threshold differences were found between the participants with ASD and the typically developing persons across spectrally and temporally complex conditions. These findings indicate that enhanced pure-tone pitch discrimination may be a cognitive correlate of speech delay among persons with ASD. However, auditory discrimination among this group does not appear to be directly contingent on the spectro-temporal complexity of the stimuli. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  2. Outline for Remediation of Problem Areas for Children with Learning Disabilities. Revised. = Bosquejo para la Correccion de Areas Problematicas para Ninos con Impedimientos del Aprendizaje.

    ERIC Educational Resources Information Center

    Bornstein, Joan L.

    The booklet outlines ways to help children with learning disabilities in specific subject areas. Characteristic behavior and remedial exercises are listed for seven areas of auditory problems: auditory reception, auditory association, auditory discrimination, auditory figure ground, auditory closure and sound blending, auditory memory, and grammar…

  3. Sex differences in audiovisual discrimination learning by Bengalese finches (Lonchura striata var. domestica).

    PubMed

    Seki, Yoshimasa; Okanoya, Kazuo

    2008-02-01

    Both visual and auditory information are important for songbirds, especially in developmental and sexual contexts. To investigate bimodal cognition in songbirds, the authors conducted audiovisual discrimination training in Bengalese finches. The authors used two types of stimulus: an "artificial stimulus," which is a combination of simple figures and sound, and a "biological stimulus," consisting of video images of singing males along with their songs. The authors found that while both sexes predominantly used visual cues in the discrimination tasks, males tended to be more dependent on auditory information for the biological stimulus. Female responses were always dependent on the visual stimulus for both stimulus types. Only males changed their discrimination strategy according to stimulus type. Although males used both visual and auditory cues for the biological stimulus, they responded to the artificial stimulus depending only on visual information, as the females did. These findings suggest a sex difference in innate auditory sensitivity. (c) 2008 APA.

  4. Behavioral Indications of Auditory Processing Disorders.

    ERIC Educational Resources Information Center

    Hartman, Kerry McGoldrick

    1988-01-01

    Identifies disruptive behaviors of children that may indicate central auditory processing disorders (CAPDs), perceptual handicaps of auditory discrimination or auditory memory not related to hearing ability. Outlines steps to modify the communication environment for CAPD children at home and in the classroom. (SV)

  5. Auditory-Motor Processing of Speech Sounds

    PubMed Central

    Möttönen, Riikka; Dutton, Rebekah; Watkins, Kate E.

    2013-01-01

    The motor regions that control movements of the articulators activate during listening to speech and contribute to performance in demanding speech recognition and discrimination tasks. Whether the articulatory motor cortex modulates auditory processing of speech sounds is unknown. Here, we aimed to determine whether the articulatory motor cortex affects the auditory mechanisms underlying discrimination of speech sounds in the absence of demanding speech tasks. Using electroencephalography, we recorded responses to changes in sound sequences, while participants watched a silent video. We also disrupted the lip or the hand representation in left motor cortex using transcranial magnetic stimulation. Disruption of the lip representation suppressed responses to changes in speech sounds, but not piano tones. In contrast, disruption of the hand representation had no effect on responses to changes in speech sounds. These findings show that disruptions within, but not outside, the articulatory motor cortex impair automatic auditory discrimination of speech sounds. The findings provide evidence for the importance of auditory-motor processes in efficient neural analysis of speech sounds. PMID:22581846

  6. Visual speech alters the discrimination and identification of non-intact auditory speech in children with hearing loss.

    PubMed

    Jerger, Susan; Damian, Markus F; McAlpine, Rachel P; Abdi, Hervé

    2017-03-01

    Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes yet we have little evidence about the role of early auditory experience and visual speech on the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets) as, for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to non-intact onset/rhyme in the auditory track (/-B/aa or /-B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/-B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more "same" as opposed to "different" responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /-B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more "Baz" as opposed to "az" responses in the audiovisual than auditory mode. Performance in the audiovisual mode showed more "same" responses for the intact vs. non-intact different pairs (e.g., Baa:/-B/aa) and more intact onset responses for nonword repetition (Baz for /-B/az). Thus visual speech altered both discrimination and identification in the CHL, to a large extent for the /B/ onsets but only minimally for the /G/ onsets. The CHL identified the stimuli similarly to the CNH but did not discriminate the stimuli similarly. A bias-free measure of the children's discrimination skills (i.e., d' analysis) revealed that the CHL had greater difficulty discriminating intact from non-intact speech in both modes. As the degree of HL worsened, the ability to discriminate the intact vs. non-intact onsets in the auditory mode worsened. Discrimination ability in CHL significantly predicted their identification of the onsets, even after variation due to the other variables was controlled. These results clearly established that visual speech can fill in non-intact auditory speech, and this effect, in turn, made the non-intact onsets more difficult to discriminate from intact speech and more likely to be perceived as intact. Such results 1) demonstrate the value of visual speech at multiple levels of linguistic processing and 2) support intervention programs that view visual speech as a powerful asset for developing spoken language in CHL. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
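    The "bias-free measure (d' analysis)" mentioned in this abstract can be sketched in a few lines: d' is the difference between the z-transformed hit rate and false-alarm rate. The counts below are hypothetical, and the log-linear correction is one common convention rather than necessarily the authors' choice.

```python
# Minimal d' (d-prime) computation of the kind used as a bias-free discrimination
# measure: d' = z(hit rate) - z(false-alarm rate). The trial counts are hypothetical.
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores when rates are 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: a child responds "different" on 18 of 24 truly different pairs (hits)
# and on 6 of 24 truly same pairs (false alarms).
print(round(dprime(hits=18, misses=6, false_alarms=6, correct_rejections=18), 2))
```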

  7. Visual Speech Alters the Discrimination and Identification of Non-Intact Auditory Speech in Children with Hearing Loss

    PubMed Central

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Hervé

    2017-01-01

    Objectives Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes yet we have little evidence about the role of early auditory experience and visual speech on the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Methods Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets) as, for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to non-intact onset/rhyme in the auditory track (/–B/aa or /–B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/–B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same—as opposed to different—responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /–B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz—as opposed to az— responses in the audiovisual than auditory mode. Results Performance in the audiovisual mode showed more same responses for the intact vs. non-intact different pairs (e.g., Baa:/–B/aa) and more intact onset responses for nonword repetition (Baz for/–B/az). Thus visual speech altered both discrimination and identification in the CHL—to a large extent for the /B/ onsets but only minimally for the /G/ onsets. The CHL identified the stimuli similarly to the CNH but did not discriminate the stimuli similarly. A bias-free measure of the children’s discrimination skills (i.e., d’ analysis) revealed that the CHL had greater difficulty discriminating intact from non-intact speech in both modes. As the degree of HL worsened, the ability to discriminate the intact vs. non-intact onsets in the auditory mode worsened. Discrimination ability in CHL significantly predicted their identification of the onsets—even after variation due to the other variables was controlled. Conclusions These results clearly established that visual speech can fill in non-intact auditory speech, and this effect, in turn, made the non-intact onsets more difficult to discriminate from intact speech and more likely to be perceived as intact. Such results 1) demonstrate the value of visual speech at multiple levels of linguistic processing and 2) support intervention programs that view visual speech as a powerful asset for developing spoken language in CHL. PMID:28167003

  8. Relationship between Auditory and Cognitive Abilities in Older Adults

    PubMed Central

    Sheft, Stanley

    2015-01-01

    Objective The objective was to evaluate the association of peripheral and central hearing abilities with cognitive function in older adults. Methods Recruited from epidemiological studies of aging and cognition at the Rush Alzheimer’s Disease Center, participants were a community-dwelling cohort of older adults (range 63–98 years) without diagnosis of dementia. The cohort contained roughly equal numbers of Black (n=61) and White (n=63) subjects with groups similar in terms of age, gender, and years of education. Auditory abilities were measured with pure-tone audiometry, speech-in-noise perception, and discrimination thresholds for both static and dynamic spectral patterns. Cognitive performance was evaluated with a 12-test battery assessing episodic, semantic, and working memory, perceptual speed, and visuospatial abilities. Results Among the auditory measures, only the static and dynamic spectral-pattern discrimination thresholds were associated with cognitive performance in a regression model that included the demographic covariates race, age, gender, and years of education. Subsequent analysis indicated substantial shared variance among the covariates race and both measures of spectral-pattern discrimination in accounting for cognitive performance. Among cognitive measures, working memory and visuospatial abilities showed the strongest interrelationship to spectral-pattern discrimination performance. Conclusions For a cohort of older adults without diagnosis of dementia, neither hearing thresholds nor speech-in-noise ability showed significant association with a summary measure of global cognition. In contrast, the two auditory metrics of spectral-pattern discrimination ability significantly contributed to a regression model prediction of cognitive performance, demonstrating association of central auditory ability to cognitive status using auditory metrics that avoided the confounding effect of speech materials. PMID:26237423

  9. Classification of passive auditory event-related potentials using discriminant analysis and self-organizing feature maps.

    PubMed

    Schönweiler, R; Wübbelt, P; Tolloczko, R; Rose, C; Ptok, M

    2000-01-01

    Discriminant analysis (DA) and self-organizing feature maps (SOFM) were used to classify passively evoked auditory event-related potentials (ERP) P(1), N(1), P(2) and N(2). Responses from 16 children with severe behavioral auditory perception deficits, 16 children with marked behavioral auditory perception deficits, and 14 controls were examined. Eighteen ERP amplitude parameters were selected for examination of statistical differences between the groups. Different DA methods and SOFM configurations were trained on these values. SOFM had better classification results than DA methods. Subsequently, measurements from another 37 subjects that were unknown to the trained SOFM were used to test the reliability of the system. With 10-dimensional vectors, reliable classifications were obtained that matched behavioral auditory perception deficits in 96% of cases, implying a central auditory processing disorder (CAPD). The results also support the assumption that CAPD includes a 'non-peripheral' auditory processing deficit. Copyright 2000 S. Karger AG, Basel.
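    A minimal sketch of the discriminant-analysis step described above is shown below, assuming scikit-learn and random placeholder feature vectors (18 ERP amplitude parameters per child, two groups); a self-organizing feature map would require a separate library and is omitted here.

```python
# Sketch of discriminant-analysis classification of ERP amplitude feature vectors,
# analogous in spirit to the DA step described above (random placeholder data,
# not the authors' recordings).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_per_group, n_features = 16, 18          # e.g., 18 ERP amplitude parameters per child

# Placeholder feature matrices for two groups (controls vs. suspected CAPD).
controls = rng.normal(0.0, 1.0, (n_per_group, n_features))
capd = rng.normal(0.6, 1.0, (n_per_group, n_features))   # shifted mean = group difference

X = np.vstack([controls, capd])
y = np.array([0] * n_per_group + [1] * n_per_group)

scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=4)
print("cross-validated classification accuracy:", scores.mean().round(2))
```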

  10. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    ERIC Educational Resources Information Center

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…

  11. Kansas Center for Research in Early Childhood Education Annual Report, FY 1973.

    ERIC Educational Resources Information Center

    Horowitz, Frances D.

    This monograph is a collection of papers describing a series of loosely related studies of visual attention, auditory stimulation, and language discrimination in young infants. Titles include: (1) Infant Attention and Discrimination: Methodological and Substantive Issues; (2) The Addition of Auditory Stimulation (Music) and an Interspersed…

  12. [Perception and selectivity of sound duration in the central auditory midbrain].

    PubMed

    Wang, Xin; Li, An-An; Wu, Fei-Jian

    2010-08-25

    Sound duration plays an important role in acoustic communication. Information in acoustic signals is mainly encoded in the amplitude and frequency spectra of sounds of different durations. Duration-selective neurons exist in the central auditory system, including the inferior colliculus (IC) of frogs, bats, mice, and chinchillas, and they are important for signal recognition and feature detection. Two generally accepted models, the "coincidence detector model" and the "anti-coincidence detector model", have been proposed to explain the mechanism of neural selectivity for sound duration, based on studies of IC neurons in bats. Although they differ in detail, both emphasize the importance of synaptic integration of excitatory and inhibitory inputs, and both can explain the responses of most duration-selective neurons. However, both hypotheses need refinement, since other sound parameters, such as spectral pattern, amplitude, and repetition rate, can affect the duration selectivity of these neurons. Dynamic changes in sound parameters are believed to enable animals to recognize behaviorally relevant acoustic signals effectively. Under free-field sound stimulation, we analyzed neural responses in the IC and auditory cortex of mice and bats to sounds of different duration, frequency, and amplitude, using intracellular or extracellular recording techniques. Based on our work and previous studies, this article reviews the properties of duration selectivity in the central auditory system and discusses the mechanisms of duration selectivity and the effect of other sound parameters on the duration coding of auditory neurons.
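    The "coincidence detector model" mentioned above can be captured in a toy computation: a delayed, onset-locked excitation and an offset-locked (rebound) excitation drive the model neuron strongly only when they arrive together, which happens for one particular stimulus duration. The sketch below is purely illustrative; the delay, coincidence window, and Gaussian overlap function are assumptions, not parameters taken from the literature reviewed here.

```python
# Toy sketch of the "coincidence detector" account of duration selectivity:
# an onset-locked excitation arrives after a fixed delay, an offset-locked
# (rebound) excitation arrives at stimulus offset, and the model neuron responds
# only when the two coincide. All time constants below are illustrative only.
import numpy as np

def duration_response(stim_dur_ms, onset_delay_ms=50.0, window_ms=8.0):
    """Response strength as the temporal proximity of onset- and offset-locked events."""
    onset_excitation_time = onset_delay_ms            # delayed, onset-locked excitation
    offset_excitation_time = stim_dur_ms              # offset-locked rebound excitation
    mismatch = abs(onset_excitation_time - offset_excitation_time)
    # Gaussian coincidence window: strong response only when the mismatch is near zero.
    return np.exp(-(mismatch / window_ms) ** 2)

for dur in (20, 35, 50, 65, 80, 110):
    print(f"{dur:4d} ms stimulus -> relative response {duration_response(dur):.2f}")
# The model responds best to ~50 ms sounds, i.e., band-pass duration tuning.
```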

  13. [Analysis of the speech discrimination scores of patients with congenital unilateral microtia and external auditory canal atresia in noise].

    PubMed

    Zhang, Y; Li, D D; Chen, X W

    2017-06-20

    Objective: A case-control study comparing the speech discrimination of patients with unilateral microtia and external auditory canal atresia with that of normal-hearing subjects in quiet and in noise, in order to understand the speech recognition performance of patients with unilateral external auditory canal atresia and provide a scientific basis for early clinical intervention. Method: Twenty patients with unilateral congenital microtia combined with external auditory canal atresia and 20 age-matched normal-hearing subjects (control group) participated. Speech discrimination scores (SDS) were measured for all subjects in the sound field, in quiet and in noise, using Mandarin speech audiometry materials. Result: There was no significant difference in speech discrimination scores between the two groups in quiet. Scores differed significantly when the speech signal was presented to the affected side and noise to the normal side (monosyllables, disyllables, and sentences; S/N=0 and S/N=-10) (P<0.05). There was no significant difference when the speech signal was presented to the normal side and noise to the affected side. When signal and noise were presented to the same side, the difference was statistically significant for monosyllabic word recognition (S/N=0 and S/N=-5) (P<0.05), whereas disyllabic words and sentences showed no statistically significant difference (P>0.05). Conclusion: The speech discrimination scores of patients with unilateral congenital microtia and external auditory canal atresia are lower in noise than those of normal-hearing subjects. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.

  14. The effect of superior auditory skills on vocal accuracy

    NASA Astrophysics Data System (ADS)

    Amir, Ofer; Amir, Noam; Kishon-Rabin, Liat

    2003-02-01

    The relationship between auditory perception and vocal production has typically been investigated by evaluating the effect of either altered or degraded auditory feedback on speech production in either normal-hearing or hearing-impaired individuals. Our goal in the present study was to examine this relationship in individuals with superior auditory abilities. Thirteen professional musicians and thirteen nonmusicians, with no vocal or singing training, participated in this study. For vocal production accuracy, subjects were presented with three tones. They were asked to reproduce the pitch using the vowel /a/. This procedure was repeated three times. The fundamental frequency of each production was measured using an autocorrelation pitch detection algorithm designed for this study. The musicians' superior auditory abilities (compared to the nonmusicians) were established in a frequency discrimination task reported elsewhere. Results indicate that (a) musicians had better vocal production accuracy than nonmusicians (production errors of half a semitone compared to 1.3 semitones, respectively); (b) frequency discrimination thresholds explain 43% of the variance of the production data, and (c) all subjects with superior frequency discrimination thresholds showed accurate vocal production; the reverse relationship, however, does not hold true. In this study we provide empirical evidence of the importance of auditory feedback for vocal production in listeners with superior auditory skills.
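    The abstract mentions an autocorrelation pitch detection algorithm and production errors expressed in semitones. The sketch below is not the authors' implementation; it estimates F0 from the autocorrelation peak of a synthetic, vowel-like placeholder signal and converts the mismatch with a target pitch into semitones (12 * log2(f_produced / f_target)).

```python
# Minimal autocorrelation F0 estimator plus semitone-error computation, in the spirit
# of the procedure described above (not the authors' implementation). The synthetic
# "produced" vowel below is a placeholder signal.
import numpy as np

fs = 16000                                   # sampling rate (Hz)
target_f0 = 220.0                            # target pitch to reproduce (Hz)
produced_f0_true = 226.0                     # the simulated singer is slightly sharp

t = np.arange(0, 0.1, 1.0 / fs)
# Crude vowel-like signal: a few harmonics of the produced fundamental.
signal = sum(np.sin(2 * np.pi * produced_f0_true * k * t) / k for k in (1, 2, 3))

def estimate_f0(x, fs, fmin=80.0, fmax=500.0):
    """Return fs / best_lag, where best_lag maximizes the autocorrelation in a plausible F0 range."""
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]     # non-negative lags only
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    best_lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return fs / best_lag

estimated_f0 = estimate_f0(signal, fs)
error_semitones = 12 * np.log2(estimated_f0 / target_f0)
print(f"estimated F0 = {estimated_f0:.1f} Hz, error = {error_semitones:.2f} semitones")
```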

  15. Attention-dependent sound offset-related brain potentials.

    PubMed

    Horváth, János

    2016-05-01

    When performing sensory tasks, knowing the potentially occurring goal-relevant and irrelevant stimulus events allows the establishment of selective attention sets, which result in enhanced sensory processing of goal-relevant events. In the auditory modality, such enhancements are reflected in the increased amplitude of the N1 ERP elicited by the onsets of task-relevant sounds. It has been recently suggested that ERPs to task-relevant sound offsets are similarly enhanced in a tone-focused state in comparison to a distracted one. The goal of the present study was to explore the influence of attention on ERPs elicited by sound offsets. ERPs elicited by tones in a duration-discrimination task were compared to ERPs elicited by the same tones in a non-tone-focused attentional setting. Tone offsets elicited a consistent, attention-dependent biphasic (positive-negative, P1-N1) ERP waveform for tone durations ranging from 150 to 450 ms. The evidence, however, did not support the notion that the offset-related ERPs reflected an offset-specific attention set: The offset-related ERPs elicited in a duration-discrimination condition (in which offsets were task relevant) did not significantly differ from those elicited in a pitch-discrimination condition (in which the offsets were task irrelevant). Although an N2 reflecting the processing of offsets in task-related terms contributed to the observed waveform, this contribution was separable from the offset-related P1 and N1. The results demonstrate that when tones are attended, offset-related ERPs may substantially overlap endogenous ERP activity in the postoffset interval irrespective of tone duration, and attention differences may cause ERP differences in such postoffset intervals. © 2016 Society for Psychophysiological Research.

  16. [In Process Citation]

    PubMed

    Ackermann; Mathiak

    1999-11-01

    Pure word deafness (auditory verbal agnosia) is characterized by an impairment of auditory comprehension, repetition of verbal material, and writing to dictation, whereas spontaneous speech production and reading remain largely unaffected. Sometimes this syndrome is preceded by complete deafness (cortical deafness) of varying duration. Perception of vowels and of suprasegmental features of verbal utterances (e.g., intonation contours) seems to be less disrupted than the processing of consonants and, therefore, might mediate residual auditory functions. Often, lip reading and/or a slowed speaking rate can, within some limits, compensate for the speech comprehension deficits. Apart from a few exceptions, the available reports of pure word deafness documented a bilateral temporal lesion. In these instances, as a rule, identification of nonverbal (environmental) sounds, perception of music, temporal resolution of sequential auditory cues, and/or spatial localization of acoustic events were compromised as well. The observed variable constellation of auditory signs and symptoms in central hearing disorders following bilateral temporal lesions most probably reflects the multitude of functional maps at the level of the auditory cortices, each subserving the encoding of specific stimulus parameters, as documented in a variety of non-human species. Thus, verbal/nonverbal auditory agnosia may be considered a paradigm of distorted "auditory scene analysis" (Bregman 1990) affecting both primitive and schema-based perceptual processes. It cannot be excluded, however, that disconnection of the Wernicke area from auditory input (Geschwind 1965) and/or an impairment of the suggested "phonetic module" (Liberman 1996) contribute to the observed deficits as well. Conceivably, these latter mechanisms underlie the rare cases of pure word deafness following a lesion restricted to the dominant hemisphere. Only a few instances of a rather isolated disruption of the discrimination/identification of nonverbal sound sources, in the presence of uncompromised speech comprehension, have been reported so far (nonverbal auditory agnosia). As a rule, unilateral right-sided damage has been found to be the relevant lesion.

  17. Brain Activation Patterns in Response to Conspecific and Heterospecific Social Acoustic Signals in Female Plainfin Midshipman Fish, Porichthys notatus.

    PubMed

    Mohr, Robert A; Chang, Yiran; Bhandiwad, Ashwin A; Forlano, Paul M; Sisneros, Joseph A

    2018-01-01

    While the peripheral auditory system of fish has been well studied, less is known about how the fish's brain and central auditory system process complex social acoustic signals. The plainfin midshipman fish, Porichthys notatus, has become a good species for investigating the neural basis of acoustic communication because the production and reception of acoustic signals is paramount for this species' reproductive success. Nesting males produce long-duration advertisement calls that females detect and localize among the noise in the intertidal zone to successfully find mates and spawn. How female midshipman are able to discriminate male advertisement calls from environmental noise and other acoustic stimuli is unknown. Using the immediate early gene product cFos as a marker for neural activity, we quantified neural activation of the ascending auditory pathway in female midshipman exposed to conspecific advertisement calls, heterospecific white seabass calls, or ambient environment noise. We hypothesized that auditory hindbrain nuclei would be activated by general acoustic stimuli (ambient noise and other biotic acoustic stimuli) whereas auditory neurons in the midbrain and forebrain would be selectively activated by conspecific advertisement calls. We show that neural activation in two regions of the auditory hindbrain, i.e., the rostral intermediate division of the descending octaval nucleus and the ventral division of the secondary octaval nucleus, did not differ via cFos immunoreactive (cFos-ir) activity when exposed to different acoustic stimuli. In contrast, female midshipman exposed to conspecific advertisement calls showed greater cFos-ir in the nucleus centralis of the midbrain torus semicircularis compared to fish exposed only to ambient noise. No difference in cFos-ir was observed in the torus semicircularis of animals exposed to conspecific versus heterospecific calls. However, cFos-ir was greater in two forebrain structures that receive auditory input, i.e., the central posterior nucleus of the thalamus and the anterior tuberal hypothalamus, when exposed to conspecific calls versus either ambient noise or heterospecific calls. Our results suggest that higher-order neurons in the female midshipman midbrain torus semicircularis, thalamic central posterior nucleus, and hypothalamic anterior tuberal nucleus may be necessary for the discrimination of complex social acoustic signals. Furthermore, neurons in the central posterior and anterior tuberal nuclei are differentially activated by exposure to conspecific versus other acoustic stimuli. © 2018 S. Karger AG, Basel.

  18. Fitting prelingually deafened adult cochlear implant users based on electrode discrimination performance.

    PubMed

    Debruyne, Joke A; Francart, Tom; Janssen, A Miranda L; Douma, Kim; Brokx, Jan P L

    2017-03-01

    This study investigated the hypotheses that (1) prelingually deafened CI users do not have perfect electrode discrimination ability and (2) the deactivation of non-discriminable electrodes can improve auditory performance. Electrode discrimination difference limens were determined for all electrodes of the array. The subjects' basic map was subsequently compared to an experimental map, which contained only discriminable electrodes, with respect to speech understanding in quiet and in noise, listening effort, spectral ripple discrimination and subjective appreciation. Subjects were six prelingually deafened, late implanted adults using the Nucleus cochlear implant. Electrode discrimination difference limens across all subjects and electrodes ranged from 0.5 to 7.125, with significantly larger limens for basal electrodes. No significant differences were found between the basic map and the experimental map on auditory tests. Subjective appreciation was found to be significantly poorer for the experimental map. Prelingually deafened CI users were unable to discriminate between all adjacent electrodes. There was no difference in auditory performance between the basic and experimental map. Potential factors contributing to the absence of improvement with the experimental map include the reduced number of maxima, incomplete adaptation to the new frequency allocation, and the mainly basal location of deactivated electrodes.

  19. Attentional but not pre-attentive neural measures of auditory discrimination are atypical in children with developmental language disorder.

    PubMed

    Kornilov, Sergey A; Landi, Nicole; Rakhlin, Natalia; Fang, Shin-Yi; Grigorenko, Elena L; Magnuson, James S

    2014-01-01

    We examined neural indices of pre-attentive phonological and attentional auditory discrimination in children with developmental language disorder (DLD, n = 23) and typically developing (n = 16) peers from a geographically isolated Russian-speaking population with an elevated prevalence of DLD. Pre-attentive phonological MMN components were robust and did not differ in two groups. Children with DLD showed attenuated P3 and atypically distributed P2 components in the attentional auditory discrimination task; P2 and P3 amplitudes were linked to working memory capacity, development of complex syntax, and vocabulary. The results corroborate findings of reduced processing capacity in DLD and support a multifactorial view of the disorder.

  20. Speech sound discrimination training improves auditory cortex responses in a rat model of autism

    PubMed Central

    Engineer, Crystal T.; Centanni, Tracy M.; Im, Kwok W.; Kilgard, Michael P.

    2014-01-01

    Children with autism often have language impairments and degraded cortical responses to speech. Extensive behavioral interventions can improve language outcomes and cortical responses. Prenatal exposure to the antiepileptic drug valproic acid (VPA) increases the risk for autism and language impairment. Prenatal exposure to VPA also causes weaker and delayed auditory cortex responses in rats. In this study, we document speech sound discrimination ability in VPA exposed rats and document the effect of extensive speech training on auditory cortex responses. VPA exposed rats were significantly impaired at consonant, but not vowel, discrimination. Extensive speech training resulted in both stronger and faster anterior auditory field (AAF) responses compared to untrained VPA exposed rats, and restored responses to control levels. This neural response improvement generalized to non-trained sounds. The rodent VPA model of autism may be used to improve the understanding of speech processing in autism and contribute to improving language outcomes. PMID:25140133

  1. Transformation of temporal sequences in the zebra finch auditory system

    PubMed Central

    Lim, Yoonseob; Lagoy, Ryan; Shinn-Cunningham, Barbara G; Gardner, Timothy J

    2016-01-01

    This study examines how temporally patterned stimuli are transformed as they propagate from primary to secondary zones in the thalamorecipient auditory pallium in zebra finches. Using a new class of synthetic click stimuli, we find a robust mapping from temporal sequences in the primary zone to distinct population vectors in secondary auditory areas. We tested whether songbirds could discriminate synthetic click sequences in an operant setup and found that a robust behavioral discrimination is present for click sequences composed of intervals ranging from 11 ms to 40 ms, but breaks down for stimuli composed of longer inter-click intervals. This work suggests that the analog of the songbird auditory cortex transforms temporal patterns to sequence-selective population responses or ‘spatial codes', and that these distinct population responses contribute to behavioral discrimination of temporally complex sounds. DOI: http://dx.doi.org/10.7554/eLife.18205.001 PMID:27897971

  2. Auditory Frequency Discrimination in Children with Specific Language Impairment: A Longitudinal Study

    ERIC Educational Resources Information Center

    Hill, P. R.; Hogben, J. H.; Bishop, D. M. V.

    2005-01-01

    It has been proposed that specific language impairment (SLI) is caused by an impairment of auditory processing, but it is unclear whether this problem affects temporal processing, frequency discrimination (FD), or both. Furthermore, there are few longitudinal studies in this area, making it hard to establish whether any deficit represents a…

  3. A Rapid Assessment of Instructional Strategies to Teach Auditory-Visual Conditional Discriminations to Children with Autism

    ERIC Educational Resources Information Center

    Kodak, Tiffany; Clements, Andrea; LeBlanc, Brittany

    2013-01-01

    The purpose of the present investigation was to evaluate a rapid assessment procedure to identify effective instructional strategies to teach auditory-visual conditional discriminations to children diagnosed with autism. We replicated and extended previous rapid skills assessments (Lerman, Vorndran, Addison, & Kuhn, 2004) by evaluating the effects…

  4. Infants' Auditory Enumeration: Evidence for Analog Magnitudes in the Small Number Range

    ERIC Educational Resources Information Center

    vanMarle, Kristy; Wynn, Karen

    2009-01-01

    Vigorous debate surrounds the issue of whether infants use different representational mechanisms to discriminate small and large numbers. We report evidence for ratio-dependent performance in infants' discrimination of small numbers of auditory events, suggesting that infants can use analog magnitudes to represent small values, at least in the…

  5. Perceptual and academic patterns of learning-disabled/gifted students.

    PubMed

    Waldron, K A; Saphire, D G

    1992-04-01

    This research explored ways gifted children with learning disabilities perceive and recall auditory and visual input and apply this information to reading, mathematics, and spelling. 24 learning-disabled/gifted children and a matched control group of normally achieving gifted students were tested for oral reading, word recognition and analysis, listening comprehension, and spelling. In mathematics, they were tested for numeration, mental and written computation, word problems, and numerical reasoning. To explore perception and memory skills, students were administered formal tests of visual and auditory memory as well as auditory discrimination of sounds. Their responses to reading and to mathematical computations were further considered for evidence of problems in visual discrimination, visual sequencing, and visual spatial areas. Analyses indicated that these learning-disabled/gifted students were significantly weaker than controls in their decoding skills, in spelling, and in most areas of mathematics. They were also significantly weaker in auditory discrimination and memory, and in visual discrimination, sequencing, and spatial abilities. Conclusions are that these underlying perceptual and memory deficits may be related to students' academic problems.

  6. Visual adaptation enhances action sound discrimination.

    PubMed

    Barraclough, Nick E; Page, Steve A; Keefe, Bruce D

    2017-01-01

    Prolonged exposure, or adaptation, to a stimulus in 1 modality can bias, but also enhance, perception of a subsequent stimulus presented within the same modality. However, recent research has also found that adaptation in 1 modality can bias perception in another modality. Here, we show a novel crossmodal adaptation effect, where adaptation to a visual stimulus enhances subsequent auditory perception. We found that when compared to no adaptation, prior adaptation to visual, auditory, or audiovisual hand actions enhanced discrimination between 2 subsequently presented hand action sounds. Discrimination was most enhanced when the visual action "matched" the auditory action. In addition, prior adaptation to a visual, auditory, or audiovisual action caused subsequent ambiguous action sounds to be perceived as less like the adaptor. In contrast, these crossmodal action aftereffects were not generated by adaptation to the names of actions. Enhanced crossmodal discrimination and crossmodal perceptual aftereffects may result from separate mechanisms operating in audiovisual action sensitive neurons within perceptual systems. Adaptation-induced crossmodal enhancements cannot be explained by postperceptual responses or decisions. More generally, these results together indicate that adaptation is a ubiquitous mechanism for optimizing perceptual processing of multisensory stimuli.

  7. Auditory Discrimination of Lexical Stress Patterns in Hearing-Impaired Infants with Cochlear Implants Compared with Normal Hearing: Influence of Acoustic Cues and Listening Experience to the Ambient Language.

    PubMed

    Segal, Osnat; Houston, Derek; Kishon-Rabin, Liat

    2016-01-01

    To assess discrimination of lexical stress patterns in infants with cochlear implants (CI) compared with infants with normal hearing (NH). While criteria for cochlear implantation have expanded to infants as young as 6 months, little is known regarding infants' processing of suprasegmental-prosodic cues, which are known to be important for the first stages of language acquisition. Lexical stress is an example of such a cue, which, in hearing infants, has been shown to assist in segmenting words from fluent speech and in distinguishing between words that differ only in their stress pattern. To date, however, there are no data on the ability of infants with CIs to perceive lexical stress. Such information will provide insight into the speech characteristics that are available to these infants in their first steps of language acquisition. This is of particular interest given the known limitations of the CI device in transmitting speech information that is mediated by changes in fundamental frequency. Two groups of infants participated in this study. The first group included 20 profoundly hearing-impaired infants with CI, 12 to 33 months old, implanted under the age of 2.5 years (median age of implantation = 14.5 months), with 1 to 6 months of CI use (mean = 2.7 months) and no known additional problems. The second group included 48 NH infants, 11 to 14 months old, with normal development and no known risk factors for developmental delays. Infants were tested on their ability to discriminate between nonsense words that differed in their stress pattern only (/dóti/ versus /dotí/ and /dotí/ versus /dóti/) using the visual habituation procedure. The measure for discrimination was the change in looking time between the last habituation trial (e.g., /dóti/) and the novel trial (e.g., /dotí/). (1) Infants with CI showed discrimination between lexical stress patterns with only limited auditory experience with their implant device, (2) discrimination of stress patterns in infants with CI was reduced compared with that of infants with NH, and (3) both groups showed directional asymmetry in discrimination, that is, increased discrimination from the uncommon to the common stress pattern in Hebrew (/dóti/ versus /dotí/) compared with the reversed condition. The CI device transmitted sufficient acoustic information (amplitude, duration, and fundamental frequency) to allow discrimination between stress patterns in young hearing-impaired infants with CI. The present pattern of results supports a discrimination model in which both auditory capabilities and "top-down" interactions are involved. That is, the CI infants detected changes between stressed and unstressed syllables, after which they developed a bias for the more common weak-strong stress pattern in Hebrew. The latter suggests that infants with CI were able to extract the statistical distribution of stress patterns by listening to the ambient language even after limited auditory experience with the CI device. To conclude, in relation to the processing of lexical stress patterns, infants with CI followed developmental milestones similar to those of hearing infants, thus establishing important prerequisites for early language acquisition.

  8. Pre-attentive auditory discrimination skill in Indian classical vocal musicians and non-musicians.

    PubMed

    Sanju, Himanshu Kumar; Kumar, Prawin

    2016-09-01

    To test for pre-attentive auditory discrimination skills in Indian classical vocal musicians and non-musicians. Mismatch negativity (MMN) was recorded to test pre-attentive auditory discrimination skills with a pair of stimuli of 1000 Hz and 1100 Hz, with 1000 Hz as the frequent stimulus and 1100 Hz as the infrequent stimulus. Onset, offset, and peak latencies were the latency parameters considered, whereas peak amplitude and area under the curve were considered for amplitude analysis. Fifty participants were included in the study: the experimental group comprised 25 adult Indian classical vocal musicians, and 25 age-matched non-musicians served as the control group. Participants in the experimental group had at least 10 years of professional experience in Indian classical vocal music, whereas control group participants had no formal training in music. Descriptive statistics showed better waveform morphology in the experimental group compared to the control group. MANOVA showed significantly better onset latency, peak amplitude, and area under the curve in the experimental group, but no significant difference in offset and peak latencies between the two groups. The present study probably points towards an enhancement of pre-attentive auditory discrimination skills in Indian classical vocal musicians compared to non-musicians. It indicates that Indian classical musical training enhances pre-attentive auditory discrimination skills, leading to higher peak amplitude and a greater area under the curve compared to non-musicians.

  9. Utilizing Oral-Motor Feedback in Auditory Conceptualization.

    ERIC Educational Resources Information Center

    Howard, Marilyn

    The Auditory Discrimination in Depth (ADD) program, an oral-motor approach to beginning reading instruction, trains first grade children in auditory skills by a process in which language and oral-motor feedback are used to integrate auditory properties with visual properties. This emphasis of the ADD program makes the child's perceptual…

  10. Executive function deficits in team sport athletes with a history of concussion revealed by a visual-auditory dual task paradigm.

    PubMed

    Tapper, Anthony; Gonzalez, Dave; Roy, Eric; Niechwiej-Szwedo, Ewa

    2017-02-01

    The purpose of this study was to examine executive functions in team sport athletes with and without a history of concussion. Executive functions comprise many cognitive processes including, working memory, attention and multi-tasking. Past research has shown that concussions cause difficulties in vestibular-visual and vestibular-auditory dual-tasking, however, visual-auditory tasks have been examined rarely. Twenty-nine intercollegiate varsity ice hockey athletes (age = 19.13, SD = 1.56; 15 females) performed an experimental dual-task paradigm that required simultaneously processing visual and auditory information. A brief interview, event description and self-report questionnaires were used to assign participants to each group (concussion, no-concussion). Eighteen athletes had a history of concussion and 11 had no concussion history. The two tests involved visuospatial working memory (i.e., Corsi block test) and auditory tone discrimination. Participants completed both tasks individually, then simultaneously. Two outcome variables were measured, Corsi block memory span and auditory tone discrimination accuracy. No differences were shown when each task was performed alone; however, athletes with a history of concussion had a significantly worse performance on the tone discrimination task in the dual-task condition. In conclusion, long-term deficits in executive functions were associated with a prior history of concussion when cognitive resources were stressed. Evaluations of executive functions and divided attention appear to be helpful in discriminating participants with and without a history concussion.

  11. The Persian version of auditory word discrimination test (P-AWDT) for children: Development, validity, and reliability.

    PubMed

    Hashemi, Nassim; Ghorbani, Ali; Soleymani, Zahra; Kamali, Mohmmad; Ahmadi, Zohreh Ziatabar; Mahmoudian, Saeid

    2018-07-01

    Auditory discrimination of speech sounds is an important perceptual ability and a precursor to the acquisition of language. Auditory information is at least partially necessary for the acquisition and organization of phonological rules. There are few standardized behavioral tests to evaluate phonemic distinctive features in children with or without speech and language disorders. The main objective of the present study was to develop the Persian version of the auditory word discrimination test (P-AWDT) for 4-8-year-old children and to assess its validity and reliability. A total of 120 typical children and 40 children with speech sound disorder (SSD) participated in the present study. The test comprised 160 monosyllabic word pairs distributed across Forms A-1 and A-2 for the initial consonants (80 words) and Forms B-1 and B-2 for the final consonants (80 words). Moreover, the discrimination of vowels was randomly included in all forms. Content validity was calculated, and 50 children repeated the test twice with a two-week interval (test-retest reliability). Further analyses included validity, the intraclass correlation coefficient (ICC), Cronbach's alpha (internal consistency), and comparisons across age groups and gender. The content validity index (CVI) and the test-retest reliability of the P-AWDT were 63%-86% and 81%-96%, respectively. Moreover, the total Cronbach's alpha for internal consistency was relatively high (0.93). Comparison of the mean P-AWDT scores of the typical children and the children with SSD revealed a significant difference: the group with SSD had a more severe deficit in auditory word discrimination than the typical group. In addition, the difference between the age groups was statistically significant, especially for 4-4.11-year-old children. The performance of the two gender groups was similar. The comparison of the P-AWDT scores between the typical children and the children with SSD demonstrated differences in auditory phonological discrimination ability in both initial and final positions. These findings suggest that the P-AWDT meets appropriate validity and reliability criteria. The P-AWDT can be used to measure the distinctive features of phonemes and the auditory discrimination of initial and final consonants and middle vowels of words in 4-8-year-old typical children and children with SSD. Copyright © 2018. Published by Elsevier B.V.
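    The internal-consistency figure reported above (Cronbach's alpha) is computed from an item-by-subject score matrix. The sketch below shows the standard formula on made-up binary item scores; it is an illustration of the statistic, not a re-analysis of the P-AWDT data.

```python
# Minimal Cronbach's alpha computation of the kind used to report internal
# consistency above. The 0/1 item scores below are made-up placeholders.
import numpy as np

rng = np.random.default_rng(3)
# scores[s, i] = 1 if subject s answered item i correctly (20 subjects x 10 items);
# a shared "ability" term makes the items positively correlated.
ability = rng.normal(0, 1, (20, 1))
scores = (rng.normal(0, 1, (20, 10)) < ability).astype(float)

k = scores.shape[1]                                   # number of items
item_variances = scores.var(axis=0, ddof=1)
total_variance = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```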

  12. Effects of Long-Term Musical Training on Cortical Evoked Auditory Potentials

    PubMed Central

    Brown, Carolyn J.; Jeon, Eun-Kyung; Driscoll, Virginia; Mussoi, Bruna; Deshpande, Shruti Balvalli; Gfeller, Kate; Abbas, Paul

    2016-01-01

    Objective Evidence suggests that musicians, as a group, have superior frequency resolution abilities when compared to non-musicians. It is possible to assess auditory discrimination using either behavioral or electrophysiologic methods. The purpose of this study was to determine if the auditory change complex (ACC) is sensitive enough to reflect the differences in spectral processing exhibited by musicians and non-musicians. Design Twenty individuals (10 musicians and 10 non-musicians) participated in this study. Pitch and spectral ripple discrimination were assessed using both behavioral and electrophysiologic methods. Behavioral measures were obtained using a standard three interval, forced choice procedure and the ACC was recorded and used as an objective (i.e. non-behavioral) measure of discrimination between two auditory signals. The same stimuli were used for both psychophysical and electrophysiologic testing. Results As a group, musicians were able to detect smaller changes in pitch than non-musicians. They also were able to detect a shift in the position of the peaks and valleys in a ripple noise stimulus at higher ripple densities than non-musicians. ACC responses recorded from musicians were larger than those recorded from non-musicians when the amplitude of the ACC response was normalized to the amplitude of the onset response in each stimulus pair. Visual detection thresholds derived from the evoked potential data were better for musicians than non-musicians regardless of whether the task was discrimination of musical pitch or detection of a change in the frequency spectrum of the rippled noise stimuli. Behavioral measures of discrimination were generally more sensitive than the electrophysiologic measures; however, the two metrics were correlated. Conclusions Perhaps as a result of extensive training, musicians are better able to discriminate spectrally complex acoustic signals than non-musicians. Those differences are evident not only in perceptual/behavioral tests, but also in electrophysiologic measures of neural response at the level of the auditory cortex. While these results are based on observations made from normal hearing listeners, they suggest that the ACC may provide a non-behavioral method of assessing auditory discrimination and as a result might prove useful in future studies that explore the efficacy of participation in a musically based, auditory training program perhaps geared toward pediatric and/or hearing-impaired listeners. PMID:28225736
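    Spectral ripple discrimination stimuli of the kind described above are commonly built as a dense set of tone components whose levels follow a sinusoidal ripple on a log-frequency axis; shifting the ripple phase swaps peaks and valleys. The sketch below is a generic illustration with assumed parameter values, not the exact stimuli used in this study.

```python
# Sketch of a rippled-noise stimulus of the kind used for spectral ripple
# discrimination: many tone components whose levels follow a sinusoidal ripple
# on a log-frequency axis. Parameter values are illustrative only.
import numpy as np

fs = 44100
dur = 0.5
ripple_density = 2.0        # ripples per octave
ripple_depth_db = 20.0      # peak-to-valley depth in dB
phase = 0.0                 # shifting this phase moves peaks to valleys ("inverted" ripple)

rng = np.random.default_rng(4)
t = np.arange(0, dur, 1.0 / fs)
freqs = np.geomspace(100.0, 8000.0, 200)             # log-spaced components, 100 Hz - 8 kHz

octaves = np.log2(freqs / freqs[0])
level_db = 0.5 * ripple_depth_db * np.sin(2 * np.pi * ripple_density * octaves + phase)
amps = 10.0 ** (level_db / 20.0)

# Sum the components with random starting phases, then normalize for playback.
components = amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t[None, :]
                                    + rng.uniform(0, 2 * np.pi, (len(freqs), 1)))
stimulus = components.sum(axis=0)
stimulus /= np.abs(stimulus).max()
print(stimulus.shape, stimulus.min(), stimulus.max())
```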

  13. Differentiating maturational and training influences on fMRI activation during music processing.

    PubMed

    Ellis, Robert J; Norton, Andrea C; Overy, Katie; Winner, Ellen; Alsop, David C; Schlaug, Gottfried

    2012-04-15

    Two major influences on how the brain processes music are maturational development and active musical training. Previous functional neuroimaging studies investigating music processing have typically focused on either categorical differences between "musicians versus nonmusicians" or "children versus adults." In the present study, we explored a cross-sectional data set (n=84) using multiple linear regression to isolate the performance-independent effects of age (5 to 33 years) and cumulative duration of musical training (0 to 21,000 practice hours) on fMRI activation similarities and differences between melodic discrimination (MD) and rhythmic discrimination (RD). Age-related effects common to MD and RD were present in three left hemisphere regions: temporofrontal junction, ventral premotor cortex, and the inferior part of the intraparietal sulcus, regions involved in active attending to auditory rhythms, sensorimotor integration, and working memory transformations of pitch and rhythmic patterns. By contrast, training-related effects common to MD and RD were localized to the posterior portion of the left superior temporal gyrus/planum temporale, an area implicated in spectrotemporal pattern matching and auditory-motor coordinate transformations. A single cluster in right superior temporal gyrus showed significantly greater activation during MD than RD. This is the first fMRI study to distinguish maturational from training effects during music processing. Copyright © 2012 Elsevier Inc. All rights reserved.
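
    To make the regression logic concrete, here is a minimal sketch with entirely made-up data (not the study's) of how age and cumulative practice hours can be entered as simultaneous predictors of a regional activation estimate, so that each coefficient reflects one influence with the other held constant; variable names are illustrative, and the study's actual model presumably also included performance covariates.

```python
import numpy as np

# Hypothetical per-subject data: age in years, cumulative practice in hours,
# and a regional fMRI activation estimate (arbitrary units).
rng = np.random.default_rng(0)
n = 84
age = rng.uniform(5, 33, n)
practice = rng.uniform(0, 21000, n)
activation = 0.02 * age + 5e-5 * practice + rng.normal(0, 0.3, n)

# Design matrix with an intercept; each fitted beta estimates the effect of
# one predictor while holding the other constant.
X = np.column_stack([np.ones(n), age, practice])
beta, *_ = np.linalg.lstsq(X, activation, rcond=None)
print("intercept, age effect, training effect:", beta)
```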

  14. The impact of negative affect on reality discrimination.

    PubMed

    Smailes, David; Meins, Elizabeth; Fernyhough, Charles

    2014-09-01

    People who experience auditory hallucinations tend to show weak reality discrimination skills, so that they misattribute internal, self-generated events to an external, non-self source. We examined whether inducing negative affect in healthy young adults would increase their tendency to make external misattributions on a reality discrimination task. Participants (N = 54) received one of three mood inductions (one neutral, two negative) and then performed an auditory signal detection task to assess reality discrimination. Participants who received either of the two negative inductions made more false alarms, but not more hits, than participants who received the neutral induction, indicating that negative affect makes participants more likely to misattribute internal, self-generated events to an external, non-self source. These findings are drawn from an analogue sample, and research is required to examine whether negative affect also impairs reality discrimination in patients who experience auditory hallucinations. These findings show that negative affect disrupts reality discrimination and suggest one way in which negative affect may lead to hallucinatory experiences. Copyright © 2014 Elsevier Ltd. All rights reserved.
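
    The hits/false-alarm logic of the auditory signal detection task can be summarized with standard signal detection theory measures. The sketch below is a generic illustration, not the authors' scoring code: it converts hit and false-alarm counts into d' (sensitivity) and criterion (response bias); the log-linear correction and the example counts are assumptions.

```python
import numpy as np
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """d' and criterion from a yes/no detection task, with a log-linear
    correction so that perfect (0 or 1) rates stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

# Extra false alarms with the same hit count lower the criterion
# (a more liberal, externalizing bias) and reduce d'.
print(sdt_measures(30, 10, 5, 35))
print(sdt_measures(30, 10, 15, 25))
```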

  15. fMRI-based Multivariate Pattern Analyses Reveal Imagery Modality and Imagery Content Specific Representations in Primary Somatosensory, Motor and Auditory Cortices.

    PubMed

    de Borst, Aline W; de Gelder, Beatrice

    2017-08-01

    Previous studies have shown that the early visual cortex contains content-specific representations of stimuli during visual imagery, and that these representational patterns of imagery content have a perceptual basis. To date, there is little evidence for the presence of a similar organization in the auditory and tactile domains. Using fMRI-based multivariate pattern analyses we showed that primary somatosensory, auditory, motor, and visual cortices are discriminative for imagery of touch versus sound. In the somatosensory, motor and visual cortices the imagery modality discriminative patterns were similar to perception modality discriminative patterns, suggesting that top-down modulations in these regions rely on similar neural representations as bottom-up perceptual processes. Moreover, we found evidence for content-specific representations of the stimuli during auditory imagery in the primary somatosensory and primary motor cortices. Both the imagined emotions and the imagined identities of the auditory stimuli could be successfully classified in these regions. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. Effects of Long-Term Musical Training on Cortical Auditory Evoked Potentials.

    PubMed

    Brown, Carolyn J; Jeon, Eun-Kyung; Driscoll, Virginia; Mussoi, Bruna; Deshpande, Shruti Balvalli; Gfeller, Kate; Abbas, Paul J

    Evidence suggests that musicians, as a group, have superior frequency resolution abilities when compared with nonmusicians. It is possible to assess auditory discrimination using either behavioral or electrophysiologic methods. The purpose of this study was to determine if the acoustic change complex (ACC) is sensitive enough to reflect the differences in spectral processing exhibited by musicians and nonmusicians. Twenty individuals (10 musicians and 10 nonmusicians) participated in this study. Pitch and spectral ripple discrimination were assessed using both behavioral and electrophysiologic methods. Behavioral measures were obtained using a standard three-interval, forced-choice procedure. The ACC was recorded and used as an objective (i.e., nonbehavioral) measure of discrimination between two auditory signals. The same stimuli were used for both psychophysical and electrophysiologic testing. As a group, musicians were able to detect smaller changes in pitch than nonmusicians. They also were able to detect a shift in the position of the peaks and valleys in a ripple noise stimulus at higher ripple densities than nonmusicians. ACC responses recorded from musicians were larger than those recorded from nonmusicians when the amplitude of the ACC response was normalized to the amplitude of the onset response in each stimulus pair. Visual detection thresholds derived from the evoked potential data were better for musicians than nonmusicians regardless of whether the task was discrimination of musical pitch or detection of a change in the frequency spectrum of the ripple noise stimuli. Behavioral measures of discrimination were generally more sensitive than the electrophysiologic measures; however, the two metrics were correlated. Perhaps as a result of extensive training, musicians are better able to discriminate spectrally complex acoustic signals than nonmusicians. Those differences are evident not only in perceptual/behavioral tests but also in electrophysiologic measures of neural response at the level of the auditory cortex. While these results are based on observations made from normal-hearing listeners, they suggest that the ACC may provide a non-behavioral method of assessing auditory discrimination and as a result might prove useful in future studies that explore the efficacy of participation in a musically based, auditory training program perhaps geared toward pediatric or hearing-impaired listeners.

  17. Oxytocin receptor activation in the basolateral complex of the amygdala enhances discrimination between discrete cues and promotes configural processing of cues.

    PubMed

    Fam, Justine; Holmes, Nathan; Delaney, Andrew; Crane, James; Westbrook, R Frederick

    2018-06-14

    Oxytocin (OT) is a neuropeptide that influences the expression of social behavior and regulates it according to the social context: OT is associated with increased pro-social effects in the absence of social threat and with defensive aggression when threats are present. The present experiments investigated the effects of OT beyond the social domain by using a discriminative Pavlovian fear conditioning protocol with rats. In Experiment 1, an OT receptor agonist (TGOT) microinjected into the basolateral amygdala facilitated the discrimination between an auditory cue that signaled shock and another auditory cue that signaled the absence of shock. This TGOT-facilitated discrimination was replicated in a second experiment in which the shocked and non-shocked auditory cues were accompanied by a common visual cue. Conditioned responding on probe trials of the auditory and visual elements indicated that TGOT administration produced a qualitative shift in the learning mechanisms underlying the discrimination between the two compounds. This was confirmed by comparisons between the present results and simulated predictions of elemental and configural associative learning models. Overall, the present findings demonstrate that the neuromodulatory effects of OT influence behavior outside of the social domain. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Unconscious improvement in foreign language learning using mismatch negativity neurofeedback: A preliminary study.

    PubMed

    Chang, Ming; Iizuka, Hiroyuki; Kashioka, Hideki; Naruse, Yasushi; Furukawa, Masahiro; Ando, Hideyuki; Maeda, Taro

    2017-01-01

    When people learn foreign languages, they find it difficult to perceive speech sounds that are nonexistent in their native language, and extensive training is consequently necessary. Our previous studies have shown that by using neurofeedback based on the mismatch negativity event-related brain potential, participants could unconsciously achieve learning in the auditory discrimination of pure tones that could not be consciously discriminated without the neurofeedback. Here, we examined whether mismatch negativity neurofeedback is effective for helping someone to perceive new speech sounds in foreign language learning. We developed a task for training native Japanese speakers to discriminate between 'l' and 'r' sounds in English, as they usually cannot discriminate between these two sounds. Without participants attending to auditory stimuli or being aware of the nature of the experiment, neurofeedback training helped them to achieve significant improvement in unconscious auditory discrimination and recognition of the target words 'light' and 'right'. There was also improvement in the recognition of other words containing 'l' and 'r' (e.g., 'blight' and 'bright'), even though these words had not been presented during training. This method could be used to facilitate foreign language learning and can be extended to other fields of auditory and clinical research and even other senses.

  19. Unconscious improvement in foreign language learning using mismatch negativity neurofeedback: A preliminary study

    PubMed Central

    Chang, Ming; Iizuka, Hiroyuki; Kashioka, Hideki; Naruse, Yasushi; Furukawa, Masahiro; Ando, Hideyuki; Maeda, Taro

    2017-01-01

    When people learn foreign languages, they find it difficult to perceive speech sounds that are nonexistent in their native language, and extensive training is consequently necessary. Our previous studies have shown that by using neurofeedback based on the mismatch negativity event-related brain potential, participants could unconsciously achieve learning in the auditory discrimination of pure tones that could not be consciously discriminated without the neurofeedback. Here, we examined whether mismatch negativity neurofeedback is effective for helping someone to perceive new speech sounds in foreign language learning. We developed a task for training native Japanese speakers to discriminate between ‘l’ and ‘r’ sounds in English, as they usually cannot discriminate between these two sounds. Without participants attending to auditory stimuli or being aware of the nature of the experiment, neurofeedback training helped them to achieve significant improvement in unconscious auditory discrimination and recognition of the target words ‘light’ and ‘right’. There was also improvement in the recognition of other words containing ‘l’ and ‘r’ (e.g., ‘blight’ and ‘bright’), even though these words had not been presented during training. This method could be used to facilitate foreign language learning and can be extended to other fields of auditory and clinical research and even other senses. PMID:28617861

  20. Statistical learning and auditory processing in children with music training: An ERP study.

    PubMed

    Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Ibrahim, Ronny; Arciuli, Joanne

    2017-07-01

    The question of whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training versus those without music training across a range of auditory and cognitive measures, including the ability to implicitly detect statistical regularities in input (statistical learning). Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children between the ages of 9 and 11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on auditory and visual behavioural statistical learning tasks. Our data suggest that individual differences in musical skills are associated with children's ability to detect regularities. The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  1. [Examination of relationship between level of hearing and written language skills in 10-14-year-old hearing impaired children].

    PubMed

    Turğut, Nedim; Karlıdağ, Turgut; Başar, Figen; Yalçın, Şinasi; Kaygusuz, İrfan; Keleş, Erol; Birkent, Ömer Faruk

    2015-01-01

    This study aims to examine the relationship between the written language skills of hearing impaired children attending 4th-7th grades of primary school in an inclusive environment and factors thought to affect this skill, namely mean hearing loss, duration of auditory deprivation, speech discrimination score, pre-school education attendance, and socioeconomic status. The study included 25 hearing impaired children (14 males, 11 females; mean age 11.4±1.4 years; range 10 to 14 years) (study group) and 20 children with normal hearing (9 males, 11 females; mean age 11.5±1.3 years; range 10 to 14 years) (control group) in the same age group and studying in the same classes. The study group was separated into two subgroups, group 1a and group 1b, since some of the children with hearing disability used a hearing aid while others used a cochlear implant. Intragroup comparisons and relational screening were performed for those who use hearing aids and cochlear implants, and intergroup comparisons were performed to evaluate the effect of the parameters on written language skills. The written expression skill level of children with hearing disability was significantly lower than that of their normal hearing peers (p=0.001). A significant relationship was detected between written language skills and mean hearing loss (p=0.048), duration of auditory deprivation (p=0.021), speech discrimination score (p=0.014), and preschool attendance (p=0.005), whereas no significant relationship was found for socioeconomic status (p=0.636). It can be concluded that hearing loss affects written language skills negatively and that hearing impaired individuals develop lower-level written language skills compared to their normal hearing peers.

  2. Learning Auditory Discrimination with Computer-Assisted Instruction: A Comparison of Two Different Performance Objectives.

    ERIC Educational Resources Information Center

    Steinhaus, Kurt A.

    A 12-week study of two groups of 14 college freshmen music majors was conducted to determine which group demonstrated greater achievement in learning auditory discrimination using computer-assisted instruction (CAI). The method employed was a pre-/post-test experimental design using subjects randomly assigned to a control group or an experimental…

  3. Auditory Temporal Order Discrimination and Backward Recognition Masking in Adults with Dyslexia

    ERIC Educational Resources Information Center

    Griffiths, Yvonne M.; Hill, Nicholas I.; Bailey, Peter J.; Snowling, Margaret J.

    2003-01-01

    The ability of 20 adult dyslexic readers to extract frequency information from successive tone pairs was compared with that of IQ-matched controls using temporal order discrimination and auditory backward recognition masking (ABRM) tasks. In both paradigms, the interstimulus interval (ISI) between tones in a pair was either short (20 ms) or long…

  4. A Further Evaluation of Picture Prompts during Auditory-Visual Conditional Discrimination Training

    ERIC Educational Resources Information Center

    Carp, Charlotte L.; Peterson, Sean P.; Arkel, Amber J.; Petursdottir, Anna I.; Ingvarsson, Einar T.

    2012-01-01

    This study was a systematic replication and extension of Fisher, Kodak, and Moore (2007), in which a picture prompt embedded into a least-to-most prompting sequence facilitated acquisition of auditory-visual conditional discriminations. Participants were 4 children who had been diagnosed with autism; 2 had limited prior receptive skills, and 2 had…

  5. Attentional but not Pre-Attentive Neural Measures of Auditory Discrimination are Atypical in Children with Developmental Language Disorder

    PubMed Central

    Kornilov, Sergey A.; Landi, Nicole; Rakhlin, Natalia; Fang, Shin-Yi; Grigorenko, Elena L.; Magnuson, James S.

    2015-01-01

    We examined neural indices of pre-attentive phonological and attentional auditory discrimination in children with developmental language disorder (DLD, n=23) and typically developing (n=16) peers from a geographically isolated Russian-speaking population with an elevated prevalence of DLD. Pre-attentive phonological MMN components were robust and did not differ between the two groups. Children with DLD showed attenuated P3 and atypically distributed P2 components in the attentional auditory discrimination task; P2 and P3 amplitudes were linked to working memory capacity, development of complex syntax, and vocabulary. The results corroborate findings of reduced processing capacity in DLD and support a multifactorial view of the disorder. PMID:25350759

  6. Top-down and bottom-up modulation of brain structures involved in auditory discrimination.

    PubMed

    Diekhof, Esther K; Biedermann, Franziska; Ruebsamen, Rudolf; Gruber, Oliver

    2009-11-10

    Auditory deviancy detection comprises both automatic and voluntary processing. Here, we investigated the neural correlates of different components of the sensory discrimination process using functional magnetic resonance imaging. Subliminal auditory processing of deviant events that were not detected led to activation in left superior temporal gyrus. On the other hand, both correct detection of deviancy and false alarms activated a frontoparietal network of attentional processing and response selection, i.e. this network was activated regardless of the physical presence of deviant events. Finally, activation in the putamen, anterior cingulate and middle temporal cortex depended on factual stimulus representations and occurred only during correct deviancy detection. These results indicate that sensory discrimination may rely on dynamic bottom-up and top-down interactions.

  7. Magnetoencephalographic signatures of numerosity discrimination in fetuses and neonates.

    PubMed

    Schleger, Franziska; Landerl, Karin; Muenssinger, Jana; Draganova, Rossitza; Reinl, Maren; Kiefer-Schmidt, Isabelle; Weiss, Magdalene; Wacker-Gußmann, Annette; Huotilainen, Minna; Preissl, Hubert

    2014-01-01

    Numerosity discrimination has been demonstrated in newborns, but not in fetuses. Fetal magnetoencephalography allows non-invasive investigation of neural responses in neonates and fetuses. During an oddball paradigm with auditory sequences differing in numerosity, evoked responses were recorded and mismatch responses were quantified as an indicator for auditory discrimination. Thirty pregnant women with healthy fetuses (last trimester) and 30 healthy term neonates participated. Fourteen adults were included as a control group. Based on measurements eligible for analysis, all adults, all neonates, and 74% of fetuses showed numerical mismatch responses. Numerosity discrimination appears to exist in the last trimester of pregnancy.

  8. The role of Broca's area in speech perception: evidence from aphasia revisited.

    PubMed

    Hickok, Gregory; Costanzo, Maddalena; Capasso, Rita; Miceli, Gabriele

    2011-12-01

    Motor theories of speech perception have been re-vitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that individuals with Broca's aphasia, and therefore inferred damage to Broca's area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca's area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d', and was unrelated to the degree of non-fluency in the patients' speech production. Performance on the auditory-visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory-visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory-visual matching of phonological forms. 2011 Elsevier Inc. All rights reserved.

  9. Perception of Small Frequency Differences in Children with Auditory Processing Disorder or Specific Language Impairment

    PubMed Central

    Rota-Donahue, Christine; Schwartz, Richard G.; Shafer, Valerie; Sussman, Elyse S.

    2016-01-01

    Background: Frequency discrimination is often impaired in children developing language atypically. However, findings in the detection of small frequency changes in these children are conflicting. Previous studies on children's auditory perceptual abilities usually involved establishing differential sensitivity thresholds in sample populations who were not tested for auditory deficits. To date, there are no data comparing suprathreshold frequency discrimination ability in children tested for both auditory processing and language skills. Purpose: This study examined the perception of small frequency differences (Δf) in children with auditory processing disorder (APD) and/or specific language impairment (SLI). The aim was to determine whether children with APD and children with SLI showed differences in their behavioral responses to frequency changes. Results were expected to identify different degrees of impairment and shed some light on the auditory perceptual overlap between pediatric APD and SLI. Research Design: An experimental group design using a two-alternative forced-choice procedure was used to determine frequency discrimination ability for three magnitudes of Δf from the 1000-Hz base frequency. Study Sample: Thirty children between 10 years and 12 years, 11 months of age participated: 17 children with APD and/or SLI, and 13 typically developing (TD) peers. The clinical groups included four children with APD only, four children with SLI only, and nine children with both APD and SLI. Data Collection and Analysis: Behavioral data collected using headphone delivery were analyzed using the sensitivity index d′, calculated for the three Δf values of 2%, 5%, and 15% of the base frequency (20, 50, and 150 Hz). Correlations between the dependent variable d′ and the independent variables measuring auditory processing and language skills were also obtained. A stepwise regression analysis was then performed. Results: TD children and children with APD and/or SLI differed in the detection of small-tone Δf. In addition, APD or SLI status affected behavioral results differently. Comparisons between auditory processing test scores or language test scores and the sensitivity index d′ showed different strengths of correlation based on the magnitudes of the Δf. Auditory processing scores showed stronger correlation to the sensitivity index d′ for the small Δf, while language scores showed stronger correlation to the sensitivity index d′ for the large Δf. Conclusion: Although children with APD and/or SLI have difficulty with behavioral frequency discrimination, this difficulty may stem from two different levels: a basic auditory level for children with APD and a higher language processing level for children with SLI; the frequency discrimination performance seemed to be affected by the labeling demands of the same versus different frequency discrimination task for the children with SLI. PMID:27310407

  10. Perception of Small Frequency Differences in Children with Auditory Processing Disorder or Specific Language Impairment.

    PubMed

    Rota-Donahue, Christine; Schwartz, Richard G; Shafer, Valerie; Sussman, Elyse S

    2016-06-01

    Frequency discrimination is often impaired in children developing language atypically. However, findings in the detection of small frequency changes in these children are conflicting. Previous studies on children's auditory perceptual abilities usually involved establishing differential sensitivity thresholds in sample populations who were not tested for auditory deficits. To date, there are no data comparing suprathreshold frequency discrimination ability in children tested for both auditory processing and language skills. This study examined the perception of small frequency differences (∆ƒ) in children with auditory processing disorder (APD) and/or specific language impairment (SLI). The aim was to determine whether children with APD and children with SLI showed differences in their behavioral responses to frequency changes. Results were expected to identify different degrees of impairment and shed some light on the auditory perceptual overlap between pediatric APD and SLI. An experimental group design using a two-alternative forced-choice procedure was used to determine frequency discrimination ability for three magnitudes of ∆ƒ from the 1000-Hz base frequency. Thirty children between 10 years and 12 years, 11 months of age participated: 17 children with APD and/or SLI, and 13 typically developing (TD) peers. The clinical groups included four children with APD only, four children with SLI only, and nine children with both APD and SLI. Behavioral data collected using headphone delivery were analyzed using the sensitivity index d', calculated for the three ∆ƒ values of 2%, 5%, and 15% of the base frequency (20, 50, and 150 Hz). Correlations between the dependent variable d' and the independent variables measuring auditory processing and language skills were also obtained. A stepwise regression analysis was then performed. TD children and children with APD and/or SLI differed in the detection of small-tone ∆ƒ. In addition, APD or SLI status affected behavioral results differently. Comparisons between auditory processing test scores or language test scores and the sensitivity index d' showed different strengths of correlation based on the magnitudes of the ∆ƒ. Auditory processing scores showed stronger correlation to the sensitivity index d' for the small ∆ƒ, while language scores showed stronger correlation to the sensitivity index d' for the large ∆ƒ. Although children with APD and/or SLI have difficulty with behavioral frequency discrimination, this difficulty may stem from two different levels: a basic auditory level for children with APD and a higher language processing level for children with SLI; the frequency discrimination performance seemed to be affected by the labeling demands of the same versus different frequency discrimination task for the children with SLI. American Academy of Audiology.

  11. The Dynamic Range Paradox: A Central Auditory Model of Intensity Change Detection

    PubMed Central

    Simpson, Andrew J.R.; Reiss, Joshua D.

    2013-01-01

    In this paper we use empirical loudness modeling to explore a perceptual sub-category of the dynamic range problem of auditory neuroscience. Humans are able to reliably report perceived intensity (loudness), and discriminate fine intensity differences, over a very large dynamic range. It is usually assumed that loudness and intensity change detection operate upon the same neural signal, and that intensity change detection may be predicted from loudness data and vice versa. However, while loudness grows as intensity is increased, improvement in intensity discrimination performance does not follow the same trend and so dynamic range estimations of the underlying neural signal from loudness data contradict estimations based on intensity just-noticeable difference (JND) data. In order to account for this apparent paradox we draw on recent advances in auditory neuroscience. We test the hypothesis that a central model, featuring central adaptation to the mean loudness level and operating on the detection of maximum central-loudness rate of change, can account for the paradoxical data. We use numerical optimization to find adaptation parameters that fit data for continuous-pedestal intensity change detection over a wide dynamic range. The optimized model is tested on a selection of equivalent pseudo-continuous intensity change detection data. We also report a supplementary experiment which confirms the modeling assumption that the detection process may be modeled as rate-of-change. Data are obtained from a listening test (N = 10) using linearly ramped increment-decrement envelopes applied to pseudo-continuous noise with an overall level of 33 dB SPL. Increments with half-ramp durations between 5 and 50,000 ms are used. The intensity JND is shown to increase towards long duration ramps (p < 10⁻⁶). From the modeling, the following central adaptation parameters are derived: central dynamic range of 0.215 sones, 95% central normalization, and a central loudness JND constant of 5.5×10⁻⁵ sones per ms. Through our findings, we argue that loudness reflects peripheral neural coding, and the intensity JND reflects central neural coding. PMID:23536749
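
    As a toy illustration of the central assumption described above, that detection operates on the maximum rate of change of central loudness, the sketch below applies a fixed sones-per-ms JND constant to a loudness trace. The adaptation and normalization stages of the actual model are omitted, and the traces and sampling step are illustrative rather than taken from the fitted model; only the JND constant is reused from the abstract.

```python
import numpy as np

def detects_change(central_loudness, dt_ms, jnd_sones_per_ms=5.5e-5):
    """Toy decision rule: a change is detected if the peak rate of change of
    the (already adapted/normalized) central loudness trace exceeds the JND
    constant. The central adaptation stage itself is not modeled here."""
    rate = np.abs(np.diff(central_loudness)) / dt_ms
    return rate.max() >= jnd_sones_per_ms

# A fast ramp and a very slow ramp covering the same loudness excursion:
# the slow ramp yields a far smaller peak rate of change, consistent with
# the reported rise in intensity JND toward long-duration ramps.
dt_ms = 1.0
fast = np.concatenate([np.zeros(100), np.linspace(0.0, 0.01, 5), np.full(100, 0.01)])
slow = np.concatenate([np.zeros(100), np.linspace(0.0, 0.01, 50_000), np.full(100, 0.01)])
print(detects_change(fast, dt_ms), detects_change(slow, dt_ms))  # True False
```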

  12. Neural potentials disorder during differential psychoacoustic experiment evaluated by discrete wavelet analysis.

    PubMed

    Tsakiraki, Eleni S; Tsiaparas, Nikolaos N; Christopoulou, Maria I; Papageorgiou, Charalabos Ch; Nikita, Konstantina S

    2014-01-01

    The aim of this paper is the assessment of disturbances in neural potentials during a differential-sensitivity psychoacoustic procedure. Ten volunteers were asked to compare the durations of two acoustic pulses: a reference pulse with a fixed duration of 500 ms and a trial pulse whose duration varied from 420 ms to 620 ms. During the discrimination task, electroencephalogram (EEG) and event-related potential (ERP) signals were recorded. The mean relative wavelet energy (mRWE) and the normalized Shannon wavelet entropy (nSWE) were computed based on discrete wavelet analysis, and the results were correlated with the data derived from the psychoacoustic analysis of the volunteers' responses. In most of the electrodes, when the duration of the trial pulse was 460 ms or 560 ms, there was an increase or a decrease in the nSWE value, respectively, determined mostly by the mRWE in the delta rhythm. These extrema are correlated with the just noticeable difference (JND) in pulse duration calculated by the psychoacoustic analysis. The dominance of the delta rhythm throughout the auditory experiment is noteworthy, and the lowest values of nSWE were noted in the temporal lobe.
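
    For readers unfamiliar with the wavelet quantities mentioned above, the following sketch shows one conventional way to obtain relative wavelet energy per decomposition level and a normalized Shannon wavelet entropy from a single-channel trace using PyWavelets; the wavelet family, decomposition depth, and synthetic signal are assumptions, not the study's settings, and mapping levels onto EEG rhythms (e.g. delta) depends on the sampling rate.

```python
import numpy as np
import pywt

def wavelet_energy_entropy(signal, wavelet="db4", level=5):
    """Relative wavelet energy per level of a discrete wavelet decomposition
    and the normalized Shannon entropy of that energy distribution."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    rwe = energies / energies.sum()
    nswe = -np.sum(rwe * np.log(rwe + 1e-12)) / np.log(len(rwe))
    return rwe, nswe

# Synthetic "EEG-like" trace: slow (delta-range) activity plus broadband noise.
fs = 256
t = np.arange(0, 2, 1 / fs)
trace = np.sin(2 * np.pi * 2 * t) + 0.3 * np.random.randn(t.size)
rwe, nswe = wavelet_energy_entropy(trace)
print(np.round(rwe, 3), round(nswe, 3))
```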

  13. Teaching Equivalence Relations to Individuals with Minimal Verbal Repertoires: Are Visual and Auditory-Visual Discriminations Predictive of Stimulus Equivalence?

    ERIC Educational Resources Information Center

    Vause, Tricia; Martin, Garry L.; Yu, C.T.; Marion, Carole; Sakko, Gina

    2005-01-01

    The relationship between language, performance on the Assessment of Basic Learning Abilities (ABLA) test, and stimulus equivalence was examined. Five participants with minimal verbal repertoires were studied; 3 who passed up to ABLA Level 4, a visual quasi-identity discrimination and 2 who passed ABLA Level 6, an auditory-visual nonidentity…

  14. Auditory Evoked Responses in Neonates by MEG

    NASA Astrophysics Data System (ADS)

    Hernandez-Pavon, J. C.; Sosa, M.; Lutter, W. J.; Maier, M.; Wakai, R. T.

    2008-08-01

    Magnetoencephalography is a biomagnetic technique with outstanding potential for neurodevelopmental studies. In this work, we have used MEG to determine whether newborns can discriminate between different stimuli during the first few months of life. Five neonates were stimulated for several minutes with auditory stimulation. The results suggest that the newborns are able to discriminate between different stimuli despite their early age.

  15. Auditory Evoked Responses in Neonates by MEG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernandez-Pavon, J. C.; Department of Medical Physics, University of Wisconsin Madison, Wisconsin; Sosa, M.

    2008-08-11

    Magnetoencephalography is a biomagnetic technique with outstanding potential for neurodevelopmental studies. In this work, we have used MEG to determine whether newborns can discriminate between different stimuli during the first few months of life. Five neonates were stimulated for several minutes with auditory stimulation. The results suggest that the newborns are able to discriminate between different stimuli despite their early age.

  16. Improving Dorsal Stream Function in Dyslexics by Training Figure/Ground Motion Discrimination Improves Attention, Reading Fluency, and Working Memory.

    PubMed

    Lawton, Teri

    2016-01-01

    There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue, three interventions were compared in 58 second-grade dyslexics (7 years old on average): two targeting the temporal dynamics (timing) of either the auditory or visual pathways, and a third reading intervention (control group) targeting linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency (both speed and comprehension), phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than in controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used not only to detect dyslexia early but also to treat it successfully, so that reading problems do not prevent children from readily learning.

  17. The Influence of Eye Closure on Somatosensory Discrimination: A Trade-off Between Simple Perception and Discrimination.

    PubMed

    Götz, Theresa; Hanke, David; Huonker, Ralph; Weiss, Thomas; Klingner, Carsten; Brodoehl, Stefan; Baumbach, Philipp; Witte, Otto W

    2017-06-01

    We often close our eyes to improve perception. Recent results have shown a decrease of perception thresholds accompanied by an increase in somatosensory activity after eye closure. However, does somatosensory spatial discrimination also benefit from eye closure? We previously showed that spatial discrimination is accompanied by a reduction of somatosensory activity. Using magnetoencephalography, we analyzed the magnitude of primary somatosensory (somatosensory P50m) and primary auditory activity (auditory P50m) during a one-back discrimination task in 21 healthy volunteers. In complete darkness, participants were requested to pay attention to either the somatosensory or auditory stimulation and asked to open or close their eyes every 6.5 min. Somatosensory P50m was reduced during a task requiring the distinguishing of stimulus location changes at the distal phalanges of different fingers. The somatosensory P50m was further reduced and detection performance was higher during eyes open. A similar reduction was found for the auditory P50m during a task requiring the distinguishing of changing tones. The function of eye closure is more than controlling visual input. It might be advantageous for perception because it is an effective way to reduce interference from other modalities, but disadvantageous for spatial discrimination because it requires at least one top-down processing stage. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Stability of auditory discrimination and novelty processing in physiological aging.

    PubMed

    Raggi, Alberto; Tasca, Domenica; Rundo, Francesco; Ferri, Raffaele

    2013-01-01

    Complex higher-order cognitive functions and their possible changes with aging are central objectives of cognitive neuroscience. Event-related potentials (ERPs) allow investigators to probe the earliest stages of information processing. The N100, mismatch negativity (MMN), and P3a are auditory ERP components that reflect automatic sensory discrimination. The aim of the present study was to determine whether N100, MMN, and P3a parameters are stable in healthy aged subjects compared to those of normal young adults. Normal young adults and older participants were assessed using standardized cognitive functional instruments, and their ERPs were obtained with auditory stimulation at two different interstimulus intervals during a passive paradigm. All individuals were within the normal range on cognitive tests. No significant differences were found for any ERP parameter between the two age groups. This study shows that aging is characterized by stability of auditory discrimination and novelty processing, which is important for establishing normative data for the detection of subtle preclinical changes due to abnormal brain aging.
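
    As a small illustration of how an MMN-type measure is typically quantified (not the authors' pipeline), the sketch below takes averaged standard and deviant ERPs, forms the deviant-minus-standard difference wave, and reports the most negative point within a nominal 100-250 ms window; the window, the sampling, and the synthetic waveforms are assumptions.

```python
import numpy as np

def mmn_peak(standard_erp, deviant_erp, times_ms, window=(100, 250)):
    """MMN latency and amplitude: the most negative point of the
    deviant-minus-standard difference wave inside the search window."""
    diff = np.asarray(deviant_erp) - np.asarray(standard_erp)
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    idx = np.argmin(diff[mask])
    return times_ms[mask][idx], diff[mask][idx]  # latency (ms), amplitude (µV)

# Synthetic averaged ERPs sampled at 1 kHz, -100 to 400 ms around stimulus onset.
times = np.arange(-100, 400)
standard = 0.5 * np.exp(-((times - 100) / 40.0) ** 2)
deviant = standard - 2.0 * np.exp(-((times - 170) / 40.0) ** 2)
print(mmn_peak(standard, deviant, times))  # approximately (170, -2.0)
```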

  19. Dynamic crossmodal links revealed by steady-state responses in auditory-visual divided attention.

    PubMed

    de Jong, Ritske; Toffanin, Paolo; Harbers, Marten

    2010-01-01

    Frequency tagging has often been used to study intramodal attention but not intermodal attention. We used EEG and simultaneous frequency tagging of auditory and visual sources to study intermodal focused and divided attention in detection and discrimination performance. Divided-attention costs were smaller, but still significant, in detection than in discrimination. The auditory steady-state response (SSR) showed no effects of attention at frontocentral locations, but did so at occipital locations where it was evident only when attention was divided between audition and vision. Similarly, the visual SSR at occipital locations was substantially enhanced when attention was divided across modalities. Both effects were equally present in detection and discrimination. We suggest that both effects reflect a common cause: an attention-dependent influence of auditory information processing on early cortical stages of visual information processing, mediated by enhanced effective connectivity between the two modalities under conditions of divided attention. Copyright (c) 2009 Elsevier B.V. All rights reserved.
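
    To make the frequency-tagging logic concrete, here is a generic sketch (not the study's analysis code) that recovers a steady-state response amplitude at a tagging frequency from the spectrum of an epoch-averaged trace; the 40-Hz and 15-Hz tags, the sampling rate, and the synthetic signal are assumptions chosen so the tag frequencies fall on exact FFT bins.

```python
import numpy as np

def ssr_amplitude(trace, fs, tag_freq):
    """Amplitude of the steady-state response at the tagging frequency,
    read from the FFT bin closest to that frequency."""
    spectrum = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - tag_freq))
    return 2.0 * np.abs(spectrum[k]) / trace.size

# Synthetic trace: a 40-Hz "auditory" tag and a 15-Hz "visual" tag in noise.
fs, dur = 512, 4.0
t = np.arange(0, dur, 1.0 / fs)
trace = (1.0 * np.sin(2 * np.pi * 40 * t)
         + 0.5 * np.sin(2 * np.pi * 15 * t)
         + np.random.randn(t.size))
print(ssr_amplitude(trace, fs, 40), ssr_amplitude(trace, fs, 15))  # ~1.0, ~0.5
```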

  20. Effects of Age and Hearing Loss on the Relationship between Discrimination of Stochastic Frequency Modulation and Speech Perception

    PubMed Central

    Sheft, Stanley; Shafiro, Valeriy; Lorenzi, Christian; McMullen, Rachel; Farrell, Caitlin

    2012-01-01

    Objective: The frequency modulation (FM) of speech can convey linguistic information and also enhance speech-stream coherence and segmentation. Using a clinically oriented approach, the purpose of the present study was to examine the effects of age and hearing loss on the ability to discriminate between stochastic patterns of low-rate FM and determine whether difficulties in speech perception experienced by older listeners relate to a deficit in this ability. Design: Data were collected from 18 normal-hearing young adults and 18 participants who were at least 60 years old, nine normal-hearing and nine with a mild-to-moderate sensorineural hearing loss. Using stochastic frequency modulators derived from 5-Hz lowpass noise applied to a 1-kHz carrier, discrimination thresholds were measured in terms of frequency excursion (ΔF), both in quiet and with a speech-babble masker present; stimulus duration; and signal-to-noise ratio (SNR-FM) in the presence of a speech-babble masker. Speech perception ability was evaluated using Quick Speech-in-Noise (QuickSIN) sentences in four-talker babble. Results: Results showed a significant effect of age, but not of hearing loss among the older listeners, for FM discrimination conditions with masking present (ΔF and SNR-FM). The effect of age was not significant for the FM measures based on stimulus duration. ΔF and SNR-FM were also the two conditions for which performance was significantly correlated with listener age when controlling for the effect of hearing loss as measured by pure-tone average. With respect to speech-in-noise ability, results from the SNR-FM condition were significantly correlated with QuickSIN performance. Conclusions: Results indicate that aging is associated with reduced ability to discriminate moderate-duration patterns of low-rate stochastic FM. Furthermore, the relationship between QuickSIN performance and the SNR-FM thresholds suggests that the difficulty experienced by older listeners with speech-in-noise processing may in part relate to diminished ability to process slower fine-structure modulation at low sensation levels. Results thus suggest that clinical consideration of stochastic FM discrimination measures may offer a fuller picture of auditory processing abilities. PMID:22790319
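
    As an illustration of the kind of stimulus described above, the sketch below generates a tone carrier frequency-modulated by lowpass-filtered noise and scaled to a chosen peak excursion ΔF; the filter order, the peak (rather than RMS) scaling, and the specific ΔF values are assumptions, not the study's exact synthesis parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def stochastic_fm(duration_s, fs, carrier_hz=1000.0, cutoff_hz=5.0, delta_f_hz=20.0):
    """A carrier frequency-modulated by lowpass noise, with the modulator
    scaled so its peak frequency excursion equals delta_f_hz."""
    n = int(duration_s * fs)
    modulator = filtfilt(*butter(4, cutoff_hz / (fs / 2.0)), np.random.randn(n))
    modulator *= delta_f_hz / np.max(np.abs(modulator))
    phase = 2 * np.pi * np.cumsum(carrier_hz + modulator) / fs  # integrate frequency
    return np.sin(phase)

# A standard and a larger-excursion target, as might be compared on one trial.
fs = 44100
standard = stochastic_fm(1.0, fs, delta_f_hz=10.0)
target = stochastic_fm(1.0, fs, delta_f_hz=20.0)
```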

  1. Fragile Spectral and Temporal Auditory Processing in Adolescents with Autism Spectrum Disorder and Early Language Delay

    ERIC Educational Resources Information Center

    Boets, Bart; Verhoeven, Judith; Wouters, Jan; Steyaert, Jean

    2015-01-01

    We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM)…

  2. Early Visual Deprivation Severely Compromises the Auditory Sense of Space in Congenitally Blind Children

    ERIC Educational Resources Information Center

    Vercillo, Tiziana; Burr, David; Gori, Monica

    2016-01-01

    A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind…

  3. Perceptual consequences of disrupted auditory nerve activity.

    PubMed

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysiological evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing-related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique contribution of neural synchrony to sensory perception but also provide guidance for translational research in terms of better diagnosis and management of human communication disorders.

  4. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning.

    PubMed

    Lau, Bonnie K; Ruggles, Dorea R; Katyal, Sucharit; Engel, Stephen A; Oxenham, Andrew J

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects.

  5. Sustained Cortical and Subcortical Measures of Auditory and Visual Plasticity following Short-Term Perceptual Learning

    PubMed Central

    Lau, Bonnie K.; Ruggles, Dorea R.; Katyal, Sucharit; Engel, Stephen A.; Oxenham, Andrew J.

    2017-01-01

    Short-term training can lead to improvements in behavioral discrimination of auditory and visual stimuli, as well as enhanced EEG responses to those stimuli. In the auditory domain, fluency with tonal languages and musical training has been associated with long-term cortical and subcortical plasticity, but less is known about the effects of shorter-term training. This study combined electroencephalography (EEG) and behavioral measures to investigate short-term learning and neural plasticity in both auditory and visual domains. Forty adult participants were divided into four groups. Three groups trained on one of three tasks, involving discrimination of auditory fundamental frequency (F0), auditory amplitude modulation rate (AM), or visual orientation (VIS). The fourth (control) group received no training. Pre- and post-training tests, as well as retention tests 30 days after training, involved behavioral discrimination thresholds, steady-state visually evoked potentials (SSVEP) to the flicker frequencies of visual stimuli, and auditory envelope-following responses simultaneously evoked and measured in response to rapid stimulus F0 (EFR), thought to reflect subcortical generators, and slow amplitude modulation (ASSR), thought to reflect cortical generators. Enhancement of the ASSR was observed in both auditory-trained groups, not specific to the AM-trained group, whereas enhancement of the SSVEP was found only in the visually-trained group. No evidence was found for changes in the EFR. The results suggest that some aspects of neural plasticity can develop rapidly and may generalize across tasks but not across modalities. Behaviorally, the pattern of learning was complex, with significant cross-task and cross-modal learning effects. PMID:28107359

  6. Mismatch Negativity in essential tremor: role of age at onset in pre-attentive auditory discrimination.

    PubMed

    Pauletti, C; Mannarelli, D; Locuratolo, N; Vanacore, N; De Lucia, M C; Fattapposta, F

    2014-04-01

    The aim of this study was to investigate whether pre-attentive auditory discrimination is impaired in patients with essential tremor (ET) and to evaluate the role of age at onset in this function. Seventeen non-demented patients with ET and seventeen age- and sex-matched healthy controls underwent an EEG recording during a classical auditory MMN paradigm. MMN latency was significantly prolonged in patients with elderly-onset ET (>65 years) (p=0.046), while no differences emerged in either latency or amplitude between young-onset ET patients and controls. This study provides a tentative indication of a dysfunction of automatic auditory change detection in elderly-onset ET patients, pointing to a selective attentive deficit in this subgroup of ET patients. The delay in pre-attentive auditory discrimination, which affects elderly-onset ET patients alone, further supports the hypothesis that ET represents a heterogeneous family of diseases united by tremor; these diseases are characterized by cognitive differences that may range from a disturbance in a selective cognitive function, such as the automatic part of the orienting response, to more widespread and complex cognitive dysfunctions. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  7. Discrimination of brief speech sounds is impaired in rats with auditory cortex lesions

    PubMed Central

    Porter, Benjamin A.; Rosenthal, Tara R.; Ranasinghe, Kamalini G.; Kilgard, Michael P.

    2011-01-01

    Auditory cortex (AC) lesions impair complex sound discrimination. However, a recent study demonstrated spared performance on an acoustic startle response test of speech discrimination following AC lesions (Floody et al., 2010). The current study reports the effects of AC lesions on two operant speech discrimination tasks. AC lesions caused a modest and quickly recovered impairment in the ability of rats to discriminate consonant-vowel-consonant speech sounds. This result seems to suggest that AC does not play a role in speech discrimination. However, the speech sounds used in both studies differed in many acoustic dimensions, and an adaptive change in discrimination strategy could allow the rats to use an acoustic difference that does not require an intact AC to discriminate. Based on our earlier observation that the first 40 ms of the spatiotemporal activity patterns elicited by speech sounds best correlate with behavioral discriminations of these sounds (Engineer et al., 2008), we predicted that eliminating additional cues by truncating speech sounds to the first 40 ms would render the stimuli indistinguishable to a rat with AC lesions. Although the initial discrimination of truncated sounds took longer to learn, the final performance paralleled that of rats using full-length consonant-vowel-consonant sounds. After 20 days of testing, half of the rats using speech onsets received bilateral AC lesions. Lesions severely impaired speech onset discrimination for at least one month post-lesion. These results support the hypothesis that auditory cortex is required to accurately discriminate the subtle differences between similar consonant and vowel sounds. PMID:21167211

  8. Thinking about touch facilitates tactile but not auditory processing.

    PubMed

    Anema, Helen A; de Haan, Alyanne M; Gebuis, Titia; Dijkerman, H Chris

    2012-05-01

    Mental imagery is considered to be important for normal conscious experience. It is most frequently investigated in the visual, auditory, and motor (imagination of movement) domains, while studies on tactile imagery (imagination of touch) are scarce. The current study investigated the effect of tactile and auditory imagery on the left/right discrimination of tactile and auditory stimuli. In line with our hypothesis, we observed that after tactile imagery, tactile stimuli were responded to faster than auditory stimuli, and vice versa. On average, tactile stimuli were responded to faster than auditory stimuli, and stimuli in the imagery condition were on average responded to more slowly than in baseline performance (left/right discrimination without an imagery assignment). The former is probably due to the spatial and somatotopic proximity of the fingers receiving the taps and the thumbs performing the response (button press), the latter to a dual-task cost. Together, these results provide the first evidence of a behavioural effect of a tactile imagery assignment on the perception of real tactile stimuli.

  9. Long Term Memory for Noise: Evidence of Robust Encoding of Very Short Temporal Acoustic Patterns.

    PubMed

    Viswanathan, Jayalakshmi; Rémy, Florence; Bacon-Macé, Nadège; Thorpe, Simon J

    2016-01-01

    Recent research has demonstrated that humans are able to implicitly encode and retain repeating patterns in meaningless auditory noise. Our study aimed at testing the robustness of long-term implicit recognition memory for these learned patterns. Participants performed a cyclic/non-cyclic discrimination task, during which they were presented with either 1-s cyclic noises (CNs) (the two halves of the noise were identical) or 1-s plain random noises (Ns). Among CNs and Ns presented once, target CNs were implicitly presented multiple times within a block, and implicit recognition of these target CNs was tested 4 weeks later using a similar cyclic/non-cyclic discrimination task. Furthermore, robustness of implicit recognition memory was tested by presenting participants with looped (shifting the origin) and scrambled (chopping sounds into 10- and 20-ms bits before shuffling) versions of the target CNs. We found that participants had robust implicit recognition memory for learned noise patterns after 4 weeks, right from the first presentation. Additionally, this memory was remarkably resistant to acoustic transformations, such as looping and scrambling of the sounds. Finally, implicit recognition of sounds was dependent on participant's discrimination performance during learning. Our findings suggest that meaningless temporal features as short as 10 ms can be implicitly stored in long-term auditory memory. Moreover, successful encoding and storage of such fine features may vary between participants, possibly depending on individual attention and auditory discrimination abilities. Significance Statement: Meaningless auditory patterns could be implicitly encoded and stored in long-term memory. Acoustic transformations of learned meaningless patterns could be implicitly recognized after 4 weeks. Implicit long-term memories can be formed for meaningless auditory features as short as 10 ms. Successful encoding and long-term implicit recognition of meaningless patterns may strongly depend on individual attention and auditory discrimination abilities.

  10. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    PubMed

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    We used functional magnetic resonance imaging (fMRI) to map the auditory cortical fields that are activated by, or nonreactive to, sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context, by visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than in place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We examined cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  11. Topographic EEG activations during timbre and pitch discrimination tasks using musical sounds.

    PubMed

    Auzou, P; Eustache, F; Etevenon, P; Platel, H; Rioux, P; Lambert, J; Lechevalier, B; Zarifian, E; Baron, J C

    1995-01-01

    Successive auditory stimulation sequences were presented binaurally to 18 young normal volunteers. Five conditions were investigated: two reference tasks, assumed to involve passive listening to pairs of musical sounds, and three discrimination tasks, one dealing with pitch and two with timbre (either with or without the attack). A symmetrical montage of 16 EEG channels was recorded for each subject across the different conditions. Two quantitative parameters of EEG activity were compared among the different sequences within five distinct frequency bands. As compared to a rest (no stimulation) condition, both passive listening conditions led to changes in primary auditory cortex areas. The discrimination tasks for pitch and timbre led to right-hemisphere EEG changes organized in two poles, an anterior one and a posterior one. After discussion of the electrophysiological aspects of this work, these results are interpreted in terms of a network, including the right temporal neocortex and the right frontal lobe, that maintains the acoustical information in the auditory working memory necessary to carry out the discrimination task.

  12. Assessment of Auditory Functioning of Deaf-Blind Multihandicapped Children.

    ERIC Educational Resources Information Center

    Kukla, Deborah; Connolly, Theresa Thomas

    The manual describes a procedure to assess to what extent a deaf-blind multiply handicapped student uses his residual hearing in the classroom. Six levels of auditory functioning (awareness/reflexive, attention/alerting, localization, auditory discrimination, recognition, and comprehension) are analyzed, and assessment activities are detailed for…

  13. Vocal development and auditory perception in CBA/CaJ mice

    NASA Astrophysics Data System (ADS)

    Radziwon, Kelly E.

    Mice are useful laboratory subjects because of their small size, their modest cost, and the fact that researchers have created many different strains to study a variety of disorders. In particular, researchers have found nearly 100 naturally occurring mouse mutations with hearing impairments. For these reasons, mice have become an important model for studies of human deafness. Although much is known about the genetic makeup and physiology of the laboratory mouse, far less is known about mouse auditory behavior. To fully understand the effects of genetic mutations on hearing, it is necessary to determine the hearing abilities of these mice. Two experiments here examined various aspects of mouse auditory perception using CBA/CaJ mice, a commonly used mouse strain. The frequency difference limens experiment tested the mouse's ability to discriminate one tone from another based solely on the frequency of the tone. The mice had similar thresholds as wild mice and gerbils but needed a larger change in frequency than humans and cats. The second psychoacoustic experiment sought to determine which cue, frequency or duration, was more salient when the mice had to identify various tones. In this identification task, the mice overwhelmingly classified the tones based on frequency instead of duration, suggesting that mice are using frequency when differentiating one mouse vocalization from another. The other two experiments were more naturalistic and involved both auditory perception and mouse vocal production. Interest in mouse vocalizations is growing because of the potential for mice to become a model of human speech disorders. These experiments traced mouse vocal development from infant to adult, and they tested the mouse's preference for various vocalizations. This was the first known study to analyze the vocalizations of individual mice across development. Results showed large variation in calling rates among the three cages of adult mice but results were highly consistent across all infant vocalizations. Although the preference experiment did not reveal significant differences between various mouse vocalizations, suggestions are given for future attempts to identify mouse preferences for auditory stimuli.

  14. Two-Stage Processing of Sounds Explains Behavioral Performance Variations due to Changes in Stimulus Contrast and Selective Attention: An MEG Study

    PubMed Central

    Kauramäki, Jaakko; Jääskeläinen, Iiro P.; Hänninen, Jarno L.; Auranen, Toni; Nummenmaa, Aapo; Lampinen, Jouko; Sams, Mikko

    2012-01-01

    Selectively attending to task-relevant sounds whilst ignoring background noise is one of the most amazing feats performed by the human brain. Here, we studied the underlying neural mechanisms by recording magnetoencephalographic (MEG) responses of 14 healthy human subjects while they performed a near-threshold auditory discrimination task vs. a visual control task of similar difficulty. The auditory stimuli consisted of notch-filtered continuous noise masker sounds, and of 1020-Hz target tones occasionally replacing 1000-Hz standard tones of 300-ms duration that were embedded at the center of the notches, the widths of which were parametrically varied. As a control for masker effects, tone-evoked responses were additionally recorded without the masker sound. Selective attention to tones significantly increased the amplitude of the onset M100 response at 100 ms to the standard tones in the presence of the masker sounds, especially with notches narrower than the critical band. Further, attention modulated the sustained response most clearly in the 300–400 ms range from sound onset, and at narrower notch widths than for the M100, thus selectively reducing the masker-induced suppression of the tone-evoked response. Our results show evidence of a multiple-stage filtering mechanism of sensory input in the human auditory cortex: 1) one at early (100 ms) latencies bilaterally in posterior parts of the secondary auditory areas, and 2) adaptive filtering of attended sounds from the task-irrelevant background masker at longer latency (300 ms) in more medial auditory cortical regions, predominantly in the left hemisphere, enhancing processing of near-threshold sounds. PMID:23071654
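
    For illustration only, here is a minimal Python sketch of the general stimulus idea described above: wideband noise with a spectral notch of parametrically varied width around 1 kHz, and a 300-ms tone embedded at the notch center. All function names, parameter values, and levels are assumptions for demonstration and are not taken from the study.

      import numpy as np

      def notch_noise(duration_s, fs, center_hz, notch_width_hz, rng):
          """White noise with a spectral notch of the given width around center_hz."""
          n = int(duration_s * fs)
          spec = np.fft.rfft(rng.standard_normal(n))
          freqs = np.fft.rfftfreq(n, 1.0 / fs)
          # Zero out components inside the notch (center +/- half the notch width).
          spec[np.abs(freqs - center_hz) < notch_width_hz / 2.0] = 0.0
          masker = np.fft.irfft(spec, n)
          return masker / np.max(np.abs(masker))  # normalize to +/-1

      def tone(duration_s, fs, freq_hz):
          t = np.arange(int(duration_s * fs)) / fs
          return np.sin(2 * np.pi * freq_hz * t)

      fs = 44100
      rng = np.random.default_rng(0)
      # Notch width is the parameter varied across conditions (400 Hz is an arbitrary example).
      masker = notch_noise(1.0, fs, center_hz=1000.0, notch_width_hz=400.0, rng=rng)
      standard = 0.5 * tone(0.3, fs, 1000.0)        # 300-ms tone at the notch center frequency
      start = (len(masker) - len(standard)) // 2    # embed the tone at the temporal center
      stimulus = masker.copy()
      stimulus[start:start + len(standard)] += standard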

  15. INDIVIDUAL DIFFERENCES IN AUDITORY PROCESSING IN SPECIFIC LANGUAGE IMPAIRMENT: A FOLLOW-UP STUDY USING EVENT-RELATED POTENTIALS AND BEHAVIOURAL THRESHOLDS

    PubMed Central

    Bishop, Dorothy V.M.; McArthur, Genevieve M.

    2005-01-01

    It has frequently been claimed that children with specific language impairment (SLI) have impaired auditory perception, but there is much controversy about the role of such deficits in causing their language problems, and it has been difficult to establish solid, replicable findings in this area. Discrepancies in this field may arise because (a) a focus on mean results obscures the heterogeneity in the population and (b) insufficient attention has been paid to maturational aspects of auditory processing. We conducted a study of 16 young people with specific language impairment (SLI) and 16 control participants, 24 of whom had had auditory event-related potentials (ERPs) and frequency discrimination thresholds assessed 18 months previously. When originally assessed, around one third of the listeners with SLI had poor behavioural frequency discrimination thresholds, and these tended to be the younger participants. However, most of the SLI group had age-inappropriate late components of the auditory ERP, regardless of their frequency discrimination. At follow-up, the behavioural thresholds of those with poor frequency discrimination improved, though some remained outside the control range. At follow-up, ERPs for many of the individuals in the SLI group were still not age-appropriate. In several cases, waveforms of individuals in the SLI group resembled those of younger typically-developing children, though in other cases the waveform was unlike that of control cases at any age. Electrophysiological methods may reveal underlying immaturity or other abnormality of auditory processing even when behavioural thresholds look normal. This study emphasises the variability seen in SLI, and the importance of studying individual cases rather than focusing on group means. PMID:15871598

  16. Blocking c-Fos Expression Reveals the Role of Auditory Cortex Plasticity in Sound Frequency Discrimination Learning.

    PubMed

    de Hoz, Livia; Gierej, Dorota; Lioudyno, Victoria; Jaworski, Jacek; Blazejczyk, Magda; Cruces-Solís, Hugo; Beroun, Anna; Lebitko, Tomasz; Nikolaev, Tomasz; Knapska, Ewelina; Nelken, Israel; Kaczmarek, Leszek

    2018-05-01

    The behavioral changes that comprise operant learning are associated with plasticity in early sensory cortices as well as with modulation of gene expression, but the connection between the behavioral, electrophysiological, and molecular changes is only partially understood. We specifically manipulated c-Fos expression, a hallmark of learning-induced synaptic plasticity, in auditory cortex of adult mice using a novel approach based on RNA interference. Locally blocking c-Fos expression caused a specific behavioral deficit in a sound discrimination task, in parallel with decreased cortical experience-dependent plasticity, without affecting baseline excitability or basic auditory processing. Thus, c-Fos-dependent experience-dependent cortical plasticity is necessary for frequency discrimination in an operant behavioral task. Our results connect behavioral, molecular and physiological changes and demonstrate a role of c-Fos in experience-dependent plasticity and learning.

  17. Single Neurons in the Avian Auditory Cortex Encode Individual Identity and Propagation Distance in Naturally Degraded Communication Calls.

    PubMed

    Mouterde, Solveig C; Elie, Julie E; Mathevon, Nicolas; Theunissen, Frédéric E

    2017-03-29

    One of the most complex tasks performed by sensory systems is "scene analysis": the interpretation of complex signals as behaviorally relevant objects. The study of this problem, universal to species and sensory modalities, is particularly challenging in audition, where sounds from various sources and localizations, degraded by propagation through the environment, sum to form a single acoustical signal. Here we investigated in a songbird model, the zebra finch, the neural substrate for ranging and identifying a single source. We relied on ecologically and behaviorally relevant stimuli, contact calls, to investigate the neural discrimination of individual vocal signature as well as sound source distance when calls have been degraded through propagation in a natural environment. Performing electrophysiological recordings in anesthetized birds, we found neurons in the auditory forebrain that discriminate individual vocal signatures despite long-range degradation, as well as neurons discriminating propagation distance, with varying degrees of multiplexing between both information types. Moreover, the neural discrimination performance of individual identity was not affected by propagation-induced degradation beyond what was induced by the decreased intensity. For the first time, neurons with distance-invariant identity discrimination properties as well as distance-discriminant neurons are revealed in the avian auditory cortex. Because these neurons were recorded in animals that had prior experience neither with the vocalizers of the stimuli nor with long-range propagation of calls, we suggest that this neural population is part of a general-purpose system for vocalizer discrimination and ranging. SIGNIFICANCE STATEMENT Understanding how the brain makes sense of the multitude of stimuli that it continually receives in natural conditions is a challenge for scientists. Here we provide a new understanding of how the auditory system extracts behaviorally relevant information, the vocalizer identity and its distance to the listener, from acoustic signals that have been degraded by long-range propagation in natural conditions. We show, for the first time, that single neurons, in the auditory cortex of zebra finches, are capable of discriminating the individual identity and sound source distance in conspecific communication calls. The discrimination of identity in propagated calls relies on a neural coding that is robust to intensity changes, signals' quality, and decreases in the signal-to-noise ratio. Copyright © 2017 Mouterde et al.

  18. Brainstem Correlates of Temporal Auditory Processing in Children with Specific Language Impairment

    ERIC Educational Resources Information Center

    Basu, Madhavi; Krishnan, Ananthanarayan; Weber-Fox, Christine

    2010-01-01

    Deficits in identification and discrimination of sounds with short inter-stimulus intervals or short formant transitions in children with specific language impairment (SLI) have been taken to reflect an underlying temporal auditory processing deficit. Using the sustained frequency following response (FFR) and the onset auditory brainstem responses…

  19. Improving Dorsal Stream Function in Dyslexics by Training Figure/Ground Motion Discrimination Improves Attention, Reading Fluency, and Working Memory

    PubMed Central

    Lawton, Teri

    2016-01-01

    There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue three interventions were compared in 58 dyslexics in second grade (7 years on average), two targeting the temporal dynamics (timing) of either the auditory or visual pathways with a third reading intervention (control group) targeting linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency, both speed and comprehension, phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used to not only detect dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning. PMID:27551263

  1. Level-tolerant duration selectivity in the auditory cortex of the velvety free-tailed bat Molossus molossus.

    PubMed

    Macías, Silvio; Hernández-Abad, Annette; Hechavarría, Julio C; Kössl, Manfred; Mora, Emanuel C

    2015-05-01

    It has been reported previously that in the inferior colliculus of the bat Molossus molossus, neuronal duration tuning is ambiguous because the tuning type of the neurons changes dramatically with sound level. In the present study, duration tuning was examined in the auditory cortex of M. molossus to determine whether it is as ambiguous as the collicular tuning. From a population of 174 cortical neurons, 104 (60%) did not show duration selectivity (all-pass). Around 5% (9 units) responded preferentially to stimuli of longer durations, showing long-pass duration response functions; 35 (20%) responded to a narrow range of stimulus durations, showing band-pass duration response functions; 24 (14%) responded most strongly to short stimulus durations, showing short-pass duration response functions; and two neurons (1%) responded best to two different stimulus durations, showing a two-peaked duration-response function. The majority of neurons showing short-pass (16 out of 24) and band-pass (24 out of 35) selectivity displayed "O-shaped" duration response areas. In contrast to the inferior colliculus, duration tuning in the auditory cortex of M. molossus appears level tolerant. That is, the type of duration selectivity and the stimulus duration eliciting the maximum response were unaffected by changing sound level.

  2. [A case of transient auditory agnosia and schizophrenia].

    PubMed

    Kanzaki, Jin; Harada, Tatsuhiko; Kanzaki, Sho

    2011-03-01

    We report a case of transient functional auditory agnosia in a patient with schizophrenia and discuss the relationship between the two conditions. A 30-year-old woman with schizophrenia who reported bilateral hearing loss was found during history taking to be able to hear, yet she could neither understand speech nor discriminate among environmental sounds. Audiometry showed normal hearing thresholds but low speech discrimination. Otoacoustic emissions and the auditory brainstem response were normal. Magnetic resonance imaging (MRI) performed elsewhere showed no abnormal findings. We assumed that caring for her grandparents, who had been discharged from the hospital, had unduly stressed her; her condition improved shortly after she stopped caring for them, returned home, and started taking a minor tranquilizer.

  3. [Auditory training in workshops: group therapy option].

    PubMed

    Santos, Juliana Nunes; do Couto, Isabel Cristina Plais; Amorim, Raquel Martins da Costa

    2006-01-01

    BACKGROUND: auditory training in groups. AIM: to verify the efficacy of auditory training in a workshop environment for a group of individuals with mental retardation. METHOD: a longitudinal prospective study with 13 mentally retarded individuals from the Associação de Pais e Amigos do Excepcional (APAE) of Congonhas, divided into two groups, case (n=5) and control (n=8), who were submitted to ten auditory training sessions after the integrity of the peripheral auditory system had been verified through evoked otoacoustic emissions. Participants were evaluated using a specific protocol covering auditory abilities (sound localization, auditory identification, memory, sequencing, auditory discrimination, and auditory comprehension) at the beginning and at the end of the project. Data entry, processing, and analysis were carried out with the Epi Info 6.04 software. RESULTS: the groups did not differ in age (mean = 23.6 years) or gender (40% male). In the first evaluation both groups presented similar performances. In the final evaluation an improvement in auditory abilities was observed for the individuals in the case group. When comparing the mean number of correct answers obtained by both groups in the first and final evaluations, statistically significant results were obtained for sound localization (p=0.02), auditory sequencing (p=0.006), and auditory discrimination (p=0.03). CONCLUSION: group auditory training proved effective in individuals with mental retardation, with an observed improvement in auditory abilities. More studies, with a larger number of participants, are needed to confirm the findings of the present research. These results will help public health professionals to reassess the theoretical models used in therapy, so that specific methods, such as auditory training workshops, can be applied according to individual needs.

  4. Auditory phase and frequency discrimination: a comparison of nine procedures.

    PubMed

    Creelman, C D; Macmillan, N A

    1979-02-01

    Two auditory discrimination tasks were thoroughly investigated: discrimination of frequency differences from a sinusoidal signal of 200 Hz and discrimination of differences in relative phase of mixed sinusoids of 200 Hz and 400 Hz. For each task psychometric functions were constructed for three observers, using nine different psychophysical measurement procedures. These procedures included yes-no, two-interval forced-choice, and various fixed- and variable-standard designs that investigators have used in recent years. The data showed wide ranges of apparent sensitivity. For frequency discrimination, models derived from signal detection theory for each psychophysical procedure seem to account for the performance differences. For phase discrimination the models do not account for the data. We conclude that for some discriminative continua the assumptions of signal detection theory are appropriate, and underlying sensitivity may be derived from raw data by appropriate transformations. For other continua the models of signal detection theory are probably inappropriate; we speculate that phase might be discriminable only on the basis of comparison or change and suggest some tests of our hypothesis.
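
    By way of illustration, the Python sketch below shows the standard equal-variance signal detection theory transformations that place yes-no and two-interval forced-choice (2IFC) results on a common d' scale, the kind of model-based comparison discussed above. The proportions used in the example are invented and are not data from the study.

      from statistics import NormalDist
      import math

      z = NormalDist().inv_cdf  # inverse of the standard normal CDF

      def dprime_yes_no(hit_rate, false_alarm_rate):
          """Yes-no task: d' = z(H) - z(FA) under the equal-variance Gaussian model."""
          return z(hit_rate) - z(false_alarm_rate)

      def dprime_2ifc(proportion_correct):
          """2IFC task: d' = sqrt(2) * z(Pc) under the independent-observation model."""
          return math.sqrt(2) * z(proportion_correct)

      # Apparently different raw scores can imply very similar underlying sensitivity.
      print(round(dprime_yes_no(0.80, 0.25), 2))  # ~1.52
      print(round(dprime_2ifc(0.86), 2))          # ~1.53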

  5. The Duration of Auditory Sensory Memory for Vowel Processing: Neurophysiological and Behavioral Measures.

    PubMed

    Yu, Yan H; Shafer, Valerie L; Sussman, Elyse S

    2018-01-01

    Speech perception behavioral research suggests that rates of sensory memory decay are dependent on stimulus properties at more than one level (e.g., acoustic level, phonemic level). The neurophysiology of sensory memory decay rate has rarely been examined in the context of speech processing. In a lexical tone study, we showed that long-term memory representation of lexical tone slows the decay rate of sensory memory for these tones. Here, we tested the hypothesis that long-term memory representation of vowels slows the rate of auditory sensory memory decay in a similar way to that of lexical tone. Event-related potential (ERP) responses were recorded to Mandarin non-words contrasting the vowels /i/ vs. /u/ and /y/ vs. /u/ from first-language (L1) Mandarin and L1 American English participants under short and long interstimulus interval (ISI) conditions (short ISI: an average of 575 ms; long ISI: an average of 2675 ms). Results revealed poorer discrimination of the vowel contrasts for English listeners than Mandarin listeners, but with different patterns for behavioral perception and neural discrimination. As predicted, English listeners showed the poorest discrimination and identification for the vowel contrast /y/ vs. /u/, and poorer performance in the long ISI condition. In contrast to Yu et al. (2017), however, we found no effect of ISI reflected in the neural responses, specifically the mismatch negativity (MMN), P3a, and late negativity ERP amplitudes. We did see a language group effect, with Mandarin listeners generally showing larger MMN and English listeners showing larger P3a. The behavioral results revealed that native language experience plays a role in echoic sensory memory trace maintenance, but the failure to find an effect of ISI on the ERP results suggests that vowel and lexical tone memory traces decay at different rates. Highlights: We examined the interaction between auditory sensory memory decay and language experience. We compared MMN, P3a, LN, and behavioral responses at short vs. long interstimulus intervals. We found that, unlike for the lexical tone contrasts, MMN, P3a, and LN responses to vowel contrasts are not influenced by lengthening the ISI to 2.6 s. We also found that the English listeners discriminated the non-native vowel contrast with lower accuracy under the long ISI condition.

  6. Infant auditory short-term memory for non-linguistic sounds.

    PubMed

    Ross-Sheehy, Shannon; Newman, Rochelle S

    2015-04-01

    This research explores auditory short-term memory (STM) capacity for non-linguistic sounds in 10-month-old infants. Infants were presented with auditory streams composed of repeating sequences of either 2 or 4 unique instruments (e.g., flute, piano, cello; 350 or 700 ms in duration) followed by a 500-ms retention interval. These instrument sequences either stayed the same for every repetition (Constant) or changed by 1 instrument per sequence (Varying). Using the head-turn preference procedure, infant listening durations were recorded for each stream type (2- or 4-instrument sequences composed of 350- or 700-ms notes). Preference for the Varying stream was taken as evidence of auditory STM because detection of the novel instrument required memory for all of the instruments in a given sequence. Results demonstrate that infants listened longer to Varying streams for 2-instrument sequences, but not 4-instrument sequences, composed of 350-ms notes (Experiment 1), although this effect did not hold when note durations were increased to 700 ms (Experiment 2). Experiment 3 replicates and extends results from Experiments 1 and 2 and provides support for a duration account of capacity limits in infant auditory STM. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. The selective processing of emotional visual stimuli while detecting auditory targets: an ERP analysis.

    PubMed

    Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2008-09-16

    Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.

  8. Hox2 Genes Are Required for Tonotopic Map Precision and Sound Discrimination in the Mouse Auditory Brainstem.

    PubMed

    Karmakar, Kajari; Narita, Yuichi; Fadok, Jonathan; Ducret, Sebastien; Loche, Alberto; Kitazawa, Taro; Genoud, Christel; Di Meglio, Thomas; Thierry, Raphael; Bacelo, Joao; Lüthi, Andreas; Rijli, Filippo M

    2017-01-03

    Tonotopy is a hallmark of auditory pathways and provides the basis for sound discrimination. Little is known about the involvement of transcription factors in brainstem cochlear neurons orchestrating the tonotopic precision of pre-synaptic input. We found that in the absence of Hoxa2 and Hoxb2 function in Atoh1-derived glutamatergic bushy cells of the anterior ventral cochlear nucleus, broad input topography and sound transmission were largely preserved. However, fine-scale synaptic refinement and sharpening of isofrequency bands of cochlear neuron activation upon pure tone stimulation were impaired in Hox2 mutants, resulting in defective sound-frequency discrimination in behavioral tests. These results establish a role for Hox factors in tonotopic refinement of connectivity and in ensuring the precision of sound transmission in the mammalian auditory circuit. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  9. Auditory discrimination therapy (ADT) for tinnitus management.

    PubMed

    Herraiz, C; Diges, I; Cobo, P

    2007-01-01

    Auditory discrimination training (ADT) is a procedure designed to increase the cortical areas responding to trained frequencies (damaged cochlear areas with cortical misrepresentation) and to shrink the neighboring over-represented ones (the tinnitus pitch). In a prospective descriptive study of 27 patients with high-frequency tinnitus, the severity of the tinnitus was measured using a visual analog scale (VAS) and the tinnitus handicap inventory (THI). Patients performed a 10-min auditory discrimination task twice a day for one month. Discontinuous 4 kHz pure tones were mixed randomly with short broadband noise sounds through an MP3 system. After the treatment, mean VAS scores were reduced from 5.2 to 4.5 (p=0.000) and the THI decreased from 26.2% to 21.3% (p=0.000). Forty percent of the patients showed improvement in tinnitus perception (RESP). Comparing the ADT group with a control group showed statistically significant improvement of tinnitus as assessed by RESP, VAS, and THI.

  10. Odors Bias Time Perception in Visual and Auditory Modalities

    PubMed Central

    Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang

    2016-01-01

    Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, with a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and these impacts were constrained by sensory modality, the valence of the emotional events, and target duration. Biases in time perception could be accounted for by a framework of attentional deployment between the inducers (odors) and emotionally neutral stimuli (visual dots and sound beeps). PMID:27148143

  11. Spectral-temporal EEG dynamics of speech discrimination processing in infants during sleep.

    PubMed

    Gilley, Phillip M; Uhler, Kristin; Watson, Kaylee; Yoshinaga-Itano, Christine

    2017-03-22

    Oddball paradigms are frequently used to study auditory discrimination by comparing event-related potential (ERP) responses to a standard, high-probability sound and to a deviant, low-probability sound. Previous research has established that such paradigms, including the mismatch response or mismatch negativity, are useful for examining auditory processes in young children and infants across various sleep and attention states. The extent to which oddball ERP responses may reflect subtle discrimination effects, such as speech discrimination, is largely unknown, especially in infants who have not yet acquired speech and language. Mismatch responses for three contrasts (non-speech, vowel, and consonant) were computed as a spectral-temporal probability function in 24 infants, and analyzed at the group level by a modified multidimensional scaling. Immediately following an onset gamma response (30-50 Hz), the emergence of a beta oscillation (12-30 Hz) was temporally coupled with a lower frequency theta oscillation (2-8 Hz). The spectral-temporal probability of this coupling effect relative to a subsequent theta modulation corresponds with discrimination difficulty for non-speech, vowel, and consonant contrast features. The theta modulation effect suggests that unexpected sounds are encoded as a probabilistic measure of surprise. These results support the notion that auditory discrimination is driven by the development of brain networks for predictive processing, and can be measured in infants during sleep. The results presented here have implications for the interpretation of discrimination as a probabilistic process, and may provide a basis for the development of single-subject and single-trial classification in a clinically useful context. An infant's brain is processing information about the environment and performing computations, even during sleep. These computations reflect subtle differences in acoustic feature processing that are necessary for language learning. Results from this study suggest that brain responses to deviant sounds in an oddball paradigm follow a cascade of oscillatory modulations. This cascade begins with a gamma response that later emerges as a beta synchronization, which is temporally coupled with a theta modulation and is followed by a second, subsequent theta modulation. The difference in frequency and timing of the theta modulations appears to reflect a measure of surprise. These insights into the neurophysiological mechanisms of auditory discrimination provide a basis for exploring the clinical utility of the MMR TF and other auditory oddball responses.

  12. Visually induced gains in pitch discrimination: Linking audio-visual processing with auditory abilities.

    PubMed

    Møller, Cecilie; Højlund, Andreas; Bærentsen, Klaus B; Hansen, Niels Chr; Skewes, Joshua C; Vuust, Peter

    2018-05-01

    Perception is fundamentally a multisensory experience. The principle of inverse effectiveness (PoIE) states that multisensory gain is maximal when responses to the unisensory constituents of the stimuli are weak. It is one of the basic principles underlying multisensory processing of spatiotemporally corresponding crossmodal stimuli, and it is well established at both the behavioral and neural levels. It is not yet clear, however, how modality-specific stimulus features influence discrimination of subtle changes in a crossmodally corresponding feature belonging to another modality. Here, we tested the hypothesis that reliance on visual cues for pitch discrimination follows the PoIE at the interindividual level (i.e., varies with individual levels of auditory-only pitch discrimination ability). Using an oddball pitch discrimination task, we measured the effect of varying visually perceived vertical position in participants exhibiting a wide range of pitch discrimination abilities (i.e., musicians and nonmusicians). Visual cues significantly enhanced pitch discrimination as measured by the sensitivity index d', and more so in the crossmodally congruent than incongruent condition. The magnitude of the gain caused by compatible visual cues was associated with individual pitch discrimination thresholds, as predicted by the PoIE. This was not the case for the magnitude of the congruence effect, which was unrelated to individual pitch discrimination thresholds, indicating that the pitch-height association is robust to variations in auditory skills. Our findings shed light on individual differences in multisensory processing by suggesting that relevant multisensory information that crucially aids some perceivers' performance may be of less importance to others, depending on their unisensory abilities.

  13. Auditory experience controls the maturation of song discrimination and sexual response in Drosophila

    PubMed Central

    Li, Xiaodong; Ishimoto, Hiroshi

    2018-01-01

    In birds and higher mammals, auditory experience during development is critical to discriminate sound patterns in adulthood. However, the neural and molecular nature of this acquired ability remains elusive. In fruit flies, acoustic perception has been thought to be innate. Here we report, surprisingly, that auditory experience of a species-specific courtship song in developing Drosophila shapes adult song perception and resultant sexual behavior. Preferences in the song-response behaviors of both males and females were tuned by social acoustic exposure during development. We examined the molecular and cellular determinants of this social acoustic learning and found that GABA signaling acting on the GABAA receptor Rdl in the pC1 neurons, the integration node for courtship stimuli, regulated auditory tuning and sexual behavior. These findings demonstrate that maturation of auditory perception in flies is unexpectedly plastic and is acquired socially, providing a model to investigate how song learning regulates mating preference in insects. PMID:29555017

  14. Cortical activity patterns predict speech discrimination ability

    PubMed Central

    Engineer, Crystal T; Perez, Claudia A; Chen, YeTing H; Carraway, Ryan S; Reed, Amanda C; Shetake, Jai A; Jakkamsetti, Vikram; Chang, Kevin Q; Kilgard, Michael P

    2010-01-01

    Neural activity in the cerebral cortex can explain many aspects of sensory perception. Extensive psychophysical and neurophysiological studies of visual motion and vibrotactile processing show that the firing rate of cortical neurons averaged across 50–500 ms is well correlated with discrimination ability. In this study, we tested the hypothesis that primary auditory cortex (A1) neurons use temporal precision on the order of 1–10 ms to represent speech sounds shifted into the rat hearing range. Neural discrimination was highly correlated with behavioral performance on 11 consonant-discrimination tasks when spike timing was preserved and was not correlated when spike timing was eliminated. This result suggests that spike timing contributes to the auditory cortex representation of consonant sounds. PMID:18425123

  15. The effects of real and illusory glides on pure-tone frequency discrimination.

    PubMed

    Lyzenga, J; Carlyon, R P; Moore, B C J

    2004-07-01

    Experiment 1 measured pure-tone frequency difference limens (DLs) at 1 and 4 kHz. The stimuli had two steady-state portions, which differed in frequency for the target. These portions were separated by a middle section of varying length, which consisted of a silent gap, a frequency glide, or a noise burst (conditions: gap, glide, and noise, respectively). The noise burst created an illusion of the tone continuing through the gap. In the first condition, the stimuli had an overall duration of 500 ms. In the second condition, stimuli had a fixed 50-ms middle section, and the overall duration was varied. DLs were lower for the glide than for the gap condition, consistent with the idea that the auditory system contains a mechanism specific for the detection of dynamic changes. DLs were generally lower for the noise than for the gap condition, suggesting that this mechanism extracts information from an illusory glide. In a second experiment, pure-tone frequency direction-discrimination thresholds were measured using similar stimuli as for the first experiment. For this task, the type of the middle section hardly affected the thresholds, suggesting that the frequency-change detection mechanism does not facilitate the identification of the direction of frequency changes.

  16. Fundamental deficits of auditory perception in Wernicke's aphasia.

    PubMed

    Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen

    2013-01-01

    This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.
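
    The abstract above refers to criterion-free, adaptive threshold measures. The following Python sketch shows one common procedure of that general kind, a 2-down/1-up transformed staircase that converges on the roughly 70.7%-correct point of the psychometric function; the simulated listener, step size, and stopping rule are illustrative assumptions, not the authors' protocol.

      import random

      def simulated_trial(level, true_threshold):
          """Simulated 2AFC listener: percent correct rises from 50% toward 100% above threshold."""
          p_correct = 0.5 + 0.5 / (1.0 + 10 ** ((true_threshold - level) / 2.0))
          return random.random() < p_correct

      def staircase(start_level=20.0, step=2.0, n_reversals=10, true_threshold=8.0):
          level, correct_in_row, direction = start_level, 0, -1
          reversals = []
          while len(reversals) < n_reversals:
              if simulated_trial(level, true_threshold):
                  correct_in_row += 1
                  if correct_in_row == 2:          # two correct in a row: make the task harder
                      correct_in_row = 0
                      if direction == +1:
                          reversals.append(level)  # direction change: record a reversal
                      direction = -1
                      level -= step
              else:                                # one error: make the task easier
                  correct_in_row = 0
                  if direction == -1:
                      reversals.append(level)
                  direction = +1
                  level += step
          return sum(reversals[-6:]) / 6.0         # threshold: mean of the last six reversals

      random.seed(1)
      print(round(staircase(), 1))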

  17. Auditory Processing, Linguistic Prosody Awareness, and Word Reading in Mandarin-Speaking Children Learning English

    ERIC Educational Resources Information Center

    Chung, Wei-Lun; Jarmulowicz, Linda; Bidelman, Gavin M.

    2017-01-01

    This study examined language-specific links among auditory processing, linguistic prosody awareness, and Mandarin (L1) and English (L2) word reading in 61 Mandarin-speaking, English-learning children. Three auditory discrimination abilities were measured: pitch contour, pitch interval, and rise time (rate of intensity change at tone onset).…

  18. To Modulate and Be Modulated: Estrogenic Influences on Auditory Processing of Communication Signals within a Socio-Neuro-Endocrine Framework

    PubMed Central

    Yoder, Kathleen M.; Vicario, David S.

    2012-01-01

    Gonadal hormones modulate behavioral responses to sexual stimuli, and communication signals can also modulate circulating hormone levels. In several species, these combined effects appear to underlie a two-way interaction between circulating gonadal hormones and behavioral responses to socially salient stimuli. Recent work in songbirds has shown that manipulating local estradiol levels in the auditory forebrain produces physiological changes that affect discrimination of conspecific vocalizations and can affect behavior. These studies provide new evidence that estrogens can directly alter auditory processing and indirectly alter the behavioral response to a stimulus. These studies show that: 1. Local estradiol action within an auditory area is necessary for socially-relevant sounds to induce normal physiological responses in the brains of both sexes; 2. These physiological effects occur much more quickly than predicted by the classical time-frame for genomic effects; 3. Estradiol action within the auditory forebrain enables behavioral discrimination among socially-relevant sounds in males; and 4. Estradiol is produced locally in the male brain during exposure to particular social interactions. The accumulating evidence suggests a socio-neuro-endocrinology framework in which estradiol is essential to auditory processing, is increased by a socially relevant stimulus, acts rapidly to shape perception of subsequent stimuli experienced during social interactions, and modulates behavioral responses to these stimuli. Brain estrogens are likely to function similarly in both songbird sexes because aromatase and estrogen receptors are present in both male and female forebrain. Estrogenic modulation of perception in songbirds and perhaps other animals could fine-tune male advertising signals and female ability to discriminate them, facilitating mate selection by modulating behaviors. Keywords: Estrogens, Songbird, Social Context, Auditory Perception PMID:22201281

  19. Duration judgments in children with ADHD suggest deficient utilization of temporal information rather than general impairment in timing.

    PubMed

    Radonovich, Krestin J; Mostofsky, Stewart H

    2004-09-01

    Clinicians, parents, and teachers alike have noted that individuals with ADHD often have difficulties with "time management," which has led some to suggest a primary deficit in time perception in ADHD. Previous studies have implicated the basal ganglia, cerebellum, and frontal lobes in time estimation and production, with each region purported to make different contributions to the processing and utilization of temporal information. Given the observed involvement of the frontal-subcortical networks in ADHD, we examined judgment of durations in children with ADHD (N = 27) and age- and gender-matched control subjects (N = 15). Two judgment tasks were administered: short duration (550 ms) and long duration (4 s). The two groups did not differ significantly in their judgments of short interval durations; however, subjects with ADHD performed more poorly when making judgments involving long intervals. The groups also did not differ on a judgment-of-pitch task, ruling out a generalized deficit in auditory discrimination. Selective impairment in making judgments involving long intervals is consistent with performance by patients with frontal lobe lesions and suggests that there is a deficiency in the utilization of temporal information in ADHD (possibly secondary to deficits in working memory and/or strategy utilization), rather than a problem involving a central timing mechanism.

  20. Rhythmic grouping biases constrain infant statistical learning

    PubMed Central

    Hay, Jessica F.; Saffran, Jenny R.

    2012-01-01

    Linguistic stress and sequential statistical cues to word boundaries interact during speech segmentation in infancy. However, little is known about how the different acoustic components of stress constrain statistical learning. The current studies were designed to investigate whether intensity and duration each function independently as cues to initial prominence (trochaic-based hypothesis) or whether, as predicted by the Iambic-Trochaic Law (ITL), intensity and duration have characteristic and separable effects on rhythmic grouping (ITL-based hypothesis) in a statistical learning task. Infants were familiarized with an artificial language (Experiments 1 & 3) or a tone stream (Experiment 2) in which there was an alternation in either intensity or duration. In addition to potential acoustic cues, the familiarization sequences also contained statistical cues to word boundaries. In speech (Experiment 1) and non-speech (Experiment 2) conditions, 9-month-old infants demonstrated discrimination patterns consistent with an ITL-based hypothesis: intensity signaled initial prominence and duration signaled final prominence. The results of Experiment 3, in which 6.5-month-old infants were familiarized with the speech streams from Experiment 1, suggest that there is a developmental change in infants’ willingness to treat increased duration as a cue to word offsets in fluent speech. Infants’ perceptual systems interact with linguistic experience to constrain how infants learn from their auditory environment. PMID:23730217
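
    The sequential statistical cue referred to above is usually the transitional probability between adjacent syllables, P(B|A) = count(AB) / count(A), which is high within words and lower across word boundaries. The toy Python example below uses an invented syllable stream purely to show the computation; it is not the stimulus material of the study.

      from collections import Counter

      # Invented stream in which the "word" go-la-tu recurs; spaces separate syllables.
      stream = ("go la tu pa bi ku ti bu do go la tu da ro pi "
                "go la tu bi ku ti bu do pa bi ku").split()

      pair_counts = Counter(zip(stream, stream[1:]))   # counts of adjacent syllable pairs
      syllable_counts = Counter(stream[:-1])           # counts of syllables in first position

      def transitional_probability(a, b):
          """P(b | a) estimated from the stream."""
          return pair_counts[(a, b)] / syllable_counts[a]

      print(transitional_probability("go", "la"))   # within-word transition: 1.0 here
      print(transitional_probability("tu", "pa"))   # transition spanning a word boundary: ~0.33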

  1. Effect of musical training on static and dynamic measures of spectral-pattern discrimination.

    PubMed

    Sheft, Stanley; Smayda, Kirsten; Shafiro, Valeriy; Maddox, W Todd; Chandrasekaran, Bharath

    2013-06-01

    Both behavioral and physiological studies have demonstrated enhanced processing of speech in challenging listening environments attributable to musical training. The relationship, however, of this benefit to auditory abilities as assessed by psychoacoustic measures remains unclear. Using tasks previously shown to relate to speech-in-noise perception, the present study evaluated discrimination ability for static and dynamic spectral patterns by 49 listeners grouped as either musicians or nonmusicians. The two static conditions measured the ability to detect a change in the phase of a logarithmic sinusoidal spectral ripple of wideband noise with ripple densities of 1.5 and 3.0 cycles per octave chosen to emphasize either timbre or pitch distinctions, respectively. The dynamic conditions assessed temporal-pattern discrimination of 1-kHz pure tones frequency modulated by different lowpass noise samples with thresholds estimated in terms of either stimulus duration or signal-to-noise ratio. Musicians performed significantly better than nonmusicians on all four tasks. Discriminant analysis showed that group membership was correctly predicted for 88% of the listeners with the structure coefficient of each measure greater than 0.51. Results suggest that enhanced processing of static and dynamic spectral patterns defined by low-rate modulation may contribute to the relationship between musical training and speech-in-noise perception. [Supported by NIH.].
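
    As a rough illustration of the static stimuli described above, the Python sketch below imposes a logarithmic sinusoidal spectral ripple of a given density (cycles per octave) and phase on wideband noise; detecting a change in ripple phase then probes spectral-pattern discrimination. The sampling rate, ripple depth, and passband edge are assumptions, not the study's parameters.

      import numpy as np

      def ripple_noise(fs=44100, duration_s=0.5, density_c_per_oct=1.5,
                       phase_rad=0.0, depth_db=20.0, f_low=100.0, seed=0):
          """Wideband noise whose log-frequency spectral envelope is a sinusoidal ripple."""
          rng = np.random.default_rng(seed)
          n = int(fs * duration_s)
          spec = np.fft.rfft(rng.standard_normal(n))
          freqs = np.fft.rfftfreq(n, 1.0 / fs)
          above = freqs >= f_low
          octaves = np.zeros_like(freqs)
          octaves[above] = np.log2(freqs[above] / f_low)   # distance above f_low in octaves
          gain_db = (depth_db / 2.0) * np.sin(2 * np.pi * density_c_per_oct * octaves + phase_rad)
          spec[above] *= 10 ** (gain_db[above] / 20.0)     # sinusoidal ripple of the dB spectrum
          spec[~above] = 0.0                               # remove energy below the passband
          x = np.fft.irfft(spec, n)
          return x / np.max(np.abs(x))

      standard = ripple_noise(density_c_per_oct=1.5, phase_rad=0.0)      # reference ripple phase
      target = ripple_noise(density_c_per_oct=1.5, phase_rad=np.pi / 2)  # phase shift to be detected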

  2. Speech Discrimination Difficulties in High-Functioning Autism Spectrum Disorder Are Likely Independent of Auditory Hypersensitivity

    PubMed Central

    Dunlop, William A.; Enticott, Peter G.; Rajan, Ramesh

    2016-01-01

    Autism Spectrum Disorder (ASD), characterized by impaired communication skills and repetitive behaviors, can also result in differences in sensory perception. Individuals with ASD often perform normally in simple auditory tasks but poorly compared to typically developed (TD) individuals on complex auditory tasks like discriminating speech from complex background noise. A common trait of individuals with ASD is hypersensitivity to auditory stimulation. No studies to our knowledge consider whether hypersensitivity to sounds is related to differences in speech-in-noise discrimination. We provide novel evidence that individuals with high-functioning ASD show poor performance compared to TD individuals in a speech-in-noise discrimination task with an attentionally demanding background noise, but not in a purely energetic noise. Further, we demonstrate in our small sample that speech-hypersensitivity does not appear to predict performance in the speech-in-noise task. The findings support the argument that an attentional deficit, rather than a perceptual deficit, affects the ability of individuals with ASD to discriminate speech from background noise. Finally, we piloted a novel questionnaire that measures difficulty hearing in noisy environments, and sensitivity to non-verbal and verbal sounds. Psychometric analysis using 128 TD participants provided novel evidence for a difference in sensitivity to non-verbal and verbal sounds, and these findings were reinforced by participants with ASD who also completed the questionnaire. The study was limited by a small and high-functioning sample of participants with ASD. Future work could test larger sample sizes and include lower-functioning ASD participants. PMID:27555814

  3. Development of auditory sensory memory from 2 to 6 years: an MMN study.

    PubMed

    Glass, Elisabeth; Sachse, Steffi; von Suchodoletz, Waldemar

    2008-08-01

    Short-term storage of auditory information is thought to be a precondition for cognitive development, and deficits in short-term memory are believed to underlie learning disabilities and specific language disorders. We examined the development of the duration of auditory sensory memory in normally developing children between the ages of 2 and 6 years. To probe the lifetime of auditory sensory memory we elicited the mismatch negativity (MMN), a component of the late auditory evoked potential, with tone stimuli of two different frequencies presented with various interstimulus intervals between 500 and 5,000 ms. Our findings suggest that memory traces for tone characteristics have a duration of 1-2 s in 2- and 3-year-old children, more than 2 s in 4-year-olds and 3-5 s in 6-year-olds. The results provide insights into the maturational processes involved in auditory sensory memory during the sensitive period of cognitive development.
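
    To make the MMN measure concrete, here is a small Python sketch of how a mismatch response is typically quantified from epoched data: average the standard and deviant epochs and take the deviant-minus-standard difference in a fixed latency window. The sampling rate, epoch length, window, and simulated data are assumptions for illustration only.

      import numpy as np

      fs = 500                                   # samples per second (assumed)
      times = np.arange(-0.1, 0.5, 1.0 / fs)     # epoch from -100 ms to +500 ms

      def mmn_amplitude(standard_epochs, deviant_epochs, window=(0.10, 0.25)):
          """Mean deviant-minus-standard amplitude in the given latency window (seconds)."""
          diff = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
          mask = (times >= window[0]) & (times <= window[1])
          return diff[mask].mean()

      # Simulated single-channel epochs (trials x samples), only to show the call pattern.
      rng = np.random.default_rng(0)
      standard = rng.standard_normal((200, times.size))
      deviant = rng.standard_normal((40, times.size)) - np.exp(
          -((times - 0.17) ** 2) / (2 * 0.03 ** 2))    # negative deflection near 170 ms
      print(mmn_amplitude(standard, deviant))          # negative value indicates an MMN-like response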

  4. Cerebral activation associated with speech sound discrimination during the diotic listening task: an fMRI study.

    PubMed

    Ikeda, Yumiko; Yahata, Noriaki; Takahashi, Hidehiko; Koeda, Michihiko; Asai, Kunihiko; Okubo, Yoshiro; Suzuki, Hidenori

    2010-05-01

    Comprehending conversation in a crowd requires appropriate orienting and sustainment of auditory attention to, and discrimination of, the target speaker. While a multitude of cognitive functions such as voice perception and language processing work in concert to subserve this ability, it is still unclear which cognitive components critically determine successful discrimination of speech sounds under constantly changing auditory conditions. To investigate this, we present a functional magnetic resonance imaging (fMRI) study of changes in cerebral activity associated with varying challenge levels of speech discrimination. Subjects participated in a diotic listening paradigm that presented them with two news stories read simultaneously but independently by a target speaker and a distracting speaker of incongruent or congruent sex. We found that a distracting voice of congruent rather than incongruent sex made listening more challenging, resulting in enhanced activity mainly in the left temporal and frontal gyri. Further, activity at the left inferior, left anterior superior and right superior loci in the temporal gyrus was significantly correlated with the accuracy of discrimination performance. The present results suggest that the subregions of the bilateral temporal gyri play a key role in the successful discrimination of speech under constantly changing auditory conditions as encountered in daily life. Copyright © 2010 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  5. The Time Is Up: Compression of Visual Time Interval Estimations of Bimodal Aperiodic Patterns

    PubMed Central

    Duarte, Fabiola; Lemus, Luis

    2017-01-01

    The ability to estimate time intervals subserves many of our behaviors and perceptual experiences. However, it is not clear how aperiodic (AP) stimuli affect our perception of time intervals across sensory modalities. To address this question, we evaluated the human capacity to discriminate between two acoustic (A), visual (V) or audiovisual (AV) time intervals of trains of scattered pulses. We first measured the periodicity of those stimuli and then looked for correlations with the subjects' accuracy and reaction times (RTs). We found that, for all time intervals tested in our experiment, the visual system consistently perceived AP stimuli as being shorter than the periodic (P) ones. In contrast, such a compression phenomenon was not apparent during auditory trials. Our conclusions are as follows: first, subjects exposed to P stimuli are more likely to estimate their durations accurately. Second, perceptual time compression occurs for AP visual stimuli. Lastly, AV discriminations are determined by A dominance rather than by AV enhancement. PMID:28848406

  6. Auditory mismatch negativity deficits in long-term heavy cannabis users.

    PubMed

    Roser, Patrik; Della, Beate; Norra, Christine; Uhl, Idun; Brüne, Martin; Juckel, Georg

    2010-09-01

    Mismatch negativity (MMN) is an auditory event-related potential indicating auditory sensory memory and information processing. The present study tested the hypothesis that chronic cannabis use is associated with deficient MMN generation. MMN was investigated in age- and gender-matched chronic cannabis users (n = 30) and nonuser controls (n = 30). The cannabis users were divided into two groups according to duration and quantity of cannabis consumption. The MMNs resulting from a pseudorandomized sequence of 2 × 900 auditory stimuli were recorded by 32-channel EEG. The standard stimuli were 1,000 Hz, 80 dB SPL and 90 ms duration. The deviant stimuli differed in duration (50 ms) or frequency (1,200 Hz). There were no significant differences in MMN values between cannabis users and nonuser controls in either deviance condition. With regard to subgroups, reduced amplitudes of frequency MMN at frontal electrodes were found in long-term (≥8 years of use) and heavy (≥15 joints/week) users compared to short-term and light users. The results indicate that long-term, heavy cannabis use may cause a specific impairment of auditory information processing. In particular, duration and quantity of cannabis use could be identified as important factors in deficient MMN generation.
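    As a rough illustration of the oddball protocol described above, the sketch below builds the standard and deviant tones (1,000 Hz / 90 ms standard, 50 ms duration deviant, 1,200 Hz frequency deviant) and one pseudorandomized trial sequence. The sampling rate and the 10% deviant probability are assumptions; the record does not specify them.

        # Sketch of the tone stimuli for a duration/frequency MMN oddball paradigm.
        import numpy as np

        FS = 44100  # assumed sampling rate (Hz); not specified in the record

        def tone(freq_hz, dur_ms, fs=FS):
            """Pure tone of the given frequency and duration."""
            t = np.arange(int(fs * dur_ms / 1000)) / fs
            return np.sin(2 * np.pi * freq_hz * t)

        standard = tone(1000, 90)           # 1,000 Hz, 90 ms standard
        duration_deviant = tone(1000, 50)   # same frequency, 50 ms
        frequency_deviant = tone(1200, 90)  # 1,200 Hz, same duration

        # One pseudorandomized block of 900 trials; the 10% deviant rate is an assumption.
        rng = np.random.default_rng(1)
        n_trials = 900
        sequence = np.array(["standard"] * n_trials, dtype=object)
        deviant_idx = rng.choice(n_trials, size=n_trials // 10, replace=False)
        sequence[deviant_idx] = "duration_deviant"  # an analogous block would use frequency deviants
        print((sequence == "duration_deviant").sum(), "deviants in", n_trials, "trials")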

  7. How auditory discontinuities and linguistic experience affect the perception of speech and non-speech in English- and Spanish-speaking listeners

    NASA Astrophysics Data System (ADS)

    Hay, Jessica F.; Holt, Lori L.; Lotto, Andrew J.; Diehl, Randy L.

    2005-04-01

    The present study was designed to investigate the effects of long-term linguistic experience on the perception of non-speech sounds in English and Spanish speakers. Research using tone-onset-time (TOT) stimuli, a type of non-speech analogue of voice-onset-time (VOT) stimuli, has suggested that there is an underlying auditory basis for the perception of stop consonants based on a threshold for detecting onset asynchronies in the vicinity of +20 ms. For English listeners, stop consonant labeling boundaries are congruent with the positive auditory discontinuity, while Spanish speakers place their VOT labeling boundaries and discrimination peaks in the vicinity of 0 ms VOT. The present study addresses the question of whether long-term linguistic experience with different VOT categories affects the perception of non-speech stimuli that are analogous in their acoustic timing characteristics. A series of synthetic VOT stimuli and TOT stimuli were created for this study. Using language-appropriate labeling and ABX discrimination tasks, labeling boundaries (VOT) and discrimination peaks (VOT and TOT) were assessed for 24 monolingual English speakers and 24 monolingual Spanish speakers. The interplay between language experience and auditory biases is discussed. [Work supported by NIDCD.]

  8. Hearing gestures, seeing music: vision influences perceived tone duration.

    PubMed

    Schutz, Michael; Lipscomb, Scott

    2007-01-01

    Percussionists inadvertently use visual information to strategically manipulate audience perception of note duration. Videos of long (L) and short (S) notes performed by a world-renowned percussionist were separated into visual (Lv, Sv) and auditory (La, Sa) components. Visual components contained only the gesture used to perform the note; auditory components contained only the acoustic note itself. Audio and visual components were then crossed to create realistic musical stimuli. Participants were informed of the mismatch and asked to rate the note duration of these audio-visual pairs based on sound alone. Ratings varied based on visual (Lv versus Sv), but not auditory (La versus Sa), components. Therefore, while longer gestures do not make longer notes, they do make longer-sounding notes through the integration of sensory information. This finding contradicts previous research showing that audition dominates temporal tasks such as duration judgment.

  9. Visual and auditory perception in preschool children at risk for dyslexia.

    PubMed

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in the perceptual problems of dyslexics. A contentious research issue in this area has been the nature of the perceptual deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in literate adults and children; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia on auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that both the visual and the auditory perception of children at risk for dyslexia are impaired. The between-group comparison of auditory and visual perception shows that the performance of children at risk was lower than that of children without risk for dyslexia on the temporal tasks. There were no differences between groups on the auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not the consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Tailoring auditory training to patient needs with single and multiple talkers: transfer-appropriate gains on a four-choice discrimination test.

    PubMed

    Barcroft, Joe; Sommers, Mitchell S; Tye-Murray, Nancy; Mauzé, Elizabeth; Schroy, Catherine; Spehar, Brent

    2011-11-01

    Our long-term objective is to develop an auditory training program that will enhance speech recognition in those situations where patients most want improvement. As a first step, the current investigation trained participants using either a single talker or multiple talkers to determine if auditory training leads to transfer-appropriate gains. The experiment implemented a 2 × 2 × 2 mixed design, with training condition as a between-participants variable and testing interval and test version as repeated-measures variables. Participants completed a computerized six-week auditory training program wherein they heard either the speech of a single talker or the speech of six talkers. Training gains were assessed with single-talker and multi-talker versions of the Four-choice discrimination test. Participants in both groups were tested on both versions. Sixty-nine adult hearing-aid users were randomly assigned to either single-talker or multi-talker auditory training. Both groups showed significant gains on both test versions. Participants who trained with multiple talkers showed greater improvement on the multi-talker version whereas participants who trained with a single talker showed greater improvement on the single-talker version. Transfer-appropriate gains occurred following auditory training, suggesting that auditory training can be designed to target specific patient needs.

  11. Auditory processing disorders and problems with hearing-aid fitting in old age.

    PubMed

    Antonelli, A R

    1978-01-01

    The hearing handicap experienced by elderly subjects depends only partially on end-organ impairment. Not only does neural unit loss along the central auditory pathways contribute to decreased speech discrimination, but learning processes are also slowed. Diotic listening in elderly people seems to hasten the learning of discrimination in critical conditions, as in the case of sensitized speech. This fact, together with the binaural gain obtained through binaural release from masking, underscores the theoretical superiority of binaural over monaural hearing-aid fitting.

  12. Is the Role of External Feedback in Auditory Skill Learning Age Dependent?

    ERIC Educational Resources Information Center

    Zaltz, Yael; Roth, Daphne Ari-Even; Kishon-Rabin, Liat

    2017-01-01

    Purpose: The purpose of this study is to investigate the role of external feedback in auditory perceptual learning of school-age children as compared with that of adults. Method: Forty-eight children (7-9 years of age) and 64 adults (20-35 years of age) conducted a training session using an auditory frequency discrimination (difference limen for…

  13. Auditory-motor integration of subliminal phase shifts in tapping: better than auditory discrimination would predict.

    PubMed

    Kagerer, Florian A; Viswanathan, Priya; Contreras-Vidal, Jose L; Whitall, Jill

    2014-04-01

    Unilateral tapping studies have shown that adults adjust to both perceptible and subliminal changes in phase or frequency. This study focuses on the phase responses to abrupt/perceptible and gradual/subliminal changes in auditory-motor relations during alternating bilateral tapping. We investigated these responses in participants with and without good perceptual acuity as determined by an auditory threshold test. Non-musician adults (nine per group) alternately tapped their index fingers in synchrony with auditory cues set at a frequency of 1.4 Hz. Both groups modulated their responses (with no after-effects) to perceptible and to subliminal changes as low as a 5° change in phase. The high-threshold participants were more variable than the adults with low threshold in their responses in the gradual condition set. Both groups demonstrated a synchronization asymmetry between dominant and non-dominant hands associated with the abrupt condition and the later blocks of the gradual condition. Our findings extend previous work in unilateral tapping and suggest (1) no relationship between a discrimination threshold and perceptible auditory-motor integration and (2) a noisier sub-cortical circuitry in those with higher thresholds.

  14. Auditory-motor integration of subliminal phase shifts in tapping: Better than auditory discrimination would predict

    PubMed Central

    Kagerer, Florian A.; Viswanathan, Priya; Contreras-Vidal, Jose L.; Whitall, Jill

    2014-01-01

    Unilateral tapping studies have shown that adults adjust to both perceptible and subliminal changes in phase or frequency. This study focuses on the phase responses to abrupt/perceptible and gradual/subliminal changes in auditory-motor relations during alternating bilateral tapping. We investigated these responses in participants with and without good perceptual acuity as determined by an auditory threshold test. Non-musician adults (9 per group) alternately tapped their index fingers in synchrony with auditory cues set at a frequency of 1.4 Hz. Both groups modulated their responses (with no after-effects) to perceptible and to subliminal changes as low as a 5° change in phase. The high threshold participants were more variable than the adults with low threshold in their responses in the gradual condition set (p=0.05). Both groups demonstrated a synchronization asymmetry between dominant and non-dominant hands associated with the abrupt condition and the later blocks of the gradual condition. Our findings extend previous work in unilateral tapping and suggest (1) no relationship between a discrimination threshold and perceptible auditory-motor integration and (2) a noisier subcortical circuitry in those with higher thresholds. PMID:24449013

  15. Spatiotemporal differentiation in auditory and motor regions during auditory phoneme discrimination.

    PubMed

    Aerts, Annelies; Strobbe, Gregor; van Mierlo, Pieter; Hartsuiker, Robert J; Corthals, Paul; Santens, Patrick; De Letter, Miet

    2017-06-01

    Auditory phoneme discrimination (APD) is supported by both auditory and motor regions through a sensorimotor interface embedded in a fronto-temporo-parietal cortical network. However, the specific spatiotemporal organization of this network during APD with respect to different types of phonemic contrasts is still unclear. Here, we use source reconstruction, applied to event-related potentials in a group of 47 participants, to uncover a potential spatiotemporal differentiation in these brain regions during a passive and an active APD task with respect to place of articulation (PoA), voicing and manner of articulation (MoA). Results demonstrate that in an early stage (50-110 ms), auditory, motor and sensorimotor regions elicit more activation during the passive and active APD tasks with MoA and the active APD task with voicing than with PoA. In a later stage (130-175 ms), the same auditory and motor regions elicit more activation during the APD task with PoA than with MoA and voicing, yet only in the active condition, implying important timing differences. Degree of attention influences a frontal network during the APD task with PoA, whereas auditory regions are more affected during the APD tasks with MoA and voicing. Based on these findings, it can be tentatively suggested that APD is supported by the integration of early activation of auditory-acoustic properties in superior temporal regions, more pronounced for MoA and voicing, and later auditory-to-motor integration in sensorimotor areas, more pronounced for PoA.

  16. Auditory processing deficits are sometimes necessary and sometimes sufficient for language difficulties in children: Evidence from mild to moderate sensorineural hearing loss.

    PubMed

    Halliday, Lorna F; Tuomainen, Outi; Rosen, Stuart

    2017-09-01

    There is a general consensus that many children and adults with dyslexia and/or specific language impairment display deficits in auditory processing. However, how these deficits are related to developmental disorders of language is uncertain, and at least four categories of model have been proposed: single distal cause models, risk factor models, association models, and consequence models. This study used children with mild to moderate sensorineural hearing loss (MMHL) to investigate the link between auditory processing deficits and language disorders. We examined the auditory processing and language skills of 46 children with MMHL, aged 8-16 years, and 44 age-matched typically developing controls. Auditory processing abilities were assessed using child-friendly psychophysical techniques in order to obtain discrimination thresholds. Stimuli incorporated three different timescales (µs, ms, s) and three different levels of complexity (simple nonspeech tones, complex nonspeech sounds, speech sounds), and tasks required discrimination of frequency or amplitude cues. Language abilities were assessed using a battery of standardised assessments of phonological processing, reading, vocabulary, and grammar. We found evidence that three different auditory processing abilities showed different relationships with language: deficits in a general auditory processing component were necessary but not sufficient for language difficulties, consistent with a risk factor model; deficits in slow-rate amplitude modulation (envelope) detection were sufficient but not necessary for language difficulties, consistent with either a single distal cause or a consequence model; and deficits in the discrimination of a single speech contrast (/bɑ/ vs /dɑ/) were neither necessary nor sufficient for language difficulties, consistent with an association model. Our findings suggest that different auditory processing deficits may constitute distinct and independent routes to the development of language difficulties in children. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Speech perception in individuals with auditory dys-synchrony: effect of lengthening of voice onset time and burst duration of speech segments.

    PubMed

    Kumar, U A; Jayaram, M

    2013-07-01

    The purpose of this study was to evaluate the effect of lengthening of voice onset time and burst duration of selected speech stimuli on perception by individuals with auditory dys-synchrony. This is the second of a series of articles reporting the effect of signal enhancing strategies on speech perception by such individuals. Two experiments were conducted: (1) assessment of the 'just-noticeable difference' for voice onset time and burst duration of speech sounds; and (2) assessment of speech identification scores when speech sounds were modified by lengthening the voice onset time and the burst duration in units of one just-noticeable difference, both in isolation and in combination with each other plus transition duration modification. Lengthening of voice onset time as well as burst duration improved perception of voicing. However, the effect of voice onset time modification was greater than that of burst duration modification. Although combined lengthening of voice onset time, burst duration and transition duration resulted in improved speech perception, the improvement was less than that due to lengthening of transition duration alone. These results suggest that innovative speech processing strategies that enhance temporal cues may benefit individuals with auditory dys-synchrony.

  18. Assessment of Spectral and Temporal Resolution in Cochlear Implant Users Using Psychoacoustic Discrimination and Speech Cue Categorization.

    PubMed

    Winn, Matthew B; Won, Jong Ho; Moon, Il Joon

    This study was conducted to measure auditory perception by cochlear implant users in the spectral and temporal domains, using tests of either categorization (using speech-based cues) or discrimination (using conventional psychoacoustic tests). The authors hypothesized that traditional nonlinguistic tests assessing spectral and temporal auditory resolution would correspond to speech-based measures assessing specific aspects of phonetic categorization assumed to depend on spectral and temporal auditory resolution. The authors further hypothesized that speech-based categorization performance would ultimately be a superior predictor of speech recognition performance, because of the fundamental nature of speech recognition as categorization. Nineteen cochlear implant listeners and 10 listeners with normal hearing participated in a suite of tasks that included spectral ripple discrimination, temporal modulation detection, and syllable categorization, which was split into a spectral cue-based task (targeting the /ba/-/da/ contrast) and a timing cue-based task (targeting the /b/-/p/ and /d/-/t/ contrasts). Speech sounds were manipulated to contain specific spectral or temporal modulations (formant transitions or voice onset time, respectively) that could be categorized. Categorization responses were quantified using logistic regression to assess perceptual sensitivity to acoustic phonetic cues. Word recognition testing was also conducted for cochlear implant listeners. Cochlear implant users were generally less successful at utilizing both spectral and temporal cues for categorization compared with listeners with normal hearing. For the cochlear implant listener group, spectral ripple discrimination was significantly correlated with the categorization of formant transitions; both were correlated with better word recognition. Temporal modulation detection using 100- and 10-Hz-modulated noise was not correlated either with the cochlear implant subjects' categorization of voice onset time or with word recognition. Word recognition was correlated more closely with categorization of the controlled speech cues than with performance on the psychophysical discrimination tasks. When evaluating people with cochlear implants, controlled speech-based stimuli are feasible to use in tests of auditory cue categorization, to complement traditional measures of auditory discrimination. Stimuli based on specific speech cues correspond to counterpart nonlinguistic measures of discrimination, but potentially show better correspondence with speech perception more generally. The ubiquity of the spectral (formant transition) and temporal (voice onset time) stimulus dimensions across languages highlights the potential to use this testing approach even in cases where English is not the native language.
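    The logistic-regression approach to quantifying cue sensitivity mentioned above can be sketched as follows: categorization responses along a cue continuum are fit with a logistic function whose slope indexes perceptual sensitivity and whose midpoint gives the category boundary. The continuum values, trial counts, and listener parameters below are synthetic placeholders chosen for illustration; the study's actual stimuli and fitting details may differ.

        # Sketch of using logistic regression to quantify sensitivity to an acoustic
        # phonetic cue (e.g., voice onset time) from a listener's categorization
        # responses. The cue values and responses below are synthetic.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(2)
        vot_ms = np.repeat(np.linspace(0, 60, 7), 10)    # cue continuum, 10 trials per step
        true_slope, true_boundary = 0.15, 30.0           # hypothetical listener parameters
        p_voiceless = 1 / (1 + np.exp(-true_slope * (vot_ms - true_boundary)))
        responses = rng.binomial(1, p_voiceless)         # 1 = "p"/"t" response, 0 = "b"/"d"

        model = LogisticRegression().fit(vot_ms.reshape(-1, 1), responses)

        # The fitted coefficient indexes perceptual sensitivity (steepness of the
        # psychometric function); the category boundary is where P(voiceless) = 0.5.
        slope = model.coef_[0, 0]
        boundary = -model.intercept_[0] / slope
        print(f"slope = {slope:.3f} per ms, boundary = {boundary:.1f} ms VOT")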

  19. Different Timescales for the Neural Coding of Consonant and Vowel Sounds

    PubMed Central

    Perez, Claudia A.; Engineer, Crystal T.; Jakkamsetti, Vikram; Carraway, Ryan S.; Perry, Matthew S.

    2013-01-01

    Psychophysical, clinical, and imaging evidence suggests that consonant and vowel sounds have distinct neural representations. This study tests the hypothesis that consonant and vowel sounds are represented on different timescales within the same population of neurons by comparing behavioral discrimination with neural discrimination based on activity recorded in rat inferior colliculus and primary auditory cortex. Performance on 9 vowel discrimination tasks was highly correlated with neural discrimination based on spike count and was not correlated when spike timing was preserved. In contrast, performance on 11 consonant discrimination tasks was highly correlated with neural discrimination when spike timing was preserved and not when spike timing was eliminated. These results suggest that in the early stages of auditory processing, spike count encodes vowel sounds and spike timing encodes consonant sounds. These distinct coding strategies likely contribute to the robust nature of speech sound representations and may help explain some aspects of developmental and acquired speech processing disorders. PMID:22426334
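    The contrast between a spike-count code and a spike-timing code described above can be illustrated with a toy decoder: two synthetic stimuli evoke the same average spike count but different temporal patterns, so a count-based classifier performs near chance while a timing-preserving classifier succeeds. The simulation parameters below are arbitrary and are not taken from the study's analysis.

        # Toy illustration of count-based versus timing-based neural discrimination.
        import numpy as np

        rng = np.random.default_rng(3)
        n_trials, n_bins = 40, 100  # 100 one-ms bins per trial (assumed parameters)

        def simulate(stim, n):
            """Poisson responses: both stimuli have the same mean rate, but stimulus
            B's spikes are concentrated in the second half of the response window."""
            rate = np.full(n_bins, 0.05)
            if stim == "B":
                rate = np.concatenate([np.full(n_bins // 2, 0.01), np.full(n_bins // 2, 0.09)])
            return rng.poisson(rate, size=(n, n_bins))

        trains = {s: simulate(s, n_trials) for s in ("A", "B")}

        def nn_accuracy(feature):
            """Leave-one-out nearest-neighbor classification on a feature vector."""
            X = np.vstack([feature(trains["A"]), feature(trains["B"])])
            y = np.array([0] * n_trials + [1] * n_trials)
            correct = 0
            for i in range(len(X)):
                d = np.linalg.norm(X - X[i], axis=1)
                d[i] = np.inf
                correct += y[np.argmin(d)] == y[i]
            return correct / len(X)

        count_acc = nn_accuracy(lambda r: r.sum(axis=1, keepdims=True))  # spike count only
        timing_acc = nn_accuracy(lambda r: r.astype(float))              # timing preserved
        print(f"count-based accuracy:  {count_acc:.2f}")
        print(f"timing-based accuracy: {timing_acc:.2f}")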

  20. Decreased echolocation performance following high-frequency hearing loss in the false killer whale (Pseudorca crassidens).

    PubMed

    Kloepper, L N; Nachtigall, P E; Gisiner, R; Breese, M

    2010-11-01

    Toothed whales and dolphins possess a hypertrophied auditory system that allows for the production and hearing of ultrasonic signals. Although the fossil record provides information on the evolution of the auditory structures found in extant odontocetes, it cannot provide information on the evolutionary pressures leading to the hypertrophied auditory system. Investigating the effect of hearing loss may provide evidence for the reason for the development of high-frequency hearing in echolocating animals by demonstrating how high-frequency hearing assists in the functioning echolocation system. The discrimination abilities of a false killer whale (Pseudorca crassidens) were measured prior to and after documented high-frequency hearing loss. In 1992, the subject had good hearing and could hear at frequencies up to 100 kHz. In 2008, the subject had lost hearing at frequencies above 40 kHz. First in 1992, and then again in 2008, the subject performed an identical echolocation task, discriminating between machined hollow aluminum cylinder targets of differing wall thickness. Performances were recorded for individual target differences and compared between both experimental years. Performances on individual targets dropped between 1992 and 2008, with a maximum performance reduction of 36.1%. These data indicate that, with a loss in high-frequency hearing, there was a concomitant reduction in echolocation discrimination ability, and suggest that the development of a hypertrophied auditory system capable of hearing at ultrasonic frequencies evolved in response to pressures for fine-scale echolocation discrimination.

  1. Input from the medial geniculate nucleus modulates amygdala encoding of fear memory discrimination.

    PubMed

    Ferrara, Nicole C; Cullen, Patrick K; Pullins, Shane P; Rotondo, Elena K; Helmstetter, Fred J

    2017-09-01

    Generalization of fear can involve abnormal responding to cues that signal safety and is common in people diagnosed with post-traumatic stress disorder. Differential auditory fear conditioning can be used as a tool to measure changes in fear discrimination and generalization. Most prior work in this area has focused on elevated amygdala activity as a critical component underlying generalization. The amygdala receives input from auditory cortex as well as the medial geniculate nucleus (MgN) of the thalamus, and these synapses undergo plastic changes in response to fear conditioning and are major contributors to the formation of memory related to both safe and threatening cues. The requirement for MgN protein synthesis during auditory discrimination and generalization, as well as the role of MgN plasticity in amygdala encoding of discrimination or generalization, has not been directly tested. GluR1- and GluR2-containing AMPA receptors are found at synapses throughout the amygdala and their expression is persistently up-regulated after learning. Some of these receptors are postsynaptic to terminals from MgN neurons. We found that protein synthesis-dependent plasticity in MgN is necessary for elevated freezing to both aversive and safe auditory cues, and that this is accompanied by changes in the expression of AMPA receptor and synaptic scaffolding proteins (e.g., SHANK) at amygdala synapses. This work contributes to understanding the neural mechanisms underlying increased fear to safety signals after stress. © 2017 Ferrara et al.; Published by Cold Spring Harbor Laboratory Press.

  2. Spectral context affects temporal processing in awake auditory cortex

    PubMed Central

    Beitel, Ralph E.; Vollmer, Maike; Heiser, Marc A; Schreiner, Christoph E.

    2013-01-01

    Amplitude modulation encoding is critical for human speech perception and complex sound processing in general. The modulation transfer function (MTF) is a staple of auditory psychophysics, and has been shown to predict speech intelligibility performance in a range of adverse listening conditions and hearing impairments, including cochlear implant-supported hearing. Although both tonal and broadband carriers have been employed in psychophysical studies of modulation detection and discrimination, relatively little is known about differences in the cortical representation of such signals. We obtained MTFs in response to sinusoidal amplitude modulation (SAM) for both narrowband tonal carriers and 2-octave bandwidth noise carriers in the auditory core of awake squirrel monkeys. MTFs spanning modulation frequencies from 4 to 512 Hz were obtained using 16-channel linear recording arrays sampling across all cortical laminae. The carrier frequency for tonal SAM and the center frequency for noise SAM were set at the estimated best frequency for each penetration. Changes in carrier type affected both rate and temporal MTFs in many neurons. Using spike discrimination techniques, we found that discrimination of modulation frequency was significantly better for tonal SAM than for noise SAM, though the differences were modest at the population level. Moreover, spike trains elicited by tonal and noise SAM could be readily discriminated in most cases. Collectively, our results reveal remarkable sensitivity to the spectral content of modulated signals, and indicate substantial interdependence between temporal and spectral processing in neurons of the core auditory cortex. PMID:23719811
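    A SAM stimulus of the kind described is simply a carrier multiplied by (1 + m sin(2 pi fm t)). The sketch below constructs a tonal carrier and a 2-octave noise-band carrier and applies sinusoidal amplitude modulation; the sampling rate, duration, modulation depth, modulation frequency, and the 4-kHz center frequency are assumed values, not parameters reported in the record.

        # Sketch of sinusoidally amplitude-modulated (SAM) stimuli with a tonal
        # carrier and a band-limited noise carrier. All numeric values are assumptions.
        import numpy as np

        FS = 48000           # assumed sampling rate (Hz)
        DUR_S = 0.5          # assumed stimulus duration (s)
        t = np.arange(int(FS * DUR_S)) / FS

        def sam(carrier, fm_hz, depth=1.0):
            """Apply sinusoidal amplitude modulation to a carrier signal."""
            return (1 + depth * np.sin(2 * np.pi * fm_hz * t)) * carrier

        # Tonal carrier at an assumed best frequency of 4 kHz.
        tone_carrier = np.sin(2 * np.pi * 4000 * t)

        # Two-octave noise band geometrically centered on 4 kHz (2-8 kHz).
        spectrum = np.fft.rfft(np.random.default_rng(4).standard_normal(t.size))
        freqs = np.fft.rfftfreq(t.size, 1 / FS)
        spectrum[(freqs < 2000) | (freqs > 8000)] = 0
        noise_carrier = np.fft.irfft(spectrum, n=t.size)
        noise_carrier /= np.max(np.abs(noise_carrier))

        tonal_sam = sam(tone_carrier, fm_hz=64)   # e.g., 64 Hz modulation
        noise_sam = sam(noise_carrier, fm_hz=64)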

  3. Intrinsic, stimulus-driven and task-dependent connectivity in human auditory cortex.

    PubMed

    Häkkinen, Suvi; Rinne, Teemu

    2018-06-01

    A hierarchical and modular organization is a central hypothesis in the current primate model of auditory cortex (AC) but lacks validation in humans. Here we investigated whether fMRI connectivity at rest and during active tasks is informative of the functional organization of human AC. Identical pitch-varying sounds were presented during a visual discrimination task (i.e., with no directed auditory attention), a pitch discrimination task, and two versions of a pitch n-back memory task. Analysis based on fMRI connectivity at rest revealed a network structure consisting of six modules in the supratemporal plane (STP), temporal lobe, and inferior parietal lobule (IPL) in both hemispheres. In line with the primate model, in which higher-order regions have more long-range connections than primary regions, areas encircling the STP module showed the highest inter-modular connectivity. Multivariate pattern analysis indicated significant connectivity differences between the visual task and rest (driven by the presentation of sounds during the visual task), between auditory and visual tasks, and between pitch discrimination and pitch n-back tasks. Further analyses showed that these differences were particularly due to connectivity modulations between the STP and IPL modules. While the results are generally in line with the primate model, they highlight the important role of human IPL during the processing of both task-irrelevant and task-relevant auditory information. Importantly, the present study shows that fMRI connectivity at rest, during the presentation of sounds, and during active listening provides novel information about the functional organization of human AC.

  4. Meaning in the avian auditory cortex: Neural representation of communication calls

    PubMed Central

    Elie, Julie E; Theunissen, Frédéric E

    2014-01-01

    Understanding how the brain extracts the behavioral meaning carried by specific vocalization types that can be emitted by various vocalizers and in different conditions is a central question in auditory research. This semantic categorization is a fundamental process required for acoustic communication and presupposes discriminative and invariance properties of the auditory system for conspecific vocalizations. Songbirds have been used extensively to study vocal learning, but the communicative function of all their vocalizations and their neural representation has yet to be examined. In our research, we first generated a library containing almost the entire zebra finch vocal repertoire and organized communication calls into 9 different categories based on their behavioral meaning. We then investigated the neural representations of these semantic categories in the primary and secondary auditory areas of 6 anesthetized zebra finches. To analyze how single units encode these call categories, we described neural responses in terms of their discrimination, selectivity and invariance properties. Quantitative measures for these neural properties were obtained using an optimal decoder based on both spike counts and spike patterns. Information theoretic metrics show that almost half of the single units encode semantic information. Neurons achieve higher discrimination of these semantic categories by being more selective and more invariant. These results demonstrate that computations necessary for semantic categorization of meaningful vocalizations are already present in the auditory cortex and emphasize the value of a neuro-ethological approach to understand vocal communication. PMID:25728175

  5. Basic Auditory Processing and Developmental Dyslexia in Chinese

    ERIC Educational Resources Information Center

    Wang, Hsiao-Lan Sharon; Huss, Martina; Hamalainen, Jarmo A.; Goswami, Usha

    2012-01-01

    The present study explores the relationship between basic auditory processing of sound rise time, frequency, duration and intensity, phonological skills (onset-rime and tone awareness, sound blending, RAN, and phonological memory) and reading disability in Chinese. A series of psychometric, literacy, phonological, auditory, and character…

  6. Equivalent mismatch negativity deficits across deviant types in early illness schizophrenia-spectrum patients.

    PubMed

    Hay, Rachel A; Roach, Brian J; Srihari, Vinod H; Woods, Scott W; Ford, Judith M; Mathalon, Daniel H

    2015-02-01

    Neurophysiological abnormalities in auditory deviance processing, as reflected by the mismatch negativity (MMN), have been observed across the course of schizophrenia. Studies in early schizophrenia patients have typically shown varying degrees of MMN amplitude reduction for different deviant types, suggesting that different auditory deviants are uniquely processed and may be differentially affected by duration of illness. To explore this further, we examined the MMN response to 4 auditory deviants (duration, frequency, duration+frequency "double deviant", and intensity) in 24 schizophrenia-spectrum patients early in the illness (ESZ) and 21 healthy controls. ESZ showed significantly reduced MMN relative to healthy controls for all deviant types (p<0.05), with no significant interaction with deviant type. No correlations with clinical symptoms were present (all ps>0.05). These findings support the conclusion that neurophysiological mechanisms underlying processing of auditory deviants are compromised early in illness, and these deficiencies are not specific to the type of deviant presented. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. 'Sorry, I meant the patient's left side': impact of distraction on left-right discrimination.

    PubMed

    McKinley, John; Dempster, Martin; Gormley, Gerard J

    2015-04-01

    Medical students can have difficulty in distinguishing left from right. Many infamous medical errors have occurred when a procedure has been performed on the wrong side, such as in the removal of the wrong kidney. Clinicians encounter many distractions during their work. There is limited information on how these affect performance. Using a neuropsychological paradigm, we aim to elucidate the impacts of different types of distraction on left-right (LR) discrimination ability. Medical students were recruited to a study with four arms: (i) control arm (no distraction); (ii) auditory distraction arm (continuous ambient ward noise); (iii) cognitive distraction arm (interruptions with clinical cognitive tasks), and (iv) auditory and cognitive distraction arm. Participants' LR discrimination ability was measured using the validated Bergen Left-Right Discrimination Test (BLRDT). Multivariate analysis of variance was used to analyse the impacts of the different forms of distraction on participants' performance on the BLRDT. Additional analyses looked at effects of demographics on performance and correlated participants' self-perceived LR discrimination ability and their actual performance. A total of 234 students were recruited. Cognitive distraction had a greater negative impact on BLRDT performance than auditory distraction. Combined auditory and cognitive distraction had a negative impact on performance, but only in the most difficult LR task was this negative impact found to be significantly greater than that of cognitive distraction alone. There was a significant medium-sized correlation between perceived LR discrimination ability and actual overall BLRDT performance. Distraction has a significant impact on performance and multifaceted approaches are required to reduce LR errors. Educationally, greater emphasis on the linking of theory and clinical application is required to support patient safety and human factor training in medical school curricula. Distraction has the potential to impair an individual's ability to make accurate LR decisions and students should be trained from undergraduate level to be mindful of this. © 2015 John Wiley & Sons Ltd.

  8. Factors of Predicted Learning Disorders and their Interaction with Attentional and Perceptual Training Procedures.

    ERIC Educational Resources Information Center

    Friar, John T.

    Two factors of predicted learning disorders were investigated: (1) inability to maintain appropriate classroom behavior (BEH), (2) perceptual discrimination deficit (PERC). Three groups of first-graders (BEH, PERC, normal control) were administered measures of impulse control, distractability, auditory discrimination, and visual discrimination.…

  9. Gaze Duration Biases for Colours in Combination with Dissonant and Consonant Sounds: A Comparative Eye-Tracking Study with Orangutans.

    PubMed

    Mühlenbeck, Cordelia; Liebal, Katja; Pritsch, Carla; Jacobsen, Thomas

    2015-01-01

    Research on colour preferences in humans and non-human primates suggests similar patterns of biases for and avoidance of specific colours, indicating that these colours are connected to a psychological reaction. Similarly, in the acoustic domain, approach reactions to consonant sounds (considered as positive) and avoidance reactions to dissonant sounds (considered as negative) have been found in human adults and children, and it has been demonstrated that non-human primates are able to discriminate between consonant and dissonant sounds. Yet it remains unclear whether the visual and acoustic approach-avoidance patterns remain consistent when both types of stimuli are combined, how they relate to and influence each other, and whether these are similar for humans and other primates. Therefore, to investigate whether gaze duration biases for colours are similar across primates and whether reactions to consonant and dissonant sounds cumulate with reactions to specific colours, we conducted an eye-tracking study in which we compared humans with one species of great apes, the orangutans. We presented four different colours either in isolation or in combination with consonant and dissonant sounds. We hypothesised that the viewing time for specific colours should be influenced by dissonant sounds and that previously existing avoidance behaviours with regard to colours should be intensified, reflecting their association with negative acoustic information. The results showed that the humans had constant gaze durations which were independent of the auditory stimulus, with a clear avoidance of yellow. In contrast, the orangutans did not show any clear gaze duration bias or avoidance of colours, and they were also not influenced by the auditory stimuli. In conclusion, our findings only partially support the previously identified pattern of biases for and avoidance of specific colours in humans and do not confirm such a pattern for orangutans.

  10. Gaze Duration Biases for Colours in Combination with Dissonant and Consonant Sounds: A Comparative Eye-Tracking Study with Orangutans

    PubMed Central

    Mühlenbeck, Cordelia; Liebal, Katja; Pritsch, Carla; Jacobsen, Thomas

    2015-01-01

    Research on colour preferences in humans and non-human primates suggests similar patterns of biases for and avoidance of specific colours, indicating that these colours are connected to a psychological reaction. Similarly, in the acoustic domain, approach reactions to consonant sounds (considered as positive) and avoidance reactions to dissonant sounds (considered as negative) have been found in human adults and children, and it has been demonstrated that non-human primates are able to discriminate between consonant and dissonant sounds. Yet it remains unclear whether the visual and acoustic approach–avoidance patterns remain consistent when both types of stimuli are combined, how they relate to and influence each other, and whether these are similar for humans and other primates. Therefore, to investigate whether gaze duration biases for colours are similar across primates and whether reactions to consonant and dissonant sounds cumulate with reactions to specific colours, we conducted an eye-tracking study in which we compared humans with one species of great apes, the orangutans. We presented four different colours either in isolation or in combination with consonant and dissonant sounds. We hypothesised that the viewing time for specific colours should be influenced by dissonant sounds and that previously existing avoidance behaviours with regard to colours should be intensified, reflecting their association with negative acoustic information. The results showed that the humans had constant gaze durations which were independent of the auditory stimulus, with a clear avoidance of yellow. In contrast, the orangutans did not show any clear gaze duration bias or avoidance of colours, and they were also not influenced by the auditory stimuli. In conclusion, our findings only partially support the previously identified pattern of biases for and avoidance of specific colours in humans and do not confirm such a pattern for orangutans. PMID:26466351

  11. Perception-Production Link in L2 Japanese Vowel Duration: Training with Technology

    ERIC Educational Resources Information Center

    Okuno, Tomoko; Hardison, Debra M.

    2016-01-01

    This study examined factors affecting perception training of vowel duration in L2 Japanese with transfer to production. In a pre-test, training, post-test design, 48 L1 English speakers were assigned to one of three groups: auditory-visual (AV) training using waveform displays, auditory-only (A-only), or no training. Within-group variables were…

  12. The Effect of Short-Term Auditory Deprivation on the Control of Intraoral Pressure in Pediatric Cochlear Implant Users.

    ERIC Educational Resources Information Center

    Jones, David L.; Gao, Sujuan; Svirsky, Mario A.

    2003-01-01

    A study investigated whether two speech measures (peak intraoral air pressure (IOP) and IOP duration) obtained during production of intervocalic stops would be altered by the presence or absence of a cochlear implant in five children (ages 7-10). The auditory condition affected peak IOP more than IOP duration.

  13. Atypical vertical sound localization and sound-onset sensitivity in people with autism spectrum disorders.

    PubMed

    Visser, Eelke; Zwiers, Marcel P; Kan, Cornelis C; Hoekstra, Liesbeth; van Opstal, A John; Buitelaar, Jan K

    2013-11-01

    Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs.

  14. Effect of transcranial direct current stimulation (tDCS) on MMN-indexed auditory discrimination: a pilot study.

    PubMed

    Impey, Danielle; Knott, Verner

    2015-08-01

    Membrane potentials and brain plasticity are basic modes of cerebral information processing. Both can be externally (non-invasively) modulated by weak transcranial direct current stimulation (tDCS). Polarity-dependent tDCS-induced reversible circumscribed increases and decreases in cortical excitability and functional changes have been observed following stimulation of motor and visual cortices but relatively little research has been conducted with respect to the auditory cortex. The aim of this pilot study was to examine the effects of tDCS on auditory sensory discrimination in healthy participants (N = 12) assessed with the mismatch negativity (MMN) brain event-related potential (ERP). In a randomized, double-blind, sham-controlled design, participants received anodal tDCS over the primary auditory cortex (2 mA for 20 min) in one session and 'sham' stimulation (i.e., no stimulation except initial ramp-up for 30 s) in the other session. MMN elicited by changes in auditory pitch was found to be enhanced after receiving anodal tDCS compared to 'sham' stimulation, with the effects being evidenced in individuals with relatively reduced (vs. increased) baseline amplitudes and with relatively small (vs. large) pitch deviants. Additional studies are needed to further explore relationships between tDCS-related parameters, auditory stimulus features and individual differences prior to assessing the utility of this tool for treating auditory processing deficits in psychiatric and/or neurological disorders.

  15. The effect of short-term auditory deprivation on the control of intraoral pressure in pediatric cochlear implant users.

    PubMed

    Jones, David L; Gao, Sujuan; Svirsky, Mario A

    2003-06-01

    The purpose of this study was to determine whether 2 speech measures (peak intraoral air pressure [IOP] and IOP duration) obtained during the production of intervocalic stops would be altered as a function of the presence or absence of auditory stimulation provided by a cochlear implant (CI). Five pediatric CI users were required to produce repetitions of the words puppy and baby with their CIs turned on. The CIs were then turned off for 1 hr, at which time the speech sample was repeated with the CI still turned off. Seven children with normal hearing formed a comparison group. They were also tested twice, with a 1-hr intermediate interval. IOP and IOP duration were measured for the medial consonant in both auditory conditions. The results show that auditory condition affected peak IOP more so than IOP duration. Peak IOP was greater for /p/ than /b/ with the CI off, but some participants reduced or reversed this contrast when the CI was on. The findings suggest that different speakers with CIs may use different speech production strategies as they learn to use the auditory signal for speech.

  16. Evaluating the Precision of Auditory Sensory Memory as an Index of Intrusion in Tinnitus.

    PubMed

    Barrett, Doug J K; Pilling, Michael

    The purpose of this study was to investigate the potential of measures of auditory short-term memory (ASTM) to provide a clinical measure of intrusion in tinnitus. Response functions for six normal listeners on a delayed pitch discrimination task were contrasted in three conditions designed to manipulate attention in the presence and absence of simulated tinnitus: (1) no-tinnitus, (2) ignore-tinnitus, and (3) attend-tinnitus. Delayed pitch discrimination functions were more variable in the presence of simulated tinnitus when listeners were asked to divide attention between the primary task and the amplitude of the tinnitus tone. Changes in the variability of auditory short-term memory may provide a novel means of quantifying the level of intrusion associated with the tinnitus percept during listening.

  17. Engineering Data Compendium. Human Perception and Performance. Volume 2

    DTIC Science & Technology

    1988-01-01

    Index excerpt (cross-referenced entries): 5.1004 Auditory Detection in the Presence of Visual Stimulation; 5.1005 Tactual Detection and Discrimination in the Presence of Accessory Stimulation; 5.1006 Tactile Versus Auditory Localization of Sound; 5.1007 Spatial Localization in the Presence of Inter-… …York: Wiley.

  18. Reading strategies of Chinese students with severe to profound hearing loss.

    PubMed

    Cheung, Ka Yan; Leung, Man Tak; McPherson, Bradley

    2013-01-01

    The present study investigated the significance of auditory discrimination and the use of phonological and orthographic codes during the course of reading development in Chinese students who are deaf or hard of hearing (D/HH). In this study, the reading behaviors of D/HH students in two tasks (a task on auditory perception of onset rime and a synonym decision task) were compared with those of their chronological-age-matched and reading-level (RL)-matched controls. Cross-group comparison of performance in the auditory perception task suggests that poor auditory discrimination ability may be a possible cause of reading problems for D/HH students. In addition, results of the synonym decision task reveal that D/HH students with poor reading ability demonstrate a significantly greater preference for orthographic rather than phonological information when compared with D/HH students with good reading ability and their RL-matched controls. Implications for future studies and educational planning are discussed.

  19. The organization and reorganization of audiovisual speech perception in the first year of life.

    PubMed

    Danielson, D Kyle; Bruderer, Alison G; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F

    2017-04-01

    The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six- and nine-, but not 11-months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.

  20. The organization and reorganization of audiovisual speech perception in the first year of life

    PubMed Central

    Danielson, D. Kyle; Bruderer, Alison G.; Kandhadai, Padmapriya; Vatikiotis-Bateson, Eric; Werker, Janet F.

    2017-01-01

    The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine 1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and 2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six- and nine-, but not 11-months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone. PMID:28970650

  1. Assessment of spectral and temporal resolution in cochlear implant users using psychoacoustic discrimination and speech cue categorization

    PubMed Central

    Winn, Matthew B.; Won, Jong Ho; Moon, Il Joon

    2016-01-01

    Objectives This study was conducted to measure auditory perception by cochlear implant users in the spectral and temporal domains, using tests of either categorization (using speech-based cues) or discrimination (using conventional psychoacoustic tests). We hypothesized that traditional nonlinguistic tests assessing spectral and temporal auditory resolution would correspond to speech-based measures assessing specific aspects of phonetic categorization assumed to depend on spectral and temporal auditory resolution. We further hypothesized that speech-based categorization performance would ultimately be a superior predictor of speech recognition performance, because of the fundamental nature of speech recognition as categorization. Design Nineteen CI listeners and 10 listeners with normal hearing (NH) participated in a suite of tasks that included spectral ripple discrimination (SRD), temporal modulation detection (TMD), and syllable categorization, which was split into a spectral-cue-based task (targeting the /ba/-/da/ contrast) and a timing-cue-based task (targeting the /b/-/p/ and /d/-/t/ contrasts). Speech sounds were manipulated in order to contain specific spectral or temporal modulations (formant transitions or voice onset time, respectively) that could be categorized. Categorization responses were quantified using logistic regression in order to assess perceptual sensitivity to acoustic phonetic cues. Word recognition testing was also conducted for CI listeners. Results CI users were generally less successful at utilizing both spectral and temporal cues for categorization compared to listeners with normal hearing. For the CI listener group, SRD was significantly correlated with the categorization of formant transitions; both were correlated with better word recognition. TMD using 100 Hz and 10 Hz modulated noise was not correlated with the CI subjects’ categorization of VOT, nor with word recognition. Word recognition was correlated more closely with categorization of the controlled speech cues than with performance on the psychophysical discrimination tasks. Conclusions When evaluating people with cochlear implants, controlled speech-based stimuli are feasible to use in tests of auditory cue categorization, to complement traditional measures of auditory discrimination. Stimuli based on specific speech cues correspond to counterpart non-linguistic measures of discrimination, but potentially show better correspondence with speech perception more generally. The ubiquity of the spectral (formant transition) and temporal (VOT) stimulus dimensions across languages highlights the potential to use this testing approach even in cases where English is not the native language. PMID:27438871

  2. Evaluation of selected auditory tests in school-age children suspected of auditory processing disorders.

    PubMed

    Vanniasegaram, Iyngaram; Cohen, Mazal; Rosen, Stuart

    2004-12-01

    To compare the auditory function of normal-hearing children attending mainstream schools who were referred for an auditory evaluation because of listening/hearing problems (suspected auditory processing disorders [susAPD]) with that of normal-hearing control children. Sixty-five children with a normal standard audiometric evaluation, ages 6-14 yr (32 of whom were referred for susAPD, with the rest age-matched control children), completed a battery of four auditory tests: a dichotic test of competing sentences; a simple discrimination of short tone pairs differing in fundamental frequency at varying interstimulus intervals (TDT); a discrimination task using consonant cluster minimal pairs of real words (CCMP), and an adaptive threshold task for detecting a brief tone presented either simultaneously with a masker (simultaneous masking) or immediately preceding it (backward masking). Regression analyses, including age as a covariate, were performed to determine the extent to which the performance of the two groups differed on each task. Age-corrected z-scores were calculated to evaluate the effectiveness of the complete battery in discriminating the groups. The performance of the susAPD group was significantly poorer than the control group on all but the masking tasks, which failed to differentiate the two groups. The CCMP discriminated the groups most effectively, as it yielded the lowest number of control children with abnormal scores, and performance in both groups was independent of age. By contrast, the proportion of control children who performed poorly on the competing sentences test was unacceptably high. Together, the CCMP (verbal) and TDT (nonverbal) tasks detected impaired listening skills in 56% of the children who were referred to the clinic, compared with 6% of the control children. Performance on the two tasks was not correlated. Two of the four tests evaluated, the CCMP and TDT, proved effective in differentiating the two groups of children of this study. The application of both tests increased the proportion of susAPD children who performed poorly compared with the application of each test alone, while reducing the proportion of control subjects who performed poorly. The findings highlight the importance of carrying out a complete auditory evaluation in children referred for medical attention, even if their standard audiometric evaluation is unremarkable.
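
    One common way to compute the age-corrected z-scores mentioned above is to regress task scores on age within the control group and standardize each child's residual from that regression; the sketch below assumes that approach, with made-up ages and scores rather than the study's data.

    ```python
    # Hedged sketch of age-corrected z-scores via regression residuals.
    # Ages, scores, and the referred child's values are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    age_ctrl = rng.uniform(6, 14, 33)                        # control children's ages (years)
    score_ctrl = 50 + 2.5 * age_ctrl + rng.normal(0, 5, 33)  # hypothetical task scores

    # Fit score ~ age in the control group.
    slope, intercept = np.polyfit(age_ctrl, score_ctrl, 1)
    resid_sd = np.std(score_ctrl - (intercept + slope * age_ctrl), ddof=2)

    def age_corrected_z(score, age):
        """z-score of a child's score relative to the age-matched control expectation."""
        return (score - (intercept + slope * age)) / resid_sd

    # A hypothetical referred child scoring 55 at age 10.
    print(f"age-corrected z = {age_corrected_z(55, 10):.2f}")
    ```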

  3. Evaluating dedicated and intrinsic models of temporal encoding by varying context

    PubMed Central

    Spencer, Rebecca M.C.; Karmarkar, Uma; Ivry, Richard B.

    2009-01-01

    Two general classes of models have been proposed to account for how people process temporal information in the milliseconds range. Dedicated models entail a mechanism in which time is explicitly encoded; examples include clock–counter models and functional delay lines. Intrinsic models, such as state-dependent networks (SDN), represent time as an emergent property of the dynamics of neural processing. An important property of SDN is that the encoding of duration is context dependent since the representation of an interval will vary as a function of the initial state of the network. Consistent with this assumption, duration discrimination thresholds for auditory intervals spanning 100 ms are elevated when an irrelevant tone is presented at varying times prior to the onset of the test interval. We revisit this effect in two experiments, considering attentional issues that may also produce such context effects. The disruptive effect of a variable context was eliminated or attenuated when the intervals between the irrelevant tone and test interval were made dissimilar or the duration of the test interval was increased to 300 ms. These results indicate how attentional processes can influence the perception of brief intervals, as well as point to important constraints for SDN models. PMID:19487188

  4. Restoring auditory cortex plasticity in adult mice by restricting thalamic adenosine signaling

    DOE PAGES

    Blundon, Jay A.; Roy, Noah C.; Teubner, Brett J. W.; ...

    2017-06-30

    Circuits in the auditory cortex are highly susceptible to acoustic influences during an early postnatal critical period. The auditory cortex selectively expands neural representations of enriched acoustic stimuli, a process important for human language acquisition. Adults lack this plasticity. We show in the murine auditory cortex that juvenile plasticity can be reestablished in adulthood if acoustic stimuli are paired with disruption of ecto-5'-nucleotidase–dependent adenosine production or A1–adenosine receptor signaling in the auditory thalamus. This plasticity occurs at the level of cortical maps and individual neurons in the auditory cortex of awake adult mice and is associated with long-term improvement of tone-discrimination abilities. We determined that, in adult mice, disrupting adenosine signaling in the thalamus rejuvenates plasticity in the auditory cortex and improves auditory perception.

  5. Restoring auditory cortex plasticity in adult mice by restricting thalamic adenosine signaling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blundon, Jay A.; Roy, Noah C.; Teubner, Brett J. W.

    Circuits in the auditory cortex are highly susceptible to acoustic influences during an early postnatal critical period. The auditory cortex selectively expands neural representations of enriched acoustic stimuli, a process important for human language acquisition. Adults lack this plasticity. We show in the murine auditory cortex that juvenile plasticity can be reestablished in adulthood if acoustic stimuli are paired with disruption of ecto-5'-nucleotidase–dependent adenosine production or A1–adenosine receptor signaling in the auditory thalamus. This plasticity occurs at the level of cortical maps and individual neurons in the auditory cortex of awake adult mice and is associated with long-term improvement of tone-discrimination abilities. We determined that, in adult mice, disrupting adenosine signaling in the thalamus rejuvenates plasticity in the auditory cortex and improves auditory perception.

  6. Discrimination Training of Phonemic Contrasts Enhances Phonological Processing in Mainstream School Children

    ERIC Educational Resources Information Center

    Moore, D.R.; Rosenberg, J.F.; Coleman, J.S.

    2005-01-01

    Auditory perceptual learning has been proposed as effective for remediating impaired language and for enhancing normal language development. We examined the effect of phonemic contrast discrimination training on the discrimination of whole words and on phonological awareness in 8- to 10-year-old mainstream school children. Eleven phonemic contrast…

  7. Evidence for enhanced discrimination of virtual auditory distance among blind listeners using level and direct-to-reverberant cues.

    PubMed

    Kolarik, Andrew J; Cirstea, Silvia; Pardhan, Shahina

    2013-02-01

    Totally blind listeners often demonstrate better than normal capabilities when performing spatial hearing tasks. Accurate representation of three-dimensional auditory space requires the processing of available distance information between the listener and the sound source; however, auditory distance cues vary greatly depending upon the acoustic properties of the environment, and it is not known which distance cues are important to totally blind listeners. Our data show that totally blind listeners display better performance compared to sighted age-matched controls for distance discrimination tasks in anechoic and reverberant virtual rooms simulated using a room-image procedure. Totally blind listeners use two major auditory distance cues to stationary sound sources, level and direct-to-reverberant ratio, more effectively than sighted controls for many of the virtual distances tested. These results show that significant compensation among totally blind listeners for virtual auditory spatial distance leads to benefits across a range of simulated acoustic environments. No significant differences in performance were observed between listeners with partial non-correctable visual losses and sighted controls, suggesting that sensory compensation for virtual distance does not occur for listeners with partial vision loss.
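
    The direct-to-reverberant ratio cue referred to above can be computed from a room impulse response by splitting its energy around the direct-sound peak. The sketch below is a generic illustration with a synthetic impulse response and an assumed 2.5 ms direct-sound window; it is not the room-image procedure or the stimuli used in the study.

    ```python
    # Hedged sketch of a direct-to-reverberant ratio (DRR) computation.
    # The toy impulse response and the 2.5 ms window are illustrative choices.
    import numpy as np

    fs = 44100
    t = np.arange(int(0.4 * fs)) / fs
    rng = np.random.default_rng(2)

    # Toy impulse response: a direct impulse plus exponentially decaying noise (reverb).
    h = np.zeros_like(t)
    h[100] = 1.0
    h += 0.05 * rng.standard_normal(t.size) * np.exp(-t / 0.15)

    def direct_to_reverberant_ratio(h, fs, window_ms=2.5):
        peak = int(np.argmax(np.abs(h)))
        split = peak + int(window_ms * 1e-3 * fs)
        direct = np.sum(h[:split] ** 2)        # energy up to shortly after the direct sound
        reverberant = np.sum(h[split:] ** 2)   # remaining (reverberant) energy
        return 10.0 * np.log10(direct / reverberant)

    print(f"DRR = {direct_to_reverberant_ratio(h, fs):.1f} dB")
    ```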

  8. Understanding response proclivity and the limits of sensory capability: What do we hear and what can we hear?

    NASA Astrophysics Data System (ADS)

    Leek, Marjorie R.; Neff, Donna L.

    2004-05-01

    Charles Watson's studies of informational masking and the effects of stimulus uncertainty on auditory perception have had a profound impact on auditory research. His series of seminal studies in the mid-1970s on the detection and discrimination of target sounds in sequences of brief tones with uncertain properties addresses the fundamental problem of extracting target signals from background sounds. As conceptualized by Chuck and others, informational masking results from more central (even "cognitive") processes as a consequence of stimulus uncertainty, and can be distinguished from "energetic" masking, which primarily arises from the auditory periphery. Informational masking techniques are now in common use to study the detection, discrimination, and recognition of complex sounds, the capacity of auditory memory and aspects of auditory selective attention, the often large effects of training to reduce detrimental effects of uncertainty, and the perceptual segregation of target sounds from irrelevant context sounds. This paper will present an overview of past and current research on informational masking, and show how Chuck's work has been expanded in several directions by other scientists to include the effects of informational masking on speech perception and on perception by listeners with hearing impairment. [Work supported by NIDCD.]

  9. Active listening: task-dependent plasticity of spectrotemporal receptive fields in primary auditory cortex.

    PubMed

    Fritz, Jonathan; Elhilali, Mounya; Shamma, Shihab

    2005-08-01

    Listening is an active process in which attentive focus on salient acoustic features in auditory tasks can influence receptive field properties of cortical neurons. Recent studies showing rapid task-related changes in neuronal spectrotemporal receptive fields (STRFs) in primary auditory cortex of the behaving ferret are reviewed in the context of current research on cortical plasticity. Ferrets were trained on spectral tasks, including tone detection and two-tone discrimination, and on temporal tasks, including gap detection and click-rate discrimination. STRF changes could be measured on-line during task performance and occurred within minutes of task onset. During spectral tasks, there were specific spectral changes (enhanced response to tonal target frequency in tone detection and discrimination, suppressed response to tonal reference frequency in tone discrimination). Only in the temporal tasks, however, did the STRF change along the temporal dimension, with sharpened temporal dynamics. In ferrets trained on multiple tasks, distinctive and task-specific STRF changes could be observed in the same cortical neurons in successive behavioral sessions. These results suggest that rapid task-related plasticity is an ongoing process that occurs at a network and single unit level as the animal switches between different tasks and dynamically adapts cortical STRFs in response to changing acoustic demands.
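
    STRFs of the kind discussed above are often estimated by reverse correlation; the abstract does not specify the estimation method, so the spike-triggered-average sketch below is only one plausible approach, with a random stimulus spectrogram and simulated spike counts standing in for the ferret recordings.

    ```python
    # Hedged sketch: STRF estimation via a spike-triggered average (reverse correlation).
    # The "spectrogram" and spikes are simulated placeholders, not recorded data.
    import numpy as np

    rng = np.random.default_rng(3)
    n_freq, n_time, lag = 20, 5000, 30                 # frequency channels, time bins, STRF depth

    stim = rng.standard_normal((n_freq, n_time))       # white-noise stand-in for a ripple stimulus
    true_strf = rng.standard_normal((n_freq, lag)) * 0.1
    drive = np.array([np.sum(true_strf * stim[:, t - lag:t]) for t in range(lag, n_time)])
    rate = np.clip(drive + 0.5, 0, None)
    spikes = rng.poisson(rate)                         # simulated spike counts per time bin

    # Spike-triggered average: average the preceding stimulus window, weighted by spike counts.
    sta = np.zeros((n_freq, lag))
    for i, count in enumerate(spikes):
        if count:
            sta += count * stim[:, i:i + lag]
    sta /= spikes.sum()
    print("estimated STRF shape:", sta.shape)
    ```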

  10. Computer-based auditory phoneme discrimination training improves speech recognition in noise in experienced adult cochlear implant listeners.

    PubMed

    Schumann, Annette; Serman, Maja; Gefeller, Olaf; Hoppe, Ulrich

    2015-03-01

    Specific computer-based auditory training may be a useful complement to the rehabilitation process for cochlear implant (CI) listeners to achieve sufficient speech intelligibility. This study evaluated the effectiveness of a computerized, phoneme-discrimination training programme. The study employed a pretest-post-test design; participants were randomly assigned to the training or control group. Over a period of three weeks, the training group was instructed to train in phoneme discrimination via computer, twice a week. Sentence recognition in different noise conditions (moderate to difficult) was tested pre- and post-training, and six months after the training was completed. The control group was tested and retested within one month. Twenty-seven adult CI listeners who had been using cochlear implants for more than two years participated in the programme; 15 adults in the training group, 12 adults in the control group. Besides significant improvements for the trained phoneme-identification task, a generalized training effect was noted via significantly improved sentence recognition in moderate noise. No significant changes were noted in the difficult noise conditions. Improved performance was maintained over an extended period. Phoneme-discrimination training improves experienced CI listeners' speech perception in noise. Additional research is needed to optimize auditory training for individual benefit.

  11. "Change deafness" arising from inter-feature masking within a single auditory object.

    PubMed

    Barascud, Nicolas; Griffiths, Timothy D; McAlpine, David; Chait, Maria

    2014-03-01

    Our ability to detect prominent changes in complex acoustic scenes depends not only on the ear's sensitivity but also on the capacity of the brain to process competing incoming information. Here, employing a combination of psychophysics and magnetoencephalography (MEG), we investigate listeners' sensitivity in situations when two features belonging to the same auditory object change in close succession. The auditory object under investigation is a sequence of tone pips characterized by a regularly repeating frequency pattern. Signals consisted of an initial, regularly alternating sequence of three short (60 msec) pure tone pips (in the form ABCABC…) followed by a long pure tone with a frequency that is either expected based on the on-going regular pattern ("LONG-expected") or constitutes a pattern violation ("LONG-unexpected"). The change in LONG-expected is manifest as a change in duration (when the long pure tone exceeds the established duration of a tone pip), whereas the change in LONG-unexpected is manifest as a change in both the frequency pattern and the duration. Our results reveal a form of "change deafness," in that although changes in both the frequency pattern and the expected duration appear to be processed effectively by the auditory system (cortical signatures of both changes are evident in the MEG data), listeners often fail to detect changes in the frequency pattern when that change is closely followed by a change in duration. By systematically manipulating the properties of the changing features and measuring behavioral and MEG responses, we demonstrate that feature changes within the same auditory object, which occur close together in time, appear to compete for perceptual resources.

  12. Speech perception task with pseudowords.

    PubMed

    Appezzato, Mariana Martins; Hackerott, Maria Mercedes Saraiva; Avila, Clara Regina Brandão de

    2018-01-01

    Purpose Prepare a list of pseudowords in Brazilian Portuguese to assess the auditory discrimination ability of schoolchildren and investigate the internal consistency of test items and the effect of school grade on discrimination performance. Methods Study participants were 60 schoolchildren (60% female) enrolled in the 3rd (n=14), 4th (n=24) and 5th (n=22) grades of an elementary school in the city of Sao Paulo, Brazil, aged between eight years and two months and 11 years and eight months (99 to 136 months; mean=120.05; SD=10.26), with an average school performance score of 7.21 (minimum 5.0; maximum 10; SD=1.23). Forty-eight minimal pairs of Brazilian Portuguese pseudowords distinguished by a single phoneme were prepared. The participants' responses (whether the elements of the pairs were the same or different) were noted and analyzed. The data were analyzed using Cronbach's alpha coefficient, Spearman's correlation coefficient, and the Bonferroni post-hoc test at a significance level of 0.05. Results Internal consistency analysis indicated the deletion of 20 pairs. The remaining 28 items showed good internal consistency (α=0.84). The maximum and minimum scores of correct discrimination responses were 34 and 16, respectively (mean=30.79; SD=3.68). No correlation was observed between age, school performance, and discrimination performance, and no difference between school grades was found. Conclusion Most of the items proposed for assessing the auditory discrimination of speech sounds showed good internal consistency in relation to the task. Age and school grade did not improve the auditory discrimination of speech sounds.
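
    The internal-consistency analysis described above rests on Cronbach's alpha, which compares the sum of item variances to the variance of total scores. The sketch below computes alpha for a simulated binary item-by-participant matrix standing in for the 28 retained minimal-pair items and 60 children; the simulated abilities and difficulties are assumptions for illustration only.

    ```python
    # Hedged sketch of Cronbach's alpha on a participants x items matrix of 0/1 scores.
    # The simulated data are placeholders, not the study's responses.
    import numpy as np

    rng = np.random.default_rng(4)
    ability = rng.normal(0, 1, 60)                     # latent discrimination ability per child
    difficulty = rng.normal(0, 1, 28)                  # difficulty per minimal-pair item
    p_correct = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
    items = rng.binomial(1, p_correct)                 # 60 participants x 28 items

    def cronbach_alpha(x):
        """x: participants x items matrix; alpha = k/(k-1) * (1 - sum(item var) / var(total))."""
        k = x.shape[1]
        item_var = x.var(axis=0, ddof=1).sum()
        total_var = x.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_var / total_var)

    print(f"alpha = {cronbach_alpha(items):.2f}")
    ```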

  13. Subcortical Plasticity Following Perceptual Learning in a Pitch Discrimination Task

    PubMed Central

    Plack, Christopher J.

    2010-01-01

    Practice can lead to dramatic improvements in the discrimination of auditory stimuli. In this study, we investigated changes of the frequency-following response (FFR), a subcortical component of the auditory evoked potentials, after a period of pitch discrimination training. Twenty-seven adult listeners were trained for 10 h on a pitch discrimination task using one of three different complex tone stimuli. One had a static pitch contour, one had a rising pitch contour, and one had a falling pitch contour. Behavioral measures of pitch discrimination and FFRs for all the stimuli were measured before and after the training phase for these participants, as well as for an untrained control group (n = 12). Trained participants showed significant improvements in pitch discrimination compared to the control group for all three trained stimuli. These improvements were partly specific for stimuli with the same pitch modulation (dynamic vs. static) and with the same pitch trajectory (rising vs. falling) as the trained stimulus. Also, the robustness of FFR neural phase locking to the sound envelope increased significantly more in trained participants compared to the control group for the static and rising contour, but not for the falling contour. Changes in FFR strength were partly specific for stimuli with the same pitch modulation (dynamic vs. static) of the trained stimulus. Changes in FFR strength, however, were not specific for stimuli with the same pitch trajectory (rising vs. falling) as the trained stimulus. These findings indicate that even relatively low-level processes in the mature auditory system are subject to experience-related change. PMID:20878201
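
    One way the "robustness of FFR neural phase locking to the sound envelope" can be quantified is inter-trial phase coherence at the stimulus fundamental frequency; the study may well have used a different metric, so the sketch below, with synthetic trials and an assumed 100 Hz F0, is purely illustrative.

    ```python
    # Hedged sketch: inter-trial phase coherence of simulated FFR trials at F0.
    # Sampling rate, F0, trial count, and noise level are illustrative assumptions.
    import numpy as np

    fs, f0, n_trials = 2000, 100.0, 200
    t = np.arange(int(0.2 * fs)) / fs
    rng = np.random.default_rng(5)

    # Simulated FFR trials: a weak phase-locked component at F0 buried in noise.
    trials = 0.2 * np.sin(2 * np.pi * f0 * t) + rng.standard_normal((n_trials, t.size))

    def phase_coherence(trials, fs, freq):
        spectra = np.fft.rfft(trials, axis=1)
        freqs = np.fft.rfftfreq(trials.shape[1], 1 / fs)
        bin_idx = np.argmin(np.abs(freqs - freq))
        phases = np.angle(spectra[:, bin_idx])
        return np.abs(np.mean(np.exp(1j * phases)))    # 0 = no locking, 1 = perfect locking

    print(f"phase coherence at F0: {phase_coherence(trials, fs, f0):.2f}")
    ```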

  14. Subcortical plasticity following perceptual learning in a pitch discrimination task.

    PubMed

    Carcagno, Samuele; Plack, Christopher J

    2011-02-01

    Practice can lead to dramatic improvements in the discrimination of auditory stimuli. In this study, we investigated changes of the frequency-following response (FFR), a subcortical component of the auditory evoked potentials, after a period of pitch discrimination training. Twenty-seven adult listeners were trained for 10 h on a pitch discrimination task using one of three different complex tone stimuli. One had a static pitch contour, one had a rising pitch contour, and one had a falling pitch contour. Behavioral measures of pitch discrimination and FFRs for all the stimuli were measured before and after the training phase for these participants, as well as for an untrained control group (n = 12). Trained participants showed significant improvements in pitch discrimination compared to the control group for all three trained stimuli. These improvements were partly specific for stimuli with the same pitch modulation (dynamic vs. static) and with the same pitch trajectory (rising vs. falling) as the trained stimulus. Also, the robustness of FFR neural phase locking to the sound envelope increased significantly more in trained participants compared to the control group for the static and rising contour, but not for the falling contour. Changes in FFR strength were partly specific for stimuli with the same pitch modulation (dynamic vs. static) of the trained stimulus. Changes in FFR strength, however, were not specific for stimuli with the same pitch trajectory (rising vs. falling) as the trained stimulus. These findings indicate that even relatively low-level processes in the mature auditory system are subject to experience-related change.

  15. A selective impairment of perception of sound motion direction in peripheral space: A case study.

    PubMed

    Thaler, Lore; Paciocco, Joseph; Daley, Mark; Lesniak, Gabriella D; Purcell, David W; Fraser, J Alexander; Dutton, Gordon N; Rossit, Stephanie; Goodale, Melvyn A; Culham, Jody C

    2016-01-08

    It is still an open question if the auditory system, similar to the visual system, processes auditory motion independently from other aspects of spatial hearing, such as static location. Here, we report psychophysical data from a patient (female, 42 and 44 years old at the time of two testing sessions), who suffered a bilateral occipital infarction over 12 years earlier, and who has extensive damage in the occipital lobe bilaterally, extending into inferior posterior temporal cortex bilaterally and into right parietal cortex. We measured the patient's spatial hearing ability to discriminate static location, detect motion and perceive motion direction in both central (straight ahead), and right and left peripheral auditory space (50° to the left and right of straight ahead). Compared to control subjects, the patient was impaired in her perception of direction of auditory motion in peripheral auditory space, and the deficit was more pronounced on the right side. However, there was no impairment in her perception of the direction of auditory motion in central space. Furthermore, detection of motion and discrimination of static location were normal in both central and peripheral space. The patient also performed normally in a wide battery of non-spatial audiological tests. Our data are consistent with previous neuropsychological and neuroimaging results that link posterior temporal cortex and parietal cortex with the processing of auditory motion. Most importantly, however, our data break new ground by suggesting a division of auditory motion processing in terms of speed and direction and in terms of central and peripheral space. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Modulation of Auditory Cortex Response to Pitch Variation Following Training with Microtonal Melodies

    PubMed Central

    Zatorre, Robert J.; Delhommeau, Karine; Zarate, Jean Mary

    2012-01-01

    We tested changes in cortical functional response to auditory patterns in a configural learning paradigm. We trained 10 human listeners to discriminate micromelodies (consisting of smaller pitch intervals than normally used in Western music) and measured covariation in blood oxygenation signal to increasing pitch interval size in order to dissociate global changes in activity from those specifically associated with the stimulus feature that was trained. A psychophysical staircase procedure with feedback was used for training over a 2-week period. Behavioral tests of discrimination ability performed before and after training showed significant learning on the trained stimuli, and generalization to other frequencies and tasks; no learning occurred in an untrained control group. Before training, the functional MRI data showed the expected systematic increase in activity in auditory cortices as a function of increasing micromelody pitch interval size. This function became shallower after training, with the maximal change observed in the right posterior auditory cortex. Global decreases in activity in auditory regions, along with global increases in frontal cortices, also occurred after training. Individual variation in learning rate was related to the hemodynamic slope to pitch interval size, such that those who had a higher sensitivity to pitch interval variation prior to learning achieved the fastest learning. We conclude that configural auditory learning entails modulation in the response of auditory cortex to the trained stimulus feature. Reduction in blood oxygenation response to increasing pitch interval size suggests that fewer computational resources, and hence lower neural recruitment, are associated with learning, in accord with models of auditory cortex function, and with data from other modalities. PMID:23227019
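
    An adaptive staircase with feedback, of the general family mentioned above, can be sketched as a 2-down/1-up rule that tracks roughly the 70.7%-correct point. The simulated listener, starting interval, step size, and stopping rule below are illustrative assumptions rather than the study's training parameters.

    ```python
    # Hedged sketch of a 2-down/1-up adaptive staircase for pitch-interval discrimination.
    # All parameters are illustrative; the threshold is the mean of the last reversals.
    import numpy as np

    rng = np.random.default_rng(6)

    def simulated_listener(interval_cents, threshold=20.0, slope=0.15):
        """Probability of a correct response rises with pitch-interval size (2AFC floor of 0.5)."""
        p = 0.5 + 0.5 / (1 + np.exp(-slope * (interval_cents - threshold)))
        return rng.random() < p

    interval, step, correct_in_a_row, reversals, direction = 80.0, 10.0, 0, [], 0
    while len(reversals) < 10:
        if simulated_listener(interval):
            correct_in_a_row += 1
            if correct_in_a_row == 2:            # two correct in a row: make the task harder
                correct_in_a_row = 0
                if direction == +1:
                    reversals.append(interval)
                direction = -1
                interval = max(1.0, interval - step)
        else:
            correct_in_a_row = 0                 # one error: make the task easier
            if direction == -1:
                reversals.append(interval)
            direction = +1
            interval += step

    print(f"estimated threshold: {np.mean(reversals[-6:]):.1f} cents")
    ```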

  17. Auditory Memory for Timbre

    ERIC Educational Resources Information Center

    McKeown, Denis; Wellsted, David

    2009-01-01

    Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex…

  18. A Study of Semantic Features: Electrophysiological Correlates.

    ERIC Educational Resources Information Center

    Wetzel, Frederick; And Others

    This study investigates whether words differing in a single contrastive semantic feature (positive/negative) can be discriminated by auditory evoked responses (AERs). Ten right-handed college students were provided with auditory stimuli consisting of 20 relational words (more/less; high/low, etc.) spoken with a middle American accent and computer…

  19. Effect of Auditory Motion Velocity on Reaction Time and Cortical Processes

    ERIC Educational Resources Information Center

    Getzmann, Stephan

    2009-01-01

    The study investigated the processing of sound motion, employing a psychophysical motion discrimination task in combination with electroencephalography. Following stationary auditory stimulation from a central space position, the onset of left- and rightward motion elicited a specific cortical response that was lateralized to the hemisphere…

  20. Effect of Three Classroom Listening Conditions on Speech Intelligibility

    ERIC Educational Resources Information Center

    Ross, Mark; Giolas, Thomas G.

    1971-01-01

    Speech discrimination scores for 13 deaf children were obtained in a classroom under: usual listening condition (hearing aid or not), binaural listening situation using auditory trainer/FM receiver with wireless microphone transmitter turned off, and binaural condition with inputs from auditory trainer/FM receiver and wireless microphone/FM…

  1. Assessing tinnitus and prospective tinnitus therapeutics using a psychophysical animal model.

    PubMed

    Bauer, C A; Brozoski, T J

    2001-03-01

    Subjective tinnitus is a common and often debilitating disorder that is difficult to study because it is a perceptual state without an objective stimulus correlate. Studying tinnitus in humans is further complicated by the heterogeneity of tinnitus quality, severity, and associated hearing loss. As a consequence, the pathophysiology of tinnitus is poorly understood and treatments are often unsuccessful. In the present study, an animal psychophysical model was developed to reflect several features of tinnitus observed in humans. Chronic tinnitus was induced in rats by a single intense unilateral exposure to noise. The tinnitus was measured using a psychophysical procedure, which required the animals to discriminate between auditory test stimuli consisting of tones, noise, and 0 dB. Tinnitus was indicated by a frequency-specific shift in discrimination functions with respect to control subjects not exposed to noise. The psychophysical consequences of the noise exposure were best explained by a tinnitus hypothesis and could not be explained easily by other consequences of noise exposure such as hearing loss. The qualitative features of the tinnitus were determined and related to the duration of noise exposure and the associated cochlear trauma. The tinnitus was found to persist and intensify over 17 months of testing. Finally, the tinnitus was reversibly attenuated by treatment with gabapentin, a GABA agonist. It was concluded that this model reflected several features of human tinnitus, such as its tonality and persistence, and could be useful as a screen for potential therapeutics as well as a tool to help unravel the pathophysiology of the disorder of phantom auditory perception.

  2. Temporal processing and long-latency auditory evoked potential in stutterers.

    PubMed

    Prestes, Raquel; de Andrade, Adriana Neves; Santos, Renata Beatriz Fernandes; Marangoni, Andrea Tortosa; Schiefer, Ana Maria; Gil, Daniela

    Stuttering is a speech fluency disorder and may be associated with neuroaudiological factors linked to central auditory processing, including changes in auditory processing skills and temporal resolution. To characterize the temporal processing and long-latency auditory evoked potential in stutterers and to compare them with non-stutterers. The study included 41 right-handed subjects, aged 18-46 years, divided into two groups: stutterers (n=20) and non-stutterers (n=21), comparable in age, education, and sex. All subjects completed the duration pattern test, the random gap detection test, and long-latency auditory evoked potential recording. Individuals who stutter showed poorer performance on Duration Pattern and Random Gap Detection tests when compared with fluent individuals. In the long-latency auditory evoked potential, there was a difference in the latency of N2 and P3 components; stutterers had higher latency values. Stutterers have poor performance in temporal processing and higher latency values for N2 and P3 components. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  3. Atypical vertical sound localization and sound-onset sensitivity in people with autism spectrum disorders

    PubMed Central

    Visser, Eelke; Zwiers, Marcel P.; Kan, Cornelis C.; Hoekstra, Liesbeth; van Opstal, A. John; Buitelaar, Jan K.

    2013-01-01

    Background Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. Methods We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Results Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. Limitations The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Conclusion Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs. PMID:24148845

  4. Auditory processing and morphological anomalies in medial geniculate nucleus of Cntnap2 mutant mice.

    PubMed

    Truong, Dongnhu T; Rendall, Amanda R; Castelluccio, Brian C; Eigsti, Inge-Marie; Fitch, R Holly

    2015-12-01

    Genetic epidemiological studies support a role for CNTNAP2 in developmental language disorders such as autism spectrum disorder, specific language impairment, and dyslexia. Atypical language development and function represent a core symptom of autism spectrum disorder (ASD), with evidence suggesting that aberrant auditory processing (including impaired spectrotemporal processing and enhanced pitch perception) may both contribute to an anomalous language phenotype. Investigation of gene-brain-behavior relationships in social and repetitive ASD symptomatology has benefited from experimentation on the Cntnap2 knockout (KO) mouse. However, auditory-processing behavior and effects on neural structures within the central auditory pathway have not been assessed in this model. Thus, this study examined whether auditory-processing abnormalities were associated with mutation of the Cntnap2 gene in mice. Cntnap2 KO mice were assessed on auditory-processing tasks including silent gap detection, embedded tone detection, and pitch discrimination. Cntnap2 knockout mice showed deficits in silent gap detection but a surprising superiority in pitch-related discrimination as compared with controls. Stereological analysis revealed a reduction in the number and density of neurons, as well as a shift in neuronal size distribution toward smaller neurons, in the medial geniculate nucleus of mutant mice. These findings are consistent with a central role for CNTNAP2 in the ontogeny and function of neural systems subserving auditory processing and suggest that developmental disruption of these neural systems could contribute to the atypical language phenotype seen in autism spectrum disorder. (c) 2015 APA, all rights reserved.

  5. Dividing time: concurrent timing of auditory and visual events by young and elderly adults.

    PubMed

    McAuley, J Devin; Miller, Jonathan P; Wang, Mo; Pang, Kevin C H

    2010-07-01

    This article examines age differences in individual's ability to produce the durations of learned auditory and visual target events either in isolation (focused attention) or concurrently (divided attention). Young adults produced learned target durations equally well in focused and divided attention conditions. Older adults, in contrast, showed an age-related increase in timing variability in divided attention conditions that tended to be more pronounced for visual targets than for auditory targets. Age-related impairments were associated with a decrease in working memory span; moreover, the relationship between working memory and timing performance was largest for visual targets in divided attention conditions.

  6. Cortical potentials evoked by confirming and disconfirming feedback following an auditory discrimination.

    NASA Technical Reports Server (NTRS)

    Squires, K. C.; Hillyard, S. A.; Lindsay, P. H.

    1973-01-01

    Vertex potentials elicited by visual feedback signals following an auditory intensity discrimination have been studied with eight subjects. Feedback signals which confirmed the prior sensory decision elicited small P3s, while disconfirming feedback elicited P3s that were larger. On the average, the latency of P3 was also found to increase with increasing disparity between the judgment and the feedback information. These effects were part of an overall dichotomy in wave shape following confirming vs disconfirming feedback. These findings are incorporated in a general model of the role of P3 in perceptual decision making.

  7. Auditory Processing in Specific Language Impairment (SLI): Relations With the Perception of Lexical and Phrasal Stress.

    PubMed

    Richards, Susan; Goswami, Usha

    2015-08-01

    We investigated whether impaired acoustic processing is a factor in developmental language disorders. The amplitude envelope of the speech signal is known to be important in language processing. We examined whether impaired perception of amplitude envelope rise time is related to impaired perception of lexical and phrasal stress in children with specific language impairment (SLI). Twenty-two children aged between 8 and 12 years participated in this study. Twelve had SLI; 10 were typically developing controls. All children completed psychoacoustic tasks measuring rise time, intensity, frequency, and duration discrimination. They also completed 2 linguistic stress tasks measuring lexical and phrasal stress perception. The SLI group scored significantly below the typically developing controls on both stress perception tasks. Performance on stress tasks correlated with individual differences in auditory sensitivity. Rise time and frequency thresholds accounted for the most unique variance. Digit Span also contributed to task success for the SLI group. The SLI group had difficulties with both acoustic and stress perception tasks. Our data suggest that poor sensitivity to amplitude rise time and sound frequency significantly contributes to the stress perception skills of children with SLI. Other cognitive factors such as phonological memory are also implicated.

  8. Magnetoencephalographic recording of auditory mismatch negativity in response to duration and frequency deviants in a single session in patients with schizophrenia.

    PubMed

    Suga, Motomu; Nishimura, Yukika; Kawakubo, Yuki; Yumoto, Masato; Kasai, Kiyoto

    2016-07-01

    Auditory mismatch negativity (MMN) and its magnetoencephalographic (MEG) counterpart (MMNm) are an established biological index in schizophrenia research. MMN in response to duration and frequency deviants may have differential relevance to the pathophysiology and clinical stages of schizophrenia. MEG has the advantage that it almost exclusively detects MMNm arising from the auditory cortex. However, few previous MEG studies on schizophrenia have simultaneously assessed MMNm in response to duration and frequency deviants or examined the effect of chronicity on the group difference. Forty-two patients with chronic schizophrenia and 74 matched control subjects participated in the study. Using a whole-head MEG, MMNm in response to duration and frequency deviants of tones was recorded while participants passively listened to an auditory sequence. Compared to healthy subjects, patients with schizophrenia exhibited significantly reduced powers of MMNm in response to the duration deviant in both hemispheres, whereas MMNm in response to the frequency deviant did not differ between the two groups. These results did not change according to the chronicity of the illness. These results, obtained by using a sequence enabling simultaneous assessment of both types of MMNm, suggest that MEG recording of MMN in response to duration deviants may be a more sensitive biological marker of schizophrenia than MMN in response to frequency deviants. Our findings represent an important first step towards establishment of MMN as a biomarker for schizophrenia in real-world clinical psychiatry settings. © 2016 The Authors. Psychiatry and Clinical Neurosciences © 2016 Japanese Society of Psychiatry and Neurology.

  9. Musicians' edge: A comparison of auditory processing, cognitive abilities and statistical learning.

    PubMed

    Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Demuth, Katherine; Arciuli, Joanne

    2016-12-01

    It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians' advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N = 17) and non-musicians (N = 18). Musicians had significantly better performance than non-musicians on frequency discrimination, and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians' superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Revealing and quantifying the impaired phonological analysis underpinning impaired comprehension in Wernicke's aphasia.

    PubMed

    Robson, Holly; Keidel, James L; Ralph, Matthew A Lambon; Sage, Karen

    2012-01-01

    Wernicke's aphasia is a condition which results in severely disrupted language comprehension following a lesion to the left temporo-parietal region. A phonological analysis deficit has traditionally been held to be at the root of the comprehension impairment in Wernicke's aphasia, a view consistent with current functional neuroimaging which finds areas in the superior temporal cortex responsive to phonological stimuli. However, behavioural evidence to support the link between a phonological analysis deficit and auditory comprehension has not yet been shown. This study extends seminal work by Blumstein, Baker, and Goodglass (1977) to investigate the relationship between acoustic-phonological perception, measured through phonological discrimination, and auditory comprehension in a case series of Wernicke's aphasia participants. A novel adaptive phonological discrimination task was used to obtain reliable thresholds of the phonological perceptual distance required between nonwords before they could be discriminated. Wernicke's aphasia participants showed significantly elevated thresholds compared to age- and hearing-matched control participants. Acoustic-phonological thresholds correlated strongly with auditory comprehension abilities in Wernicke's aphasia. In contrast, nonverbal semantic skills showed no relationship with auditory comprehension. The results are evaluated in the context of recent neurobiological models of language and suggest that impaired acoustic-phonological perception underlies the comprehension impairment in Wernicke's aphasia and favour models of language which propose a leftward asymmetry in phonological analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Pitch perception prior to cortical maturation

    NASA Astrophysics Data System (ADS)

    Lau, Bonnie K.

    Pitch perception plays an important role in many complex auditory tasks including speech perception, music perception, and sound source segregation. Because of the protracted and extensive development of the human auditory cortex, pitch perception might be expected to mature, at least over the first few months of life. This dissertation investigates complex pitch perception in 3-month-olds, 7-month-olds and adults -- time points when the organization of the auditory pathway is distinctly different. Using an observer-based psychophysical procedure, a series of four studies were conducted to determine whether infants (1) discriminate the pitch of harmonic complex tones, (2) discriminate the pitch of unresolved harmonics, (3) discriminate the pitch of missing fundamental melodies, and (4) have comparable sensitivity to pitch and spectral changes as adult listeners. The stimuli used in these studies were harmonic complex tones, with energy missing at the fundamental frequency. Infants at both three and seven months of age discriminated the pitch of missing fundamental complexes composed of resolved and unresolved harmonics as well as missing fundamental melodies, demonstrating perception of complex pitch by three months of age. More surprisingly, infants in both age groups had lower pitch and spectral discrimination thresholds than adult listeners. Furthermore, no differences in performance on any of the tasks presented were observed between infants at three and seven months of age. These results suggest that subcortical processing is not only sufficient to support pitch perception prior to cortical maturation, but provides adult-like sensitivity to pitch by three months.

  12. Speech-feature discrimination in children with Asperger syndrome as determined with the multi-feature mismatch negativity paradigm.

    PubMed

    Kujala, T; Kuuluvainen, S; Saalasti, S; Jansson-Verkasalo, E; von Wendt, L; Lepistö, T

    2010-09-01

    Asperger syndrome, belonging to the autistic spectrum of disorders, involves deficits in social interaction and prosodic use of language but normal development of formal language abilities. Auditory processing involves both hyper- and hypoactive reactivity to acoustic changes. Responses composed of mismatch negativity (MMN) and obligatory components were recorded for five types of deviations in syllables (vowel, vowel duration, consonant, syllable frequency, syllable intensity) with the multi-feature paradigm from 8- to 12-year-old children with Asperger syndrome. Children with Asperger syndrome had larger MMNs for intensity and smaller MMNs for frequency changes than typically developing children, whereas no MMN group differences were found for the other deviant stimuli. Furthermore, children with Asperger syndrome performed more poorly than controls in the Comprehension of Instructions subtest of a language test battery. Cortical speech-sound discrimination is aberrant in children with Asperger syndrome. This is evident both as hypersensitive and depressed neural reactions to speech-sound changes, and is associated with features (frequency, intensity) which are relevant for prosodic processing. The multi-feature MMN paradigm, which includes variation and thereby resembles natural speech hearing circumstances, suggests an abnormal pattern of speech discrimination in Asperger syndrome, including both hypo- and hypersensitive responses to speech features. 2010 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  13. [Auditory processing and high frequency audiometry in students of São Paulo].

    PubMed

    Ramos, Cristina Silveira; Pereira, Liliane Desgualdo

    2005-01-01

    Auditory processing and auditory sensitivity to high-frequency sounds. To characterize sound localization, temporal ordering, auditory patterns, and the detection of high-frequency sounds, looking for possible relations between these factors. Thirty-two fourth-grade students with normal hearing, born in the city of São Paulo, underwent a simplified auditory processing evaluation, the duration pattern test, and high-frequency audiometry. Three individuals (9.4%) presented an auditory processing disorder (APD), and in one of them this co-occurred with reduced hearing sensitivity on high-frequency audiometry. APD associated with a loss of auditory sensitivity at high frequencies should be investigated further.

  14. The role of primary auditory and visual cortices in temporal processing: A tDCS approach.

    PubMed

    Mioni, G; Grondin, S; Forgione, M; Fracasso, V; Mapelli, D; Stablum, F

    2016-10-15

    Many studies have shown that visual stimuli are frequently experienced as shorter than equivalent auditory stimuli. These findings suggest that timing is distributed across many brain areas and that "different clocks" might be involved in temporal processing. The aim of this study was to investigate, with the application of tDCS over V1 and A1, the specific role of primary sensory cortices (either visual or auditory) in temporal processing. Forty-eight university students were included in the study. Twenty-four participants were stimulated over A1 and 24 participants were stimulated over V1. Participants performed time bisection tasks, in the visual and the auditory modalities, involving standard durations lasting 300 ms (short) and 900 ms (long). When tDCS was delivered over A1, no effect of stimulation was observed on perceived duration, but we observed higher temporal variability under anodic stimulation compared to sham and higher variability in the visual compared to the auditory modality. When tDCS was delivered over V1, an underestimation of perceived duration and higher variability was observed in the visual compared to the auditory modality. Our results showed more variability of visual temporal processing under tDCS stimulation. These results suggest a modality-independent role of A1 in temporal processing and a modality-specific role of V1 in the processing of temporal intervals in the visual modality. Copyright © 2016 Elsevier B.V. All rights reserved.
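
    For context, a time bisection task with 300 ms and 900 ms anchors is typically summarized by fitting a psychometric function to the proportion of "long" responses and reading off the bisection point (the duration judged long half the time). The sketch below uses fabricated response proportions and a logistic fit as one plausible analysis, not the study's actual data or model.

    ```python
    # Hedged sketch: summarizing bisection-task data with a logistic psychometric function.
    # The response proportions are fabricated for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    durations = np.array([300, 400, 500, 600, 700, 800, 900])      # probe durations (ms)
    p_long = np.array([0.02, 0.10, 0.35, 0.60, 0.85, 0.95, 0.99])  # illustrative "long" proportions

    def logistic(x, bisection_point, slope):
        return 1.0 / (1.0 + np.exp(-slope * (x - bisection_point)))

    (bp, slope), _ = curve_fit(logistic, durations, p_long, p0=[600.0, 0.01])
    print(f"bisection point: {bp:.0f} ms, slope: {slope:.4f} per ms")

    # A Weber-ratio-like index is often also reported: (T75 - T25) / (2 * bisection point).
    t25 = bp - np.log(3) / slope
    t75 = bp + np.log(3) / slope
    print(f"Weber ratio: {(t75 - t25) / (2 * bp):.2f}")
    ```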

  15. The development of involuntary and voluntary attention from childhood to adulthood: a combined behavioral and event-related potential study.

    PubMed

    Wetzel, Nicole; Widmann, Andreas; Berti, Stefan; Schröger, Erich

    2006-10-01

    This study investigated auditory involuntary and voluntary attention in children aged 6-8 years, children aged 10-12 years, and young adults. The strength of distracting stimuli (20% and 5% pitch changes) and the amount of allocation of attention were varied. In an auditory distraction paradigm, event-related potentials (ERPs) and behavioral data were measured from subjects either performing a sound duration discrimination task or watching a silent video. Pitch-changed sounds caused prolonged reaction times and decreased hit rates in all age groups. Larger distractors (20%) caused stronger distraction in children, but not in adults. The amplitudes of mismatch negativity (MMN), P3a, and reorienting negativity (RON) were modulated by age and by voluntary attention. P3a was additionally affected by distractor strength. Maturational changes were also observed in the amplitudes of P1 (decreasing with age) and N1 (increasing with age). P2-modulation by voluntary attention was opposite in young children and adults. Results suggest quantitative and qualitative changes in auditory voluntary and involuntary attention and distraction during development. The processing steps involved in distraction (pre-attentive change detection, attention switch, reorienting) are functional in children aged 6-8 but reveal characteristic differences from those of young adults. In general, distractibility as indicated by behavioral and ERP measures decreases from childhood to adulthood. Behavioral and ERP markers for different processing stages involved in voluntary and involuntary attention reveal characteristic developmental changes from childhood to young adulthood.

  16. An Experimental Investigation of the Effect of Altered Auditory Feedback on the Conversational Speech of Adults Who Stutter

    ERIC Educational Resources Information Center

    Lincoln, Michelle; Packman, Ann; Onslow, Mark; Jones, Mark

    2010-01-01

    Purpose: To investigate the impact on percentage of syllables stuttered of various durations of delayed auditory feedback (DAF), levels of frequency-altered feedback (FAF), and masking auditory feedback (MAF) during conversational speech. Method: Eleven adults who stuttered produced 10-min conversational speech samples during a control condition…

  17. Duration of Auditory Sensory Memory in Parents of Children with SLI: A Mismatch Negativity Study

    ERIC Educational Resources Information Center

    Barry, Johanna G.; Hardiman, Mervyn J.; Line, Elizabeth; White, Katherine B.; Yasin, Ifat; Bishop, Dorothy V. M.

    2008-01-01

    In a previous behavioral study, we showed that parents of children with SLI had a subclinical deficit in phonological short-term memory. Here, we tested the hypothesis that they also have a deficit in nonverbal auditory sensory memory. We measured auditory sensory memory using a paradigm involving an electrophysiological component called the…

  18. Late Maturation of Auditory Perceptual Learning

    ERIC Educational Resources Information Center

    Huyck, Julia Jones; Wright, Beverly A.

    2011-01-01

    Adults can improve their performance on many perceptual tasks with training, but when does the response to training become mature? To investigate this question, we trained 11-year-olds, 14-year-olds and adults on a basic auditory task (temporal-interval discrimination) using a multiple-session training regimen known to be effective for adults. The…

  19. Auditory Pattern Memory: Mechanisms of Tonal Sequence Discrimination by Human Observers

    DTIC Science & Technology

    1988-10-30

    …and Creelman (1977) in a study of categorical perception. Tanner’s model included a short-term decaying memory for the acoustic input to the system plus … auditory pattern components, J. Acoust. Soc. Am., 76, 1037-1044. Macmillan, N. A., Kaplan, H. L., & Creelman, C. D. (1977). The psychophysics of…

  20. Auditory Stream Segregation and the Perception of Across-Frequency Synchrony

    ERIC Educational Resources Information Center

    Micheyl, Christophe; Hunter, Cynthia; Oxenham, Andrew J.

    2010-01-01

    This study explored the extent to which sequential auditory grouping affects the perception of temporal synchrony. In Experiment 1, listeners discriminated between 2 pairs of asynchronous "target" tones at different frequencies, A and B, in which the B tone either led or lagged. Thresholds were markedly higher when the target tones were temporally…

  1. Establishing Auditory-Tactile-Visual Equivalence Classes in Children with Autism and Developmental Delays

    ERIC Educational Resources Information Center

    Mullen, Stuart; Dixon, Mark R.; Belisle, Jordan; Stanley, Caleb

    2017-01-01

    The current study sought to evaluate the efficacy of a stimulus equivalence training procedure in establishing auditory-tactile-visual stimulus classes with 2 children with autism and developmental delays. Participants were exposed to vocal-tactile (A-B) and tactile-picture (B-C) conditional discrimination training and were tested for the…

  2. Auditory Cortex Is Required for Fear Potentiation of Gap Detection

    PubMed Central

    Weible, Aldis P.; Liu, Christine; Niell, Cristopher M.

    2014-01-01

    Auditory cortex is necessary for the perceptual detection of brief gaps in noise, but is not necessary for many other auditory tasks such as frequency discrimination, prepulse inhibition of startle responses, or fear conditioning with pure tones. It remains unclear why auditory cortex should be necessary for some auditory tasks but not others. One possibility is that auditory cortex is causally involved in gap detection and other forms of temporal processing in order to associate meaning with temporally structured sounds. This predicts that auditory cortex should be necessary for associating meaning with gaps. To test this prediction, we developed a fear conditioning paradigm for mice based on gap detection. We found that pairing a 10 or 100 ms gap with an aversive stimulus caused a robust enhancement of gap detection measured 6 h later, which we refer to as fear potentiation of gap detection. Optogenetic suppression of auditory cortex during pairing abolished this fear potentiation, indicating that auditory cortex is critically involved in associating temporally structured sounds with emotionally salient events. PMID:25392510

  3. Evidence of a visual-to-auditory cross-modal sensory gating phenomenon as reflected by the human P50 event-related brain potential modulation.

    PubMed

    Lebib, Riadh; Papo, David; de Bode, Stella; Baudonnière, Pierre Marie

    2003-05-08

    We investigated the existence of a cross-modal sensory gating reflected by the modulation of an early electrophysiological index, the P50 component. We analyzed event-related brain potentials elicited by audiovisual speech stimuli manipulated along two dimensions: congruency and discriminability. The results showed that the P50 was attenuated when visual and auditory speech information were redundant (i.e. congruent), compared with the same event-related potential component elicited by discrepant audiovisual dubbing. When hard to discriminate, however, bimodal incongruent speech stimuli elicited a similar pattern of P50 attenuation. We concluded that a visual-to-auditory cross-modal sensory gating phenomenon exists. These results corroborate previous findings revealing a very early audiovisual interaction during speech perception. Finally, we postulated that the sensory gating system included a cross-modal dimension.

  4. The relationship of phonological ability, speech perception, and auditory perception in adults with dyslexia

    PubMed Central

    Law, Jeremy M.; Vandermosten, Maaike; Ghesquiere, Pol; Wouters, Jan

    2014-01-01

    This study investigated whether auditory, speech perception, and phonological skills are tightly interrelated or contribute independently to reading. We assessed each of these three skills in 36 adults with a past diagnosis of dyslexia and 54 matched normal-reading adults. Phonological skills were tested by the typical threefold tasks, i.e., rapid automatic naming, verbal short-term memory and phonological awareness. Dynamic auditory processing skills were assessed by means of a frequency modulation (FM) task and an amplitude rise time (RT) task; an intensity discrimination task (ID) was included as a non-dynamic control task. Speech perception was assessed by means of sentences and words-in-noise tasks. Group analyses revealed significant group differences in auditory tasks (i.e., RT and ID) and in phonological processing measures, yet no differences were found for speech perception. In addition, performance on RT discrimination correlated with reading, but this relation was mediated by phonological processing and not by speech-in-noise. Finally, inspection of the individual scores revealed that the dyslexic readers showed an increased proportion of deviant subjects on the slow-dynamic auditory and phonological tasks, yet individual dyslexic readers did not display a consistent pattern of deficits across the processing skills. Although our results support phonological and slow-rate dynamic auditory deficits which relate to literacy, they suggest that at the individual level, problems in reading and writing cannot be explained by the cascading auditory theory. Instead, dyslexic adults seem to vary considerably in the extent to which each of the auditory and phonological factors is expressed and interacts with environmental and higher-order cognitive influences. PMID:25071512

  5. Discrimination of Male Voice Quality by 8 and 9 Week Old Infants.

    ERIC Educational Resources Information Center

    Culp, Rex E.; Gallas, Howard B.

    This paper reports a study which investigated 2-month-old infants' auditory discrimination of tone quality in the male voice, extending a previous study which found that voice quality changes (soft versus harsh) in a female voice were discriminable by infants at this age. Subjects were 20 infants, tested at 8 and 9 weeks of age. Each infant was…

  6. Timescale- and Sensory Modality-Dependency of the Central Tendency of Time Perception.

    PubMed

    Murai, Yuki; Yotsumoto, Yuko

    2016-01-01

    When individuals are asked to reproduce intervals of stimuli that are intermixedly presented at various times, longer intervals are often underestimated and shorter intervals overestimated. This phenomenon may be attributed to the central tendency of time perception, and suggests that our brain optimally encodes a stimulus interval based on current stimulus input and prior knowledge of the distribution of stimulus intervals. Two distinct systems are thought to be recruited in the perception of sub- and supra-second intervals. Sub-second timing is subject to local sensory processing, whereas supra-second timing depends on more centralized mechanisms. To clarify the factors that influence time perception, the present study investigated how both sensory modality and timescale affect the central tendency. In Experiment 1, participants were asked to reproduce sub- or supra-second intervals, defined by visual or auditory stimuli. In the sub-second range, the magnitude of the central tendency was significantly larger for visual intervals compared to auditory intervals, while visual and auditory intervals exhibited a correlated and comparable central tendency in the supra-second range. In Experiment 2, the ability to discriminate sub-second intervals in the reproduction task was controlled across modalities by using an interval discrimination task. Even when the ability to discriminate intervals was controlled, visual intervals exhibited a larger central tendency than auditory intervals in the sub-second range. In addition, the magnitude of the central tendency for visual and auditory sub-second intervals was significantly correlated. These results suggest that a common modality-independent mechanism is responsible for the supra-second central tendency, and that both the modality-dependent and modality-independent components of the timing system contribute to the central tendency in the sub-second range.

  7. Patterns of language and auditory dysfunction in 6-year-old children with epilepsy.

    PubMed

    Selassie, Gunilla Rejnö-Habte; Olsson, Ingrid; Jennische, Margareta

    2009-01-01

    In a previous study we reported difficulty with expressive language and visuoperceptual ability in preschool children with epilepsy and otherwise normal development. The present study analysed speech and language dysfunction for each individual in relation to epilepsy variables, ear preference, and intelligence in these children and described their auditory function. Twenty 6-year-old children with epilepsy (14 females, 6 males; mean age 6:5 y, range 6 y-6 y 11 mo) and 30 reference children without epilepsy (18 females, 12 males; mean age 6:5 y, range 6 y-6 y 11 mo) were assessed for language and auditory ability. Low scores for the children with epilepsy were analysed with respect to speech-language domains, type of epilepsy, site of epileptiform activity, intelligence, and language laterality. Auditory attention, perception, discrimination, and ear preference were measured with a dichotic listening test, and group comparisons were performed. Children with left-sided partial epilepsy had extensive language dysfunction. Most children with partial epilepsy had phonological dysfunction. Language dysfunction was also found in children with generalized and unclassified epilepsies. The children with epilepsy performed significantly worse than the reference children in auditory attention, perception of vowels and discrimination of consonants for the right ear and had more left ear advantage for vowels, indicating undeveloped language laterality.

  8. Auditory discrimination therapy (ADT) for tinnitus management: preliminary results.

    PubMed

    Herraiz, C; Diges, I; Cobo, P; Plaza, G; Aparicio, J M

    2006-12-01

    This clinical study demonstrated the efficacy of auditory discrimination therapy (ADT) in tinnitus management compared with a waiting-list group. In all, 43% of the ADT patients improved their tinnitus, and both its intensity and the associated handicap were significantly decreased (EBM rating: B-2). The objective was to describe the effect of sound discrimination training on tinnitus. ADT is a procedure designed to increase the cortical representation of trained frequencies (damaged cochlear areas with a secondary reduction of cortical stimulation) and to shrink the neighbouring over-represented regions (corresponding to the tinnitus pitch). This prospective descriptive study included 14 patients with high-frequency matched tinnitus. Tinnitus severity was measured with a visual analogue scale (VAS) and the Tinnitus Handicap Inventory (THI). Patients performed a 10-min auditory discrimination task twice a day for 1 month. Discontinuous 8 kHz pure tones were randomly mixed with 500 ms 'white noise' sounds delivered through an MP3 system. ADT group results were compared with a waiting-list group (n=21). In all, 43% of our patients had improvement in their tinnitus. A significant improvement in mean VAS (p=0.004) and THI scores (p=0.038) was achieved. Statistically significant differences between the ADT and waiting-list groups were found for patients' self-evaluations (p=0.043) and VAS scores (p=0.004). The reduction in THI did not reach significance (p=0.113).

  9. Discrimination of timbre in early auditory responses of the human brain.

    PubMed

    Seol, Jaeho; Oh, MiAe; Kim, June Sic; Jin, Seung-Hyun; Kim, Sun Il; Chung, Chun Kee

    2011-01-01

    The issue of how differences in timbre are represented in the neural response has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employed phasing and clipping of tones to produce auditory stimuli that differed along the multidimensional attribute of timbre. We investigated the auditory response and sensory gating using magnetoencephalography (MEG). Thirty-five healthy subjects without hearing deficits participated in the experiments. Pairs of tones, either identical or differing in timbre, were presented in a conditioning (S1)-testing (S2) paradigm with an interstimulus interval of 500 ms. The magnitudes of the auditory M50 and M100 responses varied with timbre in both hemispheres. This result suggests that timbre, at least as manipulated by phasing and clipping, is discriminated during early auditory processing. An effect of S1 on the response to the second stimulus in a pair appeared in the M100 of the left hemisphere, whereas only in the right hemisphere did both the M50 and M100 responses to S2 reflect whether the two stimuli in a pair were the same or different. Both M50 and M100 magnitudes differed with presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre.

  10. Auditory sensory memory and language abilities in former late talkers: a mismatch negativity study.

    PubMed

    Grossheinrich, Nicola; Kademann, Stefanie; Bruder, Jennifer; Bartling, Juergen; Von Suchodoletz, Waldemar

    2010-09-01

    The present study investigated whether (a) a reduced duration of auditory sensory memory is found in late talking children and (b) whether deficits of sensory memory are linked to persistent difficulties in language acquisition. Former late talkers and children without delayed language development were examined at the age of 4 years and 7 months using mismatch negativity (MMN) with interstimulus intervals (ISIs) of 500 ms and 2000 ms. Additionally, short-term memory, language skills, and nonverbal intelligence were assessed. MMN mean amplitude was reduced for the ISI of 2000 ms in former late talking children both with and without persistent language deficits. In summary, our findings suggest that late talkers are characterized by a reduced duration of auditory sensory memory. However, deficits in auditory sensory memory are not sufficient for persistent language difficulties and may be compensated for by some children.

  11. Promises of formal and informal musical activities in advancing neurocognitive development throughout childhood.

    PubMed

    Putkinen, Vesa; Tervaniemi, Mari; Saarikivi, Katri; Huotilainen, Minna

    2015-03-01

    Adult musicians show superior neural sound discrimination when compared to nonmusicians. However, it is unclear whether these group differences reflect the effects of experience or preexisting neural enhancement in individuals who seek out musical training. Tracking how brain function matures over time in musically trained and nontrained children can shed light on this issue. Here, we review our recent longitudinal event-related potential (ERP) studies that examine how formal musical training and less formal musical activities influence the maturation of brain responses related to sound discrimination and auditory attention. These studies found that musically trained school-aged children and preschool-aged children attending a musical playschool show more rapid maturation of neural sound discrimination than their control peers. Importantly, we found no evidence for pretraining group differences. In a related cross-sectional study, we found ERP and behavioral evidence for improved executive functions and control over auditory novelty processing in musically trained school-aged children and adolescents. Taken together, these studies provide evidence for the causal role of formal musical training and less formal musical activities in shaping the development of important neural auditory skills and suggest transfer effects with domain-general implications. © 2015 New York Academy of Sciences.

  12. Keeping Timbre in Mind: Working Memory for Complex Sounds that Can't Be Verbalized

    ERIC Educational Resources Information Center

    Golubock, Jason L.; Janata, Petr

    2013-01-01

    Properties of auditory working memory for sounds that lack strong semantic associations and are not readily verbalized or sung are poorly understood. We investigated auditory working memory capacity for lists containing 2-6 easily discriminable abstract sounds synthesized within a constrained timbral space, at delays of 1-6 s (Experiment 1), and…

  13. Encoding frequency contrast in primate auditory cortex

    PubMed Central

    Scott, Brian H.; Semple, Malcolm N.

    2014-01-01

    Changes in amplitude and frequency jointly determine much of the communicative significance of complex acoustic signals, including human speech. We have previously described responses of neurons in the core auditory cortex of awake rhesus macaques to sinusoidal amplitude modulation (SAM) signals. Here we report a complementary study of sinusoidal frequency modulation (SFM) in the same neurons. Responses to SFM were analogous to SAM responses in that changes in multiple parameters defining SFM stimuli (e.g., modulation frequency, modulation depth, carrier frequency) were robustly encoded in the temporal dynamics of the spike trains. For example, changes in the carrier frequency produced highly reproducible changes in shapes of the modulation period histogram, consistent with the notion that the instantaneous probability of discharge mirrors the moment-by-moment spectrum at low modulation rates. The upper limit for phase locking was similar across SAM and SFM within neurons, suggesting shared biophysical constraints on temporal processing. Using spike train classification methods, we found that neural thresholds for modulation depth discrimination are typically far lower than would be predicted from frequency tuning to static tones. This “dynamic hyperacuity” suggests a substantial central enhancement of the neural representation of frequency changes relative to the auditory periphery. Spike timing information was superior to average rate information when discriminating among SFM signals, and even when discriminating among static tones varying in frequency. This finding held even when differences in total spike count across stimuli were normalized, indicating both the primacy and generality of temporal response dynamics in cortical auditory processing. PMID:24598525
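
    For readers unfamiliar with the stimulus class, the sketch below generates a sinusoidal frequency-modulated (SFM) tone of the general kind described above; the carrier frequency, modulation rate, and depth are arbitrary illustrative values, not the parameters used in the study.

```python
import numpy as np

def sfm_tone(duration_s, sr, carrier_hz, mod_hz, depth_hz):
    """Sinusoidal frequency-modulated (SFM) tone.

    The instantaneous frequency swings +/- depth_hz around carrier_hz at a
    rate of mod_hz. All parameter values here are illustrative only.
    """
    t = np.arange(int(duration_s * sr)) / sr
    # Phase is the time integral of the instantaneous frequency:
    # f(t) = carrier_hz + depth_hz * cos(2*pi*mod_hz*t)
    phase = (2 * np.pi * carrier_hz * t
             + (depth_hz / mod_hz) * np.sin(2 * np.pi * mod_hz * t))
    return np.sin(phase)

# Example: 1 s SFM signal with a 4 kHz carrier, 8 Hz modulation, 200 Hz depth
y = sfm_tone(1.0, 44100, 4000.0, 8.0, 200.0)
```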

  14. A simulation framework for auditory discrimination experiments: Revealing the importance of across-frequency processing in speech perception.

    PubMed

    Schädler, Marc René; Warzybok, Anna; Ewert, Stephan D; Kollmeier, Birger

    2016-05-01

    A framework for simulating auditory discrimination experiments, based on an approach from Schädler, Warzybok, Hochmuth, and Kollmeier [(2015). Int. J. Audiol. 54, 100-107], which was originally designed to predict speech recognition thresholds, is extended to also predict psychoacoustic thresholds. The proposed framework is used to assess the suitability of different auditory-inspired feature sets for a range of auditory discrimination experiments that included psychoacoustic as well as speech recognition experiments in noise. The considered experiments were 2 kHz tone-in-broadband-noise simultaneous masking depending on the tone length, spectral masking with simultaneously presented tone signals and narrow-band noise maskers, and German Matrix sentence test reception threshold in stationary and modulated noise. The employed feature sets included spectro-temporal Gabor filter bank features, Mel-frequency cepstral coefficients, logarithmically scaled Mel-spectrograms, and the internal representation of the Perception Model from Dau, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102(5), 2892-2905]. The proposed framework was successfully employed to simulate all experiments with a common parameter set and obtain objective thresholds with fewer assumptions than traditional modeling approaches. Depending on the feature set, the simulated reference-free thresholds were found to agree with, and hence to predict, empirical data from the literature. Across-frequency processing was found to be crucial for accurately modeling the lower speech reception thresholds observed in modulated noise compared with stationary noise.
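
    As a rough illustration of two of the feature sets named above (logarithmically scaled Mel-spectrograms and Mel-frequency cepstral coefficients), the following sketch computes both from a tone-in-noise excerpt using the librosa library; the frame sizes, filter counts, and stimulus are placeholders, not the published framework's settings.

```python
import numpy as np
import librosa  # assumed available; not part of the original study's toolchain

def log_mel_features(y, sr, n_mels=40, n_mfcc=13):
    """Illustrative auditory front end: log Mel-spectrogram and MFCC features.

    Only sketches the kind of feature sets named in the abstract; parameter
    values are placeholders rather than those of the published framework.
    """
    # Power Mel-spectrogram, then compression to a log (dB) scale
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                         hop_length=256, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)

    # Mel-frequency cepstral coefficients derived from the same signal
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return log_mel, mfcc

# Example: 500 ms of a 2 kHz tone in broadband noise, loosely echoing the
# tone-in-noise masking condition described above
sr = 16000
t = np.arange(int(0.5 * sr)) / sr
signal = np.sin(2 * np.pi * 2000 * t) + 0.5 * np.random.randn(t.size)
log_mel, mfcc = log_mel_features(signal, sr)
```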

  15. Cell-assembly coding in several memory processes.

    PubMed

    Sakurai, Y

    1998-01-01

    The present paper discusses why the cell assembly, i.e., an ensemble population of neurons with flexible functional connections, is a tenable view of the basic code for information processes in the brain. The main properties indicating the reality of cell-assembly coding are the overlap of neurons among different assemblies and the dynamics of connections within and among the assemblies. The former can be detected as multiple functions of individual neurons in processing different kinds of information. Individual neurons appear to be involved in multiple information processes. The latter can be detected as changes of functional synaptic connections in processing different kinds of information. Correlations of activity among some of the recorded neurons appear to change in multiple information processes. Recent experiments have compared several different memory processes (tasks) and detected these two main properties, indicating cell-assembly coding of memory in the working brain. The first experiment compared different types of processing of identical stimuli, i.e., working memory and reference memory of auditory stimuli. The second experiment compared identical processes of different types of stimuli, i.e., discriminations of simple auditory, simple visual, and configural auditory-visual stimuli. The third experiment compared identical processes of different types of stimuli with or without temporal processing of stimuli, i.e., discriminations of elemental auditory, configural auditory-visual, and sequential auditory-visual stimuli. Some possible features of the cell-assembly coding, especially "dual coding" by individual neurons and cell assemblies, are discussed for future experimental approaches. Copyright 1998 Academic Press.

  16. A corollary discharge maintains auditory sensitivity during sound production

    NASA Astrophysics Data System (ADS)

    Poulet, James F. A.; Hedwig, Berthold

    2002-08-01

    Speaking and singing present the auditory system of the caller with two fundamental problems: discriminating between self-generated and external auditory signals and preventing desensitization. In humans and many other vertebrates, auditory neurons in the brain are inhibited during vocalization but little is known about the nature of the inhibition. Here we show, using intracellular recordings of auditory neurons in the singing cricket, that presynaptic inhibition of auditory afferents and postsynaptic inhibition of an identified auditory interneuron occur in phase with the song pattern. Presynaptic and postsynaptic inhibition persist in a fictively singing, isolated cricket central nervous system and are therefore the result of a corollary discharge from the singing motor network. Mimicking inhibition in the interneuron by injecting hyperpolarizing current suppresses its spiking response to a 100-dB sound pressure level (SPL) acoustic stimulus and maintains its response to subsequent, quieter stimuli. Inhibition by the corollary discharge reduces the neural response to self-generated sound and protects the cricket's auditory pathway from self-induced desensitization.

  17. Adult-like processing of time-compressed speech by newborns: A NIRS study.

    PubMed

    Issard, Cécile; Gervain, Judit

    2017-06-01

    Humans can adapt to a wide range of variations in the speech signal, maintaining an invariant representation of the linguistic information it contains. Among them, adaptation to rapid or time-compressed speech has been well studied in adults, but the developmental origin of this capacity remains unknown. Does this ability depend on experience with speech (if yes, as heard in utero or as heard postnatally), with sounds in general or is it experience-independent? Using near-infrared spectroscopy, we show that the newborn brain can discriminate between three different compression rates: normal, i.e. 100% of the original duration, moderately compressed, i.e. 60% of original duration and highly compressed, i.e. 30% of original duration. Even more interestingly, responses to normal and moderately compressed speech are similar, showing a canonical hemodynamic response in the left temporoparietal, right frontal and right temporal cortex, while responses to highly compressed speech are inverted, showing a decrease in oxyhemoglobin concentration. These results mirror those found in adults, who readily adapt to moderately compressed, but not to highly compressed speech, showing that adaptation to time-compressed speech requires little or no experience with speech, and happens at an auditory, and not at a more abstract linguistic level. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  18. Performance variability on perceptual discrimination tasks in profoundly deaf adults with cochlear implants.

    PubMed

    Hay-McCutcheon, Marcia J; Peterson, Nathaniel R; Pisoni, David B; Kirk, Karen Iler; Yang, Xin; Parton, Jason

    The purpose of this study was to evaluate performance on two challenging listening tasks, talker and regional accent discrimination, and to assess variables that could have affected the outcomes. A prospective study using 35 adults with one cochlear implant (CI) or a CI and a contralateral hearing aid (bimodal hearing) was conducted. Adults completed talker and regional accent discrimination tasks. Two-alternative forced-choice tasks were used to assess talker and accent discrimination in a group of adults who ranged in age from 30 years old to 81 years old. A large amount of performance variability was observed across listeners for both discrimination tasks. Three listeners successfully discriminated between talkers for both listening tasks, 14 participants successfully completed one discrimination task and 18 participants were not able to discriminate between talkers for either listening task. Some adults who used bimodal hearing benefitted from the addition of acoustic cues provided through a HA but for others the HA did not help with discrimination abilities. Acoustic speech feature analysis of the test signals indicated that both the talker speaking rate and the fundamental frequency (F0) helped with talker discrimination. For accent discrimination, findings suggested that access to more salient spectral cues was important for better discrimination performance. The ability to perform challenging discrimination tasks successfully likely involves a number of complex interactions between auditory and non-auditory pre- and post-implant factors. To understand why some adults with CIs perform similarly to adults with normal hearing and others experience difficulty discriminating between talkers, further research will be required with larger populations of adults who use unilateral CIs, bilateral CIs and bimodal hearing. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. Reboxetine Improves Auditory Attention and Increases Norepinephrine Levels in the Auditory Cortex of Chronically Stressed Rats

    PubMed Central

    Pérez-Valenzuela, Catherine; Gárate-Pérez, Macarena F.; Sotomayor-Zárate, Ramón; Delano, Paul H.; Dagnino-Subiabre, Alexies

    2016-01-01

    Chronic stress impairs auditory attention in rats and monoamines regulate neurotransmission in the primary auditory cortex (A1), a brain area that modulates auditory attention. In this context, we hypothesized that norepinephrine (NE) levels in A1 correlate with the auditory attention performance of chronically stressed rats. The first objective of this research was to evaluate whether chronic stress affects monoamine levels in A1. Male Sprague–Dawley rats were subjected to chronic stress (restraint stress) and monoamine levels were measured by high-performance liquid chromatography (HPLC) with electrochemical detection. Chronically stressed rats had lower levels of NE in A1 than did controls, while chronic stress did not affect serotonin (5-HT) and dopamine (DA) levels. The second aim was to determine the effects of reboxetine (a selective inhibitor of NE reuptake) on auditory attention and NE levels in A1. Rats were trained to discriminate between two tones of different frequencies in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance of ≥80% correct trials in the 2-ACT were randomly assigned to control and stress experimental groups. To analyze the effects of chronic stress on the auditory task, trained rats of both groups were subjected to 50 2-ACT trials 1 day before and 1 day after the chronic stress period. A difference score (DS) was determined by subtracting the number of correct trials after the chronic stress protocol from those before. An unexpected result was that vehicle-treated control rats and vehicle-treated chronically stressed rats had similar performances in the attentional task, suggesting that repeated injections with vehicle were stressful for control animals and deteriorated their auditory attention. In this regard, both auditory attention and NE levels in A1 were higher in chronically stressed rats treated with reboxetine than in vehicle-treated animals. These results indicate that NE plays a key role in A1 and in the auditory attention of stressed rats during tone discrimination. PMID:28082872

  20. Speech prosody perception in cochlear implant users with and without residual hearing.

    PubMed

    Marx, Mathieu; James, Christopher; Foxton, Jessica; Capber, Amandine; Fraysse, Bernard; Barone, Pascal; Deguine, Olivier

    2015-01-01

    The detection of fundamental frequency (F0) variations plays a prominent role in the perception of intonation. Cochlear implant (CI) users with residual hearing might have access to these F0 cues. The objective was to study if and how residual hearing facilitates speech prosody perception in CI users. The authors compared F0 difference limen (F0DL) and question/statement discrimination performance for 15 normal-hearing subjects (NHS) and two distinct groups of CI subjects, according to the presence or absence of acoustic residual hearing: one "combined group" (n = 11) with residual hearing and one CI-only group (n = 10) without any residual hearing. To assess the relative contribution of the different acoustic cues for intonation perception, the sensitivity index d' was calculated for three distinct auditory conditions: one condition with original recordings, one condition with a constant F0, and one with equalized duration and amplitude. In the original condition, combined subjects showed better question/statement discrimination than CI-only subjects, d' 2.44 (SE 0.3) and 0.91 (SE 0.25), respectively. Mean d' score of NHS was 3.3 (SE 0.06). When F0 variations were removed, the scores decreased significantly for combined subjects (d' = 0.66, SE 0.51) and NHS (d' = 0.4, SE 0.09). Duration and amplitude equalization affected the scores of CI-only subjects (mean d' = 0.34, SE 0.28) but did not influence the scores of combined subjects (d' = 2.7, SE 0.02) or NHS (d' = 3.3, SE 0.33). Mean F0DL was poorer in CI-only subjects (34%, SE 15) compared with combined subjects (8.8%, SE 1.4) and NHS (2.4%, SE 0.05). In CI subjects with residual hearing, intonation d' score was correlated with mean residual hearing level (r = -0.86, n = 11, p < 0.001) and mean F0DL (r = 0.84, n = 11, p < 0.001). Where CI subjects with residual hearing had thresholds better than 60 dB HL in the low frequencies, they displayed near-normal question/statement discrimination abilities. Normal listeners mainly relied on F0 variations which were the most effective prosodic cue. In comparison, CI subjects without any residual hearing had poorer F0 discrimination and showed a strong deficit in speech prosody perception. However, this CI-only group appeared to be able to make some use of amplitude and duration cues for statement/question discrimination.

  1. Teaching in an Open Classroom: Informal Checks, Diagnoses, and Learning Strategies for Beginning Reading and Math.

    ERIC Educational Resources Information Center

    Langstaff, Nancy

    This book, intended for use by inservice teachers, preservice teachers, and parents interested in open classrooms, contains three chapters. "Beginning Reading in an Open Classroom" discusses language development, sight vocabulary, visual discrimination, auditory discrimination, directional concepts, small muscle control, and measurement of…

  2. IEEE Conference on Neural Information Processing Systems - Natural and Synthetic Held in Denver, Colorado on 28 November-1 December 1988

    DTIC Science & Technology

    1989-08-14

    NETWORKS THAT LEARN TO DISCRIMINATE SIMILAR KANJI CHARACTERS. Yoshihiro Mori, Kazuhiko Yokosawa, ATR Auditory and Visual Perception Research Laboratories. FURTHER EXPLORATIONS IN THE LEARNING OF VISUALLY-GUIDED REACHING: MAKING MURPHY...

  3. Auditory Discrimination of Frequency Ratios: The Octave Singularity

    ERIC Educational Resources Information Center

    Bonnard, Damien; Micheyl, Christophe; Semal, Catherine; Dauman, Rene; Demany, Laurent

    2013-01-01

    Sensitivity to frequency ratios is essential for the perceptual processing of complex sounds and the appreciation of music. This study assessed the effect of ratio simplicity on ratio discrimination for pure tones presented either simultaneously or sequentially. Each stimulus consisted of four 100-ms pure tones, equally spaced in terms of…

  4. The role of auditory cortex in retention of rhythmic patterns as studied in patients with temporal lobe removals including Heschl's gyrus.

    PubMed

    Penhune, V B; Zatorre, R J; Feindel, W H

    1999-03-01

    This experiment examined the participation of the auditory cortex of the temporal lobe in the perception and retention of rhythmic patterns. Four patient groups were tested on a paradigm contrasting reproduction of auditory and visual rhythms: those with right or left anterior temporal lobe removals which included Heschl's gyrus (HG), the region of primary auditory cortex (RT-A and LT-A); and patients with right or left anterior temporal lobe removals which did not include HG (RT-a and LT-a). Estimation of lesion extent in HG using an MRI-based probabilistic map indicated that, in the majority of subjects, the lesion was confined to the anterior secondary auditory cortex located on the anterior-lateral extent of HG. On the rhythm reproduction task, RT-A patients were impaired in retention of auditory but not visual rhythms, particularly when accurate reproduction of stimulus durations was required. In contrast, LT-A patients as well as both RT-a and LT-a patients were relatively unimpaired on this task. None of the patient groups was impaired in the ability to make an adequate motor response. Further, they were unimpaired when using a dichotomous response mode, indicating that they were able to adequately differentiate the stimulus durations and, when given an alternative method of encoding, to retain them. Taken together, these results point to a specific role for the right anterior secondary auditory cortex in the retention of a precise analogue representation of auditory tonal patterns.

  5. Poor Auditory Task Scores in Children with Specific Reading and Language Difficulties: Some Poor Scores Are More Equal than Others

    ERIC Educational Resources Information Center

    McArthur, Genevieve M.; Hogben, John H.

    2012-01-01

    Children with specific reading disability (SRD) or specific language impairment (SLI), who scored poorly on an auditory discrimination task, did up to 140 runs on the failed task. Forty-one percent of the children produced widely fluctuating scores that did not improve across runs (untrainable errant performance), 23% produced widely fluctuating…

  6. Auditory Stream Segregation Improves Infants' Selective Attention to Target Tones Amid Distracters

    ERIC Educational Resources Information Center

    Smith, Nicholas A.; Trainor, Laurel J.

    2011-01-01

    This study examined the role of auditory stream segregation in the selective attention to target tones in infancy. Using a task adapted from Bregman and Rudnicky's 1975 study and implemented in a conditioned head-turn procedure, infant and adult listeners had to discriminate the temporal order of 2,200 and 2,400 Hz target tones presented alone,…

  7. A Latent Consolidation Phase in Auditory Identification Learning: Time in the Awake State Is Sufficient

    ERIC Educational Resources Information Center

    Roth, Daphne Ari-Even; Kishon-Rabin, Liat; Hildesheimer, Minka; Karni, Avi

    2005-01-01

    Large gains in performance, evolving hours after practice has terminated, were reported in a number of visual and some motor learning tasks, as well as recently in an auditory nonverbal discrimination task. It was proposed that these gains reflect a latent phase of experience-triggered memory consolidation in human skill learning. It is not clear,…

  8. [Auditory and corporal laterality, logoaudiometry, and monaural hearing aid gain].

    PubMed

    Benavides, Mariela; Peñaloza-López, Yolanda R; de la Sancha-Jiménez, Sabino; García Pedroza, Felipe; Gudiño, Paula K

    2007-12-01

    To identify the auditory or clinical test that has the best correlation with the ear in which we apply the monaural hearing aid in symmetric bilateral hearing loss. A total of 37 adult patients with symmetric bilateral hearing loss were examined regarding the correlation between the best score in speech discrimination test, corporal laterality, auditory laterality with dichotic digits in Spanish and score for filtered words with monaural hearing aid. The best correlation was obtained between auditory laterality and gain with hearing aid (0.940). The dichotic test for auditory laterality is a good tool for identifying the best ear in which to apply a monaural hearing aid. The results of this paper suggest the necessity to apply this test in patients before a hearing aid is indicated.

  9. Temporal integration: intentional sound discrimination does not modulate stimulus-driven processes in auditory event synthesis.

    PubMed

    Sussman, Elyse; Winkler, István; Kreuzer, Judith; Saher, Marieke; Näätänen, Risto; Ritter, Walter

    2002-12-01

    Our previous study showed that the auditory context could influence whether two successive acoustic changes occurring within the temporal integration window (approximately 200 ms) were pre-attentively encoded as a single auditory event or as two discrete events (Cogn Brain Res 12 (2001) 431). The aim of the current study was to assess whether top-down processes could influence the stimulus-driven processes in determining what constitutes an auditory event. The electroencephalogram (EEG) was recorded from 11 scalp electrodes to frequently occurring standard and infrequently occurring deviant sounds. Within the stimulus blocks, deviants either occurred only in pairs (successive feature changes) or both singly and in pairs. Event-related potential indices of change and target detection, the mismatch negativity (MMN) and the N2b component, respectively, were compared with the simultaneously measured performance in discriminating the deviants. Even though subjects could voluntarily distinguish the two successive auditory feature changes from each other, which was also indicated by the elicitation of the N2b target-detection response, top-down processes did not modify the event organization reflected by the MMN response. Top-down processes can extract elemental auditory information from a single integrated acoustic event, but the extraction occurs at a later processing stage than the one whose outcome is indexed by MMN. Initial processes of auditory event-formation are fully governed by the context within which the sounds occur. Perception of the deviants as two separate sound events (the top-down effect) did not change the initial neural representation of the same deviants as one event (indexed by the MMN); there was no corresponding change in the stimulus-driven sound organization.

  10. Human engineer's guide to auditory displays. Volume 2: Elements of signal reception and resolution affecting auditory displays

    NASA Astrophysics Data System (ADS)

    Mulligan, B. E.; Goodman, L. S.; McBride, D. K.; Mitchell, T. M.; Crosby, T. N.

    1984-08-01

    This work reviews the areas of monaural and binaural signal detection, auditory discrimination and localization, and reaction times to acoustic signals. The review was written from the perspective of human engineering and focuses primarily on auditory processing of information contained in acoustic signals. The impetus for this effort was to establish a database to be used in the design and evaluation of acoustic displays. Appendix 1 also contains citations of the scientific literature on which the answer to each question was based. There are nineteen questions and answers, and more than two hundred citations contained in the list of references given in Appendix 2. This is one of two related works, the other of which reviewed the literature in the areas of auditory attention, recognition memory, and auditory perception of patterns, pitch, and loudness.

  11. Double dissociation of 'what' and 'where' processing in auditory cortex.

    PubMed

    Lomber, Stephen G; Malhotra, Shveta

    2008-05-01

    Studies of cortical connections or neuronal function in different cerebral areas support the hypothesis that parallel cortical processing streams, similar to those identified in visual cortex, may exist in the auditory system. However, this model has not yet been behaviorally tested. We used reversible cooling deactivation to investigate whether the individual regions in cat nonprimary auditory cortex that are responsible for processing the pattern of an acoustic stimulus or localizing a sound in space could be doubly dissociated in the same animal. We found that bilateral deactivation of the posterior auditory field resulted in deficits in a sound-localization task, whereas bilateral deactivation of the anterior auditory field resulted in deficits in a pattern-discrimination task, but not vice versa. These findings support a model of cortical organization that proposes that identifying an acoustic stimulus ('what') and its spatial location ('where') are processed in separate streams in auditory cortex.

  12. Performance and strategy comparisons of human listeners and logistic regression in discriminating underwater targets.

    PubMed

    Yang, Lixue; Chen, Kean

    2015-11-01

    To improve the design of underwater target recognition systems based on auditory perception, this study compared human listeners with automatic classifiers. Performance measures and strategies were compared in three discrimination experiments: between man-made and natural targets, between ships and submarines, and among three types of ships. In the experiments, the subjects were asked to assign a score to each sound based on how confident they were about the category to which it belonged, and logistic regression, which represents linear discriminative models, also completed three similar tasks by utilizing many auditory features. The results indicated that the performance of logistic regression improved as the ratio between inter- and intra-class differences became larger, whereas the performance of the human subjects was limited by their unfamiliarity with the targets. Logistic regression performed better than the human subjects in all tasks but the discrimination between man-made and natural targets, and the strategies employed by the best-performing human subjects were similar to those of logistic regression. Logistic regression and several human subjects demonstrated similar performances when discriminating man-made and natural targets, but in this case, their strategies were not similar. An appropriate fusion of their strategies led to further improvement in recognition accuracy.
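
    To make the classifier side of the comparison concrete, here is a minimal sketch of a linear discriminative model of the kind described (logistic regression over auditory features), assuming scikit-learn is available; the feature matrix and labels are random placeholders, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix: one row per recording, columns are auditory
# features (e.g., loudness, sharpness, modulation depth); labels distinguish
# man-made (1) from natural (0) targets. Values here are random placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))
y = rng.integers(0, 2, size=200)

# Linear discriminative model, standardized features first
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Mean cross-validated AUC: {scores.mean():.2f}")
```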

  13. Pitch expertise is not created equal: Cross-domain effects of musicianship and tone language experience on neural and behavioural discrimination of speech and music.

    PubMed

    Hutka, Stefanie; Bidelman, Gavin M; Moreno, Sylvain

    2015-05-01

    Psychophysiological evidence supports a music-language association, such that experience in one domain can impact processing required in the other domain. We investigated the bidirectionality of this association by measuring event-related potentials (ERPs) in native English-speaking musicians, native tone language (Cantonese) nonmusicians, and native English-speaking nonmusician controls. We tested the degree to which pitch expertise stemming from musicianship or tone language experience similarly enhances the neural encoding of auditory information necessary for speech and music processing. Early cortical discriminatory processing for music and speech sounds was characterized using the mismatch negativity (MMN). Stimuli included 'large deviant' and 'small deviant' pairs of sounds that differed minimally in pitch (fundamental frequency, F0; contrastive musical tones) or timbre (first formant, F1; contrastive speech vowels). Behavioural F0 and F1 difference limen tasks probed listeners' perceptual acuity for these same acoustic features. Musicians and Cantonese speakers performed comparably in pitch discrimination; only musicians showed an additional advantage on timbre discrimination performance and enhanced MMN responses to both music and speech. Cantonese language experience was not associated with enhancements in neural measures, despite enhanced behavioural pitch acuity. These data suggest that while both musicianship and tone language experience enhance some aspects of auditory acuity (behavioural pitch discrimination), musicianship confers further-reaching enhancements to auditory function, tuning both pitch- and timbre-related brain processes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. A biologically plausible computational model for auditory object recognition.

    PubMed

    Larson, Eric; Billimoria, Cyrus P; Sen, Kamal

    2009-01-01

    Object recognition is a task of fundamental importance for sensory systems. Although this problem has been intensively investigated in the visual system, relatively little is known about the recognition of complex auditory objects. Recent work has shown that spike trains from individual sensory neurons can be used to discriminate between and recognize stimuli. Multiple groups have developed spike similarity or dissimilarity metrics to quantify the differences between spike trains. Using a nearest-neighbor approach the spike similarity metrics can be used to classify the stimuli into groups used to evoke the spike trains. The nearest prototype spike train to the tested spike train can then be used to identify the stimulus. However, how biological circuits might perform such computations remains unclear. Elucidating this question would facilitate the experimental search for such circuits in biological systems, as well as the design of artificial circuits that can perform such computations. Here we present a biologically plausible model for discrimination inspired by a spike distance metric using a network of integrate-and-fire model neurons coupled to a decision network. We then apply this model to the birdsong system in the context of song discrimination and recognition. We show that the model circuit is effective at recognizing individual songs, based on experimental input data from field L, the avian primary auditory cortex analog. We also compare the performance and robustness of this model to two alternative models of song discrimination: a model based on coincidence detection and a model based on firing rate.
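
    The following toy sketch illustrates the nearest-prototype idea described above: a simple spike-train distance (here, a binned spike-count distance, standing in for the similarity metrics cited) is used to assign a test response to the closest labeled prototype. The spike times, labels, and bin width are invented for illustration.

```python
import numpy as np

def binned_distance(spikes_a, spikes_b, t_max, bin_ms=10.0):
    """Euclidean distance between binned spike-count histograms.

    A stand-in for the spike similarity metrics mentioned in the abstract;
    the exact metric used by the authors is not reproduced here.
    """
    edges = np.arange(0.0, t_max + bin_ms, bin_ms)
    a, _ = np.histogram(spikes_a, bins=edges)
    b, _ = np.histogram(spikes_b, bins=edges)
    return np.linalg.norm(a - b)

def nearest_prototype(test_train, prototypes, t_max):
    """Classify a spike train by its nearest labeled prototype."""
    dists = {label: binned_distance(test_train, proto, t_max)
             for label, proto in prototypes.items()}
    return min(dists, key=dists.get)

# Toy example: two 'song' prototypes and one test response (spike times in ms)
prototypes = {"song_A": np.array([12.0, 55.0, 130.0, 310.0]),
              "song_B": np.array([40.0, 90.0, 250.0, 400.0])}
test = np.array([10.0, 60.0, 128.0, 305.0])
print(nearest_prototype(test, prototypes, t_max=500.0))  # -> "song_A"
```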

  15. Auditory and Visual Differences in Time Perception? An Investigation from a Developmental Perspective with Neuropsychological Tests

    ERIC Educational Resources Information Center

    Zelanti, Pierre S.; Droit-Volet, Sylvie

    2012-01-01

    Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results…

  16. Basic auditory processing and sensitivity to prosodic structure in children with specific language impairments: a new look at a perceptual hypothesis

    PubMed Central

    Cumming, Ruth; Wilson, Angela; Goswami, Usha

    2015-01-01

    Children with specific language impairments (SLIs) show impaired perception and production of spoken language, and can also present with motor, auditory, and phonological difficulties. Recent auditory studies have shown impaired sensitivity to amplitude rise time (ART) in children with SLIs, along with non-speech rhythmic timing difficulties. Linguistically, these perceptual impairments should affect sensitivity to speech prosody and syllable stress. Here we used two tasks requiring sensitivity to prosodic structure, the DeeDee task and a stress misperception task, to investigate this hypothesis. We also measured auditory processing of ART, rising pitch and sound duration, in both speech (“ba”) and non-speech (tone) stimuli. Participants were 45 children with SLI aged on average 9 years and 50 age-matched controls. We report data for all the SLI children (N = 45, IQ varying), as well as for two independent SLI subgroupings with intact IQ. One subgroup, “Pure SLI,” had intact phonology and reading (N = 16), the other, “SLI PPR” (N = 15), had impaired phonology and reading. Problems with syllable stress and prosodic structure were found for all the group comparisons. Both sub-groups with intact IQ showed reduced sensitivity to ART in speech stimuli, but the PPR subgroup also showed reduced sensitivity to sound duration in speech stimuli. Individual differences in processing syllable stress were associated with auditory processing. These data support a new hypothesis, the “prosodic phrasing” hypothesis, which proposes that grammatical difficulties in SLI may reflect perceptual difficulties with global prosodic structure related to auditory impairments in processing amplitude rise time and duration. PMID:26217286
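
    As a concrete example of an amplitude rise time (ART) manipulation of the kind used in such auditory tasks, the sketch below generates pure tones whose onsets ramp over different durations; all stimulus parameters are illustrative assumptions, not those of the study.

```python
import numpy as np

def tone_with_rise_time(duration_s, rise_ms, freq_hz, sr=44100):
    """Pure tone whose onset amplitude ramps linearly over rise_ms.

    Sketches the general ART stimulus manipulation described in the abstract;
    the frequencies, durations, and rise times here are arbitrary examples.
    """
    n = int(duration_s * sr)
    t = np.arange(n) / sr
    tone = np.sin(2 * np.pi * freq_hz * t)
    n_rise = int(rise_ms / 1000.0 * sr)
    envelope = np.ones(n)
    envelope[:n_rise] = np.linspace(0.0, 1.0, n_rise)  # sharp vs. slow onset
    return tone * envelope

# Two 300 ms, 500 Hz tones differing only in onset rise time (15 ms vs. 90 ms)
sharp_onset = tone_with_rise_time(0.3, 15, 500.0)
slow_onset = tone_with_rise_time(0.3, 90, 500.0)
```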

  17. Olfactory discrimination varies in mice with different levels of α7-nicotinic acetylcholine receptor expression.

    PubMed

    Hellier, Jennifer L; Arevalo, Nicole L; Blatner, Megan J; Dang, An K; Clevenger, Amy C; Adams, Catherine E; Restrepo, Diego

    2010-10-28

    Previous studies have shown that schizophrenics have decreased expression of α7-nicotinic acetylcholine (α7) receptors in the hippocampus and other brain regions, paranoid delusions, disorganized speech, deficits in auditory gating (i.e., inability to inhibit neuronal responses to repetitive auditory stimuli), and difficulties in odor discrimination and detection. Here we use mice with decreased α7 expression that also show a deficit in auditory gating to determine if these mice have similar deficits in olfaction. In the adult mouse olfactory bulb (OB), α7 expression localizes in the glomerular layer; however, the functional role of α7 is unknown. We show that inbred mouse strains (i.e., C3H and C57) with varying α7 expressions (e.g., α7 wild-type [α7+/+], α7 heterozygous knock-out [α7+/-] and α7 homozygous knock-out mice [α7-/-]) significantly differ in odor discrimination and detection of chemically-related odorant pairs. Using [(125)I] α-bungarotoxin (α-BGT) autoradiography, α7 expression was measured in the OB. As previously demonstrated, α-BGT binding was localized to the glomerular layer. Significantly more expression of α7 was observed in C57 α7+/+ mice compared to C3H α7+/+ mice. Furthermore, C57 α7+/+ mice were able to detect a significantly lower concentration of an odor in a mixture compared to C3H α7+/+ mice. Both C57 and C3H α7+/+ mice discriminated between chemically-related odorants sooner than α7+/- or α7-/- mice. These data suggest that α7-nicotinic-receptors contribute strongly to olfactory discrimination and detection in mice and may be one of the mechanisms producing olfactory dysfunction in schizophrenics. Copyright © 2010 Elsevier B.V. All rights reserved.

  18. From bird to sparrow: Learning-induced modulations in fine-grained semantic discrimination.

    PubMed

    De Meo, Rosanna; Bourquin, Nathalie M-P; Knebel, Jean-François; Murray, Micah M; Clarke, Stephanie

    2015-09-01

    Recognition of environmental sounds is believed to proceed through discrimination steps from broad to narrower categories. Very little is known about the neural processes that underlie fine-grained discrimination within narrow categories or about their plasticity in relation to newly acquired expertise. We investigated how the cortical representation of birdsongs is modulated by brief training to recognize individual species. During a 60-minute session, participants learned to recognize a set of birdsongs; they significantly improved their performance for trained (T) but not control species (C), which were counterbalanced across participants. Auditory evoked potentials (AEPs) were recorded during pre- and post-training sessions. Pre vs. post changes in AEPs were significantly different between T and C species: i) at 206-232 ms post stimulus onset within a cluster on the anterior part of the left superior temporal gyrus; ii) at 246-291 ms in the left middle frontal gyrus; and iii) at 512-545 ms in the left middle temporal gyrus as well as bilaterally in the cingulate cortex. All effects were driven by weaker activity for T than C species. Thus, expertise in discriminating T species modulated early stages of semantic processing, during and immediately after the time window that sustains the discrimination between human vs. animal vocalizations. Moreover, the training-induced plasticity is reflected by the sharpening of a left-lateralized semantic network, including the anterior part of the temporal convexity and the frontal cortex. Training to identify birdsongs nevertheless also influenced the processing of C species, but at a much later stage. Correct discrimination of untrained sounds seems to require an additional step which results from lower-level feature analysis such as apperception. We therefore suggest that the access to objects within an auditory semantic category differs according to the subject's level of expertise. More specifically, correct intra-categorical auditory discrimination for untrained items follows the temporal hierarchy and transpires in a late stage of semantic processing. On the other hand, correct categorization of individually trained stimuli occurs earlier, during a period contemporaneous with human vs. animal vocalization discrimination, and involves a parallel semantic pathway requiring expertise. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Neural and receptor cochlear potentials obtained by transtympanic electrocochleography in auditory neuropathy.

    PubMed

    Santarelli, Rosamaria; Starr, Arnold; Michalewski, Henry J; Arslan, Edoardo

    2008-05-01

    Transtympanic electrocochleography (ECochG) was recorded bilaterally in children and adults with auditory neuropathy (AN) to evaluate receptor and neural generators. Test stimuli were clicks from 60 to 120dB p.e. SPL. Measures obtained from eight AN subjects were compared to 16 normally hearing children. Receptor cochlear microphonics (CMs) in AN were of normal or enhanced amplitude. Neural compound action potentials (CAPs) and receptor summating potentials (SPs) were identified in five AN ears. ECochG potentials in those ears without CAPs were of negative polarity and of normal or prolonged duration. We used adaptation to rapid stimulus rates to distinguish whether the generators of the negative potentials were of neural or receptor origin. Adaptation in controls resulted in amplitude reduction of CAP twice that of SP without affecting the duration of ECochG potentials. In seven AN ears without CAP and with prolonged negative potential, adaptation was accompanied by reduction of both amplitude and duration of the negative potential to control values consistent with neural generation. In four ears without CAP and with normal duration potentials, adaptation was without effect consistent with receptor generation. In five AN ears with CAP, there was reduction in amplitude of CAP and SP as controls but with a significant decrease in response duration. Three patterns of cochlear potentials were identified in AN: (1) presence of receptor SP without CAP consistent with pre-synaptic disorder of inner hair cells; (2) presence of both SP and CAP consistent with post-synaptic disorder of proximal auditory nerve; (3) presence of prolonged neural potentials without a CAP consistent with post-synaptic disorder of nerve terminals. Cochlear potential measures may identify pre- and post-synaptic disorders of inner hair cells and auditory nerves in AN.

  20. Shared neural substrates for song discrimination in parental and parasitic songbirds.

    PubMed

    Louder, Matthew I M; Voss, Henning U; Manna, Thomas J; Carryl, Sophia S; London, Sarah E; Balakrishnan, Christopher N; Hauber, Mark E

    2016-05-27

    In many social animals, early exposure to conspecific stimuli is critical for the development of accurate species recognition. Obligate brood parasitic songbirds, however, forego parental care and young are raised by heterospecific hosts in the absence of conspecific stimuli. Having evolved from non-parasitic, parental ancestors, how brood parasites recognize their own species remains unclear. In parental songbirds (e.g. zebra finch Taeniopygia guttata), the primary and secondary auditory forebrain areas are known to be critical in the differential processing of conspecific vs. heterospecific songs. Here we demonstrate that the same auditory brain regions underlie song discrimination in adult brood parasitic pin-tailed whydahs (Vidua macroura), a close relative of the zebra finch lineage. Similar to zebra finches, whydahs showed stronger behavioral responses during conspecific vs. heterospecific song and tone pips as well as increased neural responses within the auditory forebrain, as measured by both functional magnetic resonance imaging (fMRI) and immediate early gene (IEG) expression. Given parallel behavioral and neuroanatomical patterns of song discrimination, our results suggest that the evolutionary transition to brood parasitism from parental songbirds likely involved an "evolutionary tinkering" of existing proximate mechanisms, rather than the wholesale reworking of the neural substrates of species recognition. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. The dispersion-focalization theory of sound systems

    NASA Astrophysics Data System (ADS)

    Schwartz, Jean-Luc; Abry, Christian; Boë, Louis-Jean; Vallée, Nathalie; Ménard, Lucie

    2005-04-01

    The Dispersion-Focalization Theory states that sound systems in human languages are shaped by two major perceptual constraints: dispersion driving auditory contrast towards maximal or sufficient values [B. Lindblom, J. Phonetics 18, 135-152 (1990)] and focalization driving auditory spectra towards patterns with close neighboring formants. Dispersion is computed from the sum of the inverse squared inter-spectra distances in the (F1, F2, F3, F4) space, using a non-linear process based on the 3.5 Bark critical distance to estimate F2'. Focalization is based on the idea that close neighboring formants produce vowel spectra with marked peaks that are easier to process and memorize in the auditory system. Evidence for increased stability of focal vowels in short-term memory was provided in a discrimination experiment on adult French subjects [J. L. Schwartz and P. Escudier, Speech Comm. 8, 235-259 (1989)]. A reanalysis of infant discrimination data shows that focalization could well be responsible for recurrent discrimination asymmetries [J. L. Schwartz et al., Speech Comm. (in press)]. Recent data on children's vowel production indicate that focalization seems to be part of the perceptual templates driving speech development. The Dispersion-Focalization Theory produces valid predictions for both vowel and consonant systems, when compared against available databases of human language inventories.
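
    A minimal sketch of the dispersion term described above, computed as the sum of inverse squared inter-vowel distances in a Bark-transformed formant space. The published non-linear F2' estimation based on the 3.5 Bark critical distance is omitted here, so this illustrates the general form of the computation rather than the model itself; the Bark conversion and vowel values are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def hz_to_bark(f):
    """Approximate Hz-to-Bark conversion (Schroeder-style formula)."""
    return 7.0 * np.arcsinh(np.asarray(f, dtype=float) / 650.0)

def dispersion_energy(vowels_hz):
    """Sum of inverse squared inter-vowel distances in Bark formant space.

    Illustrative only: plain (F1, F2) Bark coordinates are used instead of
    the model's effective F2' derived from F2-F4.
    """
    coords = [hz_to_bark(v) for v in vowels_hz]
    return sum(1.0 / np.sum((a - b) ** 2)
               for a, b in combinations(coords, 2))

# Toy three-vowel system /i a u/ given as (F1, F2) in Hz
system = [(280, 2250), (750, 1300), (320, 800)]
print(f"Dispersion energy: {dispersion_energy(system):.3f}")  # lower = more dispersed
```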

  2. Anesthetic state modulates excitability but not spectral tuning or neural discrimination in single auditory midbrain neurons

    PubMed Central

    Schumacher, Joseph W.; Schneider, David M.

    2011-01-01

    The majority of sensory physiology experiments have used anesthesia to facilitate the recording of neural activity. Current techniques allow researchers to study sensory function in the context of varying behavioral states. To reconcile results across multiple behavioral and anesthetic states, it is important to consider how and to what extent anesthesia plays a role in shaping neural response properties. The role of anesthesia has been the subject of much debate, but the extent to which sensory coding properties are altered by anesthesia has yet to be fully defined. In this study we asked how urethane, an anesthetic commonly used for avian and mammalian sensory physiology, affects the coding of complex communication vocalizations (songs) and simple artificial stimuli in the songbird auditory midbrain. We measured spontaneous and song-driven spike rates, spectrotemporal receptive fields, and neural discriminability from responses to songs in single auditory midbrain neurons. In the same neurons, we recorded responses to pure tone stimuli ranging in frequency and intensity. Finally, we assessed the effect of urethane on population-level representations of birdsong. Results showed that intrinsic neural excitability is significantly depressed by urethane but that spectral tuning, single neuron discriminability, and population representations of song do not differ significantly between unanesthetized and anesthetized animals. PMID:21543752

  3. Developmental hearing loss impedes auditory task learning and performance in gerbils

    PubMed Central

    von Trapp, Gardiner; Aloni, Ishita; Young, Stephen; Semple, Malcolm N.; Sanes, Dan H.

    2016-01-01

    The consequences of developmental hearing loss have been reported to include both sensory and cognitive deficits. To investigate these issues in a non-human model, auditory learning and asymptotic psychometric performance were compared between normal hearing (NH) adult gerbils and those reared with conductive hearing loss (CHL). At postnatal day 10, before ear canal opening, gerbil pups underwent bilateral malleus removal to induce a permanent CHL. Both CHL and control animals were trained to approach a water spout upon presentation of a target (Go stimuli), and withhold for foils (Nogo stimuli). To assess the rate of task acquisition and asymptotic performance, animals were tested on an amplitude modulation (AM) rate discrimination task. Behavioral performance was calculated using a signal detection theory framework. Animals reared with developmental CHL displayed a slower rate of task acquisition for the AM discrimination task. Slower acquisition was explained by an impaired ability to generalize to newly introduced stimuli, as compared to controls. Measurement of discrimination thresholds across consecutive testing blocks revealed that CHL animals required a greater number of testing sessions to reach asymptotic threshold values, as compared to controls. However, with sufficient training, CHL animals approached control performance. These results indicate that a sensory impediment can delay auditory learning and increase the risk of poor performance on a temporal task. PMID:27746215
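    The abstract states that behavioral performance was calculated within a signal detection theory framework; a minimal sketch of the standard Go/Nogo computation (d-prime and criterion from hit and false-alarm rates, with one common correction for extreme rates — the study's exact correction is not given here) might look like this:

    ```python
    import numpy as np
    from scipy.stats import norm

    def dprime(hits, misses, false_alarms, correct_rejections):
        """d' and criterion for a Go/Nogo (yes/no) discrimination task."""
        n_go = hits + misses
        n_nogo = false_alarms + correct_rejections
        # Keep rates away from 0 and 1 so the z-transform stays finite
        # (one common convention; the published correction may differ).
        hit_rate = np.clip(hits / n_go, 1 / (2 * n_go), 1 - 1 / (2 * n_go))
        fa_rate = np.clip(false_alarms / n_nogo, 1 / (2 * n_nogo), 1 - 1 / (2 * n_nogo))
        d = norm.ppf(hit_rate) - norm.ppf(fa_rate)
        criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
        return d, criterion

    # Illustrative counts only: 45 hits / 5 misses on Go trials, 12 false alarms / 38 correct rejections
    print(dprime(45, 5, 12, 38))
    ```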

  4. Vibrotactile Discrimination Training Affects Brain Connectivity in Profoundly Deaf Individuals

    PubMed Central

    González-Garrido, Andrés A.; Ruiz-Stovel, Vanessa D.; Gómez-Velázquez, Fabiola R.; Vélez-Pérez, Hugo; Romo-Vázquez, Rebeca; Salido-Ruiz, Ricardo A.; Espinoza-Valdez, Aurora; Campos, Luis R.

    2017-01-01

    Early auditory deprivation has serious neurodevelopmental and cognitive repercussions largely derived from impoverished and delayed language acquisition. These conditions may be associated with early changes in brain connectivity. Vibrotactile stimulation is a sensory substitution method that allows perception and discrimination of sound, and even speech. To clarify the efficacy of this approach, a vibrotactile oddball task with 700 and 900 Hz pure-tones as stimuli [counterbalanced as target (T: 20% of the total) and non-target (NT: 80%)] with simultaneous EEG recording was performed by 14 profoundly deaf and 14 normal-hearing (NH) subjects, before and after a short training period (five 1-h sessions; in 2.5–3 weeks). A small device worn on the right index finger delivered sound-wave stimuli. The training included discrimination of pure tone frequency and duration, and more complex natural sounds. A significant P300 amplitude increase and behavioral improvement were observed in both deaf and normal subjects, with no between-group differences. However, a P3 with larger scalp distribution over parietal cortical areas and lateralized to the right was observed in the profoundly deaf. A graph theory analysis showed that brief training significantly increased fronto-central brain connectivity in deaf subjects, but not in NH subjects. Together, ERP tools and graph methods depicted the different functional brain dynamics in deaf and NH individuals, underlying the temporary engagement of the cognitive resources demanded by the task. Our findings showed that the index-fingertip somatosensory mechanoreceptors can discriminate sounds. Further studies are necessary to clarify brain connectivity dynamics associated with the performance of vibrotactile language-related discrimination tasks and the effect of lengthier training programs. PMID:28220063
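    The abstract reports a graph-theory analysis of EEG connectivity without giving the estimator or threshold; purely as an illustration (assuming a symmetric channel-by-channel connectivity matrix, e.g. coherence, and an arbitrary cutoff), such an analysis typically proceeds along these lines:

    ```python
    import numpy as np
    import networkx as nx

    def connectivity_summary(conn, labels, threshold=0.5):
        """Build an undirected graph from a symmetric connectivity matrix and summarize it."""
        g = nx.Graph()
        g.add_nodes_from(labels)
        n = conn.shape[0]
        for i in range(n):
            for j in range(i + 1, n):
                if conn[i, j] >= threshold:
                    g.add_edge(labels[i], labels[j], weight=conn[i, j])
        return {
            "n_edges": g.number_of_edges(),
            "mean_degree": np.mean([deg for _, deg in g.degree()]),
            "mean_clustering": nx.average_clustering(g),
        }

    # Toy 4-channel example (values are made up)
    labels = ["Fz", "Cz", "Pz", "Oz"]
    conn = np.array([[1.0, 0.7, 0.4, 0.2],
                     [0.7, 1.0, 0.6, 0.3],
                     [0.4, 0.6, 1.0, 0.5],
                     [0.2, 0.3, 0.5, 1.0]])
    print(connectivity_summary(conn, labels))
    ```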

  5. Vibrotactile Discrimination Training Affects Brain Connectivity in Profoundly Deaf Individuals.

    PubMed

    González-Garrido, Andrés A; Ruiz-Stovel, Vanessa D; Gómez-Velázquez, Fabiola R; Vélez-Pérez, Hugo; Romo-Vázquez, Rebeca; Salido-Ruiz, Ricardo A; Espinoza-Valdez, Aurora; Campos, Luis R

    2017-01-01

    Early auditory deprivation has serious neurodevelopmental and cognitive repercussions largely derived from impoverished and delayed language acquisition. These conditions may be associated with early changes in brain connectivity. Vibrotactile stimulation is a sensory substitution method that allows perception and discrimination of sound, and even speech. To clarify the efficacy of this approach, a vibrotactile oddball task with 700 and 900 Hz pure-tones as stimuli [counterbalanced as target (T: 20% of the total) and non-target (NT: 80%)] with simultaneous EEG recording was performed by 14 profoundly deaf and 14 normal-hearing (NH) subjects, before and after a short training period (five 1-h sessions; in 2.5-3 weeks). A small device worn on the right index finger delivered sound-wave stimuli. The training included discrimination of pure tone frequency and duration, and more complex natural sounds. A significant P300 amplitude increase and behavioral improvement were observed in both deaf and normal subjects, with no between-group differences. However, a P3 with larger scalp distribution over parietal cortical areas and lateralized to the right was observed in the profoundly deaf. A graph theory analysis showed that brief training significantly increased fronto-central brain connectivity in deaf subjects, but not in NH subjects. Together, ERP tools and graph methods depicted the different functional brain dynamics in deaf and NH individuals, underlying the temporary engagement of the cognitive resources demanded by the task. Our findings showed that the index-fingertip somatosensory mechanoreceptors can discriminate sounds. Further studies are necessary to clarify brain connectivity dynamics associated with the performance of vibrotactile language-related discrimination tasks and the effect of lengthier training programs.

  6. Reconstructing the spectrotemporal modulations of real-life sounds from fMRI response patterns

    PubMed Central

    Santoro, Roberta; Moerel, Michelle; De Martino, Federico; Valente, Giancarlo; Ugurbil, Kamil; Yacoub, Essa; Formisano, Elia

    2017-01-01

    Ethological views of brain functioning suggest that sound representations and computations in the auditory neural system are finely optimized to process and discriminate behaviorally relevant acoustic features and sounds (e.g., spectrotemporal modulations in the songs of zebra finches). Here, we show that modeling of neural sound representations in terms of frequency-specific spectrotemporal modulations enables accurate and specific reconstruction of real-life sounds from high-resolution functional magnetic resonance imaging (fMRI) response patterns in the human auditory cortex. Region-based analyses indicated that response patterns in separate portions of the auditory cortex are informative of distinctive sets of spectrotemporal modulations. Most relevantly, results revealed that in early auditory regions, and progressively more in surrounding regions, temporal modulations in a range relevant for speech analysis (∼2–4 Hz) were reconstructed more faithfully than other temporal modulations. In early auditory regions, this effect was frequency-dependent and only present for lower frequencies (<∼2 kHz), whereas for higher frequencies, reconstruction accuracy was higher for faster temporal modulations. Further analyses suggested that auditory cortical processing optimized for the fine-grained discrimination of speech and vocal sounds underlies this enhanced reconstruction accuracy. In sum, the present study introduces an approach to embed models of neural sound representations in the analysis of fMRI response patterns. Furthermore, it reveals that, in the human brain, even general-purpose and fundamental neural processing mechanisms are shaped by the physical features of real-world stimuli that are most relevant for behavior (i.e., speech, voice). PMID:28420788
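    The abstract does not spell out the reconstruction model itself; as a purely illustrative sketch of one common approach — linear (ridge) decoding of modulation features from voxel response patterns, trained on one set of sounds and evaluated on held-out sounds — the pipeline might look like the following (all data below are synthetic placeholders):

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: rows are sounds; columns are voxels / modulation-filterbank features.
    n_sounds, n_voxels, n_features = 120, 800, 64
    voxel_patterns = rng.normal(size=(n_sounds, n_voxels))
    modulation_features = voxel_patterns[:, :n_features] + 0.5 * rng.normal(size=(n_sounds, n_features))

    X_train, X_test, y_train, y_test = train_test_split(
        voxel_patterns, modulation_features, test_size=0.25, random_state=0)

    decoder = Ridge(alpha=10.0).fit(X_train, y_train)
    reconstructed = decoder.predict(X_test)

    # Reconstruction accuracy: per-sound correlation between true and reconstructed features.
    accuracy = [np.corrcoef(true, rec)[0, 1] for true, rec in zip(y_test, reconstructed)]
    print("median reconstruction correlation:", np.median(accuracy))
    ```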

  7. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    PubMed

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common to all three tasks within each modality, or to an interaction between the processing of task-relevant features and of varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of the processing of irrelevant speech that would presumably distract from the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Exploration of Acoustic Features for Automatic Vowel Discrimination in Spontaneous Speech

    ERIC Educational Resources Information Center

    Tyson, Na'im R.

    2012-01-01

    In an attempt to understand what acoustic/auditory feature sets motivated transcribers towards certain labeling decisions, I built machine learning models that were capable of discriminating between canonical and non-canonical vowels excised from the Buckeye Corpus. Specifically, I wanted to model when the dictionary form and the transcribed-form…

  9. Abnormal frequency discrimination in children with SLI as indexed by mismatch negativity (MMN).

    PubMed

    Rinker, Tanja; Kohls, Gregor; Richter, Cathrin; Maas, Verena; Schulz, Eberhard; Schecker, Michael

    2007-02-14

    For several decades, the aetiology of specific language impairment (SLI) has been associated with a central auditory processing deficit disrupting the normal language development of affected children. One important aspect for language acquisition is the discrimination of different acoustic features, such as frequency information. Concerning SLI, studies to date that examined frequency discrimination abilities have been contradictory. We hypothesized that an auditory processing deficit in children with SLI depends on the frequency range and the difference between the tones used. Using a passive mismatch negativity (MMN) design, 13 boys with SLI and 13 age- and IQ-matched controls (7-11 years) were tested with two sine tones of different frequency (700 Hz versus 750 Hz). Reversed hemispheric activity between groups indicated abnormal processing in SLI. In a second time window, MMN2 was absent for the children with SLI. It can therefore be assumed that a frequency discrimination deficit in children with SLI becomes particularly apparent for tones below 750 Hz and for a frequency difference of 50 Hz. This finding may have important implications for future research and integration of various research approaches.

  10. Auditory velocity discrimination in the horizontal plane at very high velocities.

    PubMed

    Frissen, Ilja; Féron, François-Xavier; Guastavino, Catherine

    2014-10-01

    We determined velocity discrimination thresholds and Weber fractions for sounds revolving around the listener at very high velocities. Sounds used were a broadband white noise and two harmonic sounds with fundamental frequencies of 330 Hz and 1760 Hz. Experiment 1 used velocities ranging between 288°/s and 720°/s in an acoustically treated room and Experiment 2 used velocities between 288°/s and 576°/s in a highly reverberant hall. A third experiment addressed potential confounds in the first two experiments. The results show that people can reliably discriminate velocity at very high velocities and that both thresholds and Weber fractions decrease as velocity increases. These results violate Weber's law but are consistent with the empirical trend observed in the literature. While thresholds for the noise and 330 Hz harmonic stimulus were similar, those for the 1760 Hz harmonic stimulus were substantially higher. There were no reliable differences in velocity discrimination between the two acoustical environments, suggesting that auditory motion perception at high velocities is robust against the effects of reverberation. Copyright © 2014 Elsevier B.V. All rights reserved.
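    As a reminder of the quantity reported, the Weber fraction is simply the velocity discrimination threshold expressed relative to the reference velocity; a tiny sketch with made-up numbers (not the study's data) is shown below.

    ```python
    # Weber fraction k = delta_v / v: just-noticeable velocity increment over reference velocity.
    # Values below are illustrative only, not taken from the study.
    reference_velocities = [288, 432, 576, 720]   # deg/s
    thresholds = [120, 150, 170, 180]             # just-noticeable delta, deg/s

    for v, dv in zip(reference_velocities, thresholds):
        print(f"v = {v:>3} deg/s  threshold = {dv} deg/s  Weber fraction = {dv / v:.2f}")
    ```

    Under Weber's law the fraction would stay constant across reference velocities; the study instead found it decreasing as velocity increased.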

  11. Use of Questionnaire-Based Measures in the Assessment of Listening Difficulties in School-Aged Children

    PubMed Central

    Tomlin, Danielle; Moore, David R.; Dillon, Harvey

    2015-01-01

    Objectives: In this study, the authors assessed the potential utility of a recently developed questionnaire (Evaluation of Children’s Listening and Processing Skills [ECLiPS]) for supporting the clinical assessment of children referred for auditory processing disorder (APD). Design: A total of 49 children (35 referred for APD assessment and 14 from mainstream schools) were assessed for auditory processing (AP) abilities, cognitive abilities, and symptoms of listening difficulty. Four questionnaires were used to capture the symptoms of listening difficulty from the perspective of parents (ECLiPS and Fisher’s auditory problem checklist), teachers (Teacher’s Evaluation of Auditory Performance), and children, that is, self-report (Listening Inventory for Education). Correlation analyses tested for convergence between the questionnaires and both cognitive and AP measures. Discriminant analyses were performed to determine the best combination of tests for discriminating between typically developing children and children referred for APD. Results: All questionnaires were sensitive to the presence of difficulty, that is, children referred for assessment had significantly more symptoms of listening difficulty than typically developing children. There was, however, no evidence of more listening difficulty in children meeting the diagnostic criteria for APD. Some AP tests were significantly correlated with ECLiPS factors measuring related abilities providing evidence for construct validity. All questionnaires correlated to a greater or lesser extent with the cognitive measures in the study. Discriminant analysis suggested that the best discrimination between groups was achieved using a combination of ECLiPS factors, together with nonverbal Intelligence Quotient (cognitive) and AP measures (i.e., dichotic digits test and frequency pattern test). Conclusions: The ECLiPS was particularly sensitive to cognitive difficulties, an important aspect of many children referred for APD, as well as correlating with some AP measures. It can potentially support the preliminary assessment of children referred for APD. PMID:26002277
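    The discriminant analysis described above combines questionnaire factors with cognitive and AP scores to separate the two groups; a generic, hypothetical sketch of that kind of analysis (predictor names and data are invented for illustration) could be:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)

    # Hypothetical predictor matrix: ECLiPS factor scores, nonverbal IQ,
    # dichotic digits score, frequency pattern score (one row per child).
    n_children = 49
    X = rng.normal(size=(n_children, 5))
    y = rng.integers(0, 2, size=n_children)   # 0 = typically developing, 1 = referred for APD

    lda = LinearDiscriminantAnalysis()
    scores = cross_val_score(lda, X, y, cv=5)
    print("cross-validated classification accuracy:", scores.mean())
    ```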

  12. Six-Month-Old Infants Use Analog Magnitudes to Represent Duration

    ERIC Educational Resources Information Center

    vanMarle, Kristy; Wynn, Karen

    2006-01-01

    While many studies have investigated duration discrimination in human adults and in nonhuman animals, few have investigated this ability in infants. Here, we report findings that 6-month-old infants are able to discriminate brief durations, and, as with other animal species, their discrimination function is characterized by Weber's Law:…

  13. Why Early Tactile Speech Aids May Have Failed: No Perceptual Integration of Tactile and Auditory Signals.

    PubMed

    Rizza, Aurora; Terekhov, Alexander V; Montone, Guglielmo; Olivetti-Belardinelli, Marta; O'Regan, J Kevin

    2018-01-01

    Tactile speech aids, though extensively studied in the 1980s and 1990s, never became a commercial success. A hypothesis to explain this failure might be that it is difficult to obtain true perceptual integration of a tactile signal with information from auditory speech: exploitation of tactile cues from a tactile aid might require cognitive effort and so prevent speech understanding at the high rates typical of everyday speech. To test this hypothesis, we attempted to create true perceptual integration of tactile with auditory information in what might be considered the simplest situation encountered by a hearing-impaired listener. We created an auditory continuum between the syllables /BA/ and /VA/, and trained participants to associate /BA/ with one tactile stimulus and /VA/ with another tactile stimulus. After training, we tested whether auditory discrimination along the continuum between the two syllables could be biased by incongruent tactile stimulation. We found that such a bias occurred only when the tactile stimulus was above, but not when it was below, its previously measured tactile discrimination threshold. Such a pattern is compatible with the idea that the effect is due to a cognitive or decisional strategy, rather than to truly perceptual integration. We therefore ran a further study (Experiment 2), where we created a tactile version of the McGurk effect. We extensively trained two subjects over 6 days to associate four recorded auditory syllables with four corresponding apparent motion tactile patterns. In a subsequent test, we presented stimulation that was either congruent or incongruent with the learnt association, and asked subjects to report the syllable they perceived. We found no analog to the McGurk effect, suggesting that the tactile stimulation was not being perceptually integrated with the auditory syllable. These findings strengthen our hypothesis according to which tactile aids failed because integration of tactile cues with auditory speech occurred at a cognitive or decisional level, rather than truly at a perceptual level.

  14. Why Early Tactile Speech Aids May Have Failed: No Perceptual Integration of Tactile and Auditory Signals

    PubMed Central

    Rizza, Aurora; Terekhov, Alexander V.; Montone, Guglielmo; Olivetti-Belardinelli, Marta; O’Regan, J. Kevin

    2018-01-01

    Tactile speech aids, though extensively studied in the 1980s and 1990s, never became a commercial success. A hypothesis to explain this failure might be that it is difficult to obtain true perceptual integration of a tactile signal with information from auditory speech: exploitation of tactile cues from a tactile aid might require cognitive effort and so prevent speech understanding at the high rates typical of everyday speech. To test this hypothesis, we attempted to create true perceptual integration of tactile with auditory information in what might be considered the simplest situation encountered by a hearing-impaired listener. We created an auditory continuum between the syllables /BA/ and /VA/, and trained participants to associate /BA/ with one tactile stimulus and /VA/ with another tactile stimulus. After training, we tested whether auditory discrimination along the continuum between the two syllables could be biased by incongruent tactile stimulation. We found that such a bias occurred only when the tactile stimulus was above, but not when it was below, its previously measured tactile discrimination threshold. Such a pattern is compatible with the idea that the effect is due to a cognitive or decisional strategy, rather than to truly perceptual integration. We therefore ran a further study (Experiment 2), where we created a tactile version of the McGurk effect. We extensively trained two subjects over 6 days to associate four recorded auditory syllables with four corresponding apparent motion tactile patterns. In a subsequent test, we presented stimulation that was either congruent or incongruent with the learnt association, and asked subjects to report the syllable they perceived. We found no analog to the McGurk effect, suggesting that the tactile stimulation was not being perceptually integrated with the auditory syllable. These findings strengthen our hypothesis according to which tactile aids failed because integration of tactile cues with auditory speech occurred at a cognitive or decisional level, rather than truly at a perceptual level. PMID:29875719

  15. Discrimination, Other Psychosocial Stressors, and Self-Reported Sleep Duration and Difficulties

    PubMed Central

    Slopen, Natalie; Williams, David R.

    2014-01-01

    Objectives: To advance understanding of the relationship between discrimination and sleep duration and difficulties, with consideration of multiple dimensions of discrimination, and attention to concurrent stressors; and to examine the contribution of discrimination and other stressors to racial/ethnic differences in these outcomes. Design: Cross-sectional probability sample. Setting: Chicago, IL. Participants: There were 2,983 black, Hispanic, and white adults. Measurements and Results: Outcomes included self-reported sleep duration and difficulties. Discrimination, including racial and nonracial everyday and major experiences of discrimination, workplace harassment and incivilities, and other stressors were assessed via questionnaire. In models adjusted for sociodemographic characteristics, greater exposure to racial (β = -0.14) and nonracial (β = -0.08) everyday discrimination, major experiences of discrimination attributed to race/ethnicity (β = -0.17), and workplace harassment and incivilities (β = -0.14) were associated with shorter sleep (P < 0.05). The association between major experiences of discrimination attributed to race/ethnicity and sleep duration (β = -0.09, P < 0.05) was independent of concurrent stressors (i.e., acute events, childhood adversity, and financial, community, employment, and relationship stressors). Racial (β = 0.04) and non-racial (β = 0.05) everyday discrimination and racial (β = 0.04) and nonracial (β = 0.04) major experiences of discrimination, and workplace harassment and incivilities (β = 0.04) were also associated with more (log) sleep difficulties, and associations between racial and nonracial everyday discrimination and sleep difficulties remained after adjustment for other stressors (P < 0.05). Racial/ethnic differences in sleep duration and difficulties were not significant after adjustment for discrimination (P > 0.05). Conclusions: Discrimination was associated with shorter sleep and more sleep difficulties, independent of socioeconomic status and other stressors, and may account for some of the racial/ethnic differences in sleep. Citation: Slopen N; Williams DR. Discrimination, other psychosocial stressors, and self-reported sleep duration and difficulties. SLEEP 2014;37(1):147-156. PMID:24381373

  16. Passive stimulation and behavioral training differentially transform temporal processing in the inferior colliculus and primary auditory cortex

    PubMed Central

    Beitel, Ralph E.; Schreiner, Christoph E.; Leake, Patricia A.

    2016-01-01

    In profoundly deaf cats, behavioral training with intracochlear electric stimulation (ICES) can improve temporal processing in the primary auditory cortex (AI). To investigate whether similar effects are manifest in the auditory midbrain, ICES was initiated in neonatally deafened cats either during development after short durations of deafness (8 wk of age) or in adulthood after long durations of deafness (≥3.5 yr). All of these animals received behaviorally meaningless, “passive” ICES. Some animals also received behavioral training with ICES. Two long-deaf cats received no ICES prior to acute electrophysiological recording. After several months of passive ICES and behavioral training, animals were anesthetized, and neuronal responses to pulse trains of increasing rates were recorded in the central (ICC) and external (ICX) nuclei of the inferior colliculus. Neuronal temporal response patterns (repetition rate coding, minimum latencies, response precision) were compared with results from recordings made in the AI of the same animals (Beitel RE, Vollmer M, Raggio MW, Schreiner CE. J Neurophysiol 106: 944–959, 2011; Vollmer M, Beitel RE. J Neurophysiol 106: 2423–2436, 2011). Passive ICES in long-deaf cats remediated severely degraded temporal processing in the ICC and had no effects in the ICX. In contrast to observations in the AI, behaviorally relevant ICES had no effects on temporal processing in the ICC or ICX, with the single exception of shorter latencies in the ICC in short-deaf cats. The results suggest that, independent of deafness duration, passive stimulation and behavioral training differentially transform temporal processing in auditory midbrain and cortex, and primary auditory cortex emerges as a pivotal site for behaviorally driven neuronal temporal plasticity in the deaf cat. NEW & NOTEWORTHY Behaviorally relevant vs. passive electric stimulation of the auditory nerve differentially affects neuronal temporal processing in the central nucleus of the inferior colliculus (ICC) and the primary auditory cortex (AI) in profoundly short-deaf and long-deaf cats. Temporal plasticity in the ICC depends on a critical amount of electric stimulation, independent of its behavioral relevance. In contrast, the AI emerges as a pivotal site for behaviorally driven neuronal temporal plasticity in the deaf auditory system. PMID:27733594

  17. Startle Auditory Stimuli Enhance the Performance of Fast Dynamic Contractions

    PubMed Central

    Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M.

    2014-01-01

    Fast reaction times and the ability to develop a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, onset movement, movement duration and electromyography from pectoralis and triceps muscles were recorded. The SS condition induced an increase in the RFD and peak velocity and a reduction in the movement onset and duration, in comparison with the VS and AS conditions. The onset activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance since startle stimuli could be used to explore neural adaptations to resistance training. PMID:24489967

  18. Behavioral determination of stimulus pair discrimination of auditory acoustic and electrical stimuli using a classical conditioning and heart-rate approach.

    PubMed

    Morgan, Simeon J; Paolini, Antonio G

    2012-06-06

    Acute animal preparations have been used in research prospectively investigating electrode designs and stimulation techniques for integration into neural auditory prostheses, such as auditory brainstem implants and auditory midbrain implants. While acute experiments can give initial insight into the effectiveness of the implant, testing chronically implanted, awake animals provides the advantage of examining the psychophysical properties of the sensations induced using implanted devices. Several techniques such as reward-based operant conditioning, conditioned avoidance, or classical fear conditioning have been used to provide behavioral confirmation of detection of a relevant stimulus attribute. Selection of a technique involves balancing aspects including time efficiency (often poor in reward-based approaches), the ability to test a plurality of stimulus attributes simultaneously (limited in conditioned avoidance), and measure reliability of repeated stimuli (a potential constraint when physiological measures are employed). Here, a classical fear conditioning behavioral method is presented which may be used to simultaneously test both detection of a stimulus and discrimination between two stimuli. Heart rate is used as a measure of fear response, which reduces or eliminates the requirement for time-consuming video coding for freeze behaviour or other such measures (although such measures could be included to provide convergent evidence). Animals were conditioned using these techniques in three 2-hour conditioning sessions, each providing 48 stimulus trials. Subsequent 48-trial testing sessions were then used to test for detection of each stimulus in presented pairs, and to test discrimination between the member stimuli of each pair. This behavioral method is presented in the context of its utilisation in auditory prosthetic research. The implantation of electrocardiogram telemetry devices is shown. Subsequent implantation of brain electrodes into the Cochlear Nucleus, guided by the monitoring of neural responses to acoustic stimuli, and the fixation of the electrode into place for chronic use are likewise shown.

  19. Short-wavelength infrared laser activates the auditory neurons: comparing the effect of 980 vs. 810 nm wavelength.

    PubMed

    Tian, Lan; Wang, Jingxuan; Wei, Ying; Lu, Jianren; Xu, Anting; Xia, Ming

    2017-02-01

    Research on auditory neural triggering by optical stimulation has been developed as an emerging technique to elicit the auditory neural response, which may provide an alternative to cochlear implants. However, most previous studies have focused on longer-wavelength near-infrared (>1800 nm) lasers. A comparison of the effects of different laser wavelengths in the short-wavelength infrared (SWIR) range on auditory neural stimulation has not previously been reported. In this study, pulsed 980- and 810-nm SWIR lasers were applied as optical stimuli to irradiate the auditory neurons in the cochlea of five deafened guinea pigs, and the neural responses under the two laser wavelengths were compared by recording the evoked optical auditory brainstem responses (OABRs). In addition, the effects of radiant exposure and laser pulse width, and the thresholds with the two laser wavelengths, were further investigated and compared. One-way analysis of variance (ANOVA) was used to analyze the data. Results showed that the OABR amplitude with the 980-nm laser was higher than that with the 810-nm laser at the same radiant exposure from 10 to 102 mJ/cm². In addition, 980-nm stimulation had a lower threshold radiant exposure than 810-nm stimulation across pulse durations in the 20-500 μs range. Moreover, the 810-nm laser had a wider optimal pulse duration range than the 980-nm laser for auditory neural stimulation.

  20. Auditory-nerve single-neuron thresholds to electrical stimulation from scala tympani electrodes.

    PubMed

    Parkins, C W; Colombo, J

    1987-12-31

    Single auditory-nerve neuron thresholds were studied in sensory-deafened squirrel monkeys to determine the effects of electrical stimulus shape and frequency on single-neuron thresholds. Frequency was separated into its components, pulse width and pulse rate, which were analyzed separately. Square and sinusoidal pulse shapes were compared. There were no, or at best questionably significant, threshold differences in charge per phase between sinusoidal and square pulses of the same pulse width. There was a small (less than 0.5 dB) but significant threshold advantage for 200 microseconds/phase pulses delivered at low pulse rates (156 pps) compared to higher pulse rates (625 pps and 2500 pps). Pulse width was demonstrated to be the prime determinant of single-neuron threshold, resulting in strength-duration curves similar to those of other mammalian myelinated neurons, but with longer chronaxies. The most efficient electrical stimulus pulse width to use for cochlear implant stimulation was determined to be 100 microseconds/phase. This pulse width delivers the lowest charge/phase at threshold. The single-neuron strength-duration curves were compared to strength-duration curves of a computer model based on the specific anatomy of auditory-nerve neurons. The membrane capacitance and resulting chronaxie of the model can be varied by altering the length of the unmyelinated termination of the neuron, representing the unmyelinated portion of the neuron between the habenula perforata and the hair cell. This unmyelinated segment of the auditory-nerve neuron may be subject to aminoglycoside damage. Simulating a 10 micron unmyelinated termination for this model neuron produces a strength-duration curve that closely fits the single-neuron data obtained from aminoglycoside-deafened animals. Both the model and the single-neuron strength-duration curves differ significantly from behavioral threshold data obtained from monkeys and humans with cochlear implants. This discrepancy can best be explained by the involvement of higher-level neurologic processes in the behavioral responses. These findings suggest that the basic principles of neural membrane function must be considered in developing or analyzing electrical stimulation strategies for cochlear prostheses if the appropriate stimulation of frequency-specific populations of auditory-nerve neurons is the objective.
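    The strength-duration curves and chronaxies mentioned above follow the classic form in which threshold current rises as pulse width shrinks; the Lapicque relation below is used only as a generic illustration (the paper's fitted model and parameter values are not reproduced here):

    ```python
    def lapicque_threshold(pulse_width_us, rheobase_ua, chronaxie_us):
        """Classic Lapicque strength-duration relation:
        I_threshold = I_rheobase * (1 + chronaxie / pulse_width)."""
        return rheobase_ua * (1.0 + chronaxie_us / pulse_width_us)

    # Illustrative values only (not from the study): rheobase 50 uA, chronaxie 350 us.
    for pw in (50, 100, 200, 500, 1000):
        i_th = lapicque_threshold(pw, rheobase_ua=50.0, chronaxie_us=350.0)
        charge_uc = i_th * pw * 1e-6   # charge per phase in microcoulombs
        print(f"{pw:>5} us/phase: threshold {i_th:6.1f} uA, charge/phase {charge_uc:.4f} uC")
    ```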

  1. Trained Musical Performers' and Musically Untrained College Students' Ability to Discriminate Music Instrument Timbre as a Function of Duration.

    NASA Astrophysics Data System (ADS)

    Johnston, Dennis Alan

    The purpose of this study was to investigate the ability of trained musicians and musically untrained college students to discriminate music instrument timbre as a function of duration. Specific factors investigated were the thresholds for timbre discrimination as a function of duration, musical ensemble participation as training, and the relative discrimination abilities of vocalists and instrumentalists. The subjects (N = 126) were volunteer college students from intact classes from various disciplines, separated into musically untrained college students (N = 43) who had not participated in musical ensembles and trained musicians (N = 83) who had. The musicians were further divided into instrumentalists (N = 51) and vocalists (N = 32). The Method of Constant Stimuli was used for data collection, with a same-different response procedure and 120 randomized, counterbalanced timbre pairs composed of trumpet, clarinet, or violin, presented in durations of 20 to 100 milliseconds in a sequence of pitches, in two blocks. Complete, complex musical timbres were recorded digitally and presented in a sequence of changing pitches to more closely approximate an actual music listening experience. Under the conditions of this study, it can be concluded that the threshold for timbre discrimination as a function of duration is at or below 20 ms. Even though trained musicians tended to discriminate timbre better than musically untrained college students, musicians did not discriminate timbre significantly better than those subjects who had not participated in musical ensembles. Additionally, instrumentalists tended to discriminate timbre better than vocalists, but the difference was not significant. Recommendations for further research include suggestions for a timbre discrimination measurement tool that takes into consideration the multidimensionality of timbre and the relationship of timbre discrimination to timbre source, duration, pitch, and loudness.

  2. Auditory Attentional Capture: Effects of Singleton Distractor Sounds

    ERIC Educational Resources Information Center

    Dalton, Polly; Lavie, Nilli

    2004-01-01

    The phenomenon of attentional capture by a unique yet irrelevant singleton distractor has typically been studied in visual search. In this article, the authors examine whether a similar phenomenon occurs in the auditory domain. Participants searched sequences of sounds for targets defined by frequency, intensity, or duration. The presence of a…

  3. Group Cognitive-Behavioral Therapy for Auditory Hallucinations: A Pilot Study

    ERIC Educational Resources Information Center

    Pinkham, Amy E.; Gloege, Andrew T.; Flanagan, Steven; Penn, David L.

    2004-01-01

    In this article, we describe a pilot study that investigated the effectiveness of group cognitive behavioral therapy (CBT) for auditory hallucinations. Eleven inpatients with either chronic schizophrenia or schizoaffective disorder participated in 2 CBT groups of differing treatment duration (i.e., 7 versus 20 sessions). The results showed that…

  4. Auditory and Motor Rhythm Awareness in Adults with Dyslexia

    ERIC Educational Resources Information Center

    Thomson, Jennifer M.; Fryer, Ben; Maltby, James; Goswami, Usha

    2006-01-01

    Children with developmental dyslexia appear to be insensitive to basic auditory cues to speech rhythm and stress. For example, they experience difficulties in processing duration and amplitude envelope onset cues. Here we explored the sensitivity of adults with developmental dyslexia to the same cues. In addition, relations with expressive and…

  5. Effect of Attentional State on Frequency Discrimination: A Comparison of Children with ADHD on and off Medication

    ERIC Educational Resources Information Center

    Sutcliffe, Paul A.; Bishop, Dorothy V. M.; Houghton, Stephen; Taylor, Myra

    2006-01-01

    Debate continues over the hypothesis that children with language or literacy difficulties have a genuine auditory processing deficit. Several recent studies have reported deficits in frequency discrimination (FD), but it is unclear whether these are genuine perceptual impairments or reflective of the comorbid attentional problems that exist in…

  6. Effects of Visual and Auditory Perceptual Aptitudes and Letter Discrimination Pretraining on Word Recognition.

    ERIC Educational Resources Information Center

    Janssen, David Rainsford

    This study investigated alternate methods of letter discrimination pretraining and word recognition training in young children. Seventy kindergarten children were trained to recognize eight printed words in a vocabulary list by a mixed-list paired-associate method. Four of the stimulus words had visual response choices (pictures) and four had…

  7. Input from the Medial Geniculate Nucleus Modulates Amygdala Encoding of Fear Memory Discrimination

    ERIC Educational Resources Information Center

    Ferrara, Nicole C.; Cullen, Patrick K.; Pullins, Shane P.; Rotondo, Elena K.; Helmstetter, Fred J.

    2017-01-01

    Generalization of fear can involve abnormal responding to cues that signal safety and is common in people diagnosed with post-traumatic stress disorder. Differential auditory fear conditioning can be used as a tool to measure changes in fear discrimination and generalization. Most prior work in this area has focused on elevated amygdala activity…

  8. Fast transfer of crossmodal time interval training.

    PubMed

    Chen, Lihan; Zhou, Xiaolin

    2014-06-01

    Sub-second time perception is essential for many important sensory and perceptual tasks including speech perception, motion perception, motor coordination, and crossmodal interaction. This study investigates to what extent the ability to discriminate sub-second time intervals acquired in one sensory modality can be transferred to another modality. To this end, we used perceptual classification of visual Ternus display (Ternus in Psychol Forsch 7:81-136, 1926) to implicitly measure participants' interval perception in pre- and posttests and implemented an intra- or crossmodal sub-second interval discrimination training protocol in between the tests. The Ternus display elicited either an "element motion" or a "group motion" percept, depending on the inter-stimulus interval between the two visual frames. The training protocol required participants to explicitly compare the interval length between a pair of visual, auditory, or tactile stimuli with a standard interval or to implicitly perceive the length of visual, auditory, or tactile intervals by completing a non-temporal task (discrimination of auditory pitch or tactile intensity). Results showed that after fast explicit training of interval discrimination (about 15 min), participants improved their ability to categorize the visual apparent motion in Ternus displays, although the training benefits were mild for visual timing training. However, the benefits were absent for implicit interval training protocols. This finding suggests that the timing ability in one modality can be rapidly acquired and used to improve timing-related performance in another modality and that there may exist a central clock for sub-second temporal processing, although modality-specific perceptual properties may constrain the functioning of this clock.

  9. Frontal Cortex Activation Causes Rapid Plasticity of Auditory Cortical Processing

    PubMed Central

    Winkowski, Daniel E.; Bandyopadhyay, Sharba; Shamma, Shihab A.

    2013-01-01

    Neurons in the primary auditory cortex (A1) can show rapid changes in receptive fields when animals are engaged in sound detection and discrimination tasks. The source of a signal to A1 that triggers these changes is suspected to be in frontal cortical areas. How or whether activity in frontal areas can influence activity and sensory processing in A1 and the detailed changes occurring in A1 on the level of single neurons and in neuronal populations remain uncertain. Using electrophysiological techniques in mice, we found that pairing orbitofrontal cortex (OFC) stimulation with sound stimuli caused rapid changes in the sound-driven activity within A1 that are largely mediated by noncholinergic mechanisms. By integrating in vivo two-photon Ca2+ imaging of A1 with OFC stimulation, we found that pairing OFC activity with sounds caused dynamic and selective changes in sensory responses of neural populations in A1. Further, analysis of changes in signal and noise correlation after OFC pairing revealed improvement in neural population-based discrimination performance within A1. This improvement was frequency specific and dependent on correlation changes. These OFC-induced influences on auditory responses resemble behavior-induced influences on auditory responses and demonstrate that OFC activity could underlie the coordination of rapid, dynamic changes in A1 to dynamic sensory environments. PMID:24227723
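    The "signal and noise correlation" analysis mentioned above refers to a standard pairwise decomposition; a compact sketch of how the two quantities are typically computed from a trials x stimuli x neurons response array (illustrative code, not the paper's analysis pipeline) is:

    ```python
    import numpy as np

    def signal_and_noise_correlations(responses):
        """responses: array of shape (n_trials, n_stimuli, n_neurons).
        Signal correlation: correlation of trial-averaged tuning curves across stimuli.
        Noise correlation: correlation of trial-to-trial residuals, pooled over stimuli."""
        mean_resp = responses.mean(axis=0)            # (n_stimuli, n_neurons)
        residuals = responses - mean_resp             # broadcast over trials
        signal_corr = np.corrcoef(mean_resp.T)        # (n_neurons, n_neurons)
        flat_residuals = residuals.reshape(-1, responses.shape[2])
        noise_corr = np.corrcoef(flat_residuals.T)
        return signal_corr, noise_corr

    # Toy example: 20 trials, 8 tone frequencies, 5 neurons (Poisson spike counts)
    rng = np.random.default_rng(2)
    counts = rng.poisson(5.0, size=(20, 8, 5)).astype(float)
    sc, nc = signal_and_noise_correlations(counts)
    print("signal corr (neurons 0,1):", round(sc[0, 1], 3), " noise corr:", round(nc[0, 1], 3))
    ```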

  10. Temporal plasticity in auditory cortex improves neural discrimination of speech sounds

    PubMed Central

    Engineer, Crystal T.; Shetake, Jai A.; Engineer, Navzer D.; Vrana, Will A.; Wolf, Jordan T.; Kilgard, Michael P.

    2017-01-01

    Background Many individuals with language learning impairments exhibit temporal processing deficits and degraded neural responses to speech sounds. Auditory training can improve both the neural and behavioral deficits, though significant deficits remain. Recent evidence suggests that vagus nerve stimulation (VNS) paired with rehabilitative therapies enhances both cortical plasticity and recovery of normal function. Objective/Hypothesis We predicted that pairing VNS with rapid tone trains would enhance the primary auditory cortex (A1) response to unpaired novel speech sounds. Methods VNS was paired with tone trains 300 times per day for 20 days in adult rats. Responses to isolated speech sounds, compressed speech sounds, word sequences, and compressed word sequences were recorded in A1 following the completion of VNS-tone train pairing. Results Pairing VNS with rapid tone trains resulted in stronger, faster, and more discriminable A1 responses to speech sounds presented at conversational rates. Conclusion This study extends previous findings by documenting that VNS paired with rapid tone trains altered the neural response to novel unpaired speech sounds. Future studies are necessary to determine whether pairing VNS with appropriate auditory stimuli could potentially be used to improve both neural responses to speech sounds and speech perception in individuals with receptive language disorders. PMID:28131520

  11. [Children with specific language impairment: electrophysiological and pedaudiological findings].

    PubMed

    Rinker, T; Hartmann, K; Smith, E; Reiter, R; Alku, P; Kiefer, M; Brosch, S

    2014-08-01

    Auditory deficits may be at the core of the language delay in children with Specific Language Impairment (SLI). It was therefore hypothesized that children with SLI perform poorly on 4 tests typically used to diagnose central auditory processing disorder (CAPD) as well as in the processing of phonetic and tone stimuli in an electrophysiological experiment. Fourteen children with SLI (mean age 61.7 months) and 16 children without SLI (mean age 64.9 months) were tested with 4 tasks: non-word repetition, language discrimination in noise, directional hearing, and dichotic listening. The electrophysiological recording Mismatch Negativity (MMN) employed sine tones (600 vs. 650 Hz) and phonetic stimuli (/ε/ versus /e/). Control children and children with SLI differed significantly in the non-word repetition as well as in the dichotic listening task, but not in the two other tasks. Only the control children recognized the frequency difference in the MMN experiment. The phonetic difference was discriminated by both groups; however, effects were longer lasting for the control children. Group differences were not significant. Children with SLI show limitations in auditory processing when a complex task requires repeating unfamiliar or difficult material, and show subtle deficits in auditory processing at the neural level. © Georg Thieme Verlag KG Stuttgart · New York.

  12. Seasonal Plasticity of Precise Spike Timing in the Avian Auditory System

    PubMed Central

    Sen, Kamal; Rubel, Edwin W; Brenowitz, Eliot A.

    2015-01-01

    Vertebrate audition is a dynamic process, capable of exhibiting both short- and long-term adaptations to varying listening conditions. Precise spike timing has long been known to play an important role in auditory encoding, but its role in sensory plasticity remains largely unexplored. We addressed this issue in Gambel's white-crowned sparrow (Zonotrichia leucophrys gambelii), a songbird that shows pronounced seasonal fluctuations in circulating levels of sex-steroid hormones, which are known to be potent neuromodulators of auditory function. We recorded extracellular single-unit activity in the auditory forebrain of males and females under different breeding conditions and used a computational approach to explore two potential strategies for the neural discrimination of sound level: one based on spike counts and one based on spike timing reliability. We report that breeding condition has robust sex-specific effects on spike timing. Specifically, in females, breeding condition increases the proportion of cells that rely solely on spike timing information and increases the temporal resolution required for optimal intensity encoding. Furthermore, in a functionally distinct subset of cells that are particularly well suited for amplitude encoding, female breeding condition enhances spike timing-based discrimination accuracy. No effects of breeding condition were observed in males. Our results suggest that high-resolution temporal discharge patterns may provide a plastic neural substrate for sensory coding. PMID:25716843

  13. Assessing spectral and temporal processing in children and adults using temporal modulation transfer function (TMTF), Iterated Ripple Noise (IRN) perception, and spectral ripple discrimination (SRD).

    PubMed

    Peter, Varghese; Wong, Kogo; Narne, Vijaya Kumar; Sharma, Mridula; Purdy, Suzanne C; McMahon, Catherine

    2014-02-01

    There are many clinically available tests for the assessment of auditory processing skills in children and adults. However, limited data are available on maturational effects on performance on these tests. The current study investigated maturational effects on auditory processing abilities using three psychophysical measures: temporal modulation transfer function (TMTF), iterated ripple noise (IRN) perception, and spectral ripple discrimination (SRD). A cross-sectional study. Three groups of subjects were tested: 10 adults (18-30 yr), 10 older children (12-18 yr), and 10 young children (8-11 yr). Temporal envelope processing was measured by obtaining thresholds for amplitude modulation detection as a function of modulation frequency (TMTF; 4, 8, 16, 32, 64, and 128 Hz). Temporal fine structure processing was measured using IRN, and spectral processing was measured using SRD. The results showed that young children had significantly higher modulation thresholds at 4 Hz (TMTF) compared to the other two groups and poorer SRD scores compared to adults. The results on IRN did not differ across groups. The results suggest that different aspects of auditory processing mature at different age periods and these maturational effects need to be considered when assessing auditory processing in children. American Academy of Audiology.
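    For readers unfamiliar with the TMTF stimulus, the carrier is typically a broadband noise whose envelope is sinusoidally modulated at the test rate with a variable depth, and threshold is usually expressed as 20*log10(m); a minimal generation sketch (parameter values are arbitrary) follows:

    ```python
    import numpy as np

    def am_noise(duration_s=0.5, fs=44100, mod_rate_hz=8.0, mod_depth=0.5, seed=0):
        """Sinusoidally amplitude-modulated Gaussian noise:
        y(t) = [1 + m * sin(2*pi*fm*t)] * noise(t), scaled to unit peak."""
        rng = np.random.default_rng(seed)
        t = np.arange(int(duration_s * fs)) / fs
        carrier = rng.normal(size=t.size)
        envelope = 1.0 + mod_depth * np.sin(2.0 * np.pi * mod_rate_hz * t)
        y = envelope * carrier
        return y / np.max(np.abs(y))

    stim = am_noise(mod_rate_hz=32.0, mod_depth=0.25)
    print(stim.shape, "modulation depth in dB:", 20 * np.log10(0.25))
    ```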

  14. Clinical applications of the human brainstem responses to auditory stimuli

    NASA Technical Reports Server (NTRS)

    Galambos, R.; Hecox, K.

    1975-01-01

    A technique utilizing the frequency following response (FFR), obtained by auditory stimulation whereby the stimulus frequency and duration are mirror-imaged in the resulting brainwaves, is presented as a clinical tool for hearing disorders in humans of all ages. Various medical studies are discussed to support the clinical value of the technique. The discovery and origin of the FFR and another significant brainstem auditory response involved in studying the eighth nerve are also discussed.

  15. Discrimination, other psychosocial stressors, and self-reported sleep duration and difficulties.

    PubMed

    Slopen, Natalie; Williams, David R

    2014-01-01

    To advance understanding of the relationship between discrimination and sleep duration and difficulties, with consideration of multiple dimensions of discrimination, and attention to concurrent stressors; and to examine the contribution of discrimination and other stressors to racial/ethnic differences in these outcomes. Cross-sectional probability sample. Chicago, IL. There were 2,983 black, Hispanic, and white adults. Outcomes included self-reported sleep duration and difficulties. Discrimination, including racial and nonracial everyday and major experiences of discrimination, workplace harassment and incivilities, and other stressors were assessed via questionnaire. In models adjusted for sociodemographic characteristics, greater exposure to racial (β = -0.14) and nonracial (β = -0.08) everyday discrimination, major experiences of discrimination attributed to race/ethnicity (β = -0.17), and workplace harassment and incivilities (β = -0.14) were associated with shorter sleep (P < 0.05). The association between major experiences of discrimination attributed to race/ethnicity and sleep duration (β = -0.09, P < 0.05) was independent of concurrent stressors (i.e., acute events, childhood adversity, and financial, community, employment, and relationship stressors). Racial (β = 0.04) and non-racial (β = 0.05) everyday discrimination and racial (β = 0.04) and nonracial (β = 0.04) major experiences of discrimination, and workplace harassment and incivilities (β = 0.04) were also associated with more (log) sleep difficulties, and associations between racial and nonracial everyday discrimination and sleep difficulties remained after adjustment for other stressors (P < 0.05). Racial/ethnic differences in sleep duration and difficulties were not significant after adjustment for discrimination (P > 0.05). Discrimination was associated with shorter sleep and more sleep difficulties, independent of socioeconomic status and other stressors, and may account for some of the racial/ethnic differences in sleep.

  16. Altered Brain Functional Activity in Infants with Congenital Bilateral Severe Sensorineural Hearing Loss: A Resting-State Functional MRI Study under Sedation.

    PubMed

    Xia, Shuang; Song, TianBin; Che, Jing; Li, Qiang; Chai, Chao; Zheng, Meizhu; Shen, Wen

    2017-01-01

    Early hearing deprivation can affect the development of auditory, language, and visual abilities. Insufficient or absent stimulation of the auditory cortex during sensitive periods of plasticity could affect the development of hearing, language, and vision function. Twenty-three infants with congenital severe sensorineural hearing loss (CSSHL) and 17 age- and sex-matched normal-hearing subjects were recruited. The amplitude of low-frequency fluctuations (ALFF) and regional homogeneity (ReHo) of auditory, language, and vision-related brain areas were compared between deaf infants and normal subjects. Compared with normal-hearing subjects, decreased ALFF and ReHo were observed in auditory and language-related cortex. Increased ALFF and ReHo were observed in vision-related cortex, suggesting that hearing and language function were impaired and visual function was enhanced as a result of the hearing loss. ALFF of left Brodmann area 45 (BA45) was negatively correlated with deafness duration in infants with CSSHL, and ALFF of right BA39 was positively correlated with deafness duration. In conclusion, ALFF and ReHo can reflect abnormal brain function in language, auditory, and visual information processing in infants with CSSHL. This demonstrates that the development of auditory, language, and visual processing function is already affected by congenital severe sensorineural hearing loss before 4 years of age.

  17. Effects of Frequency Separation and Diotic/Dichotic Presentations on the Alternation Frequency Limits in Audition Derived from a Temporal Phase Discrimination Task.

    PubMed

    Kanaya, Shoko; Fujisaki, Waka; Nishida, Shin'ya; Furukawa, Shigeto; Yokosawa, Kazuhiko

    2015-02-01

    Temporal phase discrimination is a useful psychophysical task for evaluating how sensory signals, synchronously detected in parallel, are perceptually bound by human observers. In this task two stimulus sequences synchronously alternate between two states (say, A-B-A-B and X-Y-X-Y) in either of two temporal phases (i.e., A and B are respectively paired with X and Y, or vice versa). The critical alternation frequency beyond which participants cannot discriminate the temporal phase is measured as an index characterizing the temporal property of the underlying binding process. This task has been used to reveal the mechanisms underlying visual and cross-modal bindings. To compare these binding mechanisms directly with those in another modality, this study used the temporal phase discrimination task to reveal the processes underlying auditory binding. The two sequences were alternations between two pitches. We manipulated the distance between the two sequences by changing the intersequence frequency separation or the presentation ears (diotic vs dichotic). Results showed that the alternation frequency limit ranged from 7 to 30 Hz, becoming higher as the intersequence distance decreased, as is the case with vision. Unlike in vision, however, auditory phase discrimination limits were higher and more variable across participants. © 2015 SAGE Publications.
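    The stimulus logic of the task, two sequences that alternate synchronously between two states in either the same or the opposite temporal phase, can be sketched as follows. The frequencies, the 4 Hz alternation rate, and the sample rate are illustrative choices, not the study's parameters:

    ```python
    # Sketch of temporal phase discrimination stimuli: two tone sequences
    # alternate synchronously between two states, either in phase (A with X,
    # B with Y) or in anti-phase. All parameter values are illustrative only.
    import numpy as np

    def alternating_sequence(f_lo, f_hi, alt_hz, dur_s, fs=48000, flip=False):
        t = np.arange(int(dur_s * fs)) / fs
        # square wave selecting which of the two states is active at each sample
        state = (np.floor(t * alt_hz * 2) % 2).astype(int)
        if flip:
            state = 1 - state                 # opposite temporal phase
        freq = np.where(state == 0, f_lo, f_hi)
        # phase-continuous tone whose frequency follows the state sequence
        return np.sin(2 * np.pi * np.cumsum(freq) / fs)

    seq1 = alternating_sequence(400, 600, alt_hz=4, dur_s=1.0)              # A-B-A-B
    seq2_inphase = alternating_sequence(1000, 1500, 4, 1.0, flip=False)     # X-Y-X-Y
    seq2_antiphase = alternating_sequence(1000, 1500, 4, 1.0, flip=True)    # Y-X-Y-X
    stimulus = seq1 + seq2_inphase   # on other trials seq1 + seq2_antiphase is used;
                                     # the listener reports which phase relation was played
    ```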

  18. EEG Mu (µ) rhythm spectra and oscillatory activity differentiate stuttering from non-stuttering adults.

    PubMed

    Saltuklaroglu, Tim; Harkrider, Ashley W; Thornton, David; Jenson, David; Kittilstved, Tiffani

    2017-06-01

    Stuttering is linked to sensorimotor deficits related to internal modeling mechanisms. This study compared spectral power and oscillatory activity of EEG mu (μ) rhythms between persons who stutter (PWS) and controls in listening and auditory discrimination tasks. EEG data were analyzed from passive listening in noise and accurate (same/different) discrimination of tones or syllables in quiet and noisy backgrounds. Independent component analysis identified left and/or right μ rhythms with characteristic alpha (α) and beta (β) peaks localized to premotor/motor regions in 23 of 27 PWS and 24 of 27 controls. PWS produced μ spectra with reduced β amplitudes across conditions, suggesting reduced forward modeling capacity. Group time-frequency differences were associated with noisy conditions only. PWS showed increased μ-β desynchronization when listening to noise and early in discrimination events, suggesting heightened motor activity that might be related to forward modeling deficits. PWS also showed reduced μ-α synchronization in discrimination conditions, indicating reduced sensory gating. Together, these findings indicate that spectral and oscillatory analyses of μ rhythms are sensitive to stuttering. More specifically, they can reveal stuttering-related sensorimotor processing differences in listening and auditory discrimination that may also be influenced by basal ganglia deficits. Copyright © 2017 Elsevier Inc. All rights reserved.
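    The μ-band synchronization and desynchronization measures discussed above are conventionally computed as band-limited power expressed relative to a pre-stimulus baseline. A minimal single-trial sketch, with a generic alpha band, sampling rate, and baseline window rather than the study's settings:

    ```python
    # Sketch of an event-related (de)synchronization measure: band-limited
    # power from a Hilbert envelope, expressed as percent change from a
    # pre-stimulus baseline. All parameter values are generic assumptions.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def erd_ers(epoch, fs=500, band=(8, 13), baseline_s=(0.0, 0.5)):
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        power = np.abs(hilbert(filtfilt(b, a, epoch))) ** 2
        b0, b1 = (int(s * fs) for s in baseline_s)
        base = power[b0:b1].mean()
        return 100 * (power - base) / base   # negative values = desynchronization

    rng = np.random.default_rng(1)
    epoch = rng.standard_normal(2 * 500)      # one 2 s epoch sampled at 500 Hz
    print(erd_ers(epoch).mean())
    ```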

  19. Current understanding of auditory neuropathy.

    PubMed

    Boo, Nem-Yun

    2008-12-01

    Auditory neuropathy is defined by the presence of normal evoked otoacoustic emissions (OAE) and absent or abnormal auditory brainstem responses (ABR). The sites of lesion could be the cochlear inner hair cells, the spiral ganglion cells of the cochlea, the synapse between the inner hair cells and the auditory nerve, or the auditory nerve itself. Genetic, infectious, or neonatal/perinatal insults are the three most commonly identified underlying causes. Children usually present with delay in speech and language development, while adult patients present with hearing loss and disproportionately poor speech discrimination for the degree of hearing loss. Although cochlear implantation is the treatment of choice, current evidence shows that it benefits only those patients with endocochlear lesions, not those with cochlear nerve deficiency or central nervous system disorders. As auditory neuropathy is a disorder with potential long-term impact on a child's development, early hearing screening using both OAE and ABR should be carried out on all newborns and infants to allow early detection and intervention.

  20. Psychoacoustics

    NASA Astrophysics Data System (ADS)

    Moore, Brian C. J.

    Psychoacoustics is concerned with the relationships between the physical characteristics of sounds and their perceptual attributes. This chapter describes: the absolute sensitivity of the auditory system for detecting weak sounds and how that sensitivity varies with frequency; the frequency selectivity of the auditory system (the ability to resolve or hear out the sinusoidal components in a complex sound) and its characterization in terms of an array of auditory filters; the processes that influence the masking of one sound by another; the range of sound levels that can be processed by the auditory system; the perception and modeling of loudness; level discrimination; the temporal resolution of the auditory system (the ability to detect changes over time); the perception and modeling of pitch for pure and complex tones; the perception of timbre for steady and time-varying sounds; the perception of space and sound localization; and the mechanisms underlying auditory scene analysis that allow the construction of percepts corresponding to individual sound sources when listening to complex mixtures of sounds.

  1. Emergent selectivity for task-relevant stimuli in higher-order auditory cortex

    PubMed Central

    Atiani, Serin; David, Stephen V.; Elgueda, Diego; Locastro, Michael; Radtke-Schuller, Susanne; Shamma, Shihab A.; Fritz, Jonathan B.

    2014-01-01

    A variety of attention-related effects have been demonstrated in primary auditory cortex (A1). However, an understanding of the functional role of higher auditory cortical areas in guiding attention to acoustic stimuli has been elusive. We recorded from neurons in two tonotopic cortical belt areas in the dorsal posterior ectosylvian gyrus (dPEG) of ferrets trained on a simple auditory discrimination task. Neurons in dPEG showed similar basic auditory tuning properties to A1, but during behavior we observed marked differences between these areas. In the belt areas, changes in neuronal firing rate and response dynamics greatly enhanced responses to target stimuli relative to distractors, allowing for greater attentional selection during active listening. Consistent with existing anatomical evidence, the pattern of sensory tuning and behavioral modulation in auditory belt cortex links the spectro-temporal representation of the whole acoustic scene in A1 to a more abstracted representation of task-relevant stimuli observed in frontal cortex. PMID:24742467

  2. Ups and Downs in Auditory Development: Preschoolers' Sensitivity to Pitch Contour and Timbre.

    PubMed

    Creel, Sarah C

    2016-03-01

    Much research has explored developing sound representations in language, but less work addresses developing representations of other sound patterns. This study examined preschool children's musical representations using two different tasks: discrimination and sound-picture association. Melodic contour--a musically relevant property--and instrumental timbre, which is (arguably) less musically relevant, were tested. In Experiment 1, children failed to associate cartoon characters to melodies with maximally different pitch contours, with no advantage for melody preexposure. Experiment 2 also used different-contour melodies and found good discrimination, whereas association was at chance. Experiment 3 replicated Experiment 2, but with a large timbre change instead of a contour change. Here, discrimination and association were both excellent. Preschool-aged children may have stronger or more durable representations of timbre than contour, particularly in more difficult tasks. Reasons for weaker association of contour than timbre information are discussed, along with implications for auditory development. Copyright © 2015 Cognitive Science Society, Inc.

  3. Evidence for distinct human auditory cortex regions for sound location versus identity processing

    PubMed Central

    Ahveninen, Jyrki; Huang, Samantha; Nummenmaa, Aapo; Belliveau, John W.; Hung, An-Yi; Jääskeläinen, Iiro P.; Rauschecker, Josef P.; Rossi, Stephanie; Tiitinen, Hannu; Raij, Tommi

    2014-01-01

    Neurophysiological animal models suggest that anterior auditory cortex (AC) areas process sound-identity information, whereas posterior ACs specialize in sound location processing. In humans, inconsistent neuroimaging results and insufficient causal evidence have challenged the existence of such parallel AC organization. Here we transiently inhibit bilateral anterior or posterior AC areas using MRI-guided paired-pulse transcranial magnetic stimulation (TMS) while subjects listen to Reference/Probe sound pairs and perform either sound location or identity discrimination tasks. The targeting of TMS pulses, delivered 55–145 ms after Probes, is confirmed with individual-level cortical electric-field estimates. Our data show that TMS to posterior AC regions delays reaction times (RT) significantly more during sound location than identity discrimination, whereas TMS to anterior AC regions delays RTs significantly more during sound identity than location discrimination. This double dissociation provides direct causal support for parallel processing of sound identity features in anterior AC and sound location in posterior AC. PMID:24121634

  4. Enhanced perception of pitch changes in speech and music in early blind adults.

    PubMed

    Arnaud, Laureline; Gracco, Vincent; Ménard, Lucie

    2018-06-12

    It is well known that congenitally blind adults show enhanced auditory processing for some tasks; for instance, they show supra-normal capacity to perceive accelerated speech. However, only a few studies have investigated basic auditory processing in this population. In this study, we investigated whether pitch processing enhancement in the blind is a domain-general or domain-specific phenomenon, and whether pitch processing shares the same properties as in the sighted regarding how scores from different domains are associated. Fifteen congenitally blind adults and fifteen sighted adults participated in the study. We first created a set of personalized native and non-native vowel stimuli using an identification and rating task. Then, an adaptive discrimination paradigm was used to determine the frequency difference limen for pitch direction identification of speech (native and non-native vowels) and non-speech stimuli (musical instruments and pure tones). The results show that the blind participants had better (lower) discrimination thresholds than controls for native vowels, music stimuli, and pure tones. Within the blind group, discrimination thresholds were smaller for musical stimuli than for speech stimuli, replicating previous findings in sighted participants; this effect was not found in the current control group. Further analyses indicate that older sighted participants show higher thresholds for instrument sounds than for speech sounds, an effect of age that was not found in the blind group. Moreover, the scores across domains were not associated to the same extent in the blind as they were in the sighted. In conclusion, in addition to providing further evidence of compensatory auditory mechanisms in early blind individuals, our results point to differences in how auditory processing is modulated in this population. Copyright © 2018 Elsevier Ltd. All rights reserved.
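    A frequency difference limen of the kind measured here is typically estimated with an adaptive staircase. The sketch below implements a generic 2-down/1-up rule with textbook step sizes and a simulated listener; it is not the specific adaptive procedure used in the study:

    ```python
    # Sketch of a 2-down/1-up adaptive staircase for a frequency difference limen.
    # Step sizes, starting difference, and reversal rule are generic choices.
    import random

    def two_down_one_up(respond, start=50.0, factor=0.5, n_reversals=8):
        """respond(delta_hz) -> True when the listener answers correctly."""
        delta, correct_in_row, direction = start, 0, 0
        reversals = []
        while len(reversals) < n_reversals:
            if respond(delta):
                correct_in_row += 1
                if correct_in_row == 2:          # two correct in a row: make it harder
                    correct_in_row = 0
                    if direction == +1:
                        reversals.append(delta)  # easier-to-harder turnaround
                    direction = -1
                    delta *= factor
            else:
                correct_in_row = 0               # any error: make it easier
                if direction == -1:
                    reversals.append(delta)      # harder-to-easier turnaround
                direction = +1
                delta /= factor
        return sum(reversals[2:]) / len(reversals[2:])   # mean of later reversals

    # Simulated listener whose true limen is about 5 Hz (illustration only)
    threshold = two_down_one_up(lambda d: d > 5 or random.random() < 0.5)
    print(f"estimated difference limen ~{threshold:.1f} Hz")
    ```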

  5. The modulation rate transfer function of a harbour porpoise (Phocoena phocoena).

    PubMed

    Linnenschmidt, Meike; Wahlberg, Magnus; Damsgaard Hansen, Janni

    2013-02-01

    During echolocation, toothed whales produce ultrasonic clicks at extremely rapid rates and listen for the returning echoes. The auditory brainstem response (ABR) duration was evaluated in terms of latency between single peaks: 5.5 ms (from peak I to VII), 3.4 ms (I-VI), and 1.4 ms (II-IV). In comparison to the killer whale and the bottlenose dolphin, the ABR of the harbour porpoise has shorter intervals between the peaks and consequently a shorter ABR duration. This indicates that ABR duration and peak latencies are possibly related to the relative size of the auditory structures of the central nervous system and thus to the animal's size. The ABR to a sinusoidally amplitude-modulated stimulus at 125 kHz (sensitivity threshold 63 dB re 1 μPa rms) was evaluated to determine the modulation rate transfer function of the harbour porpoise. The ABR showed distinct envelope-following responses up to a modulation rate of 1,900 Hz. The corresponding calculated equivalent rectangular duration of 263 μs indicates good temporal resolution in the harbour porpoise auditory system, similar to that of the bottlenose dolphin. The results explain how the harbour porpoise can follow clicks and echoes during echolocation with very short inter-click intervals.
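    The modulation rate transfer function is measured with sinusoidally amplitude-modulated (SAM) stimuli such as the 125 kHz carrier described above. A minimal sketch of SAM stimulus generation, with an illustrative sample rate, duration, and modulation depth:

    ```python
    # Sketch of a sinusoidally amplitude-modulated (SAM) stimulus: a 125 kHz
    # carrier modulated at a given rate. The 500 kHz sample rate, duration,
    # and 100% modulation depth are illustrative values only.
    import numpy as np

    def sam_tone(carrier_hz=125_000, mod_hz=1900, dur_s=0.01, fs=500_000, depth=1.0):
        t = np.arange(int(dur_s * fs)) / fs
        envelope = 1 + depth * np.sin(2 * np.pi * mod_hz * t)   # modulation envelope
        return envelope * np.sin(2 * np.pi * carrier_hz * t)

    stim = sam_tone()   # the envelope-following ABR is measured in response to this stimulus
    ```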

  6. Developmental hearing loss impedes auditory task learning and performance in gerbils.

    PubMed

    von Trapp, Gardiner; Aloni, Ishita; Young, Stephen; Semple, Malcolm N; Sanes, Dan H

    2017-04-01

    The consequences of developmental hearing loss have been reported to include both sensory and cognitive deficits. To investigate these issues in a non-human model, auditory learning and asymptotic psychometric performance were compared between normal-hearing (NH) adult gerbils and those reared with conductive hearing loss (CHL). At postnatal day 10, before ear canal opening, gerbil pups underwent bilateral malleus removal to induce a permanent CHL. Both CHL and control animals were trained to approach a water spout upon presentation of a target (Go stimuli) and to withhold for foils (Nogo stimuli). To assess the rate of task acquisition and asymptotic performance, animals were tested on an amplitude modulation (AM) rate discrimination task. Behavioral performance was calculated using a signal detection theory framework. Animals reared with developmental CHL displayed a slower rate of task acquisition for the AM discrimination task. Slower acquisition was explained by an impaired ability to generalize to newly introduced stimuli, as compared to controls. Measurement of discrimination thresholds across consecutive testing blocks revealed that CHL animals required a greater number of testing sessions to reach asymptotic threshold values, as compared to controls. However, with sufficient training, CHL animals approached control performance. These results indicate that a sensory impediment can delay auditory learning and increase the risk of poor performance on a temporal task. Copyright © 2016 Elsevier B.V. All rights reserved.
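    Scoring Go/Nogo performance in a signal detection theory framework usually amounts to computing d' from hit and false-alarm rates. A minimal sketch with a standard correction for extreme rates; the trial counts are invented:

    ```python
    # Sketch of d' for a Go/Nogo task, computed from hit and false-alarm rates.
    # The correction avoids infinite z-scores when a rate is exactly 0 or 1.
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        n_go, n_nogo = hits + misses, false_alarms + correct_rejections
        hit_rate = min(max(hits / n_go, 0.5 / n_go), 1 - 0.5 / n_go)
        fa_rate = min(max(false_alarms / n_nogo, 0.5 / n_nogo), 1 - 0.5 / n_nogo)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Invented trial counts for illustration
    print(d_prime(hits=45, misses=5, false_alarms=12, correct_rejections=38))
    ```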

  7. The influence of bat echolocation call duration and timing on auditory encoding of predator distance in noctuoid moths.

    PubMed

    Gordon, Shira D; Ter Hofstede, Hannah M

    2018-03-22

    Animals co-occur with multiple predators, making sensory systems that can encode information about diverse predators advantageous. Moths in the families Noctuidae and Erebidae have ears with two auditory receptor cells (A1 and A2) used to detect the echolocation calls of predatory bats. Bat communities contain species that vary in echolocation call duration, and the dynamic range of A1 is limited by the duration of sound, suggesting that A1 provides less information about bats with shorter echolocation calls. To test this hypothesis, we obtained intensity-response functions for both receptor cells across many moth species for sound pulse durations representing the range of echolocation call durations produced by bat species in northeastern North America. We found that the threshold and dynamic range of both cells varied with sound pulse duration. The number of A1 action potentials per sound pulse increases linearly with increasing amplitude for long-duration pulses, saturating near the A2 threshold. For short sound pulses, however, A1 saturates with only a few action potentials per pulse at amplitudes far lower than the A2 threshold for both single sound pulses and pulse sequences typical of searching or approaching bats. Neural adaptation was only evident in response to approaching bat sequences at high amplitudes, not search-phase sequences. These results show that, for short echolocation calls, a large range of sound levels cannot be coded by moth auditory receptor activity, resulting in no information about the distance of a bat, although differences in activity between ears might provide information about direction. © 2018. Published by The Company of Biologists Ltd.

  8. Stimulus Intensity and the Perception of Duration

    ERIC Educational Resources Information Center

    Matthews, William J.; Stewart, Neil; Wearden, John H.

    2011-01-01

    This article explores the widely reported finding that the subjective duration of a stimulus is positively related to its magnitude. In Experiments 1 and 2 we show that, for both auditory and visual stimuli, the effect of stimulus magnitude on the perception of duration depends upon the background: Against a high intensity background, weak stimuli…

  9. Temporal processing dysfunction in schizophrenia.

    PubMed

    Carroll, Christine A; Boggs, Jennifer; O'Donnell, Brian F; Shekhar, Anantha; Hetrick, William P

    2008-07-01

    Schizophrenia may be associated with a fundamental disturbance in the temporal coordination of information processing in the brain, leading to classic symptoms of schizophrenia such as thought disorder and disorganized and contextually inappropriate behavior. Despite the growing interest in, and centrality of, time-dependent conceptualizations of the pathophysiology of schizophrenia, there remains a paucity of research directly examining overt timing performance in the disorder. Accordingly, the present study investigated timing in schizophrenia using a well-established task of time perception. Twenty-three individuals with schizophrenia and 22 non-psychiatric control participants completed a temporal bisection task, which required participants to make temporal judgments about auditorily and visually presented durations ranging from 300 to 600 ms. Both the schizophrenia and control groups displayed greater visual than auditory timing variability, with no difference between groups in the visual modality. However, individuals with schizophrenia exhibited less temporal precision than controls in the perception of auditory durations. These findings correlated with parameter estimates obtained from a quantitative model of time estimation and provide evidence of a fundamental deficit in temporal auditory precision in schizophrenia.
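    Temporal bisection data are conventionally analyzed by fitting a psychometric function to the proportion of "long" responses and reading off the bisection point and slope (a precision index). A minimal sketch with invented response proportions, not the study's data:

    ```python
    # Sketch of a temporal bisection analysis: fit a logistic psychometric
    # function to the proportion of "long" responses across probe durations.
    # The response proportions below are invented for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    durations = np.array([300, 350, 400, 450, 500, 550, 600])      # ms
    p_long = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.92, 0.97])  # hypothetical data

    def logistic(t, bisection, slope):
        return 1.0 / (1.0 + np.exp(-(t - bisection) / slope))

    (bisection, slope), _ = curve_fit(logistic, durations, p_long, p0=(450, 50))
    print(f"bisection point ~{bisection:.0f} ms, slope (precision index) ~{slope:.0f} ms")
    ```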

  10. Reduced auditory efferent activity in childhood selective mutism.

    PubMed

    Bar-Haim, Yair; Henkin, Yael; Ari-Even-Roth, Daphne; Tetin-Schneider, Simona; Hildesheimer, Minka; Muchnik, Chava

    2004-06-01

    Selective mutism is a psychiatric disorder of childhood characterized by consistent inability to speak in specific situations despite the ability to speak normally in others. The objective of this study was to test whether auditory efferent activity, which may have a direct bearing on speaking behavior, is compromised in selectively mute children. Participants were 16 children with selective mutism and 16 normally developing control children matched for age and gender. All children were tested for pure-tone audiometry, speech reception thresholds, speech discrimination, middle-ear acoustic reflex thresholds and decay function, transient evoked otoacoustic emission, suppression of transient evoked otoacoustic emission, and auditory brainstem response. Compared with control children, selectively mute children displayed specific deficiencies in auditory efferent activity. These aberrations in efferent activity appear alongside normal pure-tone and speech audiometry and normal brainstem transmission, as indicated by auditory brainstem response latencies. The diminished auditory efferent activity detected in some children with selective mutism may result in desensitization of their auditory pathways by self-vocalization and in reduced control of masking and distortion of incoming speech sounds. These children may gradually learn to restrict vocalization to the minimal amount possible in contexts that require complex auditory processing.

  11. From Perception to Metacognition: Auditory and Olfactory Functions in Early Blind, Late Blind, and Sighted Individuals

    PubMed Central

    Cornell Kärnekull, Stina; Arshamian, Artin; Nilsson, Mats E.; Larsson, Maria

    2016-01-01

    Although evidence is mixed, studies have shown that blind individuals perform better than sighted individuals at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests of absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated a blind-over-sighted advantage in auditory episodic recognition that was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in episodic auditory recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity. PMID:27729884

  12. A Melodic Contour Repeatedly Experienced by Human Near-Term Fetuses Elicits a Profound Cardiac Reaction One Month after Birth

    PubMed Central

    Granier-Deferre, Carolyn; Bassereau, Sophie; Ribeiro, Aurélie; Jacquet, Anne-Yvonne; DeCasper, Anthony J.

    2011-01-01

    Background Human hearing develops progressively during the last trimester of gestation. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, and process complex auditory streams. Fetal and neonatal studies show that they can remember frequently recurring sounds. However, existing data can only show retention intervals up to several days after birth. Methodology/Principal Findings Here we show that auditory memories can last at least six weeks. Experimental fetuses were given precisely controlled exposure to a descending piano melody twice daily during the 35th, 36th, and 37th weeks of gestation. Six weeks later we assessed the cardiac responses of 25 exposed infants and 25 naive control infants, while in quiet sleep, to the descending melody and to an ascending control piano melody. The melodies had precisely inverse contours but similar spectra, identical duration, tempo, and rhythm, and thus almost identical amplitude envelopes. All infants displayed a significant heart rate change. In exposed infants, the descending melody evoked a cardiac deceleration that was twice as large as the decelerations elicited by the ascending melody and by both melodies in control infants. Conclusions/Significance Thus, 3 weeks of prenatal exposure to a specific melodic contour affects infants' auditory processing or perception, i.e., impacts the autonomic nervous system at least six weeks later, when infants are 1 month old. Our results extend the retention interval over which a prenatally acquired memory of a specific sound stream can be observed from 3-4 days to six weeks. The long-term memory for the descending melody is interpreted in terms of enduring neurophysiological tuning, and its significance for the developmental psychobiology of attention and perception, including early speech perception, is discussed. PMID:21383836

  13. Right Occipital Cortex Activation Correlates with Superior Odor Processing Performance in the Early Blind

    PubMed Central

    Grandin, Cécile B.; Dricot, Laurence; Plaza, Paula; Lerens, Elodie; Rombaux, Philippe; De Volder, Anne G.

    2013-01-01

    Using functional magnetic resonance imaging (fMRI) in ten early blind humans, we found robust occipital activation during two odor-processing tasks (discrimination or categorization of fruit and flower odors), as well as during control auditory-verbal conditions (discrimination or categorization of fruit and flower names). We also found evidence for reorganization and specialization of the ventral part of the occipital cortex, with dissociation according to stimulus modality: the right fusiform gyrus was most activated during olfactory conditions while part of the left ventral lateral occipital complex showed a preference for auditory-verbal processing. Only little occipital activation was found in sighted subjects, but the same right-olfactory/left-auditory-verbal hemispheric lateralization was found overall in their brain. This difference between the groups was mirrored by superior performance of the blind in various odor-processing tasks. Moreover, the level of right fusiform gyrus activation during the olfactory conditions was highly correlated with individual scores in a variety of odor recognition tests, indicating that the additional occipital activation may play a functional role in odor processing. PMID:23967263

  14. Language impairment is reflected in auditory evoked fields.

    PubMed

    Pihko, Elina; Kujala, Teija; Mickos, Annika; Alku, Paavo; Byring, Roger; Korkman, Marit

    2008-05-01

    Specific language impairment (SLI) is diagnosed when a child has problems in producing or understanding language despite having a normal IQ and there being no other obvious explanation. There can be several associated problems, and no single underlying cause has yet been identified. Some theories propose problems in auditory processing, specifically in the discrimination of sound frequency or rapid temporal frequency changes. We compared automatic cortical speech-sound processing and discrimination between a group of children with SLI and control children with normal language development (mean age: 6.6 years; range: 5-7 years). We measured auditory evoked magnetic fields using two sets of CV syllables, one with a changing consonant /da/ba/ga/ and another one with a changing vowel /su/so/sy/ in an oddball paradigm. The P1m responses for onsets of repetitive stimuli were weaker in the SLI group whereas no significant group differences were found in the mismatch responses. The results indicate that the SLI group, having weaker responses to the onsets of sounds, might have slightly depressed sensory encoding.

  15. Speech processing and production in two-year-old children acquiring isiXhosa: A tale of two children

    PubMed Central

    Rossouw, Kate; Fish, Laura; Jansen, Charne; Manley, Natalie; Powell, Michelle; Rosen, Loren

    2016-01-01

    We investigated the speech processing and production of 2-year-old children acquiring isiXhosa in South Africa. Two children (2 years, 5 months; 2 years, 8 months) are presented as single cases. Speech input processing, stored phonological knowledge and speech output are described, based on data from auditory discrimination, naming, and repetition tasks. Both children were approximating adult levels of accuracy in their speech output, although naming was constrained by vocabulary. Performance across tasks was variable: One child showed a relative strength with repetition, and experienced most difficulties with auditory discrimination. The other performed equally well in naming and repetition, and obtained 100% for her auditory task. There is limited data regarding typical development of isiXhosa, and the focus has mainly been on speech production. This exploratory study describes typical development of isiXhosa using a variety of tasks understood within a psycholinguistic framework. We describe some ways in which speech and language therapists can devise and carry out assessment with children in situations where few formal assessments exist, and also detail the challenges of such work. PMID:27245131

  16. Auditory Information Systems in Military Aircraft: Current Configurations Versus the State of the Art.

    DTIC Science & Technology

    1984-06-01

    types of conditions, discriminable differences in intensity, pitch, or use of beats or harmonics shall be provided. If absolute discrimination is...shall be directed to the operator’s headset as well as to the work area. Binaural headsets should not be used in any operational environment below 85...signals are to be used to alert an operator to different types of conditions, discriminable difference in Intensity, pitch, or use of beats and harmonics

  17. Monkey׳s short-term auditory memory nearly abolished by combined removal of the rostral superior temporal gyrus and rhinal cortices.

    PubMed

    Fritz, Jonathan B; Malloy, Megan; Mishkin, Mortimer; Saunders, Richard C

    2016-06-01

    While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30 to 40s to a duration of ~1 to 2s, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  18. EXPERIMENTAL ANALYSIS OF THE CONTROL OF SPEECH PRODUCTION AND PERCEPTION. PROGRESS REPORT NO. I, FEBRUARY 1--SEPTEMBER 1, 1961.

    ERIC Educational Resources Information Center

    LANE, HARLAN; AND OTHERS

    THIS DOCUMENT IS THE FIRST IN A SERIES REPORTING ON PROGRESS OF AN EXPERIMENTAL RESEARCH PROGRAM IN SPEECH CONTROL. THE TOPICS DISCUSSED ARE--(1) THE DISCONTINUITY OF AUDITORY DISCRIMINATION LEARNING IN HUMAN ADULTS, (2) DISCRIMINATIVE CONTROL OF CONCURRENT RESPONSES--THE RELATIONS AMONG RESPONSE FREQUENCY, LATENCY, AND TOPOGRAPHY IN AUDITORY…

  19. Do informal musical activities shape auditory skill development in preschool-age children?

    PubMed

    Putkinen, Vesa; Saarikivi, Katri; Tervaniemi, Mari

    2013-08-29

    The influence of formal musical training on auditory cognition has been well established. For the majority of children, however, musical experience does not primarily consist of adult-guided training on a musical instrument. Instead, young children mostly engage in everyday musical activities such as singing and musical play. Here, we review recent electrophysiological and behavioral studies carried out in our laboratory and elsewhere which have begun to map how developing auditory skills are shaped by such informal musical activities both at home and in playschool-type settings. Although more research is still needed, the evidence emerging from these studies suggests that, in addition to formal musical training, informal musical activities can also influence the maturation of auditory discrimination and attention in preschool-aged children.

  20. Do informal musical activities shape auditory skill development in preschool-age children?

    PubMed Central

    Putkinen, Vesa; Saarikivi, Katri; Tervaniemi, Mari

    2013-01-01

    The influence of formal musical training on auditory cognition has been well established. For the majority of children, however, musical experience does not primarily consist of adult-guided training on a musical instrument. Instead, young children mostly engage in everyday musical activities such as singing and musical play. Here, we review recent electrophysiological and behavioral studies carried out in our laboratory and elsewhere which have begun to map how developing auditory skills are shaped by such informal musical activities both at home and in playschool-type settings. Although more research is still needed, the evidence emerging from these studies suggests that, in addition to formal musical training, informal musical activities can also influence the maturation of auditory discrimination and attention in preschool-aged children. PMID:24009597

  1. Auditory Masking Effects on Speech Fluency in Apraxia of Speech and Aphasia: Comparison to Altered Auditory Feedback

    PubMed Central

    Haley, Katarina L.

    2015-01-01

    Purpose To study the effects of masked auditory feedback (MAF) on speech fluency in adults with aphasia and/or apraxia of speech (APH/AOS). We hypothesized that adults with AOS would increase speech fluency when speaking with noise. Altered auditory feedback (AAF; i.e., delayed/frequency-shifted feedback) was included as a control condition not expected to improve speech fluency. Method Ten participants with APH/AOS and 10 neurologically healthy (NH) participants were studied under both feedback conditions. To allow examination of individual responses, we used an ABACA design. Effects were examined on syllable rate, disfluency duration, and vocal intensity. Results Seven of 10 APH/AOS participants increased fluency with masking by increasing rate, decreasing disfluency duration, or both. In contrast, none of the NH participants increased speaking rate with MAF. In the AAF condition, only 1 APH/AOS participant increased fluency. Four APH/AOS participants and 8 NH participants slowed their rate with AAF. Conclusions Speaking with MAF appears to increase fluency in a subset of individuals with APH/AOS, indicating that overreliance on auditory feedback monitoring may contribute to their disorder presentation. The distinction between responders and nonresponders was not linked to AOS diagnosis, so additional work is needed to develop hypotheses for candidacy and underlying control mechanisms. PMID:26363508

  2. Movement goals and feedback and feedforward control mechanisms in speech production

    PubMed Central

    Perkell, Joseph S.

    2010-01-01

    Studies of speech motor control are described that support a theoretical framework in which fundamental control variables for phonemic movements are multi-dimensional regions in auditory and somatosensory spaces. Auditory feedback is used to acquire and maintain auditory goals and in the development and function of feedback and feedforward control mechanisms. Several lines of evidence support the idea that speakers with more acute sensory discrimination acquire more distinct goal regions and therefore produce speech sounds with greater contrast. Feedback modification findings indicate that fluently produced sound sequences are encoded as feedforward commands, and feedback control serves to correct mismatches between expected and produced sensory consequences. PMID:22661828

  3. Movement goals and feedback and feedforward control mechanisms in speech production.

    PubMed

    Perkell, Joseph S

    2012-09-01

    Studies of speech motor control are described that support a theoretical framework in which fundamental control variables for phonemic movements are multi-dimensional regions in auditory and somatosensory spaces. Auditory feedback is used to acquire and maintain auditory goals and in the development and function of feedback and feedforward control mechanisms. Several lines of evidence support the idea that speakers with more acute sensory discrimination acquire more distinct goal regions and therefore produce speech sounds with greater contrast. Feedback modification findings indicate that fluently produced sound sequences are encoded as feedforward commands, and feedback control serves to correct mismatches between expected and produced sensory consequences.

  4. Knockdown of Dyslexia-Gene Dcdc2 Interferes with Speech Sound Discrimination in Continuous Streams.

    PubMed

    Centanni, Tracy Michelle; Booker, Anne B; Chen, Fuyi; Sloan, Andrew M; Carraway, Ryan S; Rennaker, Robert L; LoTurco, Joseph J; Kilgard, Michael P

    2016-04-27

    Dyslexia is the most common developmental language disorder and is marked by deficits in reading and phonological awareness. One theory of dyslexia suggests that the phonological awareness deficit is due to abnormal auditory processing of speech sounds. Variants in DCDC2 and several other neural migration genes are associated with dyslexia and may contribute to auditory processing deficits. In the current study, we tested the hypothesis that RNAi suppression of Dcdc2 in rats causes abnormal cortical responses to sound and impaired speech sound discrimination. In the current study, rats were subjected in utero to RNA interference targeting of the gene Dcdc2 or a scrambled sequence. Primary auditory cortex (A1) responses were acquired from 11 rats (5 with Dcdc2 RNAi; DC-) before any behavioral training. A separate group of 8 rats (3 DC-) were trained on a variety of speech sound discrimination tasks, and auditory cortex responses were acquired following training. Dcdc2 RNAi nearly eliminated the ability of rats to identify specific speech sounds from a continuous train of speech sounds but did not impair performance during discrimination of isolated speech sounds. The neural responses to speech sounds in A1 were not degraded as a function of presentation rate before training. These results suggest that A1 is not directly involved in the impaired speech discrimination caused by Dcdc2 RNAi. This result contrasts earlier results using Kiaa0319 RNAi and suggests that different dyslexia genes may cause different deficits in the speech processing circuitry, which may explain differential responses to therapy. Although dyslexia is diagnosed through reading difficulty, there is a great deal of variation in the phenotypes of these individuals. The underlying neural and genetic mechanisms causing these differences are still widely debated. In the current study, we demonstrate that suppression of a candidate-dyslexia gene causes deficits on tasks of rapid stimulus processing. These animals also exhibited abnormal neural plasticity after training, which may be a mechanism for why some children with dyslexia do not respond to intervention. These results are in stark contrast to our previous work with a different candidate gene, which caused a different set of deficits. Our results shed some light on possible neural and genetic mechanisms causing heterogeneity in the dyslexic population. Copyright © 2016 the authors 0270-6474/16/364895-12$15.00/0.

  5. Knockdown of Dyslexia-Gene Dcdc2 Interferes with Speech Sound Discrimination in Continuous Streams

    PubMed Central

    Booker, Anne B.; Chen, Fuyi; Sloan, Andrew M.; Carraway, Ryan S.; Rennaker, Robert L.; LoTurco, Joseph J.; Kilgard, Michael P.

    2016-01-01

    Dyslexia is the most common developmental language disorder and is marked by deficits in reading and phonological awareness. One theory of dyslexia suggests that the phonological awareness deficit is due to abnormal auditory processing of speech sounds. Variants in DCDC2 and several other neural migration genes are associated with dyslexia and may contribute to auditory processing deficits. In the current study, we tested the hypothesis that RNAi suppression of Dcdc2 in rats causes abnormal cortical responses to sound and impaired speech sound discrimination. In the current study, rats were subjected in utero to RNA interference targeting of the gene Dcdc2 or a scrambled sequence. Primary auditory cortex (A1) responses were acquired from 11 rats (5 with Dcdc2 RNAi; DC−) before any behavioral training. A separate group of 8 rats (3 DC−) were trained on a variety of speech sound discrimination tasks, and auditory cortex responses were acquired following training. Dcdc2 RNAi nearly eliminated the ability of rats to identify specific speech sounds from a continuous train of speech sounds but did not impair performance during discrimination of isolated speech sounds. The neural responses to speech sounds in A1 were not degraded as a function of presentation rate before training. These results suggest that A1 is not directly involved in the impaired speech discrimination caused by Dcdc2 RNAi. This result contrasts earlier results using Kiaa0319 RNAi and suggests that different dyslexia genes may cause different deficits in the speech processing circuitry, which may explain differential responses to therapy. SIGNIFICANCE STATEMENT Although dyslexia is diagnosed through reading difficulty, there is a great deal of variation in the phenotypes of these individuals. The underlying neural and genetic mechanisms causing these differences are still widely debated. In the current study, we demonstrate that suppression of a candidate-dyslexia gene causes deficits on tasks of rapid stimulus processing. These animals also exhibited abnormal neural plasticity after training, which may be a mechanism for why some children with dyslexia do not respond to intervention. These results are in stark contrast to our previous work with a different candidate gene, which caused a different set of deficits. Our results shed some light on possible neural and genetic mechanisms causing heterogeneity in the dyslexic population. PMID:27122044

  6. Performance of normal adults and children on central auditory diagnostic tests and their corresponding visual analogs.

    PubMed

    Bellis, Teri James; Ross, Jody

    2011-09-01

    It has been suggested that, in order to validate a diagnosis of (C)APD (central auditory processing disorder), testing using direct cross-modal analogs should be performed to demonstrate that deficits exist solely or primarily in the auditory modality (McFarland and Cacace, 1995; Cacace and McFarland, 2005). This modality-specific viewpoint is controversial and not universally accepted (American Speech-Language-Hearing Association [ASHA], 2005; Musiek et al, 2005). Further, no such analogs have been developed to date, and neither the feasibility of such testing in normally functioning individuals nor the concurrent validity of cross-modal analogs has been established. The purpose of this study was to investigate the feasibility of cross-modal testing by examining the performance of normal adults and children on four tests of central auditory function and their corresponding visual analogs. In addition, this study investigated the degree to which concurrent validity of auditory and visual versions of these tests could be demonstrated. An experimental repeated measures design was employed. Participants consisted of two groups (adults, n=10; children, n=10) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of auditory/otologic, language, learning, neurologic, or related disorders. Visual analogs of four tests in common clinical use for the diagnosis of (C)APD were developed (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; Duration Patterns [Pinheiro and Musiek, 1985]; and the Random Gap Detection Test [RGDT; Keith, 2000]). Participants underwent two 1 hr test sessions separated by at least 1 wk. Order of sessions (auditory, visual) and tests within each session were counterbalanced across participants. ANOVAs (analyses of variance) were used to examine effects of group, modality, and laterality (for the Dichotic/Dichoptic Digits tests) or response condition (for the auditory and visual Frequency Patterns and Duration Patterns tests). Pearson product-moment correlations were used to investigate relationships between auditory and visual performance. Adults performed significantly better than children on the Dichotic/Dichoptic Digits tests. Results also revealed a significant effect of modality, with auditory better than visual, and a significant modality×laterality interaction, with a right-ear advantage seen for the auditory task and a left-visual-field advantage seen for the visual task. For the Frequency Patterns test and its visual analog, results revealed a significant modality×response condition interaction, with humming better than labeling for the auditory version but the reversed effect for the visual version. For Duration Patterns testing, visual performance was significantly poorer than auditory performance. Due to poor test-retest reliability and ceiling effects for the auditory and visual gap-detection tasks, analyses could not be performed. No cross-modal correlations were observed for any test. Results demonstrated that cross-modal testing is at least feasible using easily accessible computer hardware and software. The lack of any cross-modal correlations suggests independent processing mechanisms for auditory and visual versions of each task. Examination of performance in individuals with central auditory and pan-sensory disorders is needed to determine the utility of cross-modal analogs in the differential diagnosis of (C)APD. American Academy of Audiology.

  7. Conserved mechanisms of vocalization coding in mammalian and songbird auditory midbrain.

    PubMed

    Woolley, Sarah M N; Portfors, Christine V

    2013-11-01

    The ubiquity of social vocalizations among animals provides the opportunity to identify conserved mechanisms of auditory processing that subserve communication. Identifying auditory coding properties that are shared across vocal communicators will provide insight into how human auditory processing leads to speech perception. Here, we compare auditory response properties and neural coding of social vocalizations in auditory midbrain neurons of mammalian and avian vocal communicators. The auditory midbrain is a nexus of auditory processing because it receives and integrates information from multiple parallel pathways and provides the ascending auditory input to the thalamus. The auditory midbrain is also the first region in the ascending auditory system where neurons show complex tuning properties that are correlated with the acoustics of social vocalizations. Single unit studies in mice, bats and zebra finches reveal shared principles of auditory coding including tonotopy, excitatory and inhibitory interactions that shape responses to vocal signals, nonlinear response properties that are important for auditory coding of social vocalizations and modulation tuning. Additionally, single neuron responses in the mouse and songbird midbrain are reliable, selective for specific syllables, and rely on spike timing for neural discrimination of distinct vocalizations. We propose that future research on auditory coding of vocalizations in mouse and songbird midbrain neurons adopt similar experimental and analytical approaches so that conserved principles of vocalization coding may be distinguished from those that are specialized for each species. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Cueing listeners to attend to a target talker progressively improves word report as the duration of the cue-target interval lengthens to 2,000 ms.

    PubMed

    Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin

    2018-04-25

    Endogenous attention is typically studied by presenting instructive cues in advance of a target stimulus array. For endogenous visual attention, task performance improves as the duration of the cue-target interval increases up to 800 ms. Less is known about how endogenous auditory attention unfolds over time or the mechanisms by which an instructive cue presented in advance of an auditory array improves performance. The current experiment used five cue-target intervals (0, 250, 500, 1,000, and 2,000 ms) to compare four hypotheses for how preparatory attention develops over time in a multi-talker listening task. Young adults were cued to attend to a target talker who spoke in a mixture of three talkers. Visual cues indicated the target talker's spatial location or their gender. Participants directed attention to location and gender simultaneously ("objects") at all cue-target intervals. Participants were consistently faster and more accurate at reporting words spoken by the target talker when the cue-target interval was 2,000 ms than 0 ms. In addition, the latency of correct responses progressively shortened as the duration of the cue-target interval increased from 0 to 2,000 ms. These findings suggest that the mechanisms involved in preparatory auditory attention develop gradually over time, taking at least 2,000 ms to reach optimal configuration, yet providing cumulative improvements in speech intelligibility as the duration of the cue-target interval increases from 0 to 2,000 ms. These results demonstrate an improvement in performance for cue-target intervals longer than those that have been reported previously in the visual or auditory modalities.

  9. Automatic auditory processing deficits in schizophrenia and clinical high-risk patients: forecasting psychosis risk with mismatch negativity.

    PubMed

    Perez, Veronica B; Woods, Scott W; Roach, Brian J; Ford, Judith M; McGlashan, Thomas H; Srihari, Vinod H; Mathalon, Daniel H

    2014-03-15

    Only about one third of patients at high risk for psychosis based on current clinical criteria convert to a psychotic disorder within a 2.5-year follow-up period. Targeting clinical high-risk (CHR) individuals for preventive interventions could expose many to unnecessary treatments, underscoring the need to enhance predictive accuracy with nonclinical measures. Candidate measures include event-related potential components with established sensitivity to schizophrenia. Here, we examined the mismatch negativity (MMN) component of the event-related potential elicited automatically by auditory deviance in CHR and early illness schizophrenia (ESZ) patients. We also examined whether MMN predicted subsequent conversion to psychosis in CHR patients. Mismatch negativity to auditory deviants (duration, frequency, and duration + frequency double deviant) was assessed in 44 healthy control subjects, 19 ESZ, and 38 CHR patients. Within CHR patients, 15 converters to psychosis were compared with 16 nonconverters with at least 12 months of clinical follow-up. Hierarchical Cox regression examined the ability of MMN to predict time to psychosis onset in CHR patients. Irrespective of deviant type, MMN was significantly reduced in ESZ and CHR patients relative to healthy control subjects and in CHR converters relative to nonconverters. Mismatch negativity did not significantly differentiate ESZ and CHR patients. The duration + frequency double deviant MMN, but not the single deviant MMNs, significantly predicted the time to psychosis onset in CHR patients. Neurophysiological mechanisms underlying automatic processing of auditory deviance, as reflected by the duration + frequency double deviant MMN, are compromised before psychosis onset and can enhance the prediction of psychosis risk among CHR patients. Copyright © 2014 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
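    MMN amplitude is conventionally quantified from the deviant-minus-standard difference wave averaged over a post-stimulus window. A minimal sketch of that step, with generic epoch dimensions, sampling rate, and window rather than the study's parameters:

    ```python
    # Sketch of how an MMN amplitude is typically quantified: average the
    # deviant-minus-standard difference wave over a post-stimulus window.
    # Epoch sizes, sampling rate, and the 100-250 ms window are generic.
    import numpy as np

    def mmn_amplitude(deviant_epochs, standard_epochs, fs=500, window_s=(0.10, 0.25)):
        """Epochs: (n_trials, n_samples) arrays time-locked to stimulus onset."""
        difference = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
        i0, i1 = (int(s * fs) for s in window_s)
        return difference[i0:i1].mean()        # mean amplitude in the MMN window

    rng = np.random.default_rng(2)
    dev = rng.standard_normal((80, 300))       # 80 deviant trials, 600 ms epochs
    std = rng.standard_normal((400, 300))      # 400 standard trials
    print(mmn_amplitude(dev, std))
    ```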

  10. Effect of delayed auditory feedback on stuttering with and without central auditory processing disorders.

    PubMed

    Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de

    2017-12-07

    To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter with and without central auditory processing disorders. The participants were twenty individuals who stutter, aged 7 to 17 years, divided into two groups: the Stuttering Group with Auditory Processing Disorders (SGAPD), 10 individuals with central auditory processing disorders, and the Stuttering Group (SG), 10 individuals without central auditory processing disorders. Procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), and assessment of stuttering severity and central auditory processing (CAP). Phono Tools software was used to introduce a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. DAF caused a statistically significant reduction in the SG in the frequency score of stuttering-like disfluencies on the Stuttering Severity Instrument, in the number of blocks and repetitions of monosyllabic words, and in the frequency of stuttering-like disfluencies of duration. Delayed auditory feedback did not cause statistically significant effects on the fluency of the SGAPD, the individuals who stutter with central auditory processing disorders. The effect of delayed auditory feedback on the speech fluency of individuals who stutter differed between the two groups: fluency improved only in individuals without an auditory processing disorder.
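    The core of delayed auditory feedback is simply playing the talker's own speech back after a fixed lag (here 100 ms). A minimal sketch of that delay as a block-wise ring buffer; the sample rate and block size are generic, and no particular audio I/O library (and not the Phono Tools software itself) is implied:

    ```python
    # Sketch of a 100 ms delay line for delayed auditory feedback: the input
    # signal is written into a ring buffer and read back one delay later.
    import numpy as np

    class DelayLine:
        def __init__(self, delay_ms=100, fs=44100):
            self.buffer = np.zeros(int(fs * delay_ms / 1000))
            self.pos = 0

        def process(self, block):
            out = np.empty_like(block)
            for i, sample in enumerate(block):
                out[i] = self.buffer[self.pos]       # read the sample from 100 ms ago
                self.buffer[self.pos] = sample       # overwrite it with the new input
                self.pos = (self.pos + 1) % self.buffer.size
            return out

    delay = DelayLine(delay_ms=100)
    mic_block = np.random.randn(512)                 # stand-in for one captured block
    headphone_block = delay.process(mic_block)       # what the talker hears
    ```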

  11. Monkey’s short-term auditory memory nearly abolished by combined removal of the rostral superior temporal gyrus and rhinal cortices

    PubMed Central

    Fritz, Jonathan B.; Malloy, Megan; Mishkin, Mortimer; Saunders, Richard C.

    2016-01-01

    While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30–40 seconds to a duration of ~1–2 seconds, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. PMID:26707975

  12. Predictive cues for auditory stream formation in humans and monkeys.

    PubMed

    Aggelopoulos, Nikolaos C; Deike, Susann; Selezneva, Elena; Scheich, Henning; Brechmann, André; Brosch, Michael

    2017-12-18

    Auditory perception is improved when stimuli are predictable, and this effect is evident in a modulation of the activity of neurons in the auditory cortex, as shown previously. Human listeners can better predict the presence of duration deviants embedded in stimulus streams with a fixed interonset interval (isochrony) and a repeated duration pattern (regularity), and neurons in the auditory cortex of macaque monkeys have stronger sustained responses in the 60-140 ms post-stimulus time window under these conditions. Subsequently, the question has arisen whether isochrony or regularity in the sensory input contributed to the enhancement of the neuronal and behavioural responses. We therefore varied the two factors, isochrony and regularity, independently and measured the ability of human subjects to detect deviants embedded in these sequences, as well as the responses of neurons in the primary auditory cortex of macaque monkeys during presentation of the sequences. The performance of humans in detecting deviants was significantly increased by regularity; isochrony enhanced detection only in the presence of the regularity cue. In monkeys, regularity increased the sustained component of neuronal tone responses in auditory cortex, while isochrony had no consistent effect. Although both regularity and isochrony can be considered parameters that make a sequence of sounds more predictable, our results from the human and monkey experiments converge in showing that regularity has the greater influence on behavioural performance and neuronal responses. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
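    The two cues manipulated above, regularity (a repeated duration pattern) and isochrony (a fixed interonset interval), can be crossed independently when generating tone sequences. A minimal sketch with illustrative durations and intervals, not the study's stimulus values:

    ```python
    # Sketch of crossing the two stimulus factors: a tone sequence can repeat
    # a fixed duration pattern (regularity) and/or keep a fixed onset-to-onset
    # interval (isochrony). Pattern, interval, and jitter values are illustrative.
    import numpy as np

    rng = np.random.default_rng(3)

    def make_sequence(n_tones=12, regular=True, isochronous=True):
        pattern = np.array([0.05, 0.10, 0.15])                    # tone durations (s)
        if regular:
            durations = np.resize(pattern, n_tones)               # repeated pattern
        else:
            durations = rng.choice(pattern, size=n_tones)         # random order, no pattern
        if isochronous:
            onsets = np.arange(n_tones) * 0.25                    # fixed 250 ms IOI
        else:
            onsets = np.cumsum(rng.uniform(0.15, 0.35, n_tones))  # jittered IOIs
        return onsets, durations

    print(make_sequence(regular=True, isochronous=False))
    ```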

  13. Distributional Learning of Lexical Tones: A Comparison of Attended vs. Unattended Listening.

    PubMed

    Ong, Jia Hoong; Burnham, Denis; Escudero, Paola

    2015-01-01

    This study examines whether non-tone language listeners can acquire lexical tone categories distributionally and whether attention in the training phase modulates the effect of distributional learning. Native Australian English listeners were trained on a Thai lexical tone minimal pair and their performance was assessed using a discrimination task before and after training. During Training, participants either heard a Unimodal distribution that would induce a single central category, which should hinder their discrimination of that minimal pair, or a Bimodal distribution that would induce two separate categories that should facilitate their discrimination. The participants either heard the distribution passively (Experiments 1A and 1B) or performed a cover task during training designed to encourage auditory attention to the entire distribution (Experiment 2). In passive listening (Experiments 1A and 1B), results indicated no effect of distributional learning: the Bimodal group did not outperform the Unimodal group in discriminating the Thai tone minimal pairs. Moreover, both Unimodal and Bimodal groups improved above chance on most test aspects from Pretest to Posttest. However, when participants' auditory attention was encouraged using the cover task (Experiment 2), distributional learning was found: the Bimodal group outperformed the Unimodal group on a novel test syllable minimal pair at Posttest relative to at Pretest. Furthermore, the Bimodal group showed above-chance improvement from Pretest to Posttest on three test aspects, while the Unimodal group only showed above-chance improvement on one test aspect. These results suggest that non-tone language listeners are able to learn lexical tones distributionally but only when auditory attention is encouraged in the acquisition phase. This implies that distributional learning of lexical tones is more readily induced when participants attend carefully during training, presumably because they are better able to compute the relevant statistics of the distribution.

  14. Distributional Learning of Lexical Tones: A Comparison of Attended vs. Unattended Listening

    PubMed Central

    Ong, Jia Hoong; Burnham, Denis; Escudero, Paola

    2015-01-01

    This study examines whether non-tone language listeners can acquire lexical tone categories distributionally and whether attention in the training phase modulates the effect of distributional learning. Native Australian English listeners were trained on a Thai lexical tone minimal pair and their performance was assessed using a discrimination task before and after training. During Training, participants either heard a Unimodal distribution that would induce a single central category, which should hinder their discrimination of that minimal pair, or a Bimodal distribution that would induce two separate categories that should facilitate their discrimination. The participants either heard the distribution passively (Experiments 1A and 1B) or performed a cover task during training designed to encourage auditory attention to the entire distribution (Experiment 2). In passive listening (Experiments 1A and 1B), results indicated no effect of distributional learning: the Bimodal group did not outperform the Unimodal group in discriminating the Thai tone minimal pairs. Moreover, both Unimodal and Bimodal groups improved above chance on most test aspects from Pretest to Posttest. However, when participants’ auditory attention was encouraged using the cover task (Experiment 2), distributional learning was found: the Bimodal group outperformed the Unimodal group on a novel test syllable minimal pair at Posttest relative to at Pretest. Furthermore, the Bimodal group showed above-chance improvement from Pretest to Posttest on three test aspects, while the Unimodal group only showed above-chance improvement on one test aspect. These results suggest that non-tone language listeners are able to learn lexical tones distributionally but only when auditory attention is encouraged in the acquisition phase. This implies that distributional learning of lexical tones is more readily induced when participants attend carefully during training, presumably because they are better able to compute the relevant statistics of the distribution. PMID:26214002
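
    A minimal sketch of how unimodal and bimodal training distributions are typically built over an acoustic continuum in this paradigm. The eight-step continuum and the token counts per step are illustrative assumptions, not the materials used in the study.

        import numpy as np

        # An 8-step continuum between the endpoints of a lexical-tone minimal pair.
        steps = np.arange(1, 9)

        # Tokens per step: the unimodal distribution peaks at the continuum centre,
        # the bimodal distribution peaks near the two endpoints (counts are illustrative).
        unimodal_counts = np.array([1, 2, 4, 5, 5, 4, 2, 1])
        bimodal_counts = np.array([5, 4, 2, 1, 1, 2, 4, 5])

        def training_sequence(counts, rng):
            tokens = np.repeat(steps, counts)  # repeat each step according to its count
            rng.shuffle(tokens)                # present in random order during training
            return tokens

        rng = np.random.default_rng(0)
        print(training_sequence(unimodal_counts, rng))
        print(training_sequence(bimodal_counts, rng))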

  15. Speech-Sound Duration Processing in a Second Language is Specific to Phonetic Categories

    ERIC Educational Resources Information Center

    Nenonen, Sari; Shestakova, Anna; Huotilainen, Minna; Naatanen, Risto

    2005-01-01

    The mismatch negativity (MMN) component of the auditory event-related potential was used to determine the effect of native language, Russian, on the processing of speech-sound duration in a second language, Finnish, that uses duration as a cue for phonological distinction. The native-language effect was compared with Finnish vowels that either can…

  16. Representation of Dynamic Interaural Phase Difference in Auditory Cortex of Awake Rhesus Macaques

    PubMed Central

    Scott, Brian H.; Malone, Brian J.; Semple, Malcolm N.

    2009-01-01

    Neurons in auditory cortex of awake primates are selective for the spatial location of a sound source, yet the neural representation of the binaural cues that underlie this tuning remains undefined. We examined this representation in 283 single neurons across the low-frequency auditory core in alert macaques, trained to discriminate binaural cues for sound azimuth. In response to binaural beat stimuli, which mimic acoustic motion by modulating the relative phase of a tone at the two ears, these neurons robustly modulate their discharge rate in response to this directional cue. In accordance with prior studies, the preferred interaural phase difference (IPD) of these neurons typically corresponds to azimuthal locations contralateral to the recorded hemisphere. Whereas binaural beats evoke only transient discharges in anesthetized cortex, neurons in awake cortex respond throughout the IPD cycle. In this regard, responses are consistent with observations at earlier stations of the auditory pathway. Discharge rate is a band-pass function of the frequency of IPD modulation in most neurons (73%), but both discharge rate and temporal synchrony are independent of the direction of phase modulation. When subjected to a receiver operating characteristic analysis, the responses of individual neurons are insufficient to account for the perceptual acuity of these macaques in an IPD discrimination task, suggesting the need for neural pooling at the cortical level. PMID:19164111

  17. Representation of dynamic interaural phase difference in auditory cortex of awake rhesus macaques.

    PubMed

    Scott, Brian H; Malone, Brian J; Semple, Malcolm N

    2009-04-01

    Neurons in auditory cortex of awake primates are selective for the spatial location of a sound source, yet the neural representation of the binaural cues that underlie this tuning remains undefined. We examined this representation in 283 single neurons across the low-frequency auditory core in alert macaques, trained to discriminate binaural cues for sound azimuth. In response to binaural beat stimuli, which mimic acoustic motion by modulating the relative phase of a tone at the two ears, these neurons robustly modulate their discharge rate in response to this directional cue. In accordance with prior studies, the preferred interaural phase difference (IPD) of these neurons typically corresponds to azimuthal locations contralateral to the recorded hemisphere. Whereas binaural beats evoke only transient discharges in anesthetized cortex, neurons in awake cortex respond throughout the IPD cycle. In this regard, responses are consistent with observations at earlier stations of the auditory pathway. Discharge rate is a band-pass function of the frequency of IPD modulation in most neurons (73%), but both discharge rate and temporal synchrony are independent of the direction of phase modulation. When subjected to a receiver operating characteristic analysis, the responses of individual neurons are insufficient to account for the perceptual acuity of these macaques in an IPD discrimination task, suggesting the need for neural pooling at the cortical level.
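
    The receiver operating characteristic analysis mentioned here asks how well an ideal observer could tell two stimulus conditions apart from a neuron's single-trial spike counts. A minimal sketch with simulated Poisson counts; the firing rates and trial numbers are assumptions, not the recorded data.

        import numpy as np

        rng = np.random.default_rng(1)

        # Simulated single-trial spike counts for two IPD conditions (illustrative Poisson rates).
        counts_pref = rng.poisson(lam=12.0, size=200)     # preferred interaural phase
        counts_nonpref = rng.poisson(lam=9.0, size=200)   # non-preferred interaural phase

        def roc_area(x, y):
            """Probability that a draw from x exceeds a draw from y (ties count one half),
            which equals the area under the neurometric ROC curve."""
            x = np.asarray(x)[:, None]
            y = np.asarray(y)[None, :]
            return np.mean(x > y) + 0.5 * np.mean(x == y)

        auc = roc_area(counts_pref, counts_nonpref)
        print(f"neurometric ROC area = {auc:.2f} (0.5 = chance, 1.0 = perfect)")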

  18. English vowel identification and vowel formant discrimination by native Mandarin Chinese- and native English-speaking listeners: The effect of vowel duration dependence.

    PubMed

    Mi, Lin; Tao, Sha; Wang, Wenjing; Dong, Qi; Guan, Jingjing; Liu, Chang

    2016-03-01

    The purpose of this study was to examine the relationship between English vowel identification and English vowel formant discrimination for native Mandarin Chinese- and native English-speaking listeners. The identification of 12 English vowels was measured with the duration cue preserved or removed. The thresholds of vowel formant discrimination on the F2 of two English vowels, /ʌ/ and /i/, were also estimated using an adaptive-tracking procedure. Native Mandarin Chinese-speaking listeners showed significantly higher thresholds of vowel formant discrimination and lower identification scores than native English-speaking listeners. The duration effect on English vowel identification was similar between native Mandarin Chinese- and native English-speaking listeners. Moreover, regardless of listeners' language background, vowel identification was significantly correlated with vowel formant discrimination for the listeners who were less dependent on duration cues, whereas the correlation between vowel identification and vowel formant discrimination was not significant for the listeners who were highly dependent on duration cues. This study revealed individual variability in using multiple acoustic cues to identify English vowels for both native and non-native listeners. Copyright © 2016 Elsevier B.V. All rights reserved.
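
    The adaptive-tracking procedure referred to here is commonly a transformed up-down staircase. The sketch below implements a generic 2-down/1-up rule, which converges near 70.7% correct, with a simulated listener standing in for a real participant; the starting value, step size, and psychometric function are assumptions, not the study's settings.

        import numpy as np

        rng = np.random.default_rng(2)

        def simulated_listener(delta_f, true_jnd=30.0):
            """Probability of a correct response grows with the F2 difference (Hz)."""
            p_correct = 0.5 + 0.5 * (1.0 - np.exp(-delta_f / true_jnd))
            return rng.random() < p_correct

        def two_down_one_up(start=100.0, factor=0.8, n_reversals=8):
            """2-down/1-up staircase with multiplicative steps, tracking ~70.7% correct."""
            delta, streak, direction, reversals = start, 0, 0, []
            while len(reversals) < n_reversals:
                if simulated_listener(delta):
                    streak += 1
                    if streak == 2:                 # two correct in a row: make it harder
                        streak = 0
                        if direction == +1:
                            reversals.append(delta)
                        direction = -1
                        delta *= factor
                else:                               # one error: make it easier
                    streak = 0
                    if direction == -1:
                        reversals.append(delta)
                    direction = +1
                    delta /= factor
            return np.mean(reversals[-6:])          # threshold = mean of the last reversals

        print(f"estimated F2 discrimination threshold: {two_down_one_up():.1f} Hz")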

  19. Speech training alters tone frequency tuning in rat primary auditory cortex

    PubMed Central

    Engineer, Crystal T.; Perez, Claudia A.; Carraway, Ryan S.; Chang, Kevin Q.; Roland, Jarod L.; Kilgard, Michael P.

    2013-01-01

    Previous studies in both humans and animals have documented improved performance following discrimination training. This enhanced performance is often associated with cortical response changes. In this study, we tested the hypothesis that long-term speech training on multiple tasks can improve primary auditory cortex (A1) responses compared to rats trained on a single speech discrimination task or experimentally naïve rats. Specifically, we compared the percent of A1 responding to trained sounds, the responses to both trained and untrained sounds, receptive field properties of A1 neurons, and the neural discrimination of pairs of speech sounds in speech trained and naïve rats. Speech training led to accurate discrimination of consonant and vowel sounds, but did not enhance A1 response strength or the neural discrimination of these sounds. Speech training altered tone responses in rats trained on six speech discrimination tasks but not in rats trained on a single speech discrimination task. Extensive speech training resulted in broader frequency tuning, shorter onset latencies, a decreased driven response to tones, and caused a shift in the frequency map to favor tones in the range where speech sounds are the loudest. Both the number of trained tasks and the number of days of training strongly predict the percent of A1 responding to a low frequency tone. Rats trained on a single speech discrimination task performed less accurately than rats trained on multiple tasks and did not exhibit A1 response changes. Our results indicate that extensive speech training can reorganize the A1 frequency map, which may have downstream consequences on speech sound processing. PMID:24344364

  20. Learning and Discrimination of Audiovisual Events in Human Infants: The Hierarchical Relation between Intersensory Temporal Synchrony and Rhythmic Pattern Cues.

    ERIC Educational Resources Information Center

    Lewkowicz, David J.

    2003-01-01

    Three experiments examined 4- to 10-month-olds' perception of audio-visual (A-V) temporal synchrony cues in the presence or absence of rhythmic pattern cues. Results established that infants of all ages could discriminate between two different audio-visual rhythmic events. Only 10-month-olds detected a desynchronization of the auditory and visual…

  1. Short-term memory for event duration: modality specificity and goal dependency.

    PubMed

    Takahashi, Kohske; Watanabe, Katsumi

    2012-11-01

    Time perception is involved in various cognitive functions. This study investigated the characteristics of short-term memory for event duration by examining how the length of the retention period affects inter- and intramodal duration judgment. On each trial, a sample stimulus was followed by a comparison stimulus, after a variable delay period (0.5-5 s). The sample and comparison stimuli were presented in the visual or auditory modality. The participants determined whether the comparison stimulus was longer or shorter than the sample stimulus. The distortion pattern of subjective duration during the delay period depended on the sensory modality of the comparison stimulus but was not affected by that of the sample stimulus. When the comparison stimulus was visually presented, the retained duration of the sample stimulus was shortened as the delay period increased. Contrarily, when the comparison stimulus was presented in the auditory modality, the delay period had little to no effect on the retained duration. Furthermore, whenever the participants did not know the sensory modality of the comparison stimulus beforehand, the effect of the delay period disappeared. These results suggest that the memory process for event duration is specific to sensory modality and that its performance is determined depending on the sensory modality in which the retained duration will be used subsequently.

  2. Underwater temporary threshold shift in pinnipeds: effects of noise level and duration.

    PubMed

    Kastak, David; Southall, Brandon L; Schusterman, Ronald J; Kastak, Colleen Reichmuth

    2005-11-01

    Behavioral psychophysical techniques were used to evaluate the residual effects of underwater noise on the hearing sensitivity of three pinnipeds: a California sea lion (Zalophus californianus), a harbor seal (Phoca vitulina), and a northern elephant seal (Mirounga angustirostris). Temporary threshold shift (TTS), defined as the difference between auditory thresholds obtained before and after noise exposure, was assessed. The subjects were exposed to octave-band noise centered at 2500 Hz at two sound pressure levels: 80 and 95 dB SL (re: auditory threshold at 2500 Hz). Noise exposure durations were 22, 25, and 50 min. Threshold shifts were assessed at 2500 and 3530 Hz. Mean threshold shifts ranged from 2.9-12.2 dB. Full recovery of auditory sensitivity occurred within 24 h of noise exposure. Control sequences, comprising sham noise exposures, did not result in significant mean threshold shifts for any subject. Threshold shift magnitudes increased with increasing noise sound exposure level (SEL) for two of the three subjects. The results underscore the importance of including sound exposure metrics (incorporating sound pressure level and exposure duration) in order to fully assess the effects of noise on marine mammal hearing.
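
    Sound exposure level (SEL) combines level and duration into a single energy metric; for a constant-level exposure, SEL = SPL + 10*log10(T / 1 s), so doubling the exposure duration raises SEL by about 3 dB. The quick check below uses illustrative absolute levels, not the sensation-level exposures of the study.

        import math

        def sound_exposure_level(level_db, duration_s):
            """SEL (dB re 1 s) for a constant-level exposure: level + 10*log10(duration / 1 s)."""
            return level_db + 10.0 * math.log10(duration_s)

        # A 50-min exposure carries ~3 dB more energy than a 25-min exposure at the same level.
        print(sound_exposure_level(140.0, 25 * 60))   # illustrative level, 25-min exposure
        print(sound_exposure_level(140.0, 50 * 60))   # same level, 50-min exposure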

  3. Degraded neural and behavioral processing of speech sounds in a rat model of Rett syndrome

    PubMed Central

    Engineer, Crystal T.; Rahebi, Kimiya C.; Borland, Michael S.; Buell, Elizabeth P.; Centanni, Tracy M.; Fink, Melyssa K.; Im, Kwok W.; Wilson, Linda G.; Kilgard, Michael P.

    2015-01-01

    Individuals with Rett syndrome have greatly impaired speech and language abilities. Auditory brainstem responses to sounds are normal, but cortical responses are highly abnormal. In this study, we used the novel rat Mecp2 knockout model of Rett syndrome to document the neural and behavioral processing of speech sounds. We hypothesized that both speech discrimination ability and the neural response to speech sounds would be impaired in Mecp2 rats. We expected that extensive speech training would improve speech discrimination ability and the cortical response to speech sounds. Our results reveal that speech responses across all four auditory cortex fields of Mecp2 rats were hyperexcitable, responded slower, and were less able to follow rapidly presented sounds. While Mecp2 rats could accurately perform consonant and vowel discrimination tasks in quiet, they were significantly impaired at speech sound discrimination in background noise. Extensive speech training improved discrimination ability. Training shifted cortical responses in both Mecp2 and control rats to favor the onset of speech sounds. While training increased the response to low frequency sounds in control rats, the opposite occurred in Mecp2 rats. Although neural coding and plasticity are abnormal in the rat model of Rett syndrome, extensive therapy appears to be effective. These findings may help to explain some aspects of communication deficits in Rett syndrome and suggest that extensive rehabilitation therapy might prove beneficial. PMID:26321676

  4. Acquired word deafness, and the temporal grain of sound representation in the primary auditory cortex.

    PubMed

    Phillips, D P; Farmer, M E

    1990-11-15

    This paper explores the nature of the processing disorder which underlies the speech discrimination deficit in the syndrome of acquired word deafness following from pathology to the primary auditory cortex. A critical examination of the evidence on this disorder revealed the following. First, the most profound forms of the condition are expressed not only in an isolation of the cerebral linguistic processor from auditory input, but in a failure of even the perceptual elaboration of the relevant sounds. Second, in agreement with earlier studies, we conclude that the perceptual dimension disturbed in word deafness is a temporal one. We argue, however, that it is not a generalized disorder of auditory temporal processing, but one which is largely restricted to the processing of sounds with temporal content in the milliseconds to tens-of-milliseconds time frame. The perceptual elaboration of sounds with temporal content outside that range, in either direction, may survive the disorder. Third, we present neurophysiological evidence that the primary auditory cortex has a special role in the representation of auditory events in that time frame, but not in the representation of auditory events with temporal grains outside that range.

  5. Eye-movements intervening between two successive sounds disrupt comparisons of auditory location

    PubMed Central

    Pavani, Francesco; Husain, Masud; Driver, Jon

    2008-01-01

    Many studies have investigated how saccades may affect the internal representation of visual locations across eye-movements. Here we studied instead whether eye-movements can affect auditory spatial cognition. In two experiments, participants judged the relative azimuth (same/different) of two successive sounds presented from a horizontal array of loudspeakers, separated by a 2.5-s delay. Eye-position was either held constant throughout the trial (being directed in a fixed manner to the far left or right of the loudspeaker array), or had to be shifted to the opposite side of the array during the retention delay between the two sounds, after the first sound but before the second. Loudspeakers were either visible (Experiment 1) or occluded from sight (Experiment 2). In both cases, shifting eye-position during the silent delay-period affected auditory performance in the successive auditory comparison task, even though the auditory inputs to be judged were equivalent. Sensitivity (d′) for the auditory discrimination was disrupted, specifically when the second sound shifted in the opposite direction to the intervening eye-movement with respect to the first sound. These results indicate that eye-movements affect internal representation of auditory location. PMID:18566808

  6. Eye-movements intervening between two successive sounds disrupt comparisons of auditory location.

    PubMed

    Pavani, Francesco; Husain, Masud; Driver, Jon

    2008-08-01

    Many studies have investigated how saccades may affect the internal representation of visual locations across eye-movements. Here, we studied, instead, whether eye-movements can affect auditory spatial cognition. In two experiments, participants judged the relative azimuth (same/different) of two successive sounds presented from a horizontal array of loudspeakers, separated by a 2.5-s delay. Eye-position was either held constant throughout the trial (being directed in a fixed manner to the far left or right of the loudspeaker array) or had to be shifted to the opposite side of the array during the retention delay between the two sounds, after the first sound but before the second. Loudspeakers were either visible (Experiment 1) or occluded from sight (Experiment 2). In both cases, shifting eye-position during the silent delay-period affected auditory performance in the successive auditory comparison task, even though the auditory inputs to be judged were equivalent. Sensitivity (d') for the auditory discrimination was disrupted, specifically when the second sound shifted in the opposite direction to the intervening eye-movement with respect to the first sound. These results indicate that eye-movements affect internal representation of auditory location.

  7. Auditory false perception in schizophrenia: Development and validation of auditory signal detection task.

    PubMed

    Chhabra, Harleen; Sowmya, Selvaraj; Sreeraj, Vanteemar S; Kalmady, Sunil V; Shivakumar, Venkataram; Amaresha, Anekal C; Narayanaswamy, Janardhanan C; Venkatasubramanian, Ganesan

    2016-12-01

    Auditory hallucinations constitute an important symptom component in 70-80% of schizophrenia patients. These hallucinations are proposed to occur due to an imbalance between perceptual expectation and external input, resulting in attachment of meaning to abstract noises; signal detection theory has been proposed to explain these phenomena. In this study, we describe the development of an auditory signal detection task using a carefully chosen set of English words that could be tested successfully in schizophrenia patients coming from varying linguistic, cultural and social backgrounds. Schizophrenia patients with significant auditory hallucinations (N=15) and healthy controls (N=15) performed the auditory signal detection task wherein they were instructed to differentiate between a 5-s burst of plain white noise and voiced-noise. The analysis showed that false alarms (p=0.02), discriminability index (p=0.001) and decision bias (p=0.004) were significantly different between the two groups. There was a significant negative correlation between false alarm rate and decision bias. These findings extend further support for impaired perceptual expectation system in schizophrenia patients. Copyright © 2016 Elsevier B.V. All rights reserved.
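
    The discriminability index and decision bias reported here follow standard signal detection theory: d' = z(hit rate) - z(false-alarm rate), and criterion c = -[z(hit rate) + z(false-alarm rate)] / 2. A minimal sketch of that computation; the trial counts are illustrative, not the study's data.

        from scipy.stats import norm

        def sdt_indices(hits, misses, false_alarms, correct_rejections):
            """Signal-detection measures for a yes/no task. A small count correction
            keeps the rates away from 0 and 1 so the z-scores stay finite."""
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
            criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
            return d_prime, criterion

        # Illustrative counts: 30 voiced-noise trials and 30 plain white-noise trials.
        print(sdt_indices(hits=22, misses=8, false_alarms=12, correct_rejections=18))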

  8. Effects of musical training on the auditory cortex in children.

    PubMed

    Trainor, Laurel J; Shahin, Antoine; Roberts, Larry E

    2003-11-01

    Several studies of the effects of musical experience on sound representations in the auditory cortex are reviewed. Auditory evoked potentials are compared in response to pure tones, violin tones, and piano tones in adult musicians versus nonmusicians as well as in 4- to 5-year-old children who have either had or not had extensive musical experience. In addition, the effects of auditory frequency discrimination training in adult nonmusicians on auditory evoked potentials are examined. It was found that the P2-evoked response is larger in both adult and child musicians than in nonmusicians and that auditory training enhances this component in nonmusician adults. The results suggest that the P2 is particularly neuroplastic and that the effects of musical experience can be seen early in development. They also suggest that although the effects of musical training on cortical representations may be greater if training begins in childhood, the adult brain is also open to change. These results are discussed with respect to potential benefits of early musical training as well as potential benefits of musical experience in aging.

  9. Normative data and discriminant validity of Rey's Verbal Learning Test for the Greek adult population.

    PubMed

    Messinis, Lambros; Tsakona, Ioanna; Malefaki, Sonia; Papathanasopoulos, Panagiotis

    2007-08-01

    The present study sought to establish normative and discriminant validity data for Rey's Auditory Verbal Learning Test [Rey, A. (1964). L'examen clinique en psychologie [Clinical tests in psychology]. Paris: Presses Universitaires de France; Schmidt, M. (1996). Rey auditory verbal learning test: A handbook. Los Angeles, CA: Western Psychological Services] using newly adapted learning lists for the Greek adult population. Applying the procedure suggested by Geffen et al. [Geffen, G., Moar, K. J., O'Hanlon, A. P., Clark, C. R., & Geffen, L. N. (1990). Performance measures of 16-86-year-old males and females on the auditory verbal learning test. The Clinical Neuropsychologist, 4, 45-63] we administered the test to 205 healthy participants, aged 18-78 years, and two adult patient groups (long-term cannabis users and HIV symptomatic patients). Stepwise linear regression analyses showed that the variables age, education and gender contributed significantly to most trials of the RAVLT. Performance decreased in an age-dependent manner from young adulthood. Women, young adults and higher educated participants outperformed men, older adults and less educated individuals. The test appears to discriminate adequately between the performance of long-term heavy cannabis users and HIV seropositive symptomatic patients and matched healthy controls, as both patient groups performed more poorly than their respective control group. Normative data stratified by age, gender and education for the Greek adult population are presented for use in research and clinical settings.

  10. Discrimination Task Reveals Differences in Neural Bases of Tinnitus and Hearing Impairment

    PubMed Central

    Husain, Fatima T.; Pajor, Nathan M.; Smith, Jason F.; Kim, H. Jeff; Rudy, Susan; Zalewski, Christopher; Brewer, Carmen; Horwitz, Barry

    2011-01-01

    We investigated auditory perception and cognitive processing in individuals with chronic tinnitus or hearing loss using functional magnetic resonance imaging (fMRI). Our participants belonged to one of three groups: bilateral hearing loss and tinnitus (TIN), bilateral hearing loss without tinnitus (HL), and normal hearing without tinnitus (NH). We employed pure tones and frequency-modulated sweeps as stimuli in two tasks: passive listening and active discrimination. All subjects had normal hearing through 2 kHz and all stimuli were low-pass filtered at 2 kHz so that all participants could hear them equally well. Performance was similar among all three groups for the discrimination task. In all participants, a distributed set of brain regions including the primary and non-primary auditory cortices showed greater response for both tasks compared to rest. Comparing the groups directly, we found decreased activation in the parietal and frontal lobes in the participants with tinnitus compared to the HL group and decreased response in the frontal lobes relative to the NH group. Additionally, the HL subjects exhibited increased response in the anterior cingulate relative to the NH group. Our results suggest that a differential engagement of a putative auditory attention and short-term memory network, comprising regions in the frontal, parietal and temporal cortices and the anterior cingulate, may represent a key difference in the neural bases of chronic tinnitus accompanied by hearing loss relative to hearing loss alone. PMID:22066003

  11. Effects of auditory information on self-motion perception during simultaneous presentation of visual shearing motion

    PubMed Central

    Tanahashi, Shigehito; Ashihara, Kaoru; Ujike, Hiroyasu

    2015-01-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of visual and auditory motion stimuli are identical. They did not, however, examine possible contributions of auditory motion information for determining direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on experimental conditions. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction of the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and if it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when visual stimuli moved in opposing directions (around the yaw-axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of visual and auditory information. PMID:26113828

  12. Primary auditory cortex regulates threat memory specificity.

    PubMed

    Wigestrand, Mattis B; Schiff, Hillary C; Fyhn, Marianne; LeDoux, Joseph E; Sears, Robert M

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used muscimol infusions in rats to show that discriminatory threat learning requires Au1 activity specifically during memory acquisition and retrieval, but not during consolidation. Memory specificity was similarly disrupted by infusion of PKMζ inhibitor peptide (ZIP) during memory storage. Our findings show that Au1 is required at critical memory phases and suggest that Au1 plasticity enables stimulus discrimination. © 2016 Wigestrand et al.; Published by Cold Spring Harbor Laboratory Press.

  13. Reproduction of auditory and visual standards in monochannel cochlear implant users.

    PubMed

    Kanabus, Magdalena; Szelag, Elzbieta; Kolodziejczyk, Iwona; Szuchnik, Joanna

    2004-01-01

    The temporal reproduction of standard durations ranging from 1 to 9 seconds was investigated in monochannel cochlear implant (CI) users and in normally hearing subjects for the auditory and visual modality. The results showed that the pattern of performance in patients depended on their level of auditory comprehension. Results for CI users, who displayed relatively good auditory comprehension, did not differ from that of normally hearing subjects for both modalities. Patients with poor auditory comprehension significantly overestimated shorter auditory standards (1, 1.5 and 2.5 s), compared to both patients with good comprehension and controls. For the visual modality the between-group comparisons were not significant. These deficits in the reproduction of auditory standards were explained in accordance with both the attentional-gate model and the role of working memory in prospective time judgment. The impairments described above can influence the functioning of the temporal integration mechanism that is crucial for auditory speech comprehension on the level of words and phrases. We postulate that the deficits in time reproduction of short standards may be one of the possible reasons for poor speech understanding in monochannel CI users.

  14. Auditory-motor entrainment and phonological skills: precise auditory timing hypothesis (PATH).

    PubMed

    Tierney, Adam; Kraus, Nina

    2014-01-01

    Phonological skills are enhanced by music training, but the mechanisms enabling this cross-domain enhancement remain unknown. To explain this cross-domain transfer, we propose a precise auditory timing hypothesis (PATH) whereby entrainment practice is the core mechanism underlying enhanced phonological abilities in musicians. Both rhythmic synchronization and language skills such as consonant discrimination, detection of word and phrase boundaries, and conversational turn-taking rely on the perception of extremely fine-grained timing details in sound. Auditory-motor timing is an acoustic feature which meets all five of the pre-conditions necessary for cross-domain enhancement to occur (Patel, 2011, 2012, 2014). There is overlap between the neural networks that process timing in the context of both music and language. Entrainment to music demands more precise timing sensitivity than does language processing. Moreover, auditory-motor timing integration captures the emotion of the trainee, is repeatedly practiced, and demands focused attention. The PATH predicts that musical training emphasizing entrainment will be particularly effective in enhancing phonological skills.

  15. Auditory Confrontation Naming in Alzheimer’s Disease

    PubMed Central

    Brandt, Jason; Bakker, Arnold; Maroof, David Aaron

    2010-01-01

    Naming is a fundamental aspect of language and is virtually always assessed with visual confrontation tests. Tests of the ability to name objects by their characteristic sounds would be particularly useful in the assessment of visually impaired patients, and may be particularly sensitive in Alzheimer’s disease (AD). We developed an Auditory Naming Task, requiring the identification of the source of environmental sounds (i.e., animal calls, musical instruments, vehicles) and multiple-choice recognition of those not identified. In two separate studies, mild-to-moderate AD patients performed more poorly than cognitively normal elderly on the Auditory Naming Task. This task was also more difficult than two versions of a comparable Visual Naming Task, and correlated more highly with Mini-Mental State Exam score. Internal consistency reliability was acceptable, although ROC analysis revealed auditory naming to be slightly less successful than visual confrontation naming in discriminating AD patients from normal subjects. Nonetheless, our Auditory Naming Test may prove useful in research and clinical practice, especially with visually-impaired patients. PMID:20981630

  16. Auditory stream segregation in monkey auditory cortex: effects of frequency separation, presentation rate, and tone duration

    NASA Astrophysics Data System (ADS)

    Fishman, Yonatan I.; Arezzo, Joseph C.; Steinschneider, Mitchell

    2004-09-01

    Auditory stream segregation refers to the organization of sequential sounds into "perceptual streams" reflecting individual environmental sound sources. In the present study, sequences of alternating high and low tones, "...ABAB...," similar to those used in psychoacoustic experiments on stream segregation, were presented to awake monkeys while neural activity was recorded in primary auditory cortex (A1). Tone frequency separation (ΔF), tone presentation rate (PR), and tone duration (TD) were systematically varied to examine whether neural responses correlate with effects of these variables on perceptual stream segregation. "A" tones were fixed at the best frequency of the recording site, while "B" tones were displaced in frequency from "A" tones by an amount equal to ΔF. As PR increased, "B" tone responses decreased in amplitude to a greater extent than "A" tone responses, yielding neural response patterns dominated by "A" tone responses occurring at half the alternation rate. Increasing TD facilitated the differential attenuation of "B" tone responses. These findings parallel psychoacoustic data and suggest a physiological model of stream segregation whereby increasing ΔF, PR, or TD enhances spatial differentiation of "A" tone and "B" tone responses along the tonotopic map in A1.
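
    A minimal sketch of how an alternating "...ABAB..." tone sequence of the kind described here can be synthesized, with frequency separation (ΔF, in semitones), presentation rate (PR), and tone duration (TD) as free parameters. The sampling rate, ramp length, and default values are assumptions for illustration.

        import numpy as np

        def abab_sequence(f_a=1000.0, delta_f_semitones=6.0, rate_hz=10.0,
                          tone_dur=0.075, n_tones=40, fs=44100, ramp=0.005):
            """Alternating A/B tones: A at f_a, B displaced by delta_f_semitones,
            onsets every 1/rate_hz seconds, each tone lasting tone_dur seconds."""
            f_b = f_a * 2.0 ** (delta_f_semitones / 12.0)
            t_tone = np.arange(int(round(tone_dur * fs))) / fs
            env = np.ones_like(t_tone)
            n_ramp = int(round(ramp * fs))
            env[:n_ramp] = np.linspace(0.0, 1.0, n_ramp)    # onset ramp to avoid clicks
            env[-n_ramp:] = np.linspace(1.0, 0.0, n_ramp)   # offset ramp
            onset_step = int(round(fs / rate_hz))
            out = np.zeros(n_tones * onset_step + len(t_tone))
            for i in range(n_tones):
                f = f_a if i % 2 == 0 else f_b
                start = i * onset_step
                out[start:start + len(t_tone)] += env * np.sin(2.0 * np.pi * f * t_tone)
            return out

        print(abab_sequence().shape)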

  17. Is auditory perceptual timing a core deficit of developmental coordination disorder?

    PubMed

    Trainor, Laurel J; Chang, Andrew; Cairney, John; Li, Yao-Chuen

    2018-05-09

    Time is an essential dimension for perceiving and processing auditory events, and for planning and producing motor behaviors. Developmental coordination disorder (DCD) is a neurodevelopmental disorder affecting 5-6% of children that is characterized by deficits in motor skills. Studies show that children with DCD have motor timing and sensorimotor timing deficits. We suggest that auditory perceptual timing deficits may also be core characteristics of DCD. This idea is consistent with evidence from several domains: (1) motor-related brain regions are often involved in auditory timing processes; (2) DCD has high comorbidity with dyslexia and attention deficit hyperactivity disorder, which are known to be associated with auditory timing deficits; (3) a few studies report deficits in auditory-motor timing among children with DCD; and (4) our preliminary behavioral and neuroimaging results show that children with DCD at age 6 and 7 have deficits in auditory time discrimination compared to typically developing children. We propose directions for investigating auditory perceptual timing processing in DCD that use various behavioral and neuroimaging approaches. From a clinical perspective, research findings can potentially benefit our understanding of the etiology of DCD, identify early biomarkers of DCD, and can be used to develop evidence-based interventions for DCD involving auditory-motor training. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of The New York Academy of Sciences.

  18. Deviance-Related Responses along the Auditory Hierarchy: Combined FFR, MLR and MMN Evidence.

    PubMed

    Shiga, Tetsuya; Althen, Heike; Cornella, Miriam; Zarnowiec, Katarzyna; Yabe, Hirooki; Escera, Carles

    2015-01-01

    The mismatch negativity (MMN) provides a correlate of automatic auditory discrimination in human auditory cortex that is elicited in response to violation of any acoustic regularity. Recently, deviance-related responses were found at much earlier cortical processing stages as reflected by the middle latency response (MLR) of the auditory evoked potential, and even at the level of the auditory brainstem as reflected by the frequency following response (FFR). However, no study has reported deviance-related responses in the FFR, MLR and long latency response (LLR) concurrently in a single recording protocol. Amplitude-modulated (AM) sounds were presented to healthy human participants in a frequency oddball paradigm to investigate deviance-related responses along the auditory hierarchy in the ranges of FFR, MLR and LLR. AM frequency deviants modulated the FFR, the Na and Nb components of the MLR, and the LLR eliciting the MMN. These findings demonstrate that it is possible to elicit deviance-related responses at three different levels (FFR, MLR and LLR) in one single recording protocol, highlight the involvement of the whole auditory hierarchy in deviance detection and have implications for cognitive and clinical auditory neuroscience. Moreover, the present protocol provides a new research tool into clinical neuroscience so that the functional integrity of the auditory novelty system can now be tested as a whole in a range of clinical populations where the MMN was previously shown to be defective.

  19. Deviance-Related Responses along the Auditory Hierarchy: Combined FFR, MLR and MMN Evidence

    PubMed Central

    Shiga, Tetsuya; Althen, Heike; Cornella, Miriam; Zarnowiec, Katarzyna; Yabe, Hirooki; Escera, Carles

    2015-01-01

    The mismatch negativity (MMN) provides a correlate of automatic auditory discrimination in human auditory cortex that is elicited in response to violation of any acoustic regularity. Recently, deviance-related responses were found at much earlier cortical processing stages as reflected by the middle latency response (MLR) of the auditory evoked potential, and even at the level of the auditory brainstem as reflected by the frequency following response (FFR). However, no study has reported deviance-related responses in the FFR, MLR and long latency response (LLR) concurrently in a single recording protocol. Amplitude-modulated (AM) sounds were presented to healthy human participants in a frequency oddball paradigm to investigate deviance-related responses along the auditory hierarchy in the ranges of FFR, MLR and LLR. AM frequency deviants modulated the FFR, the Na and Nb components of the MLR, and the LLR eliciting the MMN. These findings demonstrate that it is possible to elicit deviance-related responses at three different levels (FFR, MLR and LLR) in one single recording protocol, highlight the involvement of the whole auditory hierarchy in deviance detection and have implications for cognitive and clinical auditory neuroscience. Moreover, the present protocol provides a new research tool into clinical neuroscience so that the functional integrity of the auditory novelty system can now be tested as a whole in a range of clinical populations where the MMN was previously shown to be defective. PMID:26348628

  20. Perceived Discrimination and Mexican-Origin Young Adults' Sleep Duration and Variability: The Moderating Role of Cultural Orientations.

    PubMed

    Zeiders, Katharine H; Updegraff, Kimberly A; Kuo, Sally I-Chun; Umaña-Taylor, Adriana J; McHale, Susan M

    2017-08-01

    Perceived ethnic discrimination is central to the experiences of Latino young adults, yet we know little about the ways in which and the conditions under which ethnic discrimination relates to Latino young adults' sleep patterns. Using a sample of 246 Mexican-origin young adults (M age = 21.11, SD = 1.54; 50% female), the current study investigated the longitudinal links between perceived ethnic discrimination and both sleep duration and night-to-night variability in duration, while also examining the moderating roles of Anglo and Mexican orientations in the associations. The results revealed that perceived discrimination predicted greater sleep variability, and this link was not moderated by cultural orientations. The relation between perceived discrimination and hours of sleep, however, was moderated by Anglo and Mexican orientations. Individuals with high Anglo and Mexican orientations (bicultural) and those with only high Mexican orientations (enculturated) showed no association between discrimination and hours of sleep. Individuals with low Anglo and Mexican orientations (marginalized) displayed a positive association, whereas those with high Anglo and low Mexican orientations (acculturated) displayed a negative association. The results suggest that discrimination has long-term effects on sleep variability of Mexican-origin young adults, regardless of cultural orientations; however, for sleep duration, bicultural and enculturated orientations are protective.

  1. Predicting dynamic range and intensity discrimination for electrical pulse-train stimuli using a stochastic auditory nerve model: the effects of stimulus noise.

    PubMed

    Xu, Yifang; Collins, Leslie M

    2005-06-01

    This work investigates dynamic range and intensity discrimination for electrical pulse-train stimuli that are modulated by noise using a stochastic auditory nerve model. Based on a hypothesized monotonic relationship between loudness and the number of spikes elicited by a stimulus, theoretical prediction of the uncomfortable level has previously been determined by comparing spike counts to a fixed threshold, Nucl. However, no specific rule for determining Nucl has been suggested. Our work determines the uncomfortable level based on the excitation pattern of the neural response in a normal ear. The number of fibers corresponding to the portion of the basilar membrane driven by a stimulus at an uncomfortable level in a normal ear is related to Nucl at an uncomfortable level of the electrical stimulus. Intensity discrimination limens are predicted using signal detection theory via the probability mass function of the neural response and via experimental simulations. The results show that the uncomfortable level for pulse-train stimuli increases slightly as noise level increases. Combining this with our previous threshold predictions, we hypothesize that the dynamic range for noise-modulated pulse-train stimuli should increase with additive noise. However, since our predictions indicate that intensity discrimination under noise degrades, overall intensity coding performance may not improve significantly.
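
    The prediction strategy described here treats the spike count as the decision variable and applies signal detection theory to its probability distribution. The stripped-down sketch below uses independent Poisson fibers as a simplification of the stochastic auditory nerve model (all rates and counts are illustrative) to show how a neurometric d' for two stimulus intensities can be computed.

        import numpy as np

        def neurometric_d_prime(rate_standard, rate_comparison, n_fibers=100, duration=0.1):
            """d' for discriminating two intensities from the pooled spike count of n_fibers,
            assuming independent Poisson fibers (variance of a Poisson count equals its mean)."""
            mean_s = n_fibers * rate_standard * duration
            mean_c = n_fibers * rate_comparison * duration
            sigma = np.sqrt(0.5 * (mean_s + mean_c))   # average of the two count variances
            return (mean_c - mean_s) / sigma

        # Illustrative per-fiber rates (spikes/s) for a standard and a slightly more intense stimulus.
        print(neurometric_d_prime(rate_standard=100.0, rate_comparison=105.0))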

  2. Psychophysics of human echolocation.

    PubMed

    Schörnich, Sven; Wallmeier, Ludwig; Gessele, Nikodemus; Nagy, Andreas; Schranner, Michael; Kish, Daniel; Wiegrebe, Lutz

    2013-01-01

    The skills of some blind humans orienting in their environment through the auditory analysis of reflections from self-generated sounds have received only little scientific attention to date. Here we present data from a series of formal psychophysical experiments with sighted subjects trained to evaluate features of a virtual echo-acoustic space, allowing for rigid and fine-grain control of the stimulus parameters. The data show how subjects shape both their vocalisations and auditory analysis of the echoes to serve specific echo-acoustic tasks. First, we show that humans can echo-acoustically discriminate target distances with a resolution of less than 1 m for reference distances above 3.4 m. For a reference distance of 1.7 m, corresponding to an echo delay of only 10 ms, distance JNDs were typically around 0.5 m. Second, we explore the interplay between the precedence effect and echolocation. We show that the strong perceptual asymmetry between lead and lag is weakened during echolocation. Finally, we show that through the auditory analysis of self-generated sounds, subjects discriminate room-size changes as small as 10%. In summary, the current data confirm the practical efficacy of human echolocation, and they provide a rigid psychophysical basis for addressing its neural foundations.
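
    The distance and delay figures quoted here are linked by the round-trip travel of the echo, delay = 2 * distance / c. A quick check of the abstract's numbers, assuming a speed of sound of 343 m/s:

        SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

        def echo_delay_ms(distance_m):
            """Round-trip delay, in milliseconds, of an echo from a reflector at distance_m."""
            return 1000.0 * 2.0 * distance_m / SPEED_OF_SOUND

        print(f"{echo_delay_ms(1.7):.1f} ms")   # ~10 ms for the 1.7 m reference distance
        print(f"{echo_delay_ms(3.4):.1f} ms")   # ~20 ms for the 3.4 m reference distance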

  3. Relation of earphone transient response to measurement of onset-duration.

    DOT National Transportation Integrated Search

    1963-03-01

    Measurements were made of the transient response of PDR-8 earphones. These data indicate that the original estimate of the onset-duration of auditory stimuli can be revised downward to 0.8 ms. Implicit in the results is a caveat on the faith one shou...

  4. Musical Experience, Auditory Perception and Reading-Related Skills in Children

    PubMed Central

    Banai, Karen; Ahissar, Merav

    2013-01-01

    Background: The relationships between auditory processing and reading-related skills remain poorly understood despite intensive research. Here we focus on the potential role of musical experience as a confounding factor. Specifically, we ask whether the pattern of correlations between auditory and reading related skills differs between children with different amounts of musical experience. Methodology/Principal Findings: Third grade children with various degrees of musical experience were tested on a battery of auditory processing and reading related tasks. Very poor auditory thresholds and poor memory skills were abundant only among children with no musical education. In this population, indices of auditory processing (frequency and interval discrimination thresholds) were significantly correlated with and accounted for up to 13% of the variance in reading related skills. Among children with more than one year of musical training, auditory processing indices were better, yet reading related skills were not correlated with them. A potential interpretation for the reduction in the correlations might be that auditory and reading-related skills improve at different rates as a function of musical training. Conclusions/Significance: Participants’ previous musical training, which is typically ignored in studies assessing the relations between auditory and reading related skills, should be considered. Very poor auditory and memory skills are rare among children with even a short period of musical training, suggesting musical training could have an impact on both. The lack of correlation in the musically trained population suggests that a short period of musical training does not enhance reading related skills of individuals with within-normal auditory processing skills. Further studies are required to determine whether the associations between musical training, auditory processing and memory are indeed causal or whether children with poor auditory and memory skills are less likely to study music and if so, why this is the case. PMID:24086654

  5. Time-Warp–Invariant Neuronal Processing

    PubMed Central

    Gütig, Robert; Sompolinsky, Haim

    2009-01-01

    Fluctuations in the temporal durations of sensory signals constitute a major source of variability within natural stimulus ensembles. The neuronal mechanisms through which sensory systems can stabilize perception against such fluctuations are largely unknown. An intriguing instantiation of such robustness occurs in human speech perception, which relies critically on temporal acoustic cues that are embedded in signals with highly variable duration. Across different instances of natural speech, auditory cues can undergo temporal warping that ranges from 2-fold compression to 2-fold dilation without significant perceptual impairment. Here, we report that time-warp–invariant neuronal processing can be subserved by the shunting action of synaptic conductances that automatically rescales the effective integration time of postsynaptic neurons. We propose a novel spike-based learning rule for synaptic conductances that adjusts the degree of synaptic shunting to the temporal processing requirements of a given task. Applying this general biophysical mechanism to the example of speech processing, we propose a neuronal network model for time-warp–invariant word discrimination and demonstrate its excellent performance on a standard benchmark speech-recognition task. Our results demonstrate the important functional role of synaptic conductances in spike-based neuronal information processing and learning. The biophysics of temporal integration at neuronal membranes can endow sensory pathways with powerful time-warp–invariant computational capabilities. PMID:19582146
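
    The core idea, that rescaling a neuron's effective integration time by the same factor as the stimulus warp leaves its response unchanged, can be illustrated with a toy leaky integrator. This is a simplification of the conductance-based shunting mechanism the paper proposes; the spike pattern and time constant below are arbitrary.

        import numpy as np

        def peak_response(spike_times, tau, t_eval):
            """Peak of a leaky integrator, V(t) = sum_i exp(-(t - t_i)/tau) for t >= t_i."""
            v = np.zeros_like(t_eval)
            for t_i in spike_times:
                v += np.where(t_eval >= t_i, np.exp(-(t_eval - t_i) / tau), 0.0)
            return v.max()

        spikes = np.array([0.010, 0.025, 0.032, 0.060])   # s, an arbitrary input pattern
        t = np.linspace(0.0, 0.2, 2001)

        for warp in [0.5, 1.0, 2.0]:    # 2-fold compression to 2-fold dilation
            # Warping the input AND the integration time constant by the same factor
            # leaves the response unchanged: a toy form of time-warp invariance.
            peak = peak_response(spikes * warp, tau=0.02 * warp, t_eval=t * warp)
            print(f"warp {warp}: peak response = {peak:.3f}")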

  6. Effects of Sex and Gender on Adaptation to Space: Neurosensory Systems

    PubMed Central

    Cohen, Helen S.; Cerisano, Jody M.; Clayton, Janine A.; Cromwell, Ronita; Danielson, Richard W.; Hwang, Emma Y.; Tingen, Candace; Allen, John R.; Tomko, David L.

    2014-01-01

    Sex and gender differences have long been a research topic of interest, yet few studies have explored the specific differences in neurological responses between men and women during and after spaceflight. Knowledge in this field is limited due to the significant disproportion of sexes enrolled in the astronaut corps. Research indicates that general neurological and sensory differences exist between the sexes, such as those in laterality of amygdala activity, sensitivity and discrimination in vision processing, and neuronal cell death (apoptosis) pathways. In spaceflight, sex differences may include a higher incidence of entry and space motion sickness and of post-flight vestibular instability in female as opposed to male astronauts who flew on both short- and long-duration missions. Hearing and auditory function in crewmembers shows the expected hearing threshold differences between men and women, in which female astronauts exhibit better hearing thresholds. Longitudinal observations of hearing thresholds for crewmembers yield normal age-related decrements; however, no evidence of sex-related differences from spaceflight has been observed. The impact of sex and gender differences should be studied by making spaceflight accessible and flying more women into space. Only in this way will we know if increasingly longer-duration missions cause significantly different neurophysiological responses in men and women. PMID:25401941

  7. Differential occipital responses in early- and late-blind individuals during a sound-source discrimination task.

    PubMed

    Voss, Patrice; Gougoux, Frederic; Zatorre, Robert J; Lassonde, Maryse; Lepore, Franco

    2008-04-01

    Blind individuals do not necessarily receive more auditory stimulation than sighted individuals. However, to interact effectively with their environment, they have to rely on non-visual cues (in particular auditory) to a greater extent. Often benefiting from cerebral reorganization, they not only learn to rely more on such cues but also may process them better and, as a result, demonstrate exceptional abilities in auditory spatial tasks. Here we examine the effects of blindness on brain activity, using positron emission tomography (PET), during a sound-source discrimination task (SSDT) in both early- and late-onset blind individuals. This should not only provide an answer to the question of whether the blind manifest changes in brain activity but also allow a direct comparison of the two subgroups performing an auditory spatial task. The task was presented under two listening conditions: one binaural and one monaural. The binaural task did not show any significant behavioural differences between groups, but it demonstrated striate and extrastriate activation in the early-blind groups. A subgroup of early-blind individuals, on the other hand, performed significantly better than all the other groups during the monaural task, and these enhanced skills were correlated with elevated activity within the left dorsal extrastriate cortex. Surprisingly, activation of the right ventral visual pathway, which was significantly activated in the late-blind individuals during the monaural task, was negatively correlated with performance. This suggests the possibility that not all cross-modal plasticity is beneficial. Overall, our results not only support previous findings showing that occipital cortex of early-blind individuals is functionally engaged in spatial auditory processing but also shed light on the impact the age of onset of blindness can have on the ensuing cross-modal plasticity.

  8. Assessment of anodal and cathodal transcranial direct current stimulation (tDCS) on MMN-indexed auditory sensory processing.

    PubMed

    Impey, Danielle; de la Salle, Sara; Knott, Verner

    2016-06-01

    Transcranial direct current stimulation (tDCS) is a non-invasive form of brain stimulation which uses a very weak constant current to temporarily excite (anodal stimulation) or inhibit (cathodal stimulation) activity in the brain area of interest via small electrodes placed on the scalp. Currently, tDCS of the frontal cortex is being used as a tool to investigate cognition in healthy controls and to improve symptoms in neurological and psychiatric patients. tDCS has been found to facilitate cognitive performance on measures of attention, memory, and frontal-executive functions. Recently, a short session of anodal tDCS over the temporal lobe has been shown to increase auditory sensory processing as indexed by the Mismatch Negativity (MMN) event-related potential (ERP). This preliminary pilot study examined the separate and interacting effects of both anodal and cathodal tDCS on MMN-indexed auditory pitch discrimination. In a randomized, double blind design, the MMN was assessed before (baseline) and after tDCS (2mA, 20min) in 2 separate sessions, one involving 'sham' stimulation (the device is turned off), followed by anodal stimulation (to temporarily excite cortical activity locally), and one involving cathodal stimulation (to temporarily decrease cortical activity locally), followed by anodal stimulation. Results demonstrated that anodal tDCS over the temporal cortex increased MMN-indexed auditory detection of pitch deviance, and while cathodal tDCS decreased auditory discrimination in baseline-stratified groups, subsequent anodal stimulation did not significantly alter MMN amplitudes. These findings strengthen the position that tDCS effects on cognition extend to the neural processing of sensory input and raise the possibility that this neuromodulatory technique may be useful for investigating sensory processing deficits in clinical populations. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Multisensory perceptual learning of temporal order: audiovisual learning transfers to vision but not audition.

    PubMed

    Alais, David; Cass, John

    2010-06-23

    An outstanding question in sensory neuroscience is whether the perceived timing of events is mediated by a central supra-modal timing mechanism or by multiple modality-specific systems. We use a perceptual learning paradigm to address this question. Three groups were trained daily for 10 sessions on an auditory, a visual or a combined audiovisual temporal order judgment (TOJ) task. Groups were pre-tested on a range of TOJ tasks within and outside their group's modality prior to learning so that transfer of any learning from the trained task could be measured by post-testing the other tasks. Robust TOJ learning (reduced temporal order discrimination thresholds) occurred for all groups, although auditory learning (dichotic 500/2000 Hz tones) was slightly weaker than visual learning (lateralised grating patches). Crossmodal TOJs also displayed robust learning. Post-testing revealed that improvements in temporal resolution acquired during visual learning transferred within modality to other retinotopic locations and orientations, but not to auditory or crossmodal tasks. Auditory learning did not transfer to visual or crossmodal tasks, and neither did it transfer within audition to another frequency pair. In an interesting asymmetry, crossmodal learning transferred to all visual tasks but not to auditory tasks. Finally, in all conditions, learning to make TOJs for stimulus onsets did not transfer at all to discriminating temporal offsets. These data present a complex picture of timing processes. The lack of transfer between unimodal groups indicates no central supramodal timing process for this task; however, the audiovisual-to-visual transfer cannot be explained without some form of sensory interaction. We propose that auditory learning occurred in frequency-tuned processes in the periphery, precluding interactions with more central visual and audiovisual timing processes. Functionally, the patterns of featural transfer suggest that perceptual learning of temporal order may be optimised to object-centered rather than viewer-centered constraints.
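
    As an illustration of how a TOJ threshold can be quantified (the abstract does not specify the authors' exact fitting procedure, and the SOAs and response proportions below are hypothetical), one common approach is to fit a cumulative Gaussian psychometric function to the proportion of "stimulus A first" responses as a function of stimulus onset asynchrony and read off a just-noticeable difference.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    def cumulative_gaussian(soa, pse, sigma):
        """Psychometric function: P('stimulus A judged first') as a function of SOA (ms)."""
        return norm.cdf(soa, loc=pse, scale=sigma)

    # Hypothetical data: SOAs (ms, negative = A led) and proportion of "A first" responses.
    soas = np.array([-120, -80, -40, 0, 40, 80, 120], dtype=float)
    p_a_first = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

    (pse, sigma), _ = curve_fit(cumulative_gaussian, soas, p_a_first, p0=(0.0, 50.0))
    jnd = sigma * norm.ppf(0.75)   # SOA change from the PSE to 75% correct ordering
    print(f"PSE = {pse:.1f} ms, JND = {jnd:.1f} ms")
    ```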

  10. Temporal Integration of Auditory Information Is Invariant to Temporal Grouping Cues

    PubMed

    Liu, Andrew S K; Tsunada, Joji; Gold, Joshua I; Cohen, Yale E

    2015-01-01

    Auditory perception depends on the temporal structure of incoming acoustic stimuli. Here, we examined whether a temporal manipulation that affects perceptual grouping also affects the time dependence of decisions regarding those stimuli. We designed a novel discrimination task that required human listeners to decide whether a sequence of tone bursts was increasing or decreasing in frequency. We manipulated temporal perceptual-grouping cues by changing the time interval between the tone bursts, which led listeners to hear the sequences as a single sound for short intervals or as discrete sounds for longer intervals. Despite these strong perceptual differences, this manipulation did not affect the efficiency with which auditory information was integrated over time to form a decision. Instead, the grouping manipulation affected subjects' speed-accuracy trade-offs. These results indicate that the temporal dynamics of evidence accumulation for auditory perceptual decisions can be invariant to manipulations that affect the perceptual grouping of the evidence.
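
    To make the notion of "evidence accumulation" concrete, here is a toy bounded-accumulation sketch of the decision described above: each successive tone burst contributes noisy evidence whose mean reflects the direction of the frequency step, and the decision is made when the running total crosses a bound. This is only an illustration of the general modelling idea under assumed parameters, not the authors' fitted model.

    ```python
    import numpy as np

    def accumulate_to_bound(frequencies, drift_gain=1.0, noise_sd=1.0, bound=3.0, seed=0):
        """Toy evidence-accumulation model for an 'increasing vs. decreasing frequency'
        decision: each frequency step between bursts adds noisy evidence in the
        direction of the step, and a choice is made when the accumulated evidence
        crosses +bound (increasing) or -bound (decreasing).
        Returns (choice, number_of_bursts_heard_before_deciding)."""
        rng = np.random.default_rng(seed)
        evidence = 0.0
        steps = np.diff(np.log(frequencies))          # log-frequency steps between bursts
        for i, step in enumerate(steps, start=2):     # a decision is possible from burst 2 on
            evidence += drift_gain * np.sign(step) + rng.normal(0.0, noise_sd)
            if abs(evidence) >= bound:
                return ("increasing" if evidence > 0 else "decreasing", i)
        return ("increasing" if evidence > 0 else "decreasing", len(frequencies))

    # Example: a six-burst sequence stepping upward in frequency.
    print(accumulate_to_bound([1000, 1060, 1124, 1191, 1263, 1338]))
    ```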

  11. Long-Lasting Sound-Evoked Afterdischarge in the Auditory Midbrain.

    PubMed

    Ono, Munenori; Bishop, Deborah C; Oliver, Douglas L

    2016-02-12

    Different forms of plasticity are known to play a critical role in the processing of information about sound. Here, we report a novel neural plastic response in the inferior colliculus, an auditory center in the midbrain of the auditory pathway. A vigorous, long-lasting sound-evoked afterdischarge (LSA) is seen in a subpopulation of both glutamatergic and GABAergic neurons in the central nucleus of the inferior colliculus of normal hearing mice. These neurons were identified with single unit recordings and optogenetics in vivo. The LSA can continue for up to several minutes after the offset of the sound. LSA is induced by long-lasting, or repetitive short-duration, innocuous sounds. Neurons with LSA showed less adaptation than the neurons without LSA. The mechanisms that cause this neural behavior are unknown but may be a function of intrinsic mechanisms or the microcircuitry of the inferior colliculus. Since LSA produces long-lasting firing in the absence of sound, it may be relevant to temporary or chronic tinnitus or to some other aftereffect of long-duration sound.

  12. Time perception deficit in children with ADHD.

    PubMed

    Yang, Binrang; Chan, Raymond C K; Zou, Xiaobing; Jing, Jin; Mai, Jianning; Li, Jing

    2007-09-19

    A time perception deficit has been demonstrated in children with attention deficit hyperactivity disorder (ADHD) by using time production and time reproduction tasks. The impact of motor demand, however, has not yet been fully examined. The current study aimed to investigate pure time perception in Chinese children with ADHD by using a duration discrimination task. A battery of tests that were specifically designed to measure time perception and other related abilities, such as inhibition, attention, and working memory, was administered to 40 children with ADHD and to 40 demographically matched healthy children. A repeated-measures MANOVA indicated that children with ADHD showed significantly higher discrimination thresholds than did healthy controls, and there was an interaction effect between group and duration. Pairwise comparison indicated that children with ADHD were less accurate in discriminating duration at either target duration. Working memory (Corsi blocks task) was related to the discrimination threshold at a duration of 800 ms after controlling for full-scale IQ (FIQ) in the ADHD group, but this did not survive the Bonferroni correction. The results indicated that children with ADHD may have perceptual deficits in time discrimination. They needed a greater difference between the comparison and target intervals to discriminate the short, medium, and long durations reliably. This study provides further support for the existence of a generic time perception deficit, which is probably due to the involvement of a dysfunctional fronto-striato-cerebellar network, and especially to deficits in basic internal timing mechanisms.

  13. The Effect of Delayed Auditory Feedback on Activity in the Temporal Lobe while Speaking: A Positron Emission Tomography Study

    ERIC Educational Resources Information Center

    Takaso, Hideki; Eisner, Frank; Wise, Richard J. S.; Scott, Sophie K.

    2010-01-01

    Purpose: Delayed auditory feedback is a technique that can improve fluency in stutterers, while disrupting fluency in many nonstuttering individuals. The aim of this study was to determine the neural basis for the detection of and compensation for such a delay, and the effects of increases in the delay duration. Method: Positron emission…

  14. A higher sensory brain region is involved in reversing reinforcement-induced vocal changes in a songbird.

    PubMed

    Canopoli, Alessandro; Herbst, Joshua A; Hahnloser, Richard H R

    2014-05-14

    Many animals exhibit flexible behaviors that they can adjust to increase reward or avoid harm (learning by positive or aversive reinforcement). But what neural mechanisms allow them to restore their original behavior (motor program) after reinforcement is withdrawn? One possibility is that motor restoration relies on brain areas that have a role in memorization but no role in either motor production or sensory processing relevant for expressing the behavior and its refinement. We investigated the role of a higher auditory brain area in the songbird for modifying and restoring the stereotyped adult song. We exposed zebra finches to aversively reinforcing white noise stimuli contingent on the pitch of one of their stereotyped song syllables. In response, birds significantly changed the pitch of that syllable to avoid the aversive reinforcer. After we withdrew reinforcement, birds recovered their original song within a few days. However, we found that large bilateral lesions in the caudal medial nidopallium (NCM, a high auditory area) impaired recovery of the original pitch even several weeks after withdrawal of the reinforcing stimuli. Because NCM lesions spared both successful noise-avoidance behavior and birds' auditory discrimination ability, our results show that NCM is not needed for directed motor changes or for auditory discriminative processing, but is implicated in memorizing or recalling the memory of the recent song target. Copyright © 2014 the authors.

  15. Cutaneous sensory nerve as a substitute for auditory nerve in solving deaf-mutes’ hearing problem: an innovation in multi-channel-array skin-hearing technology

    PubMed Central

    Li, Jianwen; Li, Yan; Zhang, Ming; Ma, Weifang; Ma, Xuezong

    2014-01-01

    The current use of hearing aids and artificial cochleas for deaf-mute individuals depends on their auditory nerve. Skin-hearing technology, a patented system developed by our group, uses a cutaneous sensory nerve to substitute for the auditory nerve to help deaf-mute individuals hear sound. This paper introduces a new solution, multi-channel-array skin-hearing technology, to address the problem of speech discrimination. Based on the filtering principle of hair cells, external voice signals at different frequencies are converted to current signals at corresponding frequencies using electronic multi-channel bandpass filtering technology. Different positions on the skin can be stimulated by the electrode array, allowing the perception and discrimination of external speech signals to be determined by the skin response to the current signals. Through voice frequency analysis, the frequency range of the band-pass filter can also be determined. These findings demonstrate that the sensory nerves in the skin can help to transfer the voice signal and to distinguish the speech signal, suggesting that the skin sensory nerves are good candidates for the replacement of the auditory nerve in addressing deaf-mutes' hearing problems. Hearing experiments can also be performed more safely on the skin. Compared with the artificial cochlea, multi-channel-array skin-hearing aids carry a lower operational risk, are cheaper, and are easier to adopt widely. PMID:25317171
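
    The band-splitting idea described above can be illustrated with a short sketch: a voice signal is divided into frequency bands with bandpass filters, and each band's amplitude envelope could drive one electrode of a skin stimulation array. The band edges, channel count, filter order, and envelope extraction below are illustrative assumptions, not the patented system's specification.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def channel_envelopes(signal, fs, band_edges):
        """Split a voice signal into bandpass channels (a crude analogue of
        hair-cell filtering) and return each band's amplitude envelope,
        which could drive one channel of a skin electrode array.

        signal : 1-D audio samples.
        fs : sampling rate in Hz.
        band_edges : list of (low_hz, high_hz) tuples, one per channel.
        """
        envelopes = []
        for low, high in band_edges:
            sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, signal)              # zero-phase bandpass filtering
            envelopes.append(np.abs(hilbert(band)))      # Hilbert envelope of the band
        return np.array(envelopes)

    # Example: a synthetic two-tone "voice" split into four speech-range bands.
    fs = 16000
    t = np.arange(0, 0.5, 1 / fs)
    voice = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2200 * t)
    bands = [(100, 500), (500, 1000), (1000, 2000), (2000, 4000)]
    print(channel_envelopes(voice, fs, bands).shape)     # (4, n_samples)
    ```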

  16. Gay- and Lesbian-Sounding Auditory Cues Elicit Stereotyping and Discrimination.

    PubMed

    Fasoli, Fabio; Maass, Anne; Paladino, Maria Paola; Sulpizio, Simone

    2017-07-01

    The growing body of literature on the recognition of sexual orientation from voice ("auditory gaydar") is silent on the cognitive and social consequences of having a gay-/lesbian- versus heterosexual-sounding voice. We investigated this issue in four studies (overall N = 276), conducted in Italian, in which heterosexual listeners were exposed to single-sentence voice samples of gay/lesbian and heterosexual speakers. In all four studies, listeners were found to make gender-typical inferences about traits and preferences of heterosexual speakers, but gender-atypical inferences about those of gay or lesbian speakers. Behavioral intention measures showed that listeners considered lesbian and gay speakers less suitable for a leadership position, and male (but not female) listeners distanced themselves from gay speakers. Together, this research demonstrates that having a gay-/lesbian-sounding rather than heterosexual-sounding voice has tangible consequences for stereotyping and discrimination.

  17. Increased discriminability of authenticity from multimodal laughter is driven by auditory information.

    PubMed

    Lavan, Nadine; McGettigan, Carolyn

    2017-10-01

    We present an investigation of the perception of authenticity in audiovisual laughter, in which we contrast spontaneous and volitional samples and examine the contributions of unimodal affective information to multimodal percepts. In a pilot study, we demonstrate that listeners perceive spontaneous laughs as more authentic than volitional ones, both in unimodal (audio-only, visual-only) and multimodal contexts (audiovisual). In the main experiment, we show that the discriminability of volitional and spontaneous laughter is enhanced for multimodal laughter. Analyses of relationships between affective ratings and the perception of authenticity show that, while both unimodal percepts significantly predict evaluations of audiovisual laughter, it is auditory affective cues that have the greater influence on multimodal percepts. We discuss differences and potential mismatches in emotion signalling through voices and faces, in the context of spontaneous and volitional behaviour, and highlight issues that should be addressed in future studies of dynamic multimodal emotion processing.

  18. Interactions between auditory 'what' and 'where' pathways revealed by enhanced near-threshold discrimination of frequency and position.

    PubMed

    Tardif, Eric; Spierer, Lucas; Clarke, Stephanie; Murray, Micah M

    2008-03-07

    Partially segregated neuronal pathways ("what" and "where" pathways, respectively) are thought to mediate sound recognition and localization. Less studied are interactions between these pathways. In two experiments, we investigated whether near-threshold pitch discrimination sensitivity (d') is altered by supra-threshold task-irrelevant position differences and likewise whether near-threshold position discrimination sensitivity is altered by supra-threshold task-irrelevant pitch differences. Each experiment followed a 2 × 2 within-subjects design regarding changes/no change in the task-relevant and task-irrelevant stimulus dimensions. In Experiment 1, subjects discriminated between 750 Hz and 752 Hz pure tones, and d' for this near-threshold pitch change significantly increased by a factor of 1.09 when accompanied by a task-irrelevant position change of 65 μs interaural time difference (ITD). No response bias was induced by the task-irrelevant position change. In Experiment 2, subjects discriminated between 385 μs and 431 μs ITDs, and d' for this near-threshold position change significantly increased by a factor of 0.73 when accompanied by task-irrelevant pitch changes (6 Hz). In contrast to Experiment 1, task-irrelevant pitch changes induced a response criterion bias toward responding that the two stimuli differed. The collective results are indicative of facilitative interactions between "what" and "where" pathways. By demonstrating how these pathways may cooperate under impoverished listening conditions, our results bear implications for possible neuro-rehabilitation strategies. We discuss our results in terms of the dual-pathway model of auditory processing.
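
    The sensitivity index d' and the response criterion referred to here are standard signal detection theory quantities. The following minimal sketch shows how they are computed from hit and false-alarm counts; the trial counts are made up for illustration and do not come from the study.

    ```python
    from scipy.stats import norm

    def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
        """Standard signal-detection indices:
        d' = z(hit rate) - z(false-alarm rate); criterion c = -(z(H) + z(FA)) / 2.
        A log-linear correction keeps the rates away from 0 and 1."""
        n_signal = hits + misses
        n_noise = false_alarms + correct_rejections
        hit_rate = (hits + 0.5) / (n_signal + 1)
        fa_rate = (false_alarms + 0.5) / (n_noise + 1)
        d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
        criterion = -(norm.ppf(hit_rate) + norm.ppf(fa_rate)) / 2
        return d_prime, criterion

    # Hypothetical trial counts for a near-threshold pitch-change condition.
    print(dprime_and_criterion(hits=70, misses=30, false_alarms=35, correct_rejections=65))
    ```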

  19. Speech Evoked Auditory Brainstem Response in Stuttering

    PubMed Central

    Tahaei, Ali Akbar; Ashayeri, Hassan; Pourbakht, Akram; Kamali, Mohammad

    2014-01-01

    Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech-evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech-evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and this abnormal timing may underlie their disfluency. PMID:25215262

  20. Is the Role of External Feedback in Auditory Skill Learning Age Dependent?

    PubMed

    Zaltz, Yael; Roth, Daphne Ari-Even; Kishon-Rabin, Liat

    2017-12-20

    The purpose of this study was to investigate the role of external feedback in the auditory perceptual learning of school-age children as compared with adults. Forty-eight children (7-9 years of age) and 64 adults (20-35 years of age) completed a training session using an auditory frequency discrimination (difference limen for frequency) task, with external feedback (EF) provided for half of them. Data supported the following findings: (a) Children learned the difference limen for frequency task only when EF was provided. (b) The ability of the children to benefit from EF was associated with better cognitive skills. (c) Adults showed significant learning whether EF was provided or not. (d) In children, within-session learning following training was dependent on the provision of feedback, whereas between-session learning occurred irrespective of feedback. EF was found to be beneficial for auditory skill learning in 7- to 9-year-old children but not in young adults. The data support the supervised Hebbian model for auditory skill learning, suggesting combined bottom-up internal neural feedback controlled by top-down monitoring. In the case of immature executive functions, EF enhanced auditory skill learning. This study has implications for the design of training protocols in the auditory modality for different age groups, as well as for special populations.
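
    Difference limens for frequency are typically measured with an adaptive staircase, although the abstract does not specify the exact procedure used here. The sketch below simulates a 2-down/1-up staircase (which converges near 70.7% correct) against a toy listener; the step factor, starting difference, and the simulated listener's psychometric rule are all illustrative assumptions.

    ```python
    import random

    def two_down_one_up(simulated_dlf=8.0, start_delta=50.0, step_factor=0.8,
                        n_reversals=8, seed=1):
        """Simulate a 2-down/1-up adaptive staircase for frequency discrimination:
        the frequency difference (delta, Hz) shrinks after two consecutive correct
        trials and grows after an error. A toy listener with an underlying
        difference limen `simulated_dlf` stands in for a real participant.
        Returns a threshold estimate (mean of the last few reversal points)."""
        random.seed(seed)
        delta, correct_streak, last_direction = start_delta, 0, None
        reversals = []
        while len(reversals) < n_reversals:
            # Toy psychometric rule: more likely correct when delta >> DLF.
            p_correct = 0.5 + 0.5 * (delta / (delta + simulated_dlf))
            correct = random.random() < p_correct
            if correct:
                correct_streak += 1
                if correct_streak == 2:                 # 2-down: make the task harder
                    direction = "down"
                    delta *= step_factor
                    correct_streak = 0
                else:
                    direction = last_direction
            else:                                       # 1-up: make the task easier
                direction = "up"
                delta /= step_factor
                correct_streak = 0
            if last_direction and direction and direction != last_direction:
                reversals.append(delta)                 # track staircase reversals
            last_direction = direction
        return sum(reversals[-6:]) / len(reversals[-6:])

    print(f"Estimated difference limen ≈ {two_down_one_up():.1f} Hz")
    ```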

  1. AUDITORY ASSOCIATIVE MEMORY AND REPRESENTATIONAL PLASTICITY IN THE PRIMARY AUDITORY CORTEX

    PubMed Central

    Weinberger, Norman M.

    2009-01-01

    Historically, the primary auditory cortex has been largely ignored as a substrate of auditory memory, perhaps because studies of associative learning could not reveal the plasticity of receptive fields (RFs). The use of a unified experimental design, in which RFs are obtained before and after standard training (e.g., classical and instrumental conditioning) revealed associative representational plasticity, characterized by facilitation of responses to tonal conditioned stimuli (CSs) at the expense of other frequencies, producing CS-specific tuning shifts. Associative representational plasticity (ARP) possesses the major attributes of associative memory: it is highly specific, discriminative, rapidly acquired, consolidates over hours and days and can be retained indefinitely. The nucleus basalis cholinergic system is sufficient both for the induction of ARP and for the induction of specific auditory memory, including control of the amount of remembered acoustic details. Extant controversies regarding the form, function and neural substrates of ARP appear largely to reflect different assumptions, which are explicitly discussed. The view that the forms of plasticity are task-dependent is supported by ongoing studies in which auditory learning involves CS-specific decreases in threshold or bandwidth without affecting frequency tuning. Future research needs to focus on the factors that determine ARP and their functions in hearing and in auditory memory. PMID:17344002

  2. [Functional hearing examinations in patients suffering from diabetes mellitus type 1 in regard to disease duration].

    PubMed

    Pudar, Goran; Vlaski, Ljiljana; Filipović, Danka; Tanackov, Ilija

    2010-01-01

    Hearing disturbances in persons suffering from diabetes have attracted great attention for many decades. In this study we examined the auditory function of 50 patients with type 1 diabetes mellitus of varying duration by analyzing the results of pure-tone audiometry and brainstem auditory evoked potentials. The measurements were compared with those of 30 healthy subjects of corresponding age and gender. The diabetic patients were divided according to disease duration (group I, 0-5 years; group II, 6-10 years; group III, over 10 years). A statistically significant increase in sensorineural hearing loss was found in the diabetics according to the duration of their disease (group I = 14.09%, group II = 21.39%, group III = 104.89%). The mean values of the brainstem auditory evoked potentials did not differ significantly between controls and diabetics at any absolute latency on the right or left side (significance threshold p = 0.05). For the interwave latencies, the diabetic patients showed a significant qualitative difference in the I-III and I-V intervals in both ears in terms of the internal distribution of responses. In cases of sensorineural hearing loss we found a significant association with prolonged latencies of wave I in the right ear and of waves I and V in the left ear. The most probable cause of these results lies in individual differences in how the organism reacts to the consequences of the disease (disturbance in the distal part of N. cochlearis). The results demonstrate significant sensorineural hearing loss in patients with type 1 diabetes mellitus that increases with disease duration. We also found qualitative changes in the brainstem auditory evoked potentials of the diabetic patients compared with controls, as well as significant quantitative changes related to the presence of sensorineural hearing loss.

  3. Hearing improvement with softband and implanted bone-anchored hearing devices and modified implantation surgery in patients with bilateral microtia-atresia.

    PubMed

    Wang, Yibei; Fan, Xinmiao; Wang, Pu; Fan, Yue; Chen, Xiaowei

    2018-01-01

    To evaluate auditory development and hearing improvement in patients with bilateral microtia-atresia using softband and implanted bone-anchored hearing devices and to modify the implantation surgery. The subjects were divided into two groups: the softband group (40 infants, 3 months to 2 years old, Ponto softband) and the implanted group (6 patients, 6-28 years old, Ponto). The Infant-Toddler Meaningful Auditory Integration Scale was used to evaluate auditory development at baseline and after 3, 6, 12, and 24 months, and visual reinforcement audiometry was used to assess the auditory threshold in the softband group. In the implanted group, bone-anchored hearing devices were implanted in combination with auricular reconstruction surgery, and high-resolution CT was used to assess the deformity preoperatively. Auditory threshold and speech discrimination scores of the patients with implants were measured under the unaided, softband, and implanted conditions. Total Infant-Toddler Meaningful Auditory Integration Scale scores in the softband group improved significantly and approached normal levels. The average visual reinforcement audiometry values under the unaided and softband conditions were 76.75 ± 6.05 dB HL and 32.25 ± 6.20 dB HL (P < 0.01), respectively. In the implanted group, the auditory thresholds under the unaided, softband, and implanted conditions were 59.17 ± 3.76 dB HL, 32.5 ± 2.74 dB HL, and 17.5 ± 5.24 dB HL (P < 0.01), respectively. The respective speech discrimination scores were 23.33 ± 14.72%, 77.17 ± 6.46%, and 96.50 ± 2.66% (P < 0.01). Using softband bone-anchored hearing devices is effective for auditory development and hearing improvement in infants with bilateral microtia-atresia. Wearing softband bone-anchored hearing devices before auricle reconstruction and combining bone-anchored hearing device implantation with auricular reconstruction surgery may be the optimal clinical choice for these patients, and results in more significant hearing improvement and minimal surgical and anesthetic injury. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. An investigation of the relation between sibilant production and somatosensory and auditory acuity

    PubMed Central

    Ghosh, Satrajit S.; Matthies, Melanie L.; Maas, Edwin; Hanson, Alexandra; Tiede, Mark; Ménard, Lucie; Guenther, Frank H.; Lane, Harlan; Perkell, Joseph S.

    2010-01-01

    The relation between auditory acuity, somatosensory acuity and the magnitude of produced sibilant contrast was investigated with data from 18 participants. To measure auditory acuity, stimuli from a synthetic sibilant continuum ([s]-[ʃ]) were used in a four-interval, two-alternative forced choice adaptive-staircase discrimination task. To measure somatosensory acuity, small plastic domes with grooves of different spacing were pressed against each participant’s tongue tip and the participant was asked to identify one of four possible orientations of the grooves. Sibilant contrast magnitudes were estimated from productions of the words ‘said,’ ‘shed,’ ‘sid,’ and ‘shid’. Multiple linear regression revealed a significant relation indicating that a combination of somatosensory and auditory acuity measures predicts produced acoustic contrast. When the participants were divided into high- and low-acuity groups based on their median somatosensory and auditory acuity measures, separate ANOVA analyses with sibilant contrast as the dependent variable yielded a significant main effect for each acuity group. These results provide evidence that sibilant productions have auditory as well as somatosensory goals and are consistent with prior results and the theoretical framework underlying the DIVA model of speech production. PMID:21110603

  5. Auditory Memory Distortion for Spoken Prose

    PubMed Central

    Hutchison, Joanna L.; Hubbard, Timothy L.; Ferrandino, Blaise; Brigante, Ryan; Wright, Jamie M.; Rypma, Bart

    2013-01-01

    Observers often remember a scene as containing information that was not presented but that would have likely been located just beyond the observed boundaries of the scene. This effect is called boundary extension (BE; e.g., Intraub & Richardson, 1989). Previous studies have observed BE in memory for visual and haptic stimuli, and the present experiments examined whether BE occurred in memory for auditory stimuli (prose, music). Experiments 1 and 2 varied the amount of auditory content to be remembered. BE was not observed, but when auditory targets contained more content, boundary restriction (BR) occurred. Experiment 3 presented auditory stimuli with less content and BR also occurred. In Experiment 4, white noise was added to stimuli with less content to equalize the durations of auditory stimuli, and BR still occurred. Experiments 5 and 6 presented trained stories and popular music, and BR still occurred. This latter finding ruled out the hypothesis that the lack of BE in Experiments 1–4 reflected a lack of familiarity with the stimuli. Overall, memory for auditory content exhibited BR rather than BE, and this pattern was stronger if auditory stimuli contained more content. Implications for the understanding of general perceptual processing and directions for future research are discussed. PMID:22612172

  6. Auditory Profiles of Classical, Jazz, and Rock Musicians: Genre-Specific Sensitivity to Musical Sound Features.

    PubMed

    Tervaniemi, Mari; Janhunen, Lauri; Kruck, Stefanie; Putkinen, Vesa; Huotilainen, Minna

    2015-01-01

    When compared with individuals without explicit training in music, adult musicians have facilitated neural functions in several modalities. They also display structural changes in various brain areas, these changes corresponding to the intensity and duration of their musical training. Previous studies have focused on investigating musicians with training in Western classical music. However, musicians involved in different musical genres may display highly differentiated auditory profiles according to the demands set by their genre, i.e., varying importance of different musical sound features. This hypothesis was tested in a novel melody paradigm including deviants in tuning, timbre, rhythm, melody transpositions, and melody contour. Using this paradigm while the participants were watching a silent video and instructed to ignore the sounds, we compared classical, jazz, and rock musicians' and non-musicians' accuracy of neural encoding of the melody. In all groups of participants, all deviants elicited an MMN response, which is a cortical index of deviance discrimination. The strength of the MMN and the subsequent attentional P3a responses reflected the importance of various sound features in each music genre: these automatic brain responses were selectively enhanced to deviants in tuning (classical musicians), timing (classical and jazz musicians), transposition (jazz musicians), and melody contour (jazz and rock musicians). Taken together, these results indicate that musicians with different training history have highly specialized cortical reactivity to sounds which violate the neural template for melody content.

  7. Auditory Profiles of Classical, Jazz, and Rock Musicians: Genre-Specific Sensitivity to Musical Sound Features

    PubMed Central

    Tervaniemi, Mari; Janhunen, Lauri; Kruck, Stefanie; Putkinen, Vesa; Huotilainen, Minna

    2016-01-01

    When compared with individuals without explicit training in music, adult musicians have facilitated neural functions in several modalities. They also display structural changes in various brain areas, these changes corresponding to the intensity and duration of their musical training. Previous studies have focused on investigating musicians with training in Western classical music. However, musicians involved in different musical genres may display highly differentiated auditory profiles according to the demands set by their genre, i.e., varying importance of different musical sound features. This hypothesis was tested in a novel melody paradigm including deviants in tuning, timbre, rhythm, melody transpositions, and melody contour. Using this paradigm while the participants were watching a silent video and instructed to ignore the sounds, we compared classical, jazz, and rock musicians' and non-musicians' accuracy of neural encoding of the melody. In all groups of participants, all deviants elicited an MMN response, which is a cortical index of deviance discrimination. The strength of the MMN and the subsequent attentional P3a responses reflected the importance of various sound features in each music genre: these automatic brain responses were selectively enhanced to deviants in tuning (classical musicians), timing (classical and jazz musicians), transposition (jazz musicians), and melody contour (jazz and rock musicians). Taken together, these results indicate that musicians with different training history have highly specialized cortical reactivity to sounds which violate the neural template for melody content. PMID:26779055

  8. A genome-wide linkage and association study of musical aptitude identifies loci containing genes related to inner ear development and neurocognitive functions

    PubMed Central

    Oikkonen, J.; Huang, Y.; Onkamo, P.; Ukkola-Vuoti, L.; Raijas, P.; Karma, K.; Vieland, V. J.; Järvelä, I.

    2014-01-01

    Humans have developed the perception, production and processing of sounds into the art of music. A genetic contribution to these skills of musical aptitude has long been suggested. We performed a genome-wide scan in 76 pedigrees (767 individuals) characterized for the ability to discriminate pitch (SP), duration (ST) and sound patterns (KMT), which are primary capacities for music perception. Using the Bayesian linkage and association approach implemented in program package KELVIN, especially designed for complex pedigrees, several SNPs near genes affecting the functions of the auditory pathway and neurocognitive processes were identified. The strongest association was found at 3q21.3 (rs9854612) with combined SP, ST and KMT test scores (COMB). This region is located a few dozen kilobases upstream of the GATA binding protein 2 (GATA2) gene. GATA2 regulates the development of cochlear hair cells and the inferior colliculus (IC), which are important in tonotopic mapping. The highest probability of linkage was obtained for phenotype SP at 4p14, located next to the region harboring the protocadherin 7 gene, PCDH7. Two SNPs rs13146789 and rs13109270 of PCDH7 showed strong association. PCDH7 has been suggested to play a role in cochlear and amygdaloid complexes. Functional class analysis showed that inner ear and schizophrenia related genes were enriched inside the linked regions. This study is the first to show the importance of auditory pathway genes in musical aptitude. PMID:24614497

  9. A genome-wide linkage and association study of musical aptitude identifies loci containing genes related to inner ear development and neurocognitive functions.

    PubMed

    Oikkonen, J; Huang, Y; Onkamo, P; Ukkola-Vuoti, L; Raijas, P; Karma, K; Vieland, V J; Järvelä, I

    2015-02-01

    Humans have developed the perception, production and processing of sounds into the art of music. A genetic contribution to these skills of musical aptitude has long been suggested. We performed a genome-wide scan in 76 pedigrees (767 individuals) characterized for the ability to discriminate pitch (SP), duration (ST) and sound patterns (KMT), which are primary capacities for music perception. Using the Bayesian linkage and association approach implemented in program package KELVIN, especially designed for complex pedigrees, several single nucleotide polymorphisms (SNPs) near genes affecting the functions of the auditory pathway and neurocognitive processes were identified. The strongest association was found at 3q21.3 (rs9854612) with combined SP, ST and KMT test scores (COMB). This region is located a few dozen kilobases upstream of the GATA binding protein 2 (GATA2) gene. GATA2 regulates the development of cochlear hair cells and the inferior colliculus (IC), which are important in tonotopic mapping. The highest probability of linkage was obtained for phenotype SP at 4p14, located next to the region harboring the protocadherin 7 gene, PCDH7. Two SNPs rs13146789 and rs13109270 of PCDH7 showed strong association. PCDH7 has been suggested to play a role in cochlear and amygdaloid complexes. Functional class analysis showed that inner ear and schizophrenia-related genes were enriched inside the linked regions. This study is the first to show the importance of auditory pathway genes in musical aptitude.

  10. Developmental Trajectories for Children With Dyslexia and Low IQ Poor Readers

    PubMed Central

    2016-01-01

    Reading difficulties are found in children with both high and low IQ and it is now clear that both groups exhibit difficulties in phonological processing. Here, we apply the developmental trajectories approach, a new methodology developed for studying language and cognitive impairments in developmental disorders, to both poor reader groups. The trajectory methodology enables identification of atypical versus delayed development in datasets gathered using group matching designs. Regarding the cognitive predictors of reading, which here are phonological awareness, phonological short-term memory (PSTM) and rapid automatized naming (RAN), the method showed that trajectories for the two groups diverged markedly. Children with dyslexia showed atypical development in phonological awareness, while low IQ poor readers showed developmental delay. Low IQ poor readers showed atypical PSTM and RAN development, but children with dyslexia showed developmental delay. These divergent trajectories may have important ramifications for supporting each type of poor reader, although all poor readers showed weakness in all areas. Regarding auditory processing, the developmental trajectories were very similar for the two poor reader groups. However, children with dyslexia demonstrated developmental delay for auditory discrimination of Duration, while the low IQ children showed atypical development on this measure. The data show that, regardless of IQ, poor readers have developmental trajectories that differ from typically developing children. The trajectories approach enables differences in trajectory classification to be identified across poor reader group, as well as specifying the individual nature of these trajectories. PMID:27110928

  11. Stimulus train duration but not attention moderates γ-band entrainment abnormalities in schizophrenia

    PubMed Central

    Hamm, Jordan P.; Bobilev, Anastasia M.; Hayrynen, Lauren K.; Hudgens-Haney, Matthew E.; Oliver, William T.; Parker, David A.; McDowell, Jennifer E.; Buckley, Peter A.; Clementz, Brett A.

    2017-01-01

    Electroencephalographic (EEG) studies of auditory steady-state responses (aSSRs) non-invasively probe gamma-band (40-Hz) oscillatory capacity in sensory cortex with high signal-to-noise ratio. Consistent reports of reduced 40-Hz aSSRs in persons with schizophrenia (SZ) indicate the measure's potential as an efficient biomarker for the disease, but studies have been limited to passive or indirect listening contexts with stereotypically short (500 ms) stimulus trains. An inability to modulate sensorineural processing in accord with behavioral goals or with the sensory environmental context may represent a fundamental deficit in SZ, but whether and how this deficit relates to reduced aSSRs is unknown. We systematically varied stimulus duration and attentional contexts to further mature the 40-Hz aSSR as a biomarker for future translational or mechanistic studies. Eighteen SZ and 18 healthy subjects (H) were presented with binaural pure tones with or without sinusoidal amplitude modulation at 40 Hz. Stimulus duration (500 or 1500 ms) and attention (via a button-press task) were varied across four separate blocks. Evoked potentials recorded with dense-array EEGs were analyzed in the time-frequency domain. SZ displayed reduced 40-Hz aSSRs to typical stimulation parameters, replicating previous findings. In H, aSSRs were reduced when stimuli were presented in longer trains and were slightly enhanced by attention. Only the former modulation was impaired in SZ and correlated with sensory discrimination performance. Thus, gamma-band aSSRs are modulated by both attentional and stimulus duration contexts, but only modulations related to physical stimulus properties are abnormal in SZ, supporting its status as a biomarker of psychotic perceptual disturbance involving non-attentional sensori-cortical circuits. PMID:25868936
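
    As a rough illustration of how 40-Hz steady-state amplitude can be quantified (the authors used dense-array time-frequency analyses; the FFT-based estimate, sampling rate, and steady-state window below are simplifying assumptions), one can average the epochs and read the spectral amplitude at the modulation frequency.

    ```python
    import numpy as np

    def assr_amplitude(epochs, fs, target_hz=40.0, analysis_window=(0.2, 0.5)):
        """Estimate 40-Hz aSSR amplitude from the FFT of the trial-averaged
        evoked potential within an assumed steady-state window.

        epochs : (n_trials, n_samples) EEG epochs time-locked to train onset.
        fs : sampling rate in Hz.
        analysis_window : seconds after onset treated as steady state (assumed).
        """
        evoked = epochs.mean(axis=0)                          # trial-averaged response
        start, stop = (int(round(t * fs)) for t in analysis_window)
        segment = evoked[start:stop] - evoked[start:stop].mean()
        spectrum = np.abs(np.fft.rfft(segment)) / segment.size
        freqs = np.fft.rfftfreq(segment.size, d=1.0 / fs)
        return spectrum[np.argmin(np.abs(freqs - target_hz))]  # amplitude at ~40 Hz

    # Example: simulated epochs containing a weak 40-Hz response plus noise.
    rng = np.random.default_rng(2)
    fs = 500
    t = np.arange(0, 0.6, 1 / fs)
    epochs = 0.5 * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 5, (100, t.size))
    print(f"40-Hz amplitude ≈ {assr_amplitude(epochs, fs):.2f} (arbitrary units)")
    ```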

  12. EEG signatures accompanying auditory figure-ground segregation

    PubMed Central

    Tóth, Brigitta; Kocsis, Zsuzsanna; Háden, Gábor P.; Szerafin, Ágnes; Shinn-Cunningham, Barbara; Winkler, István

    2017-01-01

    In everyday acoustic scenes, figure-ground segregation typically requires one to group together sound elements over both time and frequency. Electroencephalogram was recorded while listeners detected repeating tonal complexes composed of a random set of pure tones within stimuli consisting of randomly varying tonal elements. The repeating pattern was perceived as a figure over the randomly changing background. It was found that detection performance improved both as the number of pure tones making up each repeated complex (figure coherence) increased, and as the number of repeated complexes (duration) increased – i.e., detection was easier when either the spectral or temporal structure of the figure was enhanced. Figure detection was accompanied by the elicitation of the object related negativity (ORN) and the P400 event-related potentials (ERPs), which have been previously shown to be evoked by the presence of two concurrent sounds. Both ERP components had generators within and outside of auditory cortex. The amplitudes of the ORN and the P400 increased with both figure coherence and figure duration. However, only the P400 amplitude correlated with detection performance. These results suggest that 1) the ORN and P400 reflect processes involved in detecting the emergence of a new auditory object in the presence of other concurrent auditory objects; 2) the ORN corresponds to the likelihood of the presence of two or more concurrent sound objects, whereas the P400 reflects the perceptual recognition of the presence of multiple auditory objects and/or preparation for reporting the detection of a target object. PMID:27421185

  13. Effects of single cycle binaural beat duration on auditory evoked potentials.

    PubMed

    Mihajloski, Todor; Bohorquez, Jorge; Özdamar, Özcan

    2014-01-01

    Binaural beat (BB) illusions are experienced as continuous central pulsations when two sounds with slightly different frequencies are delivered, one to each ear. It has been shown that steady-state auditory evoked potentials (AEPs) to BBs can be captured and investigated. The authors recently developed a new method of evoking transient AEPs to binaural beats using frequency-modulated stimuli. This methodology was able to create single BBs at predetermined intervals with varying carrier frequencies. This study examines the effects of the BB duration and the frequency-modulating component of the stimulus on the binaural beats and their evoked potentials. Normal-hearing subjects were tested with a set of four durations (25, 50, 100, and 200 ms) under two stimulation configurations: binaural dichotic (binaural beats) and diotic (frequency modulation). Of the durations tested, the 100 ms beat evoked the largest-amplitude responses. The frequency modulation effect showed a decrease in peak amplitudes with increasing beat duration until their complete disappearance at 200 ms. Even though the frequency modulation effects were not present at 200 ms, the binaural beats were still perceived and captured as evoked potentials.
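
    The dichotic versus diotic distinction can be illustrated with a short stimulus-generation sketch: a dichotic pair of tones offset by a few hertz (one per ear) yields a beat that exists only centrally, whereas a diotic frequency-modulated tone places the same fluctuation physically at both ears. The carrier frequency, beat rate, modulation depth, and duration below are illustrative choices, not the stimulus parameters of this study.

    ```python
    import numpy as np

    def binaural_beat(carrier_hz=500.0, beat_hz=4.0, duration_s=1.0, fs=44100):
        """Dichotic binaural-beat stimulus: the left ear receives the carrier and
        the right ear a tone offset by `beat_hz`, so the beat arises only after
        binaural combination in the central auditory system."""
        t = np.arange(0, duration_s, 1.0 / fs)
        left = np.sin(2 * np.pi * carrier_hz * t)
        right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
        return np.column_stack([left, right])          # (n_samples, 2) stereo signal

    def diotic_fm_control(carrier_hz=500.0, mod_hz=4.0, depth_hz=2.0,
                          duration_s=1.0, fs=44100):
        """Diotic control: the same frequency-modulated tone in both ears, so the
        fluctuation is physically present at each ear rather than a central percept."""
        t = np.arange(0, duration_s, 1.0 / fs)
        phase = 2 * np.pi * carrier_hz * t - (depth_hz / mod_hz) * np.cos(2 * np.pi * mod_hz * t)
        tone = np.sin(phase)
        return np.column_stack([tone, tone])

    print(binaural_beat().shape, diotic_fm_control().shape)
    ```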

  14. Object-based attention modulates the discrimination of level increments in stop-consonant noise bursts

    PubMed Central

    2018-01-01

    This study tested the hypothesis that object-based attention modulates the discrimination of level increments in stop-consonant noise bursts. With consonant-vowel-consonant (CvC) words consisting of an ≈80-dB vowel (v), a pre-vocalic (Cv) and a post-vocalic (vC) stop-consonant noise burst (≈60-dB SPL), we measured discrimination thresholds (LDTs) for level increments (ΔL) in the noise bursts presented either in CvC context or in isolation. In the 2-interval, 2-alternative forced-choice task, each observation interval presented a CvC word (e.g., /pæk/ /pæk/), and normal-hearing participants had to discern ΔL in the Cv or vC burst. Based on the linguistic word labels, the auditory events of each trial were perceived as two auditory objects (Cv-v-vC and Cv-v-vC) that group together the bursts and vowels, hindering selective attention to ΔL. To discern ΔL in Cv or vC, the events must be reorganized into three auditory objects: the to-be-attended pre-vocalic (Cv–Cv) or post-vocalic burst pair (vC–vC), and the to-be-ignored vowel pair (v–v). Our results suggest that, rather than being automatic, this reorganization requires training, in spite of using familiar CvC words. Relative to bursts in isolation, bursts in context always produced inferior ΔL discrimination accuracy (a context effect), which depended strongly on the acoustic separation between the bursts and the vowel: accuracy was much keener for the object separated from the vowel (post-vocalic) than for the object adjoining it (pre-vocalic) (a temporal-position effect). Variability in CvC dimensions that did not alter the noise-burst perceptual grouping had minor effects on discrimination accuracy. In addition to being robust and persistent, these effects are relatively general, being evident in forced-choice tasks with one or two observation intervals, with or without variability in the temporal position of ΔL, and with either fixed or roving CvC standards. The results lend support to the hypothesis. PMID:29364931

  15. Residual neural processing of musical sound features in adult cochlear implant users.

    PubMed

    Timm, Lydia; Vuust, Peter; Brattico, Elvira; Agrawal, Deepashri; Debener, Stefan; Büchner, Andreas; Dengler, Reinhard; Wittfoth, Matthias

    2014-01-01

    Auditory processing in general and music perception in particular are hampered in adult cochlear implant (CI) users. To examine the residual music perception skills and their underlying neural correlates in CI users implanted in adolescence or adulthood, we conducted an electrophysiological and behavioral study comparing adult CI users with normal-hearing age-matched controls (NH controls). We used a newly developed musical multi-feature paradigm, which makes it possible to test automatic auditory discrimination of six different types of sound feature changes inserted within a musically enriched setting lasting only 20 min. The presentation of stimuli did not require the participants' attention, allowing the study of the early automatic stage of feature processing in the auditory cortex. For the CI users, we obtained mismatch negativity (MMN) brain responses to five feature changes but not to changes of rhythm, whereas we obtained MMNs for all the feature changes in the NH controls. Furthermore, the MMNs of CI users were reduced in amplitude and delayed relative to those of NH controls for changes of pitch and guitar timbre. No other group differences in MMN parameters were found for changes in intensity and saxophone timbre. In addition, the MMNs in CI users reflected the behavioral scores from a respective discrimination task and were correlated with patients' age and speech intelligibility. Our results suggest that even though CI users are not performing at the same level as NH controls in neural discrimination of pitch-based features, they do possess potential neural abilities for music processing. However, CI users showed a disrupted ability to automatically discriminate rhythmic changes compared with controls. The current behavioral and MMN findings highlight the residual neural skills for music processing even in CI users who have been implanted in adolescence or adulthood. Highlights: Automatic brain responses to musical feature changes reflect the limitations of central auditory processing in adult cochlear implant users. The brains of adult CI users automatically process sound feature changes even when these are inserted in a musical context. CI users show disrupted automatic discriminatory abilities for rhythm in the brain. Our fast paradigm demonstrates residual musical abilities in the brains of adult CI users, giving hope for their future rehabilitation.

  16. Residual Neural Processing of Musical Sound Features in Adult Cochlear Implant Users

    PubMed Central

    Timm, Lydia; Vuust, Peter; Brattico, Elvira; Agrawal, Deepashri; Debener, Stefan; Büchner, Andreas; Dengler, Reinhard; Wittfoth, Matthias

    2014-01-01

    Auditory processing in general and music perception in particular are hampered in adult cochlear implant (CI) users. To examine the residual music perception skills and their underlying neural correlates in CI users implanted in adolescence or adulthood, we conducted an electrophysiological and behavioral study comparing adult CI users with normal-hearing age-matched controls (NH controls). We used a newly developed musical multi-feature paradigm, which makes it possible to test automatic auditory discrimination of six different types of sound feature changes inserted within a musically enriched setting lasting only 20 min. The presentation of stimuli did not require the participants' attention, allowing the study of the early automatic stage of feature processing in the auditory cortex. For the CI users, we obtained mismatch negativity (MMN) brain responses to five feature changes but not to changes of rhythm, whereas we obtained MMNs for all the feature changes in the NH controls. Furthermore, the MMNs of CI users were reduced in amplitude and delayed relative to those of NH controls for changes of pitch and guitar timbre. No other group differences in MMN parameters were found for changes in intensity and saxophone timbre. In addition, the MMNs in CI users reflected the behavioral scores from a respective discrimination task and were correlated with patients' age and speech intelligibility. Our results suggest that even though CI users are not performing at the same level as NH controls in neural discrimination of pitch-based features, they do possess potential neural abilities for music processing. However, CI users showed a disrupted ability to automatically discriminate rhythmic changes compared with controls. The current behavioral and MMN findings highlight the residual neural skills for music processing even in CI users who have been implanted in adolescence or adulthood. Highlights: Automatic brain responses to musical feature changes reflect the limitations of central auditory processing in adult cochlear implant users. The brains of adult CI users automatically process sound feature changes even when these are inserted in a musical context. CI users show disrupted automatic discriminatory abilities for rhythm in the brain. Our fast paradigm demonstrates residual musical abilities in the brains of adult CI users, giving hope for their future rehabilitation. PMID:24772074

  17. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs

    PubMed Central

    Ponnath, Abhilash; Farris, Hamilton E.

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3–10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation lasted <2 s and, in different cells, excitability either decreased, increased or shifted in latency. Within cells, the modulatory effect of sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene. PMID:25120437

  18. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    PubMed

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation lasted <2 s and, in different cells, excitability either decreased, increased or shifted in latency. Within cells, the modulatory effect of sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  19. The music perception abilities of prelingually deaf children with cochlear implants.

    PubMed

    Stabej, Katja Kladnik; Smid, Lojze; Gros, Anton; Zargi, Miha; Kosir, Andrej; Vatovec, Jagoda

    2012-10-01

    To investigate the music perception abilities of prelingually deaf children with cochlear implants, in comparison to a group of normal-hearing children, and to consider the factors that contribute to music perception. The music perception abilities of 39 prelingually deaf children with unilateral cochlear implants were compared to the abilities of 39 normal hearing children. To assess the music listening abilities, the MuSIC perception test was adopted. The influence of the child's age, age at implantation, device experience and type of sound-processing strategy on the music perception were evaluated. The effects of auditory performance, nonverbal intellectual abilities, as well as the child's additional musical education on music perception were also considered. Children with cochlear implants and normal hearing children performed significantly differently with respect to rhythm discrimination (55% vs. 82%, p<0.001), instrument identification (57% vs. 88%, p<0.001) and emotion rating (p=0.022). However we found no significant difference in terms of melody discrimination and dissonance rating between the two groups. There was a positive correlation between auditory performance and melody discrimination (r=0.27; p=0.031), between auditory performance and instrument identification (r=0.20; p=0.059) and between the child's grade (mark) in school music classes and melody discrimination (r=0.34; p=0.030). In children with cochlear implant only, the music perception ability assessed by the emotion rating test was negatively correlated to the child's age (r(S)=-0.38; p=0.001), age at implantation (r(S)=-0.34; p=0.032), and device experience (r(S)=-0.38; p=0.019). The child's grade in school music classes showed a positive correlation to music perception abilities assessed by rhythm discrimination test (r(S)=0.46; p<0.001), melody discrimination test (r(S)=0.28; p=0.018), and instrument identification test (r(S)=0.23; p=0.05). As expected, there was a marked difference in the music perception abilities of prelingually deaf children with cochlear implants in comparison to the group of normal hearing children, but not for all the tests of music perception. Additional multi-centre studies, including a larger number of participants and a broader spectrum of music subtests, considering as many as possible of the factors that may contribute to music perception, seem reasonable. Copyright © 2012. Published by Elsevier Ireland Ltd.

  20. Supramodal Enhancement of Auditory Perceptual and Cognitive Learning by Video Game Playing.

    PubMed

    Zhang, Yu-Xuan; Tang, Ding-Lan; Moore, David R; Amitay, Sygal

    2017-01-01

    Medical rehabilitation involving behavioral training can produce highly successful outcomes, but those successes are obtained at the cost of long periods of often tedious training, reducing compliance. By contrast, arcade-style video games can be entertaining and highly motivating. We examine here the impact of video game play on contiguous perceptual training. We alternated several periods of auditory pure-tone frequency discrimination (FD) with the popular spatial visual-motor game Tetris played in silence. Tetris play alone did not produce any auditory or cognitive benefits. However, when alternated with FD training it enhanced learning of FD and auditory working memory. The learning-enhancing effects of Tetris play cannot be explained simply by the visual-spatial training involved, as the effects were gone when Tetris play was replaced with another visual-spatial task using Tetris-like stimuli but not incorporated into a game environment. The results indicate that game play enhances learning and transfer of the contiguous auditory experiences, pointing to a promising approach for increasing the efficiency and applicability of rehabilitative training.

  1. The 5% difference: early sensory processing predicts sarcasm perception in schizophrenia and schizo-affective disorder.

    PubMed

    Kantrowitz, J T; Hoptman, M J; Leitman, D I; Silipo, G; Javitt, D C

    2014-01-01

    Intact sarcasm perception is a crucial component of social cognition and mentalizing (the ability to understand the mental state of oneself and others). In sarcasm, tone of voice is used to negate the literal meaning of an utterance. In particular, changes in pitch are used to distinguish between sincere and sarcastic utterances. Schizophrenia patients show well-replicated deficits in auditory function and functional connectivity (FC) within and between auditory cortical regions. In this study we investigated the contributions of auditory deficits to sarcasm perception in schizophrenia. Auditory measures including pitch processing, auditory emotion recognition (AER) and sarcasm detection were obtained from 76 patients with schizophrenia/schizo-affective disorder and 72 controls. Resting-state FC (rsFC) was obtained from a subsample and was analyzed using seeds placed in both auditory cortex and meta-analysis-defined core-mentalizing regions relative to auditory performance. Patients showed large effect-size deficits across auditory measures. Sarcasm deficits correlated significantly with general functioning and impaired pitch processing both across groups and within the patient group alone. Patients also showed reduced sensitivity to alterations in mean pitch and variability. For patients, sarcasm discrimination correlated exclusively with the level of rsFC within primary auditory regions whereas for controls, correlations were observed exclusively within core-mentalizing regions (the right posterior superior temporal gyrus, anterior superior temporal sulcus and insula, and left posterior medial temporal gyrus). These findings confirm the contribution of auditory deficits to theory of mind (ToM) impairments in schizophrenia, and demonstrate that FC within auditory, but not core-mentalizing, regions is rate limiting with respect to sarcasm detection in schizophrenia.
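
    The seed-based resting-state functional connectivity (rsFC) used above reduces, at its core, to correlating a seed region's time course with another region's time course and then relating that connectivity value to behavior across subjects. The sketch below shows that two-step computation with randomly generated stand-in data; the seed choice, preprocessing, and statistics in the actual study were far more involved.

      import numpy as np
      from scipy.stats import pearsonr

      def seed_connectivity(seed_ts, region_ts):
          """rsFC between two regions: Pearson correlation of their time courses."""
          return pearsonr(seed_ts, region_ts)[0]

      # Hypothetical data: per-subject time courses (200 volumes) for a primary
      # auditory seed and a second auditory region, plus sarcasm-detection scores.
      rng = np.random.default_rng(8)
      n_subjects = 30
      fc = np.array([seed_connectivity(rng.normal(size=200), rng.normal(size=200))
                     for _ in range(n_subjects)])
      sarcasm_scores = rng.normal(size=n_subjects)

      r, p = pearsonr(fc, sarcasm_scores)
      print(f"Brain-behaviour correlation: r = {r:.2f}, p = {p:.3f}")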

  2. Discrimination of phoneme length differences in word and sentence contexts

    NASA Astrophysics Data System (ADS)

    Kawai, Norimune; Carrell, Thomas

    2005-09-01

    The ability of listeners to discriminate phoneme duration differences within word and sentence contexts was measured. This investigation was part of a series of studies examining the audibility and perceptual importance of speech modifications produced by stuttering intervention techniques. Just noticeable differences (jnd's) of phoneme lengths were measured via the parameter estimation by sequential testing (PEST) task, an adaptive tracking procedure. The target phonemes were digitally manipulated to vary from normal (130 ms) to prolonged (210 ms) duration in 2-ms increments. In the first condition the phonemes were embedded in words. In the second condition the phonemes were embedded within words, which were further embedded in sentences. A four-interval forced-choice (4IAX) task was employed on each trial, and the PEST procedure determined the duration at which each listener correctly detected a difference between the normal duration and the test duration 71% of the time. The results revealed that listeners were able to reliably discriminate approximately 15-ms differences in the word context and 10-ms differences in the sentence context. An independent t-test showed the difference in discriminability between word and sentence contexts to be significant. These results indicate that duration differences were better perceived within a sentence context.
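
    The record above tracks a 71% correct point with PEST, an adaptive procedure whose step rules are more elaborate than can be reproduced from the abstract. As a hedged illustration only, the sketch below simulates a simpler transformed up-down (2-down/1-up) staircase, which converges near 70.7% correct; the simulated observer, step size, and stopping rule are hypothetical stand-ins, not the study's actual parameters.

      import random

      def staircase_jnd_ms(base_ms=130.0, start_delta_ms=40.0, true_jnd_ms=12.0,
                           step_ms=2.0, n_reversals=10, seed=1):
          """Simulate a 2-down/1-up duration-discrimination staircase.

          Two correct responses in a row make the duration increment smaller,
          one error makes it larger, so the track converges near 70.7% correct.
          The observer model below is hypothetical: its accuracy rises with the
          increment relative to an assumed true jnd.
          """
          rng = random.Random(seed)
          delta = start_delta_ms
          streak = 0
          reversals = []
          last_direction = None            # -1 = got harder, +1 = got easier

          while len(reversals) < n_reversals:
              p_correct = 0.5 + 0.5 * min(1.0, delta / (2.0 * true_jnd_ms))
              if rng.random() < p_correct:
                  streak += 1
                  if streak < 2:
                      continue
                  streak, direction = 0, -1
                  delta = max(step_ms, delta - step_ms)
              else:
                  streak, direction = 0, +1
                  delta += step_ms
              if last_direction is not None and direction != last_direction:
                  reversals.append(delta)
              last_direction = direction

          # Threshold estimate: mean increment over the last few reversals.
          return sum(reversals[-6:]) / len(reversals[-6:])

      print(f"Estimated jnd: {staircase_jnd_ms():.1f} ms above the 130 ms base")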

  3. Context-Dependent Duration Signals in the Primate Prefrontal Cortex

    PubMed Central

    Genovesio, Aldo; Seitz, Lucia K.; Tsujimoto, Satoshi; Wise, Steven P.

    2016-01-01

    The activity of some prefrontal (PF) cortex neurons distinguishes short from long time intervals. Here, we examined whether this property reflected a general timing mechanism or one dependent on behavioral context. In one task, monkeys discriminated the relative duration of 2 stimuli; in the other, they discriminated the relative distance of 2 stimuli from a fixed reference point. Both tasks had a pre-cue period (interval 1) and a delay period (interval 2) with no discriminant stimulus. Interval 1 elapsed before the presentation of the first discriminant stimulus, and interval 2 began after that stimulus. Both intervals had durations of either 400 or 800 ms. Most PF neurons distinguished short from long durations in one task or interval, but not in the others. When neurons did signal something about duration for both intervals, they did so in an uncorrelated or weakly correlated manner. These results demonstrate a high degree of context dependency in PF time processing. The PF, therefore, does not appear to signal durations abstractly, as would be expected of a general temporal encoder, but instead does so in a highly context-dependent manner, both within and between tasks. PMID:26209845

  4. Auditory evoked potentials in patients with major depressive disorder measured by Emotiv system.

    PubMed

    Wang, Dongcui; Mo, Fongming; Zhang, Yangde; Yang, Chao; Liu, Jun; Chen, Zhencheng; Zhao, Jinfeng

    2015-01-01

    In a previous study (unpublished), the Emotiv headset was validated for capturing event-related potentials (ERPs) from normal subjects. In the present follow-up study, the signal quality of the Emotiv headset was tested by the accuracy of discriminating Major Depressive Disorder (MDD) patients from normal subjects. ERPs of 22 MDD patients and 15 normal subjects were elicited by an auditory oddball task, and the amplitudes of the N1, N2 and P3 ERP components were specifically analyzed. The ERP features were statistically investigated. The Emotiv headset was found to be capable of detecting the abnormal N1, N2 and P3 components in MDD patients. The Relief-F algorithm was applied to all features for feature selection. The selected features were then input to a linear discriminant analysis (LDA) classifier with leave-one-out cross-validation to characterize the ERP features of MDD. The 127 possible combinations of the selected 7 ERP features were classified using LDA. The best classification accuracy achieved was 89.66%. These results suggest that MDD patients are distinguishable from normal subjects by ERPs measured with the Emotiv headset.
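
    The feature-selection-plus-classification pipeline described above (Relief-F ranking followed by LDA with leave-one-out cross-validation) can be sketched in a few lines. The sketch below is illustrative only: it uses randomly generated stand-in data in place of the study's ERP measurements, and a single-neighbour Relief-style weighting rather than the full Relief-F variant.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import LeaveOneOut, cross_val_score

      def relief_weights(X, y):
          """Single-neighbour Relief-style feature weights for two classes.

          A feature gains weight when it separates each sample from its nearest
          neighbour of the other class ("miss") and loses weight when it differs
          from the nearest neighbour of the same class ("hit").
          """
          n, d = X.shape
          w = np.zeros(d)
          for i in range(n):
              dist = np.linalg.norm(X - X[i], axis=1)
              dist[i] = np.inf
              same = np.where(y == y[i])[0]
              other = np.where(y != y[i])[0]
              hit = same[np.argmin(dist[same])]
              miss = other[np.argmin(dist[other])]
              w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / n
          return w

      # Hypothetical data: 37 subjects x 7 ERP features (e.g. N1/N2/P3 amplitudes
      # and latencies); labels 1 = MDD patient, 0 = control.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(37, 7))
      y = np.array([1] * 22 + [0] * 15)

      top = np.argsort(relief_weights(X, y))[::-1][:4]     # keep highest-weighted
      acc = cross_val_score(LinearDiscriminantAnalysis(), X[:, top], y,
                            cv=LeaveOneOut()).mean()
      print(f"Selected features: {top}, leave-one-out accuracy: {acc:.2%}")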

  5. Generalization of cross-modal stimulus equivalence classes: operant processes as components in human category formation.

    PubMed Central

    Lane, S D; Clow, J K; Innis, A; Critchfield, T S

    1998-01-01

    This study employed a stimulus-class rating procedure to explore whether stimulus equivalence and stimulus generalization can combine to promote the formation of open-ended categories incorporating cross-modal stimuli. A pretest of simple auditory discrimination indicated that subjects (college students) could discriminate among a range of tones used in the main study. Before beginning the main study, 10 subjects learned to use a rating procedure for categorizing sets of stimuli as class consistent or class inconsistent. After completing conditional discrimination training with new stimuli (shapes and tones), the subjects demonstrated the formation of cross-modal equivalence classes. Subsequently, the class-inclusion rating procedure was reinstituted, this time with cross-modal sets of stimuli drawn from the equivalence classes. On some occasions, the tones of the equivalence classes were replaced by novel tones. The probability that these novel sets would be rated as class consistent was generally a function of the auditory distance between the novel tone and the tone that was explicitly included in the equivalence class. These data extend prior work on generalization of equivalence classes, and support the role of operant processes in human category formation. PMID:9821680

  6. Time discrimination deficits in schizophrenia patients with first-rank (passivity) symptoms.

    PubMed

    Waters, Flavie; Jablensky, Assen

    2009-05-15

    Schizophrenia patients with first-rank (passivity) symptoms (FRS) report a loss of clear boundaries between the self and others and that their thoughts and actions are controlled by external forces. One of the more widely accepted explanatory models of FRS suggests a dysfunction in the 'forward model' system, whose role consists in predicting the sensory consequences of actions [Frith, C., 2006. The neural basis of hallucinations and delusions. Comptes Rendus Biologies 328, 169-175.]. There has been recent interest in the importance of timing precision underlying both the functioning of the forward model, and in processes contributing to the mechanisms of self-recognition [Haggard, P., Martin, F., Taylor-Clarke, M., Jeannerod, M., Franck, N., 2003. Awareness of action in schizophrenia. Neuroreport 14, 1081-1085.]. In the current study, we examined whether schizophrenia patients with FRS have a time perception impairment, using an auditory discrimination task requiring judgments of temporal intervals. Thirty-five schizophrenia patients (15 with, and 20 without, FRS), and 16 non-clinical controls completed the task. The results showed that patients with FRS experienced time differently by underestimating the duration of time intervals. Given the role of timing in shaping sensory awareness and in the formation of causal mental associations, a breakdown in timing mechanisms may affect the processes relating to the perceived control of actions and mental events, leading to disturbances of self-recognition in FRS.

  7. Using EEG to Discriminate Cognitive Workload and Performance Based on Neural Activation and Connectivity

    DTIC Science & Technology

    2016-05-31

    auditory working memory task to vary cognitive workload by altering the number of digits held in memory during the simultaneous retention of a sentence...in memory. Cognitive efficacy is assessed based on accuracy in recalling digits from memory. A Gaussian classifier is used to discriminate cognitive...effectiveness of cognition under the existing load. One major factor that impacts cognitive load is the amount of working memory required in a task
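
    The snippet above says only that a Gaussian classifier discriminates workload levels; the exact features and classifier variant are not recoverable from this record. Purely as an illustration of the approach, the sketch below trains a Gaussian naive Bayes classifier on hypothetical EEG band-power features for low versus high digit-load trials.

      import numpy as np
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import cross_val_score

      # Hypothetical per-trial EEG features: theta and alpha band power at one
      # frontal and one parietal site. Label 0 = low digit load, 1 = high load.
      # Frontal theta tends to rise and alpha to fall with load, which is the
      # pattern simulated here.
      rng = np.random.default_rng(2)
      low = rng.normal([4.0, 6.0, 3.5, 5.5], 1.0, size=(60, 4))
      high = rng.normal([5.0, 5.0, 4.5, 4.5], 1.0, size=(60, 4))
      X = np.vstack([low, high])
      y = np.r_[np.zeros(60), np.ones(60)]

      acc = cross_val_score(GaussianNB(), X, y, cv=5).mean()
      print(f"Cross-validated workload discrimination accuracy: {acc:.2%}")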

  8. Time on your hands: Perceived duration of sensory events is biased toward concurrent actions.

    PubMed

    Yon, Daniel; Edey, Rosanna; Ivry, Richard B; Press, Clare

    2017-02-01

    Perceptual systems must rapidly generate accurate representations of the world from sensory inputs that are corrupted by internal and external noise. We can typically obtain more veridical representations by integrating information from multiple channels, but this integration can lead to biases when inputs are, in fact, not from the same source. Although a considerable amount is known about how different sources of information are combined to influence what we perceive, it is not known whether temporal features are combined. It is vital to address this question given the divergent predictions made by different models of cue combination and time perception concerning the plausibility of cross-modal temporal integration, and the implications that such integration would have for research programs in action control and social cognition. Here we present four experiments investigating the influence of movement duration on the perceived duration of an auditory tone. Participants either explicitly (Experiments 1-2) or implicitly (Experiments 3-4) produced hand movements of shorter or longer durations, while judging the duration of a concurrently presented tone (500-950 ms in duration). Across all experiments, judgments of tone duration were attracted toward the duration of executed movements (i.e., tones were perceived to be longer when executing a movement of longer duration). Our results demonstrate that temporal information associated with movement biases perceived auditory duration, placing important constraints on theories modeling cue integration for state estimation, as well as models of time perception, action control and social cognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
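
    The divergent model predictions mentioned above can be made concrete with the standard reliability-weighted (maximum-likelihood) cue-combination rule, under which the perceived tone duration is pulled toward the movement duration in proportion to the cues' relative reliabilities. The sketch below illustrates that rule with hypothetical durations and noise levels; it is not the specific model fitted in the study.

      def fuse_durations(tone_ms, tone_sd, move_ms, move_sd):
          """Reliability-weighted (maximum-likelihood) fusion of two duration cues.

          Each cue is weighted by its reliability (inverse variance), so the fused
          estimate is pulled toward the more reliable cue and has lower variance
          than either cue alone.
          """
          w_tone = 1.0 / tone_sd ** 2
          w_move = 1.0 / move_sd ** 2
          fused = (w_tone * tone_ms + w_move * move_ms) / (w_tone + w_move)
          fused_sd = (1.0 / (w_tone + w_move)) ** 0.5
          return fused, fused_sd

      # Hypothetical numbers: a 700 ms tone judged while making a 900 ms movement.
      est, sd = fuse_durations(tone_ms=700, tone_sd=80, move_ms=900, move_sd=120)
      print(f"Fused duration estimate: {est:.0f} ms (sd {sd:.0f} ms)")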

  9. Neurophysiological and Behavioral Responses of Mandarin Lexical Tone Processing

    PubMed Central

    Yu, Yan H.; Shafer, Valerie L.; Sussman, Elyse S.

    2017-01-01

    Language experience enhances discrimination of speech contrasts at a behavioral-perceptual level, as well as at a pre-attentive level, as indexed by event-related potential (ERP) mismatch negativity (MMN) responses. The enhanced sensitivity could be the result of changes in acoustic resolution and/or long-term memory representations of the relevant information in the auditory cortex. To examine these possibilities, we used a short (ca. 600 ms) vs. long (ca. 2,600 ms) interstimulus interval (ISI) in a passive, oddball discrimination task while obtaining ERPs. These ISI differences were used to test whether cross-linguistic differences in processing Mandarin lexical tone are a function of differences in acoustic resolution and/or differences in long-term memory representations. Bisyllabic nonword tokens that differed in lexical tone categories were presented using a passive listening multiple oddball paradigm. Behavioral discrimination and identification data were also collected. The ERP results revealed robust MMNs to both easy and difficult lexical tone differences for both groups at short ISIs. At long ISIs, there was either no change or an enhanced MMN amplitude for the Mandarin group, but reduced MMN amplitude for the English group. In addition, the Mandarin listeners showed a larger late negativity (LN) discriminative response than the English listeners for lexical tone contrasts in the long ISI condition. Mandarin speakers outperformed English speakers in the behavioral tasks, especially under the long ISI conditions with the more similar lexical tone pair. These results suggest that the acoustic correlates of lexical tone are fairly robust and easily discriminated at short ISIs, when the auditory sensory memory trace is strong. At longer ISIs, beyond 2.5 s, language-specific experience is necessary for robust discrimination. PMID:28321179
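
    The MMN amplitudes discussed above are conventionally obtained from the deviant-minus-standard difference wave, averaged over a post-stimulus window at a fronto-central electrode. The sketch below shows that computation on hypothetical single-electrode epochs; the electrode, window, and simulated data are placeholders rather than the study's actual parameters.

      import numpy as np

      def mmn_amplitude(standard_epochs, deviant_epochs, times_ms,
                        window_ms=(100, 250)):
          """Mean deviant-minus-standard difference in a post-stimulus window.

          Both epoch arrays are (n_trials, n_samples) from a single electrode
          (e.g. Fz). The 100-250 ms window is a placeholder; analysis windows
          differ across studies and conditions.
          """
          diff_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
          mask = (times_ms >= window_ms[0]) & (times_ms <= window_ms[1])
          return diff_wave[mask].mean()

      # Hypothetical data: 200 standard and 40 deviant trials, 300 samples
      # spanning -100 to 500 ms, with a simulated negativity around 170 ms.
      rng = np.random.default_rng(3)
      times = np.linspace(-100, 500, 300)
      standards = rng.normal(0.0, 2.0, size=(200, 300))
      deviants = rng.normal(0.0, 2.0, size=(40, 300)) \
                 - 1.5 * np.exp(-((times - 170.0) / 40.0) ** 2)

      print(f"MMN amplitude, 100-250 ms: {mmn_amplitude(standards, deviants, times):.2f} µV")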

  10. Media.

    ERIC Educational Resources Information Center

    Allen, Lee E., Ed.

    1974-01-01

    Intended for secondary English teachers, the materials and ideas presented here suggest ways to use media in the classroom in teaching visual and auditory discrimination while enlivening classes and motivating students. Contents include "Media Specialists Need Not Apply," which discusses the need for preparation of media educators with…

  11. Effects of phonological contrast on auditory word discrimination in children with and without reading disability: A magnetoencephalography (MEG) study

    PubMed Central

    Wehner, Daniel T.; Ahlfors, Seppo P.; Mody, Maria

    2007-01-01

    Poor readers perform worse than their normal reading peers on a variety of speech perception tasks, which may be linked to their phonological processing abilities. The purpose of the study was to compare the brain activation patterns of normal and impaired readers on speech perception to better understand the phonological basis in reading disability. Whole-head magnetoencephalography (MEG) was recorded as good and poor readers, 7-13 years of age, performed an auditory word discrimination task. We used an auditory oddball paradigm in which the ‘deviant’ stimuli (/bat/, /kat/, /rat/) differed in the degree of phonological contrast (1 vs. 3 features) from a repeated standard word (/pat/). Both good and poor readers responded more slowly to deviants that were phonologically similar compared to deviants that were phonologically dissimilar to the standard word. Source analysis of the MEG data using Minimum Norm Estimation (MNE) showed that compared to good readers, poor readers had reduced left-hemisphere activation to the most demanding phonological condition reflecting their difficulties with phonological processing. Furthermore, unlike good readers, poor readers did not show differences in activation as a function of the degree of phonological contrast. These results are consistent with a phonological account of reading disability. PMID:17675109
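
    The record above uses Minimum Norm Estimation (MNE) to localize the MEG responses. In practice this is done with a full forward model and noise covariance in a dedicated analysis package, but the core computation is the regularized L2 minimum-norm inverse. The sketch below applies that formula to a random toy leadfield purely to show the linear algebra; the regularization heuristic and dimensions are illustrative assumptions.

      import numpy as np

      def minimum_norm_estimate(leadfield, data, snr=3.0):
          """Regularized L2 minimum-norm inverse: s = L.T (L L.T + lam2 I)^-1 m.

          leadfield : (n_sensors, n_sources) forward model
          data      : (n_sensors, n_times) sensor data, assumed noise-whitened
          lam2 follows a simple trace-based heuristic; real pipelines derive the
          regularization from a measured noise covariance and an SNR estimate.
          """
          n_sensors = leadfield.shape[0]
          gram = leadfield @ leadfield.T
          lam2 = np.trace(gram) / n_sensors / snr ** 2
          inverse_operator = leadfield.T @ np.linalg.inv(gram + lam2 * np.eye(n_sensors))
          return inverse_operator @ data

      # Toy problem: 306 sensors, 2000 candidate sources, 100 time samples.
      rng = np.random.default_rng(4)
      L = rng.normal(size=(306, 2000))
      m = rng.normal(size=(306, 100))
      source_estimates = minimum_norm_estimate(L, m)
      print(source_estimates.shape)        # (2000, 100): one time course per source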

  12. What's what in auditory cortices?

    PubMed

    Retsa, Chrysa; Matusz, Pawel J; Schnupp, Jan W H; Murray, Micah M

    2018-08-01

    Distinct anatomical and functional pathways are postulated for analysing a sound's object-related ('what') and space-related ('where') information. It remains unresolved to what extent distinct or overlapping neural resources subserve specific object-related dimensions (i.e. who is speaking and what is being said can both be derived from the same acoustic input). To address this issue, we recorded high-density auditory evoked potentials (AEPs) while participants selectively attended and discriminated sounds according to their pitch, speaker identity, uttered syllable ('what' dimensions) or their location ('where'). Sound acoustics were held constant across blocks; the only manipulation involved the sound dimension that participants had to attend to. The task-relevant dimension was varied across blocks. AEPs from healthy participants were analysed within an electrical neuroimaging framework to differentiate modulations in response strength from modulations in response topography, the latter of which necessarily follow from changes in the configuration of underlying sources. There were no behavioural differences in discrimination of sounds across the 4 feature dimensions. As early as 90 ms post-stimulus onset, AEP topographies differed across 'what' conditions, supporting a functional sub-segregation within the auditory 'what' pathway. This study characterises the spatio-temporal dynamics of segregated, yet parallel, processing of multiple sound object-related feature dimensions when selective attention is directed to them. Copyright © 2018 Elsevier Inc. All rights reserved.
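
    In the electrical neuroimaging framework mentioned above, response strength is typically indexed by global field power (the spatial standard deviation across electrodes) and topography by the global map dissimilarity between strength-normalized maps. The sketch below computes both measures on hypothetical 64-channel scalp maps; it shows the standard formulas only, not the study's statistical pipeline.

      import numpy as np

      def global_field_power(scalp_map):
          """GFP of a single average-referenced map: std across electrodes."""
          return scalp_map.std()

      def topographic_dissimilarity(map_a, map_b):
          """Global map dissimilarity between two scalp maps.

          Maps are average-referenced and divided by their GFP, so the measure
          ignores overall strength; it ranges from 0 (identical topographies)
          to 2 (polarity-inverted topographies).
          """
          a = map_a - map_a.mean()
          b = map_b - map_b.mean()
          a = a / global_field_power(a)
          b = b / global_field_power(b)
          return np.sqrt(np.mean((a - b) ** 2))

      # Hypothetical 64-channel AEP maps at 90 ms for two 'what' conditions.
      rng = np.random.default_rng(5)
      pitch_map = rng.normal(size=64)
      speaker_map = pitch_map + rng.normal(scale=0.5, size=64)
      print(f"DISS at 90 ms: {topographic_dissimilarity(pitch_map, speaker_map):.2f}")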

  13. Speech processing in children with functional articulation disorders.

    PubMed

    Gósy, Mária; Horváth, Viktória

    2015-03-01

    This study explored auditory speech processing and comprehension abilities in 5-8-year-old monolingual Hungarian children with functional articulation disorders (FADs) and their typically developing peers. Our main hypothesis was that children with FAD would show co-existing auditory speech processing disorders, with different levels of these skills depending on the nature of the receptive processes. The tasks included (i) sentence and non-word repetitions, (ii) non-word discrimination and (iii) sentence and story comprehension. Results suggest that the auditory speech processing of children with FAD is underdeveloped compared with that of typically developing children, and largely varies across task types. In addition, there are differences between children with FAD and controls in all age groups from 5 to 8 years. Our results have several clinical implications.

  14. Auditory stream segregation in children with Asperger syndrome

    PubMed Central

    Lepistö, T.; Kuitunen, A.; Sussman, E.; Saalasti, S.; Jansson-Verkasalo, E.; Nieminen-von Wendt, T.; Kujala, T.

    2009-01-01

    Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related brain potentials were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically developed for studying stream segregation. Differences in the amplitudes of ERP components were found between groups only in the stream segregation conditions and not for simple feature discrimination. The results indicated that children with AS have difficulties in segregating concurrent sound streams, which ultimately may contribute to the difficulties in speech-in-noise perception. PMID:19751798

  15. Analysis of impact noise induced by hitting of titanium head golf driver.

    PubMed

    Kim, Young Ho; Kim, Young Chul; Lee, Jun Hee; An, Yong-Hwi; Park, Kyung Tae; Kang, Kyung Min; Kang, Yeon June

    2014-11-01

    The hitting of a titanium head golf driver against a golf ball creates a short-duration, high-frequency impact noise. We analyzed the spectra of these impact noises and evaluated the auditory hazards from exposure to the noises. Noises made by 10 titanium head golf drivers with five maximum hits were collected, and the spectra of the pure impact sounds were studied using a noise analysis program. The noise was measured at 1.7 m (position A) and 3.4 m (position B) from the hitting point in front of the hitter and at 3.4 m (position C) behind the hitting point. Average time duration was measured and auditory risk units (ARUs) at position A were calculated using the Auditory Hazard Assessment Algorithm for Humans. The average peak levels at position A were 119.9 dBA at the sound pressure level (SPL) peak and 100.0 dBA at the overall octave level. The average peak levels (SPL and overall octave level) at position B were 111.6 and 96.5 dBA, respectively, and at position C were 111.5 and 96.7 dBA, respectively. The average time duration and ARUs measured at position A were 120.6 ms and 194.9 units, respectively. Although impact noises made by titanium head golf drivers showed relatively low ARUs, individuals who play golf frequently may be susceptible to hearing loss due to repeated exposure to this intense, short-duration, high-frequency impact noise. Unprotected exposure to impact noises should be limited to prevent cochleovestibular disorders.
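
    The peak levels reported above are A-weighted sound pressure levels referenced to 20 µPa. The sketch below shows only the basic peak-SPL formula applied to a hypothetical impact waveform; it omits the A-weighting filter, microphone calibration, and the AHAAH-based ARU calculation used in the study.

      import numpy as np

      P_REF = 20e-6        # reference pressure, 20 micropascals

      def peak_spl_db(pressure_pa):
          """Unweighted peak sound pressure level in dB re 20 µPa."""
          return 20.0 * np.log10(np.max(np.abs(pressure_pa)) / P_REF)

      # Hypothetical 120 ms impact transient: a decaying 4 kHz burst with a peak
      # pressure near 20 Pa, i.e. roughly 120 dB peak SPL before any weighting.
      fs = 48000
      t = np.arange(0, 0.120, 1.0 / fs)
      waveform = 20.0 * np.exp(-t / 0.01) * np.sin(2 * np.pi * 4000.0 * t)
      print(f"Peak SPL: {peak_spl_db(waveform):.1f} dB")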

  16. Developmental changes in automatic rule-learning mechanisms across early childhood.

    PubMed

    Mueller, Jutta L; Friederici, Angela D; Männel, Claudia

    2018-06-27

    Infants' ability to learn complex linguistic regularities from early on has been revealed by electrophysiological studies indicating that 3-month-olds, but not adults, can automatically detect non-adjacent dependencies between syllables. While different ERP responses in adults and infants suggest that both linguistic rule learning and its link to basic auditory processing undergo developmental changes, systematic investigations of the developmental trajectories are scarce. In the present study, we assessed 2- and 4-year-olds' ERP indicators of pitch discrimination and linguistic rule learning in a syllable-based oddball design. To test for the relation between auditory discrimination and rule learning, ERP responses to pitch changes were used as a predictor for potential linguistic rule-learning effects. Results revealed that 2-year-olds, but not 4-year-olds, showed ERP markers of rule learning. Although 2-year-olds' rule learning was not dependent on differences in pitch perception, 4-year-old children demonstrated a dependency, such that those children who showed more pronounced responses to pitch changes still showed an effect of rule learning. These results narrow down the developmental decline of the ability for automatic linguistic rule learning to the age between 2 and 4 years and, moreover, point towards a strong modification of this change by auditory processes. At an age when the ability for automatic linguistic rule learning phases out, rule learning can still be observed in children with enhanced auditory responses. The observed interrelations are plausible causes of age-of-acquisition effects and inter-individual differences in language learning. © 2018 John Wiley & Sons Ltd.

  17. The ability for cocaine and cocaine-associated cues to compete for attention

    PubMed Central

    Pitchers, Kyle K.; Wood, Taylor R.; Skrzynski, Cari J.; Robinson, Terry E.; Sarter, Martin

    2017-01-01

    In humans, reward cues, including drug cues in addicts, are especially effective in biasing attention towards them, so much so that they can disrupt ongoing task performance. It is not known, however, whether this happens in rats. To address this question, we developed a behavioral paradigm to assess the capacity of an auditory drug (cocaine) cue to evoke cocaine-seeking behavior, thus distracting thirsty rats from performing a well-learned sustained attention task (SAT) to obtain a water reward. First, it was determined that an auditory cocaine cue (tone-CS) reinstated drug-seeking equally in sign-trackers (STs) and goal-trackers (GTs), which otherwise vary in their propensity to attribute incentive salience to a localizable drug cue. Next, we tested the ability of an auditory cocaine cue to disrupt performance on the SAT in STs and GTs. Rats were trained to self-administer cocaine intravenously using an Intermittent Access self-administration procedure known to produce a progressive increase in motivation for cocaine, escalation of intake, and strong discriminative stimulus control over drug-seeking behavior. When presented alone, the auditory discriminative stimulus elicited cocaine-seeking behavior while rats were performing the SAT, but it was not sufficiently disruptive to impair SAT performance. In contrast, if cocaine was available in the presence of the cue, or when administered non-contingently, SAT performance was severely disrupted. We suggest that performance on a relatively automatic, stimulus-driven task, such as the basic version of the SAT used here, may be difficult to disrupt with a drug cue alone. A task that requires more top-down cognitive control may be needed. PMID:27890441

  18. Brain activity associated with selective attention, divided attention and distraction.

    PubMed

    Salo, Emma; Salmela, Viljami; Salmi, Juha; Numminen, Jussi; Alho, Kimmo

    2017-06-01

    Top-down controlled selective or divided attention to sounds and visual objects, as well as bottom-up triggered attention to auditory and visual distractors, has been widely investigated. However, no study has systematically compared brain activations related to all these types of attention. To this end, we used functional magnetic resonance imaging (fMRI) to measure brain activity in participants performing a tone pitch or a foveal grating orientation discrimination task, or both, distracted by novel sounds not sharing frequencies with the tones or by extrafoveal visual textures. To force focusing of attention to tones or gratings, or both, task difficulty was kept constantly high with an adaptive staircase method. A whole brain analysis of variance (ANOVA) revealed fronto-parietal attention networks for both selective auditory and visual attention. A subsequent conjunction analysis indicated partial overlaps of these networks. However, like some previous studies, the present results also suggest segregation of prefrontal areas involved in the control of auditory and visual attention. The ANOVA also suggested, and another conjunction analysis confirmed, an additional activity enhancement in the left middle frontal gyrus related to divided attention supporting the role of this area in top-down integration of dual task performance. Distractors expectedly disrupted task performance. However, contrary to our expectations, activations specifically related to the distractors were found only in the auditory and visual cortices. This suggests gating of the distractors from further processing perhaps due to strictly focused attention in the current demanding discrimination tasks. Copyright © 2017 Elsevier B.V. All rights reserved.
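
    The overlap of the auditory and visual attention networks described above was established with conjunction analyses. The underlying logic (a voxel counts as shared only if it is significant in every contrast) can be expressed as a minimum-statistic conjunction. The sketch below applies that logic to hypothetical t-maps; the study's actual analyses were run in a standard fMRI package with its own thresholding and cluster correction.

      import numpy as np

      def conjunction_map(t_maps, t_threshold):
          """Minimum-statistic conjunction over several statistical maps.

          A voxel survives only if it exceeds the threshold in every contrast,
          i.e. the minimum t across contrasts exceeds the threshold.
          """
          return np.stack(t_maps).min(axis=0) > t_threshold

      # Hypothetical whole-brain t-maps for the auditory- and visual-attention
      # contrasts, on a 40 x 48 x 40 voxel grid.
      rng = np.random.default_rng(6)
      t_auditory = rng.normal(1.0, 2.0, size=(40, 48, 40))
      t_visual = rng.normal(1.0, 2.0, size=(40, 48, 40))
      shared = conjunction_map([t_auditory, t_visual], t_threshold=3.1)
      print(f"Voxels active in both attention contrasts: {int(shared.sum())}")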

  19. Impairing the useful field of view in natural scenes: Tunnel vision versus general interference.

    PubMed

    Ringer, Ryan V; Throneburg, Zachary; Johnson, Aaron P; Kramer, Arthur F; Loschky, Lester C

    2016-01-01

    A fundamental issue in visual attention is the relationship between the useful field of view (UFOV), the region of visual space where information is encoded within a single fixation, and eccentricity. A common assumption is that impairing attentional resources reduces the size of the UFOV (i.e., tunnel vision). However, most research has not accounted for eccentricity-dependent changes in spatial resolution, potentially conflating fixed visual properties with flexible changes in visual attention. Williams (1988, 1989) argued that foveal loads are necessary to reduce the size of the UFOV, producing tunnel vision. Without a foveal load, it is argued that the attentional decrement is constant across the visual field (i.e., general interference). However, other research asserts that auditory working memory (WM) loads produce tunnel vision. To date, foveal versus auditory WM loads have not been compared to determine if they differentially change the size of the UFOV. In two experiments, we tested the effects of a foveal (rotated L vs. T discrimination) task and an auditory WM (N-back) task on an extrafoveal (Gabor) discrimination task. Gabor patches were scaled for size and processing time to produce equal performance across the visual field under single-task conditions, thus removing the confound of eccentricity-dependent differences in visual sensitivity. The results showed that although both foveal and auditory loads reduced Gabor orientation sensitivity, only the foveal load interacted with retinal eccentricity to produce tunnel vision, clearly demonstrating task-specific changes to the form of the UFOV. This has theoretical implications for understanding the UFOV.
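
    The extrafoveal stimuli in the study above were Gabor patches, i.e. sinusoidal gratings windowed by a Gaussian envelope, scaled in size and processing time with eccentricity. The sketch below generates such patches from the standard Gabor formula; the sizes, wavelengths, and scaling shown are illustrative values, not the calibrated parameters used in the experiments.

      import numpy as np

      def gabor_patch(size_px, wavelength_px, sigma_px, orientation_deg, phase_deg=0.0):
          """Sinusoidal grating under a Gaussian envelope, values in [-1, 1]."""
          half = size_px // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
          theta = np.deg2rad(orientation_deg)
          x_rot = x * np.cos(theta) + y * np.sin(theta)
          grating = np.cos(2.0 * np.pi * x_rot / wavelength_px + np.deg2rad(phase_deg))
          envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma_px ** 2))
          return grating * envelope

      # Illustrative scaling: a small patch near fixation and a larger, coarser
      # patch for a peripheral location (the study's actual scaling was calibrated
      # to equate single-task performance across eccentricity).
      near = gabor_patch(size_px=64, wavelength_px=8, sigma_px=12, orientation_deg=45)
      far = gabor_patch(size_px=192, wavelength_px=24, sigma_px=36, orientation_deg=45)
      print(near.shape, far.shape)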

  20. Degraded speech sound processing in a rat model of fragile X syndrome

    PubMed Central

    Engineer, Crystal T.; Centanni, Tracy M.; Im, Kwok W.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Kilgard, Michael P.

    2014-01-01

    Fragile X syndrome is the most common inherited form of intellectual disability and the leading genetic cause of autism. Impaired phonological processing in fragile X syndrome interferes with the development of language skills. Although auditory cortex responses are known to be abnormal in fragile X syndrome, it is not clear how these differences impact speech sound processing. This study provides the first evidence that the cortical representation of speech sounds is impaired in Fmr1 knockout rats, despite normal speech discrimination behavior. Evoked potentials and spiking activity in response to speech sounds, noise burst trains, and tones were significantly degraded in primary auditory cortex, anterior auditory field and the ventral auditory field. Neurometric analysis of speech evoked activity using a pattern classifier confirmed that activity in these fields contains significantly less information about speech sound identity in Fmr1 knockout rats compared to control rats. Responses were normal in the posterior auditory field, which is associated with sound localization. The greatest impairment was observed in the ventral auditory field, which is related to emotional regulation. Dysfunction in the ventral auditory field may contribute to poor emotional regulation in fragile X syndrome and may help explain the observation that later auditory evoked responses are more disturbed in fragile X syndrome compared to earlier responses. Rodent models of fragile X syndrome are likely to prove useful for understanding the biological basis of fragile X syndrome and for testing candidate therapies. PMID:24713347
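
    The neurometric analysis described above feeds cortical responses to a pattern classifier to ask how much information about speech sound identity they carry. A common form of such a classifier assigns each single-trial response to the speech sound whose average (template) response it most closely matches. The sketch below implements that nearest-template idea on hypothetical binned spike counts; the bin size, distance metric, and data are stand-ins rather than the published analysis parameters.

      import numpy as np

      def neurometric_classify(templates, trial_psth):
          """Assign a single trial to the speech sound with the closest template.

          templates maps each sound label to its mean binned firing-rate pattern;
          the trial is labelled by the template at the smallest Euclidean distance.
          """
          labels = list(templates)
          distances = [np.linalg.norm(trial_psth - templates[k]) for k in labels]
          return labels[int(np.argmin(distances))]

      # Hypothetical templates for two speech sounds (40 x 10 ms bins) and one
      # noisy single-trial response generated from the 'dad' template.
      rng = np.random.default_rng(7)
      templates = {"dad": rng.poisson(3.0, size=40).astype(float),
                   "sad": rng.poisson(3.0, size=40).astype(float)}
      trial = templates["dad"] + rng.normal(0.0, 1.0, size=40)
      print(neurometric_classify(templates, trial))      # expected: 'dad'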
