Sample records for complex listening environments

  1. Speech Understanding in Complex Listening Environments by Listeners Fit with Cochlear Implants

    ERIC Educational Resources Information Center

    Dorman, Michael F.; Gifford, Rene H.

    2017-01-01

    Purpose: The aim of this article is to summarize recent published and unpublished research from our 2 laboratories on improving speech understanding in complex listening environments by listeners fit with cochlear implants (CIs). Method: CI listeners were tested in 2 listening environments. One was a simulation of a restaurant with multiple,…

  2. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners with Bilateral and with Hearing-Preservation Cochlear Implants

    ERIC Educational Resources Information Center

    Loiselle, Louise H.; Dorman, Michael F.; Yost, William A.; Cook, Sarah J.; Gifford, Rene H.

    2016-01-01

    Purpose: To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Methods: Eleven bilateral listeners with MED-EL…

  3. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners With Bilateral and With Hearing-Preservation Cochlear Implants.

    PubMed

    Loiselle, Louise H; Dorman, Michael F; Yost, William A; Cook, Sarah J; Gifford, Rene H

    2016-08-01

    To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Eleven bilateral listeners with MED-EL (Durham, NC) CIs and 8 listeners with hearing-preservation CIs with symmetrical low-frequency acoustic hearing using the MED-EL or Cochlear device were evaluated using 2 tests designed to task binaural hearing: localization and a simulated cocktail party. Access to interaural cues for localization was constrained by the use of low-pass, high-pass, and wideband noise stimuli. Sound-source localization accuracy for listeners with bilateral CIs in response to the high-pass noise stimulus and sound-source localization accuracy for the listeners with hearing-preservation CIs in response to the low-pass noise stimulus did not differ significantly. Speech understanding in a cocktail party listening environment improved for all listeners when interaural cues, either interaural time difference or interaural level difference, were available. The findings of the current study indicate that similar degrees of benefit to sound-source localization and speech understanding in complex listening environments are possible with 2 very different rehabilitation strategies: the provision of bilateral CIs and the preservation of hearing.
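
    The low-pass/high-pass/wideband manipulation described above is straightforward to reproduce in software: interaural time cues are carried mainly by low frequencies and interaural level cues mainly by high frequencies, so filtering a common noise token isolates one cue or the other. Below is a minimal Python sketch; the 1.5-kHz cutoff and all names are illustrative assumptions, not values taken from the study.

      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      def make_noise_stimuli(fs=44100, dur=0.2, cutoff=1500.0, seed=0):
          """Generate wideband, low-pass, and high-pass noise bursts.

          Low-pass noise preserves access to interaural time differences
          (dominant below ~1.5 kHz); high-pass noise leaves mainly
          interaural level differences. The cutoff is an assumed value.
          """
          rng = np.random.default_rng(seed)
          wideband = rng.standard_normal(int(fs * dur))
          lp = sosfiltfilt(butter(4, cutoff, "low", fs=fs, output="sos"), wideband)
          hp = sosfiltfilt(butter(4, cutoff, "high", fs=fs, output="sos"), wideband)
          return {"wideband": wideband, "lowpass": lp, "highpass": hp}

      stimuli = make_noise_stimuli()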

  4. Development of a test battery for evaluating speech perception in complex listening environments.

    PubMed

    Brungart, Douglas S; Sheffield, Benjamin M; Kubli, Lina R

    2014-08-01

    In the real world, spoken communication occurs in complex environments that involve audiovisual speech cues, spatially separated sound sources, reverberant listening spaces, and other complicating factors that influence speech understanding. However, most clinical tools for assessing speech perception are based on simplified listening environments that do not reflect the complexities of real-world listening. In this study, speech materials from the QuickSIN speech-in-noise test by Killion, Niquette, Gudmundsen, Revit, and Banerjee [J. Acoust. Soc. Am. 116, 2395-2405 (2004)] were modified to simulate eight listening conditions spanning the range of auditory environments listeners encounter in everyday life. The standard QuickSIN test method was used to estimate 50% speech reception thresholds (SRT50) in each condition. A method of adjustment procedure was also used to obtain subjective estimates of the lowest signal-to-noise ratio (SNR) where the listeners were able to understand 100% of the speech (SRT100) and the highest SNR where they could detect the speech but could not understand any of the words (SRT0). The results show that the modified materials maintained most of the efficiency of the QuickSIN test procedure while capturing performance differences across listening conditions comparable to those reported in previous studies that have examined the effects of audiovisual cues, binaural cues, room reverberation, and time compression on the intelligibility of speech.
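
    For readers unfamiliar with the scoring behind the SRT50 estimates mentioned above: a standard QuickSIN list presents six sentences at SNRs from 25 to 0 dB in 5-dB steps, with five key words per sentence, and Killion et al. (2004) derive SNR-50 with a Spearman-Karber rule, SNR-50 = 27.5 minus total key words correct. A minimal sketch of that rule (the example responses are hypothetical):

      def quicksin_srt50(words_correct_per_sentence):
          """Estimate SNR-50 from one QuickSIN list.

          Six sentences are presented at 25, 20, 15, 10, 5, and 0 dB SNR,
          with five key words per sentence. Per Killion et al. (2004),
          SNR-50 = 27.5 - total key words correct (out of 30), and SNR loss
          is expressed relative to the ~2 dB SNR-50 of normal hearers.
          """
          assert len(words_correct_per_sentence) == 6
          total = sum(words_correct_per_sentence)
          snr50 = 27.5 - total
          snr_loss = snr50 - 2.0
          return snr50, snr_loss

      # Hypothetical listener: 5, 5, 4, 3, 2, 1 key words correct.
      print(quicksin_srt50([5, 5, 4, 3, 2, 1]))  # -> (7.5, 5.5)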

  5. Age-Related Changes in Objective and Subjective Speech Perception in Complex Listening Environments

    ERIC Educational Resources Information Center

    Helfer, Karen S.; Merchant, Gabrielle R.; Wasiuk, Peter A.

    2017-01-01

    Purpose: A frequent complaint by older adults is difficulty communicating in challenging acoustic environments. The purpose of this work was to review and summarize information about how speech perception in complex listening situations changes across the adult age range. Method: This article provides a review of age-related changes in speech…

  6. Exploiting Listener Gaze to Improve Situated Communication in Dynamic Virtual Environments.

    PubMed

    Garoufi, Konstantina; Staudte, Maria; Koller, Alexander; Crocker, Matthew W

    2016-09-01

    Beyond the observation that both speakers and listeners rapidly inspect the visual targets of referring expressions, it has been argued that such gaze may constitute part of the communicative signal. In this study, we investigate whether a speaker may, in principle, exploit listener gaze to improve communicative success. In the context of a virtual environment where listeners follow computer-generated instructions, we provide two kinds of support for this claim. First, we show that listener gaze provides a reliable real-time index of understanding even in dynamic and complex environments, and on a per-utterance basis. Second, we show that a language generation system that uses listener gaze to provide rapid feedback improves overall task performance in comparison with two systems that do not use gaze. Aside from demonstrating the utility of listener gaze in situated communication, our findings open the door to new methods for developing and evaluating multi-modal models of situated interaction.

  7. An examination of speech reception thresholds measured in a simulated reverberant cafeteria environment.

    PubMed

    Best, Virginia; Keidser, Gitte; Buchholz, Jörg M; Freeston, Katrina

    2015-01-01

    There is increasing demand in the hearing research community for the creation of laboratory environments that better simulate challenging real-world listening environments. The hope is that the use of such environments for testing will lead to more meaningful assessments of listening ability, and better predictions about the performance of hearing devices. Here we present one approach for simulating a complex acoustic environment in the laboratory, and investigate the effect of transplanting a speech test into such an environment. Speech reception thresholds were measured in a simulated reverberant cafeteria, and in a more typical anechoic laboratory environment containing background speech babble. The participants were 46 listeners varying in age and hearing levels, including 25 hearing-aid wearers who were tested with and without their hearing aids. Reliable SRTs were obtained in the complex environment, but led to different estimates of performance and hearing-aid benefit from those measured in the standard environment. The findings provide a starting point for future efforts to increase the real-world relevance of laboratory-based speech tests.

  8. An examination of speech reception thresholds measured in a simulated reverberant cafeteria environment

    PubMed Central

    Best, Virginia; Keidser, Gitte; Buchholz, Jörg M.; Freeston, Katrina

    2016-01-01

    Objective There is increasing demand in the hearing research community for the creation of laboratory environments that better simulate challenging real-world listening environments. The hope is that the use of such environments for testing will lead to more meaningful assessments of listening ability, and better predictions about the performance of hearing devices. Here we present one approach for simulating a complex acoustic environment in the laboratory, and investigate the effect of transplanting a speech test into such an environment. Design Speech reception thresholds were measured in a simulated reverberant cafeteria, and in a more typical anechoic laboratory environment containing background speech babble. Study Sample The participants were 46 listeners varying in age and hearing levels, including 25 hearing-aid wearers who were tested with and without their hearing aids. Results Reliable SRTs were obtained in the complex environment, but led to different estimates of performance and hearing aid benefit from those measured in the standard environment. Conclusions The findings provide a starting point for future efforts to increase the real-world relevance of laboratory-based speech tests. PMID:25853616

  9. Sound Source Localization and Speech Understanding in Complex Listening Environments by Single-sided Deaf Listeners After Cochlear Implantation.

    PubMed

    Zeitler, Daniel M; Dorman, Michael F; Natale, Sarah J; Loiselle, Louise; Yost, William A; Gifford, Rene H

    2015-09-01

    To assess improvements in sound source localization and speech understanding in complex listening environments after unilateral cochlear implantation for single-sided deafness (SSD). Nonrandomized, open, prospective case series. Tertiary referral center. Nine subjects with a unilateral cochlear implant (CI) for SSD (SSD-CI) were tested. Reference groups for the task of sound source localization included young (n = 45) and older (n = 12) normal-hearing (NH) subjects and 27 bilateral CI (BCI) subjects. Unilateral cochlear implantation. Sound source localization was tested with 13 loudspeakers in a 180° arc in front of the subject. Speech understanding was tested with the subject seated in an 8-loudspeaker sound system arrayed in a 360-degree pattern. Directionally appropriate noise, originally recorded in a restaurant, was played from each loudspeaker. Speech understanding in noise was tested using the AzBio sentence test and sound source localization was quantified using root mean square error. All CI subjects showed poorer-than-normal sound source localization. SSD-CI subjects showed a bimodal distribution of scores: six subjects had scores near the mean of those obtained by BCI subjects, whereas three had scores just outside the 95th percentile of NH listeners. Speech understanding improved significantly in the restaurant environment when the signal was presented to the side of the CI. Cochlear implantation for SSD can offer improved speech understanding in complex listening environments and improved sound source localization in both children and adults. On tasks of sound source localization, SSD-CI patients typically perform as well as BCI patients and, in some cases, achieve scores at the upper boundary of normal performance.
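
    The root-mean-square (RMS) localization metric used above is simple to compute from matched arrays of target and response azimuths. A short Python sketch follows, with hypothetical trial data (the 13 loudspeakers spanning a 180° arc imply 15° spacing):

      import numpy as np

      def rms_localization_error(target_az, response_az):
          """RMS localization error in degrees, given per-trial target and
          response azimuths (e.g., loudspeaker positions on a frontal arc)."""
          target_az = np.asarray(target_az, dtype=float)
          response_az = np.asarray(response_az, dtype=float)
          return float(np.sqrt(np.mean((response_az - target_az) ** 2)))

      # Hypothetical trials on a 180-degree arc with 15-degree spacing.
      targets = [-90, -45, 0, 45, 90]
      responses = [-75, -45, 15, 30, 90]
      print(rms_localization_error(targets, responses))  # ~11.6 degrees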

  10. The influence of different native language systems on vowel discrimination and identification

    NASA Astrophysics Data System (ADS)

    Kewley-Port, Diane; Bohn, Ocke-Schwen; Nishi, Kanae

    2005-04-01

    The ability to identify the vowel sounds of a language reliably is dependent on the ability to discriminate between vowels at a more sensory level. This study examined how the complexity of the vowel systems of three native languages (L1) influenced listeners' perception of American English (AE) vowels. AE has a fairly complex vowel system with 11 monophthongs. In contrast, Japanese has only 5 spectrally different vowels, while Swedish has 9 and Danish has 12. Six listeners, with exposure of less than 4 months in English-speaking environments, participated from each L1. Their performance in two tasks was compared to 6 AE listeners. As expected, there were large differences in a linguistic identification task using 4 confusable AE low vowels. Japanese listeners performed quite poorly compared to listeners with more complex L1 vowel systems. Thresholds for formant discrimination for the 3 groups were very similar to those of native AE listeners. Thus it appears that sensory abilities for discriminating vowels are only slightly affected by native vowel systems, and that vowel confusions occur at a more central, linguistic level. [Work supported by funding from NIHDCD-02229 and the American-Scandinavian Foundation.]

  11. Prior exposure to a reverberant listening environment improves speech intelligibility in adult cochlear implant listeners.

    PubMed

    Srinivasan, Nirmal Kumar; Tobey, Emily A; Loizou, Philipos C

    2016-01-01

    The goal of this study is to investigate whether prior exposure to a reverberant listening environment improves speech intelligibility of adult cochlear implant (CI) users. Six adult CI users participated in this study. Speech intelligibility was measured in five different simulated reverberant listening environments with two different speech corpora. Within each listening environment, prior exposure was varied by either having the same environment across all trials (blocked presentation) or having a different environment from trial to trial (unblocked). Speech intelligibility decreased as reverberation time increased. Although substantial individual variability was observed, all CI listeners showed higher intelligibility in the blocked presentation condition as compared to the unblocked presentation condition for both speech corpora. Prior listening exposure to a reverberant listening environment improves speech intelligibility in adult CI listeners. Further research is required to understand the underlying mechanism of adaptation to the listening environment.

  12. Virtual environment display for a 3D audio room simulation

    NASA Astrophysics Data System (ADS)

    Chapin, William L.; Foster, Scott

    1992-06-01

    Recent developments in virtual 3D audio and synthetic aural environments have produced a complex acoustical room simulation. The acoustical simulation models a room with walls, ceiling, and floor of selected sound reflecting/absorbing characteristics and unlimited independent localizable sound sources. This non-visual acoustic simulation, implemented with 4 audio Convolvotrons™ by Crystal River Engineering and coupled to the listener with a Polhemus Isotrak™ tracking the listener's head position and orientation, and stereo headphones returning binaural sound, is quite compelling to most listeners with eyes closed. This immersive effect should be reinforced when properly integrated into a full, multi-sensory virtual environment presentation. This paper discusses the design of an interactive, visual virtual environment, complementing the acoustic model and specified to: 1) allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; 2) reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; 3) enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and 4) serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, is discussed through the development of four iterative configurations. The installed system implements a head-coupled, wide-angle, stereo-optic tracker/viewer and multi-computer simulation control. The portable demonstration system implements a head-mounted wide-angle, stereo-optic display, separate head and pointer electro-magnetic position trackers, a heterogeneous parallel graphics processing system, and object-oriented C++ program code.

  13. Good distractions: Testing the effects of listening to an audiobook on driving performance in simple and complex road environments.

    PubMed

    Nowosielski, Robert J; Trick, Lana M; Toxopeus, Ryan

    2018-02-01

    Distracted driving (driving while performing a secondary task) causes many collisions. Most research on distracted driving has focused on operating a cell phone, but distracted driving can include eating while driving, conversing with passengers, or listening to music or audiobooks. Although the research has focused on the deleterious effects of distraction, there may be situations where distraction improves driving performance. Fatigue and boredom are also associated with collision risk, and it is possible that secondary tasks can help alleviate the effects of fatigue and boredom. Furthermore, it has been found that individuals with high levels of executive functioning, as measured by the OSPAN (Operation Span) task, show better driving while multitasking. In this study, licensed drivers were tested in a driving simulator (a car body surrounded by screens) that simulated simple or complex roads. Road complexity was manipulated by increasing traffic, scenery, and the number of curves in the drive. Participants either drove, or drove while listening to an audiobook. Driving performance was measured in terms of braking response time to hazards (HRT; the time required to brake in response to pedestrians or vehicles that suddenly emerged from the periphery into the path of the vehicle), as well as speed, standard deviation of speed, and standard deviation of lateral position (SDLP). Overall, braking times to hazards were higher on the complex drive than the simple one, and the effects of secondary tasks such as audiobooks were especially deleterious on the complex drive. In contrast, on the simple drive, driving while listening to an audiobook led to faster HRTs. We found evidence that individuals with high OSPAN scores had faster HRTs when listening to an audiobook. These results suggest that there are environmental and individual factors behind differences in the allocation of attention while listening to audiobooks while driving.

  14. Central Auditory Processing of Temporal and Spectral-Variance Cues in Cochlear Implant Listeners

    PubMed Central

    Pham, Carol Q.; Bremen, Peter; Shen, Weidong; Yang, Shi-Ming; Middlebrooks, John C.; Zeng, Fan-Gang; Mc Laughlin, Myles

    2015-01-01

    Cochlear implant (CI) listeners have difficulty understanding speech in complex listening environments. This deficit is thought to be largely due to peripheral encoding problems arising from current spread, which results in wide peripheral filters. In normal hearing (NH) listeners, central processing contributes to segregation of speech from competing sounds. We tested the hypothesis that basic central processing abilities are retained in post-lingually deaf CI listeners, but processing is hampered by degraded input from the periphery. In eight CI listeners, we measured auditory nerve compound action potentials to characterize peripheral filters. Then, we measured psychophysical detection thresholds in the presence of multi-electrode maskers placed either inside (peripheral masking) or outside (central masking) the peripheral filter. This was intended to distinguish peripheral from central contributions to signal detection. Introduction of temporal asynchrony between the signal and masker improved signal detection in both peripheral and central masking conditions for all CI listeners. Randomly varying components of the masker created spectral-variance cues, which seemed to benefit only two out of eight CI listeners. Contrastingly, the spectral-variance cues improved signal detection in all five NH listeners who listened to our CI simulation. Together these results indicate that widened peripheral filters significantly hamper central processing of spectral-variance cues but not of temporal cues in post-lingually deaf CI listeners. As indicated by two CI listeners in our study, however, post-lingually deaf CI listeners may retain some central processing abilities similar to NH listeners. PMID:26176553

  15. A Method for Assessing Auditory Spatial Analysis in Reverberant Multitalker Environments.

    PubMed

    Weller, Tobias; Best, Virginia; Buchholz, Jörg M; Young, Taegan

    2016-07-01

    Deficits in spatial hearing can have a negative impact on listeners' ability to orient in their environment and follow conversations in noisy backgrounds and may exacerbate the experience of hearing loss as a handicap. However, there are no good tools available for reliably capturing the spatial hearing abilities of listeners in complex acoustic environments containing multiple sounds of interest. The purpose of this study was to explore a new method to measure auditory spatial analysis in a reverberant multitalker scenario. This study was a descriptive case-control study. Ten listeners with normal hearing (NH) aged 20-31 yr and 16 listeners with hearing impairment (HI) aged 52-85 yr participated in the study. The latter group had symmetrical sensorineural hearing losses with a four-frequency average hearing loss of 29.7 dB HL. A large reverberant room was simulated using a loudspeaker array in an anechoic chamber. In this simulated room, 96 scenes comprising between one and six concurrent talkers at different locations were generated. Listeners were presented with 45-sec samples of each scene and were required to count, locate, and identify the gender of all talkers, using a graphical user interface on an iPad. Performance was evaluated in terms of correctly counting the sources and accuracy in localizing their direction. Listeners with NH were able to reliably analyze scenes with up to four simultaneous talkers, while most listeners with hearing loss demonstrated errors even with two talkers at a time. Localization performance decreased in both groups with increasing number of talkers and was significantly poorer in listeners with HI. Overall performance was significantly correlated with hearing loss. This new method appears to be useful for estimating spatial abilities in realistic multitalker scenes. The method is sensitive to the number of sources in the scene and to effects of sensorineural hearing loss. Further work will be needed to compare this method to more traditional single-source localization tests.

  16. An evaluation of the performance of two binaural beamformers in complex and dynamic multitalker environments.

    PubMed

    Best, Virginia; Mejia, Jorge; Freeston, Katrina; van Hoesel, Richard J; Dillon, Harvey

    2015-01-01

    Binaural beamformers are super-directional hearing aids created by combining microphone outputs from each side of the head. While they offer substantial improvements in SNR over conventional directional hearing aids, the benefits (and possible limitations) of these devices in realistic, complex listening situations have not yet been fully explored. In this study we evaluated the performance of two experimental binaural beamformers. Testing was carried out using a horizontal loudspeaker array. Background noise was created using recorded conversations. Performance measures included speech intelligibility, localization in noise, acceptable noise level, subjective ratings, and a novel dynamic speech intelligibility measure. Participants were 27 listeners with bilateral hearing loss, fitted with BTE prototypes that could be switched between conventional directional or binaural beamformer microphone modes. Relative to the conventional directional microphones, both binaural beamformer modes were generally superior for tasks involving fixed frontal targets, but not always for situations involving dynamic target locations. Binaural beamformers show promise for enhancing listening in complex situations when the location of the source of interest is predictable.

  17. An evaluation of the performance of two binaural beamformers in complex and dynamic multitalker environments

    PubMed Central

    Best, Virginia; Mejia, Jorge; Freeston, Katrina; van Hoesel, Richard J.; Dillon, Harvey

    2016-01-01

    Objective Binaural beamformers are super-directional hearing aids created by combining microphone outputs from each side of the head. While they offer substantial improvements in SNR over conventional directional hearing aids, the benefits (and possible limitations) of these devices in realistic, complex listening situations have not yet been fully explored. In this study we evaluated the performance of two experimental binaural beamformers. Design Testing was carried out using a horizontal loudspeaker array. Background noise was created using recorded conversations. Performance measures included speech intelligibility, localisation in noise, acceptable noise level, subjective ratings, and a novel dynamic speech intelligibility measure. Study sample Participants were 27 listeners with bilateral hearing loss, fitted with BTE prototypes that could be switched between conventional directional or binaural beamformer microphone modes. Results Relative to the conventional directional microphones, both binaural beamformer modes were generally superior for tasks involving fixed frontal targets, but not always for situations involving dynamic target locations. Conclusions Binaural beamformers show promise for enhancing listening in complex situations when the location of the source of interest is predictable. PMID:26140298

  18. The relationship of speech intelligibility with hearing sensitivity, cognition, and perceived hearing difficulties varies for different speech perception tests

    PubMed Central

    Heinrich, Antje; Henshaw, Helen; Ferguson, Melanie A.

    2015-01-01

    Listeners vary in their ability to understand speech in noisy environments. Hearing sensitivity, as measured by pure-tone audiometry, can only partly explain these results, and cognition has emerged as another key concept. Although cognition relates to speech perception, the exact nature of the relationship remains to be fully understood. This study investigates how different aspects of cognition, particularly working memory and attention, relate to speech intelligibility for various tests. Perceptual accuracy of speech perception represents just one aspect of functioning in a listening environment. Activity and participation limits imposed by hearing loss, in addition to the demands of a listening environment, are also important and may be better captured by self-report questionnaires. Understanding how speech perception relates to self-reported aspects of listening forms the second focus of the study. Forty-four listeners aged between 50 and 74 years with mild sensorineural hearing loss were tested on speech perception tests differing in complexity from low (phoneme discrimination in quiet), to medium (digit triplet perception in speech-shaped noise) to high (sentence perception in modulated noise); cognitive tests of attention, memory, and non-verbal intelligence quotient; and self-report questionnaires of general health-related and hearing-specific quality of life. Hearing sensitivity and cognition related to intelligibility differently depending on the speech test: neither was important for phoneme discrimination, hearing sensitivity alone was important for digit triplet perception, and hearing and cognition together played a role in sentence perception. Self-reported aspects of auditory functioning were correlated with speech intelligibility to different degrees, with digit triplets in noise showing the richest pattern. The results suggest that intelligibility tests can vary in their auditory and cognitive demands and their sensitivity to the challenges that auditory environments pose on functioning. PMID:26136699

  19. Effects of reverberation and noise on speech intelligibility in normal-hearing and aided hearing-impaired listeners.

    PubMed

    Xia, Jing; Xu, Buye; Pentony, Shareka; Xu, Jingjing; Swaminathan, Jayaganesh

    2018-03-01

    Many hearing-aid wearers have difficulties understanding speech in reverberant noisy environments. This study evaluated the effects of reverberation and noise on speech recognition in normal-hearing listeners and hearing-impaired listeners wearing hearing aids. Sixteen typical acoustic scenes with different amounts of reverberation and various types of noise maskers were simulated using a loudspeaker array in an anechoic chamber. Results showed that, across all listening conditions, speech intelligibility of aided hearing-impaired listeners was poorer than normal-hearing counterparts. Once corrected for ceiling effects, the differences in the effects of reverberation on speech intelligibility between the two groups were much smaller. This suggests that, at least, part of the difference in susceptibility to reverberation between normal-hearing and hearing-impaired listeners was due to ceiling effects. Across both groups, a complex interaction between the noise characteristics and reverberation was observed on the speech intelligibility scores. Further fine-grained analyses of the perception of consonants showed that, for both listener groups, final consonants were more susceptible to reverberation than initial consonants. However, differences in the perception of specific consonant features were observed between the groups.

  20. Listening Logs for Extensive Listening in a Self-Regulated Environment

    ERIC Educational Resources Information Center

    Lee, You-Jin; Cha, Kyung-Whan

    2017-01-01

    Learner journals or diaries have been used in various educational contexts to motivate learning and learner reflection. This study examines how learner journals, especially listening logs for extensive listening in a self-regulated environment, affected university students' listening proficiency, and how the students reported on their listening…

  21. Listening Habits of iPod Users

    ERIC Educational Resources Information Center

    Epstein, Michael; Marozeau, Jeremy; Cleveland, Sandra

    2010-01-01

    Purpose: To estimate real-environment iPod listening levels for listeners in 4 environments to gain insight into whether average listeners receive dosages exceeding occupational noise exposure guidelines as a result of their listening habits. Method: The earbud outputs of iPods were connected directly into the inputs of a digital recorder to make…

  22. Source and listener directivity for interactive wave-based sound propagation.

    PubMed

    Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh

    2014-04-01

    We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
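
    The runtime step described above reduces to a small linear combination: one precomputed sound field per elementary SH source, weighted by the SH coefficients of the current source directivity, then auralized at the listener. The sketch below illustrates that weighted-sum stage under assumed array shapes; the function and variable names are illustrative, not taken from the paper.

      import numpy as np
      from scipy.signal import fftconvolve

      def render_at_listener(sh_weights, sh_fields, dry_signal):
          """Combine precomputed per-SH-source responses at runtime.

          sh_weights: (n_sh,) SH coefficients of the current directivity.
          sh_fields:  (n_sh, ir_len) precomputed impulse responses, one per
                      elementary SH source, at the listener position.
          dry_signal: anechoic source signal to propagate.
          """
          combined_ir = sh_weights @ sh_fields      # weighted sum of fields
          return fftconvolve(dry_signal, combined_ir)

      # Hypothetical sizes: order-2 SH basis (9 sources), 4800-tap responses.
      rng = np.random.default_rng(1)
      out = render_at_listener(rng.standard_normal(9),
                               rng.standard_normal((9, 4800)),
                               rng.standard_normal(48000))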

  23. Learning to Listen

    ERIC Educational Resources Information Center

    Safir, Shane

    2017-01-01

    How do school leaders navigate a complex change process? Simply put: They listen. This is the contention that Shane Safir puts forth in this article. She outlines five reasons for becoming a "listening leader": Listening helps leaders tune into and shift the dominant narrative; keep their finger on the pulse of complex change; stay true…

  24. Associations between speech understanding and auditory and visual tests of verbal working memory: effects of linguistic complexity, task, age, and hearing loss

    PubMed Central

    Smith, Sherri L.; Pichora-Fuller, M. Kathleen

    2015-01-01

    Listeners with hearing loss commonly report having difficulty understanding speech, particularly in noisy environments. Their difficulties could be due to auditory and cognitive processing problems. Performance on speech-in-noise tests has been correlated with reading working memory span (RWMS), a measure often chosen to avoid the effects of hearing loss. If the goal is to assess the cognitive consequences of listeners’ auditory processing abilities, however, then listening working memory span (LWMS) could be a more informative measure. Some studies have examined the effects of different degrees and types of masking on working memory, but less is known about the demands placed on working memory depending on the linguistic complexity of the target speech or the task used to measure speech understanding in listeners with hearing loss. Compared to RWMS, LWMS measures using different speech targets and maskers may provide a more ecologically valid approach. To examine the contributions of RWMS and LWMS to speech understanding, we administered two working memory measures (a traditional RWMS measure and a new LWMS measure), and a battery of tests varying in the linguistic complexity of the speech materials, the presence of babble masking, and the task. Participants were a group of younger listeners with normal hearing and two groups of older listeners with hearing loss (n = 24 per group). There was a significant group difference and a wider range in performance on LWMS than on RWMS. There was a significant correlation between both working memory measures only for the oldest listeners with hearing loss. Notably, there were only few significant correlations among the working memory and speech understanding measures. These findings suggest that working memory measures reflect individual differences that are distinct from those tapped by these measures of speech understanding. PMID:26441769

  25. Music Engineering as a Novel Strategy for Enhancing Music Enjoyment in the Cochlear Implant Recipient.

    PubMed

    Kohlberg, Gavriel D; Mancuso, Dean M; Chari, Divya A; Lalwani, Anil K

    2015-01-01

    Enjoyment of music remains an elusive goal following cochlear implantation. We test the hypothesis that reengineering music to reduce its complexity can enhance the listening experience for the cochlear implant (CI) listener. Normal hearing (NH) adults (N = 16) and CI listeners (N = 9) evaluated a piece of country music on three enjoyment modalities: pleasantness, musicality, and naturalness. Participants listened to the original version along with 20 modified, less complex, versions created by including subsets of the musical instruments from the original song. NH participants listened to the segments both with and without CI simulation processing. Compared to the original song, modified versions containing only 1-3 instruments were less enjoyable to the NH listeners but more enjoyable to the CI listeners and the NH listeners with CI simulation. Excluding vocals and including rhythmic instruments improved enjoyment for NH listeners with CI simulation but made no difference for CI listeners. Reengineering a piece of music to reduce its complexity has the potential to enhance music enjoyment for the cochlear implantee. Thus, in addition to improvements in software and hardware, engineering music specifically for the CI listener may be an alternative means to enhance their listening experience.

  26. Evidence for Neural Computations of Temporal Coherence in an Auditory Scene and Their Enhancement during Active Listening.

    PubMed

    O'Sullivan, James A; Shamma, Shihab A; Lalor, Edmund C

    2015-05-06

    The human brain has evolved to operate effectively in highly complex acoustic environments, segregating multiple sound sources into perceptually distinct auditory objects. A recent theory seeks to explain this ability by arguing that stream segregation occurs primarily due to the temporal coherence of the neural populations that encode the various features of an individual acoustic source. This theory has received support from both psychoacoustic and functional magnetic resonance imaging (fMRI) studies that use stimuli which model complex acoustic environments. Termed stochastic figure-ground (SFG) stimuli, they are composed of a "figure" and background that overlap in spectrotemporal space, such that the only way to segregate the figure is by computing the coherence of its frequency components over time. Here, we extend these psychoacoustic and fMRI findings by using the greater temporal resolution of electroencephalography to investigate the neural computation of temporal coherence. We present subjects with modified SFG stimuli wherein the temporal coherence of the figure is modulated stochastically over time, which allows us to use linear regression methods to extract a signature of the neural processing of this temporal coherence. We do this under both active and passive listening conditions. Our findings show an early effect of coherence during passive listening, lasting from ∼115 to 185 ms post-stimulus. When subjects are actively listening to the stimuli, these responses are larger and last longer, up to ∼265 ms. These findings provide evidence for early and preattentive neural computations of temporal coherence that are enhanced by active analysis of an auditory scene.

  27. Music Engineering as a Novel Strategy for Enhancing Music Enjoyment in the Cochlear Implant Recipient

    PubMed Central

    Kohlberg, Gavriel D.; Mancuso, Dean M.; Chari, Divya A.; Lalwani, Anil K.

    2015-01-01

    Objective. Enjoyment of music remains an elusive goal following cochlear implantation. We test the hypothesis that reengineering music to reduce its complexity can enhance the listening experience for the cochlear implant (CI) listener. Methods. Normal hearing (NH) adults (N = 16) and CI listeners (N = 9) evaluated a piece of country music on three enjoyment modalities: pleasantness, musicality, and naturalness. Participants listened to the original version along with 20 modified, less complex, versions created by including subsets of the musical instruments from the original song. NH participants listened to the segments both with and without CI simulation processing. Results. Compared to the original song, modified versions containing only 1–3 instruments were less enjoyable to the NH listeners but more enjoyable to the CI listeners and the NH listeners with CI simulation. Excluding vocals and including rhythmic instruments improved enjoyment for NH listeners with CI simulation but made no difference for CI listeners. Conclusions. Reengineering a piece of music to reduce its complexity has the potential to enhance music enjoyment for the cochlear implantee. Thus, in addition to improvements in software and hardware, engineering music specifically for the CI listener may be an alternative means to enhance their listening experience. PMID:26543322

  28. Binaural room simulation

    NASA Technical Reports Server (NTRS)

    Lehnert, H.; Blauert, Jens; Pompetzki, W.

    1991-01-01

    In everyday listening, the auditory event perceived by a listener is determined not only by the sound signal that a sound source emits but also by a variety of environmental parameters. These parameters are the position, orientation, and directional characteristics of the sound source, the listener's position and orientation, the geometrical and acoustical properties of surfaces which affect the sound field, and the sound propagation properties of the surrounding fluid. A complete set of these parameters can be called an Acoustic Environment. If the auditory event perceived by a listener is manipulated in such a way that the listener is shifted acoustically into a different acoustic environment without moving himself physically, a Virtual Acoustic Environment has been created. Here, we deal with a special technique to set up nearly arbitrary Virtual Acoustic Environments, the Binaural Room Simulation. The purpose of the Binaural Room Simulation is to compute the binaural impulse response related to a virtual acoustic environment taking into account all parameters mentioned above. One possible way to describe a Virtual Acoustic Environment is the concept of virtual sound sources. Each of the virtual sources emits a certain signal which is correlated with, but not necessarily identical to, the signal emitted by the direct sound source. If source and receiver are not moving, the acoustic environment becomes a linear time-invariant system. Then, the Binaural Impulse Response from the source to a listener's eardrums contains all relevant auditory information related to the Virtual Acoustic Environment. Listening into the simulated environment can easily be achieved by convolving the Binaural Impulse Response with dry signals and presenting the results via headphones.
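
    The final auralization step described above (convolving the Binaural Impulse Response with dry signals for headphone playback) can be sketched in a few lines. This is a generic illustration, not the authors' code; brir is assumed to be a two-channel (left/right) impulse response for one source position in the simulated room.

      import numpy as np
      from scipy.signal import fftconvolve

      def auralize(dry, brir):
          """Convolve a dry (anechoic) mono signal with a 2-channel binaural
          room impulse response to produce a headphone signal.

          dry:  (n,) mono source signal.
          brir: (2, m) left/right binaural impulse response.
          """
          out = np.stack([fftconvolve(dry, brir[0]),
                          fftconvolve(dry, brir[1])])
          return out / np.max(np.abs(out))  # normalize to avoid clipping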

  29. Binaural Processing of Multiple Sound Sources

    DTIC Science & Technology

    2016-08-18

    Sound Source Localization Identification, and Sound Source Localization When Listeners Move. The CI research was also supported by an NIH grant ("Cochlear Implant Performance in Realistic Listening Environments," Dr. Michael Dorman, Principal Investigator; Dr. William Yost, unpaid advisor).

  30. Reading and listening in people with aphasia: effects of syntactic complexity.

    PubMed

    DeDe, Gayle

    2013-11-01

    The purpose of this study was to compare online effects of syntactic complexity in written and spoken sentence comprehension in people with aphasia (PWA) and adults with no brain damage (NBD). The participants in Experiment 1 were NBD older and younger adults (n = 20 per group). The participants in Experiment 2 were 10 PWA. In both experiments, the participants read and listened to sentences in self-paced reading and listening tasks. The experimental materials consisted of object cleft sentences (e.g., It was the girl who the boy hugged.) and subject cleft sentences (e.g., It was the boy who hugged the girl.). The predicted effects of syntactic complexity were observed in both Experiments 1 and 2: Reading and listening times were longer for the verb in sentences with object compared to subject relative clauses. The NBD controls showed exaggerated effects of syntactic complexity in reading compared to listening. The PWA did not show different modality effects from the NBD participants. Although effects of syntactic complexity were somewhat exaggerated in reading compared with listening, both the PWA and the NBD controls showed similar effects in both modalities.

  31. The influence of music on mental effort and driving performance.

    PubMed

    Ünal, Ayça Berfu; Steg, Linda; Epstude, Kai

    2012-09-01

    The current research examined the influence of loud music on driving performance, and whether mental effort mediated this effect. Participants (N=69) drove in a driving simulator either with or without listening to music. In order to test whether music would have similar effects on driving performance in different situations, we manipulated the simulated traffic environment such that the driving context consisted of both complex and monotonous driving situations. In addition, we systematically kept track of drivers' mental load by making the participants verbally report their mental effort at certain moments while driving. We found that listening to music increased mental effort while driving, irrespective of the driving situation being complex or monotonous, providing support to the general assumption that music can be a distracting auditory stimulus while driving. However, drivers who listened to music performed as well as the drivers who did not listen to music, indicating that music did not impair their driving performance. Importantly, the increases in mental effort while listening to music pointed out that drivers try to regulate their mental effort as a cognitive compensatory strategy to deal with task demands. Interestingly, we observed significant improvements in driving performance in two of the driving situations. Mental effort might thus mediate the effect of music on driving performance in situations requiring sustained attention. Other process variables, such as arousal and boredom, should also be incorporated into study designs in order to reveal more about how music affects driving.

  32. Cochlear implantation with hearing preservation yields significant benefit for speech recognition in complex listening environments.

    PubMed

    Gifford, René H; Dorman, Michael F; Skarzynski, Henryk; Lorens, Artur; Polak, Marek; Driscoll, Colin L W; Roland, Peter; Buchman, Craig A

    2013-01-01

    The aim of this study was to assess the benefit of having preserved acoustic hearing in the implanted ear for speech recognition in complex listening environments. The present study included a within-subjects, repeated-measures design including 21 English-speaking and 17 Polish-speaking cochlear implant (CI) recipients with preserved acoustic hearing in the implanted ear. The patients were implanted with electrodes that varied in insertion depth from 10 to 31 mm. Mean preoperative low-frequency thresholds (average of 125, 250, and 500 Hz) in the implanted ear were 39.3 and 23.4 dB HL for the English- and Polish-speaking participants, respectively. In one condition, speech perception was assessed in an eight-loudspeaker environment in which the speech signals were presented from one loudspeaker and restaurant noise was presented from all loudspeakers. In another condition, the signals were presented in a simulation of a reverberant environment with a reverberation time of 0.6 sec. The response measures included speech reception thresholds (SRTs) and percent correct sentence understanding for two test conditions: CI plus low-frequency hearing in the contralateral ear (bimodal condition) and CI plus low-frequency hearing in both ears (best-aided condition). A subset of six English-speaking listeners was also assessed on measures of interaural time difference thresholds for a 250-Hz signal. Small, but significant, improvements in performance (1.7-2.1 dB and 6-10 percentage points) were found for the best-aided condition versus the bimodal condition. Postoperative thresholds in the implanted ear were correlated with the degree of electric and acoustic stimulation (EAS) benefit for speech recognition in diffuse noise. There was no reliable relationship between measures of audiometric threshold in the implanted ear, or elevation in threshold after surgery, and improvement in speech understanding in reverberation. There was a significant correlation between interaural time difference threshold at 250 Hz and EAS-related benefit for the adaptive speech reception threshold. The findings of this study suggest that (1) preserved low-frequency hearing improves speech understanding for CI recipients, (2) testing in complex listening environments, in which binaural timing cues differ for signal and noise, may best demonstrate the value of having two ears with low-frequency acoustic hearing, and (3) preservation of binaural timing cues, although poorer than observed for individuals with normal hearing, is possible after unilateral cochlear implantation with hearing preservation and is associated with EAS benefit. The results of this study demonstrate significant communicative benefit for hearing preservation in the implanted ear and provide support for the expansion of CI criteria to include individuals with low-frequency thresholds in even the normal to near-normal range.

  33. Cochlear implantation with hearing preservation yields significant benefit for speech recognition in complex listening environments

    PubMed Central

    Gifford, René H.; Dorman, Michael F.; Skarzynski, Henryk; Lorens, Artur; Polak, Marek; Driscoll, Colin L. W.; Roland, Peter; Buchman, Craig A.

    2012-01-01

    Objective The aim of this study was to assess the benefit of having preserved acoustic hearing in the implanted ear for speech recognition in complex listening environments. Design The current study included a within-subjects, repeated-measures design including 21 English-speaking and 17 Polish-speaking cochlear implant recipients with preserved acoustic hearing in the implanted ear. The patients were implanted with electrodes that varied in insertion depth from 10 to 31 mm. Mean preoperative low-frequency thresholds (average of 125, 250 and 500 Hz) in the implanted ear were 39.3 and 23.4 dB HL for the English- and Polish-speaking participants, respectively. In one condition, speech perception was assessed in an 8-loudspeaker environment in which the speech signals were presented from one loudspeaker and restaurant noise was presented from all loudspeakers. In another condition, the signals were presented in a simulation of a reverberant environment with a reverberation time of 0.6 sec. The response measures included speech reception thresholds (SRTs) and percent correct sentence understanding for two test conditions: cochlear implant (CI) plus low-frequency hearing in the contralateral ear (bimodal condition) and CI plus low-frequency hearing in both ears (best-aided condition). A subset of 6 English-speaking listeners was also assessed on measures of interaural time difference (ITD) thresholds for a 250-Hz signal. Results Small, but significant, improvements in performance (1.7 – 2.1 dB and 6 – 10 percentage points) were found for the best-aided condition vs. the bimodal condition. Postoperative thresholds in the implanted ear were correlated with the degree of EAS benefit for speech recognition in diffuse noise. There was no reliable relationship between measures of audiometric threshold in the implanted ear, or elevation in threshold following surgery, and improvement in speech understanding in reverberation. There was a significant correlation between ITD threshold at 250 Hz and EAS-related benefit for the adaptive SRT. Conclusions Our results suggest that (i) preserved low-frequency hearing improves speech understanding for CI recipients, (ii) testing in complex listening environments, in which binaural timing cues differ for signal and noise, may best demonstrate the value of having two ears with low-frequency acoustic hearing, and (iii) preservation of binaural timing cues, albeit poorer than observed for individuals with normal hearing, is possible following unilateral cochlear implantation with hearing preservation and is associated with EAS benefit. Our results demonstrate significant communicative benefit for hearing preservation in the implanted ear and provide support for the expansion of cochlear implant criteria to include individuals with low-frequency thresholds in even the normal to near-normal range. PMID:23446225

  34. Sources and Suggestions to Lower Listening Comprehension Anxiety in the EFL Classroom: A Case Study

    ERIC Educational Resources Information Center

    Sharif, Mohd. Yasin; Ferdous, Farhiba

    2012-01-01

    Listening is a creative skill that demands active involvement. Listeners draw on their knowledge from both linguistic and non-linguistic sources. Listening comprehension (LC) tasks, which are always accompanied by anxiety, need closer examination. In the listening process a low-anxiety classroom environment inspires the listeners to participate…

  35. Developing authentic clinical simulations for effective listening and communication in pediatric rehabilitation service delivery.

    PubMed

    King, Gillian; Shepherd, Tracy A; Servais, Michelle; Willoughby, Colleen; Bolack, Linda; Strachan, Deborah; Moodie, Sheila; Baldwin, Patricia; Knickle, Kerry; Parker, Kathryn; Savage, Diane; McNaughton, Nancy

    2016-10-01

    To describe the creation and validation of six simulations concerned with effective listening and interpersonal communication in pediatric rehabilitation. The simulations involved clinicians from various disciplines, were based on clinical scenarios related to client issues, and reflected core aspects of listening/communication. Each simulation had a key learning objective, thus focusing clinicians on specific listening skills. The article outlines the process used to turn written scenarios into digital video simulations, including steps taken to establish content validity and authenticity, and to establish a series of videos based on the complexity of their learning objectives, given contextual factors and associated macrocognitive processes that influence the ability to listen. A complexity rating scale was developed and used to establish a gradient of easy/simple, intermediate, and hard/complex simulations. The development process exemplifies an evidence-based, integrated knowledge translation approach to the teaching and learning of listening and communication skills.

  36. Individual Differences Reveal Correlates of Hidden Hearing Deficits

    PubMed Central

    Masud, Salwa; Mehraei, Golbarg; Verhulst, Sarah; Shinn-Cunningham, Barbara G.

    2015-01-01

    Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of “normal hearing.” PMID:25653371

  37. Teachers as Listeners: Implications for Teacher Education.

    ERIC Educational Resources Information Center

    Bozik, Mary

    Although teacher education programs spend very little time on the development of listening skills, the importance of listening to communicative effectiveness can hardly be exaggerated. As good listeners, teachers: (1) establish a classroom environment conducive to learning; (2) make better pedagogical decisions based on good listening skills; and…

  38. Repeated Listening Increases the Liking for Music Regardless of Its Complexity: Implications for the Appreciation and Aesthetics of Music

    PubMed Central

    Madison, Guy; Schiölde, Gunilla

    2017-01-01

    Psychological and aesthetic theories predict that music is appreciated at optimal, peak levels of familiarity and complexity, and that appreciation of music exhibits an inverted U-shaped relationship with familiarity as well as complexity. Because increased familiarity conceivably leads to improved processing and less perceived complexity, we test whether there is an interaction between familiarity and complexity. Specifically, increased familiarity should render the music subjectively less complex, and therefore move the apex of the U curve toward greater complexity. A naturalistic listening experiment was conducted, featuring 40 music examples (ME) divided by experts into 4 levels of complexity prior to the main experiment. The MEs were presented 28 times each across a period of approximately 4 weeks, and individual ratings were assessed throughout the experiment. Ratings of liking increased monotonically with repeated listening at all levels of complexity; both the simplest and the most complex MEs were liked more as a function of listening time, without any indication of a U-shaped relation. Although the MEs were previously unknown to the participants, the strongest predictor of liking was familiarity in terms of having listened to similar music before, i.e., familiarity with musical style. We conclude that familiarity is the single most important variable for explaining differences in liking among music, regardless of the complexity of the music. PMID:28408864

  19. Understanding the Listening Process: Rethinking the "One Size Fits All" Model

    ERIC Educational Resources Information Center

    Wolvin, Andrew

    2013-01-01

    Robert Bostrom's seminal contributions to listening theory and research represent an impressive legacy and provide listening scholars with important perspectives on the complexities of listening cognition and behavior. Bostrom's work provides a solid foundation on which to build models that more realistically explain how listeners function…

  20. Cognitive spare capacity: evaluation data and its association with comprehension of dynamic conversations

    PubMed Central

    Keidser, Gitte; Best, Virginia; Freeston, Katrina; Boyce, Alexandra

    2015-01-01

    It is well established that communication involves the working memory system, which becomes increasingly engaged in understanding speech as the input signal degrades. The more resources allocated to recovering a degraded input signal, the fewer resources, referred to as cognitive spare capacity (CSC), remain for higher-level processing of speech. Using simulated natural listening environments, the aims of this paper were to (1) evaluate an English version of a recently introduced auditory test to measure CSC that targets the updating process of the executive function, (2) investigate if the test predicts speech comprehension better than the reading span test (RST) commonly used to measure working memory capacity, and (3) determine if the test is sensitive to increasing the number of attended locations during listening. In Experiment I, the CSC test was presented using a male and a female talker, in quiet and in spatially separated babble and cafeteria noises, in an audio-only and in an audio-visual mode. Data collected on 21 listeners with normal and impaired hearing confirmed that the English version of the CSC test is sensitive to population group, noise condition, and clarity of speech, but not presentation modality. In Experiment II, performance by 27 normal-hearing listeners on a novel speech comprehension test presented in noise was significantly associated with working memory capacity, but not with CSC. Moreover, this group showed no significant difference in CSC as the number of talker locations in the test increased. There was no consistent association between the CSC test and the RST. It is recommended that future studies investigate the psychometric properties of the CSC test, and examine its sensitivity to the complexity of the listening environment in participants with both normal and impaired hearing. PMID:25999904

  1. Selective Attention Enhances Beta-Band Cortical Oscillation to Speech under “Cocktail-Party” Listening Conditions

    PubMed Central

    Gao, Yayue; Wang, Qian; Ding, Yu; Wang, Changming; Li, Haifeng; Wu, Xihong; Qu, Tianshu; Li, Liang

    2017-01-01

    Human listeners are able to selectively attend to target speech in a noisy environment with multiple-people talking. Using recordings of scalp electroencephalogram (EEG), this study investigated how selective attention facilitates the cortical representation of target speech under a simulated “cocktail-party” listening condition with speech-on-speech masking. The result shows that the cortical representation of target-speech signals under the multiple-people talking condition was specifically improved by selective attention relative to the non-selective-attention listening condition, and the beta-band activity was most strongly modulated by selective attention. Moreover, measured with the Granger Causality value, selective attention to the single target speech in the mixed-speech complex enhanced the following four causal connectivities for the beta-band oscillation: the ones (1) from site FT7 to the right motor area, (2) from the left frontal area to the right motor area, (3) from the central frontal area to the right motor area, and (4) from the central frontal area to the right frontal area. However, the selective-attention-induced change in beta-band causal connectivity from the central frontal area to the right motor area, but not other beta-band causal connectivities, was significantly correlated with the selective-attention-induced change in the cortical beta-band representation of target speech. These findings suggest that under the “cocktail-party” listening condition, the beta-band oscillation in EEGs to target speech is specifically facilitated by selective attention to the target speech that is embedded in the mixed-speech complex. The selective attention-induced unmasking of target speech may be associated with the improved beta-band functional connectivity from the central frontal area to the right motor area, suggesting a top-down attentional modulation of the speech-motor process. PMID:28239344

  2. Selective Attention Enhances Beta-Band Cortical Oscillation to Speech under "Cocktail-Party" Listening Conditions.

    PubMed

    Gao, Yayue; Wang, Qian; Ding, Yu; Wang, Changming; Li, Haifeng; Wu, Xihong; Qu, Tianshu; Li, Liang

    2017-01-01

    Human listeners are able to selectively attend to target speech in a noisy environment with multiple-people talking. Using recordings of scalp electroencephalogram (EEG), this study investigated how selective attention facilitates the cortical representation of target speech under a simulated "cocktail-party" listening condition with speech-on-speech masking. The result shows that the cortical representation of target-speech signals under the multiple-people talking condition was specifically improved by selective attention relative to the non-selective-attention listening condition, and the beta-band activity was most strongly modulated by selective attention. Moreover, measured with the Granger Causality value, selective attention to the single target speech in the mixed-speech complex enhanced the following four causal connectivities for the beta-band oscillation: the ones (1) from site FT7 to the right motor area, (2) from the left frontal area to the right motor area, (3) from the central frontal area to the right motor area, and (4) from the central frontal area to the right frontal area. However, the selective-attention-induced change in beta-band causal connectivity from the central frontal area to the right motor area, but not other beta-band causal connectivities, was significantly correlated with the selective-attention-induced change in the cortical beta-band representation of target speech. These findings suggest that under the "cocktail-party" listening condition, the beta-band oscillation in EEGs to target speech is specifically facilitated by selective attention to the target speech that is embedded in the mixed-speech complex. The selective attention-induced unmasking of target speech may be associated with the improved beta-band functional connectivity from the central frontal area to the right motor area, suggesting a top-down attentional modulation of the speech-motor process.

  3. The effects of listening environment and earphone style on preferred listening levels of normal hearing adults using an MP3 player.

    PubMed

    Hodgetts, William E; Rieger, Jana M; Szarko, Ryan A

    2007-06-01

    The main objective of this study was to determine the influence of listening environment and earphone style on the preferred listening levels (PLLs) measured in users' ear canals with a commercially available MP3 player. It was hypothesized that listeners would prefer higher levels with earbud headphones as opposed to over-the-ear headphones, and that the effects would depend on the environment in which the user was listening. A secondary objective was to use the measured PLLs to determine the permissible listening duration to reach 100% daily noise dose. There were two independent variables in this study. The first, headphone style, had three levels: earbud, over-the-ear, and over-the-ear with noise reduction (the same headphones with a noise reduction circuit). The second, environment, also had three levels: quiet, street noise, and multi-talker babble. The dependent variable was ear canal A-weighted sound pressure level. A 3 × 3 within-subjects repeated-measures ANOVA was used to analyze the data. Thirty-eight normal-hearing adults were recruited from the Faculty of Rehabilitation Medicine at the University of Alberta. Each subject listened to the same song and adjusted the level until it "sounded best" to them in each of the 9 conditions. Significant main effects were found for both the headphone style and environment factors. On average, listeners had higher preferred listening levels with the earbud headphones than with the over-the-ear headphones. When the noise reduction circuit was used with the over-the-ear headphones, the average PLL was even lower. On average, listeners had higher PLLs in street noise than in multi-talker babble, and both of these were higher than the PLL for the quiet condition. The interaction between headphone style and environment was also significant. Details of individual contrasts are explored. Overall, PLLs were quite conservative, which would theoretically allow for extended permissible listening durations. Finally, we investigated the maximum output level of the MP3 player in the ear canals of authors 1 and 3 of this paper. Levels were highest with the earbud style, followed by the over-the-ear with noise reduction. The over-the-ear headphone without noise reduction had the lowest maximum output. The majority of MP3 players are sold with the earbud style of headphones. Preferred listening levels are higher with this style of headphone compared to the over-the-ear style. Moreover, as the noise level in the environment increases, earbud users are even more susceptible to background noise and consequently increase the level of the music to overcome it. The result is an increased sound pressure level at the eardrum. However, the levels chosen by our subjects suggest that MP3 listening levels may not be as significant a concern as has been reported recently in the mainstream media.
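
    The permissible-duration calculation mentioned in this abstract follows directly from a daily noise dose criterion. A minimal sketch, assuming the common NIOSH criterion (100% dose = 85 dBA for 8 hours, with a 3-dB exchange rate); the abstract does not state which criterion the authors applied:

    ```python
    # Permissible listening duration until a 100% daily noise dose is reached.
    # Assumes the NIOSH damage-risk criterion (85 dBA for 8 h, 3-dB exchange);
    # the study's actual criterion is not given in the abstract.

    def permissible_hours(level_dba, criterion=85.0, ref_hours=8.0, exchange=3.0):
        """Hours of exposure at level_dba that accumulate a 100% daily dose."""
        return ref_hours / 2.0 ** ((level_dba - criterion) / exchange)

    for level in (79, 85, 91, 97):
        print(f"{level} dBA -> {permissible_hours(level):.1f} h to 100% dose")
    ```

    Each 3-dB increase in level halves the permissible duration, which is why modest differences in PLL between earphone styles translate into large differences in safe listening time.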

  4. Experiential Learning and Learning Environments: The Case of Active Listening Skills

    ERIC Educational Resources Information Center

    Huerta-Wong, Juan Enrique; Schoech, Richard

    2010-01-01

    Social work education research frequently has suggested an interaction between teaching techniques and learning environments. However, this interaction has never been tested. This study compared virtual and face-to-face learning environments and included active listening concepts to test whether the effectiveness of learning environments depends…

  5. Perceptual Fidelity vs. Engineering Compromises In Virtual Acoustic Displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Ahumada, Albert (Technical Monitor)

    1997-01-01

    Immersive, three-dimensional displays are increasingly becoming a goal of advanced human-machine interfaces. While the technology for achieving truly useful multisensory environments is still being developed, techniques for generating three-dimensional sound are now both sophisticated and practical enough to be applied to acoustic displays. The ultimate goal of virtual acoustics is to simulate the complex acoustic field experienced by a listener freely moving around within an environment. Of course, such complexity, freedom of movement and interactivity are not always possible in a "true" virtual environment, much less in lower-fidelity multimedia systems. However, many of the perceptual and engineering constraints (and frustrations) that researchers, engineers and listeners have experienced in virtual audio are relevant to multimedia. In fact, some of the problems that have been studied will be even more of an issue for lower fidelity systems that are attempting to address the requirements of a huge, diverse and ultimately unknown audience. Examples include individual differences in head-related transfer functions, a lack of real interactivity (head-tracking) in many multimedia displays, and perceptual degradation due to low sampling rates and/or low-bit compression. This paper discusses some of the engineering constraints faced during implementation of virtual acoustic environments and the perceptual consequences of these constraints. Specific examples are given for NASA applications such as telerobotic control, aeronautical displays, and shuttle launch communications. An attempt will also be made to relate these issues to low-fidelity implementations such as the internet.

  6. Perceptual Fidelity Versus Engineering Compromises in Virtual Acoustic Displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Ellis, Stephen R. (Technical Monitor); Frey, Mary Anne (Technical Monitor); Schneider, Victor S. (Technical Monitor)

    1997-01-01

    Immersive, three-dimensional displays are increasingly becoming a goal of advanced human-machine interfaces. While the technology for achieving truly useful multisensory environments is still being developed, techniques for generating three-dimensional sound are now both sophisticated and practical enough to be applied to acoustic displays. The ultimate goal of virtual acoustics is to simulate the complex acoustic field experienced by a listener freely moving around within an environment. Of course, such complexity, freedom of movement and interactivity are not always possible in a 'true' virtual environment, much less in lower-fidelity multimedia systems. However, many of the perceptual and engineering constraints (and frustrations) that researchers, engineers and listeners have experienced in virtual audio are relevant to multimedia. In fact, some of the problems that have been studied will be even more of an issue for lower fidelity systems that are attempting to address the requirements of a huge, diverse and ultimately unknown audience. Examples include individual differences in head-related transfer functions, a lack of real interactivity (head-tracking) in many multimedia displays, and perceptual degradation due to low sampling rates and/or low-bit compression. This paper discusses some of the engineering constraints faced during implementation of virtual acoustic environments and the perceptual consequences of these constraints. Specific examples are given for NASA applications such as telerobotic control, aeronautical displays, and shuttle launch communications. An attempt will also be made to relate these issues to low-fidelity implementations such as the internet.
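
    Both of these records describe rendering virtual acoustic environments over headphones. As a rough illustration of the core operation, here is a hypothetical sketch of static binaural rendering: convolving a mono source with a left/right head-related impulse response (HRIR) pair for a single direction. The HRIRs below are placeholders; displays of the kind described use measured HRTF sets and update the filters with head tracking.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    # Minimal static binaural rendering: convolve a mono source with a
    # left/right HRIR pair for one direction. The HRIRs here are synthetic
    # placeholders (an interaural delay and level difference), not measured
    # transfer functions.

    fs = 44100
    mono = np.random.randn(fs)                         # 1 s of noise as source
    hrir_left = np.zeros(256); hrir_left[0] = 1.0      # hypothetical left HRIR
    hrir_right = np.zeros(256); hrir_right[30] = 0.6   # delayed, attenuated right

    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    binaural = np.stack([left, right], axis=1)         # 2-channel headphone signal
    ```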

  7. Complex Listening: Supporting Students to Listen as Mathematical Sense-Makers

    ERIC Educational Resources Information Center

    Hintz, Allison; Tyson, Kersti

    2015-01-01

    Participating in reform-oriented mathematical discussion calls on teachers and students to listen to one another in new and different ways. However, listening is an understudied dimension of teaching and learning mathematics. In this analysis, we draw on a sociocultural perspective and a conceptual framing of three types of listening--evaluative,…

  8. Relatively effortless listening promotes understanding and recall of medical instructions in older adults

    PubMed Central

    DiDonato, Roberta M.; Surprenant, Aimée M.

    2015-01-01

    Communication success under adverse conditions requires efficient and effective recruitment of both bottom-up (sensori-perceptual) and top-down (cognitive-linguistic) resources to decode the intended auditory-verbal message. Employing these limited capacity resources has been shown to vary across the lifespan, with evidence indicating that younger adults out-perform older adults for both comprehension and memory of the message. This study examined how sources of interference arising from the speaker (message spoken with conversational vs. clear speech technique), the listener (hearing-listening and cognitive-linguistic factors), and the environment (in competing speech babble noise vs. quiet) interact and influence learning and memory performance using more ecologically valid methods than has been done previously. The results suggest that when older adults listened to complex medical prescription instructions with “clear speech,” (presented at audible levels through insertion earphones) their learning efficiency, immediate, and delayed memory performance improved relative to their performance when they listened with a normal conversational speech rate (presented at audible levels in sound field). This better learning and memory performance for clear speech listening was maintained even in the presence of speech babble noise. The finding that there was the largest learning-practice effect on 2nd trial performance in the conversational speech when the clear speech listening condition was first is suggestive of greater experience-dependent perceptual learning or adaptation to the speaker's speech and voice pattern in clear speech. This suggests that experience-dependent perceptual learning plays a role in facilitating the language processing and comprehension of a message and subsequent memory encoding. PMID:26106353

  9. Individual differences reveal correlates of hidden hearing deficits.

    PubMed

    Bharadwaj, Hari M; Masud, Salwa; Mehraei, Golbarg; Verhulst, Sarah; Shinn-Cunningham, Barbara G

    2015-02-04

    Clinical audiometry has long focused on determining the detection thresholds for pure tones, which depend on intact cochlear mechanics and hair cell function. Yet many listeners with normal hearing thresholds complain of communication difficulties, and the causes for such problems are not well understood. Here, we explore whether normal-hearing listeners exhibit such suprathreshold deficits, affecting the fidelity with which subcortical areas encode the temporal structure of clearly audible sound. Using an array of measures, we evaluated a cohort of young adults with thresholds in the normal range to assess both cochlear mechanical function and temporal coding of suprathreshold sounds. Listeners differed widely in both electrophysiological and behavioral measures of temporal coding fidelity. These measures correlated significantly with each other. Conversely, these differences were unrelated to the modest variation in otoacoustic emissions, cochlear tuning, or the residual differences in hearing threshold present in our cohort. Electroencephalography revealed that listeners with poor subcortical encoding had poor cortical sensitivity to changes in interaural time differences, which are critical for localizing sound sources and analyzing complex scenes. These listeners also performed poorly when asked to direct selective attention to one of two competing speech streams, a task that mimics the challenges of many everyday listening environments. Together with previous animal and computational models, our results suggest that hidden hearing deficits, likely originating at the level of the cochlear nerve, are part of "normal hearing."

  10. Enhancing Foreign Language Learning through Listening Strategies Delivered in L1: An Experimental Study

    ERIC Educational Resources Information Center

    Bozorgian, Hossein; Pillay, Hitendra

    2013-01-01

    Listening used in language teaching refers to a complex process that allows us to understand spoken language. The current study, conducted in Iran with an experimental design, investigated the effectiveness of teaching listening strategies delivered in L1 (Persian) and its effect on listening comprehension in L2. Five listening strategies:…

  11. Using mediation techniques to manage conflict and create healthy work environments.

    PubMed

    Gerardi, Debra

    2004-01-01

    Healthcare organizations must find ways for managing conflict and developing effective working relationships to create healthy work environments. The effects of unresolved conflict on clinical outcomes, staff retention, and the financial health of the organization lead to many unnecessary costs that divert resources from clinical care. The complexity of delivering critical care services makes conflict resolution difficult. Developing collaborative working relationships helps to manage conflict in complex environments. Working relationships are based on the ability to deal with differences. Dealing with differences requires skill development and techniques for balancing interests and communicating effectively. Techniques used by mediators are effective for resolving disputes and developing working relationships. With practice, these techniques are easily transferable to the clinical setting. Listening for understanding, reframing, elevating the definition of the problem, and forming clear agreements can foster working relationships, decrease the level of conflict, and create healthy work environments that benefit patients and professionals.

  12. Robust decoding of selective auditory attention from MEG in a competing-speaker environment via state-space modeling

    PubMed Central

    Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z.; Shamma, Shihab A.; Babadi, Behtash

    2015-01-01

    The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy. PMID:26436490
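
    The paper's MAP/EM state-space decoder is too involved for a short sketch, but a much simpler baseline for the same task correlates the measured neural response with the envelope of each speech stream in short windows and labels each window with the better-correlated stream. A hypothetical sketch of that envelope-correlation baseline (not the authors' method; all signals below are simulated placeholders):

    ```python
    import numpy as np

    # Simplified attention-decoding baseline: within short windows, correlate
    # the neural response with the envelope of each speech stream and pick
    # the stream with the higher correlation. The state-space decoder in the
    # paper adds smoothing and confidence intervals on top of this idea.

    def decode_attention(neural, env_a, env_b, fs, win_s=5.0):
        """Return +1 (stream A attended) or -1 (stream B) per window."""
        n = int(win_s * fs)
        labels = []
        for start in range(0, len(neural) - n + 1, n):
            seg = slice(start, start + n)
            r_a = np.corrcoef(neural[seg], env_a[seg])[0, 1]
            r_b = np.corrcoef(neural[seg], env_b[seg])[0, 1]
            labels.append(1 if r_a >= r_b else -1)
        return np.array(labels)

    # Demo on simulated data: the "neural" signal tracks stream A's envelope.
    fs = 64
    t = np.arange(0, 60 * fs) / fs
    env_a = 1 + np.sin(2 * np.pi * 0.5 * t)
    env_b = 1 + np.sin(2 * np.pi * 0.7 * t)
    neural = env_a + 0.5 * np.random.randn(len(t))
    print(decode_attention(neural, env_a, env_b, fs).mean())  # close to +1
    ```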

  13. Amplitude modulation detection by human listeners in reverberant sound fields: Effects of prior listening exposure.

    PubMed

    Zahorik, Pavel; Anderson, Paul W

    2013-01-01

    Previous work [Zahorik et al., POMA, 15, 050002 (2012)] has reported that for both broadband and narrowband noise carrier signals in a simulated reverberant sound field, human sensitivity to amplitude modulation (AM) is higher than would be predicted based on the acoustical modulation transfer function (MTF) of the listening environment. These results may be suggestive of mechanisms that functionally enhance modulation in reverberant listening, although many details of this enhancement effect are unknown. Given recent findings that demonstrate improvements in speech understanding with prior exposure to reverberant listening environments, it is of interest to determine whether listening exposure to a reverberant room might also influence AM detection in the room, and perhaps contribute to the AM enhancement effect. Here, AM detection thresholds were estimated (using an adaptive 2-alternative forced-choice procedure) in each of two listening conditions: one in which consistent listening exposure to a particular room was provided, and a second that intentionally disrupted listening exposure by varying the room from trial-to-trial. Results suggest that consistent prior listening exposure contributes to enhanced AM sensitivity in rooms. [Work supported by the NIH/NIDCD.].
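
    The abstract specifies only an adaptive 2-alternative forced-choice procedure. A sketch of one common implementation, a 2-down/1-up staircase that converges near 70.7% correct, follows; the study's actual tracking rule, step size, and stopping criterion are not given, and the simulated listener is a placeholder:

    ```python
    import random

    # Hypothetical 2-down/1-up track for AM detection: modulation depth
    # (in dB, 20*log10(m)) decreases after two consecutive correct trials
    # and increases after each error, converging near 70.7% correct.

    def simulated_listener(depth_db):
        # Placeholder psychometric function: deeper modulation, more correct.
        p_correct = min(0.99, max(0.01, 0.5 + 0.04 * (depth_db + 15.0)))
        return random.random() < p_correct

    depth_db, step_db = -5.0, 2.0
    correct_run, last_dir, reversals = 0, 0, []

    for _ in range(80):
        if simulated_listener(depth_db):
            correct_run += 1
            if correct_run == 2:                 # two correct: make it harder
                correct_run = 0
                if last_dir == +1:
                    reversals.append(depth_db)   # direction change = reversal
                depth_db, last_dir = depth_db - step_db, -1
        else:                                    # one error: make it easier
            correct_run = 0
            if last_dir == -1:
                reversals.append(depth_db)
            depth_db, last_dir = depth_db + step_db, +1

    last = reversals[-6:]                        # average the final reversals
    if last:
        print(f"threshold estimate: {sum(last) / len(last):.1f} dB")
    ```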

  14. Sensitivity to Angular and Radial Source Movements as a Function of Acoustic Complexity in Normal and Impaired Hearing

    PubMed Central

    Grimm, Giso; Hohmann, Volker; Laugesen, Søren; Neher, Tobias

    2017-01-01

    In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far (‘radial’) and left-right (‘angular’) movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup. PMID:28675088

  15. Sensitivity to Angular and Radial Source Movements as a Function of Acoustic Complexity in Normal and Impaired Hearing.

    PubMed

    Lundbeck, Micha; Grimm, Giso; Hohmann, Volker; Laugesen, Søren; Neher, Tobias

    2017-01-01

    In contrast to static sounds, spatially dynamic sounds have received little attention in psychoacoustic research so far. This holds true especially for acoustically complex (reverberant, multisource) conditions and impaired hearing. The current study therefore investigated the influence of reverberation and the number of concurrent sound sources on source movement detection in young normal-hearing (YNH) and elderly hearing-impaired (EHI) listeners. A listening environment based on natural environmental sounds was simulated using virtual acoustics and rendered over headphones. Both near-far ('radial') and left-right ('angular') movements of a frontal target source were considered. The acoustic complexity was varied by adding static lateral distractor sound sources as well as reverberation. Acoustic analyses confirmed the expected changes in stimulus features that are thought to underlie radial and angular source movements under anechoic conditions and suggested a special role of monaural spectral changes under reverberant conditions. Analyses of the detection thresholds showed that, with the exception of the single-source scenarios, the EHI group was less sensitive to source movements than the YNH group, despite adequate stimulus audibility. Adding static sound sources clearly impaired the detectability of angular source movements for the EHI (but not the YNH) group. Reverberation, on the other hand, clearly impaired radial source movement detection for the EHI (but not the YNH) listeners. These results illustrate the feasibility of studying factors related to auditory movement perception with the help of the developed test setup.

  16. Design and Implementation of an Intelligent Virtual Environment for Improving Speaking and Listening Skills

    ERIC Educational Resources Information Center

    Hassani, Kaveh; Nahvi, Ali; Ahmadi, Ali

    2016-01-01

    In this paper, we present an intelligent architecture, called intelligent virtual environment for language learning, with embedded pedagogical agents for improving listening and speaking skills of non-native English language learners. The proposed architecture integrates virtual environments into the Intelligent Computer-Assisted Language…

  17. Auditory stream segregation with multi-tonal complexes in hearing-impaired listeners

    NASA Astrophysics Data System (ADS)

    Rogers, Deanna S.; Lentz, Jennifer J.

    2004-05-01

    The ability to segregate sounds into different streams was investigated in normally hearing and hearing-impaired listeners. Fusion and fission boundaries were measured using 6-tone complexes with tones equally spaced in log frequency. An ABA-ABA- sequence was used in which A represents a multitone complex ranging from either 250-1000 Hz (low-frequency region) or 1000-4000 Hz (high-frequency region). B also represents a multitone complex with the same log spacing as A. Multitonal complexes were 100 ms in duration with 20-ms ramps, and "-" represents a silent interval of 100 ms. To measure the fusion boundary, the first tone of the B stimulus was either 375 Hz (low) or 1500 Hz (high) and shifted downward in frequency with each progressive ABA triplet until the listener pressed a button indicating that a "galloping" rhythm was heard. To measure the fission boundary, the first tone of the B stimulus was 252 or 1030 Hz and shifted upward with each triplet; listeners then pressed a button when the galloping rhythm ended. Data suggest that hearing-impaired subjects have different fission and fusion boundaries than normal-hearing listeners. These data will be discussed in terms of both peripheral and central factors.

  18. Teaching Listening Skills to JFL Students in Australia.

    ERIC Educational Resources Information Center

    Danaher, Mike

    1996-01-01

    Examines issues affecting the teaching and learning of listening skills within the study of Japanese as a Foreign Language. Listening within foreign-language learning is a complex skill, and students encounter several difficulties in learning to listen for comprehension. Teachers face concerns ranging from resource availability to how to teach…

  19. "Listen and Understand What I Am Saying": Church-Listening as a Challenge for Non-Native Listeners of English in the United Kingdom

    ERIC Educational Resources Information Center

    Malmström, Hans

    2015-01-01

    This article uses computer-assisted analysis to study the listening environment provided by Bible readings and preaching during church services. It focuses on the vocabulary size needed to comprehend 95% and 98% of the running words of the input (lexical coverage levels indicating comprehension in connection with listening) and on the place of…
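
    Lexical coverage of the kind analyzed here is the proportion of running words (tokens) in the input that fall within a given vocabulary. A minimal sketch, with a placeholder word list and passage:

    ```python
    import re

    # Lexical coverage: the share of running words in a text covered by a
    # known-vocabulary list. The word list and passage are illustrative only;
    # real analyses use frequency-band word-family lists.

    def coverage(text, known_words):
        tokens = re.findall(r"[a-z']+", text.lower())
        known = sum(1 for t in tokens if t in known_words)
        return known / len(tokens)

    known = {"in", "the", "beginning", "was", "word", "and", "with"}
    passage = "In the beginning was the Word, and the Word was with God."
    print(f"coverage = {coverage(passage, known):.0%}")  # 'god' is uncovered
    ```

    The 95% and 98% figures in the abstract are thresholds on exactly this ratio: the vocabulary size of interest is the smallest word list that pushes coverage past those levels.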

  20. The Effect of Gender on the N1-P2 Auditory Complex while Listening and Speaking with Altered Auditory Feedback

    ERIC Educational Resources Information Center

    Swink, Shannon; Stuart, Andrew

    2012-01-01

    The effect of gender on the N1-P2 auditory complex was examined while listening and speaking with altered auditory feedback. Fifteen normal hearing adult males and 15 females participated. N1-P2 components were evoked while listening to self-produced nonaltered and frequency shifted /a/ tokens and during production of /a/ tokens during nonaltered…

  1. Listening Strategy Preferences in Multimedia Environment: A Study on Iranian Female Language Learners

    ERIC Educational Resources Information Center

    Fini, Lili

    2016-01-01

    Listening has recently received great attention compared with the other three language skills, since communication is the first and most essential need. Language learners have been using three different listening strategies (cognitive, meta-cognitive, and socio-affective) to improve their listening skills in multimedia…

  2. Active Listening Strategies of Academically Successful University Students

    ERIC Educational Resources Information Center

    Canpolat, Murat; Kuzu, Sekvan; Yildirim, Bilal; Canpolat, Sevilay

    2015-01-01

    Problem Statement: In formal educational environments, the quality of student listening affects learning considerably. Students who are uninterested in a lesson listen reluctantly, wanting time to pass quickly and the class to end as soon as possible. In such situations, students become passive and, though appearing to be listening, will not use…

  3. Evidence for enhanced discrimination of virtual auditory distance among blind listeners using level and direct-to-reverberant cues.

    PubMed

    Kolarik, Andrew J; Cirstea, Silvia; Pardhan, Shahina

    2013-02-01

    Totally blind listeners often demonstrate better than normal capabilities when performing spatial hearing tasks. Accurate representation of three-dimensional auditory space requires the processing of available distance information between the listener and the sound source; however, auditory distance cues vary greatly depending upon the acoustic properties of the environment, and it is not known which distance cues are important to totally blind listeners. Our data show that totally blind listeners display better performance compared to sighted age-matched controls for distance discrimination tasks in anechoic and reverberant virtual rooms simulated using a room-image procedure. Totally blind listeners use two major auditory distance cues to stationary sound sources, level and direct-to-reverberant ratio, more effectively than sighted controls for many of the virtual distances tested. These results show that significant compensation among totally blind listeners for virtual auditory spatial distance leads to benefits across a range of simulated acoustic environments. No significant differences in performance were observed between listeners with partial non-correctable visual losses and sighted controls, suggesting that sensory compensation for virtual distance does not occur for listeners with partial vision loss.
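
    One of the two distance cues named here, the direct-to-reverberant ratio (DRR), is straightforward to compute from a room impulse response: the energy of the direct-path arrival versus all later reverberant energy. A minimal sketch (the 2.5-ms direct window and the synthetic impulse response are illustrative assumptions):

    ```python
    import numpy as np

    # Direct-to-reverberant ratio from a room impulse response: energy up to
    # shortly after the direct-path arrival versus the remaining tail.

    def drr_db(rir, fs, direct_window_ms=2.5):
        onset = int(np.argmax(np.abs(rir)))             # direct-path arrival
        split = onset + int(fs * direct_window_ms / 1e3)
        direct = np.sum(rir[:split] ** 2)
        reverb = np.sum(rir[split:] ** 2)
        return 10.0 * np.log10(direct / reverb)

    fs = 48000
    rir = np.zeros(fs // 2)                              # 0.5 s synthetic RIR
    rir[100] = 1.0                                       # direct path
    rir[200:] = 0.01 * np.random.randn(len(rir) - 200)   # diffuse tail
    print(f"DRR = {drr_db(rir, fs):.1f} dB")
    ```

    As a source moves away in a room, the direct energy falls while the reverberant energy stays roughly constant, so DRR decreases with distance, which is what makes it usable as a distance cue.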

  4. The Mediating Effect of Listening Metacognitive Awareness between Test-Taking Motivation and Listening Test Score: An Expectancy-Value Theory Approach

    PubMed Central

    Xu, Jian

    2017-01-01

    The present study investigated test-taking motivation in L2 listening testing context by applying Expectancy-Value Theory as the framework. Specifically, this study was intended to examine the complex relationships among expectancy, importance, interest, listening anxiety, listening metacognitive awareness, and listening test score using data from a large-scale and high-stakes language test among Chinese first-year undergraduates. Structural equation modeling was used to examine the mediating effect of listening metacognitive awareness on the relationship between expectancy, importance, interest, listening anxiety, and listening test score. According to the results, test takers’ listening scores can be predicted by expectancy, interest, and listening anxiety significantly. The relationship between expectancy, interest, listening anxiety, and listening test score was mediated by listening metacognitive awareness. The findings have implications for test takers to improve their test taking motivation and listening metacognitive awareness, as well as for L2 teachers to intervene in L2 listening classrooms. PMID:29312063

  5. The Mediating Effect of Listening Metacognitive Awareness between Test-Taking Motivation and Listening Test Score: An Expectancy-Value Theory Approach.

    PubMed

    Xu, Jian

    2017-01-01

    The present study investigated test-taking motivation in L2 listening testing context by applying Expectancy-Value Theory as the framework. Specifically, this study was intended to examine the complex relationships among expectancy, importance, interest, listening anxiety, listening metacognitive awareness, and listening test score using data from a large-scale and high-stakes language test among Chinese first-year undergraduates. Structural equation modeling was used to examine the mediating effect of listening metacognitive awareness on the relationship between expectancy, importance, interest, listening anxiety, and listening test score. According to the results, test takers' listening scores can be predicted by expectancy, interest, and listening anxiety significantly. The relationship between expectancy, interest, listening anxiety, and listening test score was mediated by listening metacognitive awareness. The findings have implications for test takers to improve their test taking motivation and listening metacognitive awareness, as well as for L2 teachers to intervene in L2 listening classrooms.
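
    The mediation analysis in these two records was carried out with full structural equation modeling. A stripped-down single-mediator analogue estimates the indirect effect as the product of the X -> M slope and the M -> Y slope (controlling for X), with a bootstrap confidence interval. A sketch on simulated placeholder data, where X stands in for motivation, M for metacognitive awareness, and Y for the listening score:

    ```python
    import numpy as np

    # Single-mediator sketch (X -> M -> Y), not the study's full SEM: the
    # indirect effect is a*b, with a percentile bootstrap CI. Data simulated.

    rng = np.random.default_rng(0)
    n = 300
    x = rng.normal(size=n)
    m = 0.5 * x + rng.normal(size=n)             # true path a = 0.5
    y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # true path b = 0.4

    def slope(pred, resp):
        """OLS slope of resp on pred (with intercept)."""
        X = np.column_stack([np.ones_like(pred), pred])
        return np.linalg.lstsq(X, resp, rcond=None)[0][1]

    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)              # resample with replacement
        a = slope(x[idx], m[idx])
        X = np.column_stack([np.ones(n), m[idx], x[idx]])
        b = np.linalg.lstsq(X, y[idx], rcond=None)[0][1]  # M -> Y given X
        boot.append(a * b)

    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect a*b 95% CI: [{lo:.2f}, {hi:.2f}]")
    ```

    A CI that excludes zero supports mediation; the SEM in the study generalizes this to multiple predictors and latent variables.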

  6. Demonstrations of simple and complex auditory psychophysics for multiple platforms and environments

    NASA Astrophysics Data System (ADS)

    Horowitz, Seth S.; Simmons, Andrea M.; Blue, China

    2005-09-01

    Sound is arguably the most widely perceived and pervasive form of energy in our world, and among the least understood, in part due to the complexity of its underlying principles. A series of interactive displays has been developed which demonstrates that the nature of sound involves the propagation of energy through space, and illustrates the definition of psychoacoustics, which is how listeners map the physical aspects of sound and vibration onto their brains. These displays use auditory illusions and commonly experienced music and sound in novel presentations (using interactive computer algorithms) to show that what you hear is not always what you get. The areas covered in these demonstrations range from simple and complex auditory localization, which illustrate why humans are bad at echolocation but excellent at determining the contents of auditory space, to auditory illusions that manipulate fine phase information and make the listener think their head is changing size. Another demonstration shows how auditory and visual localization coincide and sound can be used to change visual tracking. These demonstrations are designed to run on a wide variety of student-accessible platforms including web pages, stand-alone presentations, or even hardware-based systems for museum displays.

  7. Cultural and Demographic Factors Influencing Noise Exposure Estimates from Use of Portable Listening Devices in an Urban Environment

    ERIC Educational Resources Information Center

    Fligor, Brian J.; Levey, Sandra; Levey, Tania

    2014-01-01

    Purpose: This study examined listening levels and duration of portable listening devices (PLDs) used by people with diversity of ethnicity, education, music genre, and PLD manufacturer. The goal was to estimate participants' PLD noise exposure and identify factors influencing user behavior. Method: This study measured listening levels of 160…

  8. An Investigation of Spatial Hearing in Children with Normal Hearing and with Cochlear Implants and the Impact of Executive Function

    NASA Astrophysics Data System (ADS)

    Misurelli, Sara M.

    The ability to analyze an "auditory scene", that is, to selectively attend to a target source while simultaneously segregating and ignoring distracting information, is one of the most important and complex skills utilized by normal-hearing (NH) adults. The NH adult auditory system and brain work rather well to segregate auditory sources in adverse environments. However, for some children and individuals with hearing loss, selectively attending to one source in noisy environments can be extremely challenging. In a normal auditory system, information arriving at each ear is integrated, and thus these binaural cues aid in speech understanding in noise. A growing number of individuals who are deaf now receive cochlear implants (CIs), which supply hearing through electrical stimulation to the auditory nerve. In particular, bilateral cochlear implants (BiCIs) are now becoming more prevalent, especially in children. However, because CI sound processing lacks both fine structure cues and coordination between stimulation at the two ears, binaural cues may either be absent or inconsistent. For children with NH and with BiCIs, this difficulty in segregating sources is of particular concern because their learning and development commonly occur within the context of complex auditory environments. This dissertation intends to explore and understand the ability of children with NH and with BiCIs to function in everyday noisy environments. The goals of this work are to (1) investigate source segregation abilities in children with NH and with BiCIs; (2) examine the effect of target-interferer similarity and the benefits of source segregation for children with NH and with BiCIs; (3) investigate measures of executive function that may predict performance in complex and realistic auditory tasks of source segregation for listeners with NH; and (4) examine source segregation abilities in NH listeners, from school age to adults.

  9. How People Listen to Languages They Don't Know.

    ERIC Educational Resources Information Center

    Lorch, Marjorie Perlman; Meara, Paul

    1989-01-01

    Investigation of how 19 adult males listened to and recognized unknown foreign languages (Farsi, Punjabi, Spanish, Indonesian, Arabic, Urdu) indicated that the untrained listeners made complex judgments in describing, transcribing, and identifying phonetic, segmental, suprasegmental, and other impressionistic language details. (Author/CB)

  10. Effect of Blast Injury on Auditory Localization in Military Service Members.

    PubMed

    Kubli, Lina R; Brungart, Douglas; Northern, Jerry

    Among the many advantages of binaural hearing are the abilities to localize sounds in space and to attend to one sound in the presence of many sounds. Binaural hearing provides benefits for all listeners, but it may be especially critical for military personnel who must maintain situational awareness in complex tactical environments with multiple speech and noise sources. There is concern that Military Service Members who have been exposed to one or more high-intensity blasts during their tour of duty may have difficulty with binaural and spatial ability due to degradation in auditory and cognitive processes. The primary objective of this study was to assess the ability of blast-exposed Military Service Members to localize speech sounds in quiet and in multisource environments with one or two competing talkers. Participants were presented with one, two, or three topic-related (e.g., sports, food, travel) sentences under headphones and required to attend to, and then locate the source of, the sentence pertaining to a prespecified target topic within a virtual space. The listener's head position was monitored by a head-mounted tracking device that continuously updated the apparent spatial location of the target and competing speech sounds as the subject turned within the virtual space. Measurements of auditory localization ability included mean absolute error in locating the source of the target sentence, the time it took to locate the target sentence within 30 degrees, target/competitor confusion errors, response time, and cumulative head motion. Twenty-one blast-exposed Active-Duty or Veteran Military Service Members (blast-exposed group) and 33 non-blast-exposed Service Members and beneficiaries (control group) were evaluated. In general, the blast-exposed group performed as well as the control group if the task involved localizing the source of a single speech target. However, if the task involved two or three simultaneous talkers, localization ability was compromised for some participants in the blast-exposed group. Blast-exposed participants were less accurate in their localization responses and required more exploratory head movements to find the location of the target talker. Results suggest that blast-exposed participants have more difficulty than non-blast-exposed participants in localizing sounds in complex acoustic environments. This apparent deficit in spatial hearing ability highlights the need to develop new diagnostic tests using complex listening tasks that involve multiple sound sources that require speech segregation and comprehension.
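
    The accuracy measure reported here, mean absolute error in locating the target, needs circular wrap-around handling when target and response azimuths straddle 0/360 degrees. A minimal sketch with placeholder data:

    ```python
    import numpy as np

    # Mean absolute localization error with circular wrap-around: a response
    # of 350 deg to a target at 10 deg is a 20-deg error, not 340.

    def mean_abs_error(targets_deg, responses_deg):
        diff = (np.asarray(responses_deg, dtype=float)
                - np.asarray(targets_deg, dtype=float) + 180.0) % 360.0 - 180.0
        return np.mean(np.abs(diff))

    targets = [10, 90, 270]       # placeholder azimuths
    responses = [350, 100, 240]
    print(mean_abs_error(targets, responses))  # (20 + 10 + 30) / 3 = 20.0
    ```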

  11. Listening Differently: A Pedagogy for Expanded Listening

    ERIC Educational Resources Information Center

    Gallagher, Michael; Prior, Jonathan; Needham, Martin; Holmes, Rachel

    2017-01-01

    Mainstream education promotes a narrow conception of listening, centred on the reception and comprehension of human meanings. As such, it is ill-equipped to hear how sound propagates affects, generates atmospheres, shapes environments and enacts power. Yet these aspects of sound are vital to how education functions. We therefore argue that there…

  12. Impact of Noise Reduction Algorithm in Cochlear Implant Processing on Music Enjoyment.

    PubMed

    Kohlberg, Gavriel D; Mancuso, Dean M; Griffin, Brianna M; Spitzer, Jaclyn B; Lalwani, Anil K

    2016-06-01

    A noise reduction algorithm (NRA) in a speech processing strategy has a positive impact on speech perception among cochlear implant (CI) listeners. We sought to evaluate the effect of NRA on music enjoyment. Study design: prospective analysis of music enjoyment. Setting: academic medical center. Subjects: normal-hearing (NH) adults (N = 16) and CI listeners (N = 9). Main outcome measure: subjective rating of music excerpts. NH and CI listeners evaluated a country music piece on three enjoyment modalities: pleasantness, musicality, and naturalness. Participants listened to the original version and 20 modified, less complex versions created by including subsets of musical instruments from the original song. NH participants listened to the segments through CI simulation, and CI listeners listened to the segments with their usual speech processing strategy, with and without NRA. Decreasing the number of instruments was significantly associated with an increase in pleasantness and naturalness in both NH and CI subjects (p < 0.05). However, there was no difference in music enjoyment with or without NRA for either NH listeners with CI simulation or CI listeners across all three modalities of pleasantness, musicality, and naturalness (p > 0.05); this was true for the original and the modified music segments with one to three instruments (p > 0.05). NRA does not affect music enjoyment in CI listeners or in NH individuals listening through CI simulation. This suggests that strategies to enhance speech processing will not necessarily have a positive impact on music enjoyment. However, reducing the complexity of music shows promise in enhancing music enjoyment and should be further explored.

  13. Change deafness for real spatialized environmental scenes.

    PubMed

    Gaston, Jeremy; Dickerson, Kelly; Hipp, Daniel; Gerhardstein, Peter

    2017-01-01

    The everyday auditory environment is complex and dynamic; often, multiple sounds co-occur and compete for a listener's cognitive resources. 'Change deafness', framed as the auditory analog to the well-documented phenomenon of 'change blindness', describes the finding that changes presented within complex environments are often missed. The present study examines a number of stimulus factors that may influence change deafness under real-world listening conditions. Specifically, an AX (same-different) discrimination task was used to examine the effects of both spatial separation over a loudspeaker array and the type of change (sound source additions and removals) on discrimination of changes embedded in complex backgrounds. Results using signal detection theory and accuracy analyses indicated that, under most conditions, errors were significantly reduced for spatially distributed relative to non-spatial scenes. A second goal of the present study was to evaluate a possible link between memory for scene contents and change discrimination. Memory was evaluated by presenting a cued recall test following each trial of the discrimination task. Results using signal detection theory and accuracy analyses indicated that recall ability was similar in terms of accuracy, but there were reductions in sensitivity compared to previous reports. Finally, the present study used a large and representative sample of outdoor, urban, and environmental sounds, presented in unique combinations of nearly 1000 trials per participant. This enabled the exploration of the relationship between change perception and the perceptual similarity between change targets and background scene sounds. These (post hoc) analyses suggest both a categorical and a stimulus-level relationship between scene similarity and the magnitude of change errors.
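
    Sensitivity in an AX (same-different) task of this kind is typically summarized with d' from signal detection theory. A minimal sketch using the log-linear correction for extreme rates; the counts are placeholders, and the study's exact SDT model is not given in the abstract:

    ```python
    from scipy.stats import norm

    # d' for a same-different change-detection task, with the log-linear
    # correction so perfect hit or false-alarm rates stay finite.

    def dprime(hits, misses, fas, crs):
        hr = (hits + 0.5) / (hits + misses + 1.0)   # corrected hit rate
        far = (fas + 0.5) / (fas + crs + 1.0)       # corrected false-alarm rate
        return norm.ppf(hr) - norm.ppf(far)

    # Hypothetical single-listener counts: 100 change and 100 no-change trials.
    print(f"d' = {dprime(hits=80, misses=20, fas=30, crs=70):.2f}")
    ```

    Higher d' for spatially distributed scenes than for non-spatial ones is the signature of the spatial-separation benefit the authors report.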

  14. Tapping to a Slow Tempo in the Presence of Simple and Complex Meters Reveals Experience-Specific Biases for Processing Music

    PubMed Central

    Ullal-Gupta, Sangeeta; Hannon, Erin E.; Snyder, Joel S.

    2014-01-01

    Musical meters vary considerably across cultures, yet relatively little is known about how culture-specific experience influences metrical processing. In Experiment 1, we compared American and Indian listeners' synchronous tapping to slow sequences. Inter-tone intervals contained silence or to-be-ignored rhythms that were designed to induce a simple meter (familiar to Americans and Indians) or a complex meter (familiar only to Indians). A subset of trials contained an abrupt switch from one rhythm to another to assess the disruptive effects of contradicting the initially implied meter. In the unfilled condition, both groups tapped earlier than the target and showed large tap-tone asynchronies (measured in relative phase). When inter-tone intervals were filled with simple-meter rhythms, American listeners tapped later than targets, but their asynchronies were smaller and declined more rapidly. Likewise, asynchronies rose sharply following a switch away from simple-meter but not from complex-meter rhythm. By contrast, Indian listeners performed similarly across all rhythm types, with asynchronies rapidly declining over the course of complex- and simple-meter trials. For these listeners, a switch from either simple or complex meter increased asynchronies. Experiment 2 tested American listeners but doubled the duration of the synchronization phase prior to (and after) the switch. Here, compared with simple meters, complex-meter rhythms elicited larger asynchronies that declined at a slower rate, however, asynchronies increased after the switch for all conditions. Our results provide evidence that ease of meter processing depends to a great extent on the amount of experience with specific meters. PMID:25075514
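
    Tap-tone asynchrony measured in relative phase, as in this study, references each tap to its nearest target tone and normalizes by the inter-onset interval (IOI), so -90 degrees means tapping a quarter-cycle early. A minimal sketch with placeholder tap and tone times:

    ```python
    import numpy as np

    # Tap-tone asynchrony as relative phase: each tap is paired with its
    # nearest target tone and the offset is normalized by the IOI.

    def relative_phase_deg(tap_times, tone_times, ioi):
        tap_times = np.asarray(tap_times, dtype=float)
        tone_times = np.asarray(tone_times, dtype=float)
        nearest = tone_times[
            np.argmin(np.abs(tap_times[:, None] - tone_times), axis=1)
        ]
        return (tap_times - nearest) / ioi * 360.0

    taps = [0.95, 1.98, 3.05]     # placeholder tap times (s)
    tones = [1.0, 2.0, 3.0]       # slow tempo, IOI = 1.0 s
    print(relative_phase_deg(taps, tones, ioi=1.0))  # [-18., -7.2, 18.]
    ```

    Negative values capture the anticipatory tapping described for the unfilled condition, and the decline of their magnitude over trials is the asynchrony reduction the authors track.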

  15. Brain bases for auditory stimulus-driven figure-ground segregation.

    PubMed

    Teki, Sundeep; Chait, Maria; Kumar, Sukhbinder; von Kriegstein, Katharina; Griffiths, Timothy D

    2011-01-05

    Auditory figure-ground segregation, listeners' ability to selectively hear out a sound of interest from a background of competing sounds, is a fundamental aspect of scene analysis. In contrast to the disordered acoustic environment we experience during everyday listening, most studies of auditory segregation have used relatively simple, temporally regular signals. We developed a new figure-ground stimulus that incorporates stochastic variation of the figure and background that captures the rich spectrotemporal complexity of natural acoustic scenes. Figure and background signals overlap in spectrotemporal space, but vary in the statistics of fluctuation, such that the only way to extract the figure is by integrating the patterns over time and frequency. Our behavioral results demonstrate that human listeners are remarkably sensitive to the appearance of such figures. In a functional magnetic resonance imaging experiment, aimed at investigating preattentive, stimulus-driven, auditory segregation mechanisms, naive subjects listened to these stimuli while performing an irrelevant task. Results demonstrate significant activations in the intraparietal sulcus (IPS) and the superior temporal sulcus related to bottom-up, stimulus-driven figure-ground decomposition. We did not observe any significant activation in the primary auditory cortex. Our results support a role for automatic, bottom-up mechanisms in the IPS in mediating stimulus-driven, auditory figure-ground segregation, which is consistent with accumulating evidence implicating the IPS in structuring sensory input and perceptual organization.

  16. Speech perception for adult cochlear implant recipients in a realistic background noise: effectiveness of preprocessing strategies and external options for improving speech recognition in noise.

    PubMed

    Gifford, René H; Revit, Lawrence J

    2010-01-01

    Although cochlear implant patients are achieving increasingly higher levels of performance, speech perception in noise continues to be problematic. The newest generations of implant speech processors are equipped with preprocessing and/or external accessories that are purported to improve listening in noise. Most speech perception measures in the clinical setting, however, do not provide a close approximation to real-world listening environments. To assess speech perception for adult cochlear implant recipients in the presence of a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE) array in order to determine whether commercially available preprocessing strategies and/or external accessories yield improved sentence recognition in noise. Single-subject, repeated-measures design with two groups of participants: Advanced Bionics and Cochlear Corporation recipients. Thirty-four subjects, ranging in age from 18 to 90 yr (mean 54.5 yr), participated in this prospective study. Fourteen subjects were Advanced Bionics recipients, and 20 subjects were Cochlear Corporation recipients. Speech reception thresholds (SRTs) in semidiffuse restaurant noise originating from an eight-loudspeaker array were assessed with the subjects' preferred listening programs as well as with the addition of either Beam preprocessing (Cochlear Corporation) or the T-Mic accessory option (Advanced Bionics). In Experiment 1, adaptive SRTs with the Hearing in Noise Test sentences were obtained for all 34 subjects. For Cochlear Corporation recipients, SRTs were obtained with their preferred everyday listening program as well as with the addition of Focus preprocessing. For Advanced Bionics recipients, SRTs were obtained with the integrated behind-the-ear (BTE) mic as well as with the T-Mic. Statistical analysis using a repeated-measures analysis of variance (ANOVA) evaluated the effects of the preprocessing strategy or external accessory in reducing the SRT in noise. In addition, a standard t-test was run to evaluate effectiveness across manufacturer for improving the SRT in noise. In Experiment 2, 16 of the 20 Cochlear Corporation subjects were reassessed obtaining an SRT in noise using the manufacturer-suggested "Everyday," "Noise," and "Focus" preprocessing strategies. A repeated-measures ANOVA was employed to assess the effects of preprocessing. The primary findings were (i) both Noise and Focus preprocessing strategies (Cochlear Corporation) significantly improved the SRT in noise as compared to Everyday preprocessing, (ii) the T-Mic accessory option (Advanced Bionics) significantly improved the SRT as compared to the BTE mic, and (iii) Focus preprocessing and the T-Mic resulted in similar degrees of improvement that were not found to be significantly different from one another. Options available in current cochlear implant sound processors are able to significantly improve speech understanding in a realistic, semidiffuse noise with both Cochlear Corporation and Advanced Bionics systems. For Cochlear Corporation recipients, Focus preprocessing yields the best speech-recognition performance in a complex listening environment; however, it is recommended that Noise preprocessing be used as the new default for everyday listening environments to avoid the need for switching programs throughout the day. For Advanced Bionics recipients, the T-Mic offers significantly improved performance in noise and is recommended for everyday use in all listening environments.
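
    The adaptive SRTs described here come from a track in which the SNR falls after a correctly repeated sentence and rises after an error, so the track hovers around the 50% point, and the SRT is taken as an average over the adaptive trials. A hypothetical sketch; the step size, sentence count, and simulated listener below are placeholder assumptions, not the study's parameters:

    ```python
    import random

    # Sketch of a 1-down/1-up adaptive SNR track for sentence materials
    # (HINT-style). The SRT50 estimate is the mean SNR after the initial
    # approach trials are discarded.

    def run_srt(score_sentence, n_sentences=20, start_snr=0.0, step=2.0):
        """score_sentence(snr) -> True if the sentence was repeated correctly."""
        snr, track = start_snr, []
        for _ in range(n_sentences):
            track.append(snr)
            snr = snr - step if score_sentence(snr) else snr + step
        return sum(track[4:]) / len(track[4:])   # skip the first few sentences

    def demo_listener(snr):
        # Placeholder listener whose true SRT is near -6 dB SNR.
        return random.random() < 1.0 / (1.0 + 10.0 ** (-(snr + 6.0) / 4.0))

    print(f"estimated SRT50: {run_srt(demo_listener):.1f} dB SNR")
    ```

    Lower (more negative) SRTs indicate better performance, which is how the Beam/Focus and T-Mic benefits in this study are expressed.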

  17. Familiarity Overrides Complexity in Rhythm Perception: A Cross-Cultural Comparison of American and Turkish Listeners

    ERIC Educational Resources Information Center

    Hannon, Erin E.; Soley, Gaye; Ullal, Sangeeta

    2012-01-01

    Despite the ubiquity of dancing and synchronized movement to music, relatively few studies have examined cognitive representations of musical rhythm and meter among listeners from contrasting cultures. We aimed to disentangle the contributions of culture-general and culture-specific influences by examining American and Turkish listeners' detection…

  18. Mental Load in Listening, Speech Shadowing and Simultaneous Interpreting: A Pupillometric Study.

    ERIC Educational Resources Information Center

    Tommola, Jorma; Hyona, Jukka

    This study investigated the sensitivity of the pupillary response as an indicator of average mental load during three language processing tasks of varying complexity. The tasks included: (1) listening (without any subsequent comprehension testing); (2) speech shadowing (repeating a message in the same language while listening to it); and (3)…

  19. Text Characteristics of Task Input and Difficulty in Second Language Listening Comprehension

    ERIC Educational Resources Information Center

    Revesz, Andrea; Brunfaut, Tineke

    2013-01-01

    This study investigated the effects of a group of task factors on advanced English as a second language learners' actual and perceived listening performance. We examined whether the speed, linguistic complexity, and explicitness of the listening text along with characteristics of the text necessary for task completion influenced comprehension. We…

  20. Spatial Audio on the Web: Or Why Can't I hear Anything Over There?

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Schlickenmaier, Herbert (Technical Monitor); Johnson, Gerald (Technical Monitor); Frey, Mary Anne (Technical Monitor); Schneider, Victor S. (Technical Monitor); Ahumada, Albert J. (Technical Monitor)

    1997-01-01

    Auditory complexity, freedom of movement, and interactivity are not always possible in a "true" virtual environment, much less in web-based audio. However, many of the perceptual and engineering constraints (and frustrations) that researchers, engineers, and listeners have experienced in virtual audio are relevant to spatial audio on the web. My talk will discuss some of these engineering constraints and their perceptual consequences, and attempt to relate these issues to implementation on the web.

  1. The Evaluation of the Performed Activities According to ELVES Method

    ERIC Educational Resources Information Center

    Erdem, Aliye

    2018-01-01

    Listening habits play an important part in individuals' ability to perceive their environment and the world accurately and to fit into the social environment in which they live. This is because listening is an important skill that enables individuals to use the communication skills they have learned, both at school and out of school, properly and to understand…

  2. "I Want to Listen to My Students' Lives": Developing an Ecological Perspective in Learning to Teach

    ERIC Educational Resources Information Center

    Cook-Sather, Alison; Curl, Heather

    2014-01-01

    Preparing teachers who want to "listen to their students' lives" requires creating opportunities for prospective teachers to perceive and learn about their students' lives and how those unfold within and as part of complex systems. That means supporting prospective teachers not only in understanding students as complex beings who have to…

  3. Perception of Simultaneous Auditive Contents

    NASA Astrophysics Data System (ADS)

    Tschinkel, Christian

    Based on a model of pluralistic music, we may approach an aesthetic concept of music that employs dichotic listening situations. The concept of dichotic listening stems from neuropsychological test conditions in lateralization experiments on the brain hemispheres, in which each ear is exposed to different auditory content. In the framework of such sound experiments, the question that primarily arises concerns a new kind of hearing, one that is also conceivable without earphones as a spatial composition and that may superficially be linked to its degree of complexity. From a psychological perspective, the degree of complexity is correlated with the degree of attention given, with the listener's musical or listening experience, and with his or her level of appreciation. Therefore, we may possibly also expect a measurable increase in physical activity. Furthermore, a dialectic interpretation of such "dualistic" music presents itself.

  4. CALL--Enhanced L2 Listening Skills--Aiming for Automatization in a Multimedia Environment

    ERIC Educational Resources Information Center

    Mayor, Maria Jesus Blasco

    2009-01-01

    Computer Assisted Language Learning (CALL) and L2 listening comprehension skill training are bound together for good. A neglected macroskill for decades, developing listening comprehension skill is now considered crucial for L2 acquisition. Thus this paper makes an attempt to offer latest information on processing theories and L2 listening…

  5. Social Connectedness and Perceived Listening Effort in Adult Cochlear Implant Users: A Grounded Theory to Establish Content Validity for a New Patient-Reported Outcome Measure.

    PubMed

    Hughes, Sarah E; Hutchings, Hayley A; Rapport, Frances L; McMahon, Catherine M; Boisvert, Isabelle

    2018-02-08

    Individuals with hearing loss often report a need for increased effort when listening, particularly in challenging acoustic environments. Despite audiologists' recognition of the impact of listening effort on individuals' quality of life, there are currently no standardized clinical measures of listening effort, including patient-reported outcome measures (PROMs). To generate items and content for a new PROM, this qualitative study explored the perceptions, understanding, and experiences of listening effort in adults with severe-profound sensorineural hearing loss before and after cochlear implantation. Three focus groups (1 to 3) were conducted. Purposive sampling was used to recruit 17 participants from a cochlear implant (CI) center in the United Kingdom. The participants included adults (n = 15, mean age = 64.1 years, range 42 to 84 years) with acquired severe-profound sensorineural hearing loss who satisfied the UK's national candidacy criteria for cochlear implantation and their normal-hearing significant others (n = 2). Participants were CI candidates who used hearing aids (HAs) and were awaiting CI surgery or CI recipients who used a unilateral CI or a CI and contralateral HA (CI + HA). Data from a pilot focus group conducted with 2 CI recipients were included in the analysis. The data, verbatim transcripts of the focus group proceedings, were analyzed qualitatively using constructivist grounded theory (GT) methodology. A GT of listening effort in cochlear implantation was developed from participants' accounts. The participants provided rich, nuanced descriptions of the complex and multidimensional nature of their listening effort. Interpreting and integrating these descriptions through GT methodology, listening effort was described as the mental energy required to attend to and process the auditory signal, as well as the effort required to adapt to, and compensate for, a hearing loss. Analyses also suggested that listening effort for most participants was motivated by a need to maintain a sense of social connectedness (i.e., the subjective awareness of being in touch with one's social world). Before implantation, low social connectedness in the presence of high listening effort encouraged self-alienating behaviors and resulted in social isolation with adverse effects for participants' well-being and quality of life. A CI moderated but did not remove the requirement for listening effort. Listening effort, in combination with the improved auditory signal supplied by the CI, enabled most participants to listen and communicate more effectively. These participants reported a restored sense of social connectedness and an acceptance of the continued need for listening effort. Social connectedness, effort-reward balance, and listening effort as a multidimensional phenomenon were the core constructs identified as important to participants' experiences and understanding of listening effort. The study's findings suggest: (1) perceived listening effort is related to social and psychological factors and (2) these factors may influence how individuals with hearing loss report on the actual cognitive processing demands of listening. These findings provide evidence in support of the Framework for Understanding Effortful Listening, a heuristic that describes listening effort as a function of both motivation and demands on cognitive capacity. This GT will inform item development and establish the content validity for a new PROM for measuring listening effort.

  6. Attentional and Contextual Priors in Sound Perception.

    PubMed

    Wolmetz, Michael; Elhilali, Mounya

    2016-01-01

    Behavioral and neural studies of selective attention have consistently demonstrated that explicit attentional cues to particular perceptual features profoundly alter perception and performance. The statistics of the sensory environment can also provide cues about what perceptual features to expect, but the extent to which these more implicit contextual cues impact perception and performance, as well as their relationship to explicit attentional cues, is not well understood. In this study, the explicit cues, or attentional prior probabilities, and the implicit cues, or contextual prior probabilities, associated with different acoustic frequencies in a detection task were simultaneously manipulated. Both attentional and contextual priors had similarly large but independent impacts on sound detectability, with evidence that listeners tracked and used contextual priors for a variety of sound classes (pure tones, harmonic complexes, and vowels). Further analyses showed that listeners updated their contextual priors rapidly and optimally, given the changing acoustic frequency statistics inherent in the paradigm. A Bayesian Observer model accounted for both attentional and contextual adaptations found with listeners. These results bolster the interpretation of perception as Bayesian inference, and suggest that some effects attributed to selective attention may be a special case of contextual prior integration along a feature axis.
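
    The abstract reports that listeners tracked changing frequency statistics rapidly and optimally. The sketch below is not the authors' Bayesian Observer model; it is a minimal exponential-forgetting approximation showing how a contextual prior over candidate target frequencies can follow the presentation statistics trial by trial, with the learning rate as an assumed free parameter.

```python
import numpy as np

def update_prior(prior, observed_band, alpha=0.2):
    """One trial's update of the contextual prior over candidate
    target frequencies: move probability mass toward the band that
    just occurred (exponential forgetting with learning rate alpha,
    a simplified stand-in for a full Bayesian Observer update)."""
    likelihood = np.zeros_like(prior)
    likelihood[observed_band] = 1.0
    posterior = (1.0 - alpha) * prior + alpha * likelihood
    return posterior / posterior.sum()

# Three candidate bands, initially equiprobable; the context then
# presents band 0 on 80% of trials, and the prior tracks it.
rng = np.random.default_rng(1)
prior = np.ones(3) / 3.0
for _ in range(100):
    prior = update_prior(prior, rng.choice(3, p=[0.8, 0.1, 0.1]))
print(np.round(prior, 2))   # converges toward [0.8, 0.1, 0.1]
```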

  7. Attentional and Contextual Priors in Sound Perception

    PubMed Central

    Wolmetz, Michael; Elhilali, Mounya

    2016-01-01

    Behavioral and neural studies of selective attention have consistently demonstrated that explicit attentional cues to particular perceptual features profoundly alter perception and performance. The statistics of the sensory environment can also provide cues about what perceptual features to expect, but the extent to which these more implicit contextual cues impact perception and performance, as well as their relationship to explicit attentional cues, is not well understood. In this study, the explicit cues, or attentional prior probabilities, and the implicit cues, or contextual prior probabilities, associated with different acoustic frequencies in a detection task were simultaneously manipulated. Both attentional and contextual priors had similarly large but independent impacts on sound detectability, with evidence that listeners tracked and used contextual priors for a variety of sound classes (pure tones, harmonic complexes, and vowels). Further analyses showed that listeners updated their contextual priors rapidly and optimally, given the changing acoustic frequency statistics inherent in the paradigm. A Bayesian Observer model accounted for both attentional and contextual adaptations found with listeners. These results bolster the interpretation of perception as Bayesian inference, and suggest that some effects attributed to selective attention may be a special case of contextual prior integration along a feature axis. PMID:26882228

  8. Effect of noise spectra and a listening task upon passenger annoyance in a helicopter interior noise environment

    NASA Technical Reports Server (NTRS)

    Clevenson, S. A.; Leatherwood, J. D.

    1979-01-01

    The effects of helicopter interior noise on passenger annoyance were studied. Both reverie and listening situations were studied as well as the relative effectiveness of several descriptors (i.e., overall sound pressure level, A-weighted sound pressure level, and speech interference level) for quantifying annoyance response for these situations. The noise stimuli were based upon recordings of the interior noise of a civil helicopter research aircraft. These noises were presented at levels ranging from approximately 68 to 86 dB(A) with various gear clash tones selectively attenuated to give a range of spectra. Results indicated that annoyance during a listening condition is generally higher than annoyance during a reverie condition for corresponding interior noise environments. Attenuation of the planetary gear clash tone results in increases in listening performance but has negligible effect upon annoyance for a given noise level. The noise descriptor most effective for estimating annoyance response under conditions of reverie and listening situations is shown to be the A-weighted sound pressure level.
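
    The A-weighted sound pressure level singled out by this study applies standard frequency-dependent corrections to the band levels before energy summation, which discounts the low-frequency energy that dominates helicopter interiors. A minimal sketch follows, assuming octave-band input; the cabin spectrum in the example is hypothetical.

```python
import numpy as np

# A-weighting corrections (dB) at octave centres 63 Hz ... 8 kHz.
A_WEIGHTS = np.array([-26.2, -16.1, -8.6, -3.2, 0.0, 1.2, 1.0, -1.1])

def a_weighted_level(band_levels_db):
    """Overall A-weighted level: add the A-curve correction to each
    octave-band level, then sum on an energy (power) basis."""
    weighted = np.asarray(band_levels_db) + A_WEIGHTS
    return 10.0 * np.log10(np.sum(10.0 ** (weighted / 10.0)))

# Hypothetical, low-frequency-dominated helicopter-interior spectrum;
# the A-weighting heavily discounts the lowest bands.
cabin_db = [95.0, 90.0, 86.0, 82.0, 78.0, 74.0, 70.0, 65.0]
print(f"{a_weighted_level(cabin_db):.1f} dB(A)")
```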

  9. From Research to the General Music Classroom

    ERIC Educational Resources Information Center

    Madsen, Clifford K.

    2011-01-01

    One challenge for music educators is to find techniques to help students "listen across time" to the examples they are assigned to study and to stay focused on a piece as they listen. Measurement tools to assess music listening have a long history, ranging from very simple to very complex, and very dated to very recent. This article traces the…

  10. The Effectiveness of Multimedia Application on Students Listening Comprehension

    ERIC Educational Resources Information Center

    Pangaribuan, Tagor; Sinaga, Andromeda; Sipayung, Kammer Tuahman

    2017-01-01

    Listening comprehension is a complex skill, particularly when it must be mastered by non-native speakers. This research aimed at finding out the effect of multimedia application on students' listening. The research design was experimental, with a t-test. The population was sixth-semester students of HKBP Nommensen University in the academic year 2016/2017,…

  11. Speech intelligibility index predictions for young and old listeners in automobile noise: Can the index be improved by incorporating factors other than absolute threshold?

    NASA Astrophysics Data System (ADS)

    Saweikis, Meghan; Surprenant, Aimée M.; Davies, Patricia; Gallant, Don

    2003-10-01

    While young and old subjects with comparable audiograms tend to perform comparably on speech recognition tasks in quiet environments, the older subjects have more difficulty than the younger subjects with recognition tasks in degraded listening conditions. This suggests that factors other than an absolute threshold may account for some of the difficulty older listeners have on recognition tasks in noisy environments. Many metrics, including the Speech Intelligibility Index (SII), used to measure speech intelligibility, only consider an absolute threshold when accounting for age-related hearing loss. Therefore these metrics tend to overestimate the performance for elderly listeners in noisy environments [Tobias et al., J. Acoust. Soc. Am. 83, 859-895 (1988)]. The present studies examine the predictive capabilities of the SII in an environment with automobile noise present. This is of interest because people's evaluation of the automobile interior sound is closely linked to their ability to carry on conversations with their fellow passengers. The four studies examine whether, for subjects with age-related hearing loss, the accuracy of the SII can be improved by incorporating factors other than an absolute threshold into the model. [Work supported by Ford Motor Company.]
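
    The critique here is that the standard SII represents hearing loss only through the absolute threshold, which enters as part of the effective masker in each band. The sketch below shows that band-importance-weighted audibility computation in simplified form (no spread-of-masking or level-distortion terms); all band levels and importance weights are illustrative assumptions.

```python
import numpy as np

def sii(speech_db, noise_db, threshold_db, importance):
    """Simplified SII: per band, the effective masker is the louder of
    the external noise and the listener's absolute threshold (the only
    hearing-loss term here); audibility is the clipped fraction of the
    nominal 30-dB speech dynamic range above that masker, and the index
    is the importance-weighted sum of band audibilities."""
    masker = np.maximum(noise_db, threshold_db)
    audibility = np.clip((np.asarray(speech_db) - masker + 15.0) / 30.0, 0.0, 1.0)
    return float(np.sum(importance * audibility))

# Hypothetical five-band example in automobile (road) noise.
speech = np.array([50.0, 55.0, 52.0, 48.0, 42.0])           # dB SPL
noise = np.array([62.0, 58.0, 48.0, 40.0, 35.0])            # low-frequency heavy
thr_young = np.zeros(5)                                      # near-0 dB thresholds
thr_old = np.array([10.0, 10.0, 15.0, 30.0, 45.0])           # sloping loss
imp = np.array([0.10, 0.20, 0.30, 0.25, 0.15])               # sums to 1
print(sii(speech, noise, thr_young, imp), sii(speech, noise, thr_old, imp))
```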

  12. The benefits of remote microphone technology for adults with cochlear implants.

    PubMed

    Fitzpatrick, Elizabeth M; Séguin, Christiane; Schramm, David R; Armstrong, Shelly; Chénier, Josée

    2009-10-01

    Cochlear implantation has become a standard practice for adults with severe to profound hearing loss who demonstrate limited benefit from hearing aids. Despite the substantial auditory benefits provided by cochlear implants, many adults experience difficulty understanding speech in noisy environments and in other challenging listening conditions such as television. Remote microphone technology may provide some benefit in these situations; however, little is known about whether these systems are effective in improving speech understanding in difficult acoustic environments for this population. This study was undertaken with adult cochlear implant recipients to assess the potential benefits of remote microphone technology. The objectives were to examine the measurable and perceived benefit of remote microphone devices during television viewing and to assess the benefits of a frequency-modulated system for speech understanding in noise. Fifteen adult unilateral cochlear implant users were fit with remote microphone devices in a clinical environment. The study used a combination of direct measurements and patient perceptions to assess speech understanding with and without remote microphone technology. The direct measures involved a within-subject repeated-measures design. Direct measures of patients' speech understanding during television viewing were collected using their cochlear implant alone and with their implant device coupled to an assistive listening device. Questionnaires were administered to document patients' perceptions of benefits during the television-listening tasks. Speech recognition tests of open-set sentences in noise with and without remote microphone technology were also administered. Participants showed improved speech understanding for television listening when using remote microphone devices coupled to their cochlear implant compared with a cochlear implant alone. This benefit was documented both when listening to news and talk show recordings. Questionnaire results also showed statistically significant differences between listening with a cochlear implant alone and listening with a remote microphone device. Participants judged that remote microphone technology provided them with better comprehension, more confidence, and greater ease of listening. Use of a frequency-modulated system coupled to a cochlear implant also showed significant improvement over a cochlear implant alone for open-set sentence recognition in +10 and +5 dB signal to noise ratios. Benefits were measured during remote microphone use in focused-listening situations in a clinical setting, for both television viewing and speech understanding in noise in the audiometric sound suite. The results suggest that adult cochlear implant users should be counseled regarding the potential for enhanced speech understanding in difficult listening environments through the use of remote microphone technology.

  13. Using auditory pre-information to solve the cocktail-party problem: electrophysiological evidence for age-specific differences.

    PubMed

    Getzmann, Stephan; Lewald, Jörg; Falkenstein, Michael

    2014-01-01

    Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called "cocktail-party" problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments.

  14. Using auditory pre-information to solve the cocktail-party problem: electrophysiological evidence for age-specific differences

    PubMed Central

    Getzmann, Stephan; Lewald, Jörg; Falkenstein, Michael

    2014-01-01

    Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called “cocktail-party” problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments. PMID:25540608

  15. The Use of Help Options in Multimedia Listening Environments to Aid Language Learning: A Review

    ERIC Educational Resources Information Center

    Mohsen, Mohammed Ali

    2016-01-01

    This paper provides a comprehensive review on the use of help options (HOs) in the multimedia listening context to aid listening comprehension (LC) and improve incidental vocabulary learning. The paper also aims to synthesize the research findings obtained from the use of HOs in Computer-Assisted Language Learning (CALL) literature and reveals the…

  16. Binaural model-based dynamic-range compression.

    PubMed

    Ernst, Stephan M A; Kortlang, Steffen; Grimm, Giso; Bisitz, Thomas; Kollmeier, Birger; Ewert, Stephan D

    2018-01-26

    Binaural cues such as interaural level differences (ILDs) are used to organise auditory perception and to segregate sound sources in complex acoustical environments. In bilaterally fitted hearing aids, dynamic-range compression operating independently at each ear potentially alters these ILDs, thus distorting binaural perception and sound source segregation. A binaurally-linked model-based fast-acting dynamic compression algorithm designed to approximate the normal-hearing basilar membrane (BM) input-output function in hearing-impaired listeners is suggested. A multi-center evaluation in comparison with an alternative binaural and two bilateral fittings was performed to assess the effect of binaural synchronisation on (a) speech intelligibility and (b) perceived quality in realistic conditions. Thirty and 12 hearing-impaired (HI) listeners, respectively, were aided individually with the algorithms for the two experimental parts. A small preference towards the proposed model-based algorithm in the direct quality comparison was found. However, no benefit of binaural synchronisation regarding speech intelligibility was found, suggesting a dominant role of the better ear in all experimental conditions. The suggested binaural synchronisation of compression algorithms showed a limited effect on the tested outcome measures; however, linking could be situationally beneficial to preserve a natural binaural perception of the acoustical environment.
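
    The linking idea is that both ears share one compression gain, so fast gain changes cannot erode the ILDs listeners use for segregation. A minimal single-band sketch of that principle follows, with the shared gain driven by the louder ear's envelope; the published algorithm is multi-band and models the basilar-membrane input-output function, and every parameter below is an assumption.

```python
import numpy as np

def linked_compress(left, right, fs, ratio=3.0, thresh_db=-40.0, tau_s=0.005):
    """Single-band, fast-acting compressor with binaural linking: one
    shared gain, computed from the louder ear's smoothed envelope, is
    applied to both ears, so interaural level differences survive
    compression unchanged."""
    alpha = np.exp(-1.0 / (tau_s * fs))           # one-pole envelope smoother
    env = 0.0
    gains = np.empty(len(left))
    for i, x in enumerate(np.maximum(np.abs(left), np.abs(right))):
        env = alpha * env + (1.0 - alpha) * x
        level_db = 20.0 * np.log10(max(env, 1e-9))
        over_db = max(level_db - thresh_db, 0.0)  # dB above threshold
        gains[i] = 10.0 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)
    return left * gains, right * gains

# Example: a tone 10 dB more intense at the left ear keeps its 10-dB
# ILD after linked compression (independent per-ear compressors would
# shrink it).
fs = 16000
t = np.arange(fs) / fs
sig_l = 0.5 * np.sin(2 * np.pi * 500 * t)
sig_r = 0.158 * np.sin(2 * np.pi * 500 * t)       # ~10 dB below the left ear
out_l, out_r = linked_compress(sig_l, sig_r, fs)
```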

  17. Acoustic and perceptual effects of magnifying interaural difference cues in a simulated "binaural" hearing aid.

    PubMed

    de Taillez, Tobias; Grimm, Giso; Kollmeier, Birger; Neher, Tobias

    2018-06-01

    To investigate the influence of an algorithm designed to enhance or magnify interaural difference cues on speech signals in noisy, spatially complex conditions using both technical and perceptual measurements. To also investigate the combination of interaural magnification (IM), monaural microphone directionality (DIR), and binaural coherence-based noise reduction (BC). Speech-in-noise stimuli were generated using virtual acoustics. A computational model of binaural hearing was used to analyse the spatial effects of IM. Predicted speech quality changes and signal-to-noise-ratio (SNR) improvements were also considered. Additionally, a listening test was carried out to assess speech intelligibility and quality. Listeners aged 65-79 years with and without sensorineural hearing loss (N = 10 each). IM increased the horizontal separation of concurrent directional sound sources without introducing any major artefacts. In situations with diffuse noise, however, the interaural difference cues were distorted. Preprocessing the binaural input signals with DIR reduced distortion. IM influenced neither speech intelligibility nor speech quality. The IM algorithm tested here failed to improve speech perception in noise, probably because of the dispersion and inconsistent magnification of interaural difference cues in complex environments.
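
    For the level cue alone, interaural magnification can be sketched as scaling each time-frequency bin's ILD by a factor greater than one. The code below makes that concrete under assumed STFT parameters; the algorithm evaluated in the study also manipulates timing cues and includes artifact control, both omitted here.

```python
import numpy as np
from scipy.signal import stft, istft

def magnify_ild(left, right, fs, factor=2.0, nperseg=512):
    """Level-cue-only interaural magnification: in every
    time-frequency bin, scale the interaural level difference (ILD)
    by `factor`, applying half of the extra difference to each ear
    with opposite signs so the binaural sum level is roughly kept."""
    _, _, spec_l = stft(left, fs, nperseg=nperseg)
    _, _, spec_r = stft(right, fs, nperseg=nperseg)
    eps = 1e-12
    ild_db = 20.0 * np.log10((np.abs(spec_l) + eps) / (np.abs(spec_r) + eps))
    half_extra_db = (factor - 1.0) * ild_db / 2.0
    spec_l *= 10.0 ** (+half_extra_db / 20.0)
    spec_r *= 10.0 ** (-half_extra_db / 20.0)
    return istft(spec_l, fs)[1], istft(spec_r, fs)[1]
```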

  18. Fractal structure enables temporal prediction in music.

    PubMed

    Rankin, Summer K; Fink, Philip W; Large, Edward W

    2014-10-01

    1/f serial correlations and statistical self-similarity (fractal structure) have been measured in various dimensions of musical compositions. Musical performances also display 1/f properties in expressive tempo fluctuations, and listeners predict tempo changes when synchronizing. Here the authors show that the 1/f structure is sufficient for listeners to predict the onset times of upcoming musical events. These results reveal what information listeners use to anticipate events in complex, non-isochronous acoustic rhythms, and this will entail innovative models of temporal synchronization. This finding could improve therapies for Parkinson's and related disorders and inform deeper understanding of how endogenous neural rhythms anticipate events in complex, temporally structured communication signals.
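
    To see what 1/f temporal structure looks like, the sketch below synthesizes a 1/f-fluctuating tempo curve by spectral shaping and converts it to onset times. The nominal tempo and fluctuation depth are arbitrary assumptions, not the study's stimuli.

```python
import numpy as np

def one_over_f(n, beta=1.0, rng=None):
    """Synthesize a 1/f^beta series by spectral shaping: white-noise
    Fourier phases with amplitudes proportional to f^(-beta/2), so
    power falls off as f^(-beta)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)
    spectrum = amp * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, len(freqs)))
    x = np.fft.irfft(spectrum, n)
    return x / x.std()

# Expressive tempo curve: nominal 500-ms inter-onset intervals with
# ~5% long-range-correlated (1/f) fluctuation; this serial structure
# is what would let a listener anticipate upcoming onset times.
iois_ms = 500.0 * (1.0 + 0.05 * one_over_f(256))
onsets_ms = np.cumsum(iois_ms)
```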

  19. Speech Perception for Adult Cochlear Implant Recipients in a Realistic Background Noise: Effectiveness of Preprocessing Strategies and External Options for Improving Speech Recognition in Noise

    PubMed Central

    Gifford, René H.; Revit, Lawrence J.

    2014-01-01

    Background Although cochlear implant patients are achieving increasingly higher levels of performance, speech perception in noise continues to be problematic. The newest generations of implant speech processors are equipped with preprocessing and/or external accessories that are purported to improve listening in noise. Most speech perception measures in the clinical setting, however, do not provide a close approximation to real-world listening environments. Purpose To assess speech perception for adult cochlear implant recipients in the presence of a realistic restaurant simulation generated by an eight-loudspeaker (R-SPACE™) array in order to determine whether commercially available preprocessing strategies and/or external accessories yield improved sentence recognition in noise. Research Design Single-subject, repeated-measures design with two groups of participants: Advanced Bionics and Cochlear Corporation recipients. Study Sample Thirty-four subjects, ranging in age from 18 to 90 yr (mean 54.5 yr), participated in this prospective study. Fourteen subjects were Advanced Bionics recipients, and 20 subjects were Cochlear Corporation recipients. Intervention Speech reception thresholds (SRTs) in semidiffuse restaurant noise originating from an eight-loudspeaker array were assessed with the subjects’ preferred listening programs as well as with the addition of either Beam™ preprocessing (Cochlear Corporation) or the T-Mic® accessory option (Advanced Bionics). Data Collection and Analysis In Experiment 1, adaptive SRTs with the Hearing in Noise Test sentences were obtained for all 34 subjects. For Cochlear Corporation recipients, SRTs were obtained with their preferred everyday listening program as well as with the addition of Focus preprocessing. For Advanced Bionics recipients, SRTs were obtained with the integrated behind-the-ear (BTE) mic as well as with the T-Mic. Statistical analysis using a repeated-measures analysis of variance (ANOVA) evaluated the effects of the preprocessing strategy or external accessory in reducing the SRT in noise. In addition, a standard t-test was run to evaluate effectiveness across manufacturer for improving the SRT in noise. In Experiment 2, 16 of the 20 Cochlear Corporation subjects were reassessed obtaining an SRT in noise using the manufacturer-suggested “Everyday,” “Noise,” and “Focus” preprocessing strategies. A repeated-measures ANOVA was employed to assess the effects of preprocessing. Results The primary findings were (i) both Noise and Focus preprocessing strategies (Cochlear Corporation) significantly improved the SRT in noise as compared to Everyday preprocessing, (ii) the T-Mic accessory option (Advanced Bionics) significantly improved the SRT as compared to the BTE mic, and (iii) Focus preprocessing and the T-Mic resulted in similar degrees of improvement that were not found to be significantly different from one another. Conclusion Options available in current cochlear implant sound processors are able to significantly improve speech understanding in a realistic, semidiffuse noise with both Cochlear Corporation and Advanced Bionics systems. For Cochlear Corporation recipients, Focus preprocessing yields the best speech-recognition performance in a complex listening environment; however, it is recommended that Noise preprocessing be used as the new default for everyday listening environments to avoid the need for switching programs throughout the day. 
For Advanced Bionics recipients, the T-Mic offers significantly improved performance in noise and is recommended for everyday use in all listening environments. PMID:20807480

  20. Sound Fields in Complex Listening Environments

    PubMed Central

    2011-01-01

    The conditions of sound fields used in research, especially testing and fitting of hearing aids, are usually simplified or reduced to fundamental physical fields, such as the free or the diffuse sound field. The concepts of such ideal conditions are easily introduced in theoretical and experimental investigations and in models for directional microphones, for example. When it comes to real-world application of hearing aids, however, the field conditions are more complex with regard to specific stationary and transient properties in room transfer functions and the corresponding impulse responses and binaural parameters. Sound fields can be categorized into outdoor (rural and urban) and indoor environments. Furthermore, sound fields in closed spaces of various sizes and shapes and in situations of transport in vehicles, trains, and aircraft are compared with regard to the binaural signals. In laboratory tests, sources of uncertainty are individual differences in binaural cues and insufficiently controlled sound field conditions. Furthermore, laboratory sound fields do not cover the variety of complex sound environments. Spatial audio formats such as higher-order ambisonics are candidates for sound field references not only in room acoustics and audio engineering but also in audiology. PMID:21676999

  1. Listening with a foreign-accent: The interlanguage speech intelligibility benefit in Mandarin speakers of English

    PubMed Central

    Xie, Xin; Fowler, Carol A.

    2013-01-01

    This study examined the intelligibility of native and Mandarin-accented English speech for native English and native Mandarin listeners. In the latter group, it also examined the role of the language environment and English proficiency. Three groups of listeners were tested: native English listeners (NE), Mandarin-speaking Chinese listeners in the US (M-US) and Mandarin listeners in Beijing, China (M-BJ). As a group, M-US and M-BJ listeners were matched on English proficiency and age of acquisition. A nonword transcription task was used. Identification accuracy for word-final stops in the nonwords established two independent interlanguage intelligibility effects. An interlanguage speech intelligibility benefit for listeners (ISIB-L) was manifest by both groups of Mandarin listeners outperforming native English listeners in identification of Mandarin-accented speech. In the benefit for talkers (ISIB-T), only M-BJ listeners were more accurate identifying Mandarin-accented speech than native English speech. Thus, both Mandarin groups demonstrated an ISIB-L while only the M-BJ group overall demonstrated an ISIB-T. The English proficiency of listeners was found to modulate the magnitude of the ISIB-T in both groups. Regression analyses also suggested that the listener groups differ in their use of acoustic information to identify voicing in stop consonants. PMID:24293741

  2. The Impact of Frequency Modulation (FM) System Use and Caregiver Training on Young Children with Hearing Impairment in a Noisy Listening Environment

    ERIC Educational Resources Information Center

    Nguyen, Huong Thi Thien

    2011-01-01

    The two objectives of this single-subject study were to assess how FM system use impacts parent-child interaction in a noisy listening environment and how parent/caregiver training affects the interaction between parent/caregiver and child. Two 5-year-old children with hearing loss and their parent/caregiver participated. Experiment 1 was…

  3. Observer weighting strategies in interaural time-difference discrimination and monaural level discrimination for a multi-tone complex

    NASA Astrophysics Data System (ADS)

    Dye, Raymond H.; Stellmack, Mark A.; Jurcin, Noah F.

    2005-05-01

    Two experiments measured listeners' abilities to weight information from different components in a complex of 553, 753, and 953 Hz. The goal was to determine whether or not the ability to adjust perceptual weights generalized across tasks. Weights were measured by binary logistic regression between stimulus values that were sampled from Gaussian distributions and listeners' responses. The first task was interaural time discrimination in which listeners judged the laterality of the target component. The second task was monaural level discrimination in which listeners indicated whether the level of the target component decreased or increased across two intervals. For both experiments, each of the three components served as the target. Ten listeners participated in both experiments. The results showed that those individuals who adjusted perceptual weights in the interaural time experiment could also do so in the monaural level discrimination task. The fact that the same individuals appeared to be analytic in both tasks is an indication that the weights measure the ability to attend to a particular region of the spectrum while ignoring other spectral regions.
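
    The weighting analysis described above can be reproduced in miniature: perturb each component independently across trials, record binary responses, and take normalized logistic-regression coefficients as the perceptual weights, as in the hypothetical simulation below.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Miniature version of the weighting analysis: each trial perturbs the
# 553-, 753-, and 953-Hz components independently (Gaussian values),
# a simulated listener responds mostly on the 753-Hz target, and the
# normalized logistic-regression coefficients recover the weights.
# All numbers are illustrative assumptions.
rng = np.random.default_rng(2)
n_trials = 2000
values = rng.normal(0.0, 1.0, (n_trials, 3))      # per-component cue values
true_w = np.array([0.15, 0.70, 0.15])             # an "analytic" listener
responses = values @ true_w + rng.normal(0.0, 0.5, n_trials) > 0.0

model = LogisticRegression().fit(values, responses)
weights = model.coef_[0] / np.abs(model.coef_[0]).sum()
print(np.round(weights, 2))                       # close to [0.15, 0.70, 0.15]
```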

  4. Effects of sensorineural hearing loss on visually guided attention in a multitalker environment.

    PubMed

    Best, Virginia; Marrone, Nicole; Mason, Christine R; Kidd, Gerald; Shinn-Cunningham, Barbara G

    2009-03-01

    This study asked whether or not listeners with sensorineural hearing loss have an impaired ability to use top-down attention to enhance speech intelligibility in the presence of interfering talkers. Listeners were presented with a target string of spoken digits embedded in a mixture of five spatially separated speech streams. The benefit of providing simple visual cues indicating when and/or where the target would occur was measured in listeners with hearing loss, listeners with normal hearing, and a control group of listeners with normal hearing who were tested at a lower target-to-masker ratio to equate their baseline (no cue) performance with the hearing-loss group. All groups received robust benefits from the visual cues. The magnitude of the spatial-cue benefit, however, was significantly smaller in listeners with hearing loss. Results suggest that reduced utility of selective attention for resolving competition between simultaneous sounds contributes to the communication difficulties experienced by listeners with hearing loss in everyday listening situations.

  5. Attentional Capacity Limits Gap Detection during Concurrent Sound Segregation.

    PubMed

    Leung, Ada W S; Jolicoeur, Pierre; Alain, Claude

    2015-11-01

    Detecting a brief silent interval (i.e., a gap) is more difficult when listeners perceive two concurrent sounds rather than one in a sound containing a mistuned harmonic in otherwise in-tune harmonics. This impairment in gap detection may reflect the interaction of low-level encoding or the division of attention between two sound objects, both of which could interfere with signal detection. To distinguish between these two alternatives, we compared ERPs during active and passive listening with complex harmonic tones that could include a gap, a mistuned harmonic, both features, or neither. During active listening, participants indicated whether they heard a gap irrespective of mistuning. During passive listening, participants watched a subtitled muted movie of their choice while the same sounds were presented. Gap detection was impaired when the complex sounds included a mistuned harmonic that popped out as a separate object. The ERP analysis revealed an early gap-related activity that was little affected by mistuning during the active or passive listening condition. However, during active listening, there was a marked decrease in the late positive wave that was thought to index attention and response-related processes. These results suggest that the limitation in detecting the gap is related to attentional processing, possibly divided attention induced by the concurrent sound objects, rather than deficits in preattentional sensory encoding.

  6. Looking Behavior and Audiovisual Speech Understanding in Children With Normal Hearing and Children With Mild Bilateral or Unilateral Hearing Loss.

    PubMed

    Lewis, Dawna E; Smith, Nicholas A; Spalding, Jody L; Valente, Daniel L

    Visual information from talkers facilitates speech intelligibility for listeners when audibility is challenged by environmental noise and hearing loss. Less is known about how listeners actively process and attend to visual information from different talkers in complex multi-talker environments. This study tracked looking behavior in children with normal hearing (NH), mild bilateral hearing loss (MBHL), and unilateral hearing loss (UHL) in a complex multi-talker environment to examine the extent to which children look at talkers and whether looking patterns relate to performance on a speech-understanding task. It was hypothesized that performance would decrease as perceptual complexity increased and that children with hearing loss would perform more poorly than their peers with NH. Children with MBHL or UHL were expected to demonstrate greater attention to individual talkers during multi-talker exchanges, indicating that they were more likely to attempt to use visual information from talkers to assist in speech understanding in adverse acoustics. It also was of interest to examine whether MBHL, versus UHL, would differentially affect performance and looking behavior. Eighteen children with NH, eight children with MBHL, and 10 children with UHL participated (8-12 years). They followed audiovisual instructions for placing objects on a mat under three conditions: a single talker providing instructions via a video monitor, four possible talkers alternately providing instructions on separate monitors in front of the listener, and the same four talkers providing both target and nontarget information. Multi-talker background noise was presented at a 5 dB signal-to-noise ratio during testing. An eye tracker monitored looking behavior while children performed the experimental task. Behavioral task performance was higher for children with NH than for either group of children with hearing loss. There were no differences in performance between children with UHL and children with MBHL. Eye-tracker analysis revealed that children with NH looked more at the screens overall than did children with MBHL or UHL, though individual differences were greater in the groups with hearing loss. Listeners in all groups spent a small proportion of time looking at relevant screens as talkers spoke. Although looking was distributed across all screens, there was a bias toward the right side of the display. There was no relationship between overall looking behavior and performance on the task. The present study examined the processing of audiovisual speech in the context of a naturalistic task. Results demonstrated that children distributed their looking to a variety of sources during the task, but that children with NH were more likely to look at screens than were those with MBHL/UHL. However, all groups looked at the relevant talkers as they were speaking only a small proportion of the time. Despite variability in looking behavior, listeners were able to follow the audiovisual instructions and children with NH demonstrated better performance than children with MBHL/UHL. These results suggest that performance on some challenging multi-talker audiovisual tasks is not dependent on visual fixation to relevant talkers for children with NH or with MBHL/UHL.

  7. Effects of Hearing Loss on Dual-Task Performance in an Audiovisual Virtual Reality Simulation of Listening While Walking.

    PubMed

    Lau, Sin Tung; Pichora-Fuller, M Kathleen; Li, Karen Z H; Singh, Gurjit; Campos, Jennifer L

    2016-07-01

    Most activities of daily living require the dynamic integration of sights, sounds, and movements as people navigate complex environments. Nevertheless, little is known about the effects of hearing loss (HL) or hearing aid (HA) use on listening during multitasking challenges. The objective of the current study was to investigate the effect of age-related hearing loss (ARHL) on word recognition accuracy in a dual-task experiment. Virtual reality (VR) technologies in a specialized laboratory (Challenging Environment Assessment Laboratory) were used to produce a controlled and safe simulated environment for listening while walking. In a simulation of a downtown street intersection, participants completed two single-task conditions, listening-only (standing stationary) and walking-only (walking on a treadmill to cross the simulated intersection with no speech presented), and a dual-task condition (listening while walking). For the listening task, they were required to recognize words spoken by a target talker when there was a competing talker. For some blocks of trials, the target talker was always located at 0° azimuth (100% probability condition); for other blocks, the target talker was more likely (60% of trials) to be located at the center (0° azimuth) and less likely (40% of trials) to be located at the left (270° azimuth). The participants were eight older adults with bilateral HL (mean age = 73.3 yr, standard deviation [SD] = 8.4; three males) who wore their own HAs during testing and eight controls with normal hearing (NH) thresholds (mean age = 69.9 yr, SD = 5.4; two males). No participant had clinically significant visual, cognitive, or mobility impairments. Word recognition accuracy and kinematic parameters (head and trunk angles, step width and length, stride time, cadence) were analyzed using mixed factorial analysis of variances with group as a between-subjects factor. Task condition (single versus dual) and probability (100% versus 60%) were within-subject factors. In analyses of the 60% listening condition, spatial expectation (likely versus unlikely) was a within-subject factor. Differences between groups in age and baseline measures of hearing, mobility, and cognition were tested using t tests. The NH group had significantly better word recognition accuracy than the HL group. Both groups performed better when the probability was higher and the target location more likely. For word recognition, dual-task costs for the HL group did not depend on condition, whereas the NH group demonstrated a surprising dual-task benefit in conditions with lower probability or spatial expectation. For the kinematic parameters, both groups demonstrated a more upright and less variable head position and more variable trunk position during dual-task conditions compared to the walking-only condition, suggesting that safe walking was prioritized. The HL group demonstrated more overall stride time variability than the NH group. This study provides new knowledge about the effects of ARHL, HA use, and aging on word recognition when individuals also perform a mobility-related task that is typically experienced in everyday life. This research may help inform the development of more effective function-based approaches to assessment and intervention for people who are hard-of-hearing. American Academy of Audiology.

  8. The effect of changing the secondary task in dual-task paradigms for measuring listening effort.

    PubMed

    Picou, Erin M; Ricketts, Todd A

    2014-01-01

    The purpose of this study was to evaluate the effect of changing the secondary task in dual-task paradigms that measure listening effort. Specifically, the effects of increasing the secondary task complexity or the depth of processing on a paradigm's sensitivity to changes in listening effort were quantified in a series of two experiments. Specific factors investigated within each experiment were background noise and visual cues. Participants in Experiment 1 were adults with normal hearing (mean age 23 years) and participants in Experiment 2 were adults with mild sloping to moderately severe sensorineural hearing loss (mean age 60.1 years). In both experiments, participants were tested using three dual-task paradigms. These paradigms had identical primary tasks, which were always monosyllable word recognition. The secondary tasks were all physical reaction time measures. The stimulus for the secondary task varied by paradigm and was a (1) simple visual probe, (2) a complex visual probe, or (3) the category of word presented. In this way, the secondary tasks mainly varied from the simple paradigm by either complexity or depth of speech processing. Using all three paradigms, participants were tested in four conditions, (1) auditory-only stimuli in quiet, (2) auditory-only stimuli in noise, (3) auditory-visual stimuli in quiet, and (4) auditory-visual stimuli in noise. During auditory-visual conditions, the talker's face was visible. Signal-to-noise ratios used during conditions with background noise were set individually so word recognition performance was matched in auditory-only and auditory-visual conditions. In noise, word recognition performance was approximately 80% and 65% for Experiments 1 and 2, respectively. For both experiments, word recognition performance was stable across the three paradigms, confirming that none of the secondary tasks interfered with the primary task. In Experiment 1 (listeners with normal hearing), analysis of median reaction times revealed a significant main effect of background noise on listening effort only with the paradigm that required deep processing. Visual cues did not change listening effort as measured with any of the three dual-task paradigms. In Experiment 2 (listeners with hearing loss), analysis of median reaction times revealed expected significant effects of background noise using all three paradigms, but no significant effects of visual cues. None of the dual-task paradigms were sensitive to the effects of visual cues. Furthermore, changing the complexity of the secondary task did not change dual-task paradigm sensitivity to the effects of background noise on listening effort for either group of listeners. However, the paradigm whose secondary task involved deeper processing was more sensitive to the effects of background noise for both groups of listeners. While this paradigm differed from the others in several respects, depth of processing may be partially responsible for the increased sensitivity. Therefore, this paradigm may be a valuable tool for evaluating other factors that affect listening effort.
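
    In paradigms of this kind, listening effort is indexed by the slowing of secondary-task reaction times between listening conditions, with medians used because reaction-time distributions are right-skewed. A minimal sketch of that computation, on hypothetical reaction times, follows.

```python
import numpy as np

def effort_index(rt_baseline_ms, rt_test_ms):
    """Listening-effort index from a dual-task paradigm: the increase
    in median secondary-task reaction time from a baseline listening
    condition to a harder one. Medians are used because reaction-time
    distributions are right-skewed."""
    return float(np.median(rt_test_ms) - np.median(rt_baseline_ms))

# Hypothetical probe reaction times (ms) in quiet vs. background noise.
rng = np.random.default_rng(4)
rt_quiet = rng.lognormal(np.log(450.0), 0.2, 100)
rt_noise = rng.lognormal(np.log(520.0), 0.2, 100)
print(f"Effort increase in noise: {effort_index(rt_quiet, rt_noise):.0f} ms")
```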

  9. Detection of independent functional networks during music listening using electroencephalogram and sLORETA-ICA.

    PubMed

    Jäncke, Lutz; Alahmadi, Nsreen

    2016-04-13

    The measurement of brain activation during music listening is a topic that is attracting increased attention from many researchers. Because of their high spatial accuracy, functional MRI measurements are often used for measuring brain activation in the context of music listening. However, this technique faces the issues of contaminating scanner noise and an uncomfortable experimental environment. Electroencephalogram (EEG), however, is a neural registration technique that allows the measurement of neurophysiological activation in silent and more comfortable experimental environments. Thus, it is optimal for recording brain activations during pleasant music stimulation. Using a new mathematical approach to calculate intracortical independent components (sLORETA-IC) on the basis of scalp-recorded EEG, we identified specific intracortical independent components during listening of a musical piece and scales, which differ substantially from intracortical independent components calculated from the resting state EEG. Most intracortical independent components are located bilaterally in perisylvian brain areas known to be involved in auditory processing and specifically in music perception. Some intracortical independent components differ between the music and scale listening conditions. The most prominent difference is found in the anterior part of the perisylvian brain region, with stronger activations seen in the left-sided anterior perisylvian regions during music listening, most likely indicating semantic processing during music listening. A further finding is that the intracortical independent components obtained for the music and scale listening are most prominent in higher frequency bands (e.g. beta-2 and beta-3), whereas the resting state intracortical independent components are active in lower frequency bands (alpha-1 and theta). This new technique for calculating intracortical independent components is able to differentiate independent neural networks associated with music and scale listening. Thus, this tool offers new opportunities for studying neural activations during music listening using the silent and more convenient EEG technology.

  10. Research Advances In Medical Care For Polytrauma Injuries And Blast Injuries

    DTIC Science & Technology

    2011-01-25

    …by air or ground ambulance. • A stethoscope that can be used to listen to heart and breath sounds in the challenging environment • The ability for… ability of medical personnel in both military and civilian settings. Description: The noise-immune stethoscope can be used in high-noise environments… The new stethoscope uses a traditional acoustic listening mode with the addition of ultrasound-based technology that is "noise immune." Current…

  11. Supportive Listening

    ERIC Educational Resources Information Center

    Jones, Susanne M.

    2011-01-01

    "Listening" is a multidimensional construct that consists of complex (a) cognitive processes, such as attending to, understanding, receiving, and interpreting messages; (b) affective processes, such as being motivated and stimulated to attend to another person's messages; and (c) behavioral processes, such as responding with verbal and nonverbal…

  12. [Japanese learners' processing time for reading English relative clauses analyzed in relation to their English listening proficiency].

    PubMed

    Oyama, Yoshinori

    2011-06-01

    The present study examined Japanese university students' processing time for English subject and object relative clauses in relation to their English listening proficiency. In Analysis 1, the relation between English listening proficiency and reading span test scores was analyzed. The results showed that the high and low listening comprehension groups' reading span test scores do not differ. Analysis 2 investigated English listening proficiency and processing time for sentences with subject and object relative clauses. The results showed that reading the relative clause ending and the main verb section of a sentence with an object relative clause (such as "attacked" and "admitted" in the sentence "The reporter that the senator attacked admitted the error") takes less time for learners with high English listening scores than for learners with low English listening scores. In Analysis 3, English listening proficiency and comprehension accuracy for sentences with subject and object relative clauses were examined. The results showed no significant difference in comprehension accuracy between the high and low listening-comprehension groups. These results indicate that processing time for English relative clauses is related to the cognitive processes involved in listening comprehension, which requires immediate processing of syntactically complex audio information.

  13. Detecting changes in dynamic and complex acoustic environments

    PubMed Central

    Boubenec, Yves; Lawlor, Jennifer; Górska, Urszula; Shamma, Shihab; Englitz, Bernhard

    2017-01-01

    Natural sounds such as wind or rain, are characterized by the statistical occurrence of their constituents. Despite their complexity, listeners readily detect changes in these contexts. We here address the neural basis of statistical decision-making using a combination of psychophysics, EEG and modelling. In a texture-based, change-detection paradigm, human performance and reaction times improved with longer pre-change exposure, consistent with improved estimation of baseline statistics. Change-locked and decision-related EEG responses were found in a centro-parietal scalp location, whose slope depended on change size, consistent with sensory evidence accumulation. The potential's amplitude scaled with the duration of pre-change exposure, suggesting a time-dependent decision threshold. Auditory cortex-related potentials showed no response to the change. A dual timescale, statistical estimation model accounted for subjects' performance. Furthermore, a decision-augmented auditory cortex model accounted for performance and reaction times, suggesting that the primary cortical representation requires little post-processing to enable change-detection in complex acoustic environments. DOI: http://dx.doi.org/10.7554/eLife.24910.001 PMID:28262095
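
    The dual-timescale estimation the model work points to can be caricatured as comparing a long baseline window with a short recent window of a running texture statistic. The sketch below is such a caricature, not the authors' model; window lengths and the decision threshold are assumptions.

```python
import numpy as np

def change_statistic(x, short=20, long=200):
    """Compare a short 'recent' window of a running texture statistic
    against a long 'baseline' window, normalized by baseline spread.
    A longer pre-change baseline gives a steadier estimate, mirroring
    the improvement with pre-change exposure reported above."""
    stats = np.zeros(len(x))
    for t in range(long + short, len(x)):
        base = x[t - long - short:t - short]
        recent = x[t - short:t]
        stats[t] = abs(recent.mean() - base.mean()) / (base.std() + 1e-9)
    return stats

# Toy texture statistic (e.g., one band's tone-occurrence rate) whose
# mean jumps at sample 600.
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 1.0, 600), rng.normal(0.8, 1.0, 400)])
detected = int(np.argmax(change_statistic(x) > 3.0))  # first crossing
```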

  14. Improving Listening Comprehension through a Whole-Schema Approach.

    ERIC Educational Resources Information Center

    Ellermeyer, Deborah

    1993-01-01

    Examines the development of the schema, or cognitive structure, theory of reading comprehension. Advances a model for improving listening comprehension within the classroom through a teacher-facilitated approach which leads students to selecting and utilizing existing schema within a whole-language environment. (MDM)

  15. Is there a hearing aid for the thinking person?

    PubMed

    Hafter, Ervin R

    2010-10-01

    The history of auditory prosthesis has generally concentrated on bottom-up processing, that is, on audibility. However, a growing interest in top-down processing has focused on correlations between success with a hearing aid and such higher-order processing as the patient's intelligence, problem-solving and language skills, and the perceived effort of day-to-day listening. This article examines two cases of cognitive effects in hearing that illustrate less-often-studied issues: (1) individual subjects in a study use different listening strategies, a fact that, if not known to the experimenter, can lead to errors in interpretation; and (2) a measure of shared attention can point to otherwise unknown functional effects of an algorithm used in hearing aids. In the two examples, (1) patients with cochlear implants served in a study of the binaural precedence effect, that is, echo suppression; and (2) individuals identifying speech in noise benefited from noise reduction (NR) when the criterion was improved performance in simultaneous tests of verbal memory or visual reaction times. Studies of hearing impairment, either in the laboratory or in a fitting session, should include study of the complex stimuli that make up the natural environment, conditions where the thinking auditory brain adopts strategies for dealing with large amounts of input data. In addition to well-known factors that must be included in communication, such as familiarity, syntax, and semantics, the work here shows that strategic listening can affect even how we deal with seemingly simpler requirements: localizing sounds in a reverberant auditory scene and listening for speech in noise when busy with other cognitive tasks. American Academy of Audiology.

  16. Effects of computer-based immediate feedback on foreign language listening comprehension and test-associated anxiety.

    PubMed

    Lee, Shu-Ping; Su, Hui-Kai; Lee, Shin-Da

    2012-06-01

    This study investigated the effects of immediate feedback on computer-based foreign language listening comprehension tests and on intrapersonal test-associated anxiety in 72 English-major college students at a Taiwanese university. Computer-based foreign language listening comprehension tests designed in MOODLE, a dynamic e-learning environment, were administered with or without immediate feedback, together with the State-Trait Anxiety Inventory (STAI), and repeated after one week. The analysis indicated that immediate feedback during testing caused significantly higher anxiety and resulted in significantly higher listening scores than in the control group, which had no feedback. However, repeated feedback did not affect test anxiety or listening scores. Computer-based immediate feedback did not lower the debilitating effects of anxiety but enhanced students' intrapersonal eustress-like anxiety and probably improved their attention during the listening tests. Computer-based tests with immediate feedback might therefore help foreign language learners increase attention in foreign language listening comprehension.

  17. Sound Localization in Multisource Environments

    DTIC Science & Technology

    2009-03-01

    Excerpted fragments from the report: a total of 7 paid volunteer listeners (3 males and 4 females, 20-25 years of age) participated in the experiment, all with normal hearing. Stimuli were compensated for the effects of the loudspeaker frequency responses and then sent from an experimental control computer to a Mark of the Unicorn (MOTU 24 I/O) digital-to-analog converter. One condition presented the cue after the overall multisource stimulus (the 'post-cue' condition). A further experiment used eight listeners, ranging in age from …

  18. Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence

    PubMed Central

    Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D.; Chait, Maria

    2016-01-01

    To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence (the coincidence of sound elements in and across time) is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals ("stochastic figure-ground": SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from one chord to another. Occasional tone repetitions across chords are perceived as "figures" popping out of a stochastic "ground." Magnetoencephalography (MEG) measurement in naïve, distracted, human subjects revealed robust evoked responses, commencing about 150 ms after figure onset, that reflect the emergence of the "figure" from the randomly varying "ground." Neural sources underlying this bottom-up driven figure-ground segregation were localized to the planum temporale and the intraparietal sulcus, demonstrating that the latter area, outside the "classic" auditory system, is also involved in the early stages of auditory scene analysis. PMID:27325682

  19. Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence.

    PubMed

    Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D; Chait, Maria

    2016-09-01

    To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence (the coincidence of sound elements in and across time) is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals ("stochastic figure-ground": SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from one chord to another. Occasional tone repetitions across chords are perceived as "figures" popping out of a stochastic "ground." Magnetoencephalography (MEG) measurement in naïve, distracted, human subjects revealed robust evoked responses, commencing about 150 ms after figure onset, that reflect the emergence of the "figure" from the randomly varying "ground." Neural sources underlying this bottom-up driven figure-ground segregation were localized to the planum temporale and the intraparietal sulcus, demonstrating that the latter area, outside the "classic" auditory system, is also involved in the early stages of auditory scene analysis. © The Author 2016. Published by Oxford University Press.

  20. Instituting a music listening intervention for critically ill patients receiving mechanical ventilation: Exemplars from two patient cases

    PubMed Central

    Heiderscheit, Annie; Chlan, Linda; Donley, Kim

    2011-01-01

    Music is an ideal intervention to reduce anxiety and promote relaxation in critically ill patients receiving mechanical ventilatory support. This article reviews the basis for a music listening intervention and describes two case examples with patients utilizing a music listening intervention to illustrate the implementation and use of the music listening protocol in this dynamic environment. The case examples illustrate the importance and necessity of engaging a music therapist in not only assessing the music preferences of patients, but also for implementing a music listening protocol to manage the varied and challenging needs of patients in the critical care setting. Additionally, the case examples presented in this paper demonstrate the wide array of music patients prefer and how the ease of a music listening protocol allows mechanically ventilated patients to engage in managing their own anxiety during this distressful experience. PMID:22081788

  1. Plastic modes of listening: affordance in constructed sound environments

    NASA Astrophysics Data System (ADS)

    Sjolin, Anders

    This thesis is concerned with how the ecological approach to perception, together with listening modes, informs the creation of sound art installations, referred to in this thesis as constructed sound environments. The thesis is practice-based research, and the aim of the written part of this PhD project has been to critically investigate the area of sound art in order to map various approaches to participating in and listening to a constructed sound environment. The main areas have been the notion of affordance as coined by James J. Gibson (1986), listening modes as coined by Pierre Schaeffer (1966) and further developed by Michel Chion (1994), aural architects as coined by Blesser and Salter (2007), and the holistic approach to understanding sound art developed by Brandon LaBelle (2006). The findings of the written part of the thesis, based on a qualitative analysis, have informed the practice, which has resulted in artefacts in the form of seven constructed sound environments that also function as case studies for further analysis. The aim of the practice has been to exemplify the methodology, strategy, and progress behind the organisation and construction of sound environments. The research points towards the acknowledgment of affordance as the crucial factor in understanding a constructed sound environment. The affordance approach supports the idea that perceiving a sound environment is a top-down process in which the autonomous quality of a constructed sound environment rests on the perception of structures in the sound material and their relationship with speaker placement and surrounding space. This enables a researcher to sidestep the conflicting poles of musical/abstract and non-musical/realistic classification of sound elements and to treat these poles as included, not separated, elements in the analysis of a constructed sound environment.

  2. Physical and perceptual estimation of differences between loudspeakers

    NASA Astrophysics Data System (ADS)

    Lavandier, Mathieu; Herzog, Philippe; Meunier, Sabine

    2006-12-01

    Differences in the way loudspeakers render timbre can be assessed through standard measurements or through listening tests. This work proposes a protocol that keeps a close relationship between the objective and perceptual evaluations: the stimuli are musical excerpts, and the measuring environment is a standard listening room. The protocol involves recordings made at a listener position, and objective dissimilarities are computed using an auditory model that simulates masking effects. The resulting data correlate very well with listening tests using the same recordings and show similar dependencies on the major parameters identified from the dissimilarity matrices. To cite this article: M. Lavandier et al., C. R. Mecanique 334 (2006).

  3. Effect of Listeners' Linguistic Background on Perceptual Judgements of Hypernasality

    ERIC Educational Resources Information Center

    Lee, Alice; Brown, Susanna; Gibbon, Fiona E.

    2008-01-01

    Background: Many speech and language therapists work in a multilingual environment, making cross-linguistic studies of speech disorders clinically and theoretically important. Aims: To investigate the effect of listeners' linguistic background on their perceptual ratings of hypernasality and the reliability of the ratings. Methods &…

  4. Adaptive spatial filtering improves speech reception in noise while preserving binaural cues.

    PubMed

    Bissmeyer, Susan R S; Goldsworthy, Raymond L

    2017-09-01

    Hearing loss greatly reduces an individual's ability to comprehend speech in the presence of background noise. Over the past decades, numerous signal-processing algorithms have been developed to improve speech reception in these situations for cochlear implant and hearing aid users. One challenge is to reduce background noise while not introducing interaural distortion that would degrade binaural hearing. The present study evaluates a noise reduction algorithm, referred to as binaural Fennec, that was designed to improve speech reception in background noise while preserving binaural cues. Speech reception thresholds were measured for normal-hearing listeners in a simulated environment with target speech generated in front of the listener and background noise originating 90° to the right of the listener. Lateralization thresholds were also measured in the presence of background noise. These measures were conducted in anechoic and reverberant environments. Results indicate that the algorithm improved speech reception thresholds, even in highly reverberant environments. Results indicate that the algorithm also improved lateralization thresholds for the anechoic environment while not affecting lateralization thresholds for the reverberant environments. These results provide clear evidence that this algorithm can improve speech reception in background noise while preserving binaural cues used to lateralize sound.
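
    The binaural Fennec algorithm itself is not specified in this abstract, but the design constraint it satisfies (reducing noise without introducing interaural distortion) can be illustrated generically: derive a single time-frequency gain and apply it identically to the left and right channels, so interaural time and level differences pass through unchanged. The function below is a minimal spectral-gating sketch under that assumption, not the published algorithm; the noise-estimation window and gain parameters are illustrative.

```python
# Sketch of binaural-cue-preserving noise reduction: one gain,
# applied identically to both ear signals (not "binaural Fennec").
import numpy as np
from scipy.signal import stft, istft

def binaural_spectral_gate(left, right, fs, noise_sec=0.5, alpha=2.0):
    f, t, L = stft(left, fs, nperseg=512)
    _, _, R = stft(right, fs, nperseg=512)
    # Noise floor estimated from an assumed speech-free lead-in.
    n_frames = int(noise_sec * fs / 256)      # 50% overlap -> 256-sample hop
    noise_psd = np.mean(np.abs(np.concatenate(
        [L[:, :n_frames], R[:, :n_frames]], axis=1)) ** 2,
        axis=1, keepdims=True)
    # A single gain derived from the two-channel average spectrum.
    mix_psd = 0.5 * (np.abs(L) ** 2 + np.abs(R) ** 2)
    gain = np.clip(1.0 - alpha * noise_psd / (mix_psd + 1e-12), 0.1, 1.0)
    _, out_l = istft(gain * L, fs, nperseg=512)   # identical gain per ear:
    _, out_r = istft(gain * R, fs, nperseg=512)   # ILDs and ITDs preserved
    return out_l, out_r
```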

  5. 3D Sound Techniques for Sound Source Elevation in a Loudspeaker Listening Environment

    NASA Astrophysics Data System (ADS)

    Kim, Yong Guk; Jo, Sungdong; Kim, Hong Kook; Jang, Sei-Jin; Lee, Seok-Pil

    In this paper, we propose several 3D sound techniques for sound source elevation in stereo loudspeaker listening environments. The proposed method integrates a head-related transfer function (HRTF) for sound positioning and early reflections for adding a reverberant circumstance. In addition, spectral notch filtering and directional band boosting techniques are included to increase elevation perception. In order to evaluate the elevation performance of the proposed method, subjective listening tests were conducted using several kinds of sound sources, such as white noise, sound effects, speech, and music samples. The tests show that the degrees of elevation perceived with the proposed method are around 17° to 21° when the stereo loudspeakers are located on the horizontal plane.
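
    Of the techniques listed, spectral notch filtering is the easiest to sketch: pinna-related notches whose centre frequency rises with source elevation are a classic elevation cue. The snippet below inserts a single IIR notch whose frequency is an assumed, illustrative function of elevation; the HRTF, early-reflection, and directional band boosting stages of the proposed method are not reproduced.

```python
# Hedged sketch of elevation-dependent spectral notch filtering.
import numpy as np
from scipy.signal import iirnotch, lfilter

def elevate(signal, fs, elevation_deg):
    # Illustrative mapping only (not from the paper): the notch
    # moves upward in frequency as the source is raised.
    notch_hz = 6000.0 + 40.0 * elevation_deg
    b, a = iirnotch(w0=notch_hz, Q=8.0, fs=fs)
    return lfilter(b, a, signal)

fs = 44100
noise = np.random.default_rng(1).normal(size=fs)   # 1 s of white noise
elevated = elevate(noise, fs, elevation_deg=20.0)
```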

  6. STS-35 Commander Brand listens to trainer during water egress exercises

    NASA Technical Reports Server (NTRS)

    1990-01-01

    STS-35 Commander Vance D. Brand listens to training personnel during launch emergency egress procedures conducted in JSC's Weightless Environment Training Facility (WETF) Bldg 29. Brand, wearing a launch and entry suit (LES) and launch and entry helmet (LEH), is seated on the pool side while reviewing instructions.

  7. Learner Perceptions of Reliance on Captions in EFL Multimedia Listening Comprehension

    ERIC Educational Resources Information Center

    Leveridge, Aubrey Neil; Yang, Jie Chi

    2014-01-01

    Instructional support has been widely discussed as a strategy to optimize student-learning experiences. This study examines instructional support within the context of a multimedia language-learning environment, with the predominant focus on learners' perceptions of captioning support for listening comprehension. The study seeks to answer two…

  8. Supporting Student Differences in Listening Comprehension and Vocabulary Learning with Multimedia Annotations

    ERIC Educational Resources Information Center

    Jones, Linda C.

    2009-01-01

    This article describes how effectively multimedia learning environments can assist second language (L2) students of different spatial and verbal abilities with listening comprehension and vocabulary learning. In particular, it explores how written and pictorial annotations interacted with high/low spatial and verbal ability learners and thus…

  9. Twelve tips for using applied improvisation in medical education.

    PubMed

    Hoffmann-Longtin, Krista; Rossing, Jonathan P; Weinstein, Elizabeth

    2018-04-01

    Future physicians will practice medicine in a more complex environment than ever, where skills of interpersonal communication, collaboration and adaptability to change are critical. Applied improvisation (or AI) is an instructional strategy which adapts the concepts of improvisational theater to teach these types of complex skills in other contexts. Unique to AI is its very active teaching approach, adapting theater games to help learners meet curricular objectives. In medical education, AI is particularly helpful when attempting to build students' comfort with and skills in complex, interpersonal behaviors such as effective listening, person-centeredness, teamwork and communication. This article draws on current evidence and the authors' experiences to present best practices for incorporating AI into teaching medicine. These practical tips help faculty new to AI get started by establishing goals, choosing appropriate games, understanding effective debriefing, considering evaluation strategies and managing resistance within the context of medical education.

  10. The influence of informational masking in reverberant, multi-talker environments.

    PubMed

    Westermann, Adam; Buchholz, Jörg M

    2015-08-01

    The relevance of informational masking (IM) in real-world listening is not well understood. In the literature, IM effects of up to 10 dB in measured speech reception thresholds (SRTs) are reported. However, these experiments typically employed simplified spatial configurations and speech corpora that magnified confusions. In this study, SRTs were measured with normal-hearing subjects in a simulated cafeteria environment. The environment was reproduced by a 41-channel 3D loudspeaker array. The target talker was 2 m in front of the listener, and masking talkers were either spread throughout the room or colocated with the target. Three types of maskers were realized: one with the same talker as the target (maximum IM), one with talkers different from the target, and one with unintelligible, noise-vocoded talkers (minimal IM). Overall, SRTs improved for the spatially distributed conditions compared to the colocated conditions. Within the spatially distributed conditions, there was no significant difference between thresholds with the different-talker and vocoded-talker maskers. Conditions with the same-talker masker were the only conditions with substantially higher thresholds, especially in the colocated configuration. These results suggest that IM related to target-masker confusions, at least for normal-hearing listeners, is of low relevance in real-life listening.

  11. Investigation of in-vehicle speech intelligibility metrics for normal hearing and hearing impaired listeners

    NASA Astrophysics Data System (ADS)

    Samardzic, Nikolina

    The effectiveness of in-vehicle speech communication can be a good indicator of the perception of overall vehicle quality and of customer satisfaction. Currently available speech intelligibility metrics do not account for essential parameters needed for a complete and accurate evaluation of in-vehicle speech intelligibility. These include the directivity and the distance of the talker with respect to the listener, binaural listening, the hearing profile of the listener, vocal effort, and multisensory hearing. In the first part of this research, the effectiveness of in-vehicle application of these metrics is investigated in a series of studies to reveal their shortcomings, including the wide range of scores resulting from each of the metrics for a given measurement configuration and vehicle operating condition. In addition, the nature of a possible correlation between the scores obtained from each metric is unknown, and the metrics have not been compared in the literature against the subjective perception of speech intelligibility using the same speech material. As a result, in the second part of this research, an alternative method for speech intelligibility evaluation is proposed for use in the automotive industry, utilizing a virtual reality driving environment for ultimately setting targets, including the associated statistical variability, for future in-vehicle speech intelligibility evaluation. The Speech Intelligibility Index (SII) was evaluated at the sentence Speech Reception Threshold (sSRT) for various listening situations and hearing profiles using acoustic perception jury testing and a variety of talker and listener configurations and background noise. In addition, the effect of individual sources and transfer paths of sound in an operating vehicle on the vehicle interior sound, and specifically on speech intelligibility, was quantified in the framework of the newly developed speech intelligibility evaluation method. Lastly, as an example of the significance of speech intelligibility evaluation in the context of an applicable listening environment, it was found that the jury test participants required on average an approximately 3 dB increase in the sound pressure level of the speech material while driving and listening, compared to listening alone, for equivalent speech intelligibility performance on the same listening task.
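
    The Speech Intelligibility Index evaluated in this work is, at its core, a band-importance-weighted average of band audibility (ANSI S3.5), where audibility maps the band SNR from [-15, +15] dB onto [0, 1]. The sketch below implements that simplified core; the octave-band importance weights and the speech/noise levels are illustrative placeholders, not the standard's exact tables or the dissertation's data.

```python
# Simplified Speech Intelligibility Index (SII) in the spirit of
# ANSI S3.5: band-importance-weighted average of band audibility.
import numpy as np

def simplified_sii(speech_dB, noise_dB, importance):
    snr = np.asarray(speech_dB) - np.asarray(noise_dB)
    # Band audibility: SNR of -15 dB -> 0, +15 dB -> 1, linear between.
    audibility = np.clip((snr + 15.0) / 30.0, 0.0, 1.0)
    return float(np.sum(np.asarray(importance) * audibility))

# Six octave bands, 250 Hz - 8 kHz (illustrative weights, sum to 1).
importance = [0.10, 0.15, 0.25, 0.25, 0.15, 0.10]
speech = [55, 58, 60, 57, 50, 42]   # band speech levels, dB SPL
noise = [52, 50, 48, 50, 52, 50]    # band noise levels (e.g., road noise)
print(f"SII = {simplified_sii(speech, noise, importance):.2f}")
```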

  12. Effects of Listening Strategy Instruction on News Videotext Comprehension

    ERIC Educational Resources Information Center

    Cross, Jeremy

    2009-01-01

    Developments in broadcast and multimedia technology have generated a readily available and vast supply of videotexts for use in second and foreign language learning contexts. However, without pedagogical direction learners are unlikely to be able to deal with the complexities of this authentic listening resource, and strategy instruction may be…

  13. Learning about Professional Growth through Listening to Teachers

    ERIC Educational Resources Information Center

    Taylor, Phil

    2017-01-01

    This article explores teacher learning and development, drawing on insights gained during two study visits and an international collaborative project. The article also charts a phase in the author's own learning, reflecting a growing recognition of the complexities of professional growth, gained through listening to teachers. A tentative process…

  14. A method to enhance the use of interaural time differences for cochlear implants in reverberant environments

    PubMed Central

    Monaghan, Jessica J. M.; Seeber, Bernhard U.

    2017-01-01

    The ability of normal-hearing (NH) listeners to exploit interaural time difference (ITD) cues conveyed in the modulated envelopes of high-frequency sounds is poor compared to ITD cues transmitted in the temporal fine structure at low frequencies. Sensitivity to envelope ITDs is further degraded when envelopes become less steep, when modulation depth is reduced, and when envelopes become less similar between the ears, common factors when listening in reverberant environments. The vulnerability of envelope ITDs is particularly problematic for cochlear implant (CI) users, as they rely on information conveyed by slowly varying amplitude envelopes. Here, an approach to improve access to envelope ITDs for CIs is described in which, rather than attempting to reduce reverberation, the perceptual saliency of cues relating to the source is increased by selectively sharpening peaks in the amplitude envelope judged to contain reliable ITDs. Performance of the algorithm with room reverberation was assessed through simulating listening with bilateral CIs in headphone experiments with NH listeners. Relative to simulated standard CI processing, stimuli processed with the algorithm generated lower ITD discrimination thresholds and increased extents of laterality. Depending on parameterization, intelligibility was unchanged or somewhat reduced. The algorithm has the potential to improve spatial listening with CIs. PMID:27586742
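
    The published algorithm selects envelope peaks judged to carry reliable ITDs; that selection step is not modelled here, but the basic sharpening operation can be sketched: decompose the signal into its amplitude envelope and temporal fine structure, steepen the envelope with an expansive power law, and resynthesize. Applied at each ear of a bilateral simulation, steeper envelope flanks are what make envelope ITDs easier to exploit. A minimal sketch, with the exponent as an assumed parameter:

```python
# Minimal envelope-sharpening sketch (the general idea only; peak
# selection by ITD reliability, as in the paper, is not modelled).
import numpy as np
from scipy.signal import hilbert

def sharpen_envelope(x, exponent=2.0):
    analytic = hilbert(x)
    env = np.abs(analytic)                # amplitude envelope
    fine = analytic / (env + 1e-12)       # unit-magnitude fine structure
    # Power law > 1 steepens envelope peaks; peak amplitude is kept.
    env_sharp = (env / env.max()) ** exponent * env.max()
    return np.real(fine * env_sharp)
```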

  15. Spatial release of cognitive load measured in a dual-task paradigm in normal-hearing and hearing-impaired listeners.

    PubMed

    Xia, Jing; Nooraei, Nazanin; Kalluri, Sridhar; Edwards, Brent

    2015-04-01

    This study investigated whether spatial separation between talkers helps reduce cognitive processing load, and how hearing impairment interacts with the cognitive load of individuals listening in multi-talker environments. A dual-task paradigm was used in which performance on a secondary task (visual tracking) served as a measure of the cognitive load imposed by a speech recognition task. Visual tracking performance was measured under four conditions in which the target and the interferers were distinguished by (1) gender and spatial location, (2) gender only, (3) spatial location only, and (4) neither gender nor spatial location. Results showed that when gender cues were available, a 15° spatial separation between talkers reduced the cognitive load of listening even though it did not provide further improvement in speech recognition (Experiment I). Compared to normal-hearing listeners, large individual variability in spatial release of cognitive load was observed among hearing-impaired listeners. Cognitive load was lower when talkers were spatially separated by 60° than when talkers were of different genders, even though speech recognition was comparable in these two conditions (Experiment II). These results suggest that a measure of cognitive load might provide valuable insight into the benefit of spatial cues in multi-talker environments.

  16. Objective analysis of ambisonics for hearing aid applications: Effect of listener's head, room reverberation, and directional microphones.

    PubMed

    Oreinos, Chris; Buchholz, Jörg M

    2015-06-01

    Recently, there has been increased interest in evaluating hearing aids (HAs) inside controlled but, at the same time, realistic sound environments. A promising candidate for realizing such sound environments with loudspeakers is the listener-centered method of higher-order ambisonics (HOA). Although the accuracy of HOA has been widely studied, it remains unclear to what extent the results can be generalized when (1) a listener wearing HAs that may feature multi-microphone directional algorithms is considered inside the reconstructed sound field and (2) reverberant scenes are recorded and reconstructed. For the purpose of objectively validating HOA for listening tests involving HAs, a framework was developed to simulate the entire path of sounds presented in a modeled room, recorded by a HOA microphone array, decoded to a loudspeaker array, and finally received at the ears and HA microphones of a dummy listener fitted with HAs. Reproduction errors at the ear signals and at the output of a cardioid HA microphone were analyzed for different anechoic and reverberant scenes. It was found that diffuse reverberation reduces the considered time-averaged HOA reconstruction errors, which, depending on the considered application, suggests that reverberation can increase the usable frequency range of a HOA system.
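
    For readers unfamiliar with ambisonics, the reproduction chain being validated reduces to an encode/decode pair. The toy below implements horizontal first-order encoding and a basic sampling decoder to a loudspeaker ring; the actual study used higher orders, a simulated HOA microphone array, and more careful decoding, so this is only a structural sketch.

```python
# First-order horizontal ambisonics as a toy stand-in for the HOA
# reproduction chain (a sketch, not the study's framework).
import numpy as np

def foa_encode(s, azimuth_rad):
    """Encode a mono signal arriving from `azimuth_rad` into
    horizontal first-order B-format channels (W, X, Y)."""
    return np.stack([s,
                     s * np.cos(azimuth_rad),
                     s * np.sin(azimuth_rad)])

def foa_decode(bformat, speaker_azimuths_rad):
    """Basic sampling decoder to a horizontal loudspeaker ring."""
    W, X, Y = bformat
    n = len(speaker_azimuths_rad)
    feeds = [(W + 2.0 * (X * np.cos(phi) + Y * np.sin(phi))) / n
             for phi in speaker_azimuths_rad]
    return np.stack(feeds)

# Example: a 1-kHz tone from 30 degrees left over an 8-speaker ring.
fs = 48000
t = np.arange(fs) / fs
src = np.sin(2 * np.pi * 1000 * t)
ring = np.linspace(0, 2 * np.pi, 8, endpoint=False)
feeds = foa_decode(foa_encode(src, np.deg2rad(30)), ring)
```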

  17. The relationship between speech recognition, behavioural listening effort, and subjective ratings.

    PubMed

    Picou, Erin M; Ricketts, Todd A

    2018-06-01

    The purpose of this study was to evaluate the reliability and validity of four subjective questions related to listening effort. A secondary purpose was to evaluate the effects of hearing aid beamforming microphone arrays on word recognition and listening effort. Participants answered subjective questions immediately following testing in a dual-task paradigm with three microphone settings in a moderately reverberant laboratory environment in two noise configurations. Participants rated (1) their mental work, (2) their desire to improve the situation, (3) their tiredness, and (4) their desire to give up. Data were analysed using repeated measures and reliability analyses. Eighteen adults with symmetrical sensorineural hearing loss participated. Beamforming differentially affected word recognition and listening effort. Analysis revealed the same pattern of results for behavioural listening effort and subjective ratings of desire to improve the situation. Conversely, ratings of work revealed the same pattern of results as word recognition performance. Ratings of tiredness and desire to give up were unaffected by hearing aid microphone or noise configuration. Participant ratings of their desire to improve the listening situation appear to be reliable subjective indicators of listening effort that align with results from a behavioural measure of listening effort.

  18. Survey of college students' MP3 listening: Habits, safety issues, attitudes, and education.

    PubMed

    Hoover, Alicia; Krishnamurti, Sridhar

    2010-06-01

    To survey listening habits and attitudes of typical college students who use MP3 players and to investigate possible safety issues related to MP3 player listening. College students who were frequent MP3 player users (N = 428) filled out a 30-item online survey. Specific areas probed by the present survey included frequency and duration of MP3 player use, MP3 player volume levels used, types of earphones used, typical environments in which MP3 player was worn, specific activities related to safety while listening to MP3 players, and attitudes toward MP3 player use. The majority of listeners wore MP3 players for less than 2 hr daily at safe volume levels. About one third of respondents reported being distracted while wearing an MP3 player, and more than one third of listeners experienced soreness in their ears after a listening session. About one third of respondents reported occasionally using their MP3 players at maximum volume levels. Listeners indicated willingness to (a) reduce volume levels, (b) decrease listening duration, and (c) buy specialized earphones to conserve their hearing. The study found concerns regarding the occasional use of MP3 players at full volume and reduced environmental awareness among some college student users.

  19. Looking at the world with your ears: how do we get the size of an object from its sound?

    PubMed

    Grassi, Massimo; Pastore, Massimiliano; Lemaitre, Guillaume

    2013-05-01

    Identifying the properties of ongoing events by the sound they produce is crucial for our interaction with the environment when visual information is not available. Here, we investigated the ability of listeners to estimate the size of an object (a ball) dropped on a plate under ecological listening conditions (balls were dropped in real time) and with an ecological response method (listeners estimated ball size by drawing a disk). Previous studies had shown that listeners can veridically estimate the size of objects by the sound they produce, but it is as yet unclear which acoustical index listeners use to produce their estimates. In particular, it is unclear whether listeners use amplitude-domain cues (related to loudness) or frequency-domain cues (related to the sound's brightness) to produce their estimates. In the current study, in order to understand which cue is used by the listener to recover the size of the object, we manipulated the sound source event in such a way that frequency and amplitude cues provided contrasting size information (balls were dropped from various heights). Results showed that listeners' estimations were accurate regardless of the experimental manipulations performed in the experiments. In addition, the results suggest that listeners were likely integrating frequency and amplitude acoustical cues to produce their estimates, even though these cues often provided contrasting size information. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Rate discrimination at low pulse rates in normal-hearing and cochlear implant listeners: Influence of intracochlear stimulation site.

    PubMed

    Stahl, Pierre; Macherey, Olivier; Meunier, Sabine; Roman, Stéphane

    2016-04-01

    Temporal pitch perception in cochlear implantees remains weaker than in normal-hearing listeners and is usually limited to rates below about 300 pulses per second (pps). Recent studies have suggested that stimulating the apical part of the cochlea may improve the temporal coding of pitch by cochlear implants (CIs), compared to stimulating other sites. The present study focuses on rate discrimination at low pulse rates (ranging from 20 to 104 pps). Two experiments measured and compared pulse rate difference limens (DLs) at four fundamental frequencies (ranging from 20 to 104 Hz) in both CI and normal-hearing (NH) listeners. Experiment 1 measured DLs in users of the MED-EL CI device (Innsbruck, Austria) for two electrodes (one apical and one basal). In experiment 2, DLs for NH listeners were compared for unresolved harmonic complex tones filtered into two frequency regions (lower cut-off frequencies of 1200 and 3600 Hz, respectively) and for different bandwidths. Pulse rate discrimination performance was significantly better when stimulation was provided by the apical electrode in CI users and by the lower-frequency tone complexes in NH listeners. This set of data appears consistent with better temporal coding when stimulation originates from apical regions of the cochlea.

  1. fMRI investigation of sentence comprehension by eye and by ear: modality fingerprints on cognitive processes.

    PubMed

    Michael, E B; Keller, T A; Carpenter, P A; Just, M A

    2001-08-01

    The neural substrate underlying reading vs. listening comprehension of sentences was compared using fMRI. One way in which this issue was addressed was by comparing the patterns of activation particularly in cortical association areas that classically are implicated in language processing. The precise locations of the activation differed between the two modalities. In the left inferior frontal gyrus (Broca's area), the activation associated with listening was more anterior and inferior than the activation associated with reading, suggesting more semantic processing during listening comprehension. In the left posterior superior and middle temporal region (roughly, Wernicke's area), the activation for listening was closer to primary auditory cortex (more anterior and somewhat more lateral) than the activation for reading. In several regions, the activation was much more left lateralized for reading than for listening. In addition to differences in the location of the activation, there were also differences in the total amount of activation in the two modalities in several regions. A second way in which the modality comparison was addressed was by examining how the neural systems responded to comprehension workload in the two modalities by systematically varying the structural complexity of the sentences to be processed. Here, the distribution of the workload increase associated with the processing of additional structural complexity was very similar across the two input modalities. The results suggest a number of subtle differences in the cognitive processing underlying listening vs. reading comprehension. Copyright 2001 Wiley-Liss, Inc.

  2. Coding strategies for cochlear implants under adverse environments

    NASA Astrophysics Data System (ADS)

    Tahmina, Qudsia

    Cochlear implants are electronic prosthetic devices that restore partial hearing in patients with severe to profound hearing loss. Although most coding strategies have significantly improved the perception of speech in quiet listening conditions, limitations remain on speech perception under adverse environments such as background noise, reverberation, and band-limited channels, and we propose strategies that improve the intelligibility of speech transmitted over telephone networks, of reverberated speech, and of speech in the presence of background noise. For telephone-processed speech, we examine the effects of adding low-frequency and high-frequency information to band-limited telephone speech. Four listening conditions were designed to simulate the receiving frequency characteristics of telephone handsets. Results indicated improvement in cochlear implant and bimodal listening when telephone speech was augmented with high-frequency information, and this study therefore provides support for the design of algorithms that extend the bandwidth towards higher frequencies. The results also indicated added benefit from hearing aids for bimodal listeners in all four types of listening conditions. Speech understanding in acoustically reverberant environments is always a difficult task for hearing-impaired listeners. Reverberated sound consists of direct sound, early reflections, and late reflections; late reflections are known to be detrimental to speech intelligibility. In this study, we propose a reverberation suppression strategy based on spectral subtraction to suppress the reverberant energy from late reflections. Results from listening tests for two reverberant conditions (RT60 = 0.3 s and 1.0 s) indicated significant improvement when stimuli were processed with the SS strategy. The proposed strategy operates with little to no prior information on the signal and the room characteristics and can therefore potentially be implemented in real-time CI speech processors. For speech in background noise, we propose a mechanism underlying the contribution of harmonics to the benefit of electroacoustic stimulation in cochlear implants. The proposed strategy is based on harmonic modeling and uses a synthesis-driven approach to synthesize the harmonics in voiced segments of speech. Based on objective measures, results indicated improvement in speech quality. This study warrants further work into the development of algorithms to regenerate the harmonics of voiced segments in the presence of noise.
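
    The dissertation's exact spectral-subtraction (SS) strategy is not given in this abstract. A common formulation in this family (in the spirit of Lebart-style dereverberation) models the late-reverberant power spectrum as a delayed copy of the observed power spectrum, attenuated according to the RT60 decay, and subtracts it; the sketch below follows that assumption, with illustrative parameter values.

```python
# Spectral subtraction of late reverberation: a generic sketch, not
# necessarily the dissertation's exact strategy. Late-reverberant
# power is modelled as a delayed, attenuated copy of observed power.
import numpy as np
from scipy.signal import stft, istft

def suppress_late_reverb(x, fs, rt60=1.0, late_ms=50.0, floor=0.1):
    nperseg, hop = 512, 256
    f, t, X = stft(x, fs, nperseg=nperseg)
    power = np.abs(X) ** 2
    delay = max(1, int(late_ms / 1000.0 * fs / hop))  # frames to "late" part
    # Reverberant power decays by 60 dB over RT60 seconds.
    decay = 10.0 ** (-6.0 * (late_ms / 1000.0) / rt60)
    late = np.zeros_like(power)
    late[:, delay:] = decay * power[:, :-delay]
    clean_power = np.maximum(power - late, floor * power)  # spectral floor
    X_clean = np.sqrt(clean_power) * np.exp(1j * np.angle(X))
    _, y = istft(X_clean, fs, nperseg=nperseg)
    return y
```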

  3. Listening, Play, and Social Attraction in the Mentoring of New Teachers

    ERIC Educational Resources Information Center

    Young, Raymond W.; Cates, Carl M.

    2010-01-01

    This study explores the roles of mentors and proteges as they manage dialectical tensions in a professional environment. Sixty-two first-year teachers in a county school district in the southeastern USA answered a questionnaire about their mentors' empathic and directive listening, playful communication, social attractiveness, and ability to help…

  4. Development of a Computer-Based Measure of Listening Comprehension of Science Talk

    ERIC Educational Resources Information Center

    Lin, Sheau-Wen; Liu, Yu; Chen, Shin-Feng; Wang, Jing-Ru; Kao, Huey-Lien

    2015-01-01

    The purpose of this study was to develop a computer-based assessment for elementary school students' listening comprehension of science talk within an inquiry-oriented environment. The development procedure had 3 steps: a literature review to define the framework of the test, collecting and identifying key constructs of science talk, and…

  5. Perception of Spectral Contrast by Hearing-Impaired Listeners

    ERIC Educational Resources Information Center

    Dreisbach, Laura E.; Leek, Marjorie R.; Lentz, Jennifer J.

    2005-01-01

    The ability to discriminate the spectral shapes of complex sounds is critical to accurate speech perception. Part of the difficulty experienced by listeners with hearing loss in understanding speech sounds in noise may be related to a smearing of the internal representation of the spectral peaks and valleys because of the loss of sensitivity and…

  6. Tutorial on the Psychophysics and Technology of Virtual Acoustic Displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Null, Cynthia (Technical Monitor)

    1998-01-01

    Virtual acoustics, also known as 3-D sound and auralization, is the simulation of the complex acoustic field experienced by a listener within an environment. Going beyond the simple intensity panning of normal stereo techniques, the goal is to process sounds so that they appear to come from particular locations in three-dimensional space. Although loudspeaker systems are being developed, most of the recent work focuses on using headphones for playback and is the outgrowth of earlier analog techniques. For example, in binaural recording, the sound of an orchestra playing classical music is recorded through small mics in the two "ear canals" of an anthropomorphic artificial or "dummy" head placed in the audience of a concert hall. When the recorded piece is played back over headphones, the listener passively experiences the illusion of hearing the violins on the left and the cellos on the right, along with all the associated echoes, resonances, and ambience of the original environment. Current techniques use digital signal processing to synthesize the acoustical properties that people use to localize a sound source in space. Thus, they provide the flexibility of a kind of digital dummy head, allowing a more active experience in which a listener can both design and move around or interact with a simulated acoustic environment in real time. Such simulations are being developed for a variety of application areas including architectural acoustics, advanced human-computer interfaces, telepresence and virtual reality, navigation aids for the visually-impaired, and as a test bed for psychoacoustical investigations of complex spatial cues. The tutorial will review the basic psychoacoustical cues that determine human sound localization and the techniques used to measure these cues as Head-Related Transfer Functions (HRTFs) for the purpose of synthesizing virtual acoustic environments. The only conclusive test of the adequacy of such simulations is an operational one in which the localization of real and synthesized stimuli are directly compared in psychophysical studies. To this end, the results of psychophysical experiments examining the perceptual validity of the synthesis technique will be reviewed and factors that can enhance perceptual accuracy and realism will be discussed. Of particular interest is the relationship between individual differences in HRTFs and in behavior, the role of reverberant cues in reducing the perceptual errors observed with virtual sound sources, and the importance of developing perceptually valid methods of simplifying the synthesis technique. Recent attempts to implement the synthesis technique in real time systems will also be discussed and an attempt made to interpret their quoted system specifications in terms of perceptual performance. Finally, some critical research and technology development issues for the future will be outlined.
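
    The synthesis technique described in this tutorial reduces, at its core, to filtering a sound source through a left/right pair of head-related impulse responses (HRIRs). The sketch below shows that step with stand-in two-tap HRIRs (an interaural delay plus a level difference) so it runs stand-alone; a real virtual acoustic display would use measured HRTFs, head tracking, and room modelling.

```python
# Core binaural-rendering step of virtual acoustics: convolve a mono
# source with left/right HRIRs. The HRIRs here are placeholders.
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right])            # (2, n) binaural signal

fs = 44100
itd = int(0.0006 * fs)                        # ~0.6-ms interaural delay
hrir_l = np.zeros(64); hrir_l[0] = 1.0        # near (left) ear: direct, louder
hrir_r = np.zeros(64); hrir_r[itd] = 0.5      # far ear: delayed, quieter
src = np.random.default_rng(2).normal(size=fs)    # 1 s of noise
binaural = render_binaural(src, hrir_l, hrir_r)   # heard toward the left
```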

  7. Evaluation of Adaptive Noise Management Technologies for School-Age Children with Hearing Loss.

    PubMed

    Wolfe, Jace; Duke, Mila; Schafer, Erin; Jones, Christine; Rakita, Lori

    2017-05-01

    Children with hearing loss experience significant difficulty understanding speech in noisy and reverberant situations. Adaptive noise management technologies, such as fully adaptive directional microphones and digital noise reduction, have the potential to improve communication in noise for children with hearing aids. However, there are no published studies evaluating the potential benefits children receive from the use of adaptive noise management technologies in simulated real-world environments as well as in daily situations. The objective of this study was to compare speech recognition, speech intelligibility ratings (SIRs), and sound preferences of children using hearing aids equipped with and without adaptive noise management technologies. A single-group, repeated measures design was used to evaluate performance differences obtained in four simulated environments. In each simulated environment, participants were tested in a basic listening program with minimal noise management features, a manual program designed for that scene, and the hearing instruments' adaptive operating system that steered hearing instrument parameterization based on the characteristics of the environment. Twelve children with mild to moderately severe sensorineural hearing loss. Speech recognition and SIRs were evaluated in three hearing aid programs with and without noise management technologies across two different test sessions and various listening environments. Also, the participants' perceptual hearing performance in daily real-world listening situations with two of the hearing aid programs was evaluated during a four- to six-week field trial that took place between the two laboratory sessions. On average, the use of adaptive noise management technology improved sentence recognition in noise for speech presented in front of the participant but resulted in a decrement in performance for signals arriving from behind when the participant was facing forward. However, the improvement with adaptive noise management exceeded the decrement obtained when the signal arrived from behind. Most participants reported better subjective SIRs when using adaptive noise management technologies, particularly when the signal of interest arrived from in front of the listener. In addition, most participants reported a preference for the technology with an automatically switching, adaptive directional microphone and adaptive noise reduction in real-world listening situations when compared to conventional, omnidirectional microphone use with minimal noise reduction processing. Use of the adaptive noise management technologies evaluated in this study improves school-age children's speech recognition in noise for signals arriving from the front. Although a small decrement in speech recognition in noise was observed for signals arriving from behind the listener, most participants reported a preference for use of noise management technology both when the signal arrived from in front and from behind the child. The results of this study suggest that adaptive noise management technologies should be considered for use with school-age children when listening in academic and social situations. American Academy of Audiology

  8. Listening to the Narratives of Our Patients as Part of Holistic Nursing Care.

    PubMed

    Alicea-Planas, Jessica

    2016-06-01

    Nurses in all settings interact with individuals often identified as vulnerable or marginalized, and at times are frustrated by their own inability to "make a difference." By allowing oneself to listen, a fuller appreciation of the individual circumstance, or that which is unwritten, can be gained. Storytelling is a way to set the stage for experiences to be shared and can provide insight into lives. The narratives told by patients are often complex and personal, shaped by various influences of the environment, and, in combination with nursing, they inform patients' individual healing journeys. Using a philosophy of nursing that encompasses all of the distinct influences on these narratives can allow nurses to care and advocate for their patients more holistically. As this case study shows, nursing plays a significant role in the narratives of others. Although many vulnerable populations live in a perpetual cycle of poverty and poor health, some nurses are able to assess the intricacies of a situation and facilitate understanding as part of their support, caring, and advocacy for their patients. © The Author(s) 2015.

  9. Effects of noise on speech recognition: Challenges for communication by service members.

    PubMed

    Le Prell, Colleen G; Clavier, Odile H

    2017-06-01

    Speech communication often takes place in noisy environments; this is an urgent issue for military personnel who must communicate in high-noise environments. The effects of noise on speech recognition vary significantly according to the sources of noise, the number and types of talkers, and the listener's hearing ability. In this review, speech communication is first described as it relates to current standards of hearing assessment for military and civilian populations. The next section categorizes types of noise (also called maskers) according to their temporal characteristics (steady or fluctuating) and perceptive effects (energetic or informational masking). Next, speech recognition difficulties experienced by listeners with hearing loss and by older listeners are summarized, and questions on the possible causes of speech-in-noise difficulty are discussed, including recent suggestions of "hidden hearing loss". The final section describes tests used by military and civilian researchers, audiologists, and hearing technicians to assess performance of an individual in recognizing speech in background noise, as well as metrics that predict performance based on a listener and background noise profile. This article provides readers with an overview of the challenges associated with speech communication in noisy backgrounds, as well as its assessment and potential impact on functional performance, and provides guidance for important new research directions relevant not only to military personnel, but also to employees who work in high noise environments. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Musical backgrounds, listening habits, and aesthetic enjoyment of adult cochlear implant recipients.

    PubMed

    Gfeller, K; Christ, A; Knutson, J F; Witt, S; Murray, K T; Tyler, R S

    2000-01-01

    This paper describes the listening habits and musical enjoyment of postlingually deafened adults who use cochlear implants. Sixty-five implant recipients (35 females, 30 males) participated in a survey containing questions about musical background, prior involvement in music, and audiologic success with the implant in various listening circumstances. Responses were correlated with measures of cognition and speech recognition. Sixty-seven implant recipients completed daily diaries (7 consecutive days) in which they reported hours spent in specific music activities. Results indicate a wide range of success with music. In general, people enjoy music less postimplantation than prior to hearing loss. Musical enjoyment is influenced by the listening environment (e.g., a quiet room) and features of the music.

  11. The effects of reverberant self- and overlap-masking on speech recognition in cochlear implant listeners.

    PubMed

    Desmond, Jill M; Collins, Leslie M; Throckmorton, Chandra S

    2014-06-01

    Many cochlear implant (CI) listeners experience decreased speech recognition in reverberant environments [Kokkinakis et al., J. Acoust. Soc. Am. 129(5), 3221-3232 (2011)], which may be caused by a combination of self- and overlap-masking [Bolt and MacDonald, J. Acoust. Soc. Am. 21(6), 577-580 (1949)]. Determining the extent to which these effects decrease speech recognition for CI listeners may influence reverberation mitigation algorithms. This study compared speech recognition with ideal self-masking mitigation, with ideal overlap-masking mitigation, and with no mitigation. Under these conditions, mitigating either self- or overlap-masking resulted in significant improvements in speech recognition for both normal hearing subjects utilizing an acoustic model and for CI listeners using their own devices.

  12. Help Options for L2 Listening in CALL: A Research Agenda

    ERIC Educational Resources Information Center

    Cross, Jeremy

    2017-01-01

    In this article, I present an agenda for researching help options for second language (L2) listening in computer-assisted language learning (CALL) environments. I outline several theories which researchers in the area draw on, then present common points of concern identified from a review of related literature. This serves as a means to…

  13. Quieting: A Practical Guide to Noise Control. NBS Handbook 119.

    ERIC Educational Resources Information Center

    Berendt, Raymond D.; And Others

    This guide describes the ways in which sounds are generated, travel, and affect the listener's hearing and well-being. Recommendations are given for controlling noise at the source and along its path of travel, and for protecting the listener. Remedies are given for noise commonly encountered in homes, work environments, schools, while traveling,…

  14. Evaluating Listening and Speaking Skills in a Mobile Game-Based Learning Environment with Situational Contexts

    ERIC Educational Resources Information Center

    Hwang, Wu-Yuin; Shih, Timothy K.; Ma, Zhao-Heng; Shadiev, Rustam; Chen, Shu-Yu

    2016-01-01

    Game-based learning activities that facilitate students' listening and speaking skills were designed in this study. To participate in learning activities, students in the control group used traditional methods, while students in the experimental group used a mobile system. In our study, we looked into the feasibility of mobile game-based learning…

  15. Home Reading Environment and Brain Activation in Preschool Children Listening to Stories.

    PubMed

    Hutton, John S; Horowitz-Kraus, Tzipi; Mendelsohn, Alan L; DeWitt, Tom; Holland, Scott K

    2015-09-01

    Parent-child reading is widely advocated to promote cognitive development, including in recommendations from the American Academy of Pediatrics to begin this practice at birth. Although parent-child reading has been shown in behavioral studies to improve oral language and print concepts, quantifiable effects on the brain have not been previously studied. Our study used blood oxygen level-dependent functional magnetic resonance imaging to examine the relationship between home reading environment and brain activity during a story listening task in a sample of preschool-age children. We hypothesized that while listening to stories, children with greater home reading exposure would exhibit higher activation of left-sided brain regions involved with semantic processing (extraction of meaning). Nineteen 3- to 5-year-old children were selected from a longitudinal study of normal brain development. All completed blood oxygen level-dependent functional magnetic resonance imaging using an age-appropriate story listening task, where narrative alternated with tones. We performed a series of whole-brain regression analyses applying composite, subscale, and individual reading-related items from the validated StimQ-P measure of home cognitive environment as explanatory variables for neural activation. Higher reading exposure (StimQ-P Reading subscale score) was positively correlated (P < .05, corrected) with neural activation in the left-sided parietal-temporal-occipital association cortex, a "hub" region supporting semantic language processing, controlling for household income. In preschool children listening to stories, greater home reading exposure is positively associated with activation of brain areas supporting mental imagery and narrative comprehension, controlling for household income. These neural biomarkers may help inform eco-bio-developmental models of emergent literacy. Copyright © 2015 by the American Academy of Pediatrics.

  16. Content-specific coordination of listeners' to speakers' EEG during communication.

    PubMed

    Kuhlen, Anna K; Allefeld, Carsten; Haynes, John-Dylan

    2012-01-01

    Cognitive neuroscience has recently begun to extend its focus from the isolated individual mind to two or more individuals coordinating with each other. In this study we uncover a coordination of neural activity between the ongoing electroencephalogram (EEG) of two people: a person speaking and a person listening. The EEG of one set of twelve participants ("speakers") was recorded while they were narrating short stories. The EEG of another set of twelve participants ("listeners") was recorded while watching audiovisual recordings of these stories. Specifically, listeners watched the superimposed videos of two speakers simultaneously and were instructed to attend to one or the other speaker. This allowed us to isolate neural coordination due to processing the communicated content from the effects of sensory input. We find several neural signatures of communication. First, the EEG is more similar among listeners attending to the same speaker than among listeners attending to different speakers, indicating that listeners' EEG reflects content-specific information. Second, listeners' EEG activity correlates with the attended speaker's EEG, peaking at a time delay of about 12.5 s. This correlation takes place not only between homologous but also between non-homologous brain areas in speakers and listeners. A semantic analysis of the stories suggests that listeners coordinate with speakers at the level of complex semantic representations, so-called "situation models." With this study we link a coordination of neural activity between individuals directly to verbally communicated information.
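
    The delayed speaker-listener correlation reported here can be illustrated with a generic lagged-correlation routine. The toy below uses simulated time series rather than EEG: it scans positive lags (listener trailing speaker) and reports the lag at which the Pearson correlation peaks. The authors' actual pipeline (preprocessing, electrode selection, statistics) is not reproduced.

```python
# Generic lagged-correlation sketch for two time series.
import numpy as np

def lagged_correlation(speaker, listener, fs, max_lag_s=20.0):
    max_lag = int(max_lag_s * fs)
    r = []
    for lag in range(max_lag + 1):            # listener lags behind speaker
        a = speaker[:len(speaker) - lag] if lag else speaker
        b = listener[lag:]
        a = a - a.mean(); b = b - b.mean()
        r.append(np.dot(a, b) /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    best = int(np.argmax(r))
    return best / fs, r[best]                 # (peak lag in s, peak r)

# Toy check: a "listener" that trails the "speaker" by 12.5 s.
fs = 10                                       # 10-Hz feature time series
rng = np.random.default_rng(3)
spk = rng.normal(size=120 * fs)
lst = np.roll(spk, int(12.5 * fs)) + 0.5 * rng.normal(size=spk.size)
print(lagged_correlation(spk, lst, fs))       # peak near 12.5 s
```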

  17. The effect of language experience on perceptual normalization of Mandarin tones and non-speech pitch contours.

    PubMed

    Luo, Xin; Ashmore, Krista B

    2014-06-01

    Context-dependent pitch perception helps listeners recognize tones produced by speakers with different fundamental frequencies (f0s). The role of language experience in tone normalization remains unclear. In this cross-language study of tone normalization, native Mandarin and English listeners were asked to recognize Mandarin Tone 1 (high-flat) and Tone 2 (mid-rising) with a preceding Mandarin sentence. To further test whether context-dependent pitch perception is speech-specific or domain-general, both language groups were asked to identify non-speech flat and rising pitch contours with a preceding non-speech flat pitch contour. Results showed that both Mandarin and English listeners made more rising responses with non-speech than with speech stimuli, due to differences in spectral complexity and listening task between the two stimulus types. English listeners made more rising responses than Mandarin listeners with both speech and non-speech stimuli. Contrastive context effects (more rising responses in the high-f0 context than in the low-f0 context) were found with both speech and non-speech stimuli for Mandarin listeners, but not for English listeners. English listeners' lack of tone experience may have caused more rising responses and limited use of context f0 cues. These results suggest that context-dependent pitch perception in tone normalization is domain-general, but influenced by long-term language experience.

  18. Masking Release for Igbo and English.

    PubMed

    Ebem, Deborah U; Desloge, Joseph G; Reed, Charlotte M; Braida, Louis D; Uguru, Joy O

    2013-09-01

    In this research, we explored the effect of noise interruption rate on speech intelligibility. Specifically, we used the Hearing In Noise Test (HINT) procedure with the original HINT stimuli (English) and Igbo stimuli to assess speech reception ability in interrupted noise. For a given noise level, the HINT test provides an estimate of the signal-to-noise ratio (SNR) required for 50%-correct speech intelligibility. The SNR for 50%-correct intelligibility changes depending upon the interruption rate of the noise. This phenomenon (called masking release) has been studied extensively in English but not for Igbo, an African tonal language spoken predominantly in South Eastern Nigeria. This experiment explored and compared masking release for (i) native English speakers listening to English, (ii) native Igbo speakers listening to English, and (iii) native Igbo speakers listening to Igbo. Because Igbo is a tonal language and English is not, this design allowed us to compare masking release patterns in native speakers of tonal and non-tonal languages. For native English speakers listening to English HINT, the SNRs and masking release were orderly and consistent with other English HINT data for English speakers. For Igbo speakers listening to English HINT sentences, results varied more across the Igbo listeners than across the English listeners, likely reflecting different levels of English proficiency among the Igbo listeners; their masking release values in dB were also smaller than those of the English listeners. For Igbo speakers listening to Igbo, the SNRs for Igbo sentences were in general lower than for English/English and Igbo/English: the Igbo listeners could understand 50% of the Igbo sentences at SNRs below those required for English sentences by either native or non-native listeners. This result may be explained by the tonal and vowel-harmony features of the Igbo language, whose predictability can aid perception of Igbo utterances by Igbo subjects. In agreement with other studies, our results also show that in a noisy environment listeners perceive their native language better than a second language. Two factors plausibly contribute: native speakers are more familiar with the sounds of their language than second-language speakers, and because language is predictable, a native speaker may, even in noise, predict a succeeding word that is scarcely audible. These contextual effects are facilitated by familiarity.
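
    The masking-release measure used above is a simple difference of thresholds. The short Python sketch below shows the arithmetic with illustrative numbers that are not the study's data.

```python
# Minimal sketch of the masking-release computation: the release is the
# SRT improvement (in dB) going from steady to interrupted noise.
def masking_release(srt_steady_db: float, srt_interrupted_db: float) -> float:
    """Masking release in dB; positive means interruptions in the noise help."""
    return srt_steady_db - srt_interrupted_db

# Illustrative values only (not study data):
print(masking_release(srt_steady_db=-2.0, srt_interrupted_db=-14.0))  # 12.0 dB
```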

  19. Rethinking Classroom Participation: Listening to Silent Voices

    ERIC Educational Resources Information Center

    Schultz, Katherine

    2009-01-01

    Many educators understand how to gauge learning by paying close attention to student talk. Few know how to interpret and attend to student silence as a form of participation. In her new book, Katherine Schultz examines the complex role student silence can play in teaching and learning. Urging teachers to listen to student silence in new ways, this…

  20. Monitoring Student Listening Techniques: An Approach to Teaching the Foundations of a Skill.

    ERIC Educational Resources Information Center

    Swanson, Charles H.

    To teach listening as a discrete skill, teachers need a suitable definition of the word "skill." The author suggests defining a skill as a complex of techniques and behaviors from which performers select, depending upon the situation, to fulfill their purposes. The curricular design should be based on four components: (1) establishing attention,…

  1. Listening and understanding

    PubMed Central

    Parrott, Linda J.

    1984-01-01

    The activities involved in mediating reinforcement for a speaker's behavior constitute only one phase of a listener's reaction to verbal stimulation. Other phases include listening and understanding what a speaker has said. It is argued that the relative subtlety of these activities is reason for their careful scrutiny, not their complete neglect. Listening is conceptualized as a functional relation obtaining between the responding of an organism and the stimulating of an object. A current instance of listening is regarded as a point in the evolution of similar instances, whereby one's history of perceptual activity may be regarded as existing in one's current interbehavior. Understanding reactions are similarly analyzed; however, they are considerably more complex than listening reactions due to the preponderance of implicit responding involved in reactions of this type. Implicit responding occurs by way of substitute stimulation, and an analysis of the serviceability of verbal stimuli in this regard is made. Understanding is conceptualized as seeing, hearing, or otherwise reacting to actual things in the presence of their “names” alone. The value of an inferential analysis of listening and understanding is also discussed, with the conclusion that unless some attempt is made to elaborate on the nature and operation of these activities, the more apparent reinforcement mediational activities of a listener are merely asserted without an explanation for their occurrence. PMID:22478594

  2. Vowel normalization for accent: An investigation of perceptual plasticity in young adults

    NASA Astrophysics Data System (ADS)

    Evans, Bronwen G.; Iverson, Paul

    2004-05-01

    Previous work has emphasized the role of early experience in the ability to accurately perceive and produce foreign or foreign-accented speech. This study examines how listeners at a much later stage in language development-early adulthood-adapt to a non-native accent within the same language. A longitudinal study investigated whether listeners who had had no previous experience of living in multidialectal environments adapted their speech perception and production when attending university. Participants were tested before beginning university and then again 3 months later. An acoustic analysis of production was carried out and perceptual tests were used to investigate changes in word intelligibility and vowel categorization. Preliminary results suggest that listeners are able to adjust their phonetic representations and that these patterns of adjustment are linked to the changes in production that speakers typically make due to sociolinguistic factors when living in multidialectal environments.

  3. Cortical network differences in the sighted versus early blind for recognition of human-produced action sounds

    PubMed Central

    Lewis, James W.; Frum, Chris; Brefczynski-Lewis, Julie A.; Talkington, William J.; Walker, Nathan A.; Rapuano, Kristina M.; Kovach, Amanda L.

    2012-01-01

    Both sighted and blind individuals can readily interpret meaning behind everyday real-world sounds. In sighted listeners, we previously reported that regions along the bilateral posterior superior temporal sulci (pSTS) and middle temporal gyri (pMTG) are preferentially activated when presented with recognizable action sounds. These regions have generally been hypothesized to represent primary loci for complex motion processing, including visual biological motion processing and audio-visual integration. However, it remained unclear whether, or to what degree, life-long visual experience might impact functions related to hearing perception or memory of sound-source actions. Using functional magnetic resonance imaging (fMRI), we compared brain regions activated in congenitally blind versus sighted listeners in response to hearing a wide range of recognizable human-produced action sounds (excluding vocalizations) versus unrecognized, backward-played versions of those sounds. Here we show that recognized human action sounds commonly evoked activity in both groups along most of the left pSTS/pMTG complex, though with relatively greater activity in the right pSTS/pMTG by the blind group. These results indicate that portions of the postero-lateral temporal cortices contain domain-specific hubs for biological and/or complex motion processing independent of sensory-modality experience. Contrasting the two groups, the sighted listeners preferentially activated bilateral parietal plus medial and lateral frontal networks, while the blind listeners preferentially activated left anterior insula plus bilateral anterior calcarine and medial occipital regions, including what would otherwise have been visual-related cortex. These global-level network differences suggest that blind and sighted listeners may preferentially use different memory retrieval strategies when attempting to recognize action sounds. PMID:21305666

  4. Predictive uncertainty in auditory sequence processing

    PubMed Central

    Hansen, Niels Chr.; Pearce, Marcus T.

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty—a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music. PMID:25295018
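
    The entropy measure at the heart of this study is straightforward to compute once a model supplies a next-note distribution. Below is a minimal Python sketch with illustrative probabilities standing in for the study's variable-order Markov model estimates.

```python
# Minimal sketch: Shannon entropy of a model's next-note distribution as
# a measure of predictive uncertainty. Probabilities are illustrative.
import numpy as np

def shannon_entropy(p: np.ndarray) -> float:
    """Entropy in bits of a discrete probability distribution."""
    p = p[p > 0]                      # ignore zero-probability continuations
    return float(-np.sum(p * np.log2(p)))

low_entropy  = np.array([0.85, 0.05, 0.05, 0.05])  # one strongly expected note
high_entropy = np.full(12, 1 / 12)                 # all chromatic notes equally likely
print(shannon_entropy(low_entropy))    # ~0.85 bits -> low predictive uncertainty
print(shannon_entropy(high_entropy))   # ~3.58 bits -> high predictive uncertainty
```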

  5. How much does language proficiency by non-native listeners influence speech audiometric tests in noise?

    PubMed

    Warzybok, Anna; Brand, Thomas; Wagener, Kirsten C; Kollmeier, Birger

    2015-01-01

    The current study investigates the extent to which the linguistic complexity of three commonly employed speech recognition tests and second language proficiency influence speech recognition thresholds (SRTs) in noise in non-native listeners. SRTs were measured for non-natives and natives using three German speech recognition tests: the digit triplet test (DTT), the Oldenburg sentence test (OLSA), and the Göttingen sentence test (GÖSA). Sixty-four non-native and eight native listeners participated. Non-natives can show native-like SRTs in noise only for the linguistically easy speech material (DTT). Furthermore, the limitation of phonemic-acoustical cues in digit triplets affects speech recognition to the same extent in non-natives and natives. For more complex and less familiar speech materials, non-natives, ranging from basic to advanced proficiency in German, require on average 3-dB better signal-to-noise ratio for the OLSA and 6-dB for the GÖSA to obtain 50% speech recognition compared to native listeners. In clinical audiology, SRT measurements with a closed-set speech test (i.e. DTT for screening or OLSA test for clinical purposes) should be used with non-native listeners rather than open-set speech tests (such as the GÖSA or HINT), especially if a closed-set version in the patient's own native language is available.

  6. Neural Correlates of Auditory Perceptual Awareness and Release from Informational Masking Recorded Directly from Human Cortex: A Case Study.

    PubMed

    Dykstra, Andrew R; Halgren, Eric; Gutschalk, Alexander; Eskandar, Emad N; Cash, Sydney S

    2016-01-01

    In complex acoustic environments, even salient supra-threshold sounds sometimes go unperceived, a phenomenon known as informational masking. The neural basis of informational masking (and its release) has not been well-characterized, particularly outside auditory cortex. We combined electrocorticography in a neurosurgical patient undergoing invasive epilepsy monitoring with trial-by-trial perceptual reports of isochronous target-tone streams embedded in random multi-tone maskers. Awareness of such masker-embedded target streams was associated with a focal negativity between 100 and 200 ms and high-gamma activity (HGA) between 50 and 250 ms (both in auditory cortex on the posterolateral superior temporal gyrus) as well as a broad P3b-like potential (between ~300 and 600 ms) with generators in ventrolateral frontal and lateral temporal cortex. Unperceived target tones elicited drastically reduced versions of such responses, if at all. While it remains unclear whether these responses reflect conscious perception, itself, as opposed to pre- or post-perceptual processing, the results suggest that conscious perception of target sounds in complex listening environments may engage diverse neural mechanisms in distributed brain areas.

  7. Effects of Help Options in a Multimedia Listening Environment on L2 Vocabulary Acquisition

    ERIC Educational Resources Information Center

    Mohsen, Mohammed Ali

    2016-01-01

    Several types of help options have been incorporated into reading and listening comprehension activities to aid second language (L2) vocabulary acquisition. Textbook authors, teachers, and sometimes even students may pick and choose which help options they wish to use. In this paper, I investigate the effects of two help options in a multimedia…

  8. Music Analysis Down the (You) Tube? Exploring the Potential of Cross-Media Listening for the Music Classroom

    ERIC Educational Resources Information Center

    Webb, Michael

    2007-01-01

    School students' immersion in a rich entertainment media environment has implications for classroom listening. Increasing interaction among media, design, games, communications and arts fields has led to a growing trend in the creative alignment of music and moving image. Video sharing sites such as YouTube are assisting in the proliferation and…

  9. Dimensions Underlying the Perceived Similarity of Acoustic Environments

    PubMed Central

    Aletta, Francesco; Axelsson, Östen; Kang, Jian

    2017-01-01

    Scientific research on how people perceive or experience and/or understand the acoustic environment as a whole (i.e., soundscape) is still in development. In order to predict how people would perceive an acoustic environment, it is central to identify its underlying acoustic properties. This was the purpose of the present study. Three successive experiments were conducted. With the aid of 30 university students, the first experiment mapped the underlying dimensions of perceived similarity among 50 acoustic environments, using a visual sorting task of their spectrograms. Three dimensions were identified: (1) Distinguishable–Indistinguishable sound sources, (2) Background–Foreground sounds, and (3) Intrusive–Smooth sound sources. The second experiment was aimed to validate the results from Experiment 1 by a listening experiment. However, a majority of the 10 expert listeners involved in Experiment 2 used a qualitatively different approach than the 30 university students in Experiment 1. A third experiment was conducted in which 10 more expert listeners performed the same task as per Experiment 2, with spliced audio signals. Nevertheless, Experiment 3 provided a statistically significantly worse result than Experiment 2. These results suggest that information about the meaning of the recorded sounds could be retrieved in the spectrograms, and that the meaning of the sounds may be captured with the aid of holistic features of the acoustic environment, but such features are still unexplored and further in-depth research is needed in this field. PMID:28747894

  10. A Limiting Feature of the Mozart Effect: Listening Enhances Mental Rotation Abilities in Non-Musicians but Not Musicians

    ERIC Educational Resources Information Center

    Aheadi, Afshin; Dixon, Peter; Glover, Scott

    2010-01-01

    The "Mozart effect" occurs when performance on spatial cognitive tasks improves following exposure to Mozart. It is hypothesized that the Mozart effect arises because listening to complex music activates similar regions of the right cerebral hemisphere as are involved in spatial cognition. A counter-intuitive prediction of this hypothesis (and one…

  11. Children's Performance in Complex Listening Conditions: Effects of Hearing Loss and Digital Noise Reduction

    ERIC Educational Resources Information Center

    Pittman, Andrea

    2011-01-01

    Purpose: To determine the effect of hearing loss (HL) on children's performance for an auditory task under demanding listening conditions and to determine the effect of digital noise reduction (DNR) on that performance. Method: Fifty children with normal hearing (NH) and 30 children with HL (8-12 years of age) categorized words in the presence of…

  12. Virtual environment display for a 3D audio room simulation

    NASA Technical Reports Server (NTRS)

    Chapin, William L.; Foster, Scott H.

    1992-01-01

    The development of a virtual environment simulation system integrating a 3D acoustic audio model with an immersive 3D visual scene is discussed. The system complements the acoustic model and is specified to: allow the listener to freely move about the space, a room of manipulable size, shape, and audio character, while interactively relocating the sound sources; reinforce the listener's feeling of telepresence in the acoustical environment with visual and proprioceptive sensations; enhance the audio with the graphic and interactive components, rather than overwhelm or reduce it; and serve as a research testbed and technology transfer demonstration. The hardware/software design of two demonstration systems, one installed and one portable, are discussed through the development of four iterative configurations.

  13. Content-specific coordination of listeners' to speakers' EEG during communication

    PubMed Central

    Kuhlen, Anna K.; Allefeld, Carsten; Haynes, John-Dylan

    2012-01-01

    Cognitive neuroscience has recently begun to extend its focus from the isolated individual mind to two or more individuals coordinating with each other. In this study we uncover a coordination of neural activity between the ongoing electroencephalogram (EEG) of two people—a person speaking and a person listening. The EEG of one set of twelve participants (“speakers”) was recorded while they were narrating short stories. The EEG of another set of twelve participants (“listeners”) was recorded while watching audiovisual recordings of these stories. Specifically, listeners watched the superimposed videos of two speakers simultaneously and were instructed to attend either to one or the other speaker. This allowed us to isolate neural coordination due to processing the communicated content from the effects of sensory input. We find several neural signatures of communication: First, the EEG is more similar among listeners attending to the same speaker than among listeners attending to different speakers, indicating that listeners' EEG reflects content-specific information. Secondly, listeners' EEG activity correlates with the attended speakers' EEG, peaking at a time delay of about 12.5 s. This correlation takes place not only between homologous, but also between non-homologous brain areas in speakers and listeners. A semantic analysis of the stories suggests that listeners coordinate with speakers at the level of complex semantic representations, so-called “situation models”. With this study we link a coordination of neural activity between individuals directly to verbally communicated information. PMID:23060770

  14. How age and linguistic competence alter the interplay of perceptual and cognitive factors when listening to conversations in a noisy environment

    PubMed Central

    Avivi-Reich, Meital; Daneman, Meredyth; Schneider, Bruce A.

    2013-01-01

    Multi-talker conversations challenge the perceptual and cognitive capabilities of older adults and those listening in their second language (L2). In older adults these difficulties could reflect declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. The tendency of L2 listeners to invoke some of the semantic and syntactic processes from their first language (L1) may interfere with speech comprehension in L2. These challenges might also force them to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up vs. top-down processes to speech comprehension. Younger and older L1s as well as young L2s listened to conversations played against a babble background, with or without spatial separation between the talkers and masker, when the spatial positions of the stimuli were specified either by loudspeaker placements (real location), or through use of the precedence effect (virtual location). After listening to a conversation, the participants were asked to answer questions regarding its content. Individual hearing differences were compensated for by creating the same degree of difficulty in identifying individual words in babble. Once compensation was applied, the number of questions correctly answered increased when a real or virtual spatial separation was introduced between babble and talkers. There was no evidence that performance differed between real and virtual locations. The contribution of vocabulary knowledge to dialog comprehension was found to be larger in the virtual conditions than in the real conditions, whereas the contribution of reading comprehension skill did not depend on the listening environment but rather differed as a function of age and language proficiency. The results indicate that the acoustic scene and the cognitive and linguistic competencies of listeners modulate how and when top-down resources are engaged in aid of speech comprehension. PMID:24578684

  15. How age and linguistic competence alter the interplay of perceptual and cognitive factors when listening to conversations in a noisy environment.

    PubMed

    Avivi-Reich, Meital; Daneman, Meredyth; Schneider, Bruce A

    2014-01-01

    Multi-talker conversations challenge the perceptual and cognitive capabilities of older adults and those listening in their second language (L2). In older adults these difficulties could reflect declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. The tendency of L2 listeners to invoke some of the semantic and syntactic processes from their first language (L1) may interfere with speech comprehension in L2. These challenges might also force them to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up vs. top-down processes to speech comprehension. Younger and older L1s as well as young L2s listened to conversations played against a babble background, with or without spatial separation between the talkers and masker, when the spatial positions of the stimuli were specified either by loudspeaker placements (real location), or through use of the precedence effect (virtual location). After listening to a conversation, the participants were asked to answer questions regarding its content. Individual hearing differences were compensated for by creating the same degree of difficulty in identifying individual words in babble. Once compensation was applied, the number of questions correctly answered increased when a real or virtual spatial separation was introduced between babble and talkers. There was no evidence that performance differed between real and virtual locations. The contribution of vocabulary knowledge to dialog comprehension was found to be larger in the virtual conditions than in the real conditions, whereas the contribution of reading comprehension skill did not depend on the listening environment but rather differed as a function of age and language proficiency. The results indicate that the acoustic scene and the cognitive and linguistic competencies of listeners modulate how and when top-down resources are engaged in aid of speech comprehension.

  16. The Effect of Conventional and Transparent Surgical Masks on Speech Understanding in Individuals with and without Hearing Loss.

    PubMed

    Atcherson, Samuel R; Mendel, Lisa Lucks; Baltimore, Wesley J; Patro, Chhayakanta; Lee, Sungmin; Pousson, Monique; Spann, M Joshua

    2017-01-01

    It is generally well known that speech perception is often improved with integrated audiovisual input whether in quiet or in noise. In many health-care environments, however, conventional surgical masks block visual access to the mouth and obscure other potential facial cues. In addition, these environments can be noisy. Although these masks may not alter the acoustic properties, the presence of noise in addition to the lack of visual input can have a deleterious effect on speech understanding. A transparent ("see-through") surgical mask may help to overcome this issue. The aim of this study was to compare the effect of noise and various visual input conditions on speech understanding for listeners with normal hearing (NH) and hearing impairment using different surgical masks. Participants were assigned to one of three groups based on hearing sensitivity in this quasi-experimental, cross-sectional study. A total of 31 adults participated in this study: one talker, ten listeners with NH, ten listeners with moderate sensorineural hearing loss, and ten listeners with severe-to-profound hearing loss. Selected lists from the Connected Speech Test were digitally recorded with and without surgical masks and then presented to the listeners at 65 dB HL in five conditions against a background of four-talker babble (+10 dB SNR): without a mask (auditory only), without a mask (auditory and visual), with a transparent mask (auditory only), with a transparent mask (auditory and visual), and with a paper mask (auditory only). A significant difference was found in the spectral analyses of the speech stimuli with and without the masks; however, the difference was no more than ∼2 dB root mean square. Listeners with NH performed consistently well across all conditions. Both groups of listeners with hearing impairment benefitted from visual input from the transparent mask. The magnitude of improvement in speech perception in noise was greatest for the severe-to-profound group. Findings confirm improved speech perception performance in noise for listeners with hearing impairment when visual input is provided using a transparent surgical mask. Most importantly, the use of the transparent mask did not negatively affect speech perception performance in noise. American Academy of Audiology

  17. Long-term temporal tracking of speech rate affects spoken-word recognition.

    PubMed

    Baese-Berk, Melissa M; Heffner, Christopher C; Dilley, Laura C; Pitt, Mark A; Morrill, Tuuli H; McAuley, J Devin

    2014-08-01

    Humans unconsciously track a wide array of distributional characteristics in their sensory environment. Recent research in spoken-language processing has demonstrated that the speech rate surrounding a target region within an utterance influences which words, and how many words, listeners hear later in that utterance. On the basis of hypotheses that listeners track timing information in speech over long timescales, we investigated the possibility that the perception of words is sensitive to speech rate over such a timescale (e.g., an extended conversation). Results demonstrated that listeners tracked variation in the overall pace of speech over an extended duration (analogous to that of a conversation that listeners might have outside the lab) and that this global speech rate influenced which words listeners reported hearing. The effects of speech rate became stronger over time. Our findings are consistent with the hypothesis that neural entrainment by speech occurs on multiple timescales, some lasting more than an hour. © The Author(s) 2014.

  18. Acoustic and social design of schools-ways to improve the school listening environment

    NASA Astrophysics Data System (ADS)

    Hagen, Mechthild

    2005-04-01

    Results of noise research indicate that communication, and as a result, teaching, learning and the social atmosphere are impeded by noise in schools. The development of strategies to reduce noise levels has often not been effective. A more promising approach seems to be to pro-actively support the ability to listen and to understand. The presentation describes the approach to an acoustic and social school design developed and explored within the project "GanzOhrSein" by the Education Department of the Ludwig-Maximilians-University of Munich. The scope includes an analysis of the current "school soundscape," an introduction to the concept of the project to improve individual listening abilities and the conditions for listening, as well as practical examples and relevant research results. We conclude that an acoustic school design should combine acoustic changes in classrooms with educational activities to support listening at schools and thus contribute to improving individual learning conditions and to reducing stress on both pupils and teachers.

  19. Remote listening and passive acoustic detection in a 3-D environment

    NASA Astrophysics Data System (ADS)

    Barnhill, Colin

    Teleconferencing environments are a necessity in business, education and personal communication. They allow for the communication of information to remote locations without the need for travel and the necessary time and expense required for that travel. Visual information can be communicated using cameras and monitors. The advantage of visual communication is that an image can capture multiple objects and convey them, using a monitor, to a large group of people regardless of the receiver's location. This is not the case for audio. Currently, most experimental teleconferencing systems' audio is based on stereo recording and reproduction techniques. The problem with this solution is that it is only effective for one or two receivers. To accurately capture a sound environment consisting of multiple sources and to recreate that for a group of people is an unsolved problem. This work will focus on new methods of multiple source 3-D environment sound capture and applications using these captured environments. Using spherical microphone arrays, it is now possible to capture a true 3-D environment. A spherical harmonic transform on the array's surface allows us to determine the basis functions (spherical harmonics) for all spherical wave solutions (up to a fixed order). This spherical harmonic decomposition (SHD) allows us to not only look at the time and frequency characteristics of an audio signal but also the spatial characteristics of an audio signal. In this way, a spherical harmonic transform is analogous to a Fourier transform in that a Fourier transform transforms a signal into the frequency domain and a spherical harmonic transform transforms a signal into the spatial domain. The SHD also decouples the input signals from the microphone locations. Using the SHD of a soundfield, new algorithms are available for remote listening, acoustic detection, and signal enhancement. The new algorithms presented in this paper show distinct advantages over previous detection and listening algorithms, especially for multiple speech sources and room environments. The algorithms use high order (spherical harmonic) beamforming and power signal characteristics for source localization and signal enhancement. These methods are applied to remote listening, surveillance, and teleconferencing.
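
    The discrete spherical harmonic transform the abstract analogizes to a Fourier transform can be sketched compactly. The Python example below projects one snapshot of array pressures onto spherical harmonics up to a fixed order; the microphone positions, uniform quadrature weights, and open-sphere simplification (no radial mode-strength equalization) are assumptions for illustration only.

```python
# Minimal sketch of a discrete spherical harmonic decomposition (SHD):
# project pressure samples from a spherical microphone array onto the
# spherical harmonics up to a fixed order. Real arrays also need proper
# quadrature weights and radial equalization, omitted here.
import numpy as np
from scipy.special import sph_harm   # signature: sph_harm(m, n, azimuth, colatitude)

def shd(pressure, azimuth, colatitude, weights, order):
    """Return SHD coefficients a[(n, m)] for one snapshot of array pressures."""
    coeffs = {}
    for n in range(order + 1):
        for m in range(-n, n + 1):
            Y = sph_harm(m, n, azimuth, colatitude)
            coeffs[(n, m)] = np.sum(weights * pressure * np.conj(Y))
    return coeffs

# Example: 32 microphones at (assumed) quasi-uniform positions on a sphere.
rng = np.random.default_rng(0)
az  = rng.uniform(0, 2 * np.pi, 32)
col = np.arccos(rng.uniform(-1, 1, 32))
w   = np.full(32, 4 * np.pi / 32)          # uniform quadrature weights (assumed)
p   = rng.standard_normal(32)              # one snapshot of sound pressure
print(shd(p, az, col, w, order=2)[(0, 0)]) # omnidirectional (n=0) component
```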

  20. Changes in Preference for Infant-Directed Speech in Low and Moderate Noise by 4.5- to 13-Month-Olds

    ERIC Educational Resources Information Center

    Newman, Rochelle S.; Hussain, Isma

    2006-01-01

    Although a large literature discusses infants' preference for infant-directed speech (IDS), few studies have examined how this preference might change over time or across listening situations. The work reported here compares infants' preference for IDS while listening in a quiet versus a noisy environment, and across 3 points in development: 4.5…

  1. An Alternative to Language Learner Dependence on L2 Caption-Reading Input for Comprehension of Sitcoms in a Multimedia Learning Environment

    ERIC Educational Resources Information Center

    Li, C.-H.

    2014-01-01

    Most second/foreign language (L2) learners have difficulty understanding listening input because of its implicit and ephemeral nature, and they typically have better reading comprehension than listening comprehension skills. This study examines the effects of using an interactive advance-organizer activity on the DVD video comprehension of L2…

  2. Using Standardized Clients in the Classroom: An Evaluation of a Training Module to Teach Active Listening Skills to Social Work Students

    ERIC Educational Resources Information Center

    Rogers, Anissa; Welch, Benjamin

    2009-01-01

    This article describes the implementation of a module that utilizes drama students to teach social work students how to use active listening skills in an interview environment. The module was implemented during a semester-long micro skills practice course taught to 13 undergraduate social work seniors in a western liberal arts university. Four…

  3. Airlift Operation Modeling Using Discrete Event Simulation (DES)

    DTIC Science & Technology

    2009-12-01

    [Indexed excerpt garbled: table-of-contents dot leaders and acronym-list fragments. The recoverable details indicate the airlift models were developed in Java using the Simkit discrete event simulation library and Listener Event Graph Object (LEGO) components, running on the Java Runtime Environment (JRE)/Java Virtual Machine (JVM).]

  4. Mind-wandering and alterations to default mode network connectivity when listening to naturalistic versus artificial sounds.

    PubMed

    Gould van Praag, Cassandra D; Garfinkel, Sarah N; Sparasci, Oliver; Mees, Alex; Philippides, Andrew O; Ware, Mark; Ottaviani, Cristina; Critchley, Hugo D

    2017-03-27

    Naturalistic environments have been demonstrated to promote relaxation and wellbeing. We assess opposing theoretical accounts for these effects through investigation of autonomic arousal and alterations of activation and functional connectivity within the default mode network (DMN) of the brain while participants listened to sounds from artificial and natural environments. We found no evidence for increased DMN activity in the naturalistic compared to artificial or control condition, however, seed based functional connectivity showed a shift from anterior to posterior midline functional coupling in the naturalistic condition. These changes were accompanied by an increase in peak high frequency heart rate variability, indicating an increase in parasympathetic activity in the naturalistic condition in line with the Stress Recovery Theory of nature exposure. Changes in heart rate and the peak high frequency were correlated with baseline functional connectivity within the DMN and baseline parasympathetic tone respectively, highlighting the importance of individual neural and autonomic differences in the response to nature exposure. Our findings may help explain reported health benefits of exposure to natural environments, through identification of alterations to autonomic activity and functional coupling within the DMN when listening to naturalistic sounds.

  5. Mind-wandering and alterations to default mode network connectivity when listening to naturalistic versus artificial sounds

    PubMed Central

    Gould van Praag, Cassandra D.; Garfinkel, Sarah N.; Sparasci, Oliver; Mees, Alex; Philippides, Andrew O.; Ware, Mark; Ottaviani, Cristina; Critchley, Hugo D.

    2017-01-01

    Naturalistic environments have been demonstrated to promote relaxation and wellbeing. We assess opposing theoretical accounts for these effects through investigation of autonomic arousal and alterations of activation and functional connectivity within the default mode network (DMN) of the brain while participants listened to sounds from artificial and natural environments. We found no evidence for increased DMN activity in the naturalistic compared to artificial or control condition, however, seed based functional connectivity showed a shift from anterior to posterior midline functional coupling in the naturalistic condition. These changes were accompanied by an increase in peak high frequency heart rate variability, indicating an increase in parasympathetic activity in the naturalistic condition in line with the Stress Recovery Theory of nature exposure. Changes in heart rate and the peak high frequency were correlated with baseline functional connectivity within the DMN and baseline parasympathetic tone respectively, highlighting the importance of individual neural and autonomic differences in the response to nature exposure. Our findings may help explain reported health benefits of exposure to natural environments, through identification of alterations to autonomic activity and functional coupling within the DMN when listening to naturalistic sounds. PMID:28345604

  6. Foraging Ecology Predicts Learning Performance in Insectivorous Bats

    PubMed Central

    Clarin, Theresa M. A.; Ruczyński, Ireneusz; Page, Rachel A.

    2013-01-01

    Bats are unusual among mammals in showing great ecological diversity even among closely related species and are thus well suited for studies of adaptation to the ecological background. Here we investigate whether behavioral flexibility and simple- and complex-rule learning performance can be predicted by foraging ecology. We predict faster learning and higher flexibility in animals hunting in more complex, variable environments than in animals hunting in more simple, stable environments. To test this hypothesis, we studied three closely related insectivorous European bat species of the genus Myotis that belong to three different functional groups based on foraging habitats: M. capaccinii, an open water forager, M. myotis, a passive listening gleaner, and M. emarginatus, a clutter specialist. We predicted that M. capaccinii would show the least flexibility and slowest learning reflecting its relatively unstructured foraging habitat and the stereotypy of its natural foraging behavior, while the other two species would show greater flexibility and more rapid learning reflecting the complexity of their natural foraging tasks. We used a purposefully unnatural and thus species-fair crawling maze to test simple- and complex-rule learning, flexibility and re-learning performance. We found that M. capaccinii learned a simple rule as fast as the other species, but was slower in complex rule learning and was less flexible in response to changes in reward location. We found no differences in re-learning ability among species. Our results corroborate the hypothesis that animals’ cognitive skills reflect the demands of their ecological niche. PMID:23755146

  7. Music enjoyment with cochlear implantation.

    PubMed

    Prevoteau, Charlotte; Chen, Stephanie Y; Lalwani, Anil K

    2018-10-01

    Since the advent of cochlear implant (CI) surgery in the 1960s, there have been remarkable technological and surgical advances enabling excellent speech perception in quiet, with many CI users able to use the telephone. However, many CI users struggle with music perception, particularly with the pitch-based and melodic elements of music. Yet remarkably, despite poor music perception, many CI users enjoy listening to music based on self-report questionnaires, and prospective studies have suggested a dissociation between music perception and enjoyment. Music enjoyment is arguably a more functional measure of one's listening experience, and thus enhancing one's listening experience is a worthy goal. Recent studies have shown that re-engineering music to reduce its complexity may enhance enjoyment in CI users and also delineate differences in musical preferences from normal hearing listeners. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Judging sound rotation when listeners and sounds rotate: Sound source localization is a multisystem process.

    PubMed

    Yost, William A; Zhong, Xuan; Najam, Anbar

    2015-11-01

    In four experiments listeners were rotated or were stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate not when listeners rotate. In the everyday world sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about auditory cues used for sound source location and cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypotheses and suggest that sound source localization is not based just on acoustics. It is a multisystem process.
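
    The hypothesis described above amounts to a coordinate transformation: combine the head-centric azimuth given by the auditory cues with the head's current orientation to obtain a world-centric source direction. A minimal Python sketch (angles in degrees; names illustrative, not the paper's formulation):

```python
# Minimal sketch: recovering a world-centric source azimuth from a
# head-centric (cue-derived) azimuth plus head orientation.
def world_azimuth(head_centric_az: float, head_yaw: float) -> float:
    """Source azimuth relative to the room, given the cue-derived azimuth
    relative to the head and the head's yaw in room coordinates."""
    return (head_centric_az + head_yaw) % 360.0

# A stationary source at 90 deg in the room, heard while the head turns:
for yaw in (0.0, 45.0, 90.0):
    cue = (90.0 - yaw) % 360.0        # what the binaural cues alone indicate
    print(world_azimuth(cue, yaw))    # always 90.0 -> perceived as stationary
```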

  9. Perception of dissonance by people with normal hearing and sensorineural hearing loss

    NASA Astrophysics Data System (ADS)

    Tufts, Jennifer B.; Molis, Michelle R.; Leek, Marjorie R.

    2005-08-01

    The purpose of this study was to determine whether the perceived sensory dissonance of pairs of pure tones (PT dyads) or pairs of harmonic complex tones (HC dyads) is altered due to sensorineural hearing loss. Four normal-hearing (NH) and four hearing-impaired (HI) listeners judged the sensory dissonance of PT dyads geometrically centered at 500 and 2000 Hz, and of HC dyads with fundamental frequencies geometrically centered at 500 Hz. The frequency separation of the members of the dyads varied from 0 Hz to just over an octave. In addition, frequency selectivity was assessed at 500 and 2000 Hz for each listener. Maximum dissonance was perceived at frequency separations smaller than the auditory filter bandwidth for both groups of listeners, but maximum dissonance for HI listeners occurred at a greater proportion of their bandwidths at 500 Hz than at 2000 Hz. Further, their auditory filter bandwidths at 500 Hz were significantly wider than those of the NH listeners. For both the PT and HC dyads, curves displaying dissonance as a function of frequency separation were more compressed for the HI listeners, possibly reflecting less contrast between their perceptions of consonance and dissonance compared with the NH listeners.

  10. Comparison of speech recognition with adaptive digital and FM remote microphone hearing assistance technology by listeners who use hearing aids.

    PubMed

    Thibodeau, Linda

    2014-06-01

    The purpose of this study was to compare the benefits of 3 types of remote microphone hearing assistance technology (HAT), adaptive digital broadband, adaptive frequency modulation (FM), and fixed FM, through objective and subjective measures of speech recognition in clinical and real-world settings. Participants included 11 adults, ages 16 to 78 years, with primarily moderate-to-severe bilateral hearing impairment (HI), who wore binaural behind-the-ear hearing aids; and 15 adults, ages 18 to 30 years, with normal hearing. Sentence recognition in quiet and in noise and subjective ratings were obtained in 3 conditions of wireless signal processing. Performance by the listeners with HI when using the adaptive digital technology was significantly better than that obtained with the FM technology, with the greatest benefits at the highest noise levels. The majority of listeners also preferred the digital technology when listening in a real-world noisy environment. The wireless technology allowed persons with HI to surpass persons with normal hearing in speech recognition in noise, with the greatest benefit occurring with adaptive digital technology. The use of adaptive digital technology combined with speechreading cues would allow persons with HI to engage in communication in environments that would have otherwise not been possible with traditional wireless technology.

  11. Self-Monitoring of Listening Abilities in Normal-Hearing Children, Normal-Hearing Adults, and Children with Cochlear Implants

    PubMed Central

    Rothpletz, Ann M.; Wightman, Frederic L.; Kistler, Doris J.

    2012-01-01

    Background: Self-monitoring has been shown to be an essential skill for various aspects of our lives, including our health, education, and interpersonal relationships. Likewise, the ability to monitor one’s speech reception in noisy environments may be a fundamental skill for communication, particularly for those who are often confronted with challenging listening environments, such as students and children with hearing loss. Purpose: The purpose of this project was to determine if normal-hearing children, normal-hearing adults, and children with cochlear implants can monitor their listening ability in noise and recognize when they are not able to perceive spoken messages. Research Design: Participants were administered an Objective-Subjective listening task in which their subjective judgments of their ability to understand sentences from the Coordinate Response Measure corpus presented in speech spectrum noise were compared to their objective performance on the same task. Study Sample: Participants included 41 normal-hearing children, 35 normal-hearing adults, and 10 children with cochlear implants. Data Collection and Analysis: On the Objective-Subjective listening task, the level of the masker noise remained constant at 63 dB SPL, while the level of the target sentences varied over a 12 dB range in a block of trials. Psychometric functions, relating proportion correct (Objective condition) and proportion perceived as intelligible (Subjective condition) to target/masker ratio (T/M), were estimated for each participant. Thresholds were defined as the T/M required to produce 51% correct (Objective condition) and 51% perceived as intelligible (Subjective condition). Discrepancy scores between listeners’ threshold estimates in the Objective and Subjective conditions served as an index of self-monitoring ability. In addition, the normal-hearing children were administered tests of cognitive skills and academic achievement, and results from these measures were compared to findings on the Objective-Subjective listening task. Results: Nearly half of the children with normal hearing significantly overestimated their listening in noise ability on the Objective-Subjective listening task, compared to less than 9% of the adults. There was a significant correlation between age and results on the Objective-Subjective task, indicating that the younger children in the sample (age 7–12 yr) tended to overestimate their listening ability more than the adolescents and adults. Among the children with cochlear implants, eight of the 10 participants significantly overestimated their listening ability (as compared to 13 of the 24 normal-hearing children in the same age range). We did not find a significant relationship between results on the Objective-Subjective listening task and performance on the given measures of academic achievement or intelligence. Conclusions: Findings from this study suggest that many children with normal hearing and children with cochlear implants often fail to recognize when they encounter conditions in which their listening ability is compromised. These results may have practical implications for classroom learning, particularly for children with hearing loss in mainstream settings. PMID:22436118
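
    The threshold and discrepancy logic in this abstract can be illustrated with a short fit. The Python sketch below uses a logistic psychometric function and synthetic data; the 51% criterion follows the abstract, while the function form and the numbers are assumptions for illustration.

```python
# Minimal sketch: fit logistic psychometric functions to objective and
# subjective data, take the T/M ratio at 51%, and use the difference as
# the self-monitoring discrepancy score. Data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, mid, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - mid)))

def threshold_51(tm_db, proportion):
    (mid, slope), _ = curve_fit(logistic, tm_db, proportion, p0=(0.0, 1.0))
    return mid + np.log(0.51 / 0.49) / slope   # invert the logistic at p = 0.51

tm = np.linspace(-6, 6, 7)                      # target/masker ratios (dB)
objective  = logistic(tm, 0.0, 1.2)             # proportion correct
subjective = logistic(tm, -3.0, 1.2)            # proportion judged intelligible
discrepancy = threshold_51(tm, objective) - threshold_51(tm, subjective)
print(round(discrepancy, 1))                    # 3.0 dB -> listener overestimates
```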

  12. Selective attention in normal and impaired hearing.

    PubMed

    Shinn-Cunningham, Barbara G; Best, Virginia

    2008-12-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.

  13. Selective Attention in Normal and Impaired Hearing

    PubMed Central

    Shinn-Cunningham, Barbara G.; Best, Virginia

    2008-01-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention. PMID:18974202

  14. Binaural noise reduction via cue-preserving MMSE filter and adaptive-blocking-based noise PSD estimation

    NASA Astrophysics Data System (ADS)

    Azarpour, Masoumeh; Enzner, Gerald

    2017-12-01

    Binaural noise reduction, with applications for instance in hearing aids, has been a very significant challenge. This task relates to the optimal utilization of the available microphone signals for the estimation of the ambient noise characteristics and for the optimal filtering algorithm to separate the desired speech from the noise. The additional requirements of low computational complexity and low latency further complicate the design. A particular challenge results from the desired reconstruction of binaural speech input with spatial cue preservation. The latter essentially diminishes the utility of multiple-input/single-output filter-and-sum techniques such as beamforming. In this paper, we propose a comprehensive and effective signal processing configuration with which most of the aforementioned criteria can be met suitably. This relates especially to the requirement of efficient online adaptive processing for noise estimation and optimal filtering while preserving the binaural cues. Regarding noise estimation, we consider three different architectures: interaural-transfer-function (ITF), cross-relation (CR), and principal-component (PCA) target blocking. An objective comparison with two other noise PSD estimation algorithms demonstrates the superiority of the blocking-based noise estimators, especially the CR-based and ITF-based blocking architectures. Moreover, we present a new noise reduction filter based on minimum mean-square error (MMSE), which belongs to the class of common gain filters, hence being rigorous in terms of spatial cue preservation but also efficient and competitive for the acoustic noise reduction task. A formal real-time subjective listening test procedure is also developed in this paper. The proposed listening test enables a real-time assessment of the proposed computationally efficient noise reduction algorithms in a realistic acoustic environment, e.g., considering time-varying room impulse responses and the Lombard effect. The listening test outcome reveals that the signals processed by the blocking-based algorithms are significantly preferred over the noisy signal in terms of instantaneous noise attenuation. Furthermore, the listening test data analysis confirms the conclusions drawn based on the objective evaluation.
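
    The cue-preservation argument for common-gain filtering is easy to see in code: one real-valued gain per time-frequency bin, applied identically to both ears, cannot alter interaural level or time relations. The Python sketch below substitutes a generic Wiener-style gain for the paper's MMSE derivation and assumes the noise PSD estimate is already available; all names are illustrative.

```python
# Minimal sketch of a common-gain noise reduction filter in the STFT
# domain: one real gain, derived from estimated noise and noisy PSDs,
# is applied to both channels so interaural cues stay intact.
import numpy as np

def common_gain(left_stft, right_stft, noise_psd, g_min=0.1):
    """Apply one gain per time-frequency bin to both channels."""
    noisy_psd = 0.5 * (np.abs(left_stft) ** 2 + np.abs(right_stft) ** 2)
    gain = np.maximum(1.0 - noise_psd / np.maximum(noisy_psd, 1e-12), g_min)
    return gain * left_stft, gain * right_stft   # identical gain -> cues preserved

# Example on random STFT-like data (257 bins x 100 frames):
rng = np.random.default_rng(3)
L = rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100))
R = rng.standard_normal((257, 100)) + 1j * rng.standard_normal((257, 100))
noise = np.full((257, 100), 1.0)      # assumed noise PSD estimate
outL, outR = common_gain(L, R, noise)
```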

  15. Interactive Sound Propagation using Precomputation and Statistical Approximations

    NASA Astrophysics Data System (ADS)

    Antani, Lakulish

    Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise due to repeated interactions between sound waves and objects in the environment. In interactive applications, these effects must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects. The matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques---Ambient Reverberance and Aural Proxies---to provide approximate sound propagation effects in real time, based on only the portion of the environment immediately visible to the listener. These approaches lie at opposite ends of a spectrum of techniques for modeling sound propagation effects in interactive applications. The first approach emphasizes accuracy by modeling acoustic interactions between all parts of the scene; the second approach emphasizes efficiency by only taking the local environment of the listener into account. These methods have been used to efficiently generate acoustic walkthroughs of architectural models. They have also been integrated into a modern game engine, and can enable realistic, interactive sound propagation on commodity desktop PCs.
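
    The runtime benefit of precomputation can be illustrated with a toy transport operator: mutual reflections between scene patches are folded offline into one matrix whose inverse sums all reflection orders, so a query for a moving source and listener reduces to matrix-vector products. Everything in the sketch below (patch count, couplings, the operator itself) is a stand-in, not the dissertation's actual frequency-banded formulation.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 200                                   # hypothetical number of surface patches

        # Offline: T[i, j] = energy carried from patch j to patch i by one
        # reflection; (I - T)^-1 = I + T + T^2 + ... sums every reflection order.
        T = 0.3 * rng.random((n, n)) / n          # stand-in transport operator
        propagate = np.linalg.inv(np.eye(n) - T)  # precomputed once per static scene

        # Online: couple the source onto the patches, apply the precomputed
        # operator, and gather the resulting patch energies at the listener.
        emit = rng.random(n)                      # source-to-patch coupling (moves with source)
        gather = rng.random(n)                    # patch-to-listener coupling (moves with listener)
        level_at_listener = gather @ propagate @ emit
        print(level_at_listener)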

  16. Music-induced emotions can be predicted from a combination of brain activity and acoustic features.

    PubMed

    Daly, Ian; Williams, Duncan; Hallowell, James; Hwang, Faustina; Kirke, Alexis; Malik, Asad; Weaver, James; Miranda, Eduardo; Nasuto, Slawomir J

    2015-12-01

    It is widely acknowledged that music can communicate and induce a wide range of emotions in the listener. However, music is a highly complex audio signal composed of a wide range of complex time- and frequency-varying components. Additionally, music-induced emotions are known to differ greatly between listeners. Therefore, it is not immediately clear what emotions will be induced in a given individual by a piece of music. We attempt to predict the music-induced emotional response in a listener by measuring the activity in the listener's electroencephalogram (EEG). We combine these measures with acoustic descriptors of the music, an approach that allows us to consider music as a complex set of time-varying acoustic features, independently of any specific music theory. Regression models are found that allow us to predict the music-induced emotions of our participants with a correlation between the actual and predicted responses of up to r = 0.234, p < 0.001. This regression fit suggests that a significant proportion of the variance of the participants' music-induced emotions can be predicted by their neural activity and the properties of the music. Given the large amount of noise, non-stationarity, and non-linearity in both EEG and music, this is an encouraging result. Additionally, the combination of measures of brain activity and acoustic features describing the music played to our participants allows us to predict music-induced emotions with significantly higher accuracies than either feature type alone (p<0.01). Copyright © 2015 Elsevier Inc. All rights reserved.
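
    The modelling pattern described in this abstract (concatenating neural and acoustic features, regressing onto reported emotion, and correlating predicted with actual responses) can be sketched generically as below. The feature sets, the ridge regularizer, and the cross-validation scheme are placeholder assumptions, not the authors' pipeline.

        import numpy as np
        from scipy.stats import pearsonr
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(1)
        eeg_feats = rng.normal(size=(120, 16))       # e.g., per-excerpt EEG band-power features
        acoustic_feats = rng.normal(size=(120, 8))   # e.g., tempo, spectral centroid, ...
        valence = rng.normal(size=120)               # stand-in for reported emotion ratings

        X = np.hstack([eeg_feats, acoustic_feats])   # combined feature set
        pred = cross_val_predict(Ridge(alpha=1.0), X, valence, cv=10)
        r, p = pearsonr(valence, pred)               # actual-vs-predicted correlation
        print(f"r = {r:.3f}, p = {p:.3g}")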

  17. Adrenocorticotropin widens the focus of attention in humans. A nonlinear electroencephalographic analysis.

    PubMed

    Mölle, M; Albrecht, C; Marshall, L; Fehm, H L; Born, J

    1997-01-01

    This study examined the effects of ACTH 4-10, a fragment of adrenocorticotropin (ACTH) with known central nervous system (CNS) activity, on the dimensional complexity of the ongoing electroencephalographic (EEG) activity. Stressful stimuli cause ACTH to be released from the pituitary, and as a neuropeptide ACTH may concurrently exert adaptive influences on the brain's processing of these stimuli. Previous studies have indicated an impairing influence of ACTH on selective attention. Dimensional complexity of the EEG, which indexes the brain's way of stimulus processing, was evaluated while subjects performed tasks with different attention demands. Sixteen healthy men (23 to 33 years) were tested once after placebo and another time after administration of ACTH 4-10 (1.25 mg intravenously (i.v.), 30 minutes before testing). The EEG was recorded while subjects were presented with a dichotic listening task (consisting of the concurrent presentation of tone pips to the left and right ear). Subjects either a) listened to pips in both ears (divided attention), or b) listened selectively to pips in one ear (selective attention), or c) ignored all pips. Dimensional complexity of the EEG was higher during divided than selective attention. ACTH significantly increased the EEG complexity during selective attention, in particular over the midfrontal cortex (Fz, Cz). The effects support the view of a de-focusing action of ACTH during selective attention that could serve to improve the organism's adaptation to stress stimuli.
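
    The "dimensional complexity" measure used here belongs to the family of correlation-dimension estimates computed from a time-delay embedding of the EEG. As rough orientation, a Grassberger-Procaccia-style estimate can be sketched as follows; the embedding parameters are illustrative defaults, not those of the study.

        import numpy as np
        from scipy.spatial.distance import pdist

        def correlation_dimension(x, m=7, tau=2, n_radii=15):
            # Time-delay embedding of the scalar EEG series.
            n = len(x) - (m - 1) * tau
            emb = np.column_stack([x[i * tau:i * tau + n] for i in range(m)])
            # Correlation sum C(r): fraction of point pairs closer than r.
            d = pdist(emb, metric='chebyshev')
            pos = d[d > 0]
            radii = np.logspace(np.log10(np.percentile(pos, 5)),
                                np.log10(np.percentile(pos, 50)), n_radii)
            corr_sum = np.array([(d < r).mean() for r in radii])
            keep = corr_sum > 0
            # Slope of log C(r) vs log r in the scaling region estimates
            # the correlation dimension.
            return np.polyfit(np.log(radii[keep]), np.log(corr_sum[keep]), 1)[0]

        print(correlation_dimension(np.sin(0.1 * np.arange(3000))))  # ~1 for a limit cycle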

  18. Neurobiology of Everyday Communication: What Have We Learned From Music?

    PubMed

    Kraus, Nina; White-Schwoch, Travis

    2016-06-09

    Sound is an invisible but powerful force that is central to everyday life. Studies in the neurobiology of everyday communication seek to elucidate the neural mechanisms underlying sound processing, their stability, their plasticity, and their links to language abilities and disabilities. This sound processing lies at the nexus of cognitive, sensorimotor, and reward networks. Music provides a powerful experimental model to understand these biological foundations of communication, especially with regard to auditory learning. We review studies of music training that employ a biological approach to reveal the integrity of sound processing in the brain, the bearing these mechanisms have on everyday communication, and how these processes are shaped by experience. Together, these experiments illustrate that music works in synergistic partnerships with language skills and the ability to make sense of speech in complex, everyday listening environments. The active, repeated engagement with sound demanded by music making augments the neural processing of speech, eventually cascading to listening and language. This generalization from music to everyday communications illustrates both that these auditory brain mechanisms have a profound potential for plasticity and that sound processing is biologically intertwined with listening and language skills. A new wave of studies has pushed neuroscience beyond the traditional laboratory by revealing the effects of community music training in underserved populations. These community-based studies reinforce laboratory work and highlight how the auditory system achieves a remarkable balance between stability and flexibility in processing speech. Moreover, these community studies have the potential to inform health care, education, and social policy by lending a neurobiological perspective to their efficacy. © The Author(s) 2016.

  19. Using Graphical Notations to Assess Children's Experiencing of Simple and Complex Musical Fragments

    ERIC Educational Resources Information Center

    Verschaffel, Lieven; Reybrouck, Mark; Janssens, Marjan; Van Dooren, Wim

    2010-01-01

    The aim of this study was to analyze children's graphical notations as external representations of their experiencing when listening to simple sonic stimuli and complex musical fragments. More specifically, we assessed the impact of four factors on children's notations: age, musical background, complexity of the fragment, and most salient…

  20. Long-term usage of modern signal processing by listeners with severe or profound hearing loss: a retrospective survey.

    PubMed

    Keidser, Gitte; Hartley, David; Carter, Lyndal

    2008-12-01

    To investigate the long-term benefit of multichannel wide dynamic range compression (WDRC) alone and in combination with directional microphones and noise reduction/speech enhancement for listeners with severe or profound hearing loss. At the conclusion of a research project, 39 participants with severe or profound hearing loss were fitted with WDRC in one program and WDRC with directional microphones and speech enhancement enabled in a 2nd program. More than 2 years after the 1st participants exited the project, a retrospective survey was conducted to determine the participants' use of, and satisfaction with, the 2 programs. From the 30 returned questionnaires, it seems that WDRC is used with a high degree of satisfaction in general everyday listening situations. The reported benefit from the addition of a directional microphone and speech enhancement for listening in noisy environments was lower and varied among the users. This variable was significantly correlated with how much the program was used. The less frequent and more varied use of the program with directional microphones and speech enhancement activated in combination suggests that these features may be best offered in a 2nd listening program for listeners with severe or profound hearing loss.

  1. Eyes and ears: Using eye tracking and pupillometry to understand challenges to speech recognition.

    PubMed

    Van Engen, Kristin J; McLaughlin, Drew J

    2018-05-04

    Although human speech recognition is often experienced as relatively effortless, a number of common challenges can render the task more difficult. Such challenges may originate in talkers (e.g., unfamiliar accents, varying speech styles), in the environment (e.g., noise), or in listeners themselves (e.g., hearing loss, aging, different native language backgrounds). Each of these challenges can reduce the intelligibility of spoken language, but even when intelligibility remains high, they can place greater processing demands on listeners. Noisy conditions, for example, can lead to poorer recall for speech, even when it has been correctly understood. Speech intelligibility measures, memory tasks, and subjective reports of listener difficulty all provide critical information about the effects of such challenges on speech recognition. Eye tracking and pupillometry complement these methods by providing objective physiological measures of online cognitive processing during listening. Eye tracking records the moment-to-moment direction of listeners' visual attention, which is closely time-locked to unfolding speech signals, and pupillometry measures the moment-to-moment size of listeners' pupils, which dilate in response to increased cognitive load. In this paper, we review the uses of these two methods for studying challenges to speech recognition. Copyright © 2018. Published by Elsevier B.V.

  2. Cultural and demographic factors influencing noise exposure estimates from use of portable listening devices in an urban environment.

    PubMed

    Fligor, Brian J; Levey, Sandra; Levey, Tania

    2014-08-01

    This study examined listening levels and duration of portable listening devices (PLDs) used by people with diversity of ethnicity, education, music genre, and PLD manufacturer. The goal was to estimate participants' PLD noise exposure and identify factors influencing user behavior. This study measured listening levels of 160 adults in 2 New York City locations: (a) a quiet college campus and (b) Union Square, a busy interchange. Participants completed a questionnaire regarding demographics and PLD use. Ordinary least squares regression was used to explore the significance of demographic and behavioral factors. Average listening level was 94.1 dBA, with 99 of 160 (61.9%) and 92 of 159 (57.5%) exceeding the daily (LA8hn) and weekly (LAwkn) recommended exposure limits, respectively. African American participants listened at the highest average levels (99.8 dBA). A majority of PLD users exceeded recommended exposure levels. Factors significant for higher exposure were ethnicity and age; factors not significantly associated with exposure were gender, education, location, awareness of possible association between PLD use and noise-induced hearing loss, mode of transportation, device manufacturer, and music genre. Efforts to effect behavior changes to lessen noise-induced hearing loss risk from PLD use should be sensitive to the cultural differences within the targeted population.
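
    The daily and weekly normalizations used in exposure metrics of this kind follow the standard equal-energy (3-dB exchange rate) convention: a measured level is referenced to an 8-h day or a 40-h week. A minimal sketch, assuming that convention (the study's exact dose calculation may differ):

        import math

        def l_a8hn(laeq_dba, hours_per_day):
            # Normalize a measured listening level to an 8-h equivalent
            # exposure using the 3-dB (equal-energy) exchange rate.
            return laeq_dba + 10 * math.log10(hours_per_day / 8.0)

        def l_awkn(laeq_dba, hours_per_week):
            # Weekly analog, normalized to a 40-h week.
            return laeq_dba + 10 * math.log10(hours_per_week / 40.0)

        # E.g., 94.1 dBA (the study's average level) for 1 h/day is ~85.1 dBA
        # as an 8-h equivalent, right at the common 85-dBA limit.
        print(round(l_a8hn(94.1, 1.0), 1))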

  3. Planning Literacy Environments for Diverse Preschoolers

    ERIC Educational Resources Information Center

    Dennis, Lindsay R.; Lynch, Sharon A.; Stockall, Nancy

    2012-01-01

    "Emergent literacy" is defined as the developmental process beginning at birth in which children acquire the foundation for reading and writing, including language, listening comprehension, concepts of print, alphabetic knowledge, and phonological awareness. The environment within which emergent literacy skills develop is also an important…

  4. Music listening enhances cognitive recovery and mood after middle cerebral artery stroke.

    PubMed

    Särkämö, Teppo; Tervaniemi, Mari; Laitinen, Sari; Forsblom, Anita; Soinila, Seppo; Mikkonen, Mikko; Autti, Taina; Silvennoinen, Heli M; Erkkilä, Jaakko; Laine, Matti; Peretz, Isabelle; Hietanen, Marja

    2008-03-01

    We know from animal studies that a stimulating and enriched environment can enhance recovery after stroke, but little is known about the effects of an enriched sound environment on recovery from neural damage in humans. In humans, music listening activates a wide-spread bilateral network of brain regions related to attention, semantic processing, memory, motor functions, and emotional processing. Music exposure also enhances emotional and cognitive functioning in healthy subjects and in various clinical patient groups. The potential role of music in neurological rehabilitation, however, has not been systematically investigated. This single-blind, randomized, and controlled trial was designed to determine whether everyday music listening can facilitate the recovery of cognitive functions and mood after stroke. In the acute recovery phase, 60 patients with a left or right hemisphere middle cerebral artery (MCA) stroke were randomly assigned to a music group, a language group, or a control group. During the following two months, the music and language groups listened daily to self-selected music or audio books, respectively, while the control group received no listening material. In addition, all patients received standard medical care and rehabilitation. All patients underwent an extensive neuropsychological assessment, which included a wide range of cognitive tests as well as mood and quality of life questionnaires, one week (baseline), 3 months, and 6 months after the stroke. Fifty-four patients completed the study. Results showed that recovery in the domains of verbal memory and focused attention improved significantly more in the music group than in the language and control groups. The music group also experienced less depressed and confused mood than the control group. These findings demonstrate for the first time that music listening during the early post-stroke stage can enhance cognitive recovery and prevent negative mood. The neural mechanisms potentially underlying these effects are discussed.

  5. Network science and the effects of music preference on functional brain connectivity: from Beethoven to Eminem.

    PubMed

    Wilkins, R W; Hodges, D A; Laurienti, P J; Steen, M; Burdette, J H

    2014-08-28

    Most people choose to listen to music that they prefer or 'like' such as classical, country or rock. Previous research has focused on how different characteristics of music (i.e., classical versus country) affect the brain. Yet, when listening to preferred music--regardless of the type--people report they often experience personal thoughts and memories. To date, understanding how this occurs in the brain has remained elusive. Using network science methods, we evaluated differences in functional brain connectivity when individuals listened to complete songs. We show that a circuit important for internally-focused thoughts, known as the default mode network, was most connected when listening to preferred music. We also show that listening to a favorite song alters the connectivity between auditory brain areas and the hippocampus, a region responsible for memory and social emotion consolidation. Given that musical preferences are uniquely individualized phenomena and that music can vary in acoustic complexity and the presence or absence of lyrics, the consistency of our results was unexpected. These findings may explain why comparable emotional and mental states can be experienced by people listening to music that differs as widely as Beethoven and Eminem. The neurobiological and neurorehabilitation implications of these results are discussed.
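
    As a generic illustration of the network-science framing above, functional connectivity is commonly built by correlating regional time series and summarizing each region's connectedness. The sketch below is a minimal pattern only; the region count, threshold, and degree summary are placeholders, not the authors' pipeline.

        import numpy as np

        rng = np.random.default_rng(0)
        ts = rng.normal(size=(90, 200))    # 90 hypothetical brain regions x 200 time points
        fc = np.corrcoef(ts)               # region-by-region functional connectivity
        np.fill_diagonal(fc, 0.0)
        adjacency = fc > 0.3               # keep only the stronger couplings
        degree = adjacency.sum(axis=1)     # connectedness of each region
        print("most-connected region:", degree.argmax())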

  6. Network Science and the Effects of Music Preference on Functional Brain Connectivity: From Beethoven to Eminem

    PubMed Central

    Wilkins, R. W.; Hodges, D. A.; Laurienti, P. J.; Steen, M.; Burdette, J. H.

    2014-01-01

    Most people choose to listen to music that they prefer or ‘like’ such as classical, country or rock. Previous research has focused on how different characteristics of music (i.e., classical versus country) affect the brain. Yet, when listening to preferred music—regardless of the type—people report they often experience personal thoughts and memories. To date, understanding how this occurs in the brain has remained elusive. Using network science methods, we evaluated differences in functional brain connectivity when individuals listened to complete songs. We show that a circuit important for internally-focused thoughts, known as the default mode network, was most connected when listening to preferred music. We also show that listening to a favorite song alters the connectivity between auditory brain areas and the hippocampus, a region responsible for memory and social emotion consolidation. Given that musical preferences are uniquely individualized phenomena and that music can vary in acoustic complexity and the presence or absence of lyrics, the consistency of our results was unexpected. These findings may explain why comparable emotional and mental states can be experienced by people listening to music that differs as widely as Beethoven and Eminem. The neurobiological and neurorehabilitation implications of these results are discussed. PMID:25167363

  7. Perceptual weighting of individual and concurrent cues for sentence intelligibility: Frequency, envelope, and fine structure

    PubMed Central

    Fogerty, Daniel

    2011-01-01

    The speech signal may be divided into frequency bands, each containing temporal properties of the envelope and fine structure. For maximal speech understanding, listeners must allocate their perceptual resources to the most informative acoustic properties. Understanding this perceptual weighting is essential for the design of assistive listening devices that need to preserve these important speech cues. This study measured the perceptual weighting of young normal-hearing listeners for the envelope and fine structure in each of three frequency bands for sentence materials. Perceptual weights were obtained under two listening contexts: (1) when each acoustic property was presented individually and (2) when multiple acoustic properties were available concurrently. The processing method was designed to vary the availability of each acoustic property independently by adding noise at different levels. Perceptual weights were determined by correlating a listener’s performance with the availability of each acoustic property on a trial-by-trial basis. Results demonstrated that weights were (1) equal when acoustic properties were presented individually and (2) biased toward envelope and mid-frequency information when multiple properties were available. Results suggest a complex interaction between the available acoustic properties and the listening context in determining how best to allocate perceptual resources when listening to speech in noise. PMID:21361454
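
    The envelope/fine-structure decomposition described here is conventionally obtained from the analytic signal of each band-limited waveform. A minimal sketch follows; the filter order and band edges are illustrative, not the study's processing parameters.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        def envelope_and_fine_structure(x, fs, band):
            # Isolate one frequency band, then split it via the analytic signal.
            sos = butter(4, band, btype='bandpass', fs=fs, output='sos')
            analytic = hilbert(sosfiltfilt(sos, x))
            envelope = np.abs(analytic)                   # slow amplitude contour
            fine_structure = np.cos(np.angle(analytic))   # rapid carrier oscillation
            return envelope, fine_structure

        fs = 16000
        x = np.random.randn(fs)                           # stand-in for 1 s of speech
        for band in [(100, 800), (800, 2500), (2500, 6000)]:  # illustrative band edges
            env, tfs = envelope_and_fine_structure(x, fs, band)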

  8. What we expect is not always what we get: evidence for both the direction-of-change and the specific-stimulus hypotheses of auditory attentional capture.

    PubMed

    Nöstl, Anatole; Marsh, John E; Sörqvist, Patrik

    2014-01-01

    Participants were requested to respond to a sequence of visual targets while listening to a well-known lullaby. One of the notes in the lullaby was occasionally exchanged with a pattern deviant. Experiment 1 found that deviants capture attention as a function of the pitch difference between the deviant and the replaced/expected tone. However, when the pitch difference between the expected tone and the deviant tone is held constant, a violation to the direction-of-pitch change across tones can also capture attention (Experiment 2). Moreover, in more complex auditory environments, wherein it is difficult to build a coherent neural model of the sound environment from which expectations are formed, deviations can capture attention but it appears to matter less whether this is a violation from a specific stimulus or a violation of the current direction-of-change (Experiment 3). The results support the expectation violation account of auditory distraction and suggest that there are at least two different expectations that can be violated: One appears to be bound to a specific stimulus and the other would seem to be bound to a more global cross-stimulus rule such as the direction-of-change based on a sequence of preceding sound events. Factors like base-rate probability of tones within the sound environment might become the driving mechanism of attentional capture--rather than violated expectations--in complex sound environments.

  9. Interface Design Implications for Recalling the Spatial Configuration of Virtual Auditory Environments

    NASA Astrophysics Data System (ADS)

    McMullen, Kyla A.

    Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, until recently the concept of virtually walking through an auditory environment did not exist. Such an interface has numerous potential uses, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, to widely incorporate spatial audio into real-world systems, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impacts of practical scenarios, the present work assessed the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources. The present study also found that the presence of visual reference frames significantly increased recall accuracy. Additionally, the incorporation of drastic attenuation significantly improved environment recall accuracy. Through investigating the aforementioned concerns, the present study took initial steps toward guiding the design of virtual auditory environments that support spatial configuration recall.
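
    Rendering a source at a chosen direction from an HRTF database reduces, in its simplest form, to convolving the mono signal with the corresponding left- and right-ear impulse responses (HRIRs). A toy sketch follows; the delta-like HRIRs are fabricated solely to show the mechanics and would normally come from a measured set for the selected direction.

        import numpy as np
        from scipy.signal import fftconvolve

        def spatialize(mono, hrir_left, hrir_right):
            # The interaural time and level differences for one direction are
            # encoded in the HRIR pair, so convolution alone yields binaural audio.
            left = fftconvolve(mono, hrir_left)
            right = fftconvolve(mono, hrir_right)
            out = np.stack([left, right], axis=1)
            return out / np.max(np.abs(out))   # normalize for headphone playback

        # Toy HRIRs: the right ear leads by 8 samples (~0.5 ms at 16 kHz) and is
        # louder, placing the virtual source off to the listener's right.
        fs = 16000
        hrir_r = np.zeros(64); hrir_r[0] = 1.0
        hrir_l = np.zeros(64); hrir_l[8] = 0.7
        binaural = spatialize(np.random.randn(fs), hrir_l, hrir_r)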

  10. Binaural Speech Understanding With Bilateral Cochlear Implants in Reverberation.

    PubMed

    Kokkinakis, Kostas

    2018-03-08

    The purpose of this study was to investigate whether bilateral cochlear implant (CI) listeners who are fitted with clinical processors are able to benefit from binaural advantages under reverberant conditions. Another aim of this contribution was to determine whether the magnitude of each binaural advantage observed inside a highly reverberant environment differs significantly from the magnitude measured in a near-anechoic environment. Ten adults with postlingual deafness who are bilateral CI users fitted with either Nucleus 5 or Nucleus 6 clinical sound processors (Cochlear Corporation) participated in this study. Speech reception thresholds were measured in sound field and 2 different reverberation conditions (0.06 and 0.6 s) as a function of the listening condition (left, right, both) and the noise spatial location (left, front, right). The presence of the binaural effects of head-shadow, squelch, summation, and spatial release from masking in the 2 different reverberation conditions tested was determined using nonparametric statistical analysis. In the bilateral population tested, when the ambient reverberation time was equal to 0.6 s, results indicated strong positive effects of head-shadow and a weaker spatial release from masking advantage, whereas binaural squelch and summation contributed no statistically significant benefit to bilateral performance under this acoustic condition. These findings are consistent with those of previous studies, which have demonstrated that head-shadow yields the most pronounced advantage in noise. The finding that spatial release from masking produced little to almost no benefit in bilateral listeners is consistent with the hypothesis that additive reverberation degrades spatial cues and negatively affects binaural performance. The magnitude of 4 different binaural advantages was measured on the same group of bilateral CI subjects fitted with clinical processors in 2 different reverberation conditions. The results of this work demonstrate the impeding properties of reverberation on binaural speech understanding. In addition, results indicate that CI recipients who struggle in everyday listening environments are also more likely to benefit less in highly reverberant environments from their bilateral processors.
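
    The four binaural advantages named above are conventionally computed as differences between speech reception thresholds (SRTs) measured in contrasting ear/noise configurations, with lower SRTs indicating better performance. The sketch below is schematic; the condition keys are hypothetical and the study's exact configurations may differ.

        def binaural_advantages(srt):
            # srt maps (ears, noise_side) -> SRT in dB SNR; positive
            # differences below are advantages.
            return {
                # Head shadow: noise moved from the listening ear's side to the far side.
                'head_shadow': srt[('right', 'right')] - srt[('right', 'left')],
                # Squelch: adding the ear nearer the noise to the shadowed ear.
                'squelch': srt[('right', 'left')] - srt[('both', 'left')],
                # Summation: adding the second ear with speech and noise both frontal.
                'summation': srt[('right', 'front')] - srt[('both', 'front')],
                # Spatial release from masking: moving the noise away from frontal speech.
                'srm': srt[('both', 'front')] - srt[('both', 'left')],
            }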

  11. Child–Adult Differences in Using Dual-Task Paradigms to Measure Listening Effort

    PubMed Central

    Charles, Lauren M.; Ricketts, Todd A.

    2017-01-01

    Purpose The purpose of the project was to investigate the effects of modifying the secondary task in a dual-task paradigm to measure objective listening effort. To be specific, the complexity and depth of processing were increased relative to a simple secondary task. Method Three dual-task paradigms were developed for school-age children. The primary task was word recognition. The secondary task was a physical response to a visual probe (simple task), a physical response to a complex probe (increased complexity), or word categorization (increased depth of processing). Sixteen adults (22–32 years, M = 25.4) and 22 children (9–17 years, M = 13.2) were tested using the 3 paradigms in quiet and noise. Results For both groups, manipulations of the secondary task did not affect word recognition performance. For adults, increasing depth of processing increased the calculated effect of noise; however, for children, results with the deep secondary task were the least stable. Conclusions Manipulations of the secondary task differentially affected adults and children. Consistent with previous findings, increased depth of processing enhanced paradigm sensitivity for adults. However, younger participants were more likely to demonstrate the expected effects of noise on listening effort using a secondary task that did not require deep processing. PMID:28346816
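
    In dual-task paradigms of this kind, listening effort is typically inferred from how much secondary-task performance degrades under load. One common index is the proportional dual-task cost, sketched below; this is a standard convention, not necessarily the authors' exact calculation.

        def dual_task_cost(rt_noise, rt_quiet):
            # Proportional slowing of secondary-task response times from quiet
            # to noise; larger values are read as greater listening effort.
            return (rt_noise - rt_quiet) / rt_quiet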

  12. STS-55 MS3 Harris listens to technician during JSC WETF egress exercises

    NASA Technical Reports Server (NTRS)

    1992-01-01

    STS-55 Columbia, Orbiter Vehicle (OV) 102, Mission Specialist 3 (MS3) Bernard A. Harris, Jr, wearing launch and entry suit (LES), launch and entry helmet (LEH), and parachute, listens to technician Karen Porter's instructions prior to launch emergency egress (bailout) exercises. The session, held in JSC's Weightless Environment Training Facility (WETF) Bldg 29, used the facility's 25-foot deep pool to simulate the ocean as Harris and other crewmembers practiced water bailout procedures.

  13. Decisive Action Training Environment at the National Training Center. Volume IV

    DTIC Science & Technology

    2016-09-01

    Training Center (NTC) assumes a comprehensive approach to training the force. Operations Group is dedicated to fostering training proficiency in...is most commonly provided by showing the location on a map. This allows all personnel listening to the RTO to gain understanding of the report...out that the smoke is purple in color, and appears to have come from a smoke grenade. The listening staff members analyze the report: “It’s a FASCAM

  14. Distraction and Pedestrian Safety: How Talking on the Phone, Texting, and Listening to Music Impact Crossing the Street

    PubMed Central

    Schwebel, David C.; Stavrinos, Despina; Byington, Katherine W.; Davis, Tiffany; O’Neal, Elizabeth E.; de Jong, Desiree

    2011-01-01

    As use of handheld multimedia devices has exploded globally, safety experts have begun to consider the impact of distraction while talking, text-messaging, or listening to music on traffic safety. This study was designed to test how talking on the phone, texting, and listening to music may influence pedestrian safety. 138 college students crossed an interactive, semi-immersive virtual pedestrian street. They were randomly assigned to one of four groups: crossing while talking on the phone, crossing while texting, crossing while listening to a personal music device, or crossing while undistracted. Participants distracted by music or texting were more likely to be hit by a vehicle in the virtual pedestrian environment than were undistracted participants. Participants in all three distracted groups were more likely to look away from the street environment (and look toward other places, such as their telephone or music device) than were undistracted participants. Findings were maintained after controlling for demographics, walking frequency, and media use frequency. Distraction from multimedia devices has a small but meaningful impact on college students’ pedestrian safety. Future research should consider the cognitive demands of pedestrian safety, and how those processes may be impacted by distraction. Policymakers might consider ways to protect distracted pedestrians from harm and to reduce the number of individuals crossing streets while distracted. PMID:22269509

  15. Distraction and pedestrian safety: how talking on the phone, texting, and listening to music impact crossing the street.

    PubMed

    Schwebel, David C; Stavrinos, Despina; Byington, Katherine W; Davis, Tiffany; O'Neal, Elizabeth E; de Jong, Desiree

    2012-03-01

    As use of handheld multimedia devices has exploded globally, safety experts have begun to consider the impact of distraction while talking, text-messaging, or listening to music on traffic safety. This study was designed to test how talking on the phone, texting, and listening to music may influence pedestrian safety. 138 college students crossed an interactive, semi-immersive virtual pedestrian street. They were randomly assigned to one of four groups: crossing while talking on the phone, crossing while texting, crossing while listening to a personal music device, or crossing while undistracted. Participants distracted by music or texting were more likely to be hit by a vehicle in the virtual pedestrian environment than were undistracted participants. Participants in all three distracted groups were more likely to look away from the street environment (and look toward other places, such as their telephone or music device) than were undistracted participants. Findings were maintained after controlling for demographics, walking frequency, and media use frequency. Distraction from multimedia devices has a small but meaningful impact on college students' pedestrian safety. Future research should consider the cognitive demands of pedestrian safety, and how those processes may be impacted by distraction. Policymakers might consider ways to protect distracted pedestrians from harm and to reduce the number of individuals crossing streets while distracted. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Non-native Listeners’ Recognition of High-Variability Speech Using PRESTO

    PubMed Central

    Tamati, Terrin N.; Pisoni, David B.

    2015-01-01

    Background Natural variability in speech is a significant challenge to robust successful spoken word recognition. In everyday listening environments, listeners must quickly adapt and adjust to multiple sources of variability in both the signal and listening environments. High-variability speech may be particularly difficult to understand for non-native listeners, who have less experience with the second language (L2) phonological system and less detailed knowledge of sociolinguistic variation of the L2. Purpose The purpose of this study was to investigate the effects of high-variability sentences on non-native speech recognition and to explore the underlying sources of individual differences in speech recognition abilities of non-native listeners. Research Design Participants completed two sentence recognition tasks involving high-variability and low-variability sentences. They also completed a battery of behavioral tasks and self-report questionnaires designed to assess their indexical processing skills, vocabulary knowledge, and several core neurocognitive abilities. Study Sample Native speakers of Mandarin (n = 25) living in the United States recruited from the Indiana University community participated in the current study. A native comparison group consisted of scores obtained from native speakers of English (n = 21) in the Indiana University community taken from an earlier study. Data Collection and Analysis Speech recognition in high-variability listening conditions was assessed with a sentence recognition task using sentences from PRESTO (Perceptually Robust English Sentence Test Open-Set) mixed in 6-talker multitalker babble. Speech recognition in low-variability listening conditions was assessed using sentences from HINT (Hearing In Noise Test) mixed in 6-talker multitalker babble. Indexical processing skills were measured using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. Vocabulary knowledge was assessed with the WordFam word familiarity test, and executive functioning was assessed with the BRIEF-A (Behavioral Rating Inventory of Executive Function – Adult Version) self-report questionnaire. Scores from the non-native listeners on behavioral tasks and self-report questionnaires were compared with scores obtained from native listeners tested in a previous study and were examined for individual differences. Results Non-native keyword recognition scores were significantly lower on PRESTO sentences than on HINT sentences. Non-native listeners’ keyword recognition scores were also lower than native listeners’ scores on both sentence recognition tasks. Differences in performance on the sentence recognition tasks between non-native and native listeners were larger on PRESTO than on HINT, although group differences varied by signal-to-noise ratio. The non-native and native groups also differed in the ability to categorize talkers by region of origin and in vocabulary knowledge. Individual non-native word recognition accuracy on PRESTO sentences in multitalker babble at more favorable signal-to-noise ratios was found to be related to several BRIEF-A subscales and composite scores. However, non-native performance on PRESTO was not related to regional dialect categorization, talker and gender discrimination, or vocabulary knowledge. Conclusions High-variability sentences in multitalker babble were particularly challenging for non-native listeners. Difficulty under high-variability testing conditions was related to lack of experience with the L2, especially L2 sociolinguistic information, compared with native listeners. Individual differences among the non-native listeners were related to weaknesses in core neurocognitive abilities affecting behavioral control in everyday life. PMID:25405842

  17. A screening approach for classroom acoustics using web-based listening tests and subjective ratings.

    PubMed

    Persson Waye, Kerstin; Magnusson, Lennart; Fredriksson, Sofie; Croy, Ilona

    2015-01-01

    Perception of speech is crucial in school where speech is the main mode of communication. The aim of the study was to evaluate whether a web-based approach including listening tests and questionnaires could be used as a screening tool for poor classroom acoustics. The prime focus was the relation between pupils' comprehension of speech, the classroom acoustics and their description of the acoustic qualities of the classroom. In total, 1106 pupils aged 13-19, from 59 classes and 38 schools in Sweden participated in a listening study using Hagerman's sentences administered via the Internet. Four listening conditions were applied: high and low background noise level and positions close and far away from the loudspeaker. The pupils described the acoustic quality of the classroom and teachers provided information on the physical features of the classroom using questionnaires. In 69% of the classes, at least three pupils described the sound environment as adverse and in 88% of the classes one or more pupils reported often having difficulties concentrating due to noise. The pupils' comprehension of speech was strongly influenced by the background noise level (p<0.001) and distance to the loudspeakers (p<0.001). Of the physical classroom features, presence of suspended acoustic panels (p<0.05) and length of the classroom (p<0.01) predicted speech comprehension. Of the pupils' descriptions of acoustic qualities, clattery significantly (p<0.05) predicted speech comprehension. Clattery was furthermore associated with difficulties understanding each other, while the description noisy was associated with concentration difficulties. The majority of classrooms do not seem to have an optimal sound environment. The pupils' descriptions of acoustic qualities and listening tests can be one way of predicting sound conditions in the classroom.

  18. Spontaneous sensorimotor coupling with multipart music.

    PubMed

    Hurley, Brian K; Martens, Peter A; Janata, Petr

    2014-08-01

    Music often evokes spontaneous movements in listeners that are synchronized with the music, a phenomenon that has been characterized as being in "the groove." However, the musical factors that contribute to listeners' initiation of stimulus-coupled action remain unclear. Evidence suggests that newly appearing objects in auditory scenes orient listeners' attention, and that in multipart music, newly appearing instrument or voice parts can engage listeners' attention and elicit arousal. We posit that attentional engagement with music can influence listeners' spontaneous stimulus-coupled movement. Here, 2 experiments--involving participants with and without musical training--tested the effect of staggering instrument entrances across time and varying the number of concurrent instrument parts within novel multipart music on listeners' engagement with the music, as assessed by spontaneous sensorimotor behavior and self-reports. Experiment 1 assessed listeners' moment-to-moment ratings of perceived groove, and Experiment 2 examined their spontaneous tapping and head movements. We found that, for both musically trained and untrained participants, music with more instruments led to higher ratings of perceived groove, and that music with staggered instrument entrances elicited both increased sensorimotor coupling and increased reports of perceived groove. Although untrained participants were more likely to rate music as higher in groove, trained participants showed greater propensity for tapping along, and they did so more accurately. The quality of synchronization of head movements with the music, however, did not differ as a function of training. Our results shed new light on the relationship between complex musical scenes, attention, and spontaneous sensorimotor behavior.

  19. Objective Assessment of Listening Effort: Coregistration of Pupillometry and EEG.

    PubMed

    Miles, Kelly; McMahon, Catherine; Boisvert, Isabelle; Ibrahim, Ronny; de Lissa, Peter; Graham, Petra; Lyxell, Björn

    2017-01-01

    Listening to speech in noise is effortful, particularly for people with hearing impairment. While it is known that effort is related to a complex interplay between bottom-up and top-down processes, the cognitive and neurophysiological mechanisms contributing to effortful listening remain unknown. Therefore, a reliable physiological measure to assess effort remains elusive. This study aimed to determine whether pupil dilation and alpha power change, two physiological measures suggested to index listening effort, assess similar processes. Listening effort was manipulated by parametrically varying spectral resolution (16- and 6-channel noise vocoding) and speech reception thresholds (SRT; 50% and 80%) while 19 young, normal-hearing adults performed a speech recognition task in noise. Results of off-line sentence scoring showed discrepancies between the target SRTs and the true performance obtained during the speech recognition task. For example, in the SRT80% condition, participants scored an average of 64.7%. Participants' true performance levels were therefore used for subsequent statistical modelling. Results showed that both measures appeared to be sensitive to changes in spectral resolution (channel vocoding), while pupil dilation only was also significantly related to their true performance levels (%) and task accuracy (i.e., whether the response was correctly or partially recalled). The two measures were not correlated, suggesting they each may reflect different cognitive processes involved in listening effort. This combination of findings contributes to a growing body of research aiming to develop an objective measure of listening effort.
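
    Noise vocoding, used here to manipulate spectral resolution, replaces the fine spectral detail in each analysis band with band-limited noise shaped by that band's temporal envelope; fewer channels means coarser resolution. A minimal sketch follows; the filter order, band edges, and channel spacing are illustrative assumptions rather than the study's exact parameters.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        def noise_vocode(x, fs, n_channels=6, lo=100.0, hi=7000.0):
            # Log-spaced analysis bands across the speech range.
            edges = np.geomspace(lo, hi, n_channels + 1)
            noise = np.random.randn(len(x))
            out = np.zeros(len(x))
            for low, high in zip(edges[:-1], edges[1:]):
                sos = butter(4, (low, high), btype='bandpass', fs=fs, output='sos')
                env = np.abs(hilbert(sosfiltfilt(sos, x)))    # band envelope
                out += env * sosfiltfilt(sos, noise)          # envelope-modulated noise band
            return out / np.max(np.abs(out))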

  20. Relative Weighting of Semantic and Syntactic Cues in Native and Non-Native Listeners' Recognition of English Sentences.

    PubMed

    Shi, Lu-Feng; Koenig, Laura L

    2016-01-01

    Non-native listeners do not recognize English sentences as effectively as native listeners, especially in noise. It is not entirely clear to what extent such group differences arise from differences in relative weight of semantic versus syntactic cues. This study quantified the use and weighting of these contextual cues via Boothroyd and Nittrouer's j and k factors. The j represents the probability of recognizing sentences with or without context, whereas the k represents the degree to which context improves recognition performance. Four groups of 13 normal-hearing young adult listeners participated. One group consisted of native English monolingual (EMN) listeners, whereas the other three consisted of non-native listeners contrasting in their language dominance and first language: English-dominant Russian-English, Russian-dominant Russian-English, and Spanish-dominant Spanish-English bilinguals. All listeners were presented three sets of four-word sentences: high-predictability sentences included both semantic and syntactic cues, low-predictability sentences included syntactic cues only, and zero-predictability sentences included neither semantic nor syntactic cues. Sentences were presented at 65 dB SPL binaurally in the presence of speech-spectrum noise at +3 dB SNR. Listeners orally repeated each sentence and recognition was calculated for individual words as well as the sentence as a whole. Comparable j values across groups for high-predictability, low-predictability, and zero-predictability sentences suggested that all listeners, native and non-native, utilized contextual cues to recognize English sentences. Analysis of the k factor indicated that non-native listeners took advantage of syntax as effectively as EMN listeners. However, only English-dominant bilinguals utilized semantics to the same extent as EMN listeners; semantics did not provide a significant benefit for the two non-English-dominant groups. When combined, semantics and syntax benefitted EMN listeners significantly more than all three non-native groups of listeners. Language background influenced the use and weighting of semantic and syntactic cues in a complex manner. A native language advantage existed in the effective use of both cues combined. A language-dominance effect was seen in the use of semantics. No first-language effect was present for the use of either or both cues. For all non-native listeners, syntax contributed significantly more to sentence recognition than semantics, possibly due to the fact that semantics develops more gradually than syntax in second-language acquisition. The present study provides evidence that Boothroyd and Nittrouer's j and k factors can be successfully used to quantify the effectiveness of contextual cue use in clinically relevant, linguistically diverse populations.
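
    Boothroyd and Nittrouer's factors have simple closed forms: j follows from p_whole = p_part^j (the effective number of independent parts in a larger unit), and k from (1 - p_context) = (1 - p_no_context)^k (the degree to which context improves recognition). A small sketch with made-up recognition probabilities:

        import math

        def j_factor(p_whole, p_part):
            # From p_whole = p_part ** j: low j means the parts support
            # one another rather than being recognized independently.
            return math.log(p_whole) / math.log(p_part)

        def k_factor(p_with_context, p_without_context):
            # From (1 - p_c) = (1 - p_nc) ** k: k > 1 means context helps.
            return math.log(1.0 - p_with_context) / math.log(1.0 - p_without_context)

        # E.g., words recognized 70% of the time without context but 85%
        # of the time in high-predictability sentences:
        print(round(k_factor(0.85, 0.70), 2))   # ~1.58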

  1. Perceived noisiness under anechoic, semi-reverberant and earphone listening conditions

    NASA Technical Reports Server (NTRS)

    Clarke, F. R.; Kryter, K. D.

    1972-01-01

    Magnitude estimates by each of 31 listeners were obtained for a variety of noise sources under three methods of stimulus presentation: loudspeaker presentation in an anechoic chamber, loudspeaker presentation in a normal semi-reverberant room, and earphone presentation. Comparability of ratings obtained in these environments was evaluated with respect to predictability of ratings from physical measures, reliability of ratings, and the scale values assigned to various noise stimuli. Acoustic environment was found to have little effect upon physical predictive measures and ratings of perceived noisiness were little affected by the acoustic environment in which they were obtained. The need for further study of possible differing interactions between judged noisiness of steady-state sound and the methods of magnitude estimation and paired comparisons is indicated by the finding that in these tests the subjects, though instructed otherwise, apparently judged the maximum rather than the effective magnitude of steady-state noises.

  2. Demodulation processes in auditory perception

    NASA Astrophysics Data System (ADS)

    Feth, Lawrence L.

    1994-08-01

    The long-range goal of this project is the understanding of human auditory processing of information conveyed by complex, time-varying signals such as speech, music, or important environmental sounds. Our work is guided by the assumption that human auditory communication is a 'modulation-demodulation' process. That is, we assume that sound sources produce a complex stream of sound pressure waves with information encoded as variations (modulations) of the signal amplitude and frequency. The listener's task then is one of demodulation. Much of past psychoacoustics work has been based on what we characterize as 'spectrum picture processing.' Complex sounds are Fourier analyzed to produce an amplitude-by-frequency 'picture' and the perception process is modeled as if the listener were analyzing the spectral picture. This approach leads to studies such as 'profile analysis' and the power-spectrum model of masking. Our approach leads us to investigate time-varying, complex sounds. We refer to them as dynamic signals and we have developed auditory signal processing models to help guide our experimental work.
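
    The 'modulation-demodulation' framing has a direct signal-processing analog: the analytic signal separates a sound into an instantaneous envelope (the AM stream) and an instantaneous frequency (the FM stream). A toy demonstration on a synthetic rather than natural signal:

        import numpy as np
        from scipy.signal import hilbert

        fs = 8000
        t = np.arange(fs) / fs
        # Toy modulated source: 4-Hz amplitude modulation on a carrier that
        # glides from 500 to 600 Hz over one second.
        am = 1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)
        phase = 2 * np.pi * (500 * t + 50 * t ** 2)   # instantaneous f = 500 + 100 t
        x = am * np.cos(phase)

        # Demodulation: the analytic signal separates the two information streams.
        analytic = hilbert(x)
        envelope = np.abs(analytic)                   # recovers the AM
        inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)  # recovers the FM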

  3. Navigating the auditory scene: an expert role for the hippocampus.

    PubMed

    Teki, Sundeep; Kumar, Sukhbinder; von Kriegstein, Katharina; Stewart, Lauren; Lyness, C Rebecca; Moore, Brian C J; Capleton, Brian; Griffiths, Timothy D

    2012-08-29

    Over a typical career piano tuners spend tens of thousands of hours exploring a specialized acoustic environment. Tuning requires accurate perception and adjustment of beats in two-note chords that serve as a navigational device to move between points in previously learned acoustic scenes. It is a two-stage process that depends on the following: first, selective listening to beats within frequency windows, and, second, the subsequent use of those beats to navigate through a complex soundscape. The neuroanatomical substrates underlying brain specialization for such fundamental organization of sound scenes are unknown. Here, we demonstrate that professional piano tuners are significantly better than controls matched for age and musical ability on a psychophysical task simulating active listening to beats within frequency windows that is based on amplitude modulation rate discrimination. Tuners show a categorical increase in gray matter volume in the right frontal operculum and right superior temporal lobe. Tuners also show a striking enhancement of gray matter volume in the anterior hippocampus, parahippocampal gyrus, and superior temporal gyrus, and an increase in white matter volume in the posterior hippocampus as a function of years of tuning experience. The relationship with gray matter volume is sensitive to years of tuning experience and starting age but not actual age or level of musicality. Our findings support a role for a core set of regions in the hippocampus and superior temporal cortex in skilled exploration of complex sound scenes in which precise sound "templates" are encoded and consolidated into memory over time in an experience-dependent manner.
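
    The beats that tuners exploit arise when two nearly identical frequencies sum and the combined amplitude waxes and wanes at the difference frequency. A two-line synthesis makes the point (the frequencies are chosen arbitrarily):

        import numpy as np

        fs = 44100
        t = np.arange(fs) / fs                       # one second
        f1, f2 = 440.0, 443.0                        # a slightly mistuned unison
        chord = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
        beat_rate = abs(f2 - f1)                     # amplitude waxes and wanes 3x per second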

  4. An overview and guide: planning instructional radio.

    PubMed

    Imhoof, M

    1984-03-01

    Successful instructional radio projects require both comprehensive and complex planning. The instructional radio planning team needs to have knowledge and capabilities in several technical, social, and educational areas. Among other skills, the team must understand radio, curriculum design, the subject matter being taught, research and evaluation, and the environment in which the project operates. Once a basic approach to educational planning has been selected and broad educational goals set, radio may be selected as a cost-effective means of achieving some of the goals. Assuming radio is a wise choice, there are still several factors which must be analyzed by a team member who is a radio specialist. The most obvious consideration is the inventory and evaluation of the facilities: studios; broadcast, recording, and transmission equipment; classroom radios; and so on. Capabilities of broadcast personnel are another consideration. Initial radio lessons need to teach the learners how to listen to the radio if they have no previous experience with instructional radio broadcasts. A captive, in-school audience ready to listen to radio instruction requires a different use of the medium than a noncaptive audience. With the noncaptive audience, the educational broadcaster must compete with entertaining choices from other media and popular activities and pastimes of the community. The most complex knowledge and analysis required in planning instructional radio concerns the relationship of the content to the medium. Environmental factors are important in planning educational programs. The physical environment may present several constraints on the learning experience and the use of radio. The most obvious is the effect of climate and terrain on the quality of radio reception. The physical environment is easily studied through experience in the target area, but this knowledge plays a significant role in designing effective learning materials for specific learners. Social activities utilized in broadcasts which are contrary to the learners' experience will at best seem strange and at worst be incomprehensible. Curriculum development in an instructional radio project adds more complexity to the planner's task. The most important information needed is whether a new curriculum is to be developed or whether the existing curriculum is to be adapted for radio. Another major analysis task is relating the curriculum to the medium. The project planning team needs to understand the research aims and evaluation methods in instructional radio projects. Sometimes an outside evaluation specialist or team is employed, but in many projects the planning team is responsible for implementing the research design, carrying out the development activities, gathering data, and evaluating the project. Subject matter testing is another technical area of expertise needed by the project team.

  5. Musical Preferences as a Function of Stimulus Complexity of Piano Jazz

    ERIC Educational Resources Information Center

    Gordon, Josh; Gridley, Mark C.

    2013-01-01

    Seven excerpts of modern jazz piano improvisations were selected to represent a range of perceived complexities. Audio recordings of the excerpts were played for 27 listeners who were asked to indicate their level of enjoyment on 7-point scales. Indications of enjoyment followed an inverted-U when plotted against perceived complexity of the music.…

  6. How Can Music Influence the Autonomic Nervous System Response in Patients with Severe Disorder of Consciousness?

    PubMed

    Riganello, Francesco; Cortese, Maria D; Arcuri, Francesco; Quintieri, Maria; Dolce, Giuliano

    2015-01-01

    Activations to pleasant and unpleasant musical stimuli were observed within an extensive neuronal network and different brain structures, as well as in the processing of the syntactic and semantic aspects of the music. Previous studies evidenced a correlation between autonomic activity and emotion evoked by music listening in patients with Disorders of Consciousness (DoC). In this study, we retrospectively analyzed the autonomic response to musical stimuli by means of normalized units of Low Frequency (nuLF) and Sample Entropy (SampEn) of Heart Rate Variability (HRV) parameters, and their possible correlation to the different complexity of four musical samples (i.e., Mussorgsky, Tchaikovsky, Grieg, and Boccherini) in healthy subjects and Vegetative State/Unresponsive Wakefulness Syndrome (VS/UWS) patients. The complexity of each musical sample was based on Formal Complexity and General Dynamics parameters defined by Imberty's semiology studies. The results showed a significant difference between the two groups for SampEn during the listening of Mussorgsky's music and for nuLF during the listening of Boccherini and Mussorgsky's music. Moreover, the VS/UWS group showed a reduction of nuLF as well as SampEn comparing music of increasing Formal Complexity and General Dynamics. These results highlight how the internal structure of the music can change the autonomic response in patients with DoC. Further investigations are required to better comprehend how musical stimulation can modify the autonomic response in DoC patients, in order to administer the stimuli in a more effective way.
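
    Sample Entropy, one of the two HRV descriptors used here, is the negative log of the conditional probability that interval patterns matching for m beats keep matching for m + 1 beats; lower values indicate a more regular series. A compact sketch over an RR-interval series follows; m = 2 and a tolerance of 0.2 standard deviations are common defaults, not necessarily the study's settings.

        import numpy as np

        def sample_entropy(x, m=2, r_frac=0.2):
            x = np.asarray(x, dtype=float)
            r = r_frac * np.std(x)
            n = len(x)

            def matches(length):
                # All length-`length` templates, compared pairwise under the
                # Chebyshev (max-difference) distance; self-matches excluded.
                t = np.array([x[i:i + length] for i in range(n - m)])
                d = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
                return np.sum(d <= r) - len(t)

            # -log of P(match persists from m to m + 1 points).
            return -np.log(matches(m + 1) / matches(m))

        # E.g., a mildly irregular RR-interval series (in seconds):
        rr = 0.8 + 0.05 * np.random.randn(300)
        print(sample_entropy(rr))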

  7. Music cognition: a developmental perspective.

    PubMed

    Stalinski, Stephanie M; Schellenberg, E Glenn

    2012-10-01

    Although music is universal, there is a great deal of cultural variability in music structures. Nevertheless, some aspects of music processing generalize across cultures, whereas others rely heavily on the listening environment. Here, we discuss the development of musical knowledge, focusing on four themes: (a) capabilities that are present early in development; (b) culture-general and culture-specific aspects of pitch and rhythm processing; (c) age-related changes in pitch perception; and (d) developmental changes in how listeners perceive emotion in music. Copyright © 2012 Cognitive Science Society, Inc.

  8. The Breakthrough Listen Search for Intelligent Life

    NASA Astrophysics Data System (ADS)

    Croft, Steve; Siemion, Andrew; De Boer, David; Enriquez, J. Emilio; Foster, Griffin; Gajjar, Vishal; Hellbourg, Greg; Hickish, Jack; Isaacson, Howard; Lebofsky, Matt; MacMahon, David; Price, Daniel; Werthimer, Dan

    2018-01-01

    The $100M, 10-year philanthropic "Breakthrough Listen" project is driving an unprecedented expansion of the search for intelligent life beyond Earth. Modern instruments allow ever larger regions of parameter space (luminosity function, duty cycle, beaming fraction, frequency coverage) to be explored, which is enabling us to place meaningful physical limits on the prevalence of transmitting civilizations. Data volumes are huge and preclude long-term storage of the raw data products, so real-time and machine learning processing techniques must be employed to identify candidate signals while simultaneously classifying interfering sources. However, the Galaxy is now known to be a target-rich environment, teeming with habitable planets. Data from Breakthrough Listen can also be used by researchers in other areas of astronomy to study pulsars, fast radio bursts, and a range of other science targets. Breakthrough Listen is already underway in the optical and radio bands, and is also engaging with facilities across the world, including Square Kilometer Array precursors and pathfinders. I will give an overview of the technology, science goals, data products, and roadmap of Breakthrough Listen, as we attempt to answer one of humanity's oldest questions: Are we alone?

  9. Amplitude modulation detection by human listeners in reverberant sound fields: Carrier bandwidth effects and binaural versus monaural comparison.

    PubMed

    Zahorik, Pavel; Kim, Duck O; Kuwada, Shigeyuki; Anderson, Paul W; Brandewie, Eugene; Collecchia, Regina; Srinivasan, Nirmal

    2012-06-01

    Previous work [Zahorik et al., POMA, 12, 050005 (2011)] has reported that for a broadband noise carrier signal in a simulated reverberant sound field, human sensitivity to amplitude modulation (AM) is higher than would be predicted based on the broadband acoustical modulation transfer function (MTF) of the listening environment. Interpretation of this result was complicated by the fact that acoustical MTFs of rooms are often quite different for different carrier frequency regions, and listeners may have selectively responded to advantageous carrier frequency regions where the effective acoustic modulation loss due to the room was less than indicated by a broadband acoustic MTF analysis. Here, AM sensitivity testing and acoustic MTF analyses were expanded to include narrowband noise carriers (1-octave and 1/3-octave bands centered at 4 kHz), as well as monaural and binaural listening conditions. Narrowband results were found to be consistent with broadband results: In a reverberant sound field, human AM sensitivity is higher than indicated by the acoustical MTFs. The effect was greatest for modulation frequencies above 32 Hz and was present whether the stimulation was monaural or binaural. These results are suggestive of mechanisms that functionally enhance modulation in reverberant listening.
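
    The acoustical MTF referred to above is conventionally obtained from a room impulse response via the Schroeder relation, m(F) = |FFT of h^2(t) at F| / sum(h^2(t)). A minimal sketch, using a synthetic exponentially decaying impulse response rather than any measurement from the study:

    ```python
    # Sketch: acoustical modulation transfer function (MTF) of a room from its
    # impulse response, using the standard Schroeder relation
    #   m(F) = |FFT of h^2(t) at modulation frequency F| / sum(h^2).
    # The decaying noise below stands in for a measured impulse response.
    import numpy as np

    fs = 16000                                        # sample rate (Hz)
    t = np.arange(0, 1.0, 1.0 / fs)
    rng = np.random.default_rng(1)
    h = rng.normal(size=t.size) * np.exp(-t / 0.2)    # toy reverberant response

    energy = h ** 2
    spectrum = np.fft.rfft(energy)
    freqs = np.fft.rfftfreq(energy.size, 1.0 / fs)
    mtf = np.abs(spectrum) / np.sum(energy)

    for f_mod in (4, 8, 16, 32, 64):                  # modulation frequencies (Hz)
        k = np.argmin(np.abs(freqs - f_mod))
        print(f"m({f_mod:>2} Hz) = {mtf[k]:.3f}")
    ```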

  10. Listening to music reduces eye movements.

    PubMed

    Schäfer, Thomas; Fachner, Jörg

    2015-02-01

    Listening to music can change the way that people visually experience the environment, probably as a result of an inwardly directed shift of attention. We investigated whether this attentional shift can be demonstrated by reduced eye movement activity, and if so, whether that reduction depends on absorption. Participants listened to their preferred music, to unknown neutral music, or to no music while viewing a visual stimulus (a picture or a film clip). Preference and absorption were significantly higher for the preferred music than for the unknown music. Participants exhibited longer fixations, fewer saccades, and more blinks when they listened to music than when they sat in silence. However, no differences emerged between the preferred music condition and the neutral music condition. Thus, music significantly reduces eye movement activity, but an attentional shift from the outer to the inner world (i.e., to the emotions and memories evoked by the music) emerged as only one potential explanation. Other explanations, such as a shift of attention from visual to auditory input, are discussed.

  11. Increase in Synchronization of Autonomic Rhythms between Individuals When Listening to Music

    PubMed Central

    Bernardi, Nicolò F.; Codrons, Erwan; di Leo, Rita; Vandoni, Matteo; Cavallaro, Filippo; Vita, Giuseppe; Bernardi, Luciano

    2017-01-01

    In light of theories postulating a role for music in forming emotional and social bonds, here we investigated whether endogenous rhythms synchronize between multiple individuals when listening to music. Cardiovascular and respiratory recordings were taken from multiple individuals (musically trained or music-naïve) simultaneously, at rest and during a live concert comprising music excerpts with varying degrees of complexity of the acoustic envelope. Inter-individual synchronization of cardiorespiratory rhythms showed a subtle but reliable increase during passive listening to music compared to baseline. The low-level auditory features of the music were largely responsible for creating or disrupting such synchronism, explaining ~80% of its variance, over and above subjective musical preferences and previous musical training. Listening to simple rhythms and melodies, which largely dominate the choice of music during rituals and mass events, brings individuals together in terms of their physiological rhythms, which could explain why music is widely used to favor social bonds. PMID:29089898
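
    The simplest index of the inter-individual synchronization described above is the correlation between two listeners' heart-rate series on a common time base. A toy sketch with synthetic series that share a common music-driven component:

    ```python
    # Sketch: a simple index of inter-individual synchronization -- the
    # correlation between two listeners' instantaneous heart-rate series
    # sampled on a common time base. Both series here are synthetic.
    import numpy as np

    rng = np.random.default_rng(2)
    shared = np.sin(2 * np.pi * 0.1 * np.arange(600) / 4.0)  # music-driven component
    hr_a = 70 + 3 * shared + rng.normal(0, 1, 600)           # listener A (bpm)
    hr_b = 68 + 3 * shared + rng.normal(0, 1, 600)           # listener B (bpm)

    sync = np.corrcoef(hr_a, hr_b)[0, 1]
    print(f"Inter-individual HR correlation: {sync:.2f}")
    ```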

  12. Difference in precedence effect between children and adults signifies development of sound localization abilities in complex listening tasks

    PubMed Central

    Litovsky, Ruth Y.; Godar, Shelly P.

    2010-01-01

    The precedence effect refers to the fact that humans are able to localize sound in reverberant environments, because the auditory system assigns greater weight to the direct sound (lead) than the later-arriving sound (lag). In this study, absolute sound localization was studied for single source stimuli and for dual source lead-lag stimuli in 4–5 year old children and adults. Lead-lag delays ranged from 5–100 ms. Testing was conducted in free field, with pink noise bursts emitted from loudspeakers positioned on a horizontal arc in the frontal field. Listeners indicated how many sounds were heard and the perceived location of the first- and second-heard sounds. Results suggest that at short delays (up to 10 ms), the lead dominates sound localization strongly at both ages, and localization errors are similar to those with single-source stimuli. At longer delays errors can be large, stemming from over-integration of the lead and lag, interchanging of perceived locations of the first-heard and second-heard sounds due to temporal order confusion, and dominance of the lead over the lag. The errors are greater for children than adults. Results are discussed in the context of maturation of auditory and non-auditory factors. PMID:20968369
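
    Lead-lag stimuli of the kind used here are easy to construct: the same burst is assigned to two channels with a short onset delay between them. A sketch, using white noise for simplicity where the study used pink noise bursts:

    ```python
    # Sketch: building a lead-lag stimulus pair of the kind used in precedence-
    # effect experiments -- the same noise burst presented from two sources
    # with a short onset delay. White noise is used here for simplicity; the
    # study used pink noise bursts.
    import numpy as np

    fs = 44100
    burst = np.random.default_rng(3).normal(size=int(0.025 * fs))  # 25-ms burst
    delay_ms = 10.0                                                # lead-lag delay
    delay = int(fs * delay_ms / 1000.0)

    n = burst.size + delay
    lead = np.zeros(n)
    lag = np.zeros(n)
    lead[:burst.size] = burst        # leading source
    lag[delay:] = burst              # lagging source, delayed copy

    stereo = np.stack([lead, lag], axis=1)   # one channel per loudspeaker
    ```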

  13. Stable individual characteristics in the perception of multiple embedded patterns in multistable auditory stimuli

    PubMed Central

    Denham, Susan; Bőhm, Tamás M.; Bendixen, Alexandra; Szalárdy, Orsolya; Kocsis, Zsuzsanna; Mill, Robert; Winkler, István

    2014-01-01

    The ability of the auditory system to parse complex scenes into component objects in order to extract information from the environment is very robust, yet the processing principles underlying this ability are still not well understood. This study was designed to investigate the proposal that the auditory system constructs multiple interpretations of the acoustic scene in parallel, based on the finding that when listening to a long repetitive sequence listeners report switching between different perceptual organizations. Using the “ABA-” auditory streaming paradigm we trained listeners until they could reliably recognize all possible embedded patterns of length four which could in principle be extracted from the sequence, and in a series of test sessions investigated their spontaneous reports of those patterns. With the training allowing them to identify and mark a wider variety of possible patterns, participants spontaneously reported many more patterns than the ones traditionally assumed (Integrated vs. Segregated). Despite receiving consistent training and despite the apparent randomness of perceptual switching, we found individual switching patterns were idiosyncratic; i.e., the perceptual switching patterns of each participant were more similar to their own switching patterns in different sessions than to those of other participants. These individual differences were found to be preserved even between test sessions held a year after the initial experiment. Our results support the idea that the auditory system attempts to extract an exhaustive set of embedded patterns which can be used to generate expectations of future events and which by competing for dominance give rise to (changing) perceptual awareness, with the characteristics of pattern discovery and perceptual competition having a strong idiosyncratic component. Perceptual multistability thus provides a means for characterizing both general mechanisms and individual differences in human perception. PMID:24616656

  14. Stable individual characteristics in the perception of multiple embedded patterns in multistable auditory stimuli.

    PubMed

    Denham, Susan; Bőhm, Tamás M; Bendixen, Alexandra; Szalárdy, Orsolya; Kocsis, Zsuzsanna; Mill, Robert; Winkler, István

    2014-01-01

    The ability of the auditory system to parse complex scenes into component objects in order to extract information from the environment is very robust, yet the processing principles underlying this ability are still not well understood. This study was designed to investigate the proposal that the auditory system constructs multiple interpretations of the acoustic scene in parallel, based on the finding that when listening to a long repetitive sequence listeners report switching between different perceptual organizations. Using the "ABA-" auditory streaming paradigm we trained listeners until they could reliably recognize all possible embedded patterns of length four which could in principle be extracted from the sequence, and in a series of test sessions investigated their spontaneous reports of those patterns. With the training allowing them to identify and mark a wider variety of possible patterns, participants spontaneously reported many more patterns than the ones traditionally assumed (Integrated vs. Segregated). Despite receiving consistent training and despite the apparent randomness of perceptual switching, we found individual switching patterns were idiosyncratic; i.e., the perceptual switching patterns of each participant were more similar to their own switching patterns in different sessions than to those of other participants. These individual differences were found to be preserved even between test sessions held a year after the initial experiment. Our results support the idea that the auditory system attempts to extract an exhaustive set of embedded patterns which can be used to generate expectations of future events and which by competing for dominance give rise to (changing) perceptual awareness, with the characteristics of pattern discovery and perceptual competition having a strong idiosyncratic component. Perceptual multistability thus provides a means for characterizing both general mechanisms and individual differences in human perception.
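
    The "ABA-" paradigm used in both versions of this abstract is a repeating triplet of a low tone, a higher tone, the low tone again, and a silent slot. A generation sketch with illustrative parameter values (not those of the study):

    ```python
    # Sketch: generating an "ABA-" streaming sequence -- repeating triplets of
    # an A tone, a B tone a few semitones higher, another A tone, and a silent
    # slot. All parameter values are illustrative.
    import numpy as np

    fs = 44100
    tone_dur, gap = 0.075, 0.025            # seconds
    f_a = 440.0
    f_b = f_a * 2 ** (7 / 12.0)             # B tone 7 semitones above A

    def tone(freq):
        t = np.arange(int(tone_dur * fs)) / fs
        return np.sin(2 * np.pi * freq * t) * np.hanning(t.size)  # ramped tone

    silence = np.zeros(int(gap * fs))
    pause = np.zeros(int(tone_dur * fs))    # the "-" slot of the triplet
    triplet = np.concatenate([tone(f_a), silence, tone(f_b), silence,
                              tone(f_a), silence, pause, silence])
    sequence = np.tile(triplet, 60)         # a long repetitive sequence
    ```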

  15. Elucidating the relationship between work attention performance and emotions arising from listening to music.

    PubMed

    Shih, Yi-Nuo; Chien, Wei-Hsien; Chiang, Han-Sun

    2016-10-17

    In addition to demonstrating that human emotions improve work attention performance, numerous studies have also established that music alters human emotions. Given the pervasiveness of background music in the workplace, exactly how work attention, emotions, and music listening are related is a priority concern in human resource management. This preliminary study investigates the relationship between work attention performance and emotions arising from listening to music. Thirty-one males and 34 females, ranging from 20-24 years old, participated in this study following written informed consent. A randomized controlled trial (RCT) was performed, consisting of six steps and the use of a standard attention test and emotion questionnaire. Background music with lyrics adversely impacts attention performance more than music without lyrics. Analysis results also indicate that listeners who reported feeling "loved" while music played scored higher on work-attention performance, whereas music with a greater tendency to make listeners feel sad was associated with lower work-attention scores. Results of this preliminary study demonstrate that background music in the workplace should focus mainly on creating an environment in which listeners feel loved or cared for, while avoiding music that causes individuals to feel stressed or sad. We recommend that future research increase the number of research participants to enhance the applicability and replicability of these findings.

  16. Active Listening in a Bat Cocktail Party: Adaptive Echolocation and Flight Behaviors of Big Brown Bats, Eptesicus fuscus, Foraging in a Cluttered Acoustic Environment.

    PubMed

    Warnecke, Michaela; Chiu, Chen; Engelberg, Jonathan; Moss, Cynthia F

    2015-09-01

    In their natural environment, big brown bats forage for small insects in open spaces, as well as in vegetation and in the presence of acoustic clutter. While searching and hunting for prey, bats experience sonar interference, not only from densely cluttered environments, but also from calls of conspecifics foraging in close proximity. Previous work has shown that when two bats compete for a single prey item in a relatively open environment, one of the bats may go silent for extended periods of time, which can serve to minimize sonar interference between conspecifics. Additionally, pairs of big brown bats have been shown to adjust frequency characteristics of their vocalizations to avoid acoustic interference in echo processing. In this study, we extended previous work by examining how the presence of conspecifics and environmental clutter influence the bat's echolocation behavior. By recording multichannel audio and video data of bats engaged in insect capture in open and cluttered spaces, we quantified the bats' vocal and flight behaviors. Big brown bats flew individually and in pairs in an open and cluttered room, and the results of this study shed light on the different strategies that this species employs to negotiate a complex and dynamic environment. © 2015 S. Karger AG, Basel.

  17. Noise Levels in Hong Kong Primary Schools: Implications for Classroom Listening

    ERIC Educational Resources Information Center

    Choi, Ching Yee; McPherson, Bradley

    2005-01-01

    Many researchers have stressed that the acoustic environment is crucial to the speech perception, academic performance, attention, and participation of students in classrooms. Classrooms in highly urbanised locations are especially vulnerable to noise, a major influence on the acoustic environment. The purpose of this investigation was to…

  18. A Context-Aware Ubiquitous Learning Environment for Language Listening and Speaking

    ERIC Educational Resources Information Center

    Liu, T.-Y.

    2009-01-01

    This paper reported the results of a study that aimed to construct a sensor and handheld augmented reality (AR)-supported ubiquitous learning (u-learning) environment called the Handheld English Language Learning Organization (HELLO), which is geared towards enhancing students' language learning. The HELLO integrates sensors, AR, ubiquitous…

  19. The Components of Good Acoustics in a High Performance School

    ERIC Educational Resources Information Center

    Stewart, William

    2009-01-01

    Acoustics has received greater importance in the learning environment in recent years. In August 2000, The Acoustical Society of America (ASA) published the study "Classroom Acoustics: A Resource for Creating Learning Environments with Desirable Listening Conditions" providing a framework for understanding the qualities, descriptors of the…

  20. Talker variability in audio-visual speech perception

    PubMed Central

    Heald, Shannon L. M.; Nusbaum, Howard C.

    2014-01-01

    A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker’s face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker’s face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker’s face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred. PMID:25076919

  1. Talker variability in audio-visual speech perception.

    PubMed

    Heald, Shannon L M; Nusbaum, Howard C

    2014-01-01

    A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker's face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker's face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to the audio-only condition. These results suggest that seeing a talker's face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener that a change in talker has occurred.

  2. Comparison of user volume control settings for portable music players with three earphone configurations in quiet and noisy environments.

    PubMed

    Henry, Paula; Foots, Ashley

    2012-03-01

    Listening to music is one of the most common forms of recreational noise exposure. Previous investigators have demonstrated that maximum output levels from headphones can exceed safe levels. Although preferred listening levels (PLL) in quiet environments may be at acceptable levels, the addition of background noise will add to the overall noise exposure of a listener. Use of listening devices that block out some of the background noise would potentially allow listeners to select lower PLLs for their music. Although one solution is in-the-ear earphones, an alternative solution is the use of earmuffs in conjunction with earbuds. There were two objectives to this experiment. The first was to determine if an alternative to in-the-ear earphones for noise attenuation (the addition of earmuffs to earbuds) would allow for lower PLLs through a portable media player (PMP) than earbuds. The second was to determine if a surrounding background noise would yield different PLLs than a directional noise source. This was an experimental study. Twenty-four adults with normal hearing. PLLs were measured for three earphone configurations in three listening conditions. The earphone configurations included earbuds, canal earphones, and earbuds in combination with hearing protection devices (HPDs). The listening conditions included quiet, noise from one loudspeaker, and noise from four surrounding loudspeakers. Participants listened in each noise and earphone combination for as long as they needed to determine their PLL for that condition. Once the participant determined their PLL, investigators made a 5 sec recording of the music through a probe tube microphone. The average PLLs in each noise and earphone combination were used as the dependent variable. Ear canal level PLLs were converted to free-field equivalents to compare to noise exposure standards and previously published data. The average PLL as measured in the ear canal was 74 dBA in the quiet conditions and 84 dBA in the noise conditions. Paired comparisons of the PLL in the presence of background noise for each pair of earphone configurations indicated significant differences for each comparison. An inverse relationship was observed between attenuation and PLL whereby the greater the attenuation, the lower the PLL. A comparison of the single noise source condition versus the surrounding noise condition did not result in a significant effect. The present work suggests that earphones that take advantage of noise attenuation can reduce the level at which listeners set music in the presence of background noise. An alternative to in-the-ear earphones for noise attenuation is the addition of earmuffs to earbuds. American Academy of Audiology.
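
    To relate the reported PLLs to exposure limits, a NIOSH-style criterion (85 dBA for 8 h with a 3-dB exchange rate) gives the allowable daily listening time at a given level. The formula below is the standard one, applied here to the abstract's average PLLs; it is not a calculation performed in the study itself.

    ```python
    # Sketch: allowable daily exposure time under a NIOSH-style criterion
    # (85 dBA for 8 h, 3-dB exchange rate), applied to the average preferred
    # listening levels reported in the abstract.
    def allowable_hours(level_dba, criterion=85.0, exchange=3.0, base_hours=8.0):
        return base_hours / 2 ** ((level_dba - criterion) / exchange)

    for level in (74.0, 84.0):   # quiet-condition and noise-condition PLLs
        print(f"{level} dBA -> {allowable_hours(level):.1f} h allowed per day")
    ```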

  3. Treefrogs as Animal Models for Research on Auditory Scene Analysis and the Cocktail Party Problem

    PubMed Central

    Bee, Mark A.

    2014-01-01

    The perceptual analysis of acoustic scenes involves binding together sounds from the same source and separating them from other sounds in the environment. In large social groups, listeners experience increased difficulty performing these tasks due to high noise levels and interference from the concurrent signals of multiple individuals. While a substantial body of literature on these issues pertains to human hearing and speech communication, few studies have investigated how nonhuman animals may be evolutionarily adapted to solve biologically analogous communication problems. Here, I review recent and ongoing work aimed at testing hypotheses about perceptual mechanisms that enable treefrogs in the genus Hyla to communicate vocally in noisy, multi-source social environments. After briefly introducing the genus and the methods used to study hearing in frogs, I outline several functional constraints on communication posed by the acoustic environment of breeding “choruses”. Then, I review studies of sound source perception aimed at uncovering how treefrog listeners may be adapted to cope with these constraints. Specifically, this review covers research on the acoustic cues used in sequential and simultaneous auditory grouping, spatial release from masking, and dip listening. Throughout the paper, I attempt to illustrate how broad-scale, comparative studies of carefully considered animal models may ultimately reveal an evolutionary diversity of underlying mechanisms for solving cocktail-party-like problems in communication. PMID:24424243

  4. Cognitive Factors in Sexual Arousal: The Role of Distraction

    ERIC Educational Resources Information Center

    Geer, James H.; Fuhr, Robert

    1976-01-01

    Four groups of male undergraduates were instructed to perform complex cognitive operations when randomly presented single digits of a dichotic listening paradigm. An erotic tape recording was played into the nonattended ear. Sexual arousal varied directly as a function of the complexity of the distracting cognitive operations. (Author)

  5. Auditory spatial representations of the world are compressed in blind humans.

    PubMed

    Kolarik, Andrew J; Pardhan, Shahina; Cirstea, Silvia; Moore, Brian C J

    2017-02-01

    Compared to sighted listeners, blind listeners often display enhanced auditory spatial abilities such as localization in azimuth. However, less is known about whether blind humans can accurately judge distance in extrapersonal space using auditory cues alone. Using virtualization techniques, we show that auditory spatial representations of the world beyond the peripersonal space of blind listeners are compressed compared to those for normally sighted controls. Blind participants overestimated the distance to nearby sources and underestimated the distance to remote sound sources, in both reverberant and anechoic environments, and for speech, music, and noise signals. Functions relating judged and actual virtual distance were well fitted by compressive power functions, indicating that the absence of visual information regarding the distance of sound sources may prevent accurate calibration of the distance information provided by auditory signals.
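
    A compressive power function of the kind used to fit these data, judged = k * actual^a, can be estimated by linear regression in log-log coordinates; an exponent a < 1 indicates compression. A sketch with invented judgments:

    ```python
    # Sketch: fitting a compressive power function, judged = k * actual**a,
    # by linear regression in log-log coordinates. An exponent a < 1 indicates
    # the compressive mapping described in the abstract. Data are invented.
    import numpy as np

    actual = np.array([1.0, 2.0, 4.0, 8.0, 16.0])    # virtual distance (m)
    judged = np.array([1.4, 2.3, 3.6, 5.5, 8.1])     # hypothetical judgments

    a, log_k = np.polyfit(np.log(actual), np.log(judged), 1)
    k = np.exp(log_k)
    print(f"judged = {k:.2f} * actual^{a:.2f}  (a < 1 -> compressive)")
    ```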

  6. Classroom Listening Conditions in Indian Primary Schools: A Survey of Four Schools

    PubMed Central

    Sundaravadhanan, Gayathri; Selvarajan, Heramba G.; McPherson, Bradley

    2017-01-01

    Introduction: Background noise affects the listening environment inside classrooms, especially for younger children. High background noise level adversely affects not only student speech perception but also teacher vocal hygiene. The current study aimed to give an overview of the classroom listening conditions in selected government primary schools in India. Materials and Methods: Noise measurements were taken in 23 classrooms of four government primary schools in southern India, using a type 2 sound level meter. In each classroom measurements were taken in occupied and unoccupied conditions. Teacher voice level was measured in the same classrooms. In addition, the classroom acoustical conditions were observed and the reverberation time for each classroom was calculated. Results: The mean occupied noise level was 62.1 dBA and 65.6 dBC, and the mean unoccupied level was 62.2 dBA and 65 dBC. The mean unamplified teacher speech-to-noise ratio was 10.6 dBA. Both the occupied and unoccupied noise levels exceeded national and international recommended levels and the teacher speech-to-noise ratio was also found to be inadequate in most classrooms. The estimated reverberation time in all classrooms was greater than 2.6 seconds, which is almost double the duration of accepted standards. In addition, observation of classrooms revealed insufficient acoustical treatment to effectively reduce internal and external noise and minimize reverberation. Conclusion: The results of this study point out the need to improve the listening environment for children in government primary schools in India. PMID:28164937

  7. Classroom Listening Conditions in Indian Primary Schools: A Survey of Four Schools.

    PubMed

    Sundaravadhanan, Gayathri; Selvarajan, Heramba G; McPherson, Bradley

    2017-01-01

    Background noise affects the listening environment inside classrooms, especially for younger children. High background noise level adversely affects not only student speech perception but also teacher vocal hygiene. The current study aimed to give an overview of the classroom listening conditions in selected government primary schools in India. Noise measurements were taken in 23 classrooms of four government primary schools in southern India, using a type 2 sound level meter. In each classroom measurements were taken in occupied and unoccupied conditions. Teacher voice level was measured in the same classrooms. In addition, the classroom acoustical conditions were observed and the reverberation time for each classroom was calculated. The mean occupied noise level was 62.1 dBA and 65.6 dBC, and the mean unoccupied level was 62.2 dBA and 65 dBC. The mean unamplified teacher speech-to-noise ratio was 10.6 dBA. Both the occupied and unoccupied noise levels exceeded national and international recommended levels and the teacher speech-to-noise ratio was also found to be inadequate in most classrooms. The estimated reverberation time in all classrooms was greater than 2.6 seconds, which is almost double the duration of accepted standards. In addition, observation of classrooms revealed insufficient acoustical treatment to effectively reduce internal and external noise and minimize reverberation. The results of this study point out the need to improve the listening environment for children in government primary schools in India.
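
    Reverberation time is commonly estimated from room geometry with the Sabine equation, RT60 = 0.161 * V / A, where V is the room volume in cubic meters and A the total absorption in metric sabins. The abstract does not specify which estimation method was used, so the sketch below is a generic illustration with invented room parameters.

    ```python
    # Sketch: estimating reverberation time with the Sabine equation,
    # RT60 = 0.161 * V / A. Dimensions and absorption coefficients are
    # illustrative, not measurements from the surveyed classrooms.
    def sabine_rt60(volume_m3, surfaces):
        """surfaces: list of (area_m2, absorption_coefficient) pairs."""
        absorption = sum(area * alpha for area, alpha in surfaces)
        return 0.161 * volume_m3 / absorption

    classroom = [
        (48.0, 0.02),   # concrete floor
        (48.0, 0.02),   # concrete ceiling
        (84.0, 0.05),   # painted masonry walls
    ]
    # a hard-surfaced 6 x 8 x 3 m room; long RT60, consistent in spirit with
    # the >2.6 s values reported above
    print(f"RT60 = {sabine_rt60(6.0 * 8.0 * 3.0, classroom):.1f} s")
    ```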

  8. Effects of music engagement on responses to painful stimulation.

    PubMed

    Bradshaw, David H; Chapman, C Richard; Jacobson, Robert C; Donaldson, Gary W

    2012-06-01

    We propose a theoretical framework for the behavioral modulation of pain based on constructivism, positing that task engagement, such as listening for errors in a musical passage, can establish a construction of reality that effectively replaces pain as a competing construction. Graded engagement produces graded reductions in pain as indicated by reduced psychophysiological arousal and subjective pain report. Fifty-three healthy volunteers having normal hearing participated in 4 music listening conditions consisting of passive listening (no task) or performing an error detection task varying in signal complexity and task difficulty. During all conditions, participants received normally painful fingertip shocks varying in intensity while stimulus-evoked potentials (SEP), pupil dilation responses (PDR), and retrospective pain reports were obtained. SEP and PDR increased with increasing stimulus intensity. Task performance decreased with increasing task difficulty. Mixed model analyses, adjusted for habituation/sensitization and repeated measures within person, revealed significant quadratic trends for SEP and pain report (P_change < 0.001) with large reductions from no task to easy task and smaller graded reductions corresponding to increasing task difficulty/complexity. PDR decreased linearly (P_change < 0.001) with graded task condition. We infer that these graded reductions in indicators of central and peripheral arousal and in reported pain correspond to graded increases in engagement in the music listening task. Engaging activities may prevent pain by creating competing constructions of reality that draw on the same processing resources as pain. Better understanding of these processes will advance the development of more effective pain modulation through improved manipulation of engagement strategies.

  9. Socratic Seminar with Data: A Strategy to Support Student Discourse and Understanding

    PubMed Central

    Griswold, Joan; Shaw, Loren; Munn, Maureen

    2017-01-01

    A Socratic seminar can be a powerful tool for increasing students’ ability to analyze and interpret data. Although Socratic seminar is most commonly used for text-based discussion, we found that using it to engage students with data contributes to student understanding by allowing them to reason through and process complex information as a group. This approach also provides teachers with insights about student misconceptions and understanding of concepts by listening to the student-driven discussion. This article reports on Socratic seminar in the context of a high school type 2 diabetes curriculum that explores gene and environment interactions. A case study illustrates how Socratic seminar is applied in a classroom and how students engage with the process. General characteristics of Socratic seminar are discussed at the end of the article. PMID:29147033

  10. The effect of buildings on acoustic pulse propagation in an urban environment.

    PubMed

    Albert, Donald G; Liu, Lanbo

    2010-03-01

    Experimental measurements were conducted using acoustic pulse sources in a full-scale artificial village to investigate the reverberation, scattering, and diffraction produced as acoustic waves interact with buildings. These measurements show that a simple acoustic source pulse is transformed into a complex signature when propagating through this environment, and that diffraction acts as a low-pass filter on the acoustic pulse. Sensors located in non-line-of-sight (NLOS) positions usually recorded lower positive pressure maxima than sensors in line-of-sight positions. Often, the first arrival on a NLOS sensor located around a corner was not the largest arrival, as later reflection arrivals that traveled longer distances without diffraction had higher amplitudes. The waveforms are of such complexity that human listeners have difficulty identifying replays of the signatures generated by a single pulse, and the usual methods of source location based on the direction of arrivals may fail in many cases. Theoretical calculations were performed using a two-dimensional finite difference time domain (FDTD) method and compared to the measurements. The predicted peak positive pressure agreed well with the measured amplitudes for all but two sensor locations directly behind buildings, where the omission of rooftop ray paths caused the discrepancy. The FDTD method also produced good agreement with many of the measured waveform characteristics.
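
    The paper's predictions come from a two-dimensional FDTD method. The sketch below is a minimal staggered-grid acoustic FDTD update of that general class, with a rigid-walled toy domain and arbitrary parameters; it is not the authors' code.

    ```python
    # Sketch: a minimal 2-D acoustic finite-difference time-domain (FDTD)
    # update on a staggered pressure/velocity grid, the class of method used
    # in the paper. Rigid-walled toy domain, Gaussian pulse source; grid size
    # and medium parameters are arbitrary.
    import numpy as np

    nx = ny = 200
    c, rho = 343.0, 1.2            # sound speed (m/s), air density (kg/m^3)
    dx = 0.05                      # grid spacing (m)
    dt = dx / (c * np.sqrt(2))     # CFL-stable time step for 2-D

    p = np.zeros((nx, ny))         # pressure
    vx = np.zeros((nx + 1, ny))    # x-velocity on staggered grid
    vy = np.zeros((nx, ny + 1))    # y-velocity on staggered grid

    for n in range(400):
        # velocity update from pressure gradients (interior faces only;
        # zero edge velocities act as rigid walls)
        vx[1:-1, :] -= dt / (rho * dx) * (p[1:, :] - p[:-1, :])
        vy[:, 1:-1] -= dt / (rho * dx) * (p[:, 1:] - p[:, :-1])
        # pressure update from the divergence of velocity
        p -= rho * c**2 * dt / dx * (vx[1:, :] - vx[:-1, :]
                                     + vy[:, 1:] - vy[:, :-1])
        # soft Gaussian pulse source near one corner
        p[40, 40] += np.exp(-((n - 60) / 15.0) ** 2)

    print(f"peak |p| after {n + 1} steps: {np.abs(p).max():.3f}")
    ```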

  11. Neural Correlates of Sound Localization in Complex Acoustic Environments

    PubMed Central

    Zündorf, Ida C.; Lewald, Jörg; Karnath, Hans-Otto

    2013-01-01

    Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated, in healthy subjects, the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in posterior superior temporal gyrus bilaterally, anterior insula, supplementary motor area, and frontoparietal network. Moreover, the results indicated critical roles of the left planum temporale in extracting the sound of interest among acoustical distracters and of the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for analysis of spectrotemporal sound features. Furthermore, the precuneus, a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects, also seems to be a crucial area for accurately determining the locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality. PMID:23691185

  12. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid.

    PubMed

    Kidd, Gerald

    2017-10-17

    Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired "target" talker while ignoring the speech from unwanted "masker" talkers and other sources of sound. This listening situation forms the classic "cocktail party problem" described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. This approach, embodied in a prototype "visually guided hearing aid" (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based "spatial filter" operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in "informational masking." The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in "energetic masking." Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation. http://cred.pubs.asha.org/article.aspx?articleid=2601621.

  13. Enhancing Auditory Selective Attention Using a Visually Guided Hearing Aid

    PubMed Central

    2017-01-01

    Purpose: Listeners with hearing loss, as well as many listeners with clinically normal hearing, often experience great difficulty segregating talkers in a multiple-talker sound field and selectively attending to the desired “target” talker while ignoring the speech from unwanted “masker” talkers and other sources of sound. This listening situation forms the classic “cocktail party problem” described by Cherry (1953) that has received a great deal of study over the past few decades. In this article, a new approach to improving sound source segregation and enhancing auditory selective attention is described. The conceptual design, current implementation, and results obtained to date are reviewed and discussed in this article. Method: This approach, embodied in a prototype “visually guided hearing aid” (VGHA) currently used for research, employs acoustic beamforming steered by eye gaze as a means for improving the ability of listeners to segregate and attend to one sound source in the presence of competing sound sources. Results: The results from several studies demonstrate that listeners with normal hearing are able to use an attention-based “spatial filter” operating primarily on binaural cues to selectively attend to one source among competing spatially distributed sources. Furthermore, listeners with sensorineural hearing loss generally are less able to use this spatial filter as effectively as are listeners with normal hearing especially in conditions high in “informational masking.” The VGHA enhances auditory spatial attention for speech-on-speech masking and improves signal-to-noise ratio for conditions high in “energetic masking.” Visual steering of the beamformer supports the coordinated actions of vision and audition in selective attention and facilitates following sound source transitions in complex listening situations. Conclusions: Both listeners with normal hearing and with sensorineural hearing loss may benefit from the acoustic beamforming implemented by the VGHA, especially for nearby sources in less reverberant sound fields. Moreover, guiding the beam using eye gaze can be an effective means of sound source enhancement for listening conditions where the target source changes frequently over time as often occurs during turn-taking in a conversation. Presentation Video: http://cred.pubs.asha.org/article.aspx?articleid=2601621 PMID:29049603
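
    The acoustic beamforming at the heart of the VGHA can be illustrated with the textbook delay-and-sum beamformer: per-microphone delays steer the array toward a chosen angle. The sketch below is generic and makes no claim about the VGHA's actual processing; the array geometry and steering convention are assumptions.

    ```python
    # Sketch: a generic delay-and-sum beamformer for a linear microphone
    # array, the basic operation behind acoustic beamforming systems like the
    # one described above. Textbook beamforming, not the VGHA's actual code.
    import numpy as np

    def delay_and_sum(signals, mic_positions, angle_deg, fs, c=343.0):
        """signals: (n_mics, n_samples); mic_positions: (n_mics,) in meters."""
        tau = mic_positions * np.sin(np.radians(angle_deg)) / c  # per-mic delay (s)
        n = signals.shape[1]
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        out = np.zeros(n)
        for sig, t in zip(signals, tau):
            # fractional-sample delay applied as a frequency-domain phase shift
            out += np.fft.irfft(np.fft.rfft(sig) * np.exp(-2j * np.pi * freqs * t), n)
        return out / len(signals)

    fs = 16000
    mics = np.array([-0.05, 0.0, 0.05])                   # 3-mic array (m)
    noise = np.random.default_rng(4).normal(size=(3, fs))
    steered = delay_and_sum(noise, mics, angle_deg=30.0, fs=fs)
    ```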

  14. Multi-Variate EEG Analysis as a Novel Tool to Examine Brain Responses to Naturalistic Music Stimuli

    PubMed Central

    Sturm, Irene; Dähne, Sven; Blankertz, Benjamin; Curio, Gabriel

    2015-01-01

    Note onsets in music are acoustic landmarks providing auditory cues that underlie the perception of more complex phenomena such as beat, rhythm, and meter. For naturalistic ongoing sounds a detailed view on the neural representation of onset structure is hard to obtain, since, typically, stimulus-related EEG signatures are derived by averaging a high number of identical stimulus presentations. Here, we propose a novel multivariate regression-based method extracting onset-related brain responses from the ongoing EEG. We analyse EEG recordings of nine subjects who passively listened to stimuli from various sound categories encompassing simple tone sequences, full-length romantic piano pieces and natural (non-music) soundscapes. The regression approach reduces the 61-channel EEG to one time course optimally reflecting note onsets. The neural signatures derived by this procedure indeed resemble canonical onset-related ERPs, such as the N1-P2 complex. This EEG projection was then utilized to determine the Cortico-Acoustic Correlation (CACor), a measure of synchronization between EEG signal and stimulus. We demonstrate that a significant CACor (i) can be detected in an individual listener's EEG of a single presentation of a full-length complex naturalistic music stimulus, and (ii) it co-varies with the stimuli’s average magnitudes of sharpness, spectral centroid, and rhythmic complexity. In particular, the subset of stimuli eliciting a strong CACor also produces strongly coordinated tension ratings obtained from an independent listener group in a separate behavioral experiment. Thus musical features that lead to a marked physiological reflection of tone onsets also contribute to perceived tension in music. PMID:26510120

  15. Multi-Variate EEG Analysis as a Novel Tool to Examine Brain Responses to Naturalistic Music Stimuli.

    PubMed

    Sturm, Irene; Dähne, Sven; Blankertz, Benjamin; Curio, Gabriel

    2015-01-01

    Note onsets in music are acoustic landmarks providing auditory cues that underlie the perception of more complex phenomena such as beat, rhythm, and meter. For naturalistic ongoing sounds a detailed view on the neural representation of onset structure is hard to obtain, since, typically, stimulus-related EEG signatures are derived by averaging a high number of identical stimulus presentations. Here, we propose a novel multivariate regression-based method extracting onset-related brain responses from the ongoing EEG. We analyse EEG recordings of nine subjects who passively listened to stimuli from various sound categories encompassing simple tone sequences, full-length romantic piano pieces and natural (non-music) soundscapes. The regression approach reduces the 61-channel EEG to one time course optimally reflecting note onsets. The neural signatures derived by this procedure indeed resemble canonical onset-related ERPs, such as the N1-P2 complex. This EEG projection was then utilized to determine the Cortico-Acoustic Correlation (CACor), a measure of synchronization between EEG signal and stimulus. We demonstrate that a significant CACor (i) can be detected in an individual listener's EEG of a single presentation of a full-length complex naturalistic music stimulus, and (ii) it co-varies with the stimuli's average magnitudes of sharpness, spectral centroid, and rhythmic complexity. In particular, the subset of stimuli eliciting a strong CACor also produces strongly coordinated tension ratings obtained from an independent listener group in a separate behavioral experiment. Thus musical features that lead to a marked physiological reflection of tone onsets also contribute to perceived tension in music.
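
    The final step of a CACor-style analysis reduces to correlating the one-dimensional EEG projection with a stimulus time course and testing that correlation against a null distribution. A sketch with synthetic series, using circular shifts as a simple permutation scheme (the paper's exact statistics may differ):

    ```python
    # Sketch: the correlation step behind a measure like the Cortico-Acoustic
    # Correlation -- comparing a one-dimensional EEG projection against a
    # stimulus onset/envelope time course. The regression that produces the
    # projection is omitted; both series here are synthetic.
    import numpy as np

    rng = np.random.default_rng(5)
    envelope = np.abs(rng.normal(size=5000))             # stimulus envelope proxy
    eeg_proj = 0.3 * envelope + rng.normal(size=5000)    # projected EEG channel

    cacor = np.corrcoef(eeg_proj, envelope)[0, 1]

    # a simple permutation test: circularly shift one series and re-correlate
    null = [np.corrcoef(np.roll(eeg_proj, rng.integers(1, 5000)), envelope)[0, 1]
            for _ in range(200)]
    p = np.mean(np.abs(null) >= abs(cacor))
    print(f"CACor-like correlation = {cacor:.3f}, permutation p = {p:.3f}")
    ```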

  16. Active listening room compensation for massive multichannel sound reproduction systems using wave-domain adaptive filtering.

    PubMed

    Spors, Sascha; Buchner, Herbert; Rabenstein, Rudolf; Herbordt, Wolfgang

    2007-07-01

    The acoustic theory for multichannel sound reproduction systems usually assumes free-field conditions for the listening environment. However, their performance in real-world listening environments may be impaired by reflections at the walls. This impairment can be reduced by suitable compensation measures. For systems with many channels, active compensation is an option, since the compensating waves can be created by the reproduction loudspeakers. Due to the time-varying nature of room acoustics, the compensation signals have to be determined by an adaptive system. The problems associated with the successful operation of multichannel adaptive systems are addressed in this contribution. First, a method for decoupling the adaptation problem is introduced. It is based on a generalized singular value decomposition and is called eigenspace adaptive filtering. Unfortunately, it cannot be implemented in its pure form, since the continuous adaptation of the generalized singular value decomposition matrices to the variable room acoustics is numerically very demanding. However, a combination of this mathematical technique with the physical description of wave propagation yields a realizable multichannel adaptation method with good decoupling properties. It is called wave domain adaptive filtering and is discussed here in the context of wave field synthesis.
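
    The building block that wave-domain adaptive filtering decouples and generalizes is the conventional adaptive filter. Below is a minimal normalized-LMS sketch for a single loudspeaker-microphone path with a toy room response; it illustrates the adaptation step, not the paper's wave-domain algorithm.

    ```python
    # Sketch: the normalized LMS (NLMS) update at the core of adaptive room
    # compensation, shown for a single loudspeaker-microphone path. Wave-
    # domain adaptive filtering, as the paper describes, transforms the
    # multichannel problem so many such adaptations run in a decoupled way.
    import numpy as np

    rng = np.random.default_rng(6)
    room = np.array([1.0, 0.6, 0.3, 0.1])     # toy room impulse response
    x = rng.normal(size=4000)                 # loudspeaker signal
    d = np.convolve(x, room)[:x.size]         # microphone signal

    w = np.zeros(room.size)                   # adaptive filter estimate
    mu, eps = 0.5, 1e-8
    for i in range(room.size, x.size):
        u = x[i - room.size + 1:i + 1][::-1]  # most recent input samples
        e = d[i] - w @ u                      # a-priori error
        w += mu * e * u / (u @ u + eps)       # NLMS update

    print("estimated response:", np.round(w, 2))
    ```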

  17. Turn Off the Music! Music Impairs Visual Associative Memory Performance in Older Adults

    PubMed Central

    Reaves, Sarah; Graham, Brittany; Grahn, Jessica; Rabannifard, Parissa; Duarte, Audrey

    2016-01-01

    Purpose of the Study: Whether we are explicitly listening to it or not, music is prevalent in our environment. Surprisingly, little is known about the effect of environmental music on concurrent cognitive functioning and whether young and older adults are differentially affected by music. Here, we investigated the impact of background music on a concurrent paired associate learning task in healthy young and older adults. Design and Methods: Young and older adults listened to music or to silence while simultaneously studying face–name pairs. Participants’ memory for the pairs was then tested while listening to either the same or different music. Participants also made subjective ratings about how distracting they found each song to be. Results: Despite the fact that all participants rated music as more distracting to their performance than silence, only older adults’ associative memory performance was impaired by music. These results are most consistent with the theory that older adults’ failure to inhibit processing of distracting task-irrelevant information, in this case background music, contributes to their memory impairments. Implications: These data have important practical implications for older adults’ ability to perform cognitively demanding tasks even in what many consider to be an unobtrusive environment. PMID:26035876

  18. Effects of Voice Harmonic Complexity on ERP Responses to Pitch-Shifted Auditory Feedback

    PubMed Central

    Behroozmand, Roozbeh; Korzyukov, Oleg; Larson, Charles R.

    2011-01-01

    Objective: The present study investigated the neural mechanisms of voice pitch control for different levels of harmonic complexity in the auditory feedback. Methods: Event-related potentials (ERPs) were recorded in response to +200 cents pitch perturbations in the auditory feedback of self-produced natural human vocalizations, complex and pure tone stimuli during active vocalization and passive listening conditions. Results: During active vocal production, ERP amplitudes were largest in response to pitch shifts in the natural voice, moderately large for non-voice complex stimuli and smallest for the pure tones. However, during passive listening, neural responses were equally large for pitch shifts in voice and non-voice complex stimuli but still larger than that for pure tones. Conclusions: These findings suggest that pitch change detection is facilitated for spectrally rich sounds such as natural human voice and non-voice complex stimuli compared with pure tones. Vocalization-induced increase in neural responses for voice feedback suggests that sensory processing of naturally-produced complex sounds such as human voice is enhanced by means of motor-driven mechanisms (e.g. efference copies) during vocal production. Significance: This enhancement may enable the audio-vocal system to more effectively detect and correct for vocal errors in the feedback of natural human vocalizations to maintain an intended vocal output for speaking. PMID:21719346
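
    The +200 cents perturbation maps to a frequency ratio through the standard relation ratio = 2^(cents/1200). A tiny sketch (the example fundamental is arbitrary):

    ```python
    # Sketch: converting the +200 cents feedback perturbation into a frequency
    # ratio via the standard relation ratio = 2**(cents / 1200).
    def cents_to_ratio(cents):
        return 2.0 ** (cents / 1200.0)

    f0 = 120.0                                # example voice fundamental (Hz)
    shifted = f0 * cents_to_ratio(200.0)      # +200 cents = two semitones up
    print(f"{f0:.1f} Hz -> {shifted:.1f} Hz (ratio {cents_to_ratio(200.0):.4f})")
    ```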

  19. Guiding Curriculum Development: Student Perceptions for the Second Language Learning in Technology-Enhanced Learning Environments

    ERIC Educational Resources Information Center

    Gürleyik, Sinan; Akdemir, Elif

    2018-01-01

    Developing curriculum to enhance student learning is the primary purpose of all curricular activities. The availability of recent tools that support the teaching of various skills, including reading, listening, speaking, and writing, has opened a new avenue for curricular activities in technology-enhanced learning environments. Understanding the perceptions of…

  20. Forced-Attention Dichotic Listening with University Students with Dyslexia: Search for a Core Deficit

    ERIC Educational Resources Information Center

    Kershner, John R.

    2016-01-01

    Rapidly changing environments in day-to-day activities, enriched with stimuli competing for attention, require a cognitive control mechanism to select relevant stimuli, ignore irrelevant stimuli, and shift attention between alternative features of the environment. Such attentional orchestration is essential to the acquisition of reading skills. In…

  1. Eavesdropping on Electronic Guidebooks: Observing Learning Resources in Shared Listening Environments.

    ERIC Educational Resources Information Center

    Woodruff, Allison; Aoki, Paul M.; Grinter, Rebecca E.; Hurst, Amy; Szymanski, Margaret H.; Thornton, James D.

    This paper describes an electronic guidebook, "Sotto Voce," that enables visitors to share audio information by eavesdropping on each other's guidebook activity. The first section discusses the design and implementation of the guidebook device, key aspects of its user interface, the design goals for the audio environment, the eavesdropping…

  2. Music and speech listening enhance the recovery of early sensory processing after stroke.

    PubMed

    Särkämö, Teppo; Pihko, Elina; Laitinen, Sari; Forsblom, Anita; Soinila, Seppo; Mikkonen, Mikko; Autti, Taina; Silvennoinen, Heli M; Erkkilä, Jaakko; Laine, Matti; Peretz, Isabelle; Hietanen, Marja; Tervaniemi, Mari

    2010-12-01

    Our surrounding auditory environment has a dramatic influence on the development of basic auditory and cognitive skills, but little is known about how it influences the recovery of these skills after neural damage. Here, we studied the long-term effects of daily music and speech listening on auditory sensory memory after middle cerebral artery (MCA) stroke. In the acute recovery phase, 60 patients with MCA stroke were randomly assigned to a music listening group, an audio book listening group, or a control group. Auditory sensory memory, as indexed by the magnetic MMN (MMNm) response to changes in sound frequency and duration, was measured 1 week (baseline), 3 months, and 6 months after the stroke with whole-head magnetoencephalography recordings. Fifty-four patients completed the study. Results showed that the amplitude of the frequency MMNm increased significantly more in both music and audio book groups than in the control group during the 6-month poststroke period. In contrast, the duration MMNm amplitude increased more in the audio book group than in the other groups. Moreover, changes in the frequency MMNm amplitude correlated significantly with the behavioral improvement of verbal memory and focused attention induced by music listening. These findings demonstrate that merely listening to music and speech after neural damage can induce long-term plastic changes in early sensory processing, which, in turn, may facilitate the recovery of higher cognitive functions. The neural mechanisms potentially underlying this effect are discussed.

  3. Losing the music: aging affects the perception and subcortical neural representation of musical harmony.

    PubMed

    Bones, Oliver; Plack, Christopher J

    2015-03-04

    When two musical notes with simple frequency ratios are played simultaneously, the resulting musical chord is pleasing and evokes a sense of resolution or "consonance". Complex frequency ratios, on the other hand, evoke feelings of tension or "dissonance". Consonance and dissonance form the basis of harmony, a central component of Western music. In earlier work, we provided evidence that consonance perception is based on neural temporal coding in the brainstem (Bones et al., 2014). Here, we show that for listeners with clinically normal hearing, aging is associated with a decline in both the perceptual distinction and the distinctiveness of the neural representations of different categories of two-note chords. Compared with younger listeners, older listeners rated consonant chords as less pleasant and dissonant chords as more pleasant. Older listeners also had less distinct neural representations of consonant and dissonant chords as measured using a Neural Consonance Index derived from the electrophysiological "frequency-following response." The results withstood a control for the effect of age on general affect, suggesting that different mechanisms are responsible for the perceived pleasantness of musical chords and affective voices and that, for listeners with clinically normal hearing, age-related differences in consonance perception are likely to be related to differences in neural temporal coding. Copyright © 2015 Bones and Plack.
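
    The simple-versus-complex frequency ratios discussed here are easy to hear for oneself by synthesizing two-note chords: a perfect fifth (3:2) against a minor second (16:15). A sketch with arbitrary tone parameters:

    ```python
    # Sketch: synthesizing a consonant and a dissonant two-note chord from the
    # frequency ratios discussed above -- a perfect fifth (simple 3:2 ratio)
    # versus a minor second (complex 16:15 ratio). Tone parameters are
    # arbitrary.
    import numpy as np

    fs = 44100
    t = np.arange(int(1.0 * fs)) / fs
    f_root = 220.0

    def dyad(ratio):
        return (np.sin(2 * np.pi * f_root * t)
                + np.sin(2 * np.pi * f_root * ratio * t))

    consonant = dyad(3 / 2)     # perfect fifth: evokes "consonance"
    dissonant = dyad(16 / 15)   # minor second: evokes "dissonance"
    ```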

  4. Auditory and visual orienting responses in listeners with and without hearing-impairment

    PubMed Central

    Brimijoin, W. Owen; McShefferty, David; Akeroyd, Michael A.

    2015-01-01

    Head movements are intimately involved in sound localization and may provide information that could aid an impaired auditory system. Using an infrared camera system, head position and orientation were measured for 17 normal-hearing and 14 hearing-impaired listeners seated at the center of a ring of loudspeakers. Listeners were asked to orient their heads as quickly as was comfortable toward a sequence of visual targets, or were blindfolded and asked to orient toward a sequence of loudspeakers playing a short sentence. To attempt to elicit natural orienting responses, listeners were not asked to reorient their heads to the 0° loudspeaker between trials. The results demonstrate that hearing impairment is associated with several changes in orienting responses. Hearing-impaired listeners showed a larger difference in auditory versus visual fixation position and a substantial increase in initial and fixation latency for auditory targets. Peak velocity reached roughly 140 degrees per second in both groups, corresponding to a rate of change of approximately 1 microsecond of interaural time difference per millisecond of time. Most notably, hearing impairment was associated with a large change in the complexity of the movement, changing from smooth sigmoidal trajectories to ones characterized by abruptly-changing velocities, directional reversals, and frequent fixation angle corrections. PMID:20550266

  5. Losing the Music: Aging Affects the Perception and Subcortical Neural Representation of Musical Harmony

    PubMed Central

    Bones, Oliver; Plack, Christopher J.

    2015-01-01

    When two musical notes with simple frequency ratios are played simultaneously, the resulting musical chord is pleasing and evokes a sense of resolution or “consonance”. Complex frequency ratios, on the other hand, evoke feelings of tension or “dissonance”. Consonance and dissonance form the basis of harmony, a central component of Western music. In earlier work, we provided evidence that consonance perception is based on neural temporal coding in the brainstem (Bones et al., 2014). Here, we show that for listeners with clinically normal hearing, aging is associated with a decline in both the perceptual distinction and the distinctiveness of the neural representations of different categories of two-note chords. Compared with younger listeners, older listeners rated consonant chords as less pleasant and dissonant chords as more pleasant. Older listeners also had less distinct neural representations of consonant and dissonant chords as measured using a Neural Consonance Index derived from the electrophysiological “frequency-following response.” The results withstood a control for the effect of age on general affect, suggesting that different mechanisms are responsible for the perceived pleasantness of musical chords and affective voices and that, for listeners with clinically normal hearing, age-related differences in consonance perception are likely to be related to differences in neural temporal coding. PMID:25740534

  6. An efficient robust sound classification algorithm for hearing aids.

    PubMed

    Nordqvist, Peter; Leijon, Arne

    2004-06-01

    An efficient, robust sound classification algorithm based on hidden Markov models is presented. The system would enable a hearing aid to automatically change its behavior for differing listening environments according to the user's preferences. This work attempts to distinguish between three listening environment categories: speech in traffic noise, speech in babble, and clean speech, regardless of the signal-to-noise ratio. The classifier uses only the modulation characteristics of the signal, ignoring the absolute sound pressure level and the absolute spectrum shape, which makes the algorithm robust against irrelevant acoustic variations. The measured classification hit rate was 96.7%-99.5% when the classifier was tested with sounds representing one of the three environment categories included in the classifier. False-alarm rates were 0.2%-1.7% in these tests. The algorithm is robust and efficient, requiring only a small number of instructions and little memory, so it is fully feasible to implement the classifier in a DSP-based hearing instrument.
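
    As a rough illustration of the approach (one hidden Markov model per environment class, trained on level-normalized modulation features, with classification by maximum likelihood), here is a Python sketch using the hmmlearn package. The feature extraction is a deliberately crude stand-in, and the class names and model sizes are assumptions, not the published design:

        import numpy as np
        from hmmlearn import hmm  # third-party package: pip install hmmlearn

        def modulation_features(signal, fs, frame_len=0.025):
            """Per-frame log-energy plus delta, with the mean level removed so
            the classifier ignores absolute sound pressure level (as in the
            abstract above)."""
            n = int(fs * frame_len)
            frames = signal[: len(signal) // n * n].reshape(-1, n)
            log_e = np.log(np.abs(frames).mean(axis=1) + 1e-9)
            log_e -= log_e.mean()                     # discard absolute level
            delta = np.diff(log_e, prepend=log_e[0])  # modulation (change) cue
            return np.column_stack([log_e, delta])

        class EnvironmentClassifier:
            CLASSES = ("speech_in_traffic", "speech_in_babble", "clean_speech")

            def __init__(self):
                # one left-to-right-style Gaussian HMM per environment class
                self.models = {c: hmm.GaussianHMM(n_components=3,
                                                  covariance_type="diag",
                                                  n_iter=50)
                               for c in self.CLASSES}

            def train(self, labeled_features):
                # labeled_features: {class_name: (T, 2) feature array}
                for c, X in labeled_features.items():
                    self.models[c].fit(X)

            def classify(self, X):
                # pick the class whose HMM assigns the highest log-likelihood
                scores = {c: m.score(X) for c, m in self.models.items()}
                return max(scores, key=scores.get)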

  7. The Listening Cube: A Three Dimensional Auditory Training Program

    PubMed Central

    Anderson, Ilona; Bammens, Marleen; Jans, Josepha; Haesevoets, Marianne; Pans, Ria; Vandistel, Hilde; Vrolix, Yvette

    2012-01-01

    Objectives Here we present the Listening Cube, an auditory training program for children and adults receiving cochlear implants, developed during clinical practice at the KIDS Royal Institute for the Deaf in Belgium. We provide information on the content of the program as well as guidance on how to use it. Methods The Listening Cube is a three-dimensional auditory training model that takes the following into consideration: the sequence of auditory listening skills to be trained, the variety of materials to be used, and the range of listening environments to be considered. During auditory therapy, it is important to develop training protocols and materials that provide rapid improvement over a relatively short time period. Moreover, the effectiveness and general real-life applicability of these protocols to various users should be determined. Results Because this publication is not a research article but arises from daily clinical practice, we cannot report formal results; we can only say that this auditory training model has been very successful. Since the first report was published in the Dutch language in 2003, more than 200 therapists in Belgium and the Netherlands have followed a training course and elected to implement the Listening Cube in their daily practice with children and adults with a hearing loss, especially those wearing cochlear implants. Conclusion The Listening Cube is a tool to aid in the often challenging task of planning therapy sessions that meet individual needs. The three dimensions of the cube are levels of perception, practice material, and practice conditions. These dimensions can serve as a visual reminder of the task analysis and of other considerations that play a role in structuring therapy sessions. PMID:22701766

  8. Fast Forward: An Upskilling Programme for Ford Motor Company Foundry Workers.

    ERIC Educational Resources Information Center

    Cousin, Glynis; Pound, Gill

    1991-01-01

    The purpose of an upgrading program for British Ford Motor Company employees was to get trainees back into learning environments and to improve communication, listening, calculation, reading, and cooperation skills. (SK)

  9. Humans (Homo sapiens) judge the emotional content of piglet (Sus scrofa domestica) calls based on simple acoustic parameters, not personality, empathy, nor attitude toward animals.

    PubMed

    Maruščáková, Iva L; Linhart, Pavel; Ratcliffe, Victoria F; Tallet, Céline; Reby, David; Špinka, Marek

    2015-05-01

    The vocal expression of emotion is likely driven by shared physiological principles among species. However, which acoustic features promote decoding of emotional state, and how that decoding is affected by the listener's psychology, remain poorly understood. Here we tested how acoustic features of piglet vocalizations interact with psychological profiles of human listeners to affect judgments of the emotional content of heterospecific vocalizations. We played back 48 piglet call sequences recorded in four different contexts (castration, isolation, reunion, nursing) to 60 listeners. Listeners judged the emotional intensity and valence of the recordings and were further asked to attribute a context of emission from the four proposed contexts. Furthermore, listeners completed a series of questionnaires assessing their personality (NEO-FFI personality inventory), empathy [Interpersonal Reactivity Index (IRI)], and attitudes to animals (Animal Attitudes Scale). None of the listeners' psychological traits affected the judgments. In contrast, acoustic properties of the recordings had a substantial effect on ratings. Recordings were rated as more intense with increasing pitch (mean fundamental frequency) and increasing proportion of vocalized sound within each stimulus recording, and as more negative with increasing pitch and increasing duration of the calls within the recording. More complex acoustic properties (jitter, harmonics-to-noise ratio, and presence of subharmonics) did not seem to affect the judgments. The probability of correct context recognition correlated positively with the assessed emotion intensity for castration and reunion calls, and negatively for nursing calls. In conclusion, listeners judged emotions from pig calls using simple acoustic properties, and the perceived emotional intensity might guide the identification of the context. (c) 2015 APA, all rights reserved.

  10. Emotions induced by operatic music: psychophysiological effects of music, plot, and acting: a scientist's tribute to Maria Callas.

    PubMed

    Balteş, Felicia Rodica; Avram, Julia; Miclea, Mircea; Miu, Andrei C

    2011-06-01

    Operatic music involves both singing and acting (as well as a rich audiovisual background arising from the orchestra and elaborate scenery and costumes), which multiplies the mechanisms by which emotions are induced in listeners. The present study investigated the effects of music, plot, and acting performance on emotions induced by opera. There were three experimental conditions: (1) participants listened to a musically complex and dramatically coherent excerpt from Tosca; (2) they read a summary of the plot and listened to the same musical excerpt again; and (3) they re-listened to the music while they watched the subtitled film of the acting performance. In addition, a control condition was included, in which an independent sample of participants successively listened three times to the same musical excerpt. We measured subjective changes using both dimensional and music-specific emotion questionnaires. Cardiovascular, electrodermal, and respiratory responses were also recorded, and the participants kept track of their musical chills. Music listening alone elicited positive emotion and autonomic arousal, seen in faster heart rate but slower respiration rate and reduced skin conductance. Knowing the (sad) plot while listening to the music a second time reduced positive emotions (peacefulness, joyful activation) and increased negative ones (sadness), while high autonomic arousal was maintained. Watching the acting performance increased emotional arousal and changed its valence again (from less positive/sad to transcendent), in the context of continued high autonomic arousal. Repeated exposure to the music did not by itself induce this pattern of modifications. These results indicate that the multiple musical and dramatic means involved in operatic performance specifically contribute to the genesis of music-induced emotions and their physiological correlates. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Interactive physically-based sound simulation

    NASA Astrophysics Data System (ADS)

    Raghuvanshi, Nikunj

    The realization of interactive, immersive virtual worlds requires the ability to present a realistic audio experience that convincingly complements their visual rendering. Physical simulation is a natural way to achieve such realism, enabling deeply immersive virtual worlds. However, physically-based sound simulation is very computationally expensive owing to the high-frequency, transient oscillations underlying audible sounds. The increasing computational power of desktop computers has served to reduce the gap between required and available computation, and it has become possible to bridge this gap further by using a combination of algorithmic improvements that exploit the physical as well as perceptual properties of audible sounds. My thesis is a step in this direction. My dissertation concentrates on developing real-time techniques for both sub-problems of sound simulation: synthesis and propagation. Sound synthesis is concerned with generating the sounds produced by objects due to elastic surface vibrations upon interaction with the environment, such as collisions. I present novel techniques that exploit human auditory perception to simulate scenes with hundreds of sounding objects undergoing impact and rolling in real time. Sound propagation is the complementary problem of modeling the high-order scattering and diffraction of sound in an environment as it travels from source to listener. I discuss my work on a novel numerical acoustic simulator (ARD) that is a hundred times faster and consumes a tenth of the memory of a high-accuracy finite-difference technique, allowing acoustic simulations on previously intractable spaces, such as a cathedral, on a desktop computer. Lastly, I present my work on interactive sound propagation that leverages my ARD simulator to render the acoustics of arbitrary static scenes for multiple moving sources and a moving listener in real time, while accounting for scene-dependent effects such as low-pass filtering and smooth attenuation behind obstructions, reverberation, scattering from complex geometry, and sound focusing. This is enabled by a novel compact representation that requires a thousandth of the memory of a direct scheme, reducing memory footprints to fit within available main memory. To the best of my knowledge, this is the only technique and system in existence to demonstrate auralization of physical wave-based effects in real time on large, complex 3D scenes.
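
    As a small illustration of the synthesis side, the standard building block is modal synthesis: an impact excites a set of exponentially damped sinusoidal modes. The Python sketch below is a generic example of that technique, not code from the dissertation, and the modal frequencies, dampings, and gains are invented:

        import numpy as np

        def impact_sound(freqs_hz, dampings, gains, fs=44100, dur=1.0):
            """Render sum_i g_i * exp(-d_i * t) * sin(2*pi*f_i*t), the classic
            modal model of a struck resonating object."""
            t = np.arange(int(fs * dur)) / fs
            out = np.zeros_like(t)
            for f, d, g in zip(freqs_hz, dampings, gains):
                out += g * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
            return out / np.max(np.abs(out))  # normalize to [-1, 1]

        # Hypothetical modes loosely resembling a struck metal bar
        audio = impact_sound(freqs_hz=[440.0, 1210.0, 2380.0],
                             dampings=[6.0, 9.0, 14.0],
                             gains=[1.0, 0.5, 0.25])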

  12. Tempo and walking speed with music in the urban context

    PubMed Central

    Franěk, Marek; van Noorden, Leon; Režný, Lukáš

    2014-01-01

    The study explored the effect of music on the temporal aspects of walking behavior in a real outdoor urban setting. First, spontaneous synchronization between the beat of the music and step tempo was explored. The effect of motivational and non-motivational music (Karageorghis et al., 1999) on walking speed was also studied. Finally, we investigated whether music can mask the effects of visual aspects of the walking route environment, which involve fluctuations of walking speed in response to particular environmental settings. In two experiments, we asked participants to walk around an urban route 1.8 km in length through various environments in the downtown area of Hradec Králové. In Experiment 1, the participants listened to a musical track consisting of world pop music with a clear beat. In Experiment 2, participants walked either with motivational music, which had a fast tempo and a strong rhythm, or with non-motivational music, which was slower and pleasant but carried no strong impulse toward movement. Musical beat, as well as the sonic character of the music listened to while walking, influenced walking speed but did not lead to precise synchronization. Many subjects did not spontaneously synchronize with the beat of the music at all, and some subjects synchronized only part of the time. Fast, energetic music increased walking tempo, while slower, relaxing music decreased it. Further, listening to music with headphones while walking masked the influence of the surrounding environment to some extent. Both motivational and non-motivational music had a larger effect than the world pop music from Experiment 1. Individual differences in responses to the music listened to while walking, linked to extraversion and neuroticism, were also observed. The findings described here could be useful in rhythmic stimulation for enhancing or recovering features of movement performance. PMID:25520682

  13. Effects of multiple congruent cues on concurrent sound segregation during passive and active listening: an event-related potential (ERP) study.

    PubMed

    Kocsis, Zsuzsanna; Winkler, István; Szalárdy, Orsolya; Bendixen, Alexandra

    2014-07-01

    In two experiments, we assessed the effects of combining different cues of concurrent sound segregation on the object-related negativity (ORN) and the P400 event-related potential components. Participants were presented with sequences of complex tones, half of which contained some manipulation: one or two harmonic partials were mistuned, delayed, or presented from a different location than the rest. In separate conditions, one, two, or three of these manipulations were combined. Participants watched a silent movie (passive listening) or reported after each tone whether they perceived one or two concurrent sounds (active listening). ORN was found in almost all conditions except for location difference alone during passive listening. Combining several cues or manipulating more than one partial consistently led to sub-additive effects on the ORN amplitude. These results support the view that ORN reflects a combined, feature-unspecific assessment of the auditory system regarding the contribution of two sources to the incoming sound. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Measuring listening effort: driving simulator vs. simple dual-task paradigm

    PubMed Central

    Wu, Yu-Hsiang; Aksan, Nazan; Rizzo, Matthew; Stangl, Elizabeth; Zhang, Xuyang; Bentler, Ruth

    2014-01-01

    Objectives The dual-task paradigm has been widely used to measure listening effort. The primary objectives of the study were to (1) investigate the effect of hearing aid amplification and hearing aid directional technology on listening effort measured by a more complex, real-world dual-task paradigm, and (2) compare the results obtained with this paradigm to those from a simpler laboratory-style dual-task paradigm. Design The listening effort of adults with hearing impairment was measured using two dual-task paradigms, wherein participants performed a speech recognition task simultaneously with either a driving task in a simulator or a visual reaction-time task in a sound-treated booth. The speech materials and road noises for the speech recognition task were recorded in a van traveling on the highway in three hearing aid conditions: unaided, aided with omnidirectional processing (OMNI), and aided with directional processing (DIR). The change in driving task or visual reaction-time task performance across the conditions quantified the change in listening effort. Results Compared to the driving-only condition, driving performance declined significantly with the addition of the speech recognition task. Although the speech recognition score was higher in the OMNI and DIR conditions than in the unaided condition, driving performance was similar across these three conditions, suggesting that listening effort was not affected by amplification and directional processing. Results from the simple dual-task paradigm showed a similar trend: hearing aid technologies improved speech recognition performance but did not affect performance in the visual reaction-time task (i.e., did not reduce listening effort). The correlation between listening effort measured using the driving paradigm and the visual reaction-time task paradigm was significant. The finding that our older participants' (56 to 85 years old) better speech recognition performance did not result in reduced listening effort was inconsistent with literature that evaluated younger (approximately 20 years old), normal-hearing adults. Because of this, a follow-up study was conducted, in which the visual reaction-time dual-task experiment using the same speech materials and road noises was repeated with younger adults with normal hearing. Contrary to findings with the older participants, the results indicated that the directional technology significantly improved performance in both the speech recognition and visual reaction-time tasks. Conclusions Adding a speech listening task to driving undermined driving performance. Hearing aid technologies significantly improved speech recognition while driving but did not significantly reduce listening effort. Listening effort measured by dual-task experiments using a simulated real-world driving task and a conventional laboratory-style task was generally consistent. For a given listening environment, the benefit of hearing aid technologies on listening effort measured in younger adults with normal hearing may not fully translate to older listeners with hearing impairment. PMID:25083599

  15. Some factors underlying individual differences in speech recognition on PRESTO: a first report.

    PubMed

    Tamati, Terrin N; Gilbert, Jaimie L; Pisoni, David B

    2013-01-01

    Previous studies investigating speech recognition in adverse listening conditions have found extensive variability among individual listeners. However, little is currently known about the core underlying factors that influence speech recognition abilities. To investigate sensory, perceptual, and neurocognitive differences between good and poor listeners on the Perceptually Robust English Sentence Test Open-set (PRESTO), a new high-variability sentence recognition test under adverse listening conditions. Participants who fell in the upper quartile (HiPRESTO listeners) or lower quartile (LoPRESTO listeners) on key word recognition on sentences from PRESTO in multitalker babble completed a battery of behavioral tasks and self-report questionnaires designed to investigate real-world hearing difficulties, indexical processing skills, and neurocognitive abilities. Young, normal-hearing adults (N = 40) from the Indiana University community participated in the current study. Participants' assessment of their own real-world hearing difficulties was measured with a self-report questionnaire on situational hearing and hearing health history. Indexical processing skills were assessed using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. Neurocognitive abilities were measured with the Auditory Digit Span Forward (verbal short-term memory) and Digit Span Backward (verbal working memory) tests, the Stroop Color and Word Test (attention/inhibition), the WordFam word familiarity test (vocabulary size), the Behavioral Rating Inventory of Executive Function-Adult Version (BRIEF-A) self-report questionnaire on executive function, and two performance subtests of the Wechsler Abbreviated Scale of Intelligence (WASI) Performance Intelligence Quotient (IQ; nonverbal intelligence). Scores on self-report questionnaires and behavioral tasks were tallied and analyzed by listener group (HiPRESTO and LoPRESTO). The extreme groups did not differ overall on self-reported hearing difficulties in real-world listening environments. However, an item-by-item analysis of questions revealed that LoPRESTO listeners reported significantly greater difficulty understanding speakers in a public place. HiPRESTO listeners were significantly more accurate than LoPRESTO listeners at gender discrimination and regional dialect categorization, but they did not differ on talker discrimination accuracy or response time, or gender discrimination response time. HiPRESTO listeners also had longer forward and backward digit spans, higher word familiarity ratings on the WordFam test, and lower (better) scores for three individual items on the BRIEF-A questionnaire related to cognitive load. The two groups did not differ on the Stroop Color and Word Test or either of the WASI performance IQ subtests. HiPRESTO listeners and LoPRESTO listeners differed in indexical processing abilities, short-term and working memory capacity, vocabulary size, and some domains of executive functioning. These findings suggest that individual differences in the ability to encode and maintain highly detailed episodic information in speech may underlie the variability observed in speech recognition performance in adverse listening conditions using high-variability PRESTO sentences in multitalker babble. American Academy of Audiology.

  16. Some Factors Underlying Individual Differences in Speech Recognition on PRESTO: A First Report

    PubMed Central

    Tamati, Terrin N.; Gilbert, Jaimie L.; Pisoni, David B.

    2013-01-01

    Background Previous studies investigating speech recognition in adverse listening conditions have found extensive variability among individual listeners. However, little is currently known about the core, underlying factors that influence speech recognition abilities. Purpose To investigate sensory, perceptual, and neurocognitive differences between good and poor listeners on PRESTO, a new high-variability sentence recognition test under adverse listening conditions. Research Design Participants who fell in the upper quartile (HiPRESTO listeners) or lower quartile (LoPRESTO listeners) on key word recognition on sentences from PRESTO in multitalker babble completed a battery of behavioral tasks and self-report questionnaires designed to investigate real-world hearing difficulties, indexical processing skills, and neurocognitive abilities. Study Sample Young, normal-hearing adults (N = 40) from the Indiana University community participated in the current study. Data Collection and Analysis Participants’ assessment of their own real-world hearing difficulties was measured with a self-report questionnaire on situational hearing and hearing health history. Indexical processing skills were assessed using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. Neurocognitive abilities were measured with the Auditory Digit Span Forward (verbal short-term memory) and Digit Span Backward (verbal working memory) tests, the Stroop Color and Word Test (attention/inhibition), the WordFam word familiarity test (vocabulary size), the BRIEF-A self-report questionnaire on executive function, and two performance subtests of the WASI Performance IQ (non-verbal intelligence). Scores on self-report questionnaires and behavioral tasks were tallied and analyzed by listener group (HiPRESTO and LoPRESTO). Results The extreme groups did not differ overall on self-reported hearing difficulties in real-world listening environments. However, an item-by-item analysis of questions revealed that LoPRESTO listeners reported significantly greater difficulty understanding speakers in a public place. HiPRESTO listeners were significantly more accurate than LoPRESTO listeners at gender discrimination and regional dialect categorization, but they did not differ on talker discrimination accuracy or response time, or gender discrimination response time. HiPRESTO listeners also had longer forward and backward digit spans, higher word familiarity ratings on the WordFam test, and lower (better) scores for three individual items on the BRIEF-A questionnaire related to cognitive load. The two groups did not differ on the Stroop Color and Word Test or either of the WASI performance IQ subtests. Conclusions HiPRESTO listeners and LoPRESTO listeners differed in indexical processing abilities, short-term and working memory capacity, vocabulary size, and some domains of executive functioning. These findings suggest that individual differences in the ability to encode and maintain highly detailed episodic information in speech may underlie the variability observed in speech recognition performance in adverse listening conditions using high-variability PRESTO sentences in multitalker babble. PMID:24047949

  17. Personal Audiovisual Aptitude Influences the Interaction Between Landscape and Soundscape Appraisal.

    PubMed

    Sun, Kang; Echevarria Sanchez, Gemma M; De Coensel, Bert; Van Renterghem, Timothy; Talsma, Durk; Botteldooren, Dick

    2018-01-01

    It has been established that there is an interaction between audition and vision in the appraisal of our living environment, and that this appraisal is influenced by personal factors. Here, we test the hypothesis that audiovisual aptitude influences appraisal of our sonic and visual environment. To measure audiovisual aptitude, an auditory deviant detection experiment was conducted in an ecologically valid and complex context. This experiment allows us to distinguish between accurate and less accurate listeners. Additionally, it allows us to distinguish between participants who are easily visually distracted and those who are not. To test the hypothesis, two previously conducted laboratory experiments were re-analyzed. The first experiment focuses on self-reported noise annoyance in a living room context, whereas the second focuses on the perceived pleasantness of using outdoor public spaces. In the first experiment, the influence of visibility of vegetation on self-reported noise annoyance was modified by audiovisual aptitude. In the second, it was found that the overall appraisal of walking across a bridge is influenced by audiovisual aptitude, in particular when a visually intrusive noise barrier is used to reduce highway traffic noise levels. We conclude that audiovisual aptitude may affect the appraisal of the living environment.

  18. Personal Audiovisual Aptitude Influences the Interaction Between Landscape and Soundscape Appraisal

    PubMed Central

    Sun, Kang; Echevarria Sanchez, Gemma M.; De Coensel, Bert; Van Renterghem, Timothy; Talsma, Durk; Botteldooren, Dick

    2018-01-01

    It has been established that there is an interaction between audition and vision in the appraisal of our living environment, and that this appraisal is influenced by personal factors. Here, we test the hypothesis that audiovisual aptitude influences appraisal of our sonic and visual environment. To measure audiovisual aptitude, an auditory deviant detection experiment was conducted in an ecologically valid and complex context. This experiment allows us to distinguish between accurate and less accurate listeners. Additionally, it allows us to distinguish between participants who are easily visually distracted and those who are not. To test the hypothesis, two previously conducted laboratory experiments were re-analyzed. The first experiment focuses on self-reported noise annoyance in a living room context, whereas the second focuses on the perceived pleasantness of using outdoor public spaces. In the first experiment, the influence of visibility of vegetation on self-reported noise annoyance was modified by audiovisual aptitude. In the second, it was found that the overall appraisal of walking across a bridge is influenced by audiovisual aptitude, in particular when a visually intrusive noise barrier is used to reduce highway traffic noise levels. We conclude that audiovisual aptitude may affect the appraisal of the living environment. PMID:29910750

  19. Mobile phone conversations, listening to music and quiet (electric) cars: Are traffic sounds important for safe cycling?

    PubMed

    Stelling-Konczak, A; van Wee, G P; Commandeur, J J F; Hagenzieker, M

    2017-09-01

    Listening to music or talking on the phone while cycling, as well as the growing number of quiet (electric) cars on the road, can make the use of auditory cues challenging for cyclists. The present study examined to what extent and in which traffic situations traffic sounds are important for safe cycling. Furthermore, the study investigated the potential safety implications of limited auditory information caused by quiet (electric) cars and by cyclists listening to music or talking on the phone. An Internet survey among 2249 cyclists in three age groups (16-18, 30-40, and 65-70 years old) was carried out to collect information on the following aspects: 1) the auditory perception of traffic sounds, including the sounds of quiet (electric) cars; 2) the possible compensatory behaviours of cyclists who listen to music or talk on their mobile phones; 3) the possible contribution of listening to music and talking on the phone to cycling crashes and incidents. Age differences with respect to these three aspects were analysed. Results show that listening to music and talking on the phone negatively affect perception of sounds crucial for safe cycling. However, taking into account the influence of confounding variables, no relationship was found between the frequency of listening to music or talking on the phone and the frequency of incidents among teenage cyclists. This may be due to cyclists compensating for the use of portable devices. Listening to music or talking on the phone whilst cycling may still pose a risk in the absence of compensatory behaviour or in a traffic environment with less extensive and less safe cycling infrastructure than the Dutch setting. With the increasing number of quiet (electric) cars on the road, cyclists in the future may also need to compensate for the limited auditory input of these cars. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Using Complex Event Processing (CEP) and vocal synthesis techniques to improve comprehension of sonified human-centric data

    NASA Astrophysics Data System (ADS)

    Rimland, Jeff; Ballora, Mark

    2014-05-01

    The field of sonification, which uses auditory presentation of data to replace or augment visualization techniques, is gaining popularity and acceptance for analysis of "big data" and for assisting analysts who are unable to utilize traditional visual approaches due to either: 1) visual overload caused by existing displays; 2) concurrent need to perform critical visually intensive tasks (e.g. operating a vehicle or performing a medical procedure); or 3) visual impairment due to either temporary environmental factors (e.g. dense smoke) or biological causes. Sonification tools typically map data values to sound attributes such as pitch, volume, and localization to enable them to be interpreted via human listening. In more complex problems, the challenge is in creating multi-dimensional sonifications that are both compelling and listenable, and that have enough discrete features that can be modulated in ways that allow meaningful discrimination by a listener. We propose a solution to this problem that incorporates Complex Event Processing (CEP) with speech synthesis. Some of the more promising sonifications to date use speech synthesis, which is an "instrument" that is amenable to extended listening, and can also provide a great deal of subtle nuance. These vocal nuances, which can represent a nearly limitless number of expressive meanings (via a combination of pitch, inflection, volume, and other acoustic factors), are the basis of our daily communications, and thus have the potential to engage the innate human understanding of these sounds. Additionally, recent advances in CEP have facilitated the extraction of multi-level hierarchies of information, which is necessary to bridge the gap between raw data and this type of vocal synthesis. We therefore propose that CEP-enabled sonifications based on the sound of human utterances could be considered the next logical step in human-centric "big data" compression and transmission.
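
    A toy Python sketch of the proposed pipeline may help fix ideas: a simple event rule stands in for the CEP layer, and detected events are rendered as speech whose rate and volume encode event magnitude. The detection rule, thresholds, and prosody mappings are invented for illustration; speech output uses the pyttsx3 package:

        import pyttsx3  # third-party offline text-to-speech: pip install pyttsx3

        def detect_events(stream, threshold=0.8, window=5):
            """Toy stand-in for a CEP rule: fire whenever the running mean of
            the last `window` samples exceeds `threshold`."""
            buf = []
            for i, x in enumerate(stream):
                buf.append(x)
                if len(buf) > window:
                    buf.pop(0)
                if len(buf) == window and sum(buf) / window > threshold:
                    yield i, sum(buf) / window

        def sonify(stream):
            engine = pyttsx3.init()
            for i, magnitude in detect_events(stream):
                # Map event magnitude onto vocal "nuance": larger events are
                # spoken louder and faster.
                engine.setProperty("volume", min(1.0, magnitude))
                engine.setProperty("rate", int(120 + 100 * magnitude))
                engine.say(f"event at sample {i}")
            engine.runAndWait()

        sonify([0.1, 0.5, 0.9, 0.95, 0.9, 0.85, 0.9, 0.2])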

  1. Factors Affecting Daily Cochlear Implant Use in Children: Datalogging Evidence.

    PubMed

    Easwar, Vijayalakshmi; Sanfilippo, Joseph; Papsin, Blake; Gordon, Karen

    Children with profound hearing loss can gain access to sound through cochlear implants (CIs), but these devices must be worn consistently to promote auditory development. Although subjective parent reports have identified several factors limiting long-term CI use in children, it is also important to understand the day-to-day issues which may preclude consistent device use. In the present study, objective measures gathered through datalogging software were used to quantify the following in children: (1) number of hours of CI use per day, (2) practical concerns including repeated disconnections between the external transmission coil and the internal device (termed "coil-offs"), and (3) listening environments experienced during daily use. This study aimed to (1) objectively measure daily CI use and factors influencing consistent device use in children using one or two CIs and (2) evaluate the intensity levels and types of listening environments children are exposed to during daily CI use. Retrospective analysis. Measures of daily CI use were obtained from 146 pediatric users of Cochlear Nucleus 6 speech processors. The sample included 5 unilateral, 40 bimodal, and 101 bilateral CI users (77 simultaneously and 24 sequentially implanted). Daily CI use, duration and frequency of coil-offs per day, and the time spent in multiple intensity ranges and environment types were extracted from the datalog saved during clinic appointments. Multiple regression analyses were completed to predict daily CI use based on child-related demographic variables, and to evaluate the effects of age on coil-offs and environment acoustics. Children used their CIs for an average of 9.86 ± 3.43 hr per day, with use exceeding 9 hr per day in ∼64% of the children. Daily CI use decreased significantly with increasing durations of coil-off (p = 0.027) and increased significantly with longer CI experience (p < 0.001) and pre-CI acoustic experience (p < 0.001), when controlled for the child's age. Total time in sound (sum of CI and pre-CI experience) was positively correlated with CI use (r = 0.72, p < 0.001). Longer durations of coil-off were associated with a higher frequency of coil-offs (p < 0.001). The frequency of coil-offs ranged from 0.99 to 594.10 times per day and decreased significantly with age (p < 0.001). Daily CI use and frequency of coil-offs did not vary significantly across known etiologies. Listening environments of all children typically ranged between 50 and 70 dBA. Children of all ages were exposed to speech in noisy environments. Environments classified as "music" were identified more often in younger children. The majority of children use their CIs consistently, even during the first year of implantation. The frequency of coil-offs is a practical challenge in infants and young children and demonstrates the need for improved coil retention methods for pediatric use. Longer hearing experience and shorter coil-off time facilitate consistent CI use. Children listen to speech in noisy environments most often, indicating a need for better access to binaural cues, signal processing, and stimulation strategies to aid listening. Study findings could be useful in parent counseling of young and/or new CI users. American Academy of Audiology

  2. Children Researching Their Urban Environment: Developing a Methodology

    ERIC Educational Resources Information Center

    Hacking, Elisabeth Barratt; Barratt, Robert

    2009-01-01

    "Listening to children: environmental perspectives and the school curriculum" (L2C) was a UK research council project based in schools in a socially and economically deprived urban area in England. It focused on 10/12 year old children's experience of their local community and environment, and how they made sense of this in relation both…

  3. Using Ubiquitous Games in an English Listening and Speaking Course: Impact on Learning Outcomes and Motivation

    ERIC Educational Resources Information Center

    Liu, Tsung-Yu; Chu, Yu-Ling

    2010-01-01

    This paper reports the results of a study which aimed to investigate how ubiquitous games influence English learning achievement and motivation through a context-aware ubiquitous learning environment. An English curriculum was conducted on a school campus by using a context-aware ubiquitous learning environment called the Handheld English Language…

  4. Constructing Nature with Children: A Phenomenological Study of Preschoolers' Experiences With(in) A Natural Environment

    ERIC Educational Resources Information Center

    Porto, Adonia F.

    2017-01-01

    This research investigated young children's experiences of a natural wetland environment as they constructed meanings of nature in a group. This work was framed theoretically on the premise of social constructivism and ethical listening in efforts to phenomenologically understand how children came to know nature through pre-reflective and…

  5. The Use of an Information Brokering Tool in an Electronic Museum Environment.

    ERIC Educational Resources Information Center

    Zimmermann, Andreas; Lorenz, Andreas; Specht, Marcus

    When art and technology meet, a huge information flow has to be managed. The LISTEN project conducted by the Fraunhofer Institut in St. Augustin (Germany) augments everyday environments with audio information. In order to distribute and administer this information in an efficient way, the Institute decided to employ an information brokering tool…

  6. Using a Humanoid Robot to Develop a Dialogue-Based Interactive Learning Environment for Elementary Foreign Language Classrooms

    ERIC Educational Resources Information Center

    Chang, Chih-Wei; Chen, Gwo-Dong

    2010-01-01

    Elementary school is the critical stage during which the development of listening comprehension and oral abilities in language acquisition occur, especially with a foreign language. However, the current foreign language instructors often adopt one-way teaching, and the learning environment lacks any interactive instructional media with which to…

  7. Investigation of musicality in birdsong.

    PubMed

    Rothenberg, David; Roeske, Tina C; Voss, Henning U; Naguib, Marc; Tchernichovski, Ofer

    2014-02-01

    Songbirds spend much of their time learning, producing, and listening to complex vocal sequences we call songs. Songs are learned via cultural transmission, and singing, usually by males, has a strong impact on the behavioral state of the listeners, often promoting affiliation, pair bonding, or aggression. What is it in the acoustic structure of birdsong that makes it such a potent stimulus? We suggest that birdsong potency might be driven by principles similar to those that make music so effective in inducing emotional responses in humans: a combination of rhythms and pitches-and the transitions between acoustic states-affecting emotions through creating expectations, anticipations, tension, tension release, or surprise. Here we propose a framework for investigating how birdsong, like human music, employs the above "musical" features to affect the emotions of avian listeners. First we analyze songs of thrush nightingales (Luscinia luscinia) by examining their trajectories in terms of transitions in rhythm and pitch. These transitions show gradual escalations and graceful modifications, which are comparable to some aspects of human musicality. We then explore the feasibility of stripping such putative musical features from the songs and testing how this might affect patterns of auditory responses, focusing on fMRI data in songbirds that demonstrate the feasibility of such approaches. Finally, we explore ideas for investigating whether musical features of birdsong activate avian brains and affect avian behavior in manners comparable to music's effects on humans. In conclusion, we suggest that birdsong research would benefit from current advances in music theory by attempting to identify structures that are designed to elicit listeners' emotions and then testing for such effects experimentally. Birdsong research that takes into account the striking complexity of song structure in light of its more immediate function - to affect behavioral state in listeners - could provide a useful animal model for studying basic principles of music neuroscience in a system that is very accessible for investigation, and where developmental auditory and social experience can be tightly controlled. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Auditory psychophysics and perception.

    PubMed

    Hirsh, I J; Watson, C S

    1996-01-01

    In this review of auditory psychophysics and perception, we cite some important books, research monographs, and research summaries from the past decade. Within auditory psychophysics, we have singled out some topics of current importance: Cross-Spectral Processing, Timbre and Pitch, and Methodological Developments. Complex sounds and complex listening tasks have been the subject of new studies in auditory perception. We review especially work that concerns auditory pattern perception, with emphasis on temporal aspects of the patterns and on patterns that do not depend on the cognitive structures often involved in the perception of speech and music. Finally, we comment on some aspects of individual difference that are sufficiently important to question the goal of characterizing auditory properties of the typical, average, adult listener. Among the important factors that give rise to these individual differences are those involved in selective processing and attention.

  9. No Need for Templates in the Auditory Enhancement Effect

    PubMed Central

    Carcagno, Samuele; Semal, Catherine; Demany, Laurent

    2013-01-01

    The audibility of a target tone in a multitone background masker is enhanced by the presentation of a precursor sound consisting of the masker alone. There is evidence that precursor-induced neural adaptation plays a role in this perceptual enhancement. However, the precursor may also be strategically used by listeners as a spectral template of the following masker to better segregate it from the target. In the present study, we tested this hypothesis by measuring the audibility of a target tone in a multitone masker after the presentation of precursors which, in some conditions, were made dissimilar to the masker by gating their components asynchronously. The precursor and the following sound were presented either to the same ear or to opposite ears. In either case, we found no significant difference in the amount of enhancement produced by synchronous and asynchronous precursors. In a second experiment, listeners had to judge whether a synchronous multitone complex contained exactly the same tones as a preceding precursor complex or had one tone less. In this experiment, listeners performed significantly better with synchronous than with asynchronous precursors, showing that asynchronous precursors were poorer perceptual templates of the synchronous multitone complexes. Overall, our findings indicate that precursor-induced auditory enhancement cannot be fully explained by the strategic use of the precursor as a template of the following masker. Our results are consistent with an explanation of enhancement based on selective neural adaptation taking place at a central locus of the auditory system. PMID:23826348

  10. No Need for Templates in the Auditory Enhancement Effect.

    PubMed

    Carcagno, Samuele; Semal, Catherine; Demany, Laurent

    2013-01-01

    The audibility of a target tone in a multitone background masker is enhanced by the presentation of a precursor sound consisting of the masker alone. There is evidence that precursor-induced neural adaptation plays a role in this perceptual enhancement. However, the precursor may also be strategically used by listeners as a spectral template of the following masker to better segregate it from the target. In the present study, we tested this hypothesis by measuring the audibility of a target tone in a multitone masker after the presentation of precursors which, in some conditions, were made dissimilar to the masker by gating their components asynchronously. The precursor and the following sound were presented either to the same ear or to opposite ears. In either case, we found no significant difference in the amount of enhancement produced by synchronous and asynchronous precursors. In a second experiment, listeners had to judge whether a synchronous multitone complex contained exactly the same tones as a preceding precursor complex or had one tone less. In this experiment, listeners performed significantly better with synchronous than with asynchronous precursors, showing that asynchronous precursors were poorer perceptual templates of the synchronous multitone complexes. Overall, our findings indicate that precursor-induced auditory enhancement cannot be fully explained by the strategic use of the precursor as a template of the following masker. Our results are consistent with an explanation of enhancement based on selective neural adaptation taking place at a central locus of the auditory system.

  11. From fragments to the whole: a comparison between cochlear implant users and normal-hearing listeners in music perception and enjoyment.

    PubMed

    Alexander, Ashlin J; Bartel, Lee; Friesen, Lendra; Shipp, David; Chen, Joseph

    2011-02-01

    Cochlear implants (CIs) allow many profoundly deaf individuals to regain speech understanding. However, the ability to understand speech does not necessarily guarantee music enjoyment. Enabling a CI user to recover the ability to perceive and enjoy the complexity of music remains a challenge determined by many factors. (1) To construct a novel, attention-based, diagnostic software tool (Music EAR) for the assessment of music enjoyment and perception and (2) to compare the results among three listener groups. Thirty-six subjects completed the Music EAR assessment tool: 12 normal-hearing musicians (NHMs), 12 normal-hearing nonmusicians (NHnMs), and 12 CI listeners. Subjects were required to (1) rate enjoyment of musical excerpts at three complexity levels; (2) differentiate five instrumental timbres; (3) recognize pitch pattern variation; and (4) identify target musical patterns embedded holistically in a melody. Enjoyment scores for CI users were comparable to those for NHMs and superior to those for NHnMs and revealed that implantees enjoyed classical music most. CI users performed significantly poorer in all categories of music perception compared to normal-hearing listeners. Overall CI user scores were lowest in those tasks requiring increased attention. Two high-performing subjects matched or outperformed NHnMs in pitch and timbre perception tasks. The Music EAR assessment tool provides a unique approach to the measurement of music perception and enjoyment in CI users. Together with auditory training evidence, the results provide considerable hope for further recovery of music appreciation through methodical rehabilitation.

  12. An environment-adaptive management algorithm for hearing-support devices incorporating listening situation and noise type classifiers.

    PubMed

    Yook, Sunhyun; Nam, Kyoung Won; Kim, Heepyung; Hong, Sung Hwa; Jang, Dong Pyo; Kim, In Young

    2015-04-01

    In order to provide more consistent sound intelligibility for the hearing-impaired person, regardless of environment, it is necessary to adjust the setting of the hearing-support (HS) device to accommodate various environmental circumstances. In this study, a fully automatic HS device management algorithm that can adapt to various environmental situations is proposed; it is composed of a listening-situation classifier, a noise-type classifier, an adaptive noise-reduction algorithm, and a management algorithm that can selectively turn on/off one or more of the three basic algorithms (beamforming, noise reduction, and feedback cancellation) and can also adjust internal gains and parameters of the wide-dynamic-range compression (WDRC) and noise-reduction (NR) algorithms in accordance with variations in environmental situations. Experimental results demonstrated that the implemented algorithms can classify both listening situations and ambient noise types with high accuracies (92.8-96.4% and 90.9-99.4%, respectively), and the gains and parameters of the WDRC and NR algorithms were successfully adjusted according to variations in environmental situation. The average values of signal-to-noise ratio (SNR), frequency-weighted segmental SNR, Perceptual Evaluation of Speech Quality, and mean opinion test scores of 10 normal-hearing volunteers for the adaptive multiband spectral subtraction (MBSS) algorithm were improved by 1.74 dB, 2.11 dB, 0.49, and 0.68, respectively, compared to the conventional fixed-parameter MBSS algorithm. These results indicate that the proposed environment-adaptive management algorithm can be applied to HS devices to improve sound intelligibility for hearing-impaired individuals in various acoustic environments. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
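
    The management layer can be pictured as a decision table over the two classifier outputs. The Python sketch below is schematic: the class names, parameter values, and policy entries are illustrative assumptions rather than the published algorithm:

        from dataclasses import dataclass

        @dataclass
        class HSConfig:
            beamforming: bool        # directional microphone processing on/off
            noise_reduction: bool    # NR algorithm on/off
            feedback_cancel: bool    # feedback cancellation on/off
            wdrc_ratio: float        # WDRC compression ratio
            nr_max_atten_db: float   # maximum NR attenuation in dB

        # Hypothetical policy keyed by (listening situation, noise type)
        POLICY = {
            ("speech", "babble"):     HSConfig(True,  True,  True, 2.5, 10.0),
            ("speech", "traffic"):    HSConfig(True,  True,  True, 2.0, 14.0),
            ("speech", "quiet"):      HSConfig(False, False, True, 3.0,  0.0),
            ("no_speech", "traffic"): HSConfig(False, True,  True, 1.5, 14.0),
        }

        def manage(situation: str, noise_type: str) -> HSConfig:
            """Select a device configuration; fall back to a conservative
            default for combinations not in the table."""
            return POLICY.get((situation, noise_type),
                              HSConfig(False, True, True, 2.0, 8.0))

        print(manage("speech", "babble"))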

  13. Impact of stimulus-related factors and hearing impairment on listening effort as indicated by pupil dilation.

    PubMed

    Ohlenforst, Barbara; Zekveld, Adriana A; Lunner, Thomas; Wendt, Dorothea; Naylor, Graham; Wang, Yang; Versfeld, Niek J; Kramer, Sophia E

    2017-08-01

    Previous research has reported effects of masker type and signal-to-noise ratio (SNR) on listening effort, as indicated by the peak pupil dilation (PPD) relative to baseline during speech recognition. At about 50% correct sentence recognition performance, increasing SNRs generally results in declining PPDs, indicating reduced effort. However, the decline in PPD over SNRs has been observed to be less pronounced for hearing-impaired (HI) compared to normal-hearing (NH) listeners. The presence of a competing talker during speech recognition generally resulted in larger PPDs as compared to the presence of a fluctuating or stationary background noise. The aim of the present study was to examine the interplay between hearing-status, a broad range of SNRs corresponding to sentence recognition performance varying from 0 to 100% correct, and different masker types (stationary noise and single-talker masker) on the PPD during speech perception. Twenty-five HI and 32 age-matched NH participants listened to sentences across a broad range of SNRs, masked with speech from a single talker (-25 dB to +15 dB SNR) or with stationary noise (-12 dB to +16 dB). Correct sentence recognition scores and pupil responses were recorded during stimulus presentation. With a stationary masker, NH listeners show maximum PPD across a relatively narrow range of low SNRs, while HI listeners show relatively large PPD across a wide range of ecological SNRs. With the single-talker masker, maximum PPD was observed in the mid-range of SNRs around 50% correct sentence recognition performance, while smaller PPDs were observed at lower and higher SNRs. Mixed-model ANOVAs revealed significant interactions between hearing-status and SNR on the PPD for both masker types. Our data show a different pattern of PPDs across SNRs between groups, which indicates that listening and the allocation of effort during listening in daily life environments may be different for NH and HI listeners. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  14. Impact of Hearing Aid Technology on Outcomes in Daily Life II: Speech Understanding and Listening Effort.

    PubMed

    Johnson, Jani A; Xu, Jingjing; Cox, Robyn M

    2016-01-01

    Modern hearing aid (HA) devices include a collection of acoustic signal-processing features designed to improve listening outcomes in a variety of daily auditory environments. Manufacturers market these features at successive levels of technological sophistication. The features included in costlier premium hearing devices are designed to result in further improvements to daily listening outcomes compared with the features included in basic hearing devices. However, independent research has not substantiated such improvements. This research was designed to explore differences in speech-understanding and listening-effort outcomes for older adults using premium-feature and basic-feature HAs in their daily lives. For this participant-blinded, repeated, crossover trial 45 older adults (mean age 70.3 years) with mild-to-moderate sensorineural hearing loss wore each of four pairs of bilaterally fitted HAs for 1 month. HAs were premium- and basic-feature devices from two major brands. After each 1-month trial, participants' speech-understanding and listening-effort outcomes were evaluated in the laboratory and in daily life. Three types of speech-understanding and listening-effort data were collected: measures of laboratory performance, responses to standardized self-report questionnaires, and participant diary entries about daily communication. The only statistically significant superiority for the premium-feature HAs occurred for listening effort in the loud laboratory condition and was demonstrated for only one of the tested brands. The predominant complaint of older adults with mild-to-moderate hearing impairment is difficulty understanding speech in various settings. The combined results of all the outcome measures used in this research suggest that, when fitted using scientifically based practices, both premium- and basic-feature HAs are capable of providing considerable, but essentially equivalent, improvements to speech understanding and listening effort in daily life for this population. For HA providers to make evidence-based recommendations to their clientele with hearing impairment it is essential that further independent research investigates the relative benefit/deficit of different levels of hearing technology across brands and manufacturers in these and other real-world listening domains.

  15. Validation of the Common Objects Token (COT) Test for Children with Cochlear Implants

    ERIC Educational Resources Information Center

    Anderson, Ilona; Martin, Jane; Costa, Anne; Jamieson, Lyn; Bailey, Elspeth; Plant, Geoff; Pitterl, Markus

    2005-01-01

    Changes in selection criteria have meant that children are being provided with cochlear implants (CI) at increasingly younger ages. However, there is a paucity of measures that are appropriate for testing complex listening skills--most tests are too cognitively complex for such young children. The Common Objects Token (COT) Test was developed as a…

  16. The Feasibility and Acceptability of LISTEN for Loneliness

    PubMed Central

    Theeke, Laurie A.; Mallow, Jennifer A.; Barnes, Emily R.; Theeke, Elliott

    2015-01-01

    Purpose The purpose of this paper is to present the initial feasibility and acceptability of LISTEN (Loneliness Intervention using Story Theory to Enhance Nursing-sensitive outcomes), a new intervention for loneliness. Loneliness is a significant stressor and known contributor to multiple chronic health conditions in varied populations. In addition, loneliness is reported as predictive of functional decline and mortality in large samples of older adults from multiple cultures. Currently, there are no standard therapies recommended as effective treatments for loneliness. The paucity of interventions has limited the ability of healthcare providers to translate what we know about the problem of loneliness into active planning of clinical care that results in diminished loneliness. LISTEN was developed using the process for complex intervention development suggested by the Medical Research Council (MRC) [1, 2]. Methods Feasibility and acceptability of LISTEN were evaluated as the first objective of a longitudinal randomized trial set in a university-based family medicine center in a rural southeastern community in Appalachia. Twenty-seven older adults (24 women and 3 men; mean age: 75 years [SD 7.50]) who were lonely, community-dwelling, and experiencing chronic illness participated. Feasibility was evaluated by tracking recruitment efforts, enrollment, attendance at intervention sessions, and attrition, and with feedback evaluations from study personnel. Acceptability was assessed using quantitative and qualitative evaluation data from participants. Results LISTEN was evaluated as feasible to deliver, with no attrition and near-perfect attendance. Participants ranked LISTEN as highly acceptable for diminishing loneliness, with many requesting a continuation of the program or the development of additional sessions. Conclusions LISTEN is feasible to deliver in a primary healthcare setting and has the potential to diminish loneliness, which could improve the known long-term negative sequelae of loneliness such as hypertension, depression, functional decline, and mortality. Feedback from study participants is being used to inform future trials of LISTEN, with consideration for developing additional sessions. Longitudinal randomized trials are needed in varied populations to assess long-term health and healthcare-system benefits of diminishing loneliness, and to assess the potential scalability of LISTEN as a reimbursable treatment for loneliness. PMID:26401420

  17. The Composer's Program Note for Newly Written Classical Music: Content and Intentions.

    PubMed

    Blom, Diana M; Bennett, Dawn; Stevenson, Ian

    2016-01-01

    In concerts of western classical music, the provision of a program note is a widespread practice dating back to the 18th century and still commonly in use. Program notes tend to inform listeners and performers about historical context, composer biographical details, and compositional thinking. However, the scant program note research conducted to date reveals that program notes may not foster understanding or enhance listener enjoyment as previously assumed. In the case of canonic works, performers and listeners may already be familiar with much of the program note information. This is not so in the case of newly composed works, which formed the basis of the exploratory study reported here. This article reports the views of 17 living contemporary composers on their writing of program notes for their own works. In particular, the study sought to understand the intended recipient, the role, and the content of composer-written program notes. Participating composers identified three main roles for their program notes: to shape a performer's interpretation of the work; to guide, engage or direct the listener and/or performer; and as a collaborative mode of communication between the composer, performer, and listener. For some composers, this collaboration was intended to result in "performative listening" in which listeners were actively engaged in bringing each composition to life. This was also described as a form of empathy that results in the co-construction of the musical experience. Overall, composers avoided giving too much personal information and they provided performers with more structural information. However, composers did not agree on whether the same information should be provided to both performers and listeners. Composers' responses problematize the view of a program note as a simple statement from writer to recipient, indicating instead a more complex set of relations at play between composer, performer, listener, and the work itself. These relations are illustrated in a model. There are implications for program note writers and readers, and for educators. Future research might seek to enhance understanding of program notes, including whether the written program note is the most effective format for communications about music.

  18. Harmonic Structure Predicts the Enjoyment of Uplifting Trance Music.

    PubMed

    Agres, Kat; Herremans, Dorien; Bigo, Louis; Conklin, Darrell

    2016-01-01

    An empirical investigation of how local harmonic structures (e.g., chord progressions) contribute to the experience and enjoyment of uplifting trance (UT) music is presented. The connection between rhythmic and percussive elements and resulting trance-like states has been highlighted by musicologists, but no research, to our knowledge, has explored whether repeated harmonic elements influence affective responses in listeners of trance music. Two alternative hypotheses are discussed, the first highlighting the direct relationship between repetition/complexity and enjoyment, and the second based on the theoretical inverted-U relationship described by the Wundt curve. We investigate the connection between harmonic structure and subjective enjoyment through interdisciplinary behavioral and computational methods: First we discuss an experiment in which listeners provided enjoyment ratings for computer-generated UT anthems with varying levels of harmonic repetition and complexity. The anthems were generated using a statistical model trained on a corpus of 100 uplifting trance anthems created for this purpose, and harmonic structure was constrained by imposing particular repetition structures (semiotic patterns defining the order of chords in the sequence) on a professional UT music production template. Second, the relationship between harmonic structure and enjoyment is further explored using two computational approaches, one based on average Information Content, and another that measures average tonal tension between chords. The results of the listening experiment indicate that harmonic repetition does in fact contribute to the enjoyment of uplifting trance music. More compelling evidence was found for the second hypothesis discussed above; however, some maximally repetitive structures were also preferred. Both computational models provide evidence for a Wundt-type relationship between complexity and enjoyment. By systematically manipulating the structure of chord progressions, we have discovered specific harmonic contexts in which repetitive or complex structures contribute to the enjoyment of uplifting trance music.
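
    For readers unfamiliar with the "average Information Content" measure mentioned above, the sketch below estimates it for a chord sequence under a simple bigram model with add-alpha smoothing trained on a corpus of chord sequences; the study's actual model is more sophisticated, so treat this purely as an illustration of the idea.

        import numpy as np
        from collections import Counter

        def avg_information_content(sequence, corpus, alpha=1.0):
            """Mean information content, -log2 P(chord | previous chord),
            of `sequence` under a smoothed bigram model trained on `corpus`
            (a list of chord-symbol lists)."""
            vocab = {c for seq in corpus for c in seq} | set(sequence)
            bigrams, unigrams = Counter(), Counter()
            for seq in corpus:
                for a, b in zip(seq, seq[1:]):
                    bigrams[(a, b)] += 1
                    unigrams[a] += 1
            ics = [-np.log2((bigrams[(a, b)] + alpha) /
                            (unigrams[a] + alpha * len(vocab)))
                   for a, b in zip(sequence, sequence[1:])]
            return float(np.mean(ics))

        # A Wundt-type (inverted-U) relationship can then be checked by fitting
        # enjoyment ~ b0 + b1*IC + b2*IC**2 and testing for b2 < 0.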

  19. Harmonic Structure Predicts the Enjoyment of Uplifting Trance Music

    PubMed Central

    Agres, Kat; Herremans, Dorien; Bigo, Louis; Conklin, Darrell

    2017-01-01

    An empirical investigation of how local harmonic structures (e.g., chord progressions) contribute to the experience and enjoyment of uplifting trance (UT) music is presented. The connection between rhythmic and percussive elements and resulting trance-like states has been highlighted by musicologists, but no research, to our knowledge, has explored whether repeated harmonic elements influence affective responses in listeners of trance music. Two alternative hypotheses are discussed, the first highlighting the direct relationship between repetition/complexity and enjoyment, and the second based on the theoretical inverted-U relationship described by the Wundt curve. We investigate the connection between harmonic structure and subjective enjoyment through interdisciplinary behavioral and computational methods: First we discuss an experiment in which listeners provided enjoyment ratings for computer-generated UT anthems with varying levels of harmonic repetition and complexity. The anthems were generated using a statistical model trained on a corpus of 100 uplifting trance anthems created for this purpose, and harmonic structure was constrained by imposing particular repetition structures (semiotic patterns defining the order of chords in the sequence) on a professional UT music production template. Second, the relationship between harmonic structure and enjoyment is further explored using two computational approaches, one based on average Information Content, and another that measures average tonal tension between chords. The results of the listening experiment indicate that harmonic repetition does in fact contribute to the enjoyment of uplifting trance music. More compelling evidence was found for the second hypothesis discussed above; however, some maximally repetitive structures were also preferred. Both computational models provide evidence for a Wundt-type relationship between complexity and enjoyment. By systematically manipulating the structure of chord progressions, we have discovered specific harmonic contexts in which repetitive or complex structures contribute to the enjoyment of uplifting trance music. PMID:28119641

  20. The effect of native-language experience on the sensory-obligatory components, the P1–N1–P2 and the T-complex

    PubMed Central

    Wagner, Monica; Shafer, Valerie L.; Martin, Brett; Steinschneider, Mitchell

    2013-01-01

    The influence of native-language experience on sensory-obligatory auditory-evoked potentials (AEPs) was investigated in native-English and native-Polish listeners. AEPs were recorded to the first word in nonsense word pairs, while participants performed a syllable identification task on the second word in the pairs. Nonsense words contained phoneme sequence onsets (i.e., /pt/, /pət/, /st/ and /sət/) that occur in the Polish and English languages, with the exception that /pt/ at syllable onset is an illegal phonotactic form in English. P1–N1–P2 waveforms from fronto-central electrode sites were comparable in English and Polish listeners, even though these same English participants were unable to distinguish the nonsense words having /pt/ and /pət/ onsets. The P1–N1–P2 complex indexed the temporal characteristics of the word stimuli in the same manner for both language groups. Taken together, these findings suggest that the fronto-central P1–N1–P2 complex reflects acoustic feature processing of speech and is not significantly influenced by exposure to the phoneme sequences of the native language. In contrast, the T-complex from bilateral posterior temporal sites was found to index phonological as well as acoustic feature processing of the nonsense word stimuli. An enhanced negativity for the /pt/ cluster relative to its contrast sequence (i.e., /pət/) occurred only for the Polish listeners, suggesting that neural networks within non-primary auditory cortex may be involved in early cortical phonological processing. PMID:23643857

  1. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories.

    PubMed

    Karns, Christina M; Isbell, Elif; Giuliano, Ryan J; Neville, Helen J

    2015-06-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) across five age groups: 3-5 years; 10 years; 13 years; 16 years; and young adults. Using a naturalistic dichotic listening paradigm, we characterized the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Auditory attention in childhood and adolescence: An event-related potential study of spatial selective attention to one of two simultaneous stories

    PubMed Central

    Karns, Christina M.; Isbell, Elif; Giuliano, Ryan J.; Neville, Helen J.

    2015-01-01

    Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) in human children across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults using a naturalistic dichotic listening paradigm, characterizing the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. PMID:26002721

  3. Minimalistic toy robot to analyze a scenery of speaker-listener condition in autism.

    PubMed

    Giannopulu, Irini; Montreynaud, Valérie; Watanabe, Tomio

    2016-05-01

    Atypical neural architecture causes impairment in communication capabilities and reduces the ability to represent the referential statements of other people in children with autism. In a "speaker-listener" communication scenario, we analyzed verbal and emotional expressions in neurotypical children (n = 20) and in children with autism (n = 20). The speaker was always a child, and the listener was either a human or a minimalistic robot that reacts to speech expression by nodding only. Although both groups performed the task, everything happened as if the robot allowed children with autism to encode and conceptualize the exchange within the brain and to externalize it as unconscious emotion (heart rate) and conscious verbal speech (words). Such behavior would indicate that minimalistic artificial environments such as toy robots could serve as a root of neuronal organization and reorganization, with the potential to improve brain activity.

  4. Angle-Dependent Distortions in the Perceptual Topology of Acoustic Space

    PubMed Central

    2018-01-01

    By moving sounds around the head and asking listeners to report which ones moved more, it was found that sound sources at the side of a listener must move at least twice as much as ones in front to be judged as moving the same amount. A relative expansion of space in the front and compression at the side has consequences for spatial perception of moving sounds by both static and moving listeners. An accompanying prediction that the apparent location of static sound sources ought to also be distorted agrees with previous work and suggests that this is a general perceptual phenomenon that is not limited to moving signals. A mathematical model that mimics the measured expansion of space can be used to successfully capture several previous findings in spatial auditory perception. The inverse of this function could be used alongside individualized head-related transfer functions and motion tracking to produce hyperstable virtual acoustic environments. PMID:29764312

  5. Effects of voice harmonic complexity on ERP responses to pitch-shifted auditory feedback.

    PubMed

    Behroozmand, Roozbeh; Korzyukov, Oleg; Larson, Charles R

    2011-12-01

    The present study investigated the neural mechanisms of voice pitch control for different levels of harmonic complexity in the auditory feedback. Event-related potentials (ERPs) were recorded in response to +200 cents pitch perturbations in the auditory feedback of self-produced natural human vocalizations, complex and pure tone stimuli during active vocalization and passive listening conditions. During active vocal production, ERP amplitudes were largest in response to pitch shifts in the natural voice, moderately large for non-voice complex stimuli and smallest for the pure tones. However, during passive listening, neural responses were equally large for pitch shifts in voice and non-voice complex stimuli but still larger than that for pure tones. These findings suggest that pitch change detection is facilitated for spectrally rich sounds such as natural human voice and non-voice complex stimuli compared with pure tones. Vocalization-induced increase in neural responses for voice feedback suggests that sensory processing of naturally-produced complex sounds such as human voice is enhanced by means of motor-driven mechanisms (e.g. efference copies) during vocal production. This enhancement may enable the audio-vocal system to more effectively detect and correct for vocal errors in the feedback of natural human vocalizations to maintain an intended vocal output for speaking. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  6. Informational landscapes in art, science, and evolution.

    PubMed

    Cohen, Irun R

    2006-07-01

    An informational landscape refers to an array of information related to a particular theme or function. The Internet is an example of an informational landscape designed by humans for purposes of communication. Once it exists, however, any informational landscape may be exploited to serve a new purpose. Listening Post is the name of a dynamic multimedia work of art that exploits the informational landscape of the Internet to produce a visual and auditory environment. Here, I use Listening Post as a prototypic example for considering the creative role of informational landscapes in the processes that beget evolution and science.

  7. Turn Off the Music! Music Impairs Visual Associative Memory Performance in Older Adults.

    PubMed

    Reaves, Sarah; Graham, Brittany; Grahn, Jessica; Rabannifard, Parissa; Duarte, Audrey

    2016-06-01

    Whether we are explicitly listening to it or not, music is prevalent in our environment. Surprisingly, little is known about the effect of environmental music on concurrent cognitive functioning and whether young and older adults are differentially affected by music. Here, we investigated the impact of background music on a concurrent paired associate learning task in healthy young and older adults. Young and older adults listened to music or to silence while simultaneously studying face-name pairs. Participants' memory for the pairs was then tested while listening to either the same or different music. Participants also made subjective ratings about how distracting they found each song to be. Despite the fact that all participants rated music as more distracting to their performance than silence, only older adults' associative memory performance was impaired by music. These results are most consistent with the theory that older adults' failure to inhibit processing of distracting task-irrelevant information, in this case background music, contributes to their memory impairments. These data have important practical implications for older adults' ability to perform cognitively demanding tasks even in what many consider to be an unobtrusive environment. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Temporally selective attention modulates early perceptual processing: event-related potential evidence.

    PubMed

    Sanders, Lisa D; Astheimer, Lori B

    2008-05-01

    Some of the most important information we encounter changes so rapidly that our perceptual systems cannot process all of it in detail. Spatially selective attention is critical for perception when more information than can be processed in detail is presented simultaneously at distinct locations. When presented with complex, rapidly changing information, listeners may need to selectively attend to specific times rather than to locations. We present evidence that listeners can direct selective attention to time points that differ by as little as 500 msec, and that doing so improves target detection, affects baseline neural activity preceding stimulus presentation, and modulates auditory evoked potentials at a perceptually early stage. These data demonstrate that attentional modulation of early perceptual processing is temporally precise and that listeners can flexibly allocate temporally selective attention over short intervals, making it a viable mechanism for preferentially processing the most relevant segments in rapidly changing streams.

  9. Toward a Nonspeech Test of Auditory Cognition: Semantic Context Effects in Environmental Sound Identification in Adults of Varying Age and Hearing Abilities

    PubMed Central

    Sheft, Stanley; Norris, Molly; Spanos, George; Radasevich, Katherine; Formsma, Paige; Gygi, Brian

    2016-01-01

    Objective Sounds in everyday environments tend to follow one another as events unfold over time. The tacit knowledge of contextual relationships among environmental sounds can influence their perception. We examined the effect of semantic context on the identification of sequences of environmental sounds by adults of varying age and hearing abilities, with an aim to develop a nonspeech test of auditory cognition. Method The familiar environmental sound test (FEST) consisted of 25 individual sounds arranged into ten five-sound sequences: five contextually coherent and five incoherent. After hearing each sequence, listeners identified each sound and arranged them in the presentation order. FEST was administered to young normal-hearing, middle-to-older normal-hearing, and middle-to-older hearing-impaired adults (Experiment 1), and to postlingual cochlear-implant users and young normal-hearing adults tested through vocoder-simulated implants (Experiment 2). Results FEST scores revealed a strong positive effect of semantic context in all listener groups, with young normal-hearing listeners outperforming other groups. FEST scores also correlated with other measures of cognitive ability, and for CI users, with the intelligibility of speech-in-noise. Conclusions Being sensitive to semantic context effects, FEST can serve as a nonspeech test of auditory cognition for diverse listener populations to assess and potentially improve everyday listening skills. PMID:27893791

  10. The development of a modified spectral ripple test.

    PubMed

    Aronoff, Justin M; Landsberger, David M

    2013-08-01

    Poor spectral resolution can be a limiting factor for hearing impaired listeners, particularly for complex listening tasks such as speech understanding in noise. Spectral ripple tests are commonly used to measure spectral resolution, but these tests contain a number of potential confounds that can make interpretation of the results difficult. To measure spectral resolution while avoiding those confounds, a modified spectral ripple test with dynamically changing ripples was created, referred to as the spectral-temporally modulated ripple test (SMRT). This paper describes the SMRT and provides evidence that it is sensitive to changes in spectral resolution.
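
    The SMRT itself uses dynamically drifting ripples; the simpler static case sketched below illustrates the underlying stimulus construction, a bank of log-spaced tones whose levels follow a sinusoid on a log-frequency axis. All parameter values here are illustrative assumptions, not the published test's settings.

        import numpy as np

        def static_ripple_noise(dur=0.5, fs=44100, f_lo=100.0, f_hi=10000.0,
                                ripples_per_octave=2.0, depth_db=20.0,
                                phase=0.0, n_comp=400):
            """Static spectrally rippled noise built from random-phase tones."""
            t = np.arange(int(dur * fs)) / fs
            freqs = np.geomspace(f_lo, f_hi, n_comp)
            # sinusoidal spectral envelope (in dB) across log2(frequency)
            env_db = (depth_db / 2) * np.sin(
                2 * np.pi * ripples_per_octave * np.log2(freqs / f_lo) + phase)
            amps = 10 ** (env_db / 20)
            phis = np.random.uniform(0, 2 * np.pi, n_comp)
            sig = (amps[:, None] *
                   np.sin(2 * np.pi * freqs[:, None] * t + phis[:, None])).sum(0)
            return sig / np.abs(sig).max()  # normalized waveform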

  11. Negative Effect of Acoustic Panels on Listening Effort in a Classroom Environment.

    PubMed

    Amlani, Amyn M; Russo, Timothy A

    Acoustic panels are used to lessen the pervasive effects of noise and reverberation on speech understanding in a classroom environment. These panels, however, predominately absorb high-frequency energy important to speech understanding. Therefore, a classroom environment treated with acoustic panels might negatively influence the transmission of the target signal, resulting in an increase in listening effort exerted by the listener. Acoustic panels were installed in a public school environment that did not meet the ANSI-recommended guidelines for classroom design. We assessed the modifications to the acoustic climate by quantifying the effect of (1) acoustic panel (i.e., without, with) on the transmission of a standardized target signal at different seat positions (i.e., A-D) using the Speech Transmission Index (STI) and (2) acoustic panel and seat position on listening-effort performance in a group of third-grade students having normal-hearing sensitivity using a dual-task paradigm. STI measurements are described qualitatively. We used a repeated-measures randomized design to assess listening-effort performance for monosyllabic words in a primary task and digit recall in a secondary task, with acoustic panel and seat position as independent variables. Twenty-seven third-grade students (12 males, 15 females), ranging in age from 8.3 to 9.4 yr (mean = 8.7 yr, standard deviation = 0.7), participated in this study. STI measurements were performed under both testing conditions (i.e., panel and seat location). For the primary task of the dual-task paradigm, participants heard a ten-item list of monosyllabic words recorded through a manikin in the classroom environment, without and with acoustic panels and at different seat positions. Participants were asked to repeat each word exactly as it was heard. During the secondary task, participants were shown a single, random string of five digits before the presentation of the monosyllabic words. After each list in the primary task was completed, participants were asked to recall the string of five digits verbatim. Word-recognition and digit-recall performance decreased in the presence of acoustic panels and as the distance from the target signal to a given seat location increased. The results were validated using the STI, as indicated by a decrease in the transmission of the target signal in the presence of acoustic panels and as the distance to a given seat location increased. The inclusion of acoustic panels reduced the negative effects of noise and reverberation in a classroom environment, resulting in an acoustic climate that complied with the ANSI-recommended guidelines for classroom design. Results, however, revealed that participants required an increased amount of mental effort when the classroom was modified with acoustic treatment compared to no acoustic treatment. Independent of acoustic treatment, mental effort was greatest at seat locations beyond the critical distance (CD). With the addition of acoustic panels, mental effort increased significantly at seat locations beyond the CD compared to the unmodified room condition. Overall, results indicate that increasing the distance between the teacher and child has a detrimental impact on mental effort and, ultimately, academic performance. American Academy of Audiology
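
    One conventional way to collapse dual-task results like these into a single listening-effort index is a proportional dual-task cost, sketched below; the study reports recall performance directly, so this index is an illustrative assumption rather than the authors' analysis.

        def dual_task_cost(single_task_recall, dual_task_recall):
            """Proportional drop in secondary-task (digit recall) performance
            when it must share resources with word recognition; a larger
            cost is read as greater listening effort."""
            return (single_task_recall - dual_task_recall) / single_task_recall

        # e.g. digit recall of 0.95 alone vs 0.78 during the listening task:
        # dual_task_cost(0.95, 0.78) -> ~0.18, i.e. an 18% dual-task cost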

  12. Happy creativity: Listening to happy music facilitates divergent thinking.

    PubMed

    Ritter, Simone M; Ferguson, Sam

    2017-01-01

    Creativity can be considered one of the key competencies for the twenty-first century. It provides us with the capacity to deal with the opportunities and challenges that are part of our complex and fast-changing world. The question as to what facilitates creative cognition (the ability to come up with creative ideas, problem solutions, and products) is as old as the human sciences, and various means to enhance creative cognition have been studied. Despite earlier scientific studies demonstrating a beneficial effect of music on cognition, the effect of music listening on creative cognition has remained largely unexplored. The current study experimentally tests whether listening to specific types of music (four classical music excerpts systematically varying in valence and arousal), as compared to a silence control condition, facilitates divergent and convergent creativity. Creativity was higher for participants who listened to 'happy music' (i.e., classical music high in arousal and positive mood) while performing the divergent creativity task than for participants who performed the task in silence. No effect of music was found for convergent creativity. In addition to the scientific contribution, the current findings may have important practical implications. Music listening can be easily integrated into daily life and may provide an innovative means to facilitate creative cognition in an efficient way in various scientific, educational, and organizational settings when creative thinking is needed.

  13. Happy creativity: Listening to happy music facilitates divergent thinking

    PubMed Central

    Ritter, Simone M.; Ferguson, Sam

    2017-01-01

    Creativity can be considered one of the key competencies for the twenty-first century. It provides us with the capacity to deal with the opportunities and challenges that are part of our complex and fast-changing world. The question as to what facilitates creative cognition—the ability to come up with creative ideas, problem solutions and products—is as old as the human sciences, and various means to enhance creative cognition have been studied. Despite earlier scientific studies demonstrating a beneficial effect of music on cognition, the effect of music listening on creative cognition has remained largely unexplored. The current study experimentally tests whether listening to specific types of music (four classical music excerpts systematically varying in valence and arousal), as compared to a silence control condition, facilitates divergent and convergent creativity. Creativity was higher for participants who listened to ‘happy music’ (i.e., classical music high on arousal and positive mood) while performing the divergent creativity task, than for participants who performed the task in silence. No effect of music was found for convergent creativity. In addition to the scientific contribution, the current findings may have important practical implications. Music listening can be easily integrated into daily life and may provide an innovative means to facilitate creative cognition in an efficient way in various scientific, educational and organizational settings when creative thinking is needed. PMID:28877176

  14. Thalamic and parietal brain morphology predicts auditory category learning.

    PubMed

    Scharinger, Mathias; Henry, Molly J; Erb, Julia; Meyer, Lars; Obleser, Jonas

    2014-01-01

    Auditory categorization is a vital skill involving the attribution of meaning to acoustic events, engaging domain-specific (i.e., auditory) as well as domain-general (e.g., executive) brain networks. A listener's ability to categorize novel acoustic stimuli should therefore depend on both, with the domain-general network being particularly relevant for adaptively changing listening strategies and directing attention to relevant acoustic cues. Here we assessed adaptive listening behavior, using complex acoustic stimuli with an initially salient (but later degraded) spectral cue and a secondary, duration cue that remained nondegraded. We employed voxel-based morphometry (VBM) to identify cortical and subcortical brain structures whose individual neuroanatomy predicted task performance and the ability to optimally switch to making use of temporal cues after spectral degradation. Behavioral listening strategies were assessed by logistic regression and revealed mainly strategy switches in the expected direction, with considerable individual differences. Gray-matter probability in the left inferior parietal lobule (BA 40) and left precentral gyrus was predictive of "optimal" strategy switch, while gray-matter probability in thalamic areas, comprising the medial geniculate body, co-varied with overall performance. Taken together, our findings suggest that successful auditory categorization relies on domain-specific neural circuits in the ascending auditory pathway, while adaptive listening behavior depends more on brain structure in parietal cortex, enabling the (re)direction of attention to salient stimulus properties. © 2013 Published by Elsevier Ltd.
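
    As a sketch of how logistic regression can quantify listening strategy in a task like this, the toy example below fits trial-by-trial category responses to a spectral cue and a duration cue and reports the relative weight placed on the duration cue; all data and variable names are hypothetical.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        spectral = rng.normal(size=200)   # per-trial spectral cue value
        duration = rng.normal(size=200)   # per-trial duration cue value
        # simulate a listener who leans mostly on the duration cue
        y = (0.2 * spectral + 1.5 * duration +
             rng.normal(scale=0.5, size=200)) > 0

        X = np.column_stack([spectral, duration])
        model = LogisticRegression().fit(X, y.astype(int))

        w = np.abs(model.coef_[0])
        print(w[1] / w.sum())  # near 1.0 = strong reliance on the temporal cue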

  15. Can Music and Animation Improve the Flow and Attainment in Online Learning?

    ERIC Educational Resources Information Center

    Grice, Sue; Hughes, Janet

    2009-01-01

    Despite the wide use of music in various areas of society to influence listeners in different ways, one area often neglected is the use of music within online learning environments. This paper describes a study of the effects of music and animation upon learners in a computer mediated environment. A test was developed in which each learner was…

  16. Effect of minimal/mild hearing loss on children's speech understanding in a simulated classroom.

    PubMed

    Lewis, Dawna E; Valente, Daniel L; Spalding, Jody L

    2015-01-01

    While classroom acoustics can affect educational performance for all students, the impact for children with minimal/mild hearing loss (MMHL) may be greater than for children with normal hearing (NH). The purpose of this study was to examine the effect of MMHL on children's speech recognition, comprehension, and looking behavior in a simulated classroom environment. It was hypothesized that children with MMHL would perform similarly to their peers with NH on the speech recognition task but would perform more poorly on the comprehension task. Children with MMHL also were expected to look toward talkers more often than children with NH. Eighteen children with MMHL and 18 age-matched children with NH participated. In a simulated classroom environment, children listened to lines from an elementary-age-appropriate play read by a teacher and four students, reproduced over LCD monitors and loudspeakers located around the listener. A gyroscopic head-tracking device was used to monitor looking behavior during the task. At the end of the play, comprehension was assessed by asking a series of 18 factual questions. Children also were asked to repeat 50 meaningful sentences containing three key words each, presented audio-only by a single talker either from the loudspeaker at 0 degrees azimuth or randomly from the five loudspeakers. Both children with NH and those with MMHL performed at or near ceiling on the sentence recognition task. For the comprehension task, children with MMHL performed more poorly than those with NH. Assessment of looking behavior indicated that both groups of children looked at talkers while they were speaking less than 50% of the time. In addition, the pattern of overall looking behaviors suggested that, compared with older children with NH, a larger portion of older children with MMHL may demonstrate looking behaviors similar to those of younger children with or without MMHL. The results of this study demonstrate that, under realistic acoustic conditions, it is difficult to differentiate performance among children with MMHL and children with NH using a sentence recognition task. The more cognitively demanding comprehension task identified performance differences between these two groups. The comprehension task represented a condition in which the persons talking change rapidly and are not readily visible to the listener. Examination of looking behavior suggested that, in this complex task, attempting to visualize the talker may inefficiently utilize cognitive resources that would otherwise be allocated for comprehension.

  17. Storytelling: An Underused Teaching Aid.

    ERIC Educational Resources Information Center

    Brazeau, Martin

    1985-01-01

    Describes ways to integrate storytelling into outdoor education programs. Discusses use of storytelling to teach history, culture, concepts, or values; stimulate imagination; learn new words; set a mood; encourage listener participation; and foster caring attitudes about the environment. (LFL)

  18. Hearing, listening, action: Enhancing nursing practice through aural awareness education.

    PubMed

    Collins, Anita; Vanderheide, Rebecca; McKenna, Lisa

    2014-01-01

    Noise overload within the clinical environment has been found to interfere with the healing process for patients, as well as with nurses' ability to assess patients effectively. Awareness of and responsibility for noise production begin during initial nursing training, and consequently a program to enhance aural awareness skills was designed for graduate-entry nursing students in an Australian university. The program utilized an innovative combination of music education activities to develop the students' ability to distinguish individual sounds (hearing), appreciate patients' experience of sounds (listening), and improve their auscultation skills and reduce the negative effects of noise on patients (action). Using a mixed methods approach, students reported heightened auscultation skills and greater recognition of both patients' and clinicians' aural overload. Results of this pilot suggest that music education activities can assist nursing students to develop their aural awareness and to action changes within the clinical environment to improve the patient's experience of noise.

  19. Hearing, Listening, Action: Enhancing nursing practice through aural awareness education.

    PubMed

    Collins, Anita; Vanderheide, Rebecca; McKenna, Lisa

    2014-03-29

    Noise overload within the clinical environment has been found to interfere with the healing process for patients, as well as with nurses' ability to effectively assess patients. Awareness of and responsibility for noise production begin during initial nursing training, and consequently a program to enhance aural awareness skills was designed for graduate-entry nursing students in an Australian university. The program utilised an innovative combination of music education activities to develop the students' ability to distinguish individual sounds (hearing), appreciate patients' experience of sounds (listening), and improve their auscultation skills and reduce the negative effects of noise on patients (action). Using a mixed methods approach, students reported heightened auscultation skills and greater recognition of both patients' and clinicians' aural overload. Results of this pilot suggest that music education activities can assist nursing students to develop their aural awareness and to action changes within the clinical environment to improve the patient's experience of noise.

  20. Age effects on discrimination of timing in auditory sequences

    NASA Astrophysics Data System (ADS)

    Fitzgibbons, Peter J.; Gordon-Salant, Sandra

    2004-08-01

    The experiments examined age-related changes in temporal sensitivity to increments in the interonset intervals (IOI) of components in tonal sequences. Discrimination was examined using reference sequences consisting of five 50-ms tones separated by silent intervals; tone frequencies were either fixed at 4 kHz or varied within a 2-4-kHz range to produce spectrally complex patterns. The tonal IOIs within the reference sequences were either equal (200 or 600 ms) or varied individually with an average value of 200 or 600 ms to produce temporally complex patterns. The difference limen (DL) for increments of IOI was measured. Comparison sequences featured either equal increments in all tonal IOIs or increments in a single target IOI, with the sequential location of the target changing randomly across trials. Four groups of younger and older adults with and without sensorineural hearing loss participated. Results indicated that DLs for uniform changes of sequence rate were smaller than DLs for single target intervals, with the largest DLs observed for single targets embedded within temporally complex sequences. Older listeners performed more poorly than younger listeners in all conditions, but the largest age-related differences were observed for temporally complex stimulus conditions. No systematic effects of hearing loss were observed.

  1. Seizures induced by music.

    PubMed

    Ogunyemi, A O; Breen, H

    1993-01-01

    Musicogenic epilepsy is a rare disorder. Much remains to be learned about the electroclinical features. This report describes a patient who has been followed at our institution for 17 years, and was investigated with long-term telemetered simultaneous video-EEG recordings. She began to have seizures at the age of 10 years. She experienced complex partial seizures, often preceded by elementary auditory hallucination and complex auditory illusion. The seizures occurred in relation to singing, listening to music or thinking about music. She also had occasional generalized tonic-clonic seizures during sleep. There was no significant antecedent history. The family history was negative for epilepsy. The physical examination was unremarkable. CT and MRI scans of the brain were normal. During long-term simultaneous video-EEG recordings, clinical and electrographic seizure activities were recorded in association with singing and listening to music. Mathematical calculation, copying or viewing geometric patterns and playing the game of chess failed to evoke seizures.

  2. Others' anger makes people work harder not smarter: the effect of observing anger and sarcasm on creative and analytic thinking.

    PubMed

    Miron-Spektor, Ella; Efrat-Treister, Dorit; Rafaeli, Anat; Schwarz-Cohen, Orit

    2011-09-01

    The authors examine whether and how observing anger influences thinking processes and problem-solving ability. In 3 studies, the authors show that participants who listened to an angry customer were more successful in solving analytic problems, but less successful in solving creative problems compared with participants who listened to an emotionally neutral customer. In Studies 2 and 3, the authors further show that observing anger communicated through sarcasm enhances complex thinking and solving of creative problems. Prevention orientation is argued to be the latent variable that mediated the effect of observing anger on complex thinking. The present findings help reconcile inconsistent findings in previous research, promote theory about the effects of observing anger and sarcasm, and contribute to understanding the effects of anger in the workplace. PsycINFO Database Record (c) 2011 APA, all rights reserved

  3. Telling in-tune from out-of-tune: widespread evidence for implicit absolute intonation.

    PubMed

    Van Hedger, Stephen C; Heald, Shannon L M; Huang, Alex; Rutstein, Brooke; Nusbaum, Howard C

    2017-04-01

    Absolute pitch (AP) is the rare ability to name or produce an isolated musical note without the aid of a reference note. One skill thought to be unique to AP possessors is the ability to provide absolute intonation judgments (e.g., classifying an isolated note as "in-tune" or "out-of-tune"). Recent work has suggested that absolute intonation perception among AP possessors is not crystallized in a critical period of development, but is dynamically maintained by the listening environment, in which the vast majority of Western music is tuned to a specific cultural standard. Given that all listeners of Western music are constantly exposed to this specific cultural tuning standard, our experiments address whether absolute intonation perception extends beyond AP possessors. We demonstrate that non-AP listeners are able to accurately judge the intonation of completely isolated notes. Both musicians and nonmusicians showed evidence for absolute intonation recognition when listening to familiar timbres (piano and violin). When testing unfamiliar timbres (triangle and inverted sine waves), only musicians showed weak evidence of absolute intonation recognition (Experiment 2). Overall, these results highlight a previously unknown similarity between AP and non-AP possessors' long-term musical note representations, including evidence of sensitivity to frequency.

  4. Deep listening: towards an imaginative reframing of health and well-being practices in international development

    PubMed Central

    Pavlicevic, Mercédès; Impey, Angela

    2014-01-01

    This paper challenges the "intervention-as-solution" approach to health and well-being as commonly practised in the international development sector, and draws on the disciplinary intersections between Community Music Therapy and ethnomusicology in seeking a more negotiated and situationally apposite framework for health engagement. Drawing inspiration from music-based health applications in conflict or post-conflict environments in particular, and focusing on case studies from Lebanon and South Sudan respectively, the paper argues for a re-imagined international development health and well-being framework based on the concept of deep listening. Taking its cue from composer Pauline Oliveros's definition of deep listening as listening which "digs below the surface of what is heard … unlocking layer after layer of imagination, meaning, and memory down to the cellular level of human experience" (Oliveros, 2005), the paper explores the methodological applications of such a dialogic, discursive approach with reference to a range of related listening stances – cultural, social and therapeutic. In so doing, it explores opportunities for multi-levelled and culturally inclusive health and well-being practices relevant to different localities in the world and aimed at the re-integration of self, place and community. PMID:25729413

  5. The effect of voice quality and competing speakers in a passage comprehension task: performance in relation to cognitive functioning in children with normal hearing.

    PubMed

    von Lochow, Heike; Lyberg-Åhlander, Viveka; Sahlén, Birgitta; Kastberg, Tobias; Brännström, K Jonas

    2018-04-01

    This study explores the effect of voice quality and competing speaker(s) on children's performance in a passage comprehension task. Furthermore, it explores the interaction between passage comprehension and cognitive functioning. Forty-nine children (27 girls and 22 boys) with normal hearing (aged 7-12 years) participated. Passage comprehension was tested in six different listening conditions: a typical (non-dysphonic) voice in quiet, a typical voice with one competing speaker, a typical voice with four competing speakers, a dysphonic voice in quiet, a dysphonic voice with one competing speaker, and a dysphonic voice with four competing speakers. The children's working memory capacity and executive functioning were also assessed. The findings indicate no direct effect of voice quality on the children's performance, but a significant effect of background listening condition. Interaction effects were seen between voice quality, background listening condition, and executive functioning. The children's susceptibility to the effects of the dysphonic voice and the background listening conditions is related to the individual's executive functions. The findings have several implications for the design of interventions in language-learning environments such as classrooms.

  6. Familiarity Affects Entrainment of EEG in Music Listening.

    PubMed

    Kumagai, Yuiko; Arvaneh, Mahnaz; Tanaka, Toshihisa

    2017-01-01

    Music perception involves complex brain functions. Relationships between music and the brain, such as cortical entrainment to periodic tunes, periodic beats, and music itself, have been well investigated. It has also been reported that the cerebral cortex responds more strongly to the periodic rhythm of unfamiliar music than to that of familiar music. However, previous work mainly used simple and artificial auditory stimuli such as pure tones or beeps. It is still unclear how the brain response is influenced by the familiarity of music. To address this issue, we analyzed electroencephalogram (EEG) recordings to investigate the relationship between cortical response and familiarity of music, using melodies produced by piano sounds as simple natural stimuli. The cross-correlation function averaged across trials, channels, and participants showed two pronounced peaks at time lags around 70 and 140 ms. At the two peaks, the magnitude of the cross-correlation was significantly larger when listening to unfamiliar and scrambled music than when listening to familiar music. Our findings suggest that the response to unfamiliar music is stronger than that to familiar music. One potential application of our findings would be the discrimination of listeners' familiarity with music, which provides an important tool for assessment of brain activity.
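
    The core analysis described above can be approximated in a few lines: cross-correlate a normalized EEG channel with the normalized stimulus envelope at positive lags (EEG lagging the sound). This is a minimal sketch assuming equal-length, uniformly sampled signals, not the authors' exact code.

        import numpy as np

        def xcorr_eeg_stimulus(eeg, env, fs, max_lag_ms=300):
            """Normalized cross-correlation between one EEG channel and the
            stimulus amplitude envelope, for lags where the EEG follows the
            sound; returns (lags in ms, correlation values)."""
            eeg = (eeg - eeg.mean()) / eeg.std()
            env = (env - env.mean()) / env.std()
            lags = np.arange(int(max_lag_ms / 1000 * fs))
            cc = np.array([np.mean(env[:len(env) - l] * eeg[l:]) for l in lags])
            return lags / fs * 1000, cc

        # Averaging `cc` over trials, channels, and participants and locating
        # its peaks (e.g. near 70 and 140 ms) parallels the analysis above.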

  7. Processing load induced by informational masking is related to linguistic abilities.

    PubMed

    Koelewijn, Thomas; Zekveld, Adriana A; Festen, Joost M; Rönnberg, Jerker; Kramer, Sophia E

    2012-01-01

    It is often assumed that the benefit of hearing aids is not primarily reflected in better speech performance, but in less effortful listening in the aided than in the unaided condition. Before such a hearing aid benefit can be assessed, it must be established how processing load relates to listener characteristics; the present study therefore examined how processing load while listening to masked speech relates to inter-individual differences in cognitive abilities relevant for language processing. Pupil dilation was measured in thirty-two normal-hearing participants while they listened to sentences masked by fluctuating noise or interfering speech at either 50% or 84% intelligibility. Additionally, working memory capacity, inhibition of irrelevant information, and written text reception were tested. Pupil responses were larger during interfering speech than during fluctuating noise. This effect was independent of intelligibility level. Regression analysis revealed that high working memory capacity, better inhibition, and better text reception were related to better speech reception thresholds. Apart from their positive relation to speech recognition, better inhibition and better text reception were also positively related to larger pupil dilation in the single-talker masker conditions. We conclude that better cognitive abilities not only relate to better speech perception, but also partly explain higher processing load in complex listening conditions.

  8. Auditory attention strategy depends on target linguistic properties and spatial configuration

    PubMed Central

    McCloy, Daniel R.; Lee, Adrian K. C.

    2015-01-01

    Whether crossing a busy intersection or attending a large dinner party, listeners sometimes need to attend to multiple spatially distributed sound sources or streams concurrently. How they achieve this is not clear—some studies suggest that listeners cannot truly simultaneously attend to separate streams, but instead combine attention switching with short-term memory to achieve something resembling divided attention. This paper presents two oddball detection experiments designed to investigate whether directing attention to phonetic versus semantic properties of the attended speech impacts listeners' ability to divide their auditory attention across spatial locations. Each experiment uses four spatially distinct streams of monosyllabic words, variation in cue type (providing phonetic or semantic information), and requiring attention to one or two locations. A rapid button-press response paradigm is employed to minimize the role of short-term memory in performing the task. Results show that differences in the spatial configuration of attended and unattended streams interact with linguistic properties of the speech streams to impact performance. Additionally, listeners may leverage phonetic information to make oddball detection judgments even when oddballs are semantically defined. Both of these effects appear to be mediated by the overall complexity of the acoustic scene. PMID:26233011

  9. Hits to the left, flops to the right: different emotions during listening to music are reflected in cortical lateralisation patterns.

    PubMed

    Altenmüller, Eckart; Schürmann, Kristian; Lim, Vanessa K; Parlitz, Dietrich

    2002-01-01

    In order to investigate the neurobiological mechanisms accompanying emotional valence judgements during listening to complex auditory stimuli, cortical direct-current (DC) electroencephalography (EEG) activation patterns were recorded from 16 right-handed students. Students listened to 160 short sequences taken from the repertoires of jazz, rock-pop, classical music and environmental sounds (each n = 40). Emotional valence of the perceived stimuli was rated on a 5-step scale after each sequence. Brain activation patterns during listening revealed widespread bilateral fronto-temporal activation, but a highly significant lateralisation effect: positive emotional attributions were accompanied by an increase in left temporal activation, negative by a more bilateral pattern with preponderance of the right fronto-temporal cortex. Female participants demonstrated greater valence-related differences than males. No differences related to the four stimulus categories could be detected, suggesting that the auditory brain activation patterns were determined more by their affective emotional valence than by differences in acoustical "fine" structure. The results are consistent with a model of hemispheric specialisation concerning perceived positive or negative emotions proposed by Heilman [Journal of Neuropsychiatry and Clinical Neuroscience 9 (1997) 439].

  10. Selective memory retrieval in social groups: When silence is golden and when it is not.

    PubMed

    Abel, Magdalena; Bäuml, Karl-Heinz T

    2015-07-01

    Previous research has shown that the selective remembering of a speaker and the resulting silences can cause forgetting of related, but unmentioned information by a listener (Cuc, Koppel, & Hirst, 2007). Guided by more recent work that demonstrated both detrimental and beneficial effects of selective memory retrieval in individuals, the present research explored the effects of selective remembering in social groups when access to the encoding context at retrieval was maintained or impaired. In each of three experiments, selective retrieval by the speaker impaired recall of the listener when access to the encoding context was maintained, but it improved recall of the listener when context access was impaired. The results suggest the existence of two faces of selective memory retrieval in social groups, with a detrimental face when the encoding context is still active at retrieval and a beneficial face when it is not. The role of silence in social recall thus seems to be more complex than was indicated in prior work, and mnemonic silences on the part of a speaker can be "golden" for the memories of a listener under some circumstances, but not be "golden" under others. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. On application of kernel PCA for generating stimulus features for fMRI during continuous music listening.

    PubMed

    Tsatsishvili, Valeri; Burunat, Iballa; Cong, Fengyu; Toiviainen, Petri; Alluri, Vinoo; Ristaniemi, Tapani

    2018-06-01

    There has been growing interest in naturalistic neuroimaging experiments, which deepen our understanding of how the human brain processes and integrates incoming streams of multifaceted sensory information, as commonly occurs in the real world. Music is a good example of such a complex continuous phenomenon. In a few recent fMRI studies examining neural correlates of music in continuous listening settings, multiple perceptual attributes of the music stimulus were represented by a set of high-level features, produced as linear combinations of acoustic descriptors computationally extracted from the stimulus audio. NEW METHOD: fMRI data from a naturalistic music listening experiment were employed here. Kernel principal component analysis (KPCA) was applied to acoustic descriptors extracted from the stimulus audio to generate a set of nonlinear stimulus features. Subsequently, perceptual and neural correlates of the generated high-level features were examined. The generated features captured musical percepts that were hidden from the linear PCA features, namely Rhythmic Complexity and Event Synchronicity. Neural correlates of the new features revealed activations associated with the processing of complex rhythms, including auditory, motor, and frontal areas. Results were compared with the findings of a previously published study, which analyzed the same fMRI data but applied linear PCA for generating stimulus features. To enable comparison of the results, the methodology for finding stimulus-driven functional maps was adopted from the previous study. Exploiting nonlinear relationships among acoustic descriptors can lead to novel high-level stimulus features, which can in turn reveal new brain structures involved in music processing. Copyright © 2018 Elsevier B.V. All rights reserved.
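
    The core of the method is a standard kernel-PCA pipeline over a frame-by-descriptor matrix. The sketch below illustrates that step under stated assumptions: the descriptor matrix, its dimensions, the RBF kernel, and the gamma value are hypothetical stand-ins, not the study's actual preprocessing or parameters.

    ```python
    # Hypothetical sketch: nonlinear stimulus features via kernel PCA.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(0)
    descriptors = rng.standard_normal((900, 25))  # stand-in: 900 time frames x 25 acoustic descriptors

    X = StandardScaler().fit_transform(descriptors)            # z-score each descriptor
    kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.1)  # nonlinear (RBF) kernel
    features = kpca.fit_transform(X)                           # high-level stimulus features

    # Each column of `features` is a candidate time course to correlate with voxel data.
    print(features.shape)  # (900, 5)
    ```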

  12. Reflections on Measuring Thinking, while Listening to Mozart's "Jupiter" Symphony.

    ERIC Educational Resources Information Center

    Wasserman, Selma

    1989-01-01

    Reflects on educators' current preoccupation with assessment of higher order thinking skills. Easy-to-mark, forced-choice, pencil-and-paper tests with single numerical scores may trivialize the wonderful complexity of human capabilities. Includes 17 references. (MLH)

  13. Effect of occlusion, directionality and age on horizontal localization

    NASA Astrophysics Data System (ADS)

    Alworth, Lynzee Nicole

    Localization acuity of a given listener depends on the ability to discriminate between interaural time and level disparities. Interaural time differences are encoded by low-frequency information, whereas interaural level differences are encoded by high-frequency information. Much research has examined the effects of hearing aid microphone technologies and occlusion separately, and prior studies have not evaluated age as a factor in localization acuity. Open-fit hearing instruments provide new earmold technologies and varying microphone capabilities; however, these instruments have yet to be evaluated with regard to horizontal localization acuity. Thus, the purpose of this study was to examine the effects of microphone configuration, type of dome in open-fit hearing instruments, and age on the horizontal localization ability of a given listener. Thirty adults participated in this study and were grouped based upon hearing sensitivity and age (young normal hearing, >50 years normal hearing, >50 years hearing impaired). Each normal-hearing participant completed one localization experiment (unaided/unamplified) in which they listened to the stimulus "Baseball" and selected its point of origin. Hearing-impaired listeners were fit with the same two receiver-in-the-ear hearing aids and the same dome types, thus controlling for microphone technology, type of dome, and fitting between trials. Hearing-impaired listeners completed a total of 7 localization experiments (unaided/unamplified; open dome: omnidirectional, adaptive directional, fixed directional; micromold: omnidirectional, adaptive directional, fixed directional). Overall, results of this study indicate that age significantly affects horizontal localization ability, as younger adult listeners with normal hearing made significantly fewer localization errors than older adult listeners with normal hearing. Results also revealed a significant difference in performance between dome types; however, this difference did not hold up under further examination, so results concerning type of dome should be viewed with caution. Results examining microphone configuration, and microphone configuration by dome type, were not significant. Moreover, results evaluating performance relative to the unaided (unamplified) condition were not significant. Taken together, these results suggest that open-fit hearing instruments, regardless of microphone or dome type, do not degrade horizontal localization acuity in quiet environments for a given listener relative to their older normal-hearing counterparts.

  14. Context-dependent plasticity in the subcortical encoding of linguistic pitch patterns

    PubMed Central

    Lau, Joseph C. Y.; Wong, Patrick C. M.

    2016-01-01

    We examined the mechanics of online experience-dependent auditory plasticity by assessing the influence of prior context on the frequency-following responses (FFRs), which reflect phase-locked responses from neural ensembles within the subcortical auditory system. FFRs were elicited to a Cantonese falling lexical pitch pattern from 24 native speakers of Cantonese in a variable context, wherein the falling pitch pattern randomly occurred in the context of two other linguistic pitch patterns; in a patterned context, wherein the falling pitch pattern was presented in a predictable sequence along with two other pitch patterns; and in a repetitive context, wherein the falling pitch pattern was presented with 100% probability. We found that neural tracking of the stimulus pitch contour was most faithful and accurate when the listening context was patterned and least faithful when the listening context was variable. The patterned context elicited more robust pitch tracking relative to the repetitive context, suggesting that context-dependent plasticity is most robust when the context is predictable but not repetitive. Our study demonstrates a robust influence of prior listening context that works to enhance online neural encoding of linguistic pitch patterns. We interpret these results as indicative of an interplay between contextual processes that are responsive to predictability as well as novelty in the presentation context. NEW & NOTEWORTHY Human auditory perception in dynamic listening environments requires fine-tuning of the sensory signal based on behaviorally relevant regularities in the listening context, i.e., online experience-dependent plasticity. Our findings suggest that such plasticity is partly underpinned by interplaying contextual processes in the subcortical auditory system that are responsive to predictability as well as novelty in the listening context. These findings add to the literature that looks to establish the neurophysiological bases of auditory system plasticity, a central issue in auditory neuroscience. PMID:27832606

  15. Context-dependent plasticity in the subcortical encoding of linguistic pitch patterns.

    PubMed

    Lau, Joseph C Y; Wong, Patrick C M; Chandrasekaran, Bharath

    2017-02-01

    We examined the mechanics of online experience-dependent auditory plasticity by assessing the influence of prior context on the frequency-following responses (FFRs), which reflect phase-locked responses from neural ensembles within the subcortical auditory system. FFRs were elicited to a Cantonese falling lexical pitch pattern from 24 native speakers of Cantonese in a variable context, wherein the falling pitch pattern randomly occurred in the context of two other linguistic pitch patterns; in a patterned context, wherein the falling pitch pattern was presented in a predictable sequence along with two other pitch patterns; and in a repetitive context, wherein the falling pitch pattern was presented with 100% probability. We found that neural tracking of the stimulus pitch contour was most faithful and accurate when the listening context was patterned and least faithful when the listening context was variable. The patterned context elicited more robust pitch tracking relative to the repetitive context, suggesting that context-dependent plasticity is most robust when the context is predictable but not repetitive. Our study demonstrates a robust influence of prior listening context that works to enhance online neural encoding of linguistic pitch patterns. We interpret these results as indicative of an interplay between contextual processes that are responsive to predictability as well as novelty in the presentation context. Human auditory perception in dynamic listening environments requires fine-tuning of the sensory signal based on behaviorally relevant regularities in the listening context, i.e., online experience-dependent plasticity. Our findings suggest that such plasticity is partly underpinned by interplaying contextual processes in the subcortical auditory system that are responsive to predictability as well as novelty in the listening context. These findings add to the literature that looks to establish the neurophysiological bases of auditory system plasticity, a central issue in auditory neuroscience. Copyright © 2017 the American Physiological Society.

  16. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior.

    PubMed

    Peelle, Jonathan E

    Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners' abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication.

  17. The Composer’s Program Note for Newly Written Classical Music: Content and Intentions

    PubMed Central

    Blom, Diana M.; Bennett, Dawn; Stevenson, Ian

    2016-01-01

    In concerts of western classical music the provision of a program note is a widespread practice dating back to the 18th century and still commonly in use. Program notes tend to inform listeners and performers about historical context, composer biographical details, and compositional thinking. However, the scant program note research conducted to date reveals that program notes may not foster understanding or enhance listener enjoyment as previously assumed. In the case of canonic works, performers and listeners may already be familiar with much of the program note information. This is not so in the case of newly composed works, which formed the basis of the exploratory study reported here. This article reports the views of 17 living contemporary composers on their writing of program notes for their own works. In particular, the study sought to understand the intended recipient, role, and content of composer-written program notes. Participating composers identified three main roles for their program notes: to shape a performer’s interpretation of the work; to guide, engage or direct the listener and/or performer; and to serve as a collaborative mode of communication between the composer, performer, and listener. For some composers, this collaboration was intended to result in “performative listening” in which listeners were actively engaged in bringing each composition to life. This was also described as a form of empathy that results in the co-construction of the musical experience. Overall, composers avoided giving too much personal information and provided performers with more structural information. However, composers did not agree on whether the same information should be provided to both performers and listeners. Composers’ responses problematize the view of a program note as a simple statement from writer to recipient, indicating instead a more complex set of relations at play between composer, performer, listener, and the work itself. These relations are illustrated in a model. There are implications for program note writers and readers, and for educators. Future research might seek to enhance understanding of program notes, including whether the written program note is the most effective format for communications about music. PMID:27881967

  18. Working memory, age, and hearing loss: susceptibility to hearing aid distortion.

    PubMed

    Arehart, Kathryn H; Souza, Pamela; Baca, Rosalinda; Kates, James M

    2013-01-01

    Hearing aids use complex processing intended to improve speech recognition. Although many listeners benefit from such processing, it can also introduce distortion that offsets or cancels intended benefits for some individuals. The purpose of the present study was to determine the effects of cognitive ability (working memory) on individual listeners' responses to distortion caused by frequency compression applied to noisy speech. The present study analyzed a large data set of intelligibility scores for frequency-compressed speech presented in quiet and at a range of signal-to-babble ratios. The intelligibility data set was based on scores from 26 adults with hearing loss with ages ranging from 62 to 92 years. The listeners were grouped based on working memory ability. The amount of signal modification (distortion) caused by frequency compression and noise was measured using a sound quality metric. Analysis of variance and hierarchical linear modeling were used to identify meaningful differences between subject groups as a function of signal distortion caused by frequency compression and noise. Working memory was a significant factor in listeners' intelligibility of sentences presented in babble noise and processed with frequency compression based on sinusoidal modeling. At maximum signal modification (caused by both frequency compression and babble noise), the factor of working memory (when controlling for age and hearing loss) accounted for 29.3% of the variance in intelligibility scores. Combining working memory, age, and hearing loss accounted for a total of 47.5% of the variability in intelligibility scores. Furthermore, as the total amount of signal distortion increased, listeners with higher working memory performed better on the intelligibility task than listeners with lower working memory did. Working memory is a significant factor in listeners' responses to total signal distortion caused by cumulative effects of babble noise and frequency compression implemented with sinusoidal modeling. These results, together with other studies focused on wide-dynamic range compression, suggest that older listeners with hearing loss and poor working memory are more susceptible to distortions caused by at least some types of hearing aid signal-processing algorithms and by noise, and that this increased susceptibility should be considered in the hearing aid fitting process.

  19. Assessing the Impact of Student Learning Style Preferences

    NASA Astrophysics Data System (ADS)

    Davis, Stacey M.; Franklin, Scott V.

    2004-09-01

    Students express a wide range of preferences for learning environments. We are trying to measure the manifestation of learning styles in various learning environments. In particular, we are interested in performance in an environment that disagrees with the expressed learning style preference, paying close attention to social (group vs. individual) and auditory (those who prefer to learn by listening) environments. These are particularly relevant to activity-based curricula which typically emphasize group-work and de-emphasize lectures. Our methods include multiple-choice assessments, individual student interviews, and a study in which we attempt to isolate the learning environment.

  20. Assessing Top-Down and Bottom-Up Contributions to Auditory Stream Segregation and Integration With Polyphonic Music

    PubMed Central

    Disbergen, Niels R.; Valente, Giancarlo; Formisano, Elia; Zatorre, Robert J.

    2018-01-01

    Polyphonic music listening well exemplifies processes typically involved in daily auditory scene analysis situations, relying on an interactive interplay between bottom-up and top-down processes. Most studies investigating scene analysis have used elementary auditory scenes; however, real-world scene analysis is far more complex. In particular, music, contrary to most other natural auditory scenes, can be perceived by either integrating or, under attentive control, segregating sound streams, often carried by different instruments. One of the prominent bottom-up cues contributing to multi-instrument music perception is their timbre difference. In this work, we introduce and validate a novel paradigm designed to investigate, within naturalistic musical auditory scenes, attentive modulation as well as its interaction with bottom-up processes. Two psychophysical experiments are described, employing custom-composed two-voice polyphonic music pieces within a framework implementing a behavioral performance metric to validate listener instructions requiring either integration or segregation of scene elements. In Experiment 1, the listeners' locus of attention was switched between individual instruments or the aggregate (i.e., both instruments together), via a task requiring the detection of temporal modulations (i.e., triplets) incorporated within or across instruments. Subjects responded post-stimulus whether triplets were present in the to-be-attended instrument(s). Experiment 2 introduced the bottom-up manipulation by adding a three-level morphing of instrument timbre distance to the attentional framework. The task was designed to be used within neuroimaging paradigms; Experiment 2 was additionally validated behaviorally in the functional Magnetic Resonance Imaging (fMRI) environment. Experiment 1 subjects (N = 29, non-musicians) completed the task at high levels of accuracy, showing no group differences between any experimental conditions. Nineteen listeners also participated in Experiment 2, showing a main effect of instrument timbre distance, although timbre-distance contrasts within attention conditions did not demonstrate any timbre effect. Correlation of overall scores with morph-distance effects, computed by subtracting the largest from the smallest timbre-distance scores, showed an influence of general task difficulty on the timbre-distance effect. Comparison of laboratory and fMRI data showed scanner noise had no adverse effect on task performance. These experimental paradigms enable the study of both bottom-up and top-down contributions to auditory stream segregation and integration within psychophysical and neuroimaging experiments. PMID:29563861

  1. Use of sonification in the detection of anomalous events

    NASA Astrophysics Data System (ADS)

    Ballora, Mark; Cole, Robert J.; Kruesi, Heidi; Greene, Herbert; Monahan, Ganesh; Hall, David L.

    2012-06-01

    In this paper, we describe the construction of a soundtrack that fuses stock market data with information taken from tweets. This soundtrack, or auditory display, presents the numerical and text data in such a way that anomalous events may be readily detected, even by untrained listeners. The soundtrack generation is flexible, allowing an individual listener to create a unique audio mix from the available information sources. Properly constructed, the display exploits the auditory system's sensitivities to periodicities, to dynamic changes, and to patterns. This type of display could be valuable in environments that demand high levels of situational awareness based on multiple sources of incoming information.
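
    The paper's soundtrack system itself is not reproduced here, but the underlying idea of parameter-mapping sonification is easy to illustrate. The sketch below maps a hypothetical numeric series to tone pitch and writes a WAV file; the series, frequency range, and tone duration are illustrative assumptions, not the authors' design.

    ```python
    # Hypothetical sketch of parameter-mapping sonification: data values -> pitch.
    import numpy as np
    import wave

    values = np.array([101.2, 101.5, 100.9, 103.8, 104.1, 99.2, 98.7])  # stand-in data series
    lo, hi = values.min(), values.max()
    freqs = 220.0 * 2 ** (3 * (values - lo) / (hi - lo))  # map onto a 3-octave range, 220-1760 Hz

    sr, dur = 44100, 0.25                                  # sample rate (Hz), tone duration (s)
    t = np.arange(int(sr * dur)) / sr
    tones = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])
    pcm = (0.8 * 32767 * tones).astype(np.int16)           # 16-bit PCM with headroom

    with wave.open("sonification.wav", "wb") as w:         # one tone per data point, in sequence
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sr)
        w.writeframes(pcm.tobytes())
    ```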

  2. Facilitation of listening comprehension by visual information under noisy listening condition

    NASA Astrophysics Data System (ADS)

    Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi

    2009-02-01

    Comprehension of a sentence under a wide range of delay conditions between auditory and visual stimuli was measured in environments with low auditory clarity, produced by pink noise at levels of -10 dB and -15 dB. Results showed that the image was helpful for comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (=132 msec) or less, that the image was not helpful when the delay was 8 frames (=264 msec) or more, and that in some cases of the largest delay (32 frames), the video image interfered with comprehension.

  3. KSC-2012-6390

    NASA Image and Video Library

    2012-12-04

    CAPE CANAVERAL, Fla. – At the Kennedy Space Center Visitor Complex in Florida, sixth-grade students listen to a science presentation on NASA programs. Between Nov. 26 and Dec. 7, 2012, about 5,300 sixth-graders in Brevard County, Florida were bused to Kennedy's Visitor Complex for Brevard Space Week, an educational program designed to encourage interest in science, technology, engineering and mathematics (STEM) careers. Photo credit: NASA/Tim Jacobs

  4. KSC-2012-6385

    NASA Image and Video Library

    2012-12-04

    CAPE CANAVERAL, Fla. – At the Kennedy Space Center Visitor Complex in Florida, sixth-grade students listen to a presentation by former NASA astronaut Wendy Lawrence. Between Nov. 26 and Dec. 7, 2012, about 5,300 sixth-graders in Brevard County, Florida were bused to Kennedy's Visitor Complex for Brevard Space Week, an educational program designed to encourage interest in science, technology, engineering and mathematics (STEM) careers. Photo credit: NASA/Tim Jacobs

  5. Is complex signal processing for bone conduction hearing aids useful?

    PubMed

    Kompis, Martin; Kurz, Anja; Pfiffner, Flurin; Senn, Pascal; Arnold, Andreas; Caversaccio, Marco

    2014-05-01

    To establish whether complex signal processing is beneficial for users of bone anchored hearing aids. Review and analysis of two studies from our own group, each comparing a speech processor with basic digital signal processing (either Baha Divino or Baha Intenso) and a processor with complex digital signal processing (either Baha BP100 or Baha BP110 power). The main differences between basic and complex signal processing are the number of audiologist-accessible frequency channels and the availability and complexity of the directional multi-microphone noise reduction and loudness compression systems. Both studies show a small, statistically non-significant improvement of speech understanding in quiet with the complex digital signal processing. The average improvement for speech in noise is +0.9 dB if speech and noise are both emitted from the front of the listener. If noise is emitted from the rear and speech from the front of the listener, the advantage of the devices with complex digital signal processing as opposed to those with basic signal processing increases, on average, to +3.2 dB (range +2.3 to +5.1 dB, p ≤ 0.0032). Complex digital signal processing does indeed improve speech understanding, especially in noise coming from the rear. This finding has been supported by another study, published recently by a different research group. When compared to basic digital signal processing, complex digital signal processing can increase speech understanding of users of bone anchored hearing aids. The benefit is most significant for speech understanding in noise.

  6. Safe Haven.

    ERIC Educational Resources Information Center

    Bush, Gail

    2003-01-01

    Discusses school libraries as safe havens for teenagers and considers elements that foster that atmosphere, including the physical environment, lack of judgments, familiarity, leisure, and a welcoming nature. Focuses on the importance of relationships, and taking the time to listen to teens and encourage them. (LRW)

  7. Neural time course of visually enhanced echo suppression.

    PubMed

    Bishop, Christopher W; London, Sam; Miller, Lee M

    2012-10-01

    Auditory spatial perception plays a critical role in day-to-day communication. For instance, listeners utilize acoustic spatial information to segregate individual talkers into distinct auditory "streams" to improve speech intelligibility. However, spatial localization is an exceedingly difficult task in everyday listening environments with numerous distracting echoes from nearby surfaces, such as walls. Listeners' brains overcome this unique challenge by relying on acoustic timing and, quite surprisingly, visual spatial information to suppress short-latency (1-10 ms) echoes through a process known as "the precedence effect" or "echo suppression." In the present study, we employed electroencephalography (EEG) to investigate the neural time course of echo suppression both with and without the aid of coincident visual stimulation in human listeners. We find that echo suppression is a multistage process initialized during the auditory N1 (70-100 ms) and followed by space-specific suppression mechanisms from 150 to 250 ms. Additionally, we find a robust correlate of listeners' spatial perception (i.e., suppressing or not suppressing the echo) over central electrode sites from 300 to 500 ms. Contrary to our hypothesis, vision's powerful contribution to echo suppression occurs late in processing (250-400 ms), suggesting that vision contributes primarily during late sensory or decision making processes. Together, our findings support growing evidence that echo suppression is a slow, progressive mechanism modifiable by visual influences during late sensory and decision making stages. Furthermore, our findings suggest that audiovisual interactions are not limited to early, sensory-level modulations but extend well into late stages of cortical processing.

  8. A Generalized Mechanism for Perception of Pitch Patterns

    PubMed Central

    Loui, Psyche; Wu, Elaine H.; Wessel, David L.; Knight, Robert T.

    2009-01-01

    Surviving in a complex and changeable environment relies upon the ability to extract probable recurring patterns. Here we report a neurophysiological mechanism for rapid probabilistic learning of a new system of music. Participants listened to different combinations of tones from a previously-unheard system of pitches based on the Bohlen-Pierce scale, with chord progressions that form 3:1 ratios in frequency, notably different from 2:1 frequency ratios in existing musical systems. Event-related brain potentials elicited by improbable sounds in the new music system showed emergence over a one-hour period of physiological signatures known to index sound expectation in standard Western music. These indices of expectation learning were eliminated when sound patterns were played equiprobably, and co-varied with individual behavioral differences in learning. These results demonstrate that humans utilize a generalized probability-based perceptual learning mechanism to process novel sound patterns in music. PMID:19144845
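
    The Bohlen-Pierce scale replaces the octave (2:1) with the 3:1 "tritave" as its repeating interval. A short worked example follows under one stated assumption: the common equal-tempered form divides the tritave into 13 steps, which the abstract does not specify, so the 13-step division and the 220 Hz reference are illustrative.

    ```python
    # Hypothetical sketch: equal-tempered Bohlen-Pierce step frequencies.
    f0 = 220.0  # illustrative reference frequency (Hz)

    def bp_freq(step: int, base: float = f0) -> float:
        """Frequency of a Bohlen-Pierce step: the 3:1 tritave split into 13 equal parts."""
        return base * 3 ** (step / 13)

    scale = [round(bp_freq(n), 1) for n in range(14)]
    print(scale)  # step 13 lands at 660.0 Hz, exactly 3 x 220 Hz rather than a 2:1 octave
    ```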

  9. Auditory perception bias in speech imitation

    PubMed Central

    Postma-Nilsenová, Marie; Postma, Eric

    2013-01-01

    In an experimental study, we explored the role of auditory perception bias in vocal pitch imitation. Psychoacoustic tasks involving a missing fundamental indicate that some listeners are attuned to the relationship between all the higher harmonics present in the signal, which supports their perception of the fundamental frequency (the primary acoustic correlate of pitch). Other listeners focus on the lowest harmonic constituents of the complex sound signal which may hamper the perception of the fundamental. These two listener types are referred to as fundamental and spectral listeners, respectively. We hypothesized that the individual differences in speakers' capacity to imitate F0 found in earlier studies, may at least partly be due to the capacity to extract information about F0 from the speech signal. Participants' auditory perception bias was determined with a standard missing fundamental perceptual test. Subsequently, speech data were collected in a shadowing task with two conditions, one with a full speech signal and one with high-pass filtered speech above 300 Hz. The results showed that perception bias toward fundamental frequency was related to the degree of F0 imitation. The effect was stronger in the condition with high-pass filtered speech. The experimental outcomes suggest advantages for fundamental listeners in communicative situations where F0 imitation is used as a behavioral cue. Future research needs to determine to what extent auditory perception bias may be related to other individual properties known to improve imitation, such as phonetic talent. PMID:24204361
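
    The missing-fundamental stimulus at the heart of such perceptual tests is simple to construct: a complex tone containing only higher harmonics of a fundamental that is itself absent. The parameter values below are illustrative, not those of the study's standard test.

    ```python
    # Hypothetical sketch of a missing-fundamental stimulus.
    import numpy as np

    sr, dur, f0 = 44100, 1.0, 200.0
    t = np.arange(int(sr * dur)) / sr

    # Sum harmonics 2..6 of f0; the component at f0 itself is omitted.
    signal = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(2, 7))
    signal = signal / np.max(np.abs(signal))  # normalize to avoid clipping

    # The spectrum holds energy at 400, 600, 800, 1000, and 1200 Hz but none at
    # 200 Hz; "fundamental listeners" nevertheless report a 200 Hz pitch, while
    # "spectral listeners" track the lowest physical component instead.
    ```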

  10. Mathematics in the Early Years.

    ERIC Educational Resources Information Center

    Copley, Juanita V., Ed.

    Noting that young children are capable of surprisingly complex forms of mathematical thinking and learning, this book presents a collection of articles depicting children discovering mathematical ideas, teachers fostering students' informal mathematical knowledge, adults asking questions and listening to answers, and researchers examining…

  11. Diversity and Density: Lexically Determined Evaluative and Informational Consequences of Linguistic Complexity

    ERIC Educational Resources Information Center

    Bradac, James J.; And Others

    1977-01-01

    Defines lexical diversity as manifest vocabulary range and lexical density as the ratio of lexical to grammatical items in a unit of discourse. Examines the effects of lexical diversity and density on listeners' evaluative judgments. (MH)

  12. Low complexity lossless compression of underwater sound recordings.

    PubMed

    Johnson, Mark; Partan, Jim; Hurst, Tom

    2013-03-01

    Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., flac) guarantee exact data recovery. But these algorithms are relatively complex due to the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16-240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations resulting in less than 1 mW power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.
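
    The published compressor is not reproduced here, but low-complexity lossless audio coders generally share a two-stage shape: a fixed linear predictor shrinks sample magnitudes, then a Rice/Golomb code packs the residuals. The sketch below shows that generic shape under stated assumptions; the signal, predictor order, and parameter choice are illustrative stand-ins.

    ```python
    # Hypothetical sketch: fixed prediction + Rice-coding cost, the generic
    # two-stage recipe behind low-complexity lossless audio compression.
    import numpy as np

    def residuals(x: np.ndarray) -> np.ndarray:
        """Order-2 fixed predictor: predict x[n] as 2*x[n-1] - x[n-2]."""
        x = x.astype(np.int64)
        r = x.copy()
        r[2:] = x[2:] - (2 * x[1:-1] - x[:-2])
        return r

    def rice_bits(r: np.ndarray, k: int) -> int:
        """Total bits to Rice-code residuals with parameter k (after zigzag mapping)."""
        u = (np.abs(r) << 1) - (r < 0)        # zigzag: 0,-1,1,-2,2 -> 0,1,2,3,4
        return int(np.sum((u >> k) + 1 + k))  # unary quotient + stop bit + k-bit remainder

    # Stand-in recording: a tonal signal plus a little noise, 16-bit samples.
    n = 4096
    t = np.arange(n)
    x = np.round(1000 * np.sin(2 * np.pi * 50 * t / 8000)).astype(np.int64)
    x += np.random.default_rng(1).integers(-5, 6, n)

    r = residuals(x)
    k = int(np.log2(max(1.0, float(np.mean(np.abs(r))))))  # crude Rice parameter choice
    print(f"coded size ~{rice_bits(r, k) / (16 * n):.2f} of the raw 16-bit size")
    ```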

  13. Getting the cocktail party started: masking effects in speech perception

    PubMed Central

    Evans, S; McGettigan, C; Agnew, ZK; Rosen, S; Scott, SK

    2016-01-01

    Spoken conversations typically take place in noisy environments, and different kinds of masking sounds place differing demands on cognitive resources. Previous studies, examining the modulation of neural activity associated with the properties of competing sounds, have shown that additional speech streams engage the superior temporal gyrus. However, the absence of a condition in which target speech was heard without additional masking made it difficult to identify brain networks specific to masking and to ascertain the extent to which competing speech was processed equivalently to target speech. In this study, we scanned young healthy adults with continuous functional Magnetic Resonance Imaging (fMRI), whilst they listened to stories masked by sounds that differed in their similarity to speech. We show that auditory attention and control networks are activated during attentive listening to masked speech in the absence of an overt behavioural task. We demonstrate that competing speech is processed predominantly in the left hemisphere within the same pathway as target speech but is not treated equivalently within that stream, and that individuals who perform better on speech-in-noise tasks activate the left mid-posterior superior temporal gyrus more. Finally, we identify neural responses associated with the onset of sounds in the auditory environment; this activity was found within right-lateralised frontal regions, consistent with a phasic alerting response. Taken together, these results provide a comprehensive account of the neural processes involved in listening in noise. PMID:26696297

  14. Abnormal Complex Auditory Pattern Analysis in Schizophrenia Reflected in an Absent Missing Stimulus Mismatch Negativity.

    PubMed

    Salisbury, Dean F; McCathern, Alexis G

    2016-11-01

    The simple mismatch negativity (MMN) to tones deviating physically (in pitch, loudness, duration, etc.) from repeated standard tones is robustly reduced in schizophrenia. Although generally interpreted to reflect memory or cognitive processes, the simple MMN likely contains some activity from non-adapted sensory cells, clouding what process is affected in schizophrenia. Research in healthy participants has demonstrated that the MMN can be elicited by deviations from abstract auditory patterns and complex rules that do not cause sensory adaptation. Whether persons with schizophrenia show abnormalities in the complex MMN is unknown. Fourteen schizophrenia participants and 16 matched healthy controls underwent EEG recording while listening to 400 groups of 6 tones presented 330 ms apart, with groups separated by 800 ms. Occasional deviant groups were missing the 4th or 6th tone (50 groups each). Healthy participants generated a robust response to a missing but expected tone. The schizophrenia group was significantly impaired in activating the missing stimulus MMN, generating no significant activity at all. Schizophrenia affects the ability of "primitive sensory intelligence" and pre-attentive perceptual mechanisms to form implicit groups in the auditory environment. Importantly, this deficit must relate to abnormalities in abstract complex pattern analysis rather than to sensory problems in the disorder. The results indicate a deficit in parsing of the complex auditory scene which likely impacts negatively on successful social navigation in schizophrenia. Knowledge of the location and circuit architecture underlying the true novelty-related MMN and its pathophysiology in schizophrenia will help target future interventions.

  15. How Hearing Impairment Affects Sentence Comprehension: Using Eye Fixations to Investigate the Duration of Speech Processing

    PubMed Central

    Kollmeier, Birger; Brand, Thomas

    2015-01-01

    The main objective of this study was to investigate the extent to which hearing impairment influences the duration of sentence processing. An eye-tracking paradigm is introduced that provides an online measure of how hearing impairment prolongs processing of linguistically complex sentences; this measure uses eye fixations recorded while the participant listens to a sentence. Eye fixations toward a target picture (which matches the aurally presented sentence) were measured in the presence of a competitor picture. Based on the recorded eye fixations, the single target detection amplitude, which reflects the tendency of the participant to fixate the target picture, was used as a metric to estimate the duration of sentence processing. The single target detection amplitude was calculated for sentence structures with different levels of linguistic complexity and for different listening conditions: in quiet and in two different noise conditions. Participants with hearing impairment spent more time processing sentences, even at high levels of speech intelligibility. In addition, the relationship between the proposed online measure and listener-specific factors, such as hearing aid use and cognitive abilities, was investigated. Longer processing durations were measured for participants with hearing impairment who were not accustomed to using a hearing aid. Moreover, significant correlations were found between sentence processing duration and individual cognitive abilities (such as working memory capacity or susceptibility to interference). These findings are discussed with respect to audiological applications. PMID:25910503

  16. Getting Preschoolers Ready To Read and Write.

    ERIC Educational Resources Information Center

    Keith, Lori; Morrison, George S.; Brown, Karon

    2002-01-01

    Discusses the increased literacy emphasis in schools and how early childhood programs can provide an enriched literacy environment. Defines and provides suggestions for activities related to basic literacy concepts: listening comprehension, phonological awareness, reading motivation, written expression, letter and early word recognition, and…

  17. Workplace Communication: Meaningful Messages.

    ERIC Educational Resources Information Center

    Travis, Lisa; Watkins, Lisa

    This learning module emphasizes workplace communication skills with a special focus on the team environment. The following skills are addressed: speaking with clarity, maintaining eye contact, listening carefully, responding to questions with patience and an open mind, showing a willingness to understand, giving instructions clearly, and…

  18. Environment-specific noise suppression for improved speech intelligibility by cochlear implant users.

    PubMed

    Hu, Yi; Loizou, Philipos C

    2010-06-01

    Attempts to develop noise-suppression algorithms that can significantly improve speech intelligibility in noise by cochlear implant (CI) users have met with limited success. This is partly because algorithms were sought that would work equally well in all listening situations, which has been quite challenging given the variability in the temporal/spectral characteristics of real-world maskers. A different approach is taken in the present study, which focuses on the development of environment-specific noise-suppression algorithms. The proposed algorithm selects a subset of the envelope amplitudes for stimulation based on the signal-to-noise ratio (SNR) of each channel. Binary classifiers, trained using data collected from a particular noisy environment, are first used to classify the mixture envelopes of each channel as either target-dominated (SNR ≥ 0 dB) or masker-dominated (SNR < 0 dB). Only target-dominated channels are subsequently selected for stimulation. Results with CI listeners indicated substantial improvements (by nearly 44 percentage points at 5 dB SNR) in intelligibility with the proposed algorithm when tested with sentences embedded in three real-world maskers. The present study demonstrated that the environment-specific approach to noise reduction has the potential to restore speech intelligibility in noise to a level near that attained in quiet.
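
    The selection rule itself reduces to a per-channel, per-frame binary decision. The sketch below illustrates it with an oracle SNR threshold standing in for the study's trained binary classifiers; the envelope matrices and channel count are hypothetical.

    ```python
    # Hypothetical sketch of SNR-based channel selection for CI stimulation.
    import numpy as np

    rng = np.random.default_rng(2)
    n_channels, n_frames = 16, 100
    target = rng.random((n_channels, n_frames)) ** 2        # stand-in clean-speech envelopes
    masker = 0.5 * rng.random((n_channels, n_frames)) ** 2  # stand-in noise envelopes
    mixture = target + masker

    snr_db = 10 * np.log10(target / np.maximum(masker, 1e-12))
    keep = snr_db >= 0.0                   # target-dominated decision per channel/frame
    stimulus = np.where(keep, mixture, 0)  # masker-dominated slots are not stimulated

    print(f"{keep.mean():.0%} of channel/frame slots selected for stimulation")
    ```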

  19. Benefits of incorporating the adaptive dynamic range optimization amplification scheme into an assistive listening device for people with mild or moderate hearing loss.

    PubMed

    Chang, Hung-Yue; Luo, Ching-Hsing; Lo, Tun-Shin; Chen, Hsiao-Chuan; Huang, Kuo-You; Liao, Wen-Huei; Su, Mao-Chang; Liu, Shu-Yu; Wang, Nan-Mai

    2017-08-28

    This study investigated whether a self-designed assistive listening device (ALD) that incorporates an adaptive dynamic range optimization (ADRO) amplification strategy can surpass a commercially available, monaurally worn linear ALD, the SM100. Both subjective and objective measurements were implemented: Mandarin Hearing-In-Noise Test (MHINT) scores were the objective measurement, whereas participant satisfaction was the subjective measurement. The comparison was performed in a mixed design (i.e., subjects' hearing status being mild or moderate, quiet versus noisy environments, and linear versus ADRO scheme). The participants were two groups of hearing-impaired subjects, nine with mild and eight with moderate hearing loss. The ADRO system revealed a significant difference in the MHINT sentence reception threshold (SRT) in noisy environments between monaurally aided and unaided conditions, whereas the linear system showed no such difference. The benchmark results showed that the ADRO scheme is effectively beneficial to people who experience mild or moderate hearing loss in noisy environments. The satisfaction ratings regarding overall speech quality indicated that the participants were satisfied with the speech quality of both the ADRO and linear schemes in quiet environments, and that they were more satisfied with ADRO than with the linear scheme in noisy environments.

  20. Human vocal attractiveness as signaled by body size projection.

    PubMed

    Xu, Yi; Lee, Albert; Wu, Wing-Li; Liu, Xuan; Birkholz, Peter

    2013-01-01

    Voice, as a secondary sexual characteristic, is known to affect the perceived attractiveness of human individuals. But the underlying mechanism of vocal attractiveness has remained unclear. Here, we presented human listeners with acoustically altered natural sentences and fully synthetic sentences with systematically manipulated pitch, formants and voice quality based on a principle of body size projection reported for animal calls and emotional human vocal expressions. The results show that male listeners preferred a female voice that signals a small body size, with relatively high pitch, wide formant dispersion and breathy voice, while female listeners preferred a male voice that signals a large body size with low pitch and narrow formant dispersion. Interestingly, however, male vocal attractiveness was also enhanced by breathiness, which presumably softened the aggressiveness associated with a large body size. These results, together with the additional finding that the same vocal dimensions also affect emotion judgment, indicate that humans still employ a vocal interaction strategy used in animal calls despite the development of complex language.
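
    One of the manipulated cues, formant dispersion, has a simple operational definition: the average spacing between adjacent formant frequencies, which scales inversely with vocal-tract length and hence with apparent body size. A minimal sketch follows; the formant values are illustrative round numbers, not stimuli from the study.

    ```python
    # Hypothetical sketch: formant dispersion as a body-size cue.
    def formant_dispersion(formants_hz: list[float]) -> float:
        """Mean spacing (Hz) between adjacent formants; narrower dispersion
        implies a longer vocal tract and thus projects a larger body."""
        f = sorted(formants_hz)
        return (f[-1] - f[0]) / (len(f) - 1)

    large_body = formant_dispersion([500, 1500, 2500, 3500])  # narrow: ~1000 Hz spacing
    small_body = formant_dispersion([600, 1800, 3000, 4200])  # wide: ~1200 Hz spacing
    print(large_body, small_body)  # 1000.0 1200.0
    ```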

   1. The influence of formant levels on the perception of synthetic vowel sounds

    NASA Astrophysics Data System (ADS)

    Kubzdela, Henryk; Owsianny, Mariuz

    A computer model of a generator of periodic complex sounds simulating consonants was developed. The system makes possible independent regulation of the level of each formant and instant generation of the sound. A trapezoid approximates the spectral envelope within the range of each formant. Using this model, each of six listeners experimentally selected synthesis parameters for six sounds that seemed to him optimal approximations of Polish consonants. From these, another six sounds were selected that were identified by a majority of the six listeners, and by several additional listeners, as best qualified to serve as prototypes of Polish consonants. These prototypes were then used to randomly create sounds with various combinations of second- and third-formant levels, and these were presented to seven listeners for identification. The identification results are presented in tabular form in three variants and are described from the point of view of the requirements of automatic recognition of consonants in continuous speech.
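
    The described generator lends itself to a compact additive-synthesis sketch: harmonics of a fixed fundamental, each weighted by a trapezoidal spectral envelope around the formant centers, with every formant level independently adjustable. All parameter values here are illustrative assumptions, not those of the original system.

    ```python
    # Hypothetical sketch: periodic complex tone with trapezoid formant envelopes.
    import numpy as np

    def trapezoid_gain(f: float, center: float, top: float = 100.0, skirt: float = 200.0) -> float:
        """Gain (0..1) of a trapezoidal envelope at frequency f around a formant center."""
        d = abs(f - center) - top / 2
        return float(np.clip(1 - d / skirt, 0, 1))

    sr, dur, f0 = 16000, 0.5, 120.0
    formants = {500.0: 1.0, 1500.0: 0.6, 2500.0: 0.3}  # center (Hz) -> independently set level
    t = np.arange(int(sr * dur)) / sr

    y = np.zeros_like(t)
    for k in range(1, int(4000 / f0)):  # harmonics of f0 up to ~4 kHz
        gain = max(level * trapezoid_gain(k * f0, fc) for fc, level in formants.items())
        y += gain * np.sin(2 * np.pi * k * f0 * t)
    y /= np.max(np.abs(y))  # normalize
    ```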

  2. Syntactic comprehension in reading and listening: a study with French children with dyslexia.

    PubMed

    Casalis, Séverine; Leuwers, Christel; Hilton, Heather

    2013-01-01

    This study examined syntactic comprehension in French children with dyslexia in both listening and reading. In the first syntactic comprehension task, a partial version of the Epreuve de Compréhension syntaxico-sémantique (ECOSSE; the French adaptation of Bishop's Test for Reception of Grammar), children with dyslexia performed at a lower level in the written but not in the spoken modality, compared to reading-age-matched children, suggesting a difficulty in handling syntax while reading. In the second task, syntactic processing was further explored through a test of relative clause processing, in which inflectional markers could aid in attributing roles to the elements in a complex syntactic structure. Children with dyslexia were insensitive to inflectional markers in both reading and listening, as was the reading-age control group, while only the older normal-reader group appeared to make use of the inflectional markers. Overall, the results support the hypothesis that comprehension difficulties in dyslexia are strongly related to poor reading skills.

  3. Effect of speech-intrinsic variations on human and automatic recognition of spoken phonemes.

    PubMed

    Meyer, Bernd T; Brand, Thomas; Kollmeier, Birger

    2011-01-01

    The aim of this study is to quantify the gap between the recognition performance of human listeners and an automatic speech recognition (ASR) system, with special focus on intrinsic variations of speech, such as speaking rate and effort, altered pitch, and the presence of dialect and accent. Second, it is investigated whether the most common ASR features contain all the information required to recognize speech in noisy environments, by using resynthesized ASR features in listening experiments. For the phoneme recognition task, the ASR system achieved the human performance level only when the signal-to-noise ratio (SNR) was increased by 15 dB, which is an estimate of the human-machine gap in terms of SNR. The major part of this gap is attributed to the feature extraction stage, since human listeners achieve comparable recognition scores when the SNR difference between unaltered and resynthesized utterances is 10 dB. Intrinsic variabilities result in strong increases of error rates, both in human speech recognition (HSR) and ASR (with relative increases of up to 120%). An analysis of phoneme duration and recognition rates indicates that human listeners are better able to identify temporal cues than the machine at low SNRs, which suggests incorporating information about the temporal dynamics of speech into ASR systems.

  4. Masking Period Patterns and Forward Masking for Speech-Shaped Noise: Age-Related Effects.

    PubMed

    Grose, John H; Menezes, Denise C; Porter, Heather L; Griz, Silvana

    2016-01-01

    The purpose of this study was to assess age-related changes in temporal resolution in listeners with relatively normal audiograms. The hypothesis was that increased susceptibility to nonsimultaneous masking contributes to the hearing difficulties experienced by older listeners in complex fluctuating backgrounds. Participants included younger (n = 11), middle-age (n = 12), and older (n = 11) listeners with relatively normal audiograms. The first phase of the study measured masking period patterns for speech-shaped noise maskers and signals. From these data, temporal window shapes were derived. The second phase measured forward-masking functions and assessed how well the temporal window fits accounted for these data. The masking period patterns demonstrated increased susceptibility to backward masking in the older listeners, compatible with a more symmetric temporal window in this group. The forward-masking functions exhibited an age-related decline in recovery to baseline thresholds, and there was also an increase in the variability of the temporal window fits to these data. This study demonstrated an age-related increase in susceptibility to nonsimultaneous masking, supporting the hypothesis that exacerbated nonsimultaneous masking contributes to age-related difficulties understanding speech in fluctuating noise. Further support for this hypothesis comes from limited speech-in-noise data, suggesting an association between susceptibility to forward masking and speech understanding in modulated noise.

  5. Perception of Leitmotives in Richard Wagner's Der Ring des Nibelungen.

    PubMed

    Baker, David J; Müllensiefen, Daniel

    2017-01-01

    The music of Richard Wagner tends to generate very diverse judgments, indicative of the complex relationship between listeners and the sophisticated musical structures in Wagner's music. This paper presents findings from two listening experiments using music from Wagner's Der Ring des Nibelungen that explore musical as well as individual listener parameters to better understand how listeners are able to hear leitmotives, a compositional device closely associated with Wagner's music. Results confirm findings from a previous experiment showing that specific expertise with Wagner's music can account for a greater portion of the variance in an individual's ability to recognize and remember musical material compared to measures of generic musical training. Results also explore how the acoustical distance of the leitmotives affects memory recognition, using a chroma similarity measure. In addition, we show how characteristics of the compositional structure of the leitmotives contribute to their salience and memorability. A final model is then presented that accounts for the aforementioned individual differences factors, as well as parameters of musical surface and structure. Our results suggest that future work in music perception may consider both individual differences variables beyond musical training, as well as the symbolic and audio features commonly used in music information retrieval, in order to build robust models of musical perception and cognition.
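
    The chroma similarity measure referenced above can be approximated with standard audio tooling: reduce each excerpt to a 12-dimensional pitch-class profile and compare profiles with cosine similarity. This is a plausible reconstruction, assuming librosa is available; the file names are hypothetical and the paper's exact computation may differ.

    ```python
    # Hypothetical sketch: chroma-based acoustical distance between two excerpts.
    import numpy as np
    import librosa

    def mean_chroma(path: str) -> np.ndarray:
        """Average 12-bin pitch-class energy profile of an audio file."""
        y, sr = librosa.load(path, sr=22050)
        chroma = librosa.feature.chroma_stft(y=y, sr=sr)  # shape: (12, frames)
        return chroma.mean(axis=1)

    a = mean_chroma("leitmotif_original.wav")   # hypothetical file
    b = mean_chroma("leitmotif_test_item.wav")  # hypothetical file
    similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    print(f"chroma cosine similarity: {similarity:.2f}")  # 1.0 = identical profiles
    ```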

  6. Perception of Leitmotives in Richard Wagner's Der Ring des Nibelungen

    PubMed Central

    Baker, David J.; Müllensiefen, Daniel

    2017-01-01

    The music of Richard Wagner tends to generate very diverse judgments, indicative of the complex relationship between listeners and the sophisticated musical structures in Wagner's music. This paper presents findings from two listening experiments using music from Wagner's Der Ring des Nibelungen that explore musical as well as individual listener parameters to better understand how listeners are able to hear leitmotives, a compositional device closely associated with Wagner's music. Results confirm findings from a previous experiment showing that specific expertise with Wagner's music can account for a greater portion of the variance in an individual's ability to recognize and remember musical material compared to measures of generic musical training. Results also explore how the acoustical distance of the leitmotives affects memory recognition, using a chroma similarity measure. In addition, we show how characteristics of the compositional structure of the leitmotives contribute to their salience and memorability. A final model is then presented that accounts for the aforementioned individual differences factors, as well as parameters of musical surface and structure. Our results suggest that future work in music perception may consider both individual differences variables beyond musical training, as well as the symbolic and audio features commonly used in music information retrieval, in order to build robust models of musical perception and cognition. PMID:28522981

  7. Aided speech recognition in single-talker competition by elderly hearing-impaired listeners

    NASA Astrophysics Data System (ADS)

    Coughlin, Maureen; Humes, Larry

    2004-05-01

    This study examined speech-identification performance in one-talker interference conditions that increased in complexity while audibility was ensured over a wide bandwidth (200-4000 Hz). Factorial combinations of three independent variables were used to vary the amount of informational masking. These variables were: (1) competition playback direction (forward or reverse); (2) gender match between target and competition talkers (same or different); and (3) target talker uncertainty (one of three possible talkers from trial to trial). Four groups of listeners, two elderly hearing-impaired groups differing in age (65-74 and 75-84 years) and two young normal-hearing groups, were tested. One of the groups of young normal-hearing listeners was tested under acoustically equivalent test conditions and one was tested under perceptually equivalent test conditions. The effect of each independent variable on speech-identification performance and informational masking was generally consistent with expectations. Group differences in the observed informational masking were most pronounced for the oldest group of hearing-impaired listeners. The eight measures of speech-identification performance were found to be strongly correlated with one another, and individual differences in speech understanding performance among the elderly were found to be associated with age and level of education. [Work supported, in part, by NIA.]

  8. Efficient techniques for wave-based sound propagation in interactive applications

    NASA Astrophysics Data System (ADS)

    Mehra, Ravish

    Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from the point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called wave-based techniques, are too expensive in both computation and memory. Therefore, these techniques face many challenges in terms of their applicability in interactive applications, including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost for mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. This spherical harmonic-based representation of source directivity can support analytical, data-driven, rotating or time-varying directivity functions at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. It is demonstrated that by carefully mapping all components of the wave simulator to the parallel processing capabilities of graphics processors, significant improvement in performance can be achieved compared to CPU-based simulators, while maintaining numerical accuracy. We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on users' immersion in the virtual environment. We have integrated these techniques with the Half-Life 2 game engine, Oculus Rift head-mounted display, and Xbox game controller to enable users to experience high-quality acoustic effects and spatial audio in the virtual environment.
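
    The wave equation mentioned above is typically solved numerically with time-domain finite differences. As a minimal illustration only (the dissertation's solvers are 3-D, GPU-parallel, and far more sophisticated), the sketch below advances a 1-D pressure field with the standard leapfrog update and a CFL-stable time step; all parameter values are illustrative.

    ```python
    # Hypothetical sketch: 1-D FDTD leapfrog solver for u_tt = c^2 * u_xx.
    import numpy as np

    c, L, nx = 343.0, 10.0, 200            # speed of sound (m/s), domain (m), grid points
    dx = L / (nx - 1)
    dt = 0.9 * dx / c                      # CFL condition: c*dt/dx <= 1 for stability

    x = np.linspace(0, L, nx)
    u_prev = np.exp(-((x - L / 2) ** 2))   # initial Gaussian pressure pulse
    u = u_prev.copy()                      # zero initial velocity
    r2 = (c * dt / dx) ** 2

    for _ in range(500):                   # leapfrog time stepping
        u_next = np.zeros_like(u)          # rigid (u = 0) endpoints
        u_next[1:-1] = 2 * u[1:-1] - u_prev[1:-1] + r2 * (u[2:] - 2 * u[1:-1] + u[:-2])
        u_prev, u = u, u_next

    print(f"max |u| after 500 steps: {np.abs(u).max():.3f}")
    ```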

  9. Pitch perception prior to cortical maturation

    NASA Astrophysics Data System (ADS)

    Lau, Bonnie K.

    Pitch perception plays an important role in many complex auditory tasks including speech perception, music perception, and sound source segregation. Because of the protracted and extensive development of the human auditory cortex, pitch perception might be expected to mature, at least over the first few months of life. This dissertation investigates complex pitch perception in 3-month-olds, 7-month-olds, and adults, time points at which the organization of the auditory pathway is distinctly different. Using an observer-based psychophysical procedure, a series of four studies was conducted to determine whether infants (1) discriminate the pitch of harmonic complex tones, (2) discriminate the pitch of unresolved harmonics, (3) discriminate the pitch of missing fundamental melodies, and (4) show sensitivity to pitch and spectral changes comparable to that of adult listeners. The stimuli used in these studies were harmonic complex tones with energy missing at the fundamental frequency. Infants at both three and seven months of age discriminated the pitch of missing fundamental complexes composed of resolved and unresolved harmonics as well as missing fundamental melodies, demonstrating perception of complex pitch by three months of age. More surprisingly, infants in both age groups had lower pitch and spectral discrimination thresholds than adult listeners. Furthermore, no differences in performance on any of the tasks were observed between infants at three and seven months of age. These results suggest that subcortical processing is not only sufficient to support pitch perception prior to cortical maturation but also provides adult-like sensitivity to pitch by three months.

  10. Pointers for Parenting.

    ERIC Educational Resources Information Center

    Bessant, Helen P., Ed.

    Presented are 11 brief articles designed to help parents enhance their children's school performance and generally improve the home environment. Included is information on the following topics: the role of the social worker in parent education, home activities to improve a child's reading skills, developing listening skill through instructional…

  11. Memorandum on Facilities for Early Childhood Education.

    ERIC Educational Resources Information Center

    Deutsch, Martin; And Others

    Because the learning environment has significance for the disadvantaged child, instructional space should be provided that will facilitate intellectual development. Guidelines are given for the general area, block alcove, manipulative toy area, reading and listening area, doll and housekeeping area, art area, tutoring booth, cubicles, toilets, storage,…

  12. A Hospital Clinic Early Intervention Program.

    ERIC Educational Resources Information Center

    Simser, Judith I.; Steacie, Pamela

    1993-01-01

    The Aural Habilitation Program of Children's Hospital of Eastern Ontario (Canada) provides weekly, individualized aural habilitation sessions for parents of young children with hearing impairments and offers guidance in creating a listening, learning environment in the home. Strategies for developing parents' skills and confidence are described.…

  13. Recognizing speech under a processing load: dissociating energetic from informational factors.

    PubMed

    Mattys, Sven L; Brooks, Joanna; Cooke, Martin

    2009-11-01

    Effects of perceptual and cognitive loads on spoken-word recognition have so far largely escaped investigation. This study lays the foundations of a psycholinguistic approach to speech recognition in adverse conditions that draws upon the distinction between energetic masking, i.e., listening environments leading to signal degradation, and informational masking, i.e., listening environments leading to depletion of higher-order, domain-general processing resources, independent of signal degradation. We show that severe energetic masking, such as that produced by background speech or noise, curtails reliance on lexical-semantic knowledge and increases relative reliance on salient acoustic detail. In contrast, informational masking, induced by a resource-depleting competing task (divided attention or a memory load), results in the opposite pattern. Based on this clear dissociation, we propose a model of speech recognition that addresses not only the mapping between sensory input and lexical representations, as traditionally advocated, but also the way in which this mapping interfaces with general cognition and non-linguistic processes.

  14. Language-driven anticipatory eye movements in virtual reality.

    PubMed

    Eichert, Nicole; Peeters, David; Hagoort, Peter

    2018-06-01

    Predictive language processing is often studied by measuring eye movements as participants look at objects on a computer screen while they listen to spoken sentences. This variant of the visual-world paradigm has revealed that information encountered by a listener at a spoken verb can give rise to anticipatory eye movements to a target object, which is taken to indicate that people predict upcoming words. The ecological validity of such findings remains questionable, however, because these computer experiments used two-dimensional stimuli that were mere abstractions of real-world objects. Here we present a visual-world paradigm study in a three-dimensional (3-D) immersive virtual reality environment. Despite significant changes in the stimulus materials and the different mode of stimulus presentation, language-mediated anticipatory eye movements were still observed. These findings thus indicate that people do predict upcoming words during language comprehension in a more naturalistic setting where natural depth cues are preserved. Moreover, the results confirm the feasibility of using eyetracking in rich and multimodal 3-D virtual environments.

  15. Evaluation of a Stereo Music Preprocessing Scheme for Cochlear Implant Users.

    PubMed

    Buyens, Wim; van Dijk, Bas; Moonen, Marc; Wouters, Jan

    2018-01-01

    Although most cochlear implant (CI) users achieve good speech understanding, at least in quiet environments, the perception and appraisal of music are generally unsatisfactory. The improvement in music appraisal was evaluated in CI participants using a stereo music preprocessing scheme implemented on a take-home device, in a comfortable listening environment. The preprocessing allowed adjusting the balance among vocals/bass/drums and the other instruments, and was evaluated for different genres of music. The correlation between the preferred settings and the participants' speech and pitch detection performance was investigated. During the initial visit preceding the take-home test, the participants' speech-in-noise perception and pitch detection performance were measured, and a questionnaire about their music involvement was completed. The take-home device was provided, including the stereo music preprocessing scheme and seven playlists with six songs each. The participants were asked to adjust the balance by means of a turning wheel to make the music sound most enjoyable, and to repeat this three times for all songs. Twelve postlingually deafened CI users participated in the study. The data were collected by means of the take-home device, which preserved all preferred settings for the different songs. Statistical analysis was done with a Friedman test (with post hoc Wilcoxon signed-rank tests) to check the effect of "Genre." The correlations were investigated with Pearson's and Spearman's correlation coefficients. All participants preferred a balance significantly different from the original balance. Differences across participants were observed that could not be explained by perceptual abilities. An effect of "Genre" was found, with a significantly smaller preferred deviation from the original balance for Golden Oldies than for the other genres. The stereo music preprocessing scheme improved music appraisal with complex music and hence might be a good tool for music listening, training, or rehabilitation for CI users.
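
    The genre analysis described above (a Friedman test with post hoc Wilcoxon signed-rank tests) can be reproduced in outline with scipy. The sketch below uses made-up preference data for four hypothetical genres, so the group names, sizes, and values are illustrative only, not the study's.

        import numpy as np
        from scipy import stats

        # Hypothetical preferred-balance deviations (arbitrary units) for 12
        # listeners; one array per genre, one value per listener.
        rng = np.random.default_rng(0)
        golden_oldies = rng.normal(1.0, 0.5, 12)
        pop = rng.normal(2.0, 0.5, 12)
        rock = rng.normal(2.2, 0.5, 12)
        classical = rng.normal(1.8, 0.5, 12)

        # Friedman test for an overall within-subject effect of genre.
        chi2, p = stats.friedmanchisquare(golden_oldies, pop, rock, classical)
        print(f"Friedman chi2 = {chi2:.2f}, p = {p:.4f}")

        # Post hoc pairwise Wilcoxon signed-rank tests against Golden Oldies.
        for name, data in [("pop", pop), ("rock", rock), ("classical", classical)]:
            w, p_pair = stats.wilcoxon(golden_oldies, data)
            print(f"Golden Oldies vs {name}: W = {w:.1f}, p = {p_pair:.4f}")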

  16. Experiments of multichannel least-square methods for sound field reproduction inside aircraft mock-up: Objective evaluations

    NASA Astrophysics Data System (ADS)

    Gauthier, P.-A.; Camier, C.; Lebel, F.-A.; Pasco, Y.; Berry, A.; Langlois, J.; Verron, C.; Guastavino, C.

    2016-08-01

    Sound environment reproduction of various flight conditions in aircraft mock-ups is a valuable tool for the study, prediction, demonstration and jury testing of interior aircraft sound quality and annoyance. To provide a faithful reproduced sound environment, time, frequency and spatial characteristics should be preserved. Physical sound field reproduction methods for spatial sound reproduction are mandatory to immerse the listener's body in the proper sound fields so that localization cues are recreated at the listener's ears. Vehicle mock-ups pose specific problems for sound field reproduction. Confined spaces, needs for invisible sound sources and very specific acoustical environment make the use of open-loop sound field reproduction technologies such as wave field synthesis (based on free-field models of monopole sources) not ideal. In this paper, experiments in an aircraft mock-up with multichannel least-square methods and equalization are reported. The novelty is the actual implementation of sound field reproduction with 3180 transfer paths and trim panel reproduction sources in laboratory conditions with a synthetic target sound field. The paper presents objective evaluations of reproduced sound fields using various metrics as well as sound field extrapolation and sound field characterization.
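
    The multichannel least-squares method named here solves for reproduction-source driving signals that minimize the error between the reproduced and target sound fields at an array of control microphones, typically with regularization to keep source effort bounded. A minimal frequency-domain sketch of that formulation follows; the dimensions, the random stand-in transfer matrix, and the regularization weight are illustrative assumptions, not the experiment's values.

        import numpy as np

        # Hypothetical dimensions: 64 control microphones, 16 reproduction sources.
        n_mics, n_srcs = 64, 16
        rng = np.random.default_rng(1)

        # G[m, s]: complex transfer function from source s to microphone m
        # at one analysis frequency (random stand-in for measured paths).
        G = rng.standard_normal((n_mics, n_srcs)) + 1j * rng.standard_normal((n_mics, n_srcs))
        p_target = rng.standard_normal(n_mics) + 1j * rng.standard_normal(n_mics)

        # Regularized least squares (Tikhonov) for the driving signals q:
        # minimize ||G q - p_target||^2 + beta ||q||^2
        beta = 1e-2
        q = np.linalg.solve(G.conj().T @ G + beta * np.eye(n_srcs),
                            G.conj().T @ p_target)

        # Relative reproduction error as a sanity check.
        error = np.linalg.norm(G @ q - p_target) / np.linalg.norm(p_target)
        print(f"relative reproduction error: {error:.3f}")

    Solving this per frequency bin across thousands of measured transfer paths is, in outline, what an implementation with 3180 paths entails; regularization trades reproduction accuracy against robustness and source effort.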

  17. Research and Studies Directory for Manpower, Personnel, and Training

    DTIC Science & Technology

    1988-01-01

    Directory entries include: psychophysiological mapping of cognitive processes; control of biosonar behavior by the auditory cortex (Suga, N., Washington Univ., St. Louis, MO); and dichotic listening to complex sounds: effects of stimulus characteristics and…

  18. Brain responses to 40-Hz binaural beat and effects on emotion and memory.

    PubMed

    Jirakittayakorn, Nantawachara; Wongsawat, Yodchanan

    2017-10-01

    Gamma oscillation plays a role in the binding process, or sensory integration, a process by which several brain areas beside the primary cortex are activated for higher perception of a received stimulus. Beta oscillation is also involved in interpreting a received stimulus and occurs following gamma oscillation; this process, known as the gamma-to-beta transition, serves to neglect unnecessary stimuli in the surrounding environment. Gamma oscillation is also associated with cognitive functions, memory, and emotion. Therefore, modulation of brain activity can lead to manipulation of cognitive functions. The stimulus used in this study was a 40-Hz binaural beat, because binaural beats induce a frequency-following response. This study aimed to investigate the neural oscillations responding to the 40-Hz binaural beat and to evaluate working memory function and emotional states after listening to that stimulus. Two experiments were developed based on the study aims. In the first experiment, electroencephalograms were recorded while participants listened to the stimulus for 30 min. The results suggested that frontal, temporal, and central regions were activated within 15 min. In the second experiment, a word-list recall task was conducted before and after listening to the stimulus for 20 min. The results showed that, after listening, recall increased for words in the working-memory portion of the list. The Brunel Mood Scale, a questionnaire evaluating emotional states, revealed changes in emotional states after listening to the stimulus, and these changes were consistent with the induced neural oscillations.
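
    A 40-Hz binaural beat like the stimulus used here is generated by presenting two pure tones whose frequencies differ by 40 Hz, one to each ear over headphones. A minimal sketch follows; the carrier frequency, duration, and sample rate are illustrative assumptions, not the study's parameters.

        import numpy as np

        fs = 44100          # sample rate (Hz)
        duration = 10.0     # seconds (illustrative; the study used 20-30 min)
        f_carrier = 240.0   # left-ear tone (Hz), an assumed carrier
        f_beat = 40.0       # beat frequency: the interaural frequency difference

        t = np.arange(int(fs * duration)) / fs
        left = np.sin(2 * np.pi * f_carrier * t)
        right = np.sin(2 * np.pi * (f_carrier + f_beat) * t)

        # Stack into a stereo buffer. The 40-Hz beat arises neurally, not
        # acoustically, when each ear receives its own tone over headphones.
        stereo = np.stack([left, right], axis=1).astype(np.float32)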

  19. Dichotic listening tests of functional brain asymmetry predict response to fluoxetine in depressed women and men.

    PubMed

    Bruder, Gerard E; Stewart, Jonathan W; McGrath, Patrick J; Deliyannides, Deborah; Quitkin, Frederic M

    2004-09-01

    Patients having a depressive disorder vary widely in their therapeutic responsiveness to a selective serotonin reuptake inhibitor (SSRI), but there are no clinical predictors of treatment outcome. Studies using dichotic listening, electrophysiologic and neuroimaging measures suggest that pretreatment differences among depressed patients in functional brain asymmetry are related to responsiveness to antidepressants. Two new studies replicate differences in dichotic listening asymmetry between fluoxetine responders and nonresponders, and demonstrate the importance of gender in this context. Right-handed outpatients who met DSM-IV criteria for major depression, dysthymia, or depression not otherwise specified were tested on dichotic fused-words and complex tones tests before completing 12 weeks of fluoxetine treatment. Perceptual asymmetry (PA) scores were compared for 75 patients (38 women) who responded to treatment and 39 patients (14 women) who were nonresponders. Normative data were also obtained for 101 healthy adults (61 women). Patients who responded to fluoxetine differed from nonresponders and healthy adults in favoring left- over right-hemisphere processing of dichotic stimuli, and this difference was dependent on gender and test. Heightened left-hemisphere advantage for dichotic words in responders was present among women but not men, whereas reduced right-hemisphere advantage for dichotic tones in responders was present among men but not women. Pretreatment PA was also predictive of change in depression severity following treatment. Responder vs nonresponder differences for verbal dichotic listening in women and nonverbal dichotic listening in men are discussed in terms of differences in cognitive function, hemispheric organization, and neurotransmitter function.
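
    Perceptual asymmetry (PA) scores of the kind compared here are conventionally summarized as a laterality index computed from right- and left-ear accuracy. The abstract does not give the exact formula, so the sketch below assumes the common (R - L)/(R + L) form, expressed in percent.

        def perceptual_asymmetry(right_correct: int, left_correct: int) -> float:
            """Laterality index in percent: positive values indicate a right-ear
            (left-hemisphere) advantage, negative a left-ear (right-hemisphere)
            advantage. The formula is an assumed convention, not the study's."""
            total = right_correct + left_correct
            if total == 0:
                return 0.0
            return 100.0 * (right_correct - left_correct) / total

        # Example: a listener reporting 42 right-ear and 30 left-ear items correctly.
        print(perceptual_asymmetry(42, 30))  # ~16.7, a right-ear advantage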

  20. What Does Music Sound Like for a Cochlear Implant User?

    PubMed

    Jiam, Nicole T; Caldwell, Meredith T; Limb, Charles J

    2017-09-01

    Cochlear implant research and product development over the past 40 years have been heavily focused on speech comprehension, with little emphasis on music listening and enjoyment. The relatively poor understanding of how music sounds to a cochlear implant user stands in stark contrast to the importance the public places on music and its contribution to quality of life. The purpose of this article is to describe what music sounds like to cochlear implant users, using a combination of existing research studies and listener descriptions. We examined the published literature on music perception in cochlear implant users, particularly postlingual cochlear implant users, with an emphasis on the primary elements of music and recorded music. Additionally, we administered an informal survey to cochlear implant users to gather first-hand descriptions of music listening experience and satisfaction from the cochlear implant population. Limitations in cochlear implant technology lead to a music listening experience that is significantly distorted compared with that of normal hearing listeners. On the basis of many studies and sources, we describe how music is frequently perceived as out-of-tune, dissonant, indistinct, emotionless, and weak in bass frequencies, especially for postlingual cochlear implant users, which may in part explain why music enjoyment and participation levels are lower after implantation. Additionally, cochlear implant users report difficulty in specific musical contexts based on factors including but not limited to genre, presence of lyrics, timbres (woodwinds, brass, instrument families), and complexity of the perceived music. Future research and cochlear implant development should target these areas as parameters for improvement in cochlear implant-mediated music perception.

  1. Speech recognition for bilaterally asymmetric and symmetric hearing aid microphone modes in simulated classroom environments.

    PubMed

    Ricketts, Todd A; Picou, Erin M

    2013-09-01

    This study aimed to evaluate the potential utility of asymmetrical and symmetrical directional hearing aid fittings for school-age children in simulated classroom environments, and also to evaluate the speech recognition performance of children with normal hearing in the same listening environments. Two groups of school-age children 11 to 17 years of age participated in this study. Twenty participants had normal hearing, and 29 participants had sensorineural hearing loss. Participants with hearing loss were fitted with behind-the-ear hearing aids with clinically appropriate venting and were tested in 3 hearing aid configurations: bilateral omnidirectional, bilateral directional, and asymmetrical directional microphones. Speech recognition testing was completed in each microphone configuration in 3 environments: Talker-Front, Talker-Back, and Question-Answer situations. During testing, the location of the speech signal changed, but participants were always seated in a noisy, moderately reverberant classroom-like room. For all conditions, results revealed the expected effects of directional microphones on speech recognition performance. When the signal of interest was in front of the listener, the bilateral directional configuration was best, and when the signal of interest was behind the listener, the bilateral omnidirectional configuration was best. Performance with asymmetric directional microphones fell between the 2 symmetrical conditions. The magnitudes of directional benefits and decrements were not significantly correlated. Children with hearing loss performed similarly to their peers with normal hearing when fitted with directional microphones and the speech came from the front. In contrast, children with normal hearing still outperformed children with hearing loss when the speech originated from behind, even when the children were fitted with the optimal hearing aid microphone mode for the situation. Bilateral directional microphones can be effective in improving speech recognition performance for children in the classroom, as long as the child is facing the talker of interest. Bilateral directional microphones, however, can impair performance if the signal originates from behind the listener, and these data suggest that the magnitude of the decrement is not predictable from an individual's benefit. The results re-emphasize the importance of appropriate switching between microphone modes so children can take full advantage of directional benefits without being hurt by directional decrements. An asymmetric fitting limits decrements but does not lead to maximum speech recognition scores when compared with the optimal symmetrical fitting. Therefore, the asymmetric mode may not be the best default fitting for children in a classroom environment. While directional microphones improve performance for children with hearing loss, their performance in most conditions remains impaired relative to their normal-hearing peers, particularly when signals of interest originate from behind or from an unpredictable location.

  2. Auditory Training Effects on the Listening Skills of Children With Auditory Processing Disorder.

    PubMed

    Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva

    2016-01-01

    Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month, 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline, and on three outcome measures at baseline and immediately postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects. The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the listening in specialized noise test, which assesses sentence perception in various configurations of masking speech and in which the target speakers and test materials were unrelated to the training materials; (2) the Children's Auditory Performance Scale, which assesses listening skills and was completed by the children's teachers; and (3) the Clinical Evaluation of Language Fundamentals-4 pragmatic profile, which assesses pragmatic language use and was completed by parents. All outcome measures improved significantly at immediate postintervention in the intervention group only, with effect sizes ranging from 0.76 to 1.7. Improvements in speech-in-noise performance correlated with improved scores on the Children's Auditory Performance Scale questionnaire in the trained group only. Baseline language and cognitive assessments did not predict better training outcomes. Improvements in speech-in-noise performance were sustained 3 months postintervention. Broad speech-based auditory training led to improved auditory processing skills, as reflected in speech-in-noise test performance, and to better functional listening in real life. The observed correlation between improved functional listening and improved speech-in-noise perception in the trained group suggests that the improved listening was a direct generalization of the auditory training.

  3. A multidisciplinary approach of the problem of noise nuisance in urban environment

    NASA Astrophysics Data System (ADS)

    Rabah, Derbal Cobis; Hamza, Zeghlache

    2002-05-01

    The problem of noise and sonic pollution, particularly in urban sites, is most often studied by separate disciplines such as physics, acoustics, psychoacoustics, and medicine, each approaching the subject independently of the others. Some studies are carried out in laboratories using noise samples cut off from their realistic context, and urban noise is likewise often studied in abstraction from its contextual parameters, idealizing a rather complex sonic environment. According to the present approach, noise interacts with the surrounding space: it takes on the form and quality of the place, defining and requalifying it. Contextual aspects such as social, cultural, and even symbolic dimensions are found to modulate the listening conditions, the perceived quality of the noise, and even the lived, daily practice of urban space. A multiparameter study of noise in its urban context is necessary to frame the problem properly and to arrive at practical and efficient solutions. The small number of studies based on such a multidisciplinary approach supports our effort to pursue this methodological direction.

  4. Listening through Voices: Infant Statistical Word Segmentation across Multiple Speakers

    ERIC Educational Resources Information Center

    Graf Estes, Katharine; Lew-Williams, Casey

    2015-01-01

    To learn from their environments, infants must detect structure behind pervasive variation. This presents substantial and largely untested learning challenges in early language acquisition. The current experiments address whether infants can use statistical learning mechanisms to segment words when the speech signal contains acoustic variation…

  5. Listening Technologies for Individuals and the Classroom

    ERIC Educational Resources Information Center

    Marttila, Joan

    2004-01-01

    Assistive technology has always been an important component of individualized education programs. The individualized education program process can be used to supply hearing assistive technology to students. One goal of audiologists and educators is to improve the acoustic environment of classrooms for all students by constructing school buildings…

  6. Auditory Processing Disorders: Acquisition and Treatment

    ERIC Educational Resources Information Center

    Moore, David R.

    2007-01-01

    Auditory processing disorder (APD) describes a mixed and poorly understood listening problem characterised by poor speech perception, especially in challenging environments. APD may include an inherited component, and this may be major, but studies reviewed here of children with long-term otitis media with effusion (OME) provide strong evidence…

  7. Classroom Acoustics: A Resource for Creating Environments with Desirable Listening Conditions.

    ERIC Educational Resources Information Center

    Seep, Benjamin; Glosemeyer, Robin; Hulce, Emily; Linn, Matt; Aytar, Pamela

    This booklet provides a general overview of classroom acoustic problems and their solutions for both new school construction and renovation. Practical explanations and examples are discussed on topics including reverberation, useful and undesirable reflections, mechanical equipment noise, interior noise sources, and sound reinforcement. Examples…

  8. Investigating Joint Attention Mechanisms through Spoken Human-Robot Interaction

    ERIC Educational Resources Information Center

    Staudte, Maria; Crocker, Matthew W.

    2011-01-01

    Referential gaze during situated language production and comprehension is tightly coupled with the unfolding speech stream (Griffin, 2001; Meyer, Sleiderink, & Levelt, 1998; Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1995). In a shared environment, utterance comprehension may further be facilitated when the listener can exploit the speaker's…

  9. A critical review of hearing-aid single-microphone noise-reduction studies in adults and children.

    PubMed

    Chong, Foong Yen; Jenstad, Lorienne M

    2017-10-26

    Single-microphone noise reduction (SMNR) is implemented in hearing aids to suppress background noise. The purpose of this article was to provide a critical review of peer-reviewed studies of adults and children with sensorineural hearing loss who were fitted with hearing aids incorporating SMNR. Articles published between 2000 and 2016 were searched in the PUBMED and EBSCO databases. Thirty-two articles were included in the final review. Most studies with adult participants showed that SMNR has no effect on speech intelligibility. Positive results were reported for acceptance of background noise, preference, and listening effort. Studies of school-aged children were consistent with the findings of the adult studies. No study of infants or young children under 5 years of age was found. Recent studies on noise-reduction systems not yet available in wearable hearing aids have documented benefits of noise reduction on memory for speech processing in older adults. This evidence supports the use of SMNR for adults and school-aged children when the aim is to improve listening comfort or reduce listening effort. Future research should test SMNR with infants and children younger than 5 years of age. Further development, testing, and clinical trials should be carried out on algorithms not yet available in wearable hearing aids. Testing higher-level cognitive processing of speech and the learning of novel sounds or words could reveal benefits of advanced signal processing features, and these approaches should be expanded to other populations such as children and younger adults. Implications for rehabilitation: The review provides a quick reference for students and clinicians regarding the efficacy and effectiveness of SMNR in wearable hearing aids. This information is useful during counseling sessions for building realistic expectations among hearing aid users. Most studies in the adult population suggest that SMNR may provide some benefit to adult listeners in terms of listening comfort, acceptance of background noise, and release of cognitive load in complex listening conditions, but it does not improve speech intelligibility. Studies that examined SMNR in the paediatric population suggest that SMNR may benefit older school-aged children, aged between 10 and 12 years.

  10. Technology and the Four Skills

    ERIC Educational Resources Information Center

    Blake, Robert

    2016-01-01

    Most L2 instructors implement their curriculum with an eye to improving the four skills: speaking, listening, reading, and writing. Absent in this vision of language are notions of pragmatic, sociolinguistic, and multicultural competencies. Although current linguistic theories posit a more complex, interactive, and integrated model of language,…

  11. The effect of music on the cardiac activity of a fetus in a cardiotocographic examination.

    PubMed

    Gebuza, Grażyna; Zaleska, Marta; Kaźmierczak, Marzena; Mieczkowska, Estera; Gierszewska, Małgorzata

    2018-04-24

    Music therapy as an adjunct to treatment is rarely used in perinatology and obstetrics, despite its proven therapeutic effect. Auditory stimulation through music positively impacts the health of adults and infants, and plays a special role in the development of prematurely born neonates. How music affects fetuses is equally interesting. The aim of this study was to assess fetal parameters through cardiotocographic recording in women in the 3rd trimester of pregnancy while the women listened to Pyotr Tchaikovsky's "Sleeping Beauty" and "Swan Lake." The study was conducted in 2015 at Dr. Jan Biziel 2nd University Hospital in Bydgoszcz, on 48 women in the 3rd trimester of pregnancy. The cardiotocographic parameters of the fetus were examined by means of a Sonicaid Team Standard Oxford apparatus (Huntleigh Healthcare, Cardiff, United Kingdom). Significant changes were observed in the number of uterine contractions, accelerations, episodes of higher variability, and fetal movements after the mothers listened to the music. Listening to classical music may serve as a method of prophylaxis against premature delivery, as indicated by the lower number of uterine contractions, and may stimulate fetal movement in the case of a non-reactive non-stress test (NST). Music therapy, an inexpensive and soothing therapeutic method, should be used more frequently in obstetrics wards, where pathological pregnancies, isolation from the natural environment, and distress resulting from diagnostics and an unfamiliar setting make it particularly indicated.

  12. Horizontal sound localization in cochlear implant users with a contralateral hearing aid.

    PubMed

    Veugen, Lidwien C E; Hendrikse, Maartje M E; van Wanrooij, Marc M; Agterberg, Martijn J H; Chalupper, Josef; Mens, Lucas H M; Snik, Ad F M; John van Opstal, A

    2016-06-01

    Interaural differences in sound arrival time (ITD) and in level (ILD) enable us to localize sounds in the horizontal plane, and can support source segregation and speech understanding in noisy environments. It is uncertain whether these cues are also available to hearing-impaired listeners who are bimodally fitted, i.e. with a cochlear implant (CI) and a contralateral hearing aid (HA). Here, we assessed sound localization behavior of fourteen bimodal listeners, all using the same Phonak HA and an Advanced Bionics CI processor, matched with respect to loudness growth. We aimed to determine the availability and contribution of binaural (ILDs, temporal fine structure and envelope ITDs) and monaural (loudness, spectral) cues to horizontal sound localization in bimodal listeners, by systematically varying the frequency band, level and envelope of the stimuli. The sound bandwidth had a strong effect on the localization bias of bimodal listeners, although localization performance was typically poor for all conditions. Responses could be systematically changed by adjusting the frequency range of the stimulus, or by simply switching the HA and CI on and off. Localization responses were largely biased to one side, typically the CI side for broadband and high-pass filtered sounds, and occasionally to the HA side for low-pass filtered sounds. HA-aided thresholds better than 45 dB HL in the frequency range of the stimulus appeared to be a prerequisite, but not a guarantee, for the ability to indicate sound source direction. We argue that bimodal sound localization is likely based on ILD cues, even at frequencies below 1500 Hz for which the natural ILDs are small. These cues are typically perturbed in bimodal listeners, leading to a biased localization percept of sounds. The high accuracy of some listeners could result from a combination of sufficient spectral overlap and loudness balance in bimodal hearing.
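
    For orientation, the ITD cues discussed above can be approximated with a rigid spherical-head model. The sketch below uses the classic Woodworth approximation, ITD = (a/c)(theta + sin theta), as background material; the head radius and azimuth values are illustrative and not drawn from this study.

        import numpy as np

        def woodworth_itd(azimuth_deg: float, head_radius_m: float = 0.0875,
                          c: float = 343.0) -> float:
            """Approximate ITD (seconds) for a distant source at a given azimuth
            (0 deg = straight ahead, 90 deg = directly to one side)."""
            theta = np.radians(azimuth_deg)
            return (head_radius_m / c) * (theta + np.sin(theta))

        for az in (0, 30, 60, 90):
            print(f"{az:2d} deg -> ITD = {woodworth_itd(az) * 1e6:.0f} us")

    The model yields roughly 650 us at 90 degrees for an average adult head, which conveys the scale of the timing cues that bimodal processing tends to perturb.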

  13. Processing Mechanisms in Hearing-Impaired Listeners: Evidence from Reaction Times and Sentence Interpretation.

    PubMed

    Carroll, Rebecca; Uslar, Verena; Brand, Thomas; Ruigendijk, Esther

    The authors aimed to determine whether hearing impairment affects sentence comprehension beyond phoneme or word recognition (i.e., on the sentence level), and to distinguish grammatically induced processing difficulties in structurally complex sentences from perceptual difficulties associated with listening to degraded speech. Effects of hearing impairment or speech in noise were expected to reflect hearer-specific speech recognition difficulties. Any additional processing time caused by the sustained perceptual challenges across the sentence may either be independent of or interact with top-down processing mechanisms associated with grammatical sentence structure. Forty-nine participants listened to canonical subject-initial or noncanonical object-initial sentences that were presented either in quiet or in noise. Twenty-four participants had mild-to-moderate hearing impairment and received hearing-loss-specific amplification. Twenty-five participants were age-matched peers with normal hearing status. Reaction times were measured on-line at syntactically critical processing points as well as two control points to capture differences in processing mechanisms. An off-line comprehension task served as an additional indicator of sentence (mis)interpretation, and enforced syntactic processing. The authors found general effects of hearing impairment and speech in noise that negatively affected perceptual processing, and an effect of word order, where complex grammar locally caused processing difficulties for the noncanonical sentence structure. Listeners with hearing impairment were hardly affected by noise at the beginning of the sentence, but were affected markedly toward the end of the sentence, indicating a sustained perceptual effect of speech recognition. Comprehension of sentences with noncanonical word order was negatively affected by degraded signals even after sentence presentation. Hearing impairment adds perceptual processing load during sentence processing, but affects grammatical processing beyond the word level to the same degree as in normal hearing, with minor differences in processing mechanisms. The data contribute to our understanding of individual differences in speech perception and language understanding. The authors interpret their results within the ease of language understanding model.

  14. Investigation of musicality in birdsong

    PubMed Central

    Rothenberg, David; Roeske, Tina C.; Voss, Henning U.; Naguib, Marc; Tchernichovski, Ofer

    2013-01-01

    Songbirds spend much of their time learning, producing, and listening to complex vocal sequences we call songs. Songs are learned via cultural transmission, and singing, usually by males, has a strong impact on the behavioral state of the listeners, often promoting affiliation, pair bonding, or aggression. What is it in the acoustic structure of birdsong that makes it such a potent stimulus? We suggest that birdsong potency might be driven by principles similar to those that make music so effective in inducing emotional responses in humans: a combination of rhythms and pitches —and the transitions between acoustic states—affecting emotions through creating expectations, anticipations, tension, tension release, or surprise. Here we propose a framework for investigating how birdsong, like human music, employs the above “musical” features to affect the emotions of avian listeners. First we analyze songs of thrush nightingales (Luscinia luscinia) by examining their trajectories in terms of transitions in rhythm and pitch. These transitions show gradual escalations and graceful modifications, which are comparable to some aspects of human musicality. We then explore the feasibility of stripping such putative musical features from the songs and testing how this might affect patterns of auditory responses, focusing on fMRI data in songbirds that demonstrate the feasibility of such approaches. Finally, we explore ideas for investigating whether musical features of birdsong activate avian brains and affect avian behavior in manners comparable to music’s effects on humans. In conclusion, we suggest that birdsong research would benefit from current advances in music theory by attempting to identify structures that are designed to elicit listeners’ emotions and then testing for such effects experimentally. Birdsong research that takes into account the striking complexity of song structure in light of its more immediate function – to affect behavioral state in listeners – could provide a useful animal model for studying basic principles of music neuroscience in a system that is very accessible for investigation, and where developmental auditory and social experience can be tightly controlled. PMID:24036130

  15. How hearing impairment affects sentence comprehension: using eye fixations to investigate the duration of speech processing.

    PubMed

    Wendt, Dorothea; Kollmeier, Birger; Brand, Thomas

    2015-04-24

    The main objective of this study was to investigate the extent to which hearing impairment influences the duration of sentence processing. An eye-tracking paradigm is introduced that provides an online measure of how hearing impairment prolongs processing of linguistically complex sentences; this measure uses eye fixations recorded while the participant listens to a sentence. Eye fixations toward a target picture (which matches the aurally presented sentence) were measured in the presence of a competitor picture. Based on the recorded eye fixations, the single target detection amplitude, which reflects the tendency of the participant to fixate the target picture, was used as a metric to estimate the duration of sentence processing. The single target detection amplitude was calculated for sentence structures with different levels of linguistic complexity and for different listening conditions: in quiet and in two different noise conditions. Participants with hearing impairment spent more time processing sentences, even at high levels of speech intelligibility. In addition, the relationship between the proposed online measure and listener-specific factors, such as hearing aid use and cognitive abilities, was investigated. Longer processing durations were measured for participants with hearing impairment who were not accustomed to using a hearing aid. Moreover, significant correlations were found between sentence processing duration and individual cognitive abilities (such as working memory capacity or susceptibility to interference). These findings are discussed with respect to audiological applications.

  16. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior

    PubMed Central

    2018-01-01

    Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners’ abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication. PMID:28938250

  17. Creating a Garden for the Senses

    ERIC Educational Resources Information Center

    Potter, Cindy

    2010-01-01

    Almost everyone enjoys a walk through a garden, bending to sniff a flower, enjoying a fresh air breeze, listening to water bubbling from a fountain, and watching sunlight dapple through trees and plants. At Allegheny Valley School (AVS), the emphasis on multisensory environments (MSE) for individuals with intellectual and developmental…

  18. Helping Families Connect Early Literacy with Social-Emotional Development

    ERIC Educational Resources Information Center

    Santos, Rosa Milagros; Fettig, Angel; Shaffer, LaShorage

    2012-01-01

    Early childhood educators know that home is a child's first learning environment. From birth, children are comforted by hearing and listening to their caregivers' voices. The language used by families supports young children's development of oral language skills. Exposure to print materials in the home also supports literacy development. Literacy…

  19. Making Connections for Success: A Networking Exercise

    ERIC Educational Resources Information Center

    Friar, John H.; Eddleston, Kimberly A.

    2007-01-01

    Networking is important, and it is a skill. The authors have developed an exercise that provides students with a realistic networking experience within the safe environment of the classroom. The exercise provides a lead-in to the discussion of networking techniques, active listening, the cultivation of secondary networks, appropriate ways to…

  20. A Stress Management Classroom Tool for Teachers of Children with BD.

    ERIC Educational Resources Information Center

    Jackson, James T.; Owens, James L.

    1999-01-01

    This article discusses how stress may affect the lives of children with behavior disorders, provides educators with a model for introducing stress management techniques, and closes with strategies for managing stress in the classroom, including listening to relaxing music, manipulating the environment, and providing a morning physical education…

  1. The Downside of Greater Lexical Influences: Selectively Poorer Speech Perception in Noise

    ERIC Educational Resources Information Center

    Lam, Boji P. W.; Xie, Zilong; Tessmer, Rachel; Chandrasekaran, Bharath

    2017-01-01

    Purpose: Although lexical information influences phoneme perception, the extent to which reliance on lexical information enhances speech processing in challenging listening environments is unclear. We examined the extent to which individual differences in lexical influences on phonemic processing impact speech processing in maskers containing…

  2. Recognizing Speech under a Processing Load: Dissociating Energetic from Informational Factors

    ERIC Educational Resources Information Center

    Mattys, Sven L.; Brooks, Joanna; Cooke, Martin

    2009-01-01

    Effects of perceptual and cognitive loads on spoken-word recognition have so far largely escaped investigation. This study lays the foundations of a psycholinguistic approach to speech recognition in adverse conditions that draws upon the distinction between energetic masking, i.e., listening environments leading to signal degradation, and…

  3. The Impact of Brain-Based Strategies: One School's Perspective

    ERIC Educational Resources Information Center

    Hodges, Jane Allen

    2013-01-01

    Research has shown student inattention, off-task behaviors, and lack of listening skills in the classroom can impact progress in reading, math, and language development. Lack of verbal interaction in home environments, variations in learning and teaching modalities, and larger class sizes contribute to the difficulties students have in developing…

  4. Acoustics in Physical Education Settings: The Learning Roadblock

    ERIC Educational Resources Information Center

    Ryan, Stu; Mendel, Lisa Lucks

    2010-01-01

    Background: The audibility of teachers and peers is an essential factor in determining the academic performance of school children. However, acoustic conditions in most classrooms are less than optimal and have been viewed as "hostile listening environments" that undermine the learning of children in school. While research has shown that…

  5. Infants and Toddlers Meet the Natural World

    ERIC Educational Resources Information Center

    McHenry, Jolie D.; Buerk, Kathy J.

    2008-01-01

    Children observe, listen, feel, taste, and take apart while exploring everything in their environment. Teachers can cultivate nature investigations with very young children by offering infants natural objects they can explore and investigate. When adults introduce nature in the earliest stages of development, children will be open to new ideas and…

  6. Listen to the Noise: Noise Is Beneficial for Cognitive Performance in ADHD

    ERIC Educational Resources Information Center

    Soderlund, Goran; Sikstrom, Sverker; Smart, Andrew

    2007-01-01

    Background: Noise is typically conceived of as being detrimental to cognitive performance. However, given the mechanism of stochastic resonance, a certain amount of noise can benefit performance. We investigate cognitive performance in noisy environments in relation to a neurocomputational model of attention deficit hyperactivity disorder (ADHD)…

  7. Games and Toys for Blind Children in Preschool Age.

    ERIC Educational Resources Information Center

    Pielasch, Helmut, Ed.; And Others

    The booklet, a contribution to the International Year of the Child, is intended to help parents enhance the development and education of their blind preschoolers. Parent-child interaction games to promote manual dexterity and sense of touch, listening skills and social communication, mobility, comprehension of the physical environment, artistic…

  8. Parents in Reading: Parents' Booklet.

    ERIC Educational Resources Information Center

    Truby, Roy

    Intended for parents, this booklet offers advice and suggestions for developing a child's self-expression and providing a supportive environment for reading experiences at home. Various sections of the book discuss the following: (1) giving love and warmth to your child, (2) reading with your child, (3) listening to your child, (4) talking with…

  9. Feature Assignment in Perception of Auditory Figure

    ERIC Educational Resources Information Center

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…

  10. Short-Term Forgetting without Interference

    ERIC Educational Resources Information Center

    McKeown, Denis; Mercer, Tom

    2012-01-01

    In the 1st reported experiment, we demonstrate that auditory memory is robust over extended retention intervals (RIs) when listeners compare the timbre of complex tones, even when active or verbal rehearsal is difficult or impossible. Thus, our tones have an abstract timbre that resists verbal labeling, they differ across trials so that no…

  11. A Rationale for Criterion-Referenced Proficiency Testing

    ERIC Educational Resources Information Center

    Clifford, Ray

    2016-01-01

    This article summarizes some of the technical issues that add to the complexity of language testing. It focuses in particular on the criterion-referenced nature of the ACTFL Proficiency Guidelines-Speaking; and it proposes a criterion-referenced interpretation of the ACTFL guidelines for reading and listening. It then demonstrates how using…

  12. Spoken Cochabamba Quechua, Units 13-24.

    ERIC Educational Resources Information Center

    Lastra, Yolanda; Sola, Donald F.

    Units 13-24 of the Spoken Cochabamba Quechua course follow the general format of the first volume (Units 1-12). This second volume is intended for use in an intermediate or advanced course and includes more complex dialogs, conversations, "listening-ins," and dictations, as well as grammar and exercise sections covering additional…

  13. The Music within

    ERIC Educational Resources Information Center

    Rajan, Rekha S.

    2010-01-01

    Providing opportunity for musical exploration is essential to any early childhood program. Through music making, children are actively engaged with their senses: they listen to the complex sounds around them, move their bodies to the rhythms, and touch and feel the textures and shapes of the instruments. The inimitable strength of the Montessori…

  14. Role of contextual cues on the perception of spectrally reduced interrupted speech.

    PubMed

    Patro, Chhayakanta; Mendel, Lisa Lucks

    2016-08-01

    Understanding speech within an auditory scene is constantly challenged by interfering noise in suboptimal listening environments when noise hinders the continuity of the speech stream. In such instances, a typical auditory-cognitive system perceptually integrates available speech information and "fills in" missing information in the light of semantic context. However, individuals with cochlear implants (CIs) find it difficult and effortful to understand interrupted speech compared to their normal hearing counterparts. This inefficiency in perceptual integration of speech could be attributed to further degradations in the spectral-temporal domain imposed by CIs making it difficult to utilize the contextual evidence effectively. To address these issues, 20 normal hearing adults listened to speech that was spectrally reduced and spectrally reduced interrupted in a manner similar to CI processing. The Revised Speech Perception in Noise test, which includes contextually rich and contextually poor sentences, was used to evaluate the influence of semantic context on speech perception. Results indicated that listeners benefited more from semantic context when they listened to spectrally reduced speech alone. For the spectrally reduced interrupted speech, contextual information was not as helpful under significant spectral reductions, but became beneficial as the spectral resolution improved. These results suggest top-down processing facilitates speech perception up to a point, and it fails to facilitate speech understanding when the speech signals are significantly degraded.
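
    Spectral reduction "in a manner similar to CI processing," as described above, is commonly implemented as a noise-excited channel vocoder: split the signal into a few analysis bands, extract each band's envelope, and use it to modulate band-limited noise. The sketch below illustrates that general technique; the channel count, band edges, filter order, and envelope method are illustrative choices, not the study's exact parameters.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        def noise_vocode(x: np.ndarray, fs: int, n_channels: int = 4,
                         f_lo: float = 200.0, f_hi: float = 4000.0) -> np.ndarray:
            """Noise-excited channel vocoder with log-spaced analysis bands.
            f_hi must lie below the Nyquist frequency fs / 2."""
            edges = np.geomspace(f_lo, f_hi, n_channels + 1)
            noise = np.random.default_rng(0).standard_normal(len(x))
            out = np.zeros(len(x), dtype=float)
            for lo, hi in zip(edges[:-1], edges[1:]):
                sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
                band = sosfiltfilt(sos, x)
                env = np.abs(hilbert(band))        # envelope via Hilbert transform
                carrier = sosfiltfilt(sos, noise)  # noise limited to the same band
                out += env * carrier
            return out / (np.max(np.abs(out)) + 1e-12)

        # Example: vocode one second of a 300-Hz tone sampled at 16 kHz.
        fs = 16000
        t = np.arange(fs) / fs
        y = noise_vocode(np.sin(2 * np.pi * 300 * t), fs)

    Periodic interruption of the kind studied here can then be imposed by gating the vocoded output on and off with a square wave.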

  15. Impacts of Authentic Listening Tasks upon Listening Anxiety and Listening Comprehension

    ERIC Educational Resources Information Center

    Melanlioglu, Deniz

    2013-01-01

    Although listening is the skill students use most in the classroom, the desired success is often not attained in teaching listening, since this skill is shaped by multiple variables. In this research we focused on listening anxiety, listening comprehension, and the impact of authentic tasks on both listening anxiety and listening comprehension.…

  16. Normative data on audiovisual speech integration using sentence recognition and capacity measures.

    PubMed

    Altieri, Nicholas; Hudock, Daniel

    2016-01-01

    The ability to use visual speech cues and integrate them with auditory information is important, especially in noisy environments and for hearing-impaired (HI) listeners. Providing data on measures of integration skills that encompass both accuracy and processing speed will benefit researchers and clinicians. The study consisted of two experiments: first, accuracy scores were obtained using City University of New York (CUNY) sentences; second, capacity measures that assessed reaction-time distributions were obtained from a monosyllabic word recognition task. We report data on two measures of integration obtained from a sample of 86 young and middle-aged adult listeners. To summarize our results, capacity showed a positive correlation with accuracy measures of audiovisual benefit obtained from sentence recognition. More relevant, factor analysis indicated that a single-factor model captured audiovisual speech integration better than models containing more factors. Capacity exhibited strong loadings on the factor, while the accuracy-based measures from sentence recognition exhibited weaker loadings. Results suggest that a listener's integration skills may be assessed optimally using a measure that incorporates both processing speed and accuracy.

  17. Music in disorders of consciousness.

    PubMed

    Rollnik, Jens D; Altenmüller, Eckart

    2014-01-01

    This review presents an overview of the use of music therapy in the early neurological rehabilitation of patients with coma and other disorders of consciousness (DOC), such as unresponsive wakefulness syndrome (UWS) or minimally conscious state (MCS). There is evidence that patients suffering from UWS show emotional processing of auditory information, such as listening to speech. Thus, it seems reasonable to believe that music listening, as part of an enriched environment setting, may be of therapeutic value in these patients. There is, however, a considerable lack of evidence. The authors strongly encourage further studies to evaluate the efficacy of music listening in patients with DOC in early neurological rehabilitation. These studies should consider a precise clinical definition and homogeneity of the patient cohort with respect to the quality (coma vs. UWS vs. MCS), duration (weeks to months rather than days), and cause (traumatic vs. non-traumatic) of DOC; a standardized intervention protocol; valid clinical outcome parameters over a longer observation period (weeks to months); monitoring of neurophysiological and vegetative parameters; and, if available, neuroimaging to confirm diagnosis and to demonstrate responses and functional changes in the patients' brains.

  18. Masking Period Patterns & Forward Masking for Speech-Shaped Noise: Age-related effects

    PubMed Central

    Grose, John H.; Menezes, Denise C.; Porter, Heather L.; Griz, Silvana

    2015-01-01

    Objective: The purpose of this study was to assess age-related changes in temporal resolution in listeners with relatively normal audiograms. The hypothesis was that increased susceptibility to non-simultaneous masking contributes to the hearing difficulties experienced by older listeners in complex fluctuating backgrounds. Design: Participants included younger (n = 11), middle-aged (n = 12), and older (n = 11) listeners with relatively normal audiograms. The first phase of the study measured masking period patterns for speech-shaped noise maskers and signals. From these data, temporal window shapes were derived. The second phase measured forward-masking functions, and assessed how well the temporal window fits accounted for these data. Results: The masking period patterns demonstrated increased susceptibility to backward masking in the older listeners, compatible with a more symmetric temporal window in this group. The forward-masking functions exhibited an age-related decline in recovery to baseline thresholds, and there was also an increase in the variability of the temporal window fits to these data. Conclusions: This study demonstrated an age-related increase in susceptibility to non-simultaneous masking, supporting the hypothesis that exacerbated non-simultaneous masking contributes to age-related difficulties understanding speech in fluctuating noise. Further support for this hypothesis comes from limited speech-in-noise data suggesting an association between susceptibility to forward masking and speech understanding in modulated noise. PMID:26230495
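
    The temporal-window model referred to here weights masker energy by an asymmetric smoothing window centered on the signal; a more symmetric window predicts the increased backward masking seen in the older group. A minimal sketch with illustrative time constants, not the fitted values from the study:

        import numpy as np

        def temporal_window(t_ms, tau_forward=8.0, tau_backward=3.0):
            # Two-sided exponential window w(t) centered on the signal (t = 0).
            # t < 0: masker precedes the signal (forward masking);
            # t > 0: masker follows it (backward masking). Letting tau_backward
            # approach tau_forward makes the window more symmetric, i.e., predicts
            # more susceptibility to backward masking.
            t_ms = np.asarray(t_ms, float)
            return np.where(t_ms < 0, np.exp(t_ms / tau_forward),
                            np.exp(-t_ms / tau_backward))

        def effective_masker_db(masker_intensity, t_ms, **window):
            # Window-weighted masker intensity at the signal instant, in dB.
            w = temporal_window(t_ms, **window)
            return 10 * np.log10(np.sum(w * masker_intensity) / np.sum(w))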

  19. Data-driven analysis of functional brain interactions during free listening to music and speech.

    PubMed

    Fang, Jun; Hu, Xintao; Han, Junwei; Jiang, Xi; Zhu, Dajiang; Guo, Lei; Liu, Tianming

    2015-06-01

    Natural stimulus functional magnetic resonance imaging (N-fMRI), such as fMRI acquired while participants watch video streams or listen to audio streams, has been increasingly used to investigate functional mechanisms of the human brain in recent years. One of the fundamental challenges in functional brain mapping based on N-fMRI is to model the brain's functional responses to continuous, naturalistic and dynamic natural stimuli. To address this challenge, in this paper we present a data-driven approach to exploring functional interactions in the human brain during free listening to music and speech streams. Specifically, we model the brain responses using N-fMRI by measuring the functional interactions on large-scale brain networks with intrinsically established structural correspondence, and perform music and speech classification tasks to guide the systematic identification of consistent and discriminative functional interactions when multiple subjects were listening to music and speech in multiple categories. The underlying premise is that the functional interactions derived from N-fMRI data of multiple subjects should exhibit both consistency and discriminability. Our experimental results show that a variety of brain systems including attention, memory, auditory/language, emotion, and action networks are among the most relevant brain systems involved in differentiating classical music, pop music, and speech. Our study provides an alternative approach to investigating the mechanisms by which the human brain comprehends complex natural music and speech.
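
    A generic sketch of the pipeline the abstract outlines, assuming functional interactions are summarized as pairwise correlations between network-node time series and screened with a linear classifier; the feature layout and classifier choice are illustrative, not the authors' exact method.

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import cross_val_score

        def interaction_features(node_ts):
            # node_ts: (n_nodes, n_timepoints) fMRI signals on structurally
            # corresponding network nodes. Features = upper triangle of the
            # node-by-node correlation matrix (one value per interaction).
            c = np.corrcoef(node_ts)
            return c[np.triu_indices_from(c, k=1)]

        # X = np.stack([interaction_features(ts) for ts in runs])  # one row per run
        # y = labels  # e.g., 0 = classical music, 1 = pop music, 2 = speech
        # print(cross_val_score(LinearSVC(max_iter=10000), X, y, cv=5).mean())
        # Interactions with consistently large classifier weights across folds
        # are candidates for the "consistent and discriminative" set.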

  20. Are We Listening to Music or Noise? Use of the Lyapunov Exponent for Comprehensive Assessment of Heart Rate Complexity During Hemorrhage in Sedated Conscious Miniature Swine

    DTIC Science & Technology

    2009-09-01

    physiologic mechanisms underlying experimental observations: a practical example. Sven Zenker, Andreas Hoeft, Department of Anaesthesiology and...to describe experimental data (goodness of fit) and its complexity (number of parameters). Their use in macroscopic physiologic investigations...BSP, and BRS could either be identical or vary across interventions, resulting in models with 4 to 12 parameters. After digitizing the experimental data
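
    The record's title refers to the largest Lyapunov exponent as an index of heart-rate complexity. A compact sketch of a Rosenstein-style estimator applied to an RR-interval series, assuming that family of methods; the embedding dimension, lag, Theiler window, and fit range are illustrative.

        import numpy as np

        def largest_lyapunov(rr, dim=5, lag=1, theiler=10, fit_range=(1, 30)):
            # Rosenstein-style largest Lyapunov exponent of an RR-interval series
            # (units: per beat). Positive values indicate diverging trajectories,
            # i.e., complex rather than strictly periodic heart-rate dynamics.
            # Suitable for short series: the distance matrix is O(n^2).
            rr = np.asarray(rr, float)
            n = len(rr) - (dim - 1) * lag
            emb = np.stack([rr[i:i + n] for i in range(0, dim * lag, lag)], axis=1)
            d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
            for i in range(n):                       # exclude temporal neighbors
                d[i, max(0, i - theiler):i + theiler + 1] = np.inf
            nn = np.argmin(d, axis=1)                # nearest neighbor of each point
            ks = np.arange(*fit_range)
            div = []
            for k in ks:                             # mean log divergence at lag k
                idx = np.arange(n)
                keep = (idx + k < n) & (nn + k < n)
                dist = np.linalg.norm(emb[idx[keep] + k] - emb[nn[keep] + k], axis=1)
                div.append(np.mean(np.log(dist[dist > 0])))
            return np.polyfit(ks, div, 1)[0]         # slope = Lyapunov exponent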

  1. Listen and Learn: Improving Listening across the Curriculum.

    ERIC Educational Resources Information Center

    Steil, Lyman K.

    1984-01-01

    Describes importance of listening and interest in developing listening skills in today's educational curriculum, and discusses past attempts to develop listening skills through legislation mandating listening education, development of listening courses, formation of International Listening Association, and Sperry Corporation's listening…

  2. Individual Differences in Laughter Perception Reveal Roles for Mentalizing and Sensorimotor Systems in the Evaluation of Emotional Authenticity

    PubMed Central

    McGettigan, C.; Walsh, E.; Jessop, R.; Agnew, Z. K.; Sauter, D. A.; Warren, J. E.; Scott, S. K.

    2015-01-01

    Humans express laughter differently depending on the context: polite titters of agreement are very different from explosions of mirth. Using functional MRI, we explored the neural responses during passive listening to authentic amusement laughter and controlled, voluntary laughter. We found greater activity in anterior medial prefrontal cortex (amPFC) to the deliberate, Emitted Laughs, suggesting an obligatory attempt to determine others' mental states when laughter is perceived as less genuine. In contrast, passive perception of authentic Evoked Laughs was associated with greater activity in bilateral superior temporal gyri. An individual differences analysis found that greater accuracy on a post hoc test of authenticity judgments of laughter predicted the magnitude of passive listening responses to laughter in amPFC, as well as several regions in sensorimotor cortex (in line with simulation accounts of emotion perception). These medial prefrontal and sensorimotor sites showed enhanced positive connectivity with cortical and subcortical regions during listening to involuntary laughter, indicating a complex set of interacting systems supporting the automatic emotional evaluation of heard vocalizations. PMID:23968840

  3. Individual differences in laughter perception reveal roles for mentalizing and sensorimotor systems in the evaluation of emotional authenticity.

    PubMed

    McGettigan, C; Walsh, E; Jessop, R; Agnew, Z K; Sauter, D A; Warren, J E; Scott, S K

    2015-01-01

    Humans express laughter differently depending on the context: polite titters of agreement are very different from explosions of mirth. Using functional MRI, we explored the neural responses during passive listening to authentic amusement laughter and controlled, voluntary laughter. We found greater activity in anterior medial prefrontal cortex (amPFC) to the deliberate, Emitted Laughs, suggesting an obligatory attempt to determine others' mental states when laughter is perceived as less genuine. In contrast, passive perception of authentic Evoked Laughs was associated with greater activity in bilateral superior temporal gyri. An individual differences analysis found that greater accuracy on a post hoc test of authenticity judgments of laughter predicted the magnitude of passive listening responses to laughter in amPFC, as well as several regions in sensorimotor cortex (in line with simulation accounts of emotion perception). These medial prefrontal and sensorimotor sites showed enhanced positive connectivity with cortical and subcortical regions during listening to involuntary laughter, indicating a complex set of interacting systems supporting the automatic emotional evaluation of heard vocalizations. © The Author 2013. Published by Oxford University Press.

  4. Emotions over time: synchronicity and development of subjective, physiological, and facial affective reactions to music.

    PubMed

    Grewe, Oliver; Nagel, Frederik; Kopiez, Reinhard; Altenmüller, Eckart

    2007-11-01

    Most people are able to identify basic emotions expressed in music and experience affective reactions to music. But does music generally induce emotion? Does it elicit subjective feelings, physiological arousal, and motor reactions reliably in different individuals? In this interdisciplinary study, measurement of skin conductance, facial muscle activity, and self-monitoring were synchronized with musical stimuli. A group of 38 participants listened to classical, rock, and pop music and reported their feelings in a two-dimensional emotion space during listening. The first entrance of a solo voice or choir and the beginning of new sections were found to elicit interindividual changes in subjective feelings and physiological arousal. Quincy Jones' "Bossa Nova" motivated movement and laughing in more than half of the participants. Bodily reactions such as "goose bumps" and "shivers" could be stimulated by the "Tuba Mirum" from Mozart's Requiem in 7 of 38 participants. In addition, the authors repeated the experiment seven times with one participant to examine intraindividual stability of effects. This exploratory combination of approaches sheds new light on the astonishing complexity of affective music listening.

  5. Large-Scale Analysis of Auditory Segregation Behavior Crowdsourced via a Smartphone App.

    PubMed

    Teki, Sundeep; Kumar, Sukhbinder; Griffiths, Timothy D

    2016-01-01

    The human auditory system is adept at detecting sound sources of interest from a complex mixture of several other simultaneous sounds. The ability to selectively attend to the speech of one speaker whilst ignoring other speakers and background noise is of vital biological significance; the capacity to make sense of complex 'auditory scenes' is significantly impaired in aging populations as well as in those with hearing loss. We investigated this problem by designing a synthetic signal, termed the 'stochastic figure-ground' stimulus, that captures essential aspects of complex sounds in the natural environment. Previously, we showed that under controlled laboratory conditions, young listeners sampled from the university subject pool (n = 10) performed very well in detecting targets embedded in the stochastic figure-ground signal. Here, we presented a modified version of this cocktail party paradigm as a 'game' featured in a smartphone app (The Great Brain Experiment) and obtained data from a large population with diverse demographic profiles (n = 5148). Despite differences in paradigms and experimental settings, the observed target-detection performance by users of the app was robust and consistent with our previous results from the psychophysical study. Our results highlight the potential use of smartphone apps in capturing robust large-scale auditory behavioral data from normal healthy volunteers, which can also be extended to study auditory deficits in clinical populations with hearing impairments and central auditory disorders.
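
    A sketch in the spirit of the stochastic figure-ground stimulus described: sequences of random-frequency chords in which a small set of components repeats across consecutive chords, forming a coherent 'figure' against the random background. All parameters are illustrative.

        import numpy as np

        def sfg_stimulus(fs=16000, n_chords=40, chord_dur=0.05,
                         n_background=10, n_figure=4, figure_span=(15, 25)):
            # Random chords of pure tones; during the figure_span chords,
            # n_figure frequencies stay fixed across chords, forming a
            # temporally coherent "figure" to be detected in the "ground".
            rng = np.random.default_rng(0)
            freqs_pool = np.logspace(np.log10(200), np.log10(7000), 120)
            figure = rng.choice(freqs_pool, n_figure, replace=False)
            t = np.arange(int(fs * chord_dur)) / fs
            ramp = np.minimum(1, np.minimum(t, t[::-1]) / 0.005)  # 5-ms on/off ramps
            chords = []
            for k in range(n_chords):
                f = rng.choice(freqs_pool, n_background, replace=False)
                if figure_span[0] <= k < figure_span[1]:
                    f = np.concatenate([f, figure])   # coherent components
                chord = sum(np.sin(2 * np.pi * fi * t) for fi in f) * ramp
                chords.append(chord / len(f))
            return np.concatenate(chords)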

  6. Noise and pitch interact during the cortical segregation of concurrent speech.

    PubMed

    Bidelman, Gavin M; Yellamsetty, Anusha

    2017-08-01

    Behavioral studies reveal that listeners exploit intrinsic differences in voice fundamental frequency (F0) to segregate concurrent speech sounds, the so-called "F0-benefit." A more favorable signal-to-noise ratio (SNR) in the environment, an extrinsic acoustic factor, similarly benefits the parsing of simultaneous speech. Here, we examined the neurobiological substrates of these two cues in the perceptual segregation of concurrent speech mixtures. We recorded event-related brain potentials (ERPs) while listeners performed a speeded double-vowel identification task. Listeners heard two concurrent vowels whose F0 differed by zero or four semitones, presented in either clean (no noise) or noise-degraded (+5 dB SNR) conditions. Behaviorally, listeners were more accurate in correctly identifying both vowels for larger F0 separations, but the F0-benefit was more pronounced at more favorable SNRs (i.e., a pitch × SNR interaction). Analysis of the ERPs revealed that only the P2 wave (∼200 ms) showed an F0 × SNR interaction paralleling behavior and was correlated with listeners' perceptual F0-benefit. Neural classifiers applied to the ERPs further suggested that speech sounds are segregated neurally within 200 ms based on SNR, whereas segregation based on pitch occurs later in time (400-700 ms). The earlier timing of extrinsic SNR-based segregation compared with intrinsic F0-based segregation implies that the cortical extraction of speech from noise is more efficient than differentiating speech based on pitch cues alone, which may recruit additional cortical processes. Findings indicate that noise and pitch differences interact relatively early in cerebral cortex and that the brain arrives at the identities of concurrent speech mixtures as early as ∼200 ms. Copyright © 2017 Elsevier B.V. All rights reserved.
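
    The behavioral F0-benefit is simply the accuracy gain from a 4-semitone over a 0-semitone F0 separation, computed separately at each SNR; the pitch × SNR interaction then asks whether that gain differs across SNRs. A minimal sketch with an assumed array layout (not the study's actual analysis):

        import numpy as np
        from scipy import stats

        # acc[subject, snr, f0]: proportion of trials with both vowels correct;
        # snr axis = (clean, +5 dB), f0 axis = (0 st, 4 st). Layout is illustrative.
        def f0_benefit_interaction(acc):
            benefit = acc[:, :, 1] - acc[:, :, 0]   # F0-benefit at each SNR
            # Paired test of whether the benefit differs across SNR conditions,
            # i.e., the pitch x SNR interaction reported behaviorally.
            t, p = stats.ttest_rel(benefit[:, 0], benefit[:, 1])
            return benefit.mean(axis=0), t, p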

  7. Effect of listening to Vedic chants and Indian classical instrumental music on patients undergoing upper gastrointestinal endoscopy: A randomized control trial.

    PubMed

    Padam, Anita; Sharma, Neetu; Sastri, O S K S; Mahajan, Shivani; Sharma, Rajesh; Sharma, Deepak

    2017-01-01

    A high level of preoperative anxiety is common among patients undergoing medical and surgical procedures. The impact of anxiety on the psychological and physiological responses to gastroenterological procedures is worth consideration. This study analyzed the effect of listening to Vedic chants and Indian classical instrumental music on anxiety levels and on blood pressure (BP), heart rate (HR), and oxygen saturation in patients undergoing upper gastrointestinal (GI) endoscopy. A prospective, randomized controlled trial was conducted on 199 patients undergoing upper GI endoscopy. On arrival, their anxiety levels were assessed using state and trait scores along with physiological parameters such as HR, BP, and SpO2. Patients were randomly divided into three groups: Group I, 67 patients who listened to prerecorded Vedic chants for 10 min; Group II, 66 patients who listened to Indian classical instrumental music for 10 min; and Group III, 66 controls who remained seated for the same period in the same environment. Thereafter, their anxiety state scores and physiological parameters were reassessed. A significant reduction in anxiety state scores was observed in Group I (from 40.4 ± 8.9 to 38.5 ± 10.7; P < 0.05) and Group II (from 41.8 ± 9.9 to 38.0 ± 8.6; P < 0.001), while the Group III controls showed no significant change. A significant decrease in systolic BP (P < 0.001), diastolic BP (P < 0.05), and SpO2 (P < 0.05) was also observed in Group II. Listening to Vedic chants and Indian classical instrumental music has beneficial effects in alleviating anxiety induced by apprehension of invasive procedures and can be of therapeutic use.

  8. The impact of sound-field systems on learning and attention in elementary school classrooms.

    PubMed

    Dockrell, Julie E; Shield, Bridget

    2012-08-01

    The authors evaluated the installation and use of sound-field systems to investigate the impact of these systems on teaching and learning in elementary school classrooms. Methods: The evaluation included acoustic surveys of classrooms, questionnaire surveys of students and teachers, and experimental testing of students with and without the use of sound-field systems. In this article, the authors report students' perceptions of classroom environments and objective data evaluating change in performance on cognitive and academic assessments with amplification over a 6-month period. Teachers were positive about the use of sound-field systems in improving children's listening and attention to verbal instructions. Over time, students in amplified classrooms did not differ from those in nonamplified classrooms in their reports of listening conditions, nor did their performance differ in measures of numeracy, reading, or spelling. Use of sound-field systems in the classrooms resulted in significantly larger gains in performance in the number of correct items on the nonverbal measure of speed of processing and the measure of listening comprehension. Analysis controlling for classroom acoustics indicated that students' listening comprehension scores improved significantly in amplified classrooms with poorer acoustics but not in amplified classrooms with better acoustics. Both teacher ratings and student performance on standardized tests indicated that sound-field systems improved performance on children's understanding of spoken language. However, academic attainments showed no benefits from the use of sound-field systems. Classroom acoustics were a significant factor influencing the efficacy of sound-field systems; children in classes with poorer acoustics benefited in listening comprehension, whereas there was no additional benefit for children in classrooms with better acoustics.

  9. Listening to humans walking together activates the social brain circuitry.

    PubMed

    Saarela, Miiamaaria V; Hari, Riitta

    2008-01-01

    Human footsteps carry a vast amount of social information, which is often unconsciously noted. Using functional magnetic resonance imaging, we analyzed brain networks activated by footstep sounds of one or two persons walking. Listening to two persons walking together activated brain areas previously associated with affective states and social interaction, such as the subcallosal gyrus bilaterally, the right temporal pole, and the right amygdala. These areas seem to be involved in the analysis of persons' identity and complex social stimuli on the basis of auditory cues. Single footsteps activated only the biological motion area in the posterior STS region. Thus, hearing two persons walking together involved a more widespread brain network than did hearing footsteps from a single person.

  10. Promoting the perception of two and three concurrent sound objects: An event-related potential study.

    PubMed

    Kocsis, Zsuzsanna; Winkler, István; Bendixen, Alexandra; Alain, Claude

    2016-09-01

    The auditory environment typically comprises several simultaneously active sound sources. In contrast to the perceptual segregation of two concurrent sounds, the perception of three simultaneous sound objects has not yet been studied systematically. We conducted two experiments in which participants were presented with complex sounds containing sound segregation cues (mistuning, onset asynchrony, differences in frequency or amplitude modulation, or in sound location), which were set up to promote the perceptual organization of the tonal elements into one, two, or three concurrent sounds. In Experiment 1, listeners indicated whether they heard one, two, or three concurrent sounds. In Experiment 2, participants watched a silent subtitled movie while EEG was recorded to extract the object-related negativity (ORN) component of the event-related potential. Listeners predominantly reported hearing two sounds when the segregation-promoting manipulations were applied to the same tonal element. When two different tonal elements received manipulations promoting them to be heard as separate auditory objects, participants reported hearing two or three concurrent sound objects with equal probability. The ORN was elicited in most conditions; sounds that included the amplitude- or the frequency-modulation cue generated the smallest ORN amplitudes. Manipulating two different tonal elements yielded numerically, and often significantly, smaller ORNs than the sum of the ORNs elicited when the same cues were applied to a single tonal element. These results suggest that the ORN reflects the presence of multiple concurrent sounds, but not their number. The ORN results are compatible with the horse-race principle of combining different cues of concurrent sound segregation. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Performance variability on perceptual discrimination tasks in profoundly deaf adults with cochlear implants.

    PubMed

    Hay-McCutcheon, Marcia J; Peterson, Nathaniel R; Pisoni, David B; Kirk, Karen Iler; Yang, Xin; Parton, Jason

    The purpose of this study was to evaluate performance on two challenging listening tasks, talker and regional-accent discrimination, and to assess variables that could have affected the outcomes. A prospective study was conducted using 35 adults with one cochlear implant (CI) or with a CI and a contralateral hearing aid (bimodal hearing). Two-alternative forced-choice tasks were used to assess talker and accent discrimination in a group of adults ranging in age from 30 to 81 years. A large amount of performance variability was observed across listeners for both discrimination tasks: three listeners successfully discriminated between talkers for both listening tasks, 14 participants successfully completed one discrimination task, and 18 participants were not able to discriminate between talkers for either listening task. Some adults who used bimodal hearing benefited from the addition of acoustic cues provided through a hearing aid (HA), but for others the HA did not help with discrimination. Acoustic speech-feature analysis of the test signals indicated that both the talker's speaking rate and fundamental frequency (F0) helped with talker discrimination. For accent discrimination, findings suggested that access to more salient spectral cues was important for better performance. The ability to perform challenging discrimination tasks successfully likely involves a number of complex interactions between auditory and non-auditory pre- and post-implant factors. To understand why some adults with CIs perform similarly to adults with normal hearing while others have difficulty discriminating between talkers, further research will be required with larger populations of adults who use unilateral CIs, bilateral CIs, and bimodal hearing. Copyright © 2018 Elsevier Inc. All rights reserved.

  12. Who Is He? Children with ASD and ADHD Take the Listener into Account in Their Production of Ambiguous Pronouns

    PubMed Central

    Kuijper, Sanne J. M.; Hartman, Catharina A.; Hendriks, Petra

    2015-01-01

    During conversation, speakers constantly make choices about how specific they wish to be in their use of referring expressions. In the present study we investigate whether speakers take the listener into account or whether they base their referential choices solely on their own representation of the discourse. We do this by examining the cognitive mechanisms that underlie the choice of referring expression at different discourse moments. Furthermore, we provide insights into how children with Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD) use referring expressions and whether their use differs from that of typically developing (TD) children. Children between 6 and 12 years old (ASD: n=46; ADHD: n=37; TD: n=38) were tested on their production of referring expressions and on Theory of Mind, response inhibition, and working memory. We found support for the view that speakers take the listener into account when choosing a referring expression: Theory of Mind was related to referential choice only at those moments when speakers could not rely solely on their own discourse representation to be understood. Working memory appeared to be involved in keeping track of the different referents in the discourse. Furthermore, we found that TD children as well as children with ASD and children with ADHD took the listener into account in their choice of referring expression. In addition, children with ADHD were less specific than TD children in contexts with more than one referent. The previously observed problems with referential choice in children with ASD may lie in difficulties in keeping track of longer and more complex discourses, rather than in problems with taking the listener into account. PMID:26147200

  13. Reading Comprehension in a Large Cohort of French First Graders from Low Socio-Economic Status Families: A 7-Month Longitudinal Study

    PubMed Central

    Gentaz, Edouard; Sprenger-Charolles, Liliane; Theurel, Anne; Colé, Pascale

    2013-01-01

    Background: The literature suggests that a complex relationship exists between the three main skills involved in reading comprehension (decoding, listening comprehension, and vocabulary) and that this relationship depends on at least three other factors: orthographic transparency, children's grade level, and socioeconomic status (SES). This study investigated the relative contribution of the predictors of reading comprehension in a longitudinal design (from the beginning to the end of the first grade) in 394 French children from low-SES families. Methodology/Principal findings: Reading comprehension was measured at the end of the first grade using two tasks: one with short utterances and one with a medium-length narrative text. Accuracy in listening comprehension and vocabulary, and fluency of decoding skills, were measured at the beginning and end of the first grade. Accuracy in decoding skills was measured only at the beginning. Regression analyses showed that listening comprehension and decoding skills (accuracy and fluency) always significantly predicted reading comprehension. The contribution of decoding was greater when reading comprehension was assessed via the task using short utterances. Between the two assessments, the contributions of vocabulary, and especially of decoding skills, increased, while that of listening comprehension remained unchanged. Conclusion/Significance: These results challenge the 'simple view of reading'. They also have educational implications, since they show that it is possible to assess decoding and reading comprehension very early in an orthography (i.e., French) that is shallower than English, even in low-SES children. These assessments, together with assessments of listening comprehension and vocabulary, may allow early identification of children at risk for reading difficulty and the setting up of early remedial training, which is most effective for them. PMID:24250802

  14. A Binaural Cochlear Implant Sound Coding Strategy Inspired by the Contralateral Medial Olivocochlear Reflex

    PubMed Central

    Eustaquio-Martín, Almudena; Stohl, Joshua S.; Wolford, Robert D.; Schatzer, Reinhold; Wilson, Blake S.

    2016-01-01

    Objectives: In natural hearing, cochlear mechanical compression is dynamically adjusted via the efferent medial olivocochlear reflex (MOCR). These adjustments probably help understanding speech in noisy environments and are not available to the users of current cochlear implants (CIs). The aims of the present study are to: (1) present a binaural CI sound processing strategy inspired by the control of cochlear compression provided by the contralateral MOCR in natural hearing; and (2) assess the benefits of the new strategy for understanding speech presented in competition with steady noise with a speech-like spectrum in various spatial configurations of the speech and noise sources. Design: Pairs of CI sound processors (one per ear) were constructed to mimic or not mimic the effects of the contralateral MOCR on compression. For the nonmimicking condition (standard strategy or STD), the two processors in a pair functioned similarly to standard clinical processors (i.e., with fixed back-end compression and independently of each other). When configured to mimic the effects of the MOCR (MOC strategy), the two processors communicated with each other and the amount of back-end compression in a given frequency channel of each processor in the pair decreased/increased dynamically (so that output levels dropped/increased) with increases/decreases in the output energy from the corresponding frequency channel in the contralateral processor. Speech reception thresholds in speech-shaped noise were measured for 3 bilateral CI users and 2 single-sided deaf unilateral CI users. Thresholds were compared for the STD and MOC strategies in unilateral and bilateral listening conditions and for three spatial configurations of the speech and noise sources in simulated free-field conditions: speech and noise sources colocated in front of the listener, speech on the left ear with noise in front of the listener, and speech on the left ear with noise on the right ear. In both bilateral and unilateral listening, the electrical stimulus delivered to the test ear(s) was always calculated as if the listeners were wearing bilateral processors. Results: In both unilateral and bilateral listening conditions, mean speech reception thresholds were comparable with the two strategies for colocated speech and noise sources, but were at least 2 dB lower (better) with the MOC than with the STD strategy for spatially separated speech and noise sources. In unilateral listening conditions, mean thresholds improved with increasing the spatial separation between the speech and noise sources regardless of the strategy but the improvement was significantly greater with the MOC strategy. In bilateral listening conditions, thresholds improved significantly with increasing the speech-noise spatial separation only with the MOC strategy. Conclusions: The MOC strategy (1) significantly improved the intelligibility of speech presented in competition with a spatially separated noise source, both in unilateral and bilateral listening conditions; (2) produced significant spatial release from masking in bilateral listening conditions, something that did not occur with fixed compression; and (3) enhanced spatial release from masking in unilateral listening conditions. The MOC strategy as implemented here, or a modified version of it, may be usefully applied in CIs and in hearing aids. PMID:26862711
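
    A heavily simplified sketch of the control law described: in each frequency channel, more output energy from the corresponding contralateral channel reduces the amount of back-end compression (pushing the exponent toward linear), which lowers that channel's output level. The constants and the exact mapping below are illustrative assumptions, not the published implementation.

        import numpy as np

        def moc_compress(env_left, env_right, c_min=0.2, c_max=0.9):
            # env_*: (n_channels, n_frames) filterbank envelopes of the two
            # processors, normalized to (0, 1]. Higher contralateral output in a
            # channel raises that channel's compression exponent toward linear
            # (less compression), lowering its output level, a crude mimic of
            # contralateral MOCR inhibition. A standard (STD) processor would
            # use a fixed exponent instead.
            def exponent(contra):
                drive = contra / (contra.max() + 1e-9)     # 0..1 contralateral drive
                return c_min + (c_max - c_min) * drive     # per channel and frame
            return env_left ** exponent(env_right), env_right ** exponent(env_left)

    With spatially separated speech and noise, the ear with the better signal-to-noise ratio receives less contralateral drive in speech-dominated channels and therefore keeps more of its compressive gain, which is one intuition for the spatial release from masking reported above.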

  15. A Binaural Cochlear Implant Sound Coding Strategy Inspired by the Contralateral Medial Olivocochlear Reflex.

    PubMed

    Lopez-Poveda, Enrique A; Eustaquio-Martín, Almudena; Stohl, Joshua S; Wolford, Robert D; Schatzer, Reinhold; Wilson, Blake S

    2016-01-01

    In natural hearing, cochlear mechanical compression is dynamically adjusted via the efferent medial olivocochlear reflex (MOCR). These adjustments probably help understanding speech in noisy environments and are not available to the users of current cochlear implants (CIs). The aims of the present study are to: (1) present a binaural CI sound processing strategy inspired by the control of cochlear compression provided by the contralateral MOCR in natural hearing; and (2) assess the benefits of the new strategy for understanding speech presented in competition with steady noise with a speech-like spectrum in various spatial configurations of the speech and noise sources. Pairs of CI sound processors (one per ear) were constructed to mimic or not mimic the effects of the contralateral MOCR on compression. For the nonmimicking condition (standard strategy or STD), the two processors in a pair functioned similarly to standard clinical processors (i.e., with fixed back-end compression and independently of each other). When configured to mimic the effects of the MOCR (MOC strategy), the two processors communicated with each other and the amount of back-end compression in a given frequency channel of each processor in the pair decreased/increased dynamically (so that output levels dropped/increased) with increases/decreases in the output energy from the corresponding frequency channel in the contralateral processor. Speech reception thresholds in speech-shaped noise were measured for 3 bilateral CI users and 2 single-sided deaf unilateral CI users. Thresholds were compared for the STD and MOC strategies in unilateral and bilateral listening conditions and for three spatial configurations of the speech and noise sources in simulated free-field conditions: speech and noise sources colocated in front of the listener, speech on the left ear with noise in front of the listener, and speech on the left ear with noise on the right ear. In both bilateral and unilateral listening, the electrical stimulus delivered to the test ear(s) was always calculated as if the listeners were wearing bilateral processors. In both unilateral and bilateral listening conditions, mean speech reception thresholds were comparable with the two strategies for colocated speech and noise sources, but were at least 2 dB lower (better) with the MOC than with the STD strategy for spatially separated speech and noise sources. In unilateral listening conditions, mean thresholds improved with increasing the spatial separation between the speech and noise sources regardless of the strategy but the improvement was significantly greater with the MOC strategy. In bilateral listening conditions, thresholds improved significantly with increasing the speech-noise spatial separation only with the MOC strategy. The MOC strategy (1) significantly improved the intelligibility of speech presented in competition with a spatially separated noise source, both in unilateral and bilateral listening conditions; (2) produced significant spatial release from masking in bilateral listening conditions, something that did not occur with fixed compression; and (3) enhanced spatial release from masking in unilateral listening conditions. The MOC strategy as implemented here, or a modified version of it, may be usefully applied in CIs and in hearing aids.

  16. Young adults' use and output level settings of personal music systems.

    PubMed

    Torre, Peter

    2008-10-01

    There are growing concerns over noise exposure via personal music system use by young adults. One purpose of this study was to evaluate the prevalence of personal music system use and the listening patterns associated with these systems in a large sample of young adults. A second purpose of this study was to measure the dB SPL in the ear canal of young adults while they blindly set the volume of a personal music system to four settings. In the first study, the personal music system use survey was completed by 1016 students at various locations on the San Diego State University campus. Questions included sex, age, ethnicity, race, and whether or not they used a personal music system. Students who answered Yes to using a personal music system were instructed to complete the remaining 11 closed-set questions. These questions dealt with type of earphones used with the system, most common listening environment, length of time per day the system was used, and the volume setting. The differences between women and men and across ethnicity and race were evaluated for the questions. In the second study, a probe microphone placed in the ear canal of 32 participants was used to determine the dB SPL of four loudness categories at which the participants blindly set the level of a personal music system: low, medium or comfortable, loud, and very loud. In study 1, over 90% of the participants who completed the survey reported using a personal music system. Over 50% of those who use a personal music system reported listening between 1 and 3 hrs and almost 90% reported listening at either a medium or loud volume. Men were significantly more likely to report listening to their system for a longer duration compared with women and more likely to report listening at a very loud volume. There was a trend for Hispanic or Latino students to report listening for longer durations compared with Not Hispanic or Latino students, but this difference was not statistically significant. Black or African American students were significantly more likely to report listening to their personal music system between 3 and 5 hrs and more than 5 hrs and to report listening at a very loud volume compared with other racial groups. In study 2, the mean dB SPL values for low, medium or comfortable, loud, and very loud were 62.0, 71.6, 87.7, and 97.8 dB SPL, respectively. Men set the level of very loud significantly higher than women. It is clear that a vast majority of young adults who completed the personal music system use survey listen to a system using earphones. Most of the respondents listen between 1 and 3 hrs a day at a medium or loud volume. Based on the probe microphone measurement results, the volume settings for reported durations may not be hazardous for hearing. Long-term use of personal music systems, however, in combination with other noise exposures (i.e., recreational, occupational), and their effect on hearing remains a question for additional research.
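
    To relate the measured levels to exposure guidance, a NIOSH-style 3-dB exchange rate halves the allowed daily exposure for every 3 dB above an 85-dB criterion. A rough sketch only: ear-canal dB SPL is not identical to the free-field dBA on which the criterion is based.

        def allowed_hours(level_db, criterion=85.0, exchange_db=3.0):
            # 8 h permitted at the criterion level, halved per exchange_db above it.
            return 8.0 / 2 ** ((level_db - criterion) / exchange_db)

        # Mean levels reported in study 2:
        for name, level in [("low", 62.0), ("comfortable", 71.6),
                            ("loud", 87.7), ("very loud", 97.8)]:
            print(f"{name}: {allowed_hours(level):.2f} h")
        # "loud" -> ~4.3 h and "very loud" -> ~0.4 h per day, consistent with the
        # conclusion that typical 1-3 h listening at medium/loud settings may not
        # be hazardous, while sustained very-loud listening could be.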

  17. Listening level of music through headphones in train car noise environments.

    PubMed

    Shimokura, Ryota; Soeta, Yoshiharu

    2012-09-01

    Although portable music devices are useful for passing time on trains, exposure to music through headphones for long periods carries the risk of damaging hearing acuity. The aim of this study is to examine the listening level of music through headphones in the noisy environment of a train car. Eight subjects adjusted the volume to an optimum level (L(music)) in a simulated noisy train-car environment. In Experiment I, the effects of noise level (L(train)) and type of train noise (rolling, squealing, impact, and resonance) were examined. Spectral and temporal characteristics were found to differ according to the train noise type. In Experiment II, the effects of L(train) and type of music (five vocal and five instrumental pieces) were examined. Each music type had a different pitch strength and spectral centroid, evaluated by φ(1) and W(φ(0)), respectively; these are factors of the autocorrelation function (ACF) of the music. Results showed that L(music) increased as L(train) increased in both experiments, and that the type of music greatly influenced L(music), whereas the type of train noise influenced L(music) only slightly. L(music) can be estimated from L(train) and the ACF factors φ(1) and W(φ(0)).
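
    A sketch of extracting the two ACF factors named here, assuming common definitions: φ(1) as the height of the first ACF peak after zero lag (pitch strength), and W(φ(0)) as the delay at which the normalized ACF first falls to half its zero-lag value (a proxy for spectral centroid; narrower means more high-frequency energy). The paper's exact definitions may differ.

        import numpy as np

        def acf_factors(x, fs):
            # Normalized autocorrelation factors of a short music excerpt x.
            # (np.correlate in "full" mode is O(n^2); keep excerpts short.)
            x = np.asarray(x, float) - np.mean(x)
            acf = np.correlate(x, x, mode="full")[len(x) - 1:]
            acf = acf / acf[0]
            # W(phi(0)): delay (ms) where the ACF first falls below 0.5.
            below = np.flatnonzero(acf < 0.5)
            w_phi0 = 1000 * below[0] / fs if len(below) else np.nan
            # phi(1): height of the first local maximum after the zero-lag peak.
            rising = np.flatnonzero(np.diff(acf) > 0)
            phi1 = acf[rising[0]:].max() if len(rising) else np.nan
            return phi1, w_phi0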

  18. Listening.

    ERIC Educational Resources Information Center

    Duker, Sam

    This survey of "listening, as a receptive communication skill," summarizes major research on listening in the following areas: (1) "Scope and Extent of Listening," (2) " Literature on Listening," (3) "Relationships to Listening"--the interrelationships between listening and such factors as reading skills, intelligence, school achievement, cultural…

  19. Learning Icelandic Language and Culture in Virtual Reykjavik: Starting to Talk

    ERIC Educational Resources Information Center

    Bédi, Branislav; Arnbjörnsdóttir, Birna; Vilhjálmsson, Hannes Högni; Helgadóttir, Hafdís Erla; Ólafsson, Stefán; Björgvinsson, Elías

    2016-01-01

    This paper describes how beginners of Icelandic as a foreign and second language responded to playing the first scene in Virtual Reykjavik, a video game-like environment where learners interact with virtual characters--Embodied Conversational Agents (ECAs). This game enables learners to practice speaking and listening skills, to learn about the…

  20. Perception of Native English Reduced Forms in Adverse Environments by Chinese Undergraduate Students

    ERIC Educational Resources Information Center

    Wong, Simpson W. L.; Tsui, Jenny K. Y.; Chow, Bonnie Wing-Yin; Leung, Vina W. H.; Mok, Peggy; Chung, Kevin Kien-Hoa

    2017-01-01

    Previous research has shown that learners of English-as-a-second-language (ESL) have difficulties in understanding connected speech spoken by native English speakers. Extending from past research limited to quiet listening condition, this study examined the perception of English connected speech presented under five adverse conditions, namely…

  1. Training and Local Development. An Experiment in Cooperation between Different Bodies for the Provision of Training on the Mediterranean Coast.

    ERIC Educational Resources Information Center

    Mascarell, Josep Vicent

    1995-01-01

    This description of the difficulties encountered in achieving cooperation among local agencies to provide vocational training in a Spanish district highlights the need for collaborating parties to listen to each other and the importance of understanding the environment in which agencies operate. (SK)

  2. A Blended Learning Environment for Individualized English Listening and Speaking Integrating Critical Thinking

    ERIC Educational Resources Information Center

    Yang, Ya-Ting Carolyn; Chuang, Ya-Chin; Li, Lung-Yu; Tseng, Shin-Shang

    2013-01-01

    Critical thinking (CT) and English communication are recognized as two essential 21st century competencies. To equip students with these competencies and respond to the challenges of global competition, educational technology is being developed to enhance teaching and learning. This study examined the effectiveness of integrating CT into…

  3. Providing the Support Services Needed by Students Who Are Deaf or Hard of Hearing.

    ERIC Educational Resources Information Center

    Luetke-Stahlman, Barbara

    1998-01-01

    Discusses programmatic and curricular modifications often needed for students included in public school settings who are deaf or hard of hearing, such as adapting the mode/flow of classroom communication, linguistic-level changes, and adapting the listening and physical environment. Possible curricular modifications are suggested for…

  4. The Long Term Implication of RTLB Support: Listening to the Voices of Student Experiences

    ERIC Educational Resources Information Center

    Pillay, Poobie; Flanagan, Paul

    2011-01-01

    Resource Teachers: Learning and Behaviour (RTLB) have supported more than 15,000 students since 1999 by assisting teachers to manage and support students with learning or behaviour difficulties within inclusive classroom environments. Research indicates that there are long-term positive educational effects for students receiving short-term…

  5. Localization in Multiple Source Environments: Localizing the Missing Source

    DTIC Science & Technology

    2007-02-01

    volunteer listeners (3 males and 3 females, 19-24 years of age), participated in the experiment. All had normal hearing (audiometric thresholds < 15...were routed from a control computer to a Mark of the Unicorn digital-to-analog converter (MOTU 24 I/O), then through a bank of amplifiers (Crown Model

  6. Somatic/Embodied Learning and Adult Education. Trends and Issues Alert.

    ERIC Educational Resources Information Center

    Kerka, Sandra

    A somatic approach to education implies education that trusts individuals to learn from and listen to the information they are receiving from the interaction of self with the environment. Somatic or embodied knowing is experiential knowledge that involves senses, perceptions, and mind-body action and reaction. Western culture has been dominated by…

  7. Playing Music to Relieve Stress in a College Classroom Environment

    ERIC Educational Resources Information Center

    Ferrer, Eileen; Lew, Polong; Jung, Sarah M.; Janeke, Emilia; Garcia, Michelle; Peng, Cindy; Poon, George; Rathod, Vinisha; Beckwith, Sharon; Tam, Chick F.

    2014-01-01

    Music therapy can be an effective treatment that prevents stress from contributing to the etiology of disease. For this study, the participants, college students enrolled in an annual Alternative Nutrition class at California State University, Los Angeles, were instructed to select a song to present during class. After listening to each song…

  8. An Investigation of Selected Readiness Variables As Predictors of Reading Achievement at Second Grade Level.

    ERIC Educational Resources Information Center

    Seals, Caryl Neman

    This study was designed to determine the relationship of selected readiness variables to achievement in reading at the second grade level. The readiness variables were environment, mathematics, letters and sounds, aural comprehension, visual perception, auditory perception, vocabulary and concepts, word meaning, listening, matching, alphabet,…

  9. The Role of Assistive Listening Devices in the Classroom. PEPNet Tipsheet

    ERIC Educational Resources Information Center

    Clark, Catherine

    2000-01-01

    Many students who use hearing aids effectively in quiet environments have a difficult time following information presented in large college classrooms. In the classroom, the instructor's voice is competing with background noise, room echo, and distance. Therefore, the intelligibility of the instructor's voice is degraded by the poor room acoustics…

  10. Reframing School Violence: Listening to Voices of Students.

    ERIC Educational Resources Information Center

    Haselswerdt, Michael V.; Lenhardt, Ann Marie C.

    2003-01-01

    Focus groups with 82 middle and high school students from diverse school settings elicited four themes: (1) definitions of school violence must be expanded beyond physical assault; (2) teachers should be more involved in establishing a safe environment; (3) respect is the key to effective communication; and (4) connections with school and adults…

  11. The Power of Investigating: Guiding Authentic Assessments

    ERIC Educational Resources Information Center

    McGough, Julie V.; Nyberg, Lisa M.

    2017-01-01

    Children want to explore, dig, build, play, and wonder. To do this they need to touch, feel, see, observe, listen, manipulate, plan, and create. How does a teacher build and maintain a learning environment that will help students investigate meaningful questions? How does a teacher plan and manage ongoing investigations? How does a teacher use…

  12. How to Show One-Fourth? Uncovering Hidden Context through Reciprocal Learning

    ERIC Educational Resources Information Center

    Abramovich, S.; Brouwer, P.

    2007-01-01

    This paper suggests that mathematics teacher educators should listen carefully to what their students are saying. More specifically, it demonstrates how from one pre-teacher's non-traditional geometric representation of a unit fraction, a variety of learning environments that lead to the enrichment of mathematics for teaching can be developed. The…

  13. Phonetic Influences on English and French Listeners' Assimilation of Mandarin Tones to Native Prosodic Categories

    ERIC Educational Resources Information Center

    So, Connie K.; Best, Catherine T.

    2014-01-01

    This study examined how native speakers of Australian English and French, nontone languages with different lexical stress properties, perceived Mandarin tones in a sentence environment according to their native sentence intonation categories (i-Categories) in connected speech. Results showed that both English and French speakers categorized…

  14. PLANNING FOR THE LANGUAGE DEVELOPMENT OF DISADVANTAGED CHILDREN AND YOUTH.

    ERIC Educational Resources Information Center

    NEWTON, EUNICE S.

    The verbal environment of the first years of life is crucial in the language development of the individual. There is a close interrelatedness among the language arts. Speaking, writing, listening, and reading perform reciprocal functions in the communicative cycle. Therefore, there is a need to reinforce language arts in all grades and in all…

  15. CAI and Its Application in Rural Junior English Class

    ERIC Educational Resources Information Center

    He, Xiaojun

    2015-01-01

    Multimedia instruction offers advantages in developing students' listening, speaking, and other skills. This thesis explores how to provide a better environment for English teaching in rural junior schools with the aid of multimedia and identifies ways to improve teaching efficiency. In recent years, the use of multimedia has become the direction of reform and the mainstream in English teaching. Compared…

  16. Effect of the Affordances of a Virtual Environment on Second Language Oral Proficiency

    ERIC Educational Resources Information Center

    Carruthers, Heidy P. Cuervo

    2013-01-01

    The traditional language laboratory consists of computer-based exercises in which students practice the language individually, working on language form drills and listening comprehension activities. In addition to the traditional approach to the laboratory requirement, students in the study participated in a weekly conversation hour focusing on…

  17. Teach Me in the Way I Learn: Education and the Internet Generation

    ERIC Educational Resources Information Center

    Baker, Russell; Matulich, Erika; Papp, Raymond

    2007-01-01

    College students learn differently than their professors. This disconnect between learning styles is not a new problem; however, it has been magnified by the technology-driven environment of contemporary higher education. Students who grew up using computers and Playstations while surfing MySpace blogs and listening to their…

  18. Improving Classroom Acoustics (ICA): A Three-Year FM Sound Field Classroom Amplification Study.

    ERIC Educational Resources Information Center

    Rosenberg, Gail Gegg; Blake-Rahter, Patricia; Heavner, Judy; Allen, Linda; Redmond, Beatrice Myers; Phillips, Janet; Stigers, Kathy

    1999-01-01

    The Improving Classroom Acoustics (ICA) special project was designed to determine if students' listening and learning behaviors improved as a result of an acoustical environment enhanced through the use of FM sound field classroom amplification. The 3-year project involved 2,054 students in 94 general education kindergarten, first-, and…

  19. A Novel Approach for Enhancing Student Reading Comprehension and Assisting Teacher Assessment of Literacy

    ERIC Educational Resources Information Center

    Chen, Jun-Ming; Chen, Meng-Chang; Sun, Yeali S.

    2010-01-01

    For students of English as a Foreign Language (EFL), reading exercises are critical not only for developing strong reading comprehension, but also for developing listening, speaking, and writing skills. Prior research suggests that social, collaborative learning environments are best suited for improving language ability. However, opportunities…

  20. Combining Technology and Narrative in a Learning Environment for Workplace Training.

    ERIC Educational Resources Information Center

    Nelson, Wayne A.; Wellings, Paula; Palumbo, David; Gupton, Christine

    In a project designed to provide training for entry-level job skills in high tech industries, a combination of narrative and technology was employed to aid learners in developing the necessary soft skills (dependability, responsibility, listening comprehension, collaboration, et cetera) sought by employers. The EnterTech Project brought together a…

  1. Using Music in the Adult ESL Classroom. ERIC Digest.

    ERIC Educational Resources Information Center

    Lems, Kristen

    Music can be used in the adult English-as-a-Second-Language (ESL) classroom to create a learning environment; to build listening comprehension, speaking, reading, and writing skills; to increase vocabulary; and to expand cultural knowledge. This digest looks briefly at research and offers strategies for using music in the adult ESL classroom.…

  2. Preservice Teachers' Understanding of the Language Arts: Using a Lens of Critical Literacy

    ERIC Educational Resources Information Center

    Bender-Slack, Delane; Young, Teresa

    2016-01-01

    Preservice teachers are placed in educational environments to learn about teaching literacy and about literacy's role in the English Language Arts (ELA) classroom. Of particular significance is how preservice teachers perceive and understand the varied components of language arts (i.e., reading, writing, speaking, listening, viewing, and visually…

  3. On the origins of narrative : Storyteller bias as a fitness-enhancing strategy.

    PubMed

    Sugiyama, M S

    1996-12-01

    Stories consist largely of representations of the human social environment. These representations can be used to influence the behavior of others (consider, e.g., rumor, propaganda, public relations, advertising). Storytelling can thus be seen as a transaction in which the benefit to the listener is information about his or her environment, and the benefit to the storyteller is the elicitation of behavior from the listener that serves the former's interests. However, because no two individuals have exactly the same fitness interests, we would expect different storytellers to have different narrative perspectives and priorities due to differences in sex, age, health, social status, marital status, number of offspring, and so on. Tellingly, the folklore record indicates that different storytellers within the same cultural group tell the same story differently. Furthermore, the historical and ethnographic records provide numerous examples of storytelling deliberately used as a means of political manipulation. This evidence suggests that storyteller bias is rooted in differences in individual fitness interests, and that storytelling may have originated as a means of promoting these interests.

  4. The perception of complex pitch in cochlear implants: A comparison of monopolar and tripolar stimulation.

    PubMed

    Fielden, Claire A; Kluk, Karolina; Boyle, Patrick J; McKay, Colette M

    2015-10-01

    Cochlear implant listeners typically perform poorly in tasks of complex pitch perception (e.g., musical pitch and voice pitch). One explanation is that wide current spread during implant activation creates channel interactions that may interfere with perception of temporal fundamental frequency information contained in the amplitude modulations within channels. Current focusing using a tripolar mode of stimulation has been proposed as a way of reducing channel interactions, minimising spread of excitation and potentially improving place and temporal pitch cues. The present study evaluated the effect of mode in a group of cochlear implant listeners on a pitch ranking task using male and female singing voices separated by either a half or a quarter octave. Results were variable across participants, but on average, pitch ranking was at chance level when the pitches were a quarter octave apart and improved when the difference was a half octave. No advantage was observed for tripolar over monopolar mode at either pitch interval, suggesting that previously published psychophysical advantages for focused modes may not translate into improvements in complex pitch ranking. Evaluation of the spectral centroid of the stimulation pattern, plus a lack of significant difference between male and female voices, suggested that participants may have had difficulty in accessing temporal pitch cues in either mode.
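
    The "spectral centroid of the stimulation pattern" mentioned here can be sketched as a current-weighted mean of the electrodes' assigned frequencies; a shift in this centroid between two sung pitches would indicate an available place-pitch cue. Function and variable names are illustrative.

        import numpy as np

        def stimulation_centroid(currents, electrode_freqs):
            # currents: mean stimulation current per electrode (linear units);
            # electrode_freqs: center frequency (Hz) assigned to each electrode.
            # Returns the current-weighted centroid of the stimulation pattern.
            currents = np.asarray(currents, float)
            freqs = np.asarray(electrode_freqs, float)
            return float(np.sum(currents * freqs) / currents.sum())

        # Comparing centroids for the two notes of a ranking trial, under
        # monopolar vs. tripolar stimulation, indicates how much place-pitch
        # information each mode makes available.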

  5. Variability and reduced performance of preschool- and early school-aged children on psychoacoustic tasks: What are the relevant factors?

    NASA Astrophysics Data System (ADS)

    Allen, Prudence

    2003-04-01

    Young children typically perform more poorly on psychoacoustic tasks than do adults, with large individual differences. When performance is averaged across children within age groups, the data suggest a gradual change in performance with increasing age. However, an examination of individual data suggests that performance matures more rapidly, although at different times for different children. The mechanisms of development responsible for these changes are likely very complex, involving both sensory and cognitive processes. This paper will discuss some previously suggested mechanisms, including attention and cue weighting, as well as possibilities suggested by more recent studies in which learning effects were examined. In one task, a simple frequency discrimination was required, while in another the listener was required to extract regularities in complex sequences of sounds that varied from trial to trial. Results suggested that the ability to select and consistently employ an effective listening strategy was especially important in the performance of the more complex task, while simple stimulus exposure and motivation contributed to the simpler task. These factors are important for understanding perceptual development and for the subsequent application of psychoacoustic findings to clinical populations. [Work supported by NSERC and the Canadian Language and Literacy Research Network.]

  6. Selective entrainment of brain oscillations drives auditory perceptual organization.

    PubMed

    Costa-Faidella, Jordi; Sussman, Elyse S; Escera, Carles

    2017-10-01

    Perceptual sound organization supports our ability to make sense of the complex acoustic environment, to understand speech and to enjoy music. However, the neuronal mechanisms underlying the subjective experience of perceiving univocal auditory patterns that can be listened to, despite hearing all sounds in a scene, are poorly understood. We hereby investigated the manner in which competing sound organizations are simultaneously represented by specific brain activity patterns and the way attention and task demands prime the internal model generating the current percept. Using a selective attention task on ambiguous auditory stimulation coupled with EEG recordings, we found that the phase of low-frequency oscillatory activity dynamically tracks multiple sound organizations concurrently. However, whereas the representation of ignored sound patterns is circumscribed to auditory regions, large-scale oscillatory entrainment in auditory, sensory-motor and executive-control network areas reflects the active perceptual organization, thereby giving rise to the subjective experience of a unitary percept. Copyright © 2017 Elsevier Inc. All rights reserved.
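
    Low-frequency phase tracking of a sound organization is commonly quantified by band-pass filtering the EEG around the presentation rate of the candidate stream, extracting instantaneous phase with the Hilbert transform, and computing inter-trial phase coherence (ITC). A minimal sketch assuming that standard approach, not the authors' exact analysis:

        import numpy as np
        from scipy.signal import butter, sosfiltfilt, hilbert

        def itc_at_rate(eeg, fs, rate_hz, half_bw=0.5):
            # eeg: (n_trials, n_samples) epochs from one channel.
            # ITC(t) = |mean over trials of exp(i * phase(t))|, ranging from
            # 0 (random phase) to 1 (perfectly locked to the stream rate).
            sos = butter(4, [rate_hz - half_bw, rate_hz + half_bw],
                         btype="bandpass", fs=fs, output="sos")
            phase = np.angle(hilbert(sosfiltfilt(sos, eeg, axis=1), axis=1))
            return np.abs(np.mean(np.exp(1j * phase), axis=0))

    Computing ITC at the rates of the competing organizations, separately for attended and ignored conditions, parallels the contrast the study reports between auditory-circumscribed and large-scale entrainment.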

  7. Learning To Listen, Talk and Trust: Constructing Collaborations.

    ERIC Educational Resources Information Center

    Krasnow, Maris H.

    Forming friendships is an ongoing, ever-growing, complex experience. Strategies for building relationships with others are the focus of this paper. The experience of three diverse groups of professionals are followed as they work to develop positive and respectful relationships in the name of collaboration and as they try to understand each…

  8. Sex Differences in Emergent Literacy and Reading Behaviour in Junior Kindergarten

    ERIC Educational Resources Information Center

    Deasley, Shanna; Evans, Mary Ann; Nowak, Sarah; Willoughby, David

    2018-01-01

    In a sample of 128 Canadian junior kindergarten children (66 boys), we examined sex differences in emergent literacy and behaviour when listening to and interacting with books of four types: alphabet books with simple text and illustrations, traditional alphabet books with complex text and illustrations, alphabet eBooks, and illustrated…

  9. Spoken Ayacucho Quechua, Units 11-20.

    ERIC Educational Resources Information Center

    Parker, Gary J.; Sola, Donald F.

    The essentials of Ayacucho grammar were presented in the first volume of this series, Spoken Ayacucho Quechua, Units 1-10. The 10 units in this volume (11-20) are intended for use in an intermediate or advanced course, and present the student with lengthier and more complex dialogs, conversations, "listening-ins," and dictations as well…

  10. Less Arguing, More Listening: Improving Civility in Classrooms

    ERIC Educational Resources Information Center

    Crocco, Margaret; Halvorsen, Anne-Lise; Jacobsen, Rebecca; Segall, Avner

    2018-01-01

    Today's youth are increasingly expected to engage in civil deliberation in classrooms while simultaneously living in a society with a high level of political incivility. However, teaching students to argue--particularly in oral form--is enormously complex and challenging work. In this article, the authors report on a study of four high…

  11. Earthwatching III. An Environmental Reader with Teacher's Guide.

    ERIC Educational Resources Information Center

    Wisconsin Univ., Madison. Inst. for Environmental Studies.

    This book is the third published collection of scripts written for radio by professional staff and student writers. The writers strived to translate complex technical topics into everyday terms without sacrificing accuracy and to provide listeners with fair and balanced reports on the major environmental and scientific issues of the day. This…

  12. Segregating the neural correlates of physical and perceived change in auditory input using the change deafness effect.

    PubMed

    Puschmann, Sebastian; Weerda, Riklef; Klump, Georg; Thiel, Christiane M

    2013-05-01

    Psychophysical experiments show that auditory change detection can be disturbed in situations in which listeners have to monitor complex auditory input. We made use of this change deafness effect to segregate the neural correlates of physical change in auditory input from brain responses related to conscious change perception in an fMRI experiment. Participants listened to two successively presented complex auditory scenes, which consisted of six auditory streams, and had to decide whether the scenes were identical or whether the frequency of one stream was changed between presentations. Our results show that physical changes in auditory input, independent of successful change detection, are represented at the level of auditory cortex. Activations related to conscious change perception, independent of physical change, were found in the insula and the anterior cingulate cortex (ACC). Moreover, our data provide evidence for significant effective connectivity between auditory cortex and the insula in the case of correctly detected auditory changes, but not for missed changes. This underlines the importance of the insula/anterior cingulate network for conscious change detection.

  13. Listening Skills in the Workplace.

    ERIC Educational Resources Information Center

    Grognet, Allene; Van Duzer, Carol

    This article examines the listening process and factors affecting listening. It also suggests general guidelines for teaching and assessing listening and gives examples of activities for practicing and developing listening skills for the workplace. Listening is a demanding process that involves the listener, speaker, message content, and…

  14. Statistics of natural reverberation enable perceptual separation of sound and space

    PubMed Central

    Traer, James; McDermott, Josh H.

    2016-01-01

    In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us. PMID:27834730
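
    As an illustration of the decay structure reported here, the following minimal Python sketch synthesizes a noise-like impulse response whose subbands decay exponentially at different rates, with the mid band reverberating longest; the band edges and RT60 values are illustrative placeholders, not the measured statistics.

        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def synthetic_ir(fs=16000, dur=1.0,
                         bands=((50, 500), (500, 2000), (2000, 7000)),
                         rt60s=(0.4, 0.8, 0.3)):
            # Noise-like IR whose subbands decay exponentially at their own
            # RT60 (time to decay by 60 dB); the mid band is given the
            # longest decay, echoing the profile measured for real rooms.
            t = np.arange(int(fs * dur)) / fs
            ir = np.zeros_like(t)
            for (lo, hi), rt60 in zip(bands, rt60s):
                sos = butter(4, (lo, hi), btype="band", fs=fs, output="sos")
                band_noise = sosfiltfilt(sos, np.random.randn(t.size))
                ir += band_noise * 10 ** (-3 * t / rt60)  # -60 dB at t = RT60
            return ir / np.max(np.abs(ir))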

  15. Statistics of natural reverberation enable perceptual separation of sound and space.

    PubMed

    Traer, James; McDermott, Josh H

    2016-11-29

    In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us.

  16. Water Immersion Affects Episodic Memory and Postural Control in Healthy Older Adults.

    PubMed

    Bressel, Eadric; Louder, Talin J; Raikes, Adam C; Alphonsa, Sushma; Kyvelidou, Anastasia

    2018-05-04

    Previous research has reported that younger adults make fewer cognitive errors on an auditory vigilance task while in chest-deep water compared with on land. The purpose of this study was to extend this previous work to older adults and to examine the effect of environment (water vs land) on linear and nonlinear measures of postural control under single- and dual-task conditions. Twenty-one older adult participants (age = 71.6 ± 8.34 years) performed a cognitive (auditory vigilance) and motor (standing balance) task separately and simultaneously on land and in chest-deep water. Listening errors (n = count) from the auditory vigilance test and sample entropy (SampEn), center of pressure area, and velocity for the balance test served as dependent measures. Environment (land vs water) and task (single vs dual) comparisons were made with a Wilcoxon matched-pair test. Listening errors were 111% greater on land than in water (single-task = 4.0 ± 3.5 vs 1.9 ± 1.7; P = .03). Conversely, SampEn values were 100% greater in water than on land (single-task = 0.04 ± 0.01 vs 0.02 ± 0.01; P < .001). Center of pressure area and velocity followed a similar trend to SampEn with respect to environment differences, and none of the measures differed between single- and dual-task conditions (P > .05). The findings of this study expand current support for the potential use of partial aquatic immersion as a viable method for challenging both cognitive and motor abilities in older adults.
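
    For readers unfamiliar with SampEn, the nonlinear dependent measure above, here is a minimal Python sketch of its standard definition (higher values indicate a more irregular signal); the embedding dimension and tolerance are conventional defaults, not necessarily the study's settings.

        import numpy as np

        def sample_entropy(x, m=2, r=None):
            # SampEn: -log of the conditional probability that runs matching
            # for m points (Chebyshev distance <= r, self-matches excluded)
            # still match at m + 1 points.
            x = np.asarray(x, dtype=float)
            if r is None:
                r = 0.2 * np.std(x)  # conventional tolerance: 0.2 * SD

            def match_count(length):
                templates = np.array([x[i:i + length]
                                      for i in range(x.size - length)])
                count = 0
                for i in range(templates.shape[0]):
                    dist = np.max(np.abs(templates - templates[i]), axis=1)
                    count += np.sum(dist <= r) - 1  # drop the self-match
                return count

            b, a = match_count(m), match_count(m + 1)
            return -np.log(a / b)  # undefined (inf) if no m+1 matches exist

        # Toy check: white noise is more irregular than a sine, so SampEn is higher
        rng = np.random.default_rng(0)
        print(sample_entropy(np.sin(np.linspace(0, 20 * np.pi, 1000))),
              sample_entropy(rng.standard_normal(1000)))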

  17. Neural Decoding of Bistable Sounds Reveals an Effect of Intention on Perceptual Organization

    PubMed Central

    2018-01-01

    Auditory signals arrive at the ear as a mixture that the brain must decompose into distinct sources based to a large extent on acoustic properties of the sounds. An important question concerns whether listeners have voluntary control over how many sources they perceive. This has been studied using pure high (H) and low (L) tones presented in the repeating pattern HLH-HLH-, which can form a bistable percept heard either as an integrated whole (HLH-) or as segregated into high (H-H-) and low (-L-) sequences. Although instructing listeners to try to integrate or segregate sounds affects reports of what they hear, this could reflect a response bias rather than a perceptual effect. We had human listeners (15 males, 12 females) continuously report their perception of such sequences and recorded neural activity using MEG. During neutral listening, a classifier trained on patterns of neural activity distinguished between periods of integrated and segregated perception. In other conditions, participants tried to influence their perception by allocating attention either to the whole sequence or to a subset of the sounds. They reported hearing the desired percept for a greater proportion of time than when listening neutrally. Critically, neural activity supported these reports; stimulus-locked brain responses in auditory cortex were more likely to resemble the signature of segregation when participants tried to hear segregation than when attempting to perceive integration. These results indicate that listeners can influence how many sound sources they perceive, as reflected in neural responses that track both the input and its perceptual organization. SIGNIFICANCE STATEMENT Can we consciously influence our perception of the external world? We address this question using sound sequences that can be heard either as coming from a single source or as two distinct auditory streams. Listeners reported spontaneous changes in their perception between these two interpretations while we recorded neural activity to identify signatures of such integration and segregation. They also indicated that they could, to some extent, choose between these alternatives. This claim was supported by corresponding changes in responses in auditory cortex. By linking neural and behavioral correlates of perception, we demonstrate that the number of objects that we perceive can depend not only on the physical attributes of our environment, but also on how we intend to experience it. PMID:29440556

  18. Acoustics of a planetarium

    NASA Astrophysics Data System (ADS)

    Shepherd, Micah; Leishman, Timothy W.; Utami, Sentagi

    2005-09-01

    Brigham Young University has recently constructed a planetarium with a 38-ft.-diameter dome. The facility also serves as a classroom. Since planetariums typically have poor acoustics due to their domed ceiling structures, acoustical recommendations were requested before its construction. The recommendations were made in an attempt to create an acceptable listening environment for lectures and other listening events. They were based in part on computer models and auralizations intended to predict the effectiveness of several acoustical treatments on the outer walls and on the dome itself. The recommendations were accepted and the planetarium was completed accordingly. A series of acoustical measurements was subsequently made in the room and the resulting acoustical parameters were mapped over the floor plan. This paper discusses these results and compares them with the predictions of the computer models.
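
    Reverberation time is the acoustical parameter most commonly mapped in such surveys. As a sketch of how it can be derived from a measured impulse response, the following Python snippet applies Schroeder backward integration with a T20-style line fit; the fit range and the synthetic test signal are illustrative choices, not the paper's stated method.

        import numpy as np

        def rt60_from_ir(ir, fs):
            # Schroeder backward integration: cumulative remaining energy of
            # the impulse response in dB, with RT60 extrapolated from the
            # -5 to -25 dB span of the decay (a T20-style estimate).
            ir = np.asarray(ir, dtype=float)
            energy = np.cumsum(ir[::-1] ** 2)[::-1]
            decay_db = 10 * np.log10(energy / energy[0])
            t = np.arange(ir.size) / fs
            fit = (decay_db <= -5) & (decay_db >= -25)
            slope, _ = np.polyfit(t[fit], decay_db[fit], 1)  # dB per second
            return -60 / slope

        # Synthetic check: exponentially decaying noise with a true RT60 of 0.6 s
        fs = 16000
        t = np.arange(fs) / fs
        ir = np.random.randn(fs) * 10 ** (-3 * t / 0.6)
        print(rt60_from_ir(ir, fs))  # close to 0.6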

  19. Brainstem Correlates of Speech-in-Noise Perception in Children

    PubMed Central

    Anderson, Samira; Skoe, Erika; Chandrasekaran, Bharath; Zecker, Steven; Kraus, Nina

    2010-01-01

    Children often have difficulty understanding speech in challenging listening environments. In the absence of peripheral hearing loss, these speech perception difficulties may arise from dysfunction at more central levels in the auditory system, including subcortical structures. We examined brainstem encoding of pitch in a speech syllable in 38 school-age children. In children with poor speech-in-noise perception, we find impaired encoding of the fundamental frequency and the second harmonic, two important cues for pitch perception. Pitch, an important factor in speaker identification, aids the listener in tracking a specific voice from a background of voices. These results suggest that the robustness of subcortical neural encoding of pitch features in time-varying signals is an important factor in determining success with speech perception in noise. PMID:20708671
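
    A minimal sketch of how encoding of the fundamental frequency and its second harmonic can be quantified from an averaged response follows: mean FFT magnitude in a narrow window around each target frequency. The frequencies and bandwidth below are illustrative placeholders, not the study's parameters.

        import numpy as np

        def spectral_magnitudes(response, fs, freqs=(100.0, 200.0), bw=10.0):
            # Mean FFT magnitude in a +/- bw Hz window around each target
            # frequency (e.g., F0 and its second harmonic) of a windowed
            # response; freqs and bw here are placeholders, not study values.
            response = np.asarray(response, dtype=float)
            spectrum = np.abs(np.fft.rfft(response * np.hanning(response.size)))
            faxis = np.fft.rfftfreq(response.size, d=1 / fs)
            return [spectrum[(faxis >= f - bw) & (faxis <= f + bw)].mean()
                    for f in freqs]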

  20. Auditory detection of non-speech and speech stimuli in noise: Effects of listeners' native language background.

    PubMed

    Liu, Chang; Jin, Su-Hyun

    2015-11-01

    This study investigated whether native listeners processed speech differently from non-native listeners in a speech detection task. Detection thresholds of Mandarin Chinese and Korean vowels and non-speech sounds in noise, frequency selectivity, and the nativeness of Mandarin Chinese and Korean vowels were measured for Mandarin Chinese- and Korean-native listeners. The two groups of listeners exhibited similar non-speech sound detection and frequency selectivity; however, the Korean listeners had better detection thresholds of Korean vowels than Chinese listeners, while the Chinese listeners performed no better at Chinese vowel detection than the Korean listeners. Moreover, thresholds predicted from an auditory model highly correlated with behavioral thresholds of the two groups of listeners, suggesting that detection of speech sounds not only depended on listeners' frequency selectivity, but also might be affected by their native language experience. Listeners evaluated their native vowels with higher nativeness scores than non-native listeners. Native listeners may have advantages over non-native listeners when processing speech sounds in noise, even without the required phonetic processing; however, such native speech advantages might be offset by Chinese listeners' lower sensitivity to vowel sounds, a characteristic possibly resulting from their sparse vowel system and their greater cognitive and attentional demands for vowel processing.

  1. Free Field Word recognition test in the presence of noise in normal hearing adults.

    PubMed

    Almeida, Gleide Viviani Maciel; Ribas, Angela; Calleros, Jorge

    In ideal listening situations, subjects with normal hearing can easily understand speech, as can many subjects who have a hearing loss. The aim was to present the validation of the Word Recognition Test in a Free Field in the Presence of Noise in normal-hearing adults. The sample consisted of 100 healthy adults over 18 years of age with normal hearing. After pure tone audiometry, a speech recognition test was applied in a free field condition with monosyllables and disyllables, using standardized material in three listening situations: an optimal listening condition (no noise), a signal-to-noise ratio of 0 dB, and a signal-to-noise ratio of -10 dB. For these tests, a calibrated free field environment was arranged in which speech was presented to the subject from two speakers located at 45° and noise from a third speaker located at 180°. All participants scored between 88% and 100% on free field speech audiometry in the three listening situations. The Word Recognition Test in a Free Field in the Presence of Noise proved easy to organize and apply. The results of the test validation suggest that individuals with normal hearing should get between 88% and 100% of the stimuli correct. The test can be an important tool for measuring the interference of noise on speech perception abilities.
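
    As a sketch of how the 0 dB and -10 dB conditions can be constructed for such a test, the helper below scales a noise signal so that the speech-to-noise power ratio hits a target value; it assumes equal-length single-channel signals and is not the clinical calibration procedure itself.

        import numpy as np

        def mix_at_snr(speech, noise, snr_db):
            # Scale the noise so that the speech-to-noise power ratio equals
            # snr_db, then mix; assumes equal-length single-channel signals.
            speech = np.asarray(speech, dtype=float)
            noise = np.asarray(noise, dtype=float)
            gain = np.sqrt(np.mean(speech ** 2)
                           / (np.mean(noise ** 2) * 10 ** (snr_db / 10)))
            return speech + gain * noise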

  2. Bimodal benefits on objective and subjective outcomes for adult cochlear implant users.

    PubMed

    Heo, Ji-Hye; Lee, Jae-Hee; Lee, Won-Sang

    2013-09-01

    Given that only a few studies have focused on bimodal benefits on objective and subjective outcomes and emphasized the importance of individual data, the present study aimed to measure the bimodal benefits on objective and subjective outcomes for adult cochlear implant users. Fourteen listeners with bimodal devices were tested on localization and recognition abilities using environmental sounds, 1-talker, and 2-talker speech materials. Localization ability was measured through an 8-loudspeaker array. For the recognition measures, listeners were asked to repeat the sentences or name the environmental sounds they heard. As a subjective questionnaire, three domains of the Korean version of the Speech, Spatial and Qualities of Hearing scale (K-SSQ) were used to explore any relationships between objective and subjective outcomes. Based on the group-mean data, bimodal hearing enhanced both localization and recognition regardless of test material. However, the inter- and intra-subject variability appeared to be large across test materials for both localization and recognition abilities. Correlation analyses revealed that the relationships were not always consistent between the objective outcomes and the subjective self-reports with bimodal devices. Overall, this study supports significant bimodal advantages on localization and recognition measures, yet the large individual variability in bimodal benefits should be considered carefully in clinical assessment as well as counseling. The discrepant relations between objective and subjective results suggest that bimodal benefits on traditional localization or recognition measures might not necessarily correspond to self-reported subjective advantages in everyday listening environments.

  3. The Effect of Tinnitus on Listening Effort in Normal-Hearing Young Adults: A Preliminary Study.

    PubMed

    Degeest, Sofie; Keppler, Hannah; Corthals, Paul

    2017-04-14

    The objective of this study was to investigate the effect of chronic tinnitus on listening effort. Thirteen normal-hearing young adults with chronic tinnitus were matched with a control group for age, gender, hearing thresholds, and educational level. A dual-task paradigm was used to evaluate listening effort in different listening conditions. A primary speech-recognition task and a secondary memory task were performed both separately and simultaneously. Furthermore, participants rated their subjective listening effort in various listening situations. The Tinnitus Handicap Inventory was used to control for tinnitus handicap. Listening effort was significantly increased in the tinnitus group across listening conditions. There was no significant difference in listening effort between listening conditions, nor was there an interaction between groups and listening conditions. Subjective listening effort did not differ significantly between the two groups. This study is a first exploration of listening effort in normal-hearing participants with chronic tinnitus, showing that listening effort is increased compared with a control group. There is a need to further investigate the cognitive functions important for speech understanding and their possible relation to the presence of tinnitus and listening effort.

  4. Intelligibility of foreign-accented speech: Effects of listening condition, listener age, and listener hearing status

    NASA Astrophysics Data System (ADS)

    Ferguson, Sarah Hargus

    2005-09-01

    It is well known that, for listeners with normal hearing, speech produced by non-native speakers of the listener's first language is less intelligible than speech produced by native speakers. Intelligibility is well correlated with listeners' ratings of talker comprehensibility and accentedness, which have been shown to be related to several talker factors, including age of second language acquisition and level of similarity between the talker's native and second language phoneme inventories. Relatively few studies have focused on factors extrinsic to the talker. The current project explored the effects of listener and environmental factors on the intelligibility of foreign-accented speech. Specifically, monosyllabic English words previously recorded from two talkers, one a native speaker of American English and the other a native speaker of Spanish, were presented to three groups of listeners (young listeners with normal hearing, elderly listeners with normal hearing, and elderly listeners with hearing impairment; n=20 each) in three different listening conditions (undistorted words in quiet, undistorted words in 12-talker babble, and filtered words in quiet). Data analysis will focus on interactions between talker accent, listener age, listener hearing status, and listening condition. [Project supported by an American Speech-Language-Hearing Association AARC Award.]

  5. A Correlation Study between EFL Strategic Listening and Listening Comprehension Skills among Secondary School Students

    ERIC Educational Resources Information Center

    Amin, Iman Abdul-Reheem; Amin, Magdy Mohammad; Aly, Mahsoub Abdul-Sadeq

    2011-01-01

    The present study was undertaken to investigate the correlation between EFL students' strategic listening and their listening comprehension skills. Eighty secondary school students participated in this study. Participants' strategic listening was measured by a Strategic Listening Interview (SLI), a Strategic Listening Questionnaire (SLQ) and a…

  6. Accuracy of cochlear implant recipients in speech reception in the presence of background music.

    PubMed

    Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Kliethermes, Stephanie; Driscoll, Virginia

    2012-12-01

    This study examined speech recognition abilities of cochlear implant (CI) recipients in the spectrally complex listening condition of 3 contrasting types of background music, and compared performance based upon listener groups: CI recipients using conventional long-electrode devices, Hybrid CI recipients (acoustic plus electric stimulation), and normal-hearing adults. We tested 154 long-electrode CI recipients using varied devices and strategies, 21 Hybrid CI recipients, and 49 normal-hearing adults on closed-set recognition of spondees presented in 3 contrasting forms of background music (piano solo, large symphony orchestra, vocal solo with small combo accompaniment) in an adaptive test. Signal-to-noise ratio thresholds for speech in music were examined in relation to measures of speech recognition in background noise and multitalker babble, pitch perception, and music experience. The signal-to-noise ratio thresholds for speech in music varied as a function of category of background music, group membership (long-electrode, Hybrid, normal-hearing), and age. The thresholds for speech in background music were significantly correlated with measures of pitch perception and thresholds for speech in background noise; auditory status was an important predictor. Evidence suggests that speech reception thresholds in background music change as a function of listener age (with more advanced age being detrimental), structural characteristics of different types of music, and hearing status (residual hearing). These findings have implications for everyday listening conditions such as communicating in social or commercial situations in which there is background music.
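
    The adaptive test is conceptually a staircase that lowers the signal-to-noise ratio after correct responses and raises it after errors, converging on the threshold. Below is a minimal 1-down/1-up Python sketch that tracks roughly 50% correct; the step size, reversal rule, and toy listener are illustrative assumptions, not the study's exact protocol.

        import random

        def adaptive_snr_threshold(trial_fn, start_snr=10.0, step=2.0,
                                   n_reversals=8):
            # 1-down/1-up track: SNR drops after a correct trial, rises
            # after an error; threshold = mean SNR at the last reversals.
            snr, direction, reversals = start_snr, 0, []
            while len(reversals) < n_reversals:
                new_direction = -1 if trial_fn(snr) else +1
                if direction != 0 and new_direction != direction:
                    reversals.append(snr)
                direction = new_direction
                snr += direction * step
            return sum(reversals[-6:]) / len(reversals[-6:])

        # Toy listener whose percent correct rises with SNR (threshold near 0 dB)
        print(adaptive_snr_threshold(
            lambda snr: random.random() < 1 / (1 + 10 ** (-snr / 4))))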

  7. Accuracy of Cochlear Implant Recipients on Speech Reception in Background Music

    PubMed Central

    Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Kliethermes, Stephanie; Driscoll, Virginia

    2012-01-01

    Objectives This study (a) examined speech recognition abilities of cochlear implant (CI) recipients in the spectrally complex listening condition of three contrasting types of background music, and (b) compared performance based upon listener groups: CI recipients using conventional long-electrode (LE) devices, Hybrid CI recipients (acoustic plus electric stimulation), and normal-hearing (NH) adults. Methods We tested 154 LE CI recipients using varied devices and strategies, 21 Hybrid CI recipients, and 49 NH adults on closed-set recognition of spondees presented in three contrasting forms of background music (piano solo, large symphony orchestra, vocal solo with small combo accompaniment) in an adaptive test. Outcomes Signal-to-noise thresholds for speech in music (SRTM) were examined in relation to measures of speech recognition in background noise and multi-talker babble, pitch perception, and music experience. Results SRTM thresholds varied as a function of category of background music, group membership (LE, Hybrid, NH), and age. Thresholds for speech in background music were significantly correlated with measures of pitch perception and speech in background noise thresholds; auditory status was an important predictor. Conclusions Evidence suggests that speech reception thresholds in background music change as a function of listener age (with more advanced age being detrimental), structural characteristics of different types of music, and hearing status (residual hearing). These findings have implications for everyday listening conditions such as communicating in social or commercial situations in which there is background music. PMID:23342550

  8. How visual cues for when to listen aid selective auditory attention.

    PubMed

    Varghese, Lenny A; Ozmeral, Erol J; Best, Virginia; Shinn-Cunningham, Barbara G

    2012-06-01

    Visual cues are known to aid auditory processing when they provide direct information about signal content, as in lip reading. However, some studies hint that visual cues also aid auditory perception by guiding attention to the target in a mixture of similar sounds. The current study directly tests this idea for complex, nonspeech auditory signals, using a visual cue providing only timing information about the target. Listeners were asked to identify a target zebra finch bird song played at a random time within a longer, competing masker. Two different maskers were used: noise and a chorus of competing bird songs. On half of all trials, a visual cue indicated the timing of the target within the masker. For the noise masker, the visual cue did not affect performance when target and masker were from the same location, but improved performance when target and masker were in different locations. In contrast, for the chorus masker, visual cues improved performance only when target and masker were perceived as coming from the same direction. These results suggest that simple visual cues for when to listen improve target identification by enhancing sounds near the threshold of audibility when the target is energetically masked and by enhancing segregation when it is difficult to direct selective attention to the target. Visual cues help little when target and masker already differ in attributes that enable listeners to engage selective auditory attention effectively, including differences in spectrotemporal structure and in perceived location.

  9. Second and foreign language listening: unraveling the construct.

    PubMed

    Tafaghodtari, Marzieh H; Vandergrift, Larry

    2008-08-01

    Identifying the variables which contribute to second and foreign language (L2) listening ability can provide a better understanding of the listening construct. This study explored the degree to which first language (L1) listening ability, L2 proficiency, motivation and metacognition contribute to L2 listening comprehension. A total of 115 Persian-speaking English as a Foreign Language (EFL) university students completed a motivation questionnaire, the Language Learning Motivation Orientation Scale; a listening questionnaire, the Metacognitive Awareness Listening Questionnaire; and an English-language proficiency measure, as well as listening tests in English and Persian. Scores from all measures were subjected to descriptive, inferential, and correlational analyses. The results support the hypothesis that variability in L2 listening cannot be explained by either L2 proficiency or L1 listening ability alone; rather, a cluster of variables including L2 proficiency, L1 listening ability, metacognitive knowledge and motivation orientations better explains variability in L2 listening ability.

  10. Effects of attention on the speech reception threshold and pupil response of people with impaired and normal hearing.

    PubMed

    Koelewijn, Thomas; Versfeld, Niek J; Kramer, Sophia E

    2017-10-01

    For people with hearing difficulties, following a conversation in a noisy environment requires substantial cognitive processing, which is often perceived as effortful. Recent studies with normal-hearing (NH) listeners showed that the pupil dilation response, a measure of cognitive processing load, is affected by 'attention-related' processes. How these processes affect the pupil dilation response for hearing-impaired (HI) listeners remains unknown. Therefore, the current study investigated the effect of auditory attention on various pupil response parameters for 15 NH adults (median age 51 yrs.) and 15 adults with mild to moderate sensorineural hearing loss (median age 52 yrs.). Both groups listened to two different sentences presented simultaneously, one to each ear and partially masked by stationary noise. Participants had to repeat either both sentences or only one, for which they had to divide or focus attention, respectively. When repeating one sentence, the target sentence location (left or right) was either randomized or blocked across trials, which in the latter case allowed for a better spatial focus of attention. The speech-to-noise ratio was adjusted to yield about 50% sentences correct for each task and condition. NH participants had lower ('better') speech reception thresholds (SRT) than HI participants. The pupil measures showed no between-group effects, with the exception of a shorter peak latency for HI participants, which indicated a shorter processing time. Both groups showed higher SRTs and a larger pupil dilation response when two sentences were processed instead of one. Additionally, SRTs were higher and dilation responses were larger for both groups when the target location was randomized instead of fixed. We conclude that although HI participants could cope with less noise than the NH group, their ability to focus attention on a single talker, thereby improving SRTs and lowering cognitive processing load, was preserved. Shorter peak latencies could indicate that HI listeners adapt their listening strategy by not processing some information, which reduces processing time and thereby listening effort.
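
    A minimal sketch of how two of the pupil parameters reported above, peak dilation and peak latency, can be extracted from a single trace; the sampling rate and baseline window are placeholders rather than the study's settings.

        import numpy as np

        def pupil_peak_metrics(trace, fs, baseline_s=1.0):
            # Peak dilation (relative to the pre-stimulus mean) and its
            # latency in seconds after stimulus onset, from one
            # pupil-diameter trace sampled at fs Hz.
            trace = np.asarray(trace, dtype=float)
            n0 = int(baseline_s * fs)
            corrected = trace - trace[:n0].mean()
            peak_idx = n0 + int(np.argmax(corrected[n0:]))
            return corrected[peak_idx], (peak_idx - n0) / fs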

  11. Development of Auditory Selective Attention: Why Children Struggle to Hear in Noisy Environments

    ERIC Educational Resources Information Center

    Jones, Pete R.; Moore, David R.; Amitay, Sygal

    2015-01-01

    Children's hearing deteriorates markedly in the presence of unpredictable noise. To explore why, 187 school-age children (4-11 years) and 15 adults performed a tone-in-noise detection task, in which the masking noise varied randomly between every presentation. Selective attention was evaluated by measuring the degree to which listeners were…

  12. Impact of Environment-Based Teaching on Student Achievement: A Study of Washington State Middle Schools

    ERIC Educational Resources Information Center

    Bartosh, Oksana; Tudor, Margaret; Ferguson, Lynne; Taylor, Catherine

    2009-01-01

    This paper reports on a project which investigates the impact of systemic environmental education (EE) programs on student achievement on EE-based integrated tests and standardized tests in math, language arts, and listening. Systemic environmental education programs are defined by curriculum designed to align and integrate subjects around real…

  13. The Effect of Mozart's Music on Child Development in a Jordanian Kindergarten

    ERIC Educational Resources Information Center

    Mattar, Jehan

    2013-01-01

    Young children who listen to music regularly demonstrate better development than those who do not. As children grow, their social, cognitive and physical skills can be enhanced by their relationship with music. The music of Mozart was introduced into the children's environment as a sensory background for the standard curriculum. The purpose of…

  14. Improving the English Proficiency of Native Japanese via Digital Storytelling, Blogs, and E-Mobile Technologies

    ERIC Educational Resources Information Center

    Obari, Hiroyuki; Lambacher, Stephen

    2012-01-01

    This paper reports on the use of digital storytelling and blog activities to make CALL classes more dynamic and personalized for both instructors and learners alike. An empirical research study was carried out to determine if a blended-learning environment incorporating m-learning could help improve the English listening, presentation, and…

  15. Attention and Cognitive Control Networks Assessed in a Dichotic Listening fMRI Study

    ERIC Educational Resources Information Center

    Falkenberg, Liv E.; Specht, Karsten; Westerhausen, Rene

    2011-01-01

    A meaningful interaction with our environment relies on the ability to focus on relevant sensory input and to ignore irrelevant information, i.e. top-down control and attention processes are employed to select from competing stimuli following internal goals. In this, the demands for the recruitment of top-down control processes depend on the…

  16. An Integrative Approach to Teaching English as a Second Language: The Hong Kong Case.

    ERIC Educational Resources Information Center

    Wan, Yee

    This paper proposes an integrative approach for teaching English as a second language to students in Hong Kong to develop their listening, speaking, reading, and writing skills in English to meet the challenge of an English curriculum. The integrative approach provides an authentic language environment for learners to develop language skills in a…

  17. Correlation of Reading and Listening Comprehension Discrepancy with Teacher Perceptions of Reading Disability in Ghana

    ERIC Educational Resources Information Center

    Taylor, Mark

    2014-01-01

    The catalyst for this study emerged from the unprecedented number of Ghanaian students with reading difficulties, in an environment where school counselors are generally unavailable, funding is limited, and most educators do not recognize learning disabilities as true disabilities. Based on the limitations of the IQ-achievement discrepancy model…

  18. The Relationship of Listening to Classical Music on First Graders' Ability To Retain Information.

    ERIC Educational Resources Information Center

    Lewis, Erin

    In traditional reading and CARE lessons (a curriculum used to help students learn to read and identify sounds), music is not played to enhance the learning environment. However, some studies have shown that when music is played during learning experiences there is more retention of the material. This research project compared the traditional…

  19. The role of social engagement and identity in community mobility among older adults aging in place.

    PubMed

    Gardner, Paula

    2014-01-01

    The purpose of this study was to understand how neighbourhoods - as physical and social environments - influence community mobility. Seeking an insider's perspective, the study employed an ethnographic research design. Immersed within the daily lives of 6 older adults over an 8-month period, the researcher collected auditory, textual, and visual data using the "go-along" interview method. During these interviews, the researcher accompanied participants on their natural outings while actively exploring their physical and social practices by asking questions, listening, and observing. Findings highlight a process of community mobility that is complex, dynamic, and often difficult, as participants' ability and willingness to journey into their neighbourhoods were challenged by a myriad of individual and environmental factors that changed from one day to the next. Concerned in particular with the social environment, the final analysis reveals how key social factors - social engagement and identity - play a critical role in the community mobility of older adults aging in place. Identity and social engagement are important social factors that play a role in community mobility. The need for social engagement and the preservation of identity are such strong motivators for community mobility that they can "trump" poor health, pain, functional limitations, and hazardous conditions. To effectively promote community mobility, the social lives and needs of individuals must be addressed.

  20. Hearing in three dimensions

    NASA Astrophysics Data System (ADS)

    Shinn-Cunningham, Barbara

    2003-04-01

    One of the key functions of hearing is to help us monitor and orient to events in our environment (including those outside the line of sight). The ability to compute the spatial location of a sound source is also important for detecting, identifying, and understanding the content of a sound source, especially in the presence of competing sources from other positions. Determining the spatial location of a sound source poses difficult computational challenges; however, we perform this complex task with proficiency, even in the presence of noise and reverberation. This tutorial will review the acoustic, psychoacoustic, and physiological processes underlying spatial auditory perception. First, the tutorial will examine how the many different features of the acoustic signals reaching a listener's ears provide cues for source direction and distance, both in anechoic and reverberant space. Then we will discuss psychophysical studies of three-dimensional sound localization in different environments and the basic neural mechanisms by which spatial auditory cues are extracted. Finally, "virtual reality" approaches for simulating sounds at different directions and distances under headphones will be reviewed. The tutorial will be structured to appeal to a diverse audience with interests in all fields of acoustics and will incorporate concepts from many areas, such as psychological and physiological acoustics, architectural acoustics, and signal processing.
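
    As one concrete example of the cue extraction this tutorial covers, the sketch below estimates an interaural time difference as the cross-correlation lag between the two ear signals, searched within a physiologically plausible window; the window width and sign convention are illustrative choices.

        import numpy as np

        def estimate_itd(left, right, fs, max_itd=800e-6):
            # ITD estimate: the lag (within +/- max_itd seconds) that
            # maximizes the cross-correlation of the ear signals. A
            # positive result means the right-ear signal lags, i.e., the
            # source lies toward the left.
            left, right = np.asarray(left, float), np.asarray(right, float)
            max_lag = int(max_itd * fs)
            lags = np.arange(-max_lag, max_lag + 1)
            xcorr = [np.dot(left[max(0, -l):left.size - max(0, l)],
                            right[max(0, l):right.size - max(0, -l)])
                     for l in lags]
            return lags[int(np.argmax(xcorr))] / fs

        # Toy check: delay a noise burst by 0.5 ms between the ears
        fs = 44100
        noise = np.random.randn(4410)
        shift = int(0.0005 * fs)
        print(estimate_itd(noise[:-shift], noise[shift:], fs))  # approx -0.0005 s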
