Sample records for sound localization abilities

  1. Bone Conduction: Anatomy, Physiology, and Communication

    DTIC Science & Technology

    2007-05-01

    Excerpt (Section 7.2, Human Localization Capabilities): the main functions of the pinna are to direct incoming sound toward the EAC (external auditory canal) and to aid in sound localization. Some animals (e.g., dogs) can move their pinnae to aid in sound localization, but humans do not typically have this ability. People who may possess the ability to move their pinnae do ...

  2. Spatial hearing in Cope’s gray treefrog: I. Open and closed loop experiments on sound localization in the presence and absence of noise

    PubMed Central

    Caldwell, Michael S.; Bee, Mark A.

    2014-01-01

    The ability to reliably locate sound sources is critical to anurans, which navigate acoustically complex breeding choruses when choosing mates. Yet, the factors influencing sound localization performance in frogs remain largely unexplored. We applied two complementary methodologies, open and closed loop playback trials, to identify influences on localization abilities in Cope’s gray treefrog, Hyla chrysoscelis. We examined localization acuity and phonotaxis behavior of females in response to advertisement calls presented from 12 azimuthal angles, at two signal levels, in the presence and absence of noise, and at two noise levels. Orientation responses were consistent with precise localization of sound sources, rather than binary discrimination between sources on either side of the body (lateralization). Frogs were unable to discriminate between sounds arriving from forward and rearward directions, and accurate localization was limited to forward sound presentation angles. Within this region, sound presentation angle had little effect on localization acuity. The presence of noise and low signal-to-noise ratios also did not strongly impair localization ability in open loop trials, but females exhibited reduced phonotaxis performance consistent with impaired localization during closed loop trials. We discuss these results in light of previous work on spatial hearing in anurans. PMID:24504182

  3. Hearing in three dimensions: Sound localization

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Kistler, Doris J.

    1990-01-01

    The ability to localize a sound source in space is a fundamental component of the three-dimensional character of auditory experience. For over a century, scientists have been trying to understand the physical and psychological processes and physiological mechanisms that subserve sound localization. This research has shown that important information about sound source position is provided by interaural differences in time of arrival, interaural differences in intensity, and direction-dependent filtering provided by the pinnae. Progress has been slow, primarily because experiments on localization are technically demanding. Control of stimulus parameters and quantification of the subjective experience are quite difficult problems. Recent advances, such as the ability to simulate a three-dimensional sound field over headphones, seem to offer potential for rapid progress. Research using the new techniques has already produced new information. It now seems that interaural time differences are a much more salient and dominant localization cue than previously believed.

  4. Intercepting a sound without vision

    PubMed Central

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2017-01-01

    Visual information is extremely important for generating internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities may nevertheless be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals’ performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a smaller bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939

  5. Effect of Blast Injury on Auditory Localization in Military Service Members.

    PubMed

    Kubli, Lina R; Brungart, Douglas; Northern, Jerry

    Among the many advantages of binaural hearing are the abilities to localize sounds in space and to attend to one sound in the presence of many sounds. Binaural hearing provides benefits for all listeners, but it may be especially critical for military personnel who must maintain situational awareness in complex tactical environments with multiple speech and noise sources. There is concern that Military Service Members who have been exposed to one or more high-intensity blasts during their tour of duty may have difficulty with binaural and spatial ability due to degradation in auditory and cognitive processes. The primary objective of this study was to assess the ability of blast-exposed Military Service Members to localize speech sounds in quiet and in multisource environments with one or two competing talkers. Participants were presented with one, two, or three topic-related (e.g., sports, food, travel) sentences under headphones and required to attend to, and then locate the source of, the sentence pertaining to a prespecified target topic within a virtual space. The listener's head position was monitored by a head-mounted tracking device that continuously updated the apparent spatial location of the target and competing speech sounds as the subject turned within the virtual space. Measurements of auditory localization ability included mean absolute error in locating the source of the target sentence, the time it took to locate the target sentence within 30 degrees, target/competitor confusion errors, response time, and cumulative head motion. Twenty-one blast-exposed Active-Duty or Veteran Military Service Members (blast-exposed group) and 33 non-blast-exposed Service Members and beneficiaries (control group) were evaluated. In general, the blast-exposed group performed as well as the control group if the task involved localizing the source of a single speech target. However, if the task involved two or three simultaneous talkers, localization ability was compromised for some participants in the blast-exposed group. Blast-exposed participants were less accurate in their localization responses and required more exploratory head movements to find the location of the target talker. Results suggest that blast-exposed participants have more difficulty than non-blast-exposed participants in localizing sounds in complex acoustic environments. This apparent deficit in spatial hearing ability highlights the need to develop new diagnostic tests using complex listening tasks that involve multiple sound sources that require speech segregation and comprehension.

  6. Sound source localization inspired by the ears of the Ormia ochracea

    NASA Astrophysics Data System (ADS)

    Kuntzman, Michael L.; Hall, Neal A.

    2014-07-01

    The parasitoid fly Ormia ochracea has the remarkable ability to locate crickets using audible sound. The feat is remarkable because the fly's hearing mechanism spans only 1.5 mm, roughly 50× smaller than the wavelength of the sound emitted by the cricket. The hearing mechanism is, for all practical purposes, a point in space with no significant interaural time or level differences to draw from. Evolution has nevertheless endowed the fly with a hearing mechanism that utilizes multiple vibration modes to amplify interaural time and level differences. Here, we present a fully integrated, man-made mimic of the Ormia's hearing mechanism capable of replicating the remarkable sound localization ability of this specialized fly. A silicon-micromachined prototype is presented which uses multiple piezoelectric sensing ports to simultaneously transduce two orthogonal vibration modes of the sensing structure, thereby enabling simultaneous measurement of sound pressure and pressure gradient.
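
    The sum-and-difference principle behind such a sensor can be illustrated numerically: for two pressure readings spaced far closer than a wavelength, the sum approximates the sound pressure and the difference approximates the pressure gradient, whose relative amplitude encodes the arrival angle. The Python sketch below is a minimal idealization; the 1.5 mm spacing, 5 kHz tone, and plane-wave setup are illustrative assumptions, not the authors' device model.

      import numpy as np

      c = 343.0        # speed of sound in air, m/s
      f = 5000.0       # tone frequency (the cricket call), Hz
      d = 1.5e-3       # sensing-port separation, m (vs. a ~69 mm wavelength)
      k = 2 * np.pi * f / c
      theta_true = np.radians(30.0)        # arrival angle re: midline

      t = np.arange(0, 0.01, 1 / 192000)   # 10 ms at 192 kHz
      tau = (d / c) * np.sin(theta_true)   # inter-port delay, ~2.2 us here
      p1 = np.cos(2 * np.pi * f * (t + tau / 2))   # port 1 pressure
      p2 = np.cos(2 * np.pi * f * (t - tau / 2))   # port 2 pressure

      pressure = p1 + p2   # "sum" mode: proportional to sound pressure
      gradient = p1 - p2   # "difference" mode: proportional to pressure gradient

      # |difference| / |sum| ~ tan(k*d*sin(theta)/2), so the angle is recoverable
      ratio = np.abs(gradient).max() / np.abs(pressure).max()
      theta_est = np.degrees(np.arcsin(np.clip(2 * ratio / (k * d), -1, 1)))
      print(f"estimated arrival angle: {theta_est:.1f} deg")   # ~30.0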

  7. Spatial localization deficits and auditory cortical dysfunction in schizophrenia

    PubMed Central

    Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.

    2014-01-01

    Background: Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on the ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods: The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees separation. Results: For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion: Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608

  8. Sound Source Localization Using Non-Conformal Surface Sound Field Transformation Based on Spherical Harmonic Wave Decomposition

    PubMed Central

    Zhang, Lanyue; Ding, Dandan; Yang, Desen; Wang, Jia; Shi, Jie

    2017-01-01

    Spherical microphone arrays have received increasing attention for their ability to locate a sound source at an arbitrary incident angle in three-dimensional space. Low-frequency sound sources are usually located using spherical near-field acoustic holography, in which the reconstruction surface and the holography surface are conformal surfaces under the conventional sound field transformation based on the generalized Fourier transform. When the sound source lies on a cylindrical surface, it is difficult to locate using the spherical conformal transform. This paper proposes a non-conformal sound field transformation that constructs a transfer matrix based on spherical harmonic wave decomposition, which can transform a spherical surface into a cylindrical surface using spherical array data. The theoretical expressions of the proposed method are deduced, and the performance of the method is simulated. Moreover, an experiment on sound source localization using a spherical array with randomly and uniformly distributed elements is carried out. Results show that the non-conformal surface sound field transformation from a spherical surface to a cylindrical surface is realized by the proposed method. The localization deviation is around 0.01 m, and the resolution is around 0.3 m. The application of the spherical array is extended, and the localization ability of the spherical array is improved. PMID:28489065
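
    The transfer-matrix method rests on expanding the pressure measured on the sphere in spherical harmonics. As a minimal sketch of that expansion step only (not the full spherical-to-cylindrical transformation), the following Python snippet fits spherical harmonic coefficients to synthetic pressures on an assumed random 32-element spherical array:

      import numpy as np
      from scipy.special import sph_harm  # renamed sph_harm_y in newer SciPy

      rng = np.random.default_rng(0)
      M = 32                                   # number of array elements (assumed)
      az = rng.uniform(0, 2 * np.pi, M)        # element azimuths
      pol = np.arccos(rng.uniform(-1, 1, M))   # element polar angles
      N = 3                                    # truncation order; (N+1)^2 <= M

      # Basis matrix Y with one column per harmonic: p = Y @ c
      Y = np.column_stack([sph_harm(m, n, az, pol)
                           for n in range(N + 1) for m in range(-n, n + 1)])

      # Synthetic stand-in for measured pressures on the sphere
      c_true = (rng.standard_normal((N + 1) ** 2)
                + 1j * rng.standard_normal((N + 1) ** 2))
      p = Y @ c_true

      # Least-squares estimate of the expansion coefficients
      c_hat, *_ = np.linalg.lstsq(Y, p, rcond=None)
      print(np.allclose(c_hat, c_true))        # True: coefficients recovered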

  9. Sound localization skills in children who use bilateral cochlear implants and in children with normal acoustic hearing

    PubMed Central

    Grieco-Calub, Tina M.; Litovsky, Ruth Y.

    2010-01-01

    Objectives: To measure sound source localization in children who have sequential bilateral cochlear implants (BICIs); to determine if localization accuracy correlates with performance on a right-left discrimination task (i.e., spatial acuity); to determine if there is a measurable bilateral benefit on a sound source identification task (i.e., localization accuracy) by comparing performance under bilateral and unilateral listening conditions; to determine if sound source localization continues to improve with longer durations of bilateral experience. Design: Two groups of children participated in this study: a group of 21 children who received BICIs in sequential procedures (5–14 years old) and a group of 7 typically-developing children with normal acoustic hearing (NH; 5 years old). Testing was conducted in a large sound-treated booth with loudspeakers positioned on a horizontal arc with a radius of 1.2 m. Children participated in two experiments that assessed spatial hearing skills. Spatial hearing acuity was assessed with a discrimination task in which listeners determined if a sound source was presented on the right or left side of center; the smallest angle at which performance on this task was reliably above chance is the minimum audible angle (MAA). Sound localization accuracy was assessed with a sound source identification task in which children identified the perceived position of the sound source from a multi-loudspeaker array (7 or 15); errors are quantified using the root-mean-square (RMS) error. Results: Sound localization accuracy was highly variable among the children with BICIs, with RMS errors ranging from 19°–56°. Performance of the NH group, with RMS errors ranging from 9°–29°, was significantly better. Within the BICI group, in 11/21 children RMS errors were smaller in the bilateral vs. unilateral listening condition, indicating bilateral benefit. There was a significant correlation between spatial acuity and sound localization accuracy (R²=0.68, p<0.01), suggesting that children who achieve small RMS errors tend to have the smallest MAAs. Although there was large intersubject variability, testing of 11 children in the BICI group at two sequential visits revealed a subset of children who showed improvement in spatial hearing skills over time. Conclusions: A subset of children who use sequential BICIs can acquire sound localization abilities, even after long intervals between activation of hearing in the first- and second-implanted ears. This suggests that children with activation of the second implant later in life may be capable of developing spatial hearing abilities. The large variability in performance among the children with BICIs suggests that maturation of sound localization abilities in children with BICIs may be dependent on various individual subject factors such as age of implantation and chronological age. PMID:20592615
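
    The RMS error used above as the accuracy measure is simply the root mean square of the differences between responded and actual loudspeaker angles; a short illustration with made-up responses:

      import numpy as np

      target   = np.array([-45, -30, -15, 0, 15, 30, 45])   # actual azimuths, deg
      response = np.array([-60, -30,   0, 0, 30, 15, 45])   # hypothetical responses
      rms = np.sqrt(np.mean((response - target) ** 2))
      print(f"RMS error: {rms:.1f} deg")                    # ~11.3 deg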

  10. A Functional Neuroimaging Study of Sound Localization: Visual Cortex Activity Predicts Performance in Early-Blind Individuals

    PubMed Central

    Gougoux, Frédéric; Zatorre, Robert J; Lassonde, Maryse; Voss, Patrice

    2005-01-01

    Blind individuals often demonstrate enhanced nonvisual perceptual abilities. However, the neural substrate that underlies this improved performance remains to be fully understood. An earlier behavioral study demonstrated that some early-blind people localize sounds more accurately than sighted controls using monaural cues. In order to investigate the neural basis of these behavioral differences in humans, we carried out functional imaging studies using positron emission tomography and a speaker array that permitted pseudo-free-field presentations within the scanner. During binaural sound localization, a sighted control group showed decreased cerebral blood flow in the occipital lobe, which was not seen in early-blind individuals. During monaural sound localization (one ear plugged), the subgroup of early-blind subjects who were behaviorally superior at sound localization displayed two activation foci in the occipital cortex. This effect was not seen in blind persons who did not have superior monaural sound localization abilities, nor in sighted individuals. The degree of activation of one of these foci was strongly correlated with sound localization accuracy across the entire group of blind subjects. The results show that those blind persons who perform better than sighted persons recruit occipital areas to carry out auditory localization under monaural conditions. We therefore conclude that computations carried out in the occipital cortex specifically underlie the enhanced capacity to use monaural cues. Our findings shed light not only on intermodal compensatory mechanisms, but also on individual differences in these mechanisms and on inhibitory patterns that differ between sighted individuals and those deprived of vision early in life. PMID:15678166

  11. Potential sound production by a deep-sea fish

    NASA Astrophysics Data System (ADS)

    Mann, David A.; Jarvis, Susan M.

    2004-05-01

    Swimbladder sonic muscles of deep-sea fishes were described over 35 years ago, yet until now no recordings of probable deep-sea fish sounds have been published. A sound likely produced by a deep-sea fish has been isolated and localized from an analysis of acoustic recordings made at the AUTEC test range in the Tongue of the Ocean, Bahamas, using four deep-sea hydrophones. This sound is typical of a fish sound in that it is pulsed and relatively low frequency (800-1000 Hz). Using time-of-arrival differences, the sound was localized to a depth of 548-696 m, where the bottom depth was 1620 m. The ability to localize this sound in real time on the hydrophone range provides a great advantage for identifying the sound producer using a remotely operated vehicle.
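
    Localization from time-of-arrival differences of this kind can be posed as a small least-squares problem over candidate source positions. The sketch below uses an assumed four-hydrophone geometry and nominal sound speed, not the actual AUTEC configuration:

      import numpy as np
      from scipy.optimize import least_squares

      C = 1500.0   # nominal speed of sound in seawater, m/s

      # Assumed hydrophone positions (x, y, z) in metres
      hydros = np.array([[0, 0, 1600], [2000, 0, 1600],
                         [0, 2000, 1600], [2000, 2000, 1600]], float)
      src_true = np.array([800.0, 1200.0, 620.0])   # source to recover

      # Time-of-arrival differences relative to hydrophone 0
      ranges = np.linalg.norm(hydros - src_true, axis=1)
      tdoa = (ranges - ranges[0]) / C               # what recordings would yield

      def residuals(x):
          r = np.linalg.norm(hydros - x, axis=1)
          return (r - r[0]) / C - tdoa

      fit = least_squares(residuals, x0=np.array([1000.0, 1000.0, 800.0]))
      print(fit.x)   # ~[800, 1200, 620]: source position recovered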

  12. How does experience modulate auditory spatial processing in individuals with blindness?

    PubMed

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-jia; Li, Jian-jun; Ting, Kin-hung; Wang, Jun; Lee, Tatia M C

    2015-05-01

    Comparing early- and late-onset blindness in individuals offers a unique model for studying the influence of visual experience on neural processing. This study investigated how prior visual experience modulates auditory spatial processing among blind individuals. BOLD responses of early- and late-onset blind participants were captured while performing a sound localization task. The task required participants to listen to novel "Bat-ears" sounds, analyze the spatial information embedded in the sounds, and specify from which of 15 locations the sound would have been emitted. In addition to sound localization, participants were assessed on visuospatial working memory and general intellectual abilities. The results revealed common increases in BOLD responses in the middle occipital gyrus, superior frontal gyrus, precuneus, and precentral gyrus during sound localization for both groups. Between-group dissociations, however, were found in the right middle occipital gyrus and left superior frontal gyrus. The BOLD responses in the left superior frontal gyrus were significantly correlated with accuracy on sound localization and visuospatial working memory abilities among the late-onset blind participants. In contrast, accuracy on sound localization correlated only with BOLD responses in the right middle occipital gyrus among their early-onset counterparts. The findings support the notion that early-onset blind individuals rely more on the occipital areas, as a result of cross-modal plasticity, for auditory spatial processing, while late-onset blind individuals rely more on the prefrontal areas, which subserve visuospatial working memory.

  13. The effect of brain lesions on sound localization in complex acoustic environments.

    PubMed

    Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg

    2014-05-01

    Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field was directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources, rather than in localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.

  14. Relevance of Spectral Cues for Auditory Spatial Processing in the Occipital Cortex of the Blind

    PubMed Central

    Voss, Patrice; Lepore, Franco; Gougoux, Frédéric; Zatorre, Robert J.

    2011-01-01

    We have previously shown that some blind individuals can localize sounds more accurately than their sighted counterparts when one ear is obstructed, and that this ability is strongly associated with occipital cortex activity. Given that spectral cues are important for monaurally localizing sounds when one ear is obstructed, and that blind individuals are more sensitive to small spectral differences, we hypothesized that enhanced use of spectral cues via occipital cortex mechanisms could explain the better performance of blind individuals in monaural localization. Using positron-emission tomography (PET), we scanned blind and sighted persons as they discriminated between sounds originating from a single spatial position, but with different spectral profiles that simulated different spatial positions based on head-related transfer functions. We show here that a sub-group of early blind individuals showing superior monaural sound localization abilities performed significantly better than any other group on this spectral discrimination task. For all groups, performance was best for stimuli simulating peripheral positions, consistent with the notion that spectral cues are more helpful for discriminating peripheral sources. PET results showed that all blind groups had cerebral blood flow increases in the occipital cortex, but this was also the case in the sighted group. A voxel-wise covariation analysis showed that more occipital recruitment was associated with better performance across all blind subjects but not the sighted. An inter-regional covariation analysis showed that the occipital activity in the blind covaried with that of several frontal and parietal regions known for their role in auditory spatial processing. Overall, these results support the notion that the superior ability of a sub-group of early-blind individuals to localize sounds is mediated by their superior ability to use spectral cues, and that this ability is subserved by cortical processing in the occipital cortex. PMID:21716600

  15. 3-D localization of virtual sound sources: effects of visual environment, pointing method, and training.

    PubMed

    Majdak, Piotr; Goupell, Matthew J; Laback, Bernhard

    2010-02-01

    The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.
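
    Virtual sources of the kind used in these experiments are produced by convolving the stimulus with the left- and right-ear head-related impulse responses for the target direction. The sketch below uses toy impulse responses as stand-ins for the measured, subject-specific ones:

      import numpy as np
      from scipy.signal import fftconvolve

      fs = 44100
      noise = np.random.default_rng(1).standard_normal(fs // 2)   # 500 ms burst

      # Toy HRIRs: the near ear leads and is louder; real studies use measured ones
      hrir_l = np.zeros(256); hrir_l[0]  = 1.0   # left ear: immediate, full level
      hrir_r = np.zeros(256); hrir_r[29] = 0.5   # right ear: ~0.66 ms later, -6 dB

      # Binaural signal for headphone playback, carrying ITD and ILD cues
      binaural = np.stack([fftconvolve(noise, hrir_l),
                           fftconvolve(noise, hrir_r)], axis=1)
      print(binaural.shape)   # (22305, 2): stereo signal for the headphones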

  16. Modelling of human low frequency sound localization acuity demonstrates dominance of spatial variation of interaural time difference and suggests uniform just-noticeable differences in interaural time difference.

    PubMed

    Smith, Rosanna C G; Price, Stephen R

    2014-01-01

    Sound source localization is critical to animal survival and for identification of auditory objects. We investigated the acuity with which humans localize low frequency, pure tone sounds using timing differences between the ears. These small differences in time, known as interaural time differences or ITDs, are identified in a manner that allows localization acuity of around 1° at the midline. Acuity, a relative measure of localization ability, displays a non-linear variation as sound sources are positioned more laterally. All species studied localize sounds best at the midline and progressively worse as the sound is located out towards the side. To understand why sound localization displays this variation with azimuthal angle, we took a first-principles, systemic, analytical approach to model localization acuity. We calculated how ITDs vary with sound frequency, head size and sound source location for humans. This allowed us to model ITD variation for previously published experimental acuity data and determine the distribution of just-noticeable differences in ITD. Our results suggest that the best-fit model is one whereby just-noticeable differences in ITDs are identified with uniform or close to uniform sensitivity across the physiological range. We discuss how our results have several implications for neural ITD processing in different species as well as development of the auditory system.
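
    The paper's central claim, that uniform ITD just-noticeable differences produce acuity that worsens laterally, can be illustrated with the classic spherical-head (Woodworth) ITD approximation; the head radius and 10-microsecond JND below are assumed round numbers, not the paper's fitted values:

      import numpy as np

      a = 0.09      # assumed head radius, m
      c = 343.0     # speed of sound, m/s
      jnd = 10e-6   # assumed uniform ITD just-noticeable difference, s

      # Woodworth: ITD(theta) = (a/c)(theta + sin theta); its slope
      # (a/c)(1 + cos theta) shrinks laterally, so a fixed ITD JND maps
      # onto ever larger angular steps away from the midline.
      for deg in (0, 30, 60, 90):
          th = np.radians(deg)
          itd = (a / c) * (th + np.sin(th))
          slope = (a / c) * (1 + np.cos(th))   # s per radian
          acuity = np.degrees(jnd / slope)
          print(f"azimuth {deg:2d} deg: ITD = {itd*1e6:5.1f} us, "
                f"acuity ~ {acuity:.1f} deg")
      # Acuity is ~1.1 deg at the midline, worsening to ~2.2 deg at 90 deg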

  17. Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants

    PubMed Central

    Zheng, Yi; Godar, Shelly P.; Litovsky, Ruth Y.

    2015-01-01

    Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate, and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs), sound localization is known to improve when bilateral CIs (BiCIs) are used compared to when a single CI is used. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify patterns observed in children for mapping acoustic space to a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo transitions from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users. PMID:26288142

  18. Behavioral and modeling studies of sound localization in cats: effects of stimulus level and duration

    PubMed Central

    Ruhland, Janet L.; Yin, Tom C. T.; Tollin, Daniel J.

    2013-01-01

    Sound localization accuracy in elevation can be affected by sound spectrum alteration. Correspondingly, any stimulus manipulation that causes a change in the peripheral representation of the spectrum may degrade localization ability in elevation. The present study examined the influence of sound duration and level on localization performance in cats with the head unrestrained. Two cats were trained using operant conditioning to indicate the apparent location of a sound via gaze shift, which was measured with a search-coil technique. Overall, neither sound level nor duration had a notable effect on localization accuracy in azimuth, except at near-threshold levels. In contrast, localization accuracy in elevation improved as sound duration increased, and sound level also had a large effect on localization in elevation. For short-duration noise, the performance peaked at intermediate levels and deteriorated at low and high levels; for long-duration noise, this “negative level effect” at high levels was not observed. Simulations based on an auditory nerve model were used to explain the above observations and to test several hypotheses. Our results indicated that neither the flatness of sound spectrum (before the sound reaches the inner ear) nor the peripheral adaptation influences spectral coding at the periphery for localization in elevation, whereas neural computation that relies on “multiple looks” of the spectral analysis is critical in explaining the effect of sound duration, but not level. The release of negative level effect observed for long-duration sound could not be explained at the periphery and, therefore, is likely a result of processing at higher centers. PMID:23657278

  19. Sound source localization and segregation with internally coupled ears: the treefrog model

    PubMed Central

    Christensen-Dalsgaard, Jakob

    2016-01-01

    Acoustic signaling plays key roles in mediating many of the reproductive and social behaviors of anurans (frogs and toads). Moreover, acoustic signaling often occurs at night, in structurally complex habitats, such as densely vegetated ponds, and in dense breeding choruses characterized by high levels of background noise and acoustic clutter. Fundamental to anuran behavior is the ability of the auditory system to determine accurately the location from where sounds originate in space (sound source localization) and to assign specific sounds in the complex acoustic milieu of a chorus to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating directions for future research on these animals that will require the collaborative efforts of biologists, physicists, and roboticists. PMID:27730384

  1. Auditory Space Perception in Left- and Right-Handers

    ERIC Educational Resources Information Center

    Ocklenburg, Sebastian; Hirnstein, Marco; Hausmann, Markus; Lewald, Jorg

    2010-01-01

    Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via…

  2. Underwater hearing and sound localization with and without an air interface.

    PubMed

    Shupak, Avi; Sharoni, Zohara; Yanir, Yoav; Keynan, Yoav; Alfie, Yechezkel; Halpern, Pinchas

    2005-01-01

    This study asked whether underwater hearing acuity and sound localization are improved by the presence of an air interface around the pinnae and inside the external ear canals. Hearing threshold and the ability to localize sound sources are reduced underwater. The resonance frequency of the external ear is lowered when the external ear canal is filled with water, and the impedance-matching ability of the middle ear is significantly reduced due to elevation of the ambient pressure, the water-mass load on the tympanic membrane, and the addition of a fluid-air interface during submersion. Sound lateralization on land is largely explained by the mechanisms of interaural intensity differences and interaural temporal or phase differences. During submersion, these differences are largely lost due to the increase in underwater sound velocity and cancellation of the head's acoustic shadow effect because of the similarity between the impedance of the skull and the surrounding water. Ten scuba divers wearing a regular opaque face mask or an opaque ProEar 2000 (Safe Dive, Ltd., Hofit, Israel) mask that enables the presence of air at ambient pressure in and around the ear made a dive to a depth of 3 m in the open sea. Four underwater speakers arranged on the horizontal plane at 90-degree intervals and at a distance of 5 m from the diver were used for testing pure-tone hearing thresholds (PTHT), the reception threshold for the recorded sound of a rubber-boat engine, and sound localization. For sound localization, the sound of the rubber boat's engine was randomly delivered by one speaker at a time at 40 dB HL above the recorded sound of a rubber-boat engine, and the diver was asked to point to the sound source. The azimuth was measured by the diver's companion using a navigation board. Underwater PTHT with both masks were significantly higher for frequencies of 250 to 6000 Hz when compared with the thresholds on land (p <0.0001). No differences were found in the PTHT or the reception threshold for the recorded sound of a rubber-boat engine between dry and wet ear conditions. There was no difference in the sound localization error between the regular mask and the ProEar 2000 mask. The presence of air around the pinna and inside the external ear canal did not improve underwater hearing sensitivity or sound localization. These results support the argument that bone conduction plays the main role in underwater hearing.

  3. Difference in precedence effect between children and adults signifies development of sound localization abilities in complex listening tasks

    PubMed Central

    Litovsky, Ruth Y.; Godar, Shelly P.

    2010-01-01

    The precedence effect refers to the fact that humans are able to localize sound in reverberant environments, because the auditory system assigns greater weight to the direct sound (lead) than to the later-arriving sound (lag). In this study, absolute sound localization was studied for single-source stimuli and for dual-source lead-lag stimuli in 4–5 year old children and adults. Lead-lag delays ranged from 5–100 ms. Testing was conducted in free field, with pink noise bursts emitted from loudspeakers positioned on a horizontal arc in the frontal field. Listeners indicated how many sounds were heard and the perceived location of the first- and second-heard sounds. Results suggest that at short delays (up to 10 ms), the lead dominates sound localization strongly at both ages, and localization errors are similar to those with single-source stimuli. At longer delays errors can be large, stemming from over-integration of the lead and lag, interchanging of perceived locations of the first-heard and second-heard sounds due to temporal order confusion, and dominance of the lead over the lag. The errors are greater for children than adults. Results are discussed in the context of maturation of auditory and non-auditory factors. PMID:20968369

  4. Evaluation of Speech Intelligibility and Sound Localization Abilities with Hearing Aids Using Binaural Wireless Technology.

    PubMed

    Ibrahim, Iman; Parsa, Vijay; Macpherson, Ewan; Cheesman, Margaret

    2013-01-02

    Wireless synchronization of the digital signal processing (DSP) features between two hearing aids in a bilateral hearing aid fitting is a fairly new technology. This technology is expected to preserve the differences in time and intensity between the two ears by co-ordinating the bilateral DSP features such as multichannel compression, noise reduction, and adaptive directionality. The purpose of this study was to evaluate the benefits of wireless communication as implemented in two commercially available hearing aids. More specifically, this study measured speech intelligibility and sound localization abilities of normal hearing and hearing impaired listeners using bilateral hearing aids with wireless synchronization of multichannel Wide Dynamic Range Compression (WDRC). Twenty subjects participated; 8 had normal hearing and 12 had bilaterally symmetrical sensorineural hearing loss. Each individual completed the Hearing in Noise Test (HINT) and a sound localization test with two types of stimuli. No specific benefit from wireless WDRC synchronization was observed for the HINT; however, hearing impaired listeners had better localization with the wireless synchronization. Binaural wireless technology in hearing aids may improve localization abilities although the possible effect appears to be small at the initial fitting. With adaptation, the hearing aids with synchronized signal processing may lead to an improvement in localization and speech intelligibility. Further research is required to demonstrate the effect of adaptation to the hearing aids with synchronized signal processing on different aspects of auditory performance.

  5. Evaluation of auditory functions for Royal Canadian Mounted Police officers.

    PubMed

    Vaillancourt, Véronique; Laroche, Chantal; Giguère, Christian; Beaulieu, Marc-André; Legault, Jean-Pierre

    2011-06-01

    Auditory fitness for duty (AFFD) testing is an important element in an assessment of workers' ability to perform job tasks safely and effectively. Functional hearing is particularly critical to job performance in law enforcement. Most often, assessment is based on pure-tone detection thresholds; however, its validity can be questioned and challenged in court. In an attempt to move beyond the pure-tone audiogram, some organizations like the Royal Canadian Mounted Police (RCMP) are incorporating additional testing to supplement audiometric data in their AFFD protocols, such as measurements of speech recognition in quiet and/or in noise, and sound localization. This article reports on the assessment of RCMP officers wearing hearing aids in speech recognition and sound localization tasks. The purpose was to quantify individual performance in different domains of hearing identified as necessary components of fitness for duty, and to document the type of hearing aids prescribed in the field and their benefit for functional hearing. The data are to help RCMP in making more informed decisions regarding AFFD in officers wearing hearing aids. The proposed new AFFD protocol included unaided and aided measures of speech recognition in quiet and in noise using the Hearing in Noise Test (HINT) and sound localization in the left/right (L/R) and front/back (F/B) horizontal planes. Sixty-four officers were identified and selected by the RCMP to take part in this study on the basis of hearing thresholds exceeding current audiometrically based criteria. This article reports the results of 57 officers wearing hearing aids. Based on individual results, 49% of officers were reclassified from nonoperational status to operational with limitations on fine hearing duties, given their unaided and/or aided performance. Group data revealed that hearing aids (1) improved speech recognition thresholds on the HINT, the effects being most prominent in Quiet and in conditions of spatial separation between target and noise (Noise Right and Noise Left) and least considerable in Noise Front; (2) neither significantly improved nor impeded L/R localization; and (3) substantially increased F/B errors in localization in a number of cases. Additional analyses also pointed to the poor ability of threshold data to predict functional abilities for speech in noise (r² = 0.26 to 0.33) and sound localization (r² = 0.03 to 0.28). Only speech in quiet (r² = 0.68 to 0.85) is predicted adequately from threshold data. Combined with previous findings, results indicate that the use of hearing aids can considerably affect F/B localization abilities in a number of individuals. Moreover, speech understanding in noise and sound localization abilities were poorly predicted from pure-tone thresholds, demonstrating the need to specifically test these abilities, both unaided and aided, when assessing AFFD. Finally, further work is needed to develop empirically based hearing criteria for the RCMP and identify best practices in hearing aid fittings for optimal functional hearing abilities.

  6. Psychoacoustics

    NASA Astrophysics Data System (ADS)

    Moore, Brian C. J.

    Psychoacoustics is concerned with the relationships between the physical characteristics of sounds and their perceptual attributes. This chapter describes: the absolute sensitivity of the auditory system for detecting weak sounds and how that sensitivity varies with frequency; the frequency selectivity of the auditory system (the ability to resolve or hear out the sinusoidal components in a complex sound) and its characterization in terms of an array of auditory filters; the processes that influence the masking of one sound by another; the range of sound levels that can be processed by the auditory system; the perception and modeling of loudness; level discrimination; the temporal resolution of the auditory system (the ability to detect changes over time); the perception and modeling of pitch for pure and complex tones; the perception of timbre for steady and time-varying sounds; the perception of space and sound localization; and the mechanisms underlying auditory scene analysis that allow the construction of percepts corresponding to individual sound sources when listening to complex mixtures of sounds.

  7. Simulation and testing of a multichannel system for 3D sound localization

    NASA Astrophysics Data System (ADS)

    Matthews, Edward Albert

    Three-dimensional (3D) audio involves the ability to localize sound anywhere in a three-dimensional space. 3D audio can be used to provide the listener with the perception of moving sounds and can provide a realistic listening experience for applications such as gaming, video conferencing, movies, and concerts. The purpose of this research is to simulate and test 3D audio by incorporating auditory localization techniques in a multi-channel speaker system. The objective is to develop an algorithm that can place an audio event in a desired location by calculating and controlling the gain factors of each speaker. A MATLAB simulation displays the location of the speakers and perceived sound, which is verified through experimentation. The scenario in which the listener is not equidistant from each of the speakers is also investigated and simulated. This research is envisioned to lead to a better understanding of human localization of sound, and will contribute to a more realistic listening experience.
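
    One standard instance of the gain-factor idea described here is tangent-law amplitude panning between a loudspeaker pair; the ±45° layout below is an assumption for illustration, and the thesis's own algorithm may differ:

      import numpy as np

      def pan_gains(theta_deg, base_deg=45.0):
          """Tangent-law gains for a source at theta between +/-base_deg speakers."""
          t = np.tan(np.radians(theta_deg)) / np.tan(np.radians(base_deg))
          g_left, g_right = 1 - t, 1 + t     # tangent law: (gR-gL)/(gR+gL) = t
          norm = np.hypot(g_left, g_right)   # normalize to keep total power constant
          return g_left / norm, g_right / norm

      for angle in (-45, -20, 0, 20, 45):
          gl, gr = pan_gains(angle)
          print(f"source at {angle:+3d} deg -> gains L={gl:.2f}, R={gr:.2f}")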

  8. Understanding and mimicking the dual optimality of the fly ear

    NASA Astrophysics Data System (ADS)

    Liu, Haijun; Currano, Luke; Gee, Danny; Helms, Tristan; Yu, Miao

    2013-08-01

    The fly Ormia ochracea has the remarkable ability, given an eardrum separation of only 520 μm, to pinpoint the 5 kHz chirp of its cricket host. Previous research showed that the two eardrums are mechanically coupled, which amplifies the directional cues. We have now performed a mechanics and optimization analysis which reveals that the right coupling strength is key: it results in simultaneously optimized directional sensitivity and directional cue linearity at 5 kHz. We next demonstrated that this dual optimality is replicable in a synthetic device and can be tailored for a desired frequency. Finally, we demonstrated a miniature sensor endowed with this dual optimality at 8 kHz with unparalleled sound localization. This work provides a quantitative and mechanistic explanation for the fly's sound-localization ability from a new perspective, and it provides a framework for the development of fly-ear inspired sensors to overcome a previously insurmountable size constraint in engineered sound-localization systems.

  9. Horizontal sound localization in cochlear implant users with a contralateral hearing aid.

    PubMed

    Veugen, Lidwien C E; Hendrikse, Maartje M E; van Wanrooij, Marc M; Agterberg, Martijn J H; Chalupper, Josef; Mens, Lucas H M; Snik, Ad F M; John van Opstal, A

    2016-06-01

    Interaural differences in sound arrival time (ITD) and in level (ILD) enable us to localize sounds in the horizontal plane, and can support source segregation and speech understanding in noisy environments. It is uncertain whether these cues are also available to hearing-impaired listeners who are bimodally fitted, i.e. with a cochlear implant (CI) and a contralateral hearing aid (HA). Here, we assessed sound localization behavior of fourteen bimodal listeners, all using the same Phonak HA and an Advanced Bionics CI processor, matched with respect to loudness growth. We aimed to determine the availability and contribution of binaural (ILDs, temporal fine structure and envelope ITDs) and monaural (loudness, spectral) cues to horizontal sound localization in bimodal listeners, by systematically varying the frequency band, level and envelope of the stimuli. The sound bandwidth had a strong effect on the localization bias of bimodal listeners, although localization performance was typically poor for all conditions. Responses could be systematically changed by adjusting the frequency range of the stimulus, or by simply switching the HA and CI on and off. Localization responses were largely biased to one side, typically the CI side for broadband and high-pass filtered sounds, and occasionally to the HA side for low-pass filtered sounds. HA-aided thresholds better than 45 dB HL in the frequency range of the stimulus appeared to be a prerequisite, but not a guarantee, for the ability to indicate sound source direction. We argue that bimodal sound localization is likely based on ILD cues, even at frequencies below 1500 Hz for which the natural ILDs are small. These cues are typically perturbed in bimodal listeners, leading to a biased localization percept of sounds. The high accuracy of some listeners could result from a combination of sufficient spectral overlap and loudness balance in bimodal hearing.

  10. Auditory and visual localization accuracy in young children and adults.

    PubMed

    Martin, Karen; Johnstone, Patti; Hedrick, Mark

    2015-06-01

    This study aimed to measure and compare sound and light source localization ability in young children and adults who have normal hearing and normal/corrected vision in order to determine the extent to which age, type of stimuli, and stimulus order affects sound localization accuracy. Two experiments were conducted. The first involved a group of adults only. The second involved a group of 30 children aged 3 to 5 years. Testing occurred in a sound-treated booth containing a semi-circular array of 15 loudspeakers set at 10° intervals from -70° to 70° azimuth. Each loudspeaker had a tiny light bulb and a small picture fastened underneath. Seven of the loudspeakers were used to randomly test sound and light source identification. The sound stimulus was the word "baseball". The light stimulus was a flashing of a light bulb triggered by the digital signal of the word "baseball". Each participant was asked to face 0° azimuth, and identify the location of the test stimulus upon presentation. Adults used a computer mouse to click on an icon; children responded by verbally naming or walking toward the picture underneath the corresponding loudspeaker or light. A mixed experimental design using repeated measures was used to determine the effect of age and stimulus type on localization accuracy in children and adults. A mixed experimental design was used to compare the effect of stimulus order (light first/last) and varying or fixed intensity sound on localization accuracy in children and adults. Localization accuracy was significantly better for light stimuli than sound stimuli for children and adults. Children, compared to adults, showed significantly greater localization errors for audition. Three-year-old children had significantly greater sound localization errors compared to 4- and 5-year olds. Adults performed better on the sound localization task when the light localization task occurred first. Young children can understand and attend to localization tasks, but show poorer localization accuracy than adults in sound localization. This may be a reflection of differences in sensory modality development and/or central processes in young children, compared to adults.

  11. A review of the perceptual effects of hearing loss for frequencies above 3 kHz.

    PubMed

    Moore, Brian C J

    2016-12-01

    Hearing loss caused by exposure to intense sounds usually has its greatest effects on audiometric thresholds at 4 and 6 kHz. However, in several countries compensation for occupational noise-induced hearing loss is calculated using the average of audiometric thresholds for selected frequencies up to 3 kHz, based on the implicit assumption that hearing loss for frequencies above 3 kHz has no material adverse consequences. This paper assesses whether this assumption is correct. Studies are reviewed that evaluate the role of hearing for frequencies above 3 kHz. Several studies show that frequencies above 3 kHz are important for the perception of speech, especially when background sounds are present. Hearing at high frequencies is also important for sound localization, especially for resolving front-back confusions. Hearing for frequencies above 3 kHz is important for the ability to understand speech in background sounds and for the ability to localize sounds. The audiometric threshold at 4 kHz and perhaps 6 kHz should be taken into account when assessing hearing in a medico-legal context.

  12. Modeling the utility of binaural cues for underwater sound localization.

    PubMed

    Schneider, Jennifer N; Lloyd, David R; Banks, Patchouly N; Mercado, Eduardo

    2014-06-01

    The binaural cues used by terrestrial animals for sound localization in azimuth may not always suffice for accurate sound localization underwater. The purpose of this research was to examine the theoretical limits of interaural timing and level differences available underwater using computational and physical models. A paired-hydrophone system was used to record sounds transmitted underwater and recordings were analyzed using neural networks calibrated to reflect the auditory capabilities of terrestrial mammals. Estimates of source direction based on temporal differences were most accurate for frequencies between 0.5 and 1.75 kHz, with greater resolution toward the midline (2°), and lower resolution toward the periphery (9°). Level cues also changed systematically with source azimuth, even at lower frequencies than expected from theoretical calculations, suggesting that binaural mechanical coupling (e.g., through bone conduction) might, in principle, facilitate underwater sound localization. Overall, the relatively limited ability of the model to estimate source position using temporal and level difference cues underwater suggests that animals such as whales may use additional cues to accurately localize conspecifics and predators at long distances.
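
    The shrinkage of timing cues underwater follows directly from the higher sound speed: the largest available ITD drops by the ratio of the two speeds, roughly 4.4×. A back-of-envelope sketch with assumed spherical-head numbers:

      import numpy as np

      a = 0.09                     # assumed head radius, m
      path = a * (np.pi / 2 + 1)   # Woodworth extra path for a source at 90 deg

      for medium, c in (("air", 343.0), ("seawater", 1500.0)):
          print(f"{medium:8s}: max ITD ~ {path / c * 1e6:4.0f} us")
      # air     : ~675 us; seawater: ~154 us -- the same head yields far
      # smaller timing differences underwater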

  14. Sound localization in common vampire bats: Acuity and use of the binaural time cue by a small mammal

    PubMed Central

    Heffner, Rickye S.; Koay, Gimseong; Heffner, Henry E.

    2015-01-01

    Passive sound-localization acuity and the ability to use binaural time and intensity cues were determined for the common vampire bat (Desmodus rotundus). The bats were tested using a conditioned suppression/avoidance procedure in which they drank defibrinated blood from a spout in the presence of sounds from their right, but stopped drinking (i.e., broke contact with the spout) whenever a sound came from their left, thereby avoiding a mild shock. The mean minimum audible angle for three bats for a 100-ms noise burst was 13.1°—within the range of thresholds for other bats and near the mean for mammals. Common vampire bats readily localized pure tones of 20 kHz and higher, indicating they could use interaural intensity-differences. They could also localize pure tones of 5 kHz and lower, thereby demonstrating the use of interaural time-differences, despite their very small maximum interaural distance of 60 μs. A comparison of the use of locus cues among mammals suggests several implications for the evolution of sound localization and its underlying anatomical and physiological mechanisms. PMID:25618037

  15. Blind people are more sensitive than sighted people to binaural sound-location cues, particularly inter-aural level differences.

    PubMed

    Nilsson, Mats E; Schenkman, Bo N

    2016-02-01

    Blind people use auditory information to locate sound sources and sound-reflecting objects (echolocation). Sound source localization benefits from the hearing system's ability to suppress distracting sound reflections, whereas echolocation would benefit from "unsuppressing" these reflections. To clarify how these potentially conflicting aspects of spatial hearing interact in blind versus sighted listeners, we measured discrimination thresholds for two binaural location cues: inter-aural level differences (ILDs) and inter-aural time differences (ITDs). The ILDs or ITDs were present in single clicks, in the leading component of click pairs, or in the lagging component of click pairs, exploiting processes related to both sound source localization and echolocation. We tested 23 blind (mean age = 54 y), 23 sighted age-matched (mean age = 54 y), and 42 sighted young (mean age = 26 y) listeners. The results suggested greater ILD sensitivity for blind than for sighted listeners. The blind group's superiority was particularly evident for ILD-lag-click discrimination, suggesting not only enhanced ILD sensitivity in general but also an increased ability to unsuppress lagging clicks. This may be related to blind people's experience of localizing reflected sounds, for which ILDs may be more efficient than ITDs. On the ITD-discrimination tasks, the blind listeners performed better than the sighted age-matched listeners, but not better than the sighted young listeners. ITD sensitivity declines with age, and the equal performance of the blind listeners compared to a group of substantially younger listeners is consistent with the notion that blind people's experience may offset age-related decline in ITD sensitivity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  16. Monaural Sound Localization Revisited

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Kistler, Doris J.

    1997-01-01

    Research reported during the past few decades has revealed the importance for human sound localization of the so-called 'monaural spectral cues.' These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.

  17. Monaural sound localization revisited.

    PubMed

    Wightman, F L; Kistler, D J

    1997-02-01

    Research reported during the past few decades has revealed the importance for human sound localization of the so-called "monaural spectral cues." These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.

  18. Global warming alters sound transmission: differential impact on the prey detection ability of echolocating bats

    PubMed Central

    Luo, Jinhong; Koselj, Klemen; Zsebők, Sándor; Siemers, Björn M.; Goerlitz, Holger R.

    2014-01-01

    Climate change impacts the biogeography and phenology of plants and animals, yet the underlying mechanisms are little known. Here, we present a functional link between rising temperature and the prey detection ability of echolocating bats. The maximum distance for echo-based prey detection is physically determined by sound attenuation. Attenuation is more pronounced for high-frequency sound, such as echolocation, and is a nonlinear function of both call frequency and ambient temperature. Hence, the prey detection ability, and thus possibly the foraging efficiency, of echolocating bats is susceptible to rising temperatures through climate change. Using present-day climate data and projected temperature rises, we modelled this effect for the entire range of bat call frequencies and climate zones around the globe. We show that depending on call frequency, the prey detection volume of bats will either decrease or increase: species calling above a crossover frequency will lose and species emitting lower frequencies will gain prey detection volume, with crossover frequency and magnitude depending on the local climatic conditions. Within local species assemblages, this may cause a change in community composition. Global warming can thus directly affect the prey detection ability of individual bats and indirectly their interspecific interactions with competitors and prey. PMID:24335559
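
    The physical argument (detection range set by spherical spreading plus frequency- and temperature-dependent atmospheric absorption) can be illustrated with a toy sonar-equation calculation. The Python sketch below is not the paper's model: the absorption coefficients, source level, target strength, and detection threshold are made-up round numbers, whereas a real treatment would compute absorption from frequency, temperature, and humidity (e.g., per ISO 9613-1).

    ```python
    import numpy as np

    # The maximum echo-detection distance r solves
    #   SL - 2*(20*log10(r) + alpha*r) + TS = DT,
    # where alpha (dB/m) is atmospheric absorption, which grows with call
    # frequency and shifts with temperature. All values below are toy numbers.
    def detection_distance(source_level, target_strength, threshold, alpha,
                           r_lo=0.01, r_hi=100.0):
        def excess(r):  # received echo level minus detection threshold
            return (source_level - 2 * (20 * np.log10(r) + alpha * r)
                    + target_strength - threshold)
        for _ in range(60):        # bisection: excess() decreases with r
            mid = 0.5 * (r_lo + r_hi)
            if excess(mid) > 0:
                r_lo = mid
            else:
                r_hi = mid
        return 0.5 * (r_lo + r_hi)

    # A higher absorption coefficient (e.g., warmer air at a given call
    # frequency) shrinks the detection distance:
    for alpha in (0.5, 1.0, 2.0):  # dB/m, hypothetical
        print(alpha, round(detection_distance(120, -20, 20, alpha), 2), "m")
    ```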

  19. Global warming alters sound transmission: differential impact on the prey detection ability of echolocating bats.

    PubMed

    Luo, Jinhong; Koselj, Klemen; Zsebok, Sándor; Siemers, Björn M; Goerlitz, Holger R

    2014-02-06

    Climate change impacts the biogeography and phenology of plants and animals, yet the underlying mechanisms are little known. Here, we present a functional link between rising temperature and the prey detection ability of echolocating bats. The maximum distance for echo-based prey detection is physically determined by sound attenuation. Attenuation is more pronounced for high-frequency sound, such as echolocation, and is a nonlinear function of both call frequency and ambient temperature. Hence, the prey detection ability, and thus possibly the foraging efficiency, of echolocating bats is susceptible to rising temperatures through climate change. Using present-day climate data and projected temperature rises, we modelled this effect for the entire range of bat call frequencies and climate zones around the globe. We show that depending on call frequency, the prey detection volume of bats will either decrease or increase: species calling above a crossover frequency will lose and species emitting lower frequencies will gain prey detection volume, with crossover frequency and magnitude depending on the local climatic conditions. Within local species assemblages, this may cause a change in community composition. Global warming can thus directly affect the prey detection ability of individual bats and indirectly their interspecific interactions with competitors and prey.

  20. Neural Correlates of Sound Localization in Complex Acoustic Environments

    PubMed Central

    Zündorf, Ida C.; Lewald, Jörg; Karnath, Hans-Otto

    2013-01-01

    Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated, in healthy subjects, the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in the posterior superior temporal gyrus bilaterally, the anterior insula, the supplementary motor area, and a frontoparietal network. Moreover, the results indicated critical roles of the left planum temporale in extracting the sound of interest among acoustical distracters and of the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for analysis of spectrotemporal sound features. Furthermore, the precuneus, a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects, seems also to be a crucial area for accurately determining the locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality. PMID:23691185

  1. Hearing in alpacas (Vicugna pacos): audiogram, localization acuity, and use of binaural locus cues.

    PubMed

    Heffner, Rickye S; Koay, Gimseong; Heffner, Henry E

    2014-02-01

    Behavioral audiograms and sound localization abilities were determined for three alpacas (Vicugna pacos). Their hearing at a level of 60 dB sound pressure level (SPL) (re 20 μPa) extended from 40 Hz to 32.8 kHz, a range of 9.7 octaves. They were most sensitive at 8 kHz, with an average threshold of -0.5 dB SPL. The minimum audible angle around the midline for 100-ms broadband noise was 23°, indicating relatively poor localization acuity and potentially supporting the finding that animals with broad areas of best vision have poorer sound localization acuity. The alpacas were able to localize low-frequency pure tones, indicating that they can use the binaural phase cue, but they were unable to localize pure tones above the frequency of phase ambiguity, thus indicating complete inability to use the binaural intensity-difference cue. In contrast, the alpacas relied on their high-frequency hearing for pinna cues; they could discriminate front-back sound sources using 3-kHz high-pass noise, but not 3-kHz low-pass noise. These results are compared to those of other hoofed mammals and to mammals more generally.

  2. A Spiking Neural Network Model of the Medial Superior Olive Using Spike Timing Dependent Plasticity for Sound Localization

    PubMed Central

    Glackin, Brendan; Wall, Julie A.; McGinnity, Thomas M.; Maguire, Liam P.; McDaid, Liam J.

    2010-01-01

    Sound localization can be defined as the ability to identify the position of an input sound source and is considered a powerful aspect of mammalian perception. For low frequency sounds, i.e., in the range 270 Hz–1.5 kHz, the mammalian auditory pathway achieves this by extracting the Interaural Time Difference between sound signals being received by the left and right ear. This processing is performed in a region of the brain known as the Medial Superior Olive (MSO). This paper presents a Spiking Neural Network (SNN) based model of the MSO. The network model is trained with the Spike Timing Dependent Plasticity learning rule on experimentally observed Head Related Transfer Function data from an adult domestic cat. The results presented demonstrate how the proposed SNN model is able to perform sound localization with an accuracy of 91.82% when an error tolerance of ±10° is used. For angular resolutions down to 2.5°, it will be demonstrated how software-based simulations of the model incur significant computation times. The paper thus also addresses preliminary implementation on a Field Programmable Gate Array based hardware platform to accelerate system performance. PMID:20802855
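
    For orientation, the ITD-coincidence principle the MSO embodies can be illustrated with a much-simplified, non-spiking delay-line ("Jeffress-type") model, sketched in Python below. This is not the paper's STDP-trained spiking network; sampling rate, candidate ITD spacing, and signals are all illustrative.

    ```python
    import numpy as np

    # Each channel delays one ear's signal by a candidate ITD and scores the
    # coincidence with the other ear; the best-scoring channel is the estimate.
    fs = 48_000
    candidate_itds = np.arange(-400e-6, 401e-6, 50e-6)   # seconds

    def best_itd(left, right):
        # Convention: positive candidate ITD means the right-ear signal lags.
        scores = []
        for itd in candidate_itds:
            shift = int(round(itd * fs))
            if shift >= 0:
                score = np.dot(left[:len(left) - shift], right[shift:])
            else:
                score = np.dot(left[-shift:], right[:len(right) + shift])
            scores.append(score)
        return candidate_itds[int(np.argmax(scores))]

    # 500 Hz tone with a 200-microsecond ITD (right ear lags):
    t = np.arange(0, 0.1, 1 / fs)
    true_itd = 200e-6
    left = np.sin(2 * np.pi * 500 * t)
    right = np.sin(2 * np.pi * 500 * (t - true_itd))
    print(best_itd(left, right))   # -> 0.0002, the nearest candidate channel
    ```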

  3. Better protection from blasts without sacrificing situational awareness.

    PubMed

    Killion, Mead C; Monroe, Tim; Drambarean, Viorel

    2011-03-01

    A large number of soldiers returning from war report hearing loss and/or tinnitus. Many deployed soldiers decline to wear their hearing protection devices (HPDs) because they feel that earplugs interfere with their ability to detect and localize the enemy and their friends. The detection problem is easily handled in electronic devices with low-noise microphones. The localization problem is not as easy. In this paper, the factors that reduce situational awareness (hearing loss and restricted bandwidth in HPDs) are discussed in light of available data, followed by a review of the cues to localization. Two electronic blast plug earplugs with 16-kHz bandwidth are described. Both provide subjectively transparent sound with regard to sound quality and localization, i.e., they sound almost as if nothing is in the ears, while protecting the ears from blasts. Finally, two formal experiments are described that compared localization performance with popular existing military HPDs and with the open ear. The tested earplugs performed well in maintaining situational awareness. Detection-distance and acceptance studies are underway.

  4. Psychophysical investigation of an auditory spatial illusion in cats: the precedence effect.

    PubMed

    Tollin, Daniel J; Yin, Tom C T

    2003-10-01

    The precedence effect (PE) describes several spatial perceptual phenomena that occur when similar sounds are presented from two different locations and separated by a delay. The mechanisms that produce the effect are thought to be responsible for the ability to localize sounds in reverberant environments. Although the physiological bases for the PE have been studied, little is known about how these sounds are localized by species other than humans. Here we used the search coil technique to measure the eye positions of cats trained to saccade to the apparent locations of sounds. To study the PE, brief broadband stimuli were presented from two locations, with a delay between their onsets; the delayed sound was meant to simulate a single reflection. Although the cats accurately localized single sources, the apparent locations of the paired sources depended on the delay. First, the cats exhibited summing localization, the perception of a "phantom" sound located between the sources, for delays < ±400 μs for sources positioned in azimuth along the horizontal plane, but not for sources positioned in elevation along the sagittal plane. Second, consistent with localization dominance, for delays from 400 μs to about 10 ms, the cats oriented toward the leading source location only, with little influence of the lagging source, both for horizontally and vertically placed sources. Finally, the echo threshold was reached for delays >10 ms, where the cats first began to orient to the lagging source on some trials. These data reveal that cats experience the PE phenomena similarly to humans.
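
    The stimulus class used in precedence-effect experiments of this kind is easy to sketch: a brief broadband click routed to a "lead" loudspeaker plus a delayed copy routed to a "lag" loudspeaker standing in for a single reflection. The Python below is a hypothetical stimulus generator, with delays spanning the three perceptual regimes the study reports (summing localization, localization dominance, beyond echo threshold); all parameter values are illustrative.

    ```python
    import numpy as np

    fs = 96_000

    def click_pair(delay_s, click_len=32, total_len=4096, lag_gain=1.0):
        """Return (lead, lag) channels: a broadband click and its delayed copy."""
        rng = np.random.default_rng(1)
        click = rng.standard_normal(click_len)    # brief broadband transient
        lead = np.zeros(total_len)
        lag = np.zeros(total_len)
        lead[:click_len] = click
        start = int(round(delay_s * fs))
        lag[start:start + click_len] = lag_gain * click
        return lead, lag                          # route to the two loudspeakers

    # Delays spanning the perceptual regimes described for cats:
    for delay in (200e-6, 5e-3, 15e-3):  # summing / dominance / past echo threshold
        lead, lag = click_pair(delay)
    ```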

  5. Hoeren unter Wasser: Absolute Reizschwellen und Richtungswahrnehnumg (Underwater Hearing: Absolute Thresholds and Sound Localization),

    DTIC Science & Technology

    The article deals first with the theoretical foundations of underwater hearing, and the effects of the acoustical characteristics of water on hearing...lead to the conclusion that, in water, man can locate the direction of sound at low and at very high tonal frequencies of the audio range, but this ability probably vanishes in the middle range of frequencies. (Author)

  6. Large-scale Cortical Network Properties Predict Future Sound-to-Word Learning Success

    PubMed Central

    Sheppard, John Patrick; Wang, Ji-Ping; Wong, Patrick C. M.

    2013-01-01

    The human brain possesses a remarkable capacity to interpret and recall novel sounds as spoken language. These linguistic abilities arise from complex processing spanning a widely distributed cortical network and are characterized by marked individual variation. Recently, graph theoretical analysis has facilitated the exploration of how such aspects of large-scale brain functional organization may underlie cognitive performance. Brain functional networks are known to possess small-world topologies characterized by efficient global and local information transfer, but whether these properties relate to language learning abilities remains unknown. Here we applied graph theory to construct large-scale cortical functional networks from cerebral hemodynamic (fMRI) responses acquired during an auditory pitch discrimination task and found that such network properties were associated with participants’ future success in learning words of an artificial spoken language. Successful learners possessed networks with reduced local efficiency but increased global efficiency relative to less successful learners and had a more cost-efficient network organization. Regionally, successful and less successful learners exhibited differences in these network properties spanning bilateral prefrontal, parietal, and right temporal cortex, overlapping a core network of auditory language areas. These results suggest that efficient cortical network organization is associated with sound-to-word learning abilities among healthy, younger adults. PMID:22360625

  7. Bimodal benefits on objective and subjective outcomes for adult cochlear implant users.

    PubMed

    Heo, Ji-Hye; Lee, Jae-Hee; Lee, Won-Sang

    2013-09-01

    Given that only a few studies have focused on the bimodal benefits on objective and subjective outcomes and emphasized the importance of individual data, the present study aimed to measure the bimodal benefits on objective and subjective outcomes for adults with cochlear implants. Fourteen listeners with bimodal devices were tested on localization and recognition abilities using environmental sounds, 1-talker, and 2-talker speech materials. The localization ability was measured through an 8-loudspeaker array. For the recognition measures, listeners were asked to repeat the sentences or name the environmental sounds they heard. As a subjective questionnaire, three domains of the Korean version of the Speech, Spatial and Qualities of Hearing scale (K-SSQ) were used to explore any relationships between objective and subjective outcomes. Based on the group-mean data, bimodal hearing enhanced both localization and recognition regardless of test material. However, the inter- and intra-subject variability appeared to be large across test materials for both localization and recognition abilities. Correlation analyses revealed that the relationships were not always consistent between the objective outcomes and the subjective self-reports with bimodal devices. Overall, this study supports significant bimodal advantages on localization and recognition measures, yet the large individual variability in bimodal benefits should be considered carefully in clinical assessment as well as counseling. The discrepant relations between objective and subjective results suggest that the bimodal benefits in traditional localization or recognition measures might not necessarily correspond to the self-reported subjective advantages in everyday listening environments.

  8. Relative size of auditory pathways in symmetrically and asymmetrically eared owls.

    PubMed

    Gutiérrez-Ibáñez, Cristián; Iwaniuk, Andrew N; Wylie, Douglas R

    2011-01-01

    Owls are highly efficient predators with a specialized auditory system designed to aid in the localization of prey. One of the most unique anatomical features of the owl auditory system is the evolution of vertically asymmetrical ears in some species, which improves their ability to localize the elevational component of a sound stimulus. In the asymmetrically eared barn owl, interaural time differences (ITD) are used to localize sounds in azimuth, whereas interaural level differences (ILD) are used to localize sounds in elevation. These two features are processed independently in two separate neural pathways that converge in the external nucleus of the inferior colliculus to form an auditory map of space. Here, we present a comparison of the relative volume of 11 auditory nuclei in both the ITD and the ILD pathways of 8 species of symmetrically and asymmetrically eared owls in order to investigate evolutionary changes in the auditory pathways in relation to ear asymmetry. Overall, our results indicate that asymmetrically eared owls have much larger auditory nuclei than owls with symmetrical ears. In asymmetrically eared owls we found that both the ITD and ILD pathways are equally enlarged, and other auditory nuclei, not directly involved in binaural comparisons, are also enlarged. We suggest that the hypertrophy of auditory nuclei in asymmetrically eared owls likely reflects both an improved ability to precisely locate sounds in space and an expansion of the hearing range. Additionally, our results suggest that the hypertrophy of nuclei that compute space may have preceded that of the expansion of the hearing range and evolutionary changes in the size of the auditory system occurred independently of phylogeny. Copyright © 2011 S. Karger AG, Basel.

  9. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds

    PubMed Central

    Dietz, Mathias; Marquardt, Torsten; Salminen, Nelli H.; McAlpine, David

    2013-01-01

    The ability to locate the direction of a target sound in a background of competing sources is critical to the survival of many species and important for human communication. Nevertheless, the brain mechanisms that provide for such accurate localization abilities remain poorly understood. In particular, it remains unclear how the auditory brain is able to extract reliable spatial information directly from the source when competing sounds and reflections dominate all but the earliest moments of the sound wave reaching each ear. We developed a stimulus mimicking the mutual relationship of sound amplitude and binaural cues characteristic of reverberant speech. This stimulus, named the amplitude-modulated binaural beat, allows for a parametric and isolated change of modulation frequency and phase relations. Employing magnetoencephalography and psychoacoustics, it is demonstrated that the auditory brain uses binaural information in the stimulus fine structure only during the rising portion of each modulation cycle, rendering spatial information recoverable in an otherwise unlocalizable sound. The data suggest that amplitude modulation provides a means of “glimpsing” low-frequency spatial cues in a manner that benefits listening in noisy or reverberant environments. PMID:23980161

  10. Bimodal Benefits on Objective and Subjective Outcomes for Adult Cochlear Implant Users

    PubMed Central

    Heo, Ji-Hye; Lee, Won-Sang

    2013-01-01

    Background and Objectives Given that only a few studies have focused on the bimodal benefits on objective and subjective outcomes and emphasized the importance of individual data, the present study aimed to measure the bimodal benefits on objective and subjective outcomes for adults with cochlear implants. Subjects and Methods Fourteen listeners with bimodal devices were tested on localization and recognition abilities using environmental sounds, 1-talker, and 2-talker speech materials. The localization ability was measured through an 8-loudspeaker array. For the recognition measures, listeners were asked to repeat the sentences or name the environmental sounds they heard. As a subjective questionnaire, three domains of the Korean version of the Speech, Spatial and Qualities of Hearing scale (K-SSQ) were used to explore any relationships between objective and subjective outcomes. Results Based on the group-mean data, bimodal hearing enhanced both localization and recognition regardless of test material. However, the inter- and intra-subject variability appeared to be large across test materials for both localization and recognition abilities. Correlation analyses revealed that the relationships were not always consistent between the objective outcomes and the subjective self-reports with bimodal devices. Conclusions Overall, this study supports significant bimodal advantages on localization and recognition measures, yet the large individual variability in bimodal benefits should be considered carefully in clinical assessment as well as counseling. The discrepant relations between objective and subjective results suggest that the bimodal benefits in traditional localization or recognition measures might not necessarily correspond to the self-reported subjective advantages in everyday listening environments. PMID:24653909

  11. Auditory plasticity in deaf children with bilateral cochlear implants

    NASA Astrophysics Data System (ADS)

    Litovsky, Ruth

    2005-04-01

    Human children with cochlear implants represent a unique population of individuals who have undergone variable amounts of auditory deprivation prior to being able to hear. Even more unique are children who received bilateral cochlear implants (BICIs), in sequential surgical procedures, several years apart. Auditory deprivation in these individuals consists of a two-stage process, whereby complete deafness is experienced initially, followed by deafness in one ear. We studied the effects of post-implant experience on the ability of deaf children to localize sounds and to understand speech in noise. These are two of the most important functions that are known to depend on binaural hearing. Children were tested at time intervals ranging from 3 months to 24 months following implantation of the second ear, while listening with either implant alone or bilaterally. Our findings suggest that the period during which plasticity occurs in the human binaural system is protracted, extending into middle-to-late childhood. The rate at which benefits from bilateral hearing abilities are attained following deprivation is faster for speech intelligibility in noise compared with sound localization. Finally, the age at which the second implant was received may play an important role in the acquisition of binaural abilities. [Work supported by NIH-NIDCD.]

  12. 75 FR 39665 - Marine Mammals; File No. 14791

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-12

    ... research on North Atlantic right whales (Eubalaena glacialis). ADDRESSES: The permit and related documents... objective is to determine: (1) the natural behavioral patterns right whales exhibit to approaching vessels and (2) the ability of right whales to localize and detect vessels and other sounds in their...

  13. Cognitive abilities relate to self-reported hearing disability.

    PubMed

    Zekveld, Adriana A; George, Erwin L J; Houtgast, Tammo; Kramer, Sophia E

    2013-10-01

    In this explorative study, the authors investigated the relationship between auditory and cognitive abilities and self-reported hearing disability. Thirty-two adults with mild to moderate hearing loss completed the Amsterdam Inventory for Auditory Disability and Handicap (AIADH; Kramer, Kapteyn, Festen, & Tobi, 1996) and performed the Text Reception Threshold (TRT; Zekveld, George, Kramer, Goverts, & Houtgast, 2007) test as well as tests of spatial working memory (SWM) and visual sustained attention. Regression analyses examined the predictive value of age, hearing thresholds (pure-tone averages [PTAs]), speech perception in noise (speech reception thresholds in noise [SRTNs]), and the cognitive tests for the 5 AIADH factors. Besides the variance explained by age, PTA, and SRTN, cognitive abilities were related to each hearing factor. The reported difficulties with sound detection and speech perception in quiet were less severe for participants with higher age, lower PTAs, and better TRTs. Fewer sound localization and speech perception in noise problems were reported by participants with better SRTNs and smaller SWM. Fewer sound discrimination difficulties were reported by subjects with better SRTNs and TRTs and smaller SWM. The results suggest a general role of the ability to read partly masked text in subjective hearing. Large working memory was associated with more reported hearing difficulties. This study shows that besides auditory variables and age, cognitive abilities are related to self-reported hearing disability.
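
    As a rough illustration of the regression approach described (predicting a self-report factor from age, PTA, SRTN, and a cognitive score), the Python sketch below fits an ordinary-least-squares model. The data are clearly labeled synthetic placeholders; none of the numbers come from the study.

    ```python
    import numpy as np

    # Synthetic placeholder data purely to show the mechanics of the analysis.
    rng = np.random.default_rng(3)
    n = 32
    X = np.column_stack([np.ones(n),                 # intercept
                         rng.uniform(40, 80, n),     # age, years
                         rng.uniform(25, 60, n),     # PTA, dB HL
                         rng.uniform(-6, 2, n),      # SRTN, dB SNR
                         rng.uniform(-3, 3, n)])     # TRT, z-score
    true_beta = np.array([1.0, 0.01, 0.03, 0.3, -0.2])
    y = X @ true_beta + rng.standard_normal(n) * 0.5 # simulated factor score

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # fitted coefficients
    print(beta)
    ```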

  14. Experimental localization of an acoustic sound source in a wind-tunnel flow by using a numerical time-reversal technique.

    PubMed

    Padois, Thomas; Prax, Christian; Valeau, Vincent; Marx, David

    2012-10-01

    The possibility of using the time-reversal technique to localize acoustic sources in a wind-tunnel flow is investigated. While the technique is widespread, it has scarcely been used in aeroacoustics up to now. The proposed method consists of two steps: in a first experimental step, the acoustic pressure fluctuations are recorded over a linear array of microphones; in a second numerical step, the experimental data are time-reversed and used as input data for a numerical code solving the linearized Euler equations. The simulation achieves the back-propagation of the waves from the array to the source and takes into account the effect of the mean flow on sound propagation. The ability of the method to localize a sound source in a typical wind-tunnel flow is first demonstrated using simulated data. A generic experiment is then set up in an anechoic wind tunnel to validate the proposed method with a flow at Mach number 0.11. Monopolar sources are first considered that are either monochromatic or have a narrow or wide-band frequency content. The source position estimation is well-achieved with an error smaller than the wavelength. An application to a dipolar sound source shows that this type of source is also very satisfactorily characterized.
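
    The first, purely numerical ingredient of the method is simple to sketch: reverse the recorded array signals in time and re-propagate them toward a grid of candidate source points. The Python below substitutes a free-field delay-and-sum refocusing step (no mean flow) for the paper's linearized-Euler solver, so it conveys only the time-reversal idea; the geometry, rates, and signals are hypothetical.

    ```python
    import numpy as np

    c = 343.0           # speed of sound in air, m/s
    fs = 50_000         # sampling rate, Hz

    def time_reversal_focus(recordings, mic_xy, grid_xy):
        """recordings: (n_mics, n_samples) array of pressure signals."""
        reversed_sig = recordings[:, ::-1]          # step 1: reverse time
        energy = []
        for gx, gy in grid_xy:                      # step 2: refocus on a grid
            acc = np.zeros(recordings.shape[1])
            for sig, (mx, my) in zip(reversed_sig, mic_xy):
                d = int(round(np.hypot(gx - mx, gy - my) / c * fs))
                if d >= len(sig):
                    continue
                acc[d:] += sig[:len(sig) - d]       # delay by propagation time
            energy.append(np.max(acc ** 2))         # coherent peak at the source
        return np.array(energy)                     # argmax over grid -> estimate

    # Tiny demo: two mics, impulsive source at (0.5, 1.0), hypothetical geometry.
    mic_xy = [(0.0, 0.0), (1.0, 0.0)]
    grid_xy = [(0.5, 1.0), (0.2, 0.5), (0.8, 1.5)]
    rec = np.zeros((2, 2048))
    for i, (mx, my) in enumerate(mic_xy):
        d = int(round(np.hypot(0.5 - mx, 1.0 - my) / c * fs))
        rec[i, 100 + d] = 1.0                       # impulse arrives after delay d
    energy = time_reversal_focus(rec, mic_xy, grid_xy)
    print(grid_xy[int(np.argmax(energy))])          # -> (0.5, 1.0)
    ```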

  15. Aeroacoustics of Flight Vehicles: Theory and Practice. Volume 2. Noise Control

    DTIC Science & Technology

    1991-08-01

    Localization and Precedence: The ability to determine the location of sound sources is one of the major benefits of having binaural hearing... binaural hearing is commonly called the Haas, or precedence, effect (ref. 16). This refers to the ability to hear as a single acoustic event the...propellers are operated at slightly different rpm values, beating interference between the two sources occurs, and the noise level in the cabin rises and

  16. Four-choice sound localization abilities of two Florida manatees, Trichechus manatus latirostris.

    PubMed

    Colbert, Debborah E; Gaspard, Joseph C; Reep, Roger; Mann, David A; Bauer, Gordon B

    2009-07-01

    The absolute sound localization abilities of two Florida manatees (Trichechus manatus latirostris) were measured using a four-choice discrimination paradigm, with test locations positioned at 45 deg., 90 deg., 270 deg. and 315 deg. angles relative to subjects facing 0 deg. Three broadband signals were tested at four durations (200, 500, 1000, 3000 ms), including a stimulus that spanned a wide range of frequencies (0.2-20 kHz), one stimulus that was restricted to frequencies with wavelengths shorter than their interaural time distances (6-20 kHz) and one that was limited to those with wavelengths longer than their interaural time distances (0.2-2 kHz). Two 3000 ms tonal signals were tested, including a 4 kHz stimulus, which is the midpoint of the 2.5-5.9 kHz fundamental frequency range of manatee vocalizations and a 16 kHz stimulus, which is in the range of manatee best-hearing sensitivity. Percentage correct within the broadband conditions ranged from 79% to 93% for Subject 1 and from 51% to 93% for Subject 2. Both performed above chance with the tonal signals but had much lower accuracy than with broadband signals, with Subject 1 at 44% and 33% and Subject 2 at 49% and 32% at the 4 kHz and 16 kHz conditions, respectively. These results demonstrate that manatees are able to localize frequency bands with wavelengths that are both shorter and longer than their interaural time distances and suggest that they have the ability to localize both manatee vocalizations and recreational boat engine noises.
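
    In a four-alternative task chance performance is 25%, so whether a score such as 32% or 44% reflects genuine above-chance localization can be checked with a one-sided binomial test, as in the hedged Python sketch below; the trial count is hypothetical, not taken from the study.

    ```python
    from scipy.stats import binomtest

    # Four-choice task: chance is p = 0.25. Does the observed hit count exceed it?
    n_trials = 100                          # hypothetical number of trials
    n_correct = 44                          # e.g., 44% correct on a tonal condition
    result = binomtest(n_correct, n_trials, p=0.25, alternative="greater")
    print(result.pvalue)                    # small p -> above-chance localization
    ```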

  17. The natural history of sound localization in mammals--a story of neuronal inhibition.

    PubMed

    Grothe, Benedikt; Pecka, Michael

    2014-01-01

    Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as the coding strategies between avian and mammalian model systems. Here we argue that mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds.

  18. The natural history of sound localization in mammals – a story of neuronal inhibition

    PubMed Central

    Grothe, Benedikt; Pecka, Michael

    2014-01-01

    Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as the coding strategies between avian and mammalian model systems. Here we argue that mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds. PMID:25324726

  19. Dynamic Temporal Processing of Nonspeech Acoustic Information by Children with Specific Language Impairment.

    ERIC Educational Resources Information Center

    Visto, Jane C.; And Others

    1996-01-01

    Ten children (ages 12-16) with specific language impairments (SLI) and controls matched for chronological or language age were tested with measures of complex sound localization involving the precedence effect phenomenon. SLI children exhibited tracking skills similar to language-age matched controls, indicating impairment in their ability to use…

  20. Tonotopic alterations in inhibitory input to the medial nucleus of the trapezoid body in a mouse model of Fragile X syndrome.

    PubMed

    McCullagh, Elizabeth A; Salcedo, Ernesto; Huntsman, Molly M; Klug, Achim

    2017-11-01

    Hyperexcitability and the imbalance of excitation/inhibition are among the leading causes of abnormal sensory processing in Fragile X syndrome (FXS). The precise timing and distribution of excitation and inhibition is crucial for auditory processing at the level of the auditory brainstem, which is responsible for sound localization ability. Sound localization is one of the sensory abilities disrupted by loss of the Fragile X Mental Retardation 1 (Fmr1) gene. Using triple immunofluorescence staining, we tested whether there were alterations in the number and size of presynaptic structures for the three primary neurotransmitters (glutamate, glycine, and GABA) in the auditory brainstem of Fmr1 knockout mice. We found decreases in either glycinergic or GABAergic inhibition to the medial nucleus of the trapezoid body (MNTB) specific to the tonotopic location within the nucleus. MNTB is one of the primary inhibitory nuclei in the auditory brainstem and participates in the sound localization process with fast and well-timed inhibition. Thus, a decrease in inhibitory afferents to MNTB neurons should lead to greater inhibitory output to the projections from this nucleus. In contrast, we did not see any other significant alterations in balance of excitation/inhibition in any of the other auditory brainstem nuclei measured, suggesting that the alterations observed in the MNTB are both nucleus and frequency specific. We furthermore show that glycinergic inhibition may be an important contributor to imbalances in excitation and inhibition in FXS and that the auditory brainstem is a useful circuit for testing these imbalances. © 2017 Wiley Periodicals, Inc.

  1. Localization of sound in rooms. V. Binaural coherence and human sensitivity to interaural time differences in noise

    PubMed Central

    Rakerd, Brad; Hartmann, William M.

    2010-01-01

    Binaural recordings of noise in rooms were used to determine the relationship between binaural coherence and the effectiveness of the interaural time difference (ITD) as a cue for human sound localization. Experiments showed a strong, monotonic relationship between the coherence and a listener’s ability to discriminate values of ITD. The relationship was found to be independent of other, widely varying acoustical properties of the rooms. However, the relationship varied dramatically with noise band center frequency. The ability to discriminate small ITD changes was greatest for a mid-frequency band. To achieve sensitivity comparable to mid-band, the binaural coherence had to be much larger at high frequency, where waveform ITD cues are imperceptible, and also at low frequency, where the binaural coherence in a room is necessarily large. Rivalry experiments with opposing interaural level differences (ILDs) found that the trading ratio between ITD and ILD increasingly favored the ILD as coherence decreased, suggesting that the perceptual weight of the ITD is decreased by increased reflections in rooms. PMID:21110600
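
    Binaural coherence can be operationalized as the peak of the normalized interaural cross-correlation within the physiological lag range. The Python sketch below follows that common definition, omitting the band-filtering the study applies; function and parameter names are illustrative.

    ```python
    import numpy as np

    def binaural_coherence(left, right, fs, max_lag_s=1e-3):
        """Peak of the normalized interaural cross-correlation within +/-1 ms."""
        left = (left - left.mean()) / (left.std() + 1e-12)
        right = (right - right.mean()) / (right.std() + 1e-12)
        n = min(len(left), len(right))
        max_lag = int(max_lag_s * fs)
        coh = 0.0
        for k in range(-max_lag, max_lag + 1):
            if k >= 0:
                c = np.mean(left[k:n] * right[:n - k])
            else:
                c = np.mean(left[:n + k] * right[-k:n])
            coh = max(coh, abs(c))
        return coh    # ~1.0 for identical ears, -> 0 in diffuse reverberation

    fs = 48_000
    rng = np.random.default_rng(4)
    x = rng.standard_normal(fs // 10)
    print(binaural_coherence(x, x, fs))    # ~1.0 for identical signals
    ```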

  2. Altitude-dependent changes of directional hearing in mountaineers.

    PubMed Central

    Rosenberg, M E; Pollard, A J

    1992-01-01

    This study demonstrates an apparent deterioration in the ability to localize sound associated with acute exposure to high altitude in ten subjects on three mountaineering expeditions. Furthermore, the auditory localization errors improved to sea-level values after a period of acclimatization. Because these effects occur at altitudes where overt neurological symptoms are not usually seen, impairment of sensory perception may help explain the increase in accidental deaths associated with altitude exposure, due to disorientation and misjudgment, before hypoxia is evident. PMID:1422652

  3. How Nemo finds home: the neuroecology of dispersal and of population connectivity in larvae of marine fishes.

    PubMed

    Leis, Jeffrey M; Siebeck, Ulrike; Dixson, Danielle L

    2011-11-01

    Nearly all demersal teleost marine fishes have pelagic larval stages lasting from several days to several weeks, during which time they are subject to dispersal. Fish larvae have considerable swimming abilities, and swim in an oriented manner in the sea. Thus, they can influence their dispersal and thereby the connectivity of their populations. However, the sensory cues marine fish larvae use for orientation in the pelagic environment remain unclear. We review current understanding of these cues and how sensory abilities of larvae develop and are used to achieve orientation, with particular emphasis on coral-reef fishes. The use of sound is best understood; it travels well underwater with little attenuation, and is current-independent but location-dependent, so species that primarily utilize sound for orientation will have location-dependent orientation. Larvae of many species and families can hear over a range of ~100-1000 Hz, and can distinguish among sounds. They can localize sources of sounds, but the means by which they do so is unclear. Larvae can hear during much of their pelagic larval phase, and ontogenetically, hearing sensitivity and frequency range improve dramatically. Species differ in sensitivity to sound and in the rate of improvement in hearing during ontogeny. Due to large differences among species within families, no significant differences in hearing sensitivity among families have been identified. Thus, distances over which larvae can detect a given sound vary among species and greatly increase ontogenetically. Olfactory cues are current-dependent and location-dependent, so species that primarily utilize olfactory cues will have location-dependent orientation, but must be able to swim upstream to locate sources of odor. Larvae can detect odors (e.g., predators, conspecifics) during most of their pelagic phase and, at least on small scales, can localize sources of odors in shallow water, although whether they can do this in pelagic environments is unknown. Little is known of the ontogeny of olfactory ability or the range over which larvae can localize sources of odors. Imprinting on an odor has been shown in one species of reef-fish. Celestial cues are current- and location-independent, so species that primarily utilize them will have location-independent orientation that can apply over broad scales. Use of a sun compass or polarized light for orientation by fish larvae is implied by some behaviors, but has not been proven. Neither magnetic fields nor the direction of waves has been shown to be used for orientation by marine fish larvae. We highlight research priorities in this area. © The Author 2011. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved.

  4. 75 FR 34634 - Special Local Regulation; Swim Across the Sound, Long Island Sound, Port Jefferson, NY to Captain...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-18

    ...-AA08 Special Local Regulation; Swim Across the Sound, Long Island Sound, Port Jefferson, NY to Captain... Guard is establishing a permanent Special Local Regulation on the navigable waters of Long Island Sound... Sound event. This special local regulation is necessary to provide for the safety of life by protecting...

  5. Improvements of sound localization abilities by the facial ruff of the barn owl (Tyto alba) as demonstrated by virtual ruff removal.

    PubMed

    Hausmann, Laura; von Campenhausen, Mark; Endler, Frank; Singheiser, Martin; Wagner, Hermann

    2009-11-05

    When sound arrives at the eardrum it has already been filtered by the body, head, and outer ear. This process is mathematically described by the head-related transfer functions (HRTFs), which are characteristic for the spatial position of a sound source and for the individual ear. HRTFs in the barn owl (Tyto alba) are also shaped by the facial ruff, a specialization that alters interaural time differences (ITD), interaural intensity differences (ILD), and the frequency spectrum of the incoming sound to improve sound localization. Here we created novel stimuli to simulate the removal of the barn owl's ruff in a virtual acoustic environment, thus creating a situation similar to passive listening in other animals, and used these stimuli in behavioral tests. HRTFs were recorded from an owl before and after removal of the ruff feathers. Normal and ruff-removed conditions were created by filtering broadband noise with the HRTFs. Under normal virtual conditions, no differences in azimuthal head-turning behavior between individualized and non-individualized HRTFs were observed. The owls were able to respond differently to stimuli from the back than to stimuli from the front having the same ITD. By contrast, such a discrimination was not possible after the virtual removal of the ruff. Elevational head-turn angles were (slightly) smaller with non-individualized than with individualized HRTFs. The removal of the ruff resulted in a large decrease in elevational head-turning amplitudes. The facial ruff a) improves azimuthal sound localization by increasing the ITD range and b) improves elevational sound localization in the frontal field by introducing a shift of iso-ILD lines out of the midsagittal plane, which causes ILDs to increase with increasing stimulus elevation. The changes at the behavioral level could be related to the changes in the binaural physical parameters that occurred after the virtual removal of the ruff. These data provide new insights into the function of external hearing structures and open up the possibility of applying the results to autonomous agents, the creation of virtual auditory environments for humans, or hearing aids.
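
    The virtual-auditory-space technique at the heart of this study amounts to convolving one broadband noise token with the left- and right-ear head-related impulse responses (HRIRs) measured for a source direction and presenting the two results over headphones. The Python sketch below uses random placeholder HRIRs standing in for measured owl data; all lengths and rates are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    fs = 48_000
    noise = rng.standard_normal(fs // 2)                    # 500 ms broadband noise
    hrir_left = rng.standard_normal(256) * np.hanning(256)  # placeholder HRIRs,
    hrir_right = rng.standard_normal(256) * np.hanning(256) # not measured data

    # Filter the same noise with each ear's impulse response for one direction.
    stim_left = np.convolve(noise, hrir_left)[:len(noise)]
    stim_right = np.convolve(noise, hrir_right)[:len(noise)]
    # Presenting (stim_left, stim_right) over headphones recreates that direction;
    # swapping in ruff-removed HRIRs simulates the "virtual ruff removal".
    ```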

  6. Technology, Sound and Popular Music.

    ERIC Educational Resources Information Center

    Jones, Steve

    The ability to record sound is power over sound. Musicians, producers, recording engineers, and the popular music audience often refer to the sound of a recording as something distinct from the music it contains. Popular music is primarily mediated via electronics, via sound, and not by means of written notes. The ability to preserve or modify…

  7. Perception of environmental sounds by experienced cochlear implant patients.

    PubMed

    Shafiro, Valeriy; Gygi, Brian; Cheng, Min-Yu; Vachhani, Jay; Mulvey, Megan

    2011-01-01

    Environmental sound perception serves an important ecological function by providing listeners with information about objects and events in their immediate environment. Environmental sounds such as car horns, baby cries, or chirping birds can alert listeners to imminent dangers as well as contribute to one's sense of awareness and well-being. Perception of environmental sounds as acoustically and semantically complex stimuli may also involve some factors common to the processing of speech. However, very limited research has investigated the abilities of cochlear implant (CI) patients to identify common environmental sounds, despite patients' general enthusiasm about them. This project (1) investigated the ability of patients with modern-day CIs to perceive environmental sounds, (2) explored associations among speech, environmental sounds, and basic auditory abilities, and (3) examined acoustic factors that might be involved in environmental sound perception. Seventeen experienced postlingually deafened CI patients participated in the study. Environmental sound perception was assessed with a large-item test composed of 40 sound sources, each represented by four different tokens. The relationship between speech and environmental sound perception and the role of working memory and some basic auditory abilities were examined based on patient performance on a battery of speech tests (HINT, CNC, and individual consonant and vowel tests), tests of basic auditory abilities (audiometric thresholds, gap detection, temporal pattern, and temporal order for tones tests), and a backward digit recall test. The results indicated substantially reduced ability to identify common environmental sounds in CI patients (45.3%). Except for vowels, all speech test scores significantly correlated with the environmental sound test scores: r = 0.73 for HINT in quiet, r = 0.69 for HINT in noise, r = 0.70 for CNC, r = 0.64 for consonants, and r = 0.48 for vowels. HINT and CNC scores in quiet moderately correlated with the temporal order for tones. However, the correlation between speech and environmental sounds changed little after partialling out the variance due to other variables. The present findings indicate that environmental sound identification is difficult for CI patients. They further suggest that speech and environmental sounds may overlap considerably in their perceptual processing. Certain spectrotemporal processing abilities are separately associated with speech and environmental sound performance. However, they do not appear to mediate the relationship between speech and environmental sounds in CI patients. Environmental sound rehabilitation may be beneficial to some patients. Environmental sound testing may have potential diagnostic applications, especially with difficult-to-test populations, and might also be predictive of speech performance for prelingually deafened patients with cochlear implants.

  8. Object localization using a biosonar beam: how opening your mouth improves localization.

    PubMed

    Arditi, G; Weiss, A J; Yovel, Y

    2015-08-01

    Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions.
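
    A common idealization of a mouth-like emitter is a circular piston, whose beam narrows as frequency or aperture radius (mouth gape) grows; that relationship is the heart of the trade-off described. The Python sketch below is a generic piston-directivity calculation, not necessarily the exact emitter model of the paper, and all numbers are illustrative.

    ```python
    import numpy as np
    from scipy.special import j1

    def piston_beam(theta, freq_hz, radius_m, c=343.0):
        """Off-axis pressure gain of an ideal circular piston (1.0 on axis)."""
        k = 2 * np.pi * freq_hz / c
        x = k * radius_m * np.sin(theta)
        x = np.where(np.abs(x) < 1e-9, 1e-9, x)   # avoid 0/0 at boresight
        return np.abs(2 * j1(x) / x)

    theta = np.radians(30.0)                      # fixed off-axis angle
    for freq, gape in [(35e3, 0.004), (35e3, 0.008), (70e3, 0.008)]:
        print(freq, gape, round(float(piston_beam(theta, freq, gape)), 3))
    # Larger gape or higher frequency -> lower off-axis gain (narrower beam),
    # which spreads different frequencies into different directions.
    ```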

  9. Object localization using a biosonar beam: how opening your mouth improves localization

    PubMed Central

    Arditi, G.; Weiss, A. J.; Yovel, Y.

    2015-01-01

    Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions. PMID:26361552

  10. Directional Hearing and Sound Source Localization in Fishes.

    PubMed

    Sisneros, Joseph A; Rogers, Peter H

    2016-01-01

    Evidence suggests that the capacity for sound source localization is common to mammals, birds, reptiles, and amphibians, but surprisingly it is not known whether fish locate sound sources in the same manner (e.g., combining binaural and monaural cues) or what computational strategies they use for successful source localization. Directional hearing and sound source localization in fishes continue to be important topics in neuroethology and in the hearing sciences, but the empirical and theoretical work on these topics has been contradictory and obscure for decades. This chapter reviews the previous behavioral work on directional hearing and sound source localization in fishes, including the most recent experiments on sound source localization by the plainfin midshipman fish (Porichthys notatus), which has proven to be an exceptional species for fish studies of sound localization. In addition, the theoretical models of directional hearing and sound source localization for fishes are reviewed, including a new model that uses a time-averaged intensity approach for source localization that has wide applicability with regard to source type, acoustic environment, and time waveform.
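
    The time-averaged intensity idea can be made concrete: averaging the product of pressure and particle velocity yields a vector pointing along the direction of energy flow. The Python sketch below demonstrates this on a synthetic plane wave; it illustrates the general acoustic principle, not the reviewed model itself, and all values are illustrative.

    ```python
    import numpy as np

    rho, c, fs, f = 1000.0, 1500.0, 48_000, 400.0     # water; Hz
    t = np.arange(0, 0.1, 1 / fs)
    azimuth = np.radians(35.0)                        # true propagation direction

    p = np.cos(2 * np.pi * f * t)                     # pressure (arbitrary units)
    v = p / (rho * c)                                 # plane-wave particle speed
    vx, vy = v * np.cos(azimuth), v * np.sin(azimuth)

    ix, iy = np.mean(p * vx), np.mean(p * vy)         # time-averaged intensity
    print(np.degrees(np.arctan2(iy, ix)))             # recovers ~35 degrees
    ```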

  11. Re-Sonification of Objects, Events, and Environments

    NASA Astrophysics Data System (ADS)

    Fink, Alex M.

    Digital sound synthesis allows the creation of a great variety of sounds. Focusing on interesting or ecologically valid sounds for music, simulation, aesthetics, or other purposes limits the otherwise vast digital audio palette. Tools for creating such sounds vary from arbitrary methods of altering recordings to precise simulations of vibrating objects. In this work, methods of sound synthesis by re-sonification are considered. Re-sonification, herein, refers to the general process of analyzing, possibly transforming, and resynthesizing or reusing recorded sounds in meaningful ways, to convey information. Applied to soundscapes, re-sonification is presented as a means of conveying activity within an environment. Applied to the sounds of objects, this work examines modeling the perception of objects as well as their physical properties and the ability to simulate interactive events with such objects. To create soundscapes to re-sonify geographic environments, a method of automated soundscape design is presented. Using recorded sounds that are classified based on acoustic, social, semantic, and geographic information, this method produces stochastically generated soundscapes to re-sonify selected geographic areas. Drawing on prior knowledge, local sounds and those deemed similar comprise a locale's soundscape. In the context of re-sonifying events, this work examines processes for modeling and estimating the excitations of sounding objects. These include plucking, striking, rubbing, and any interaction that imparts energy into a system, affecting the resultant sound. A method of estimating a linear system's input, constrained to a signal-subspace, is presented and applied toward improving the estimation of percussive excitations for re-sonification. To work toward robust recording-based modeling and re-sonification of objects, new implementations of banded waveguide (BWG) models are proposed for object modeling and sound synthesis. Previous implementations of BWGs use arbitrary model parameters and may produce a range of simulations that do not match digital waveguide or modal models of the same design. Subject to linear excitations, some models proposed here behave identically to other equivalently designed physical models. Under nonlinear interactions, such as bowing, many of the proposed implementations exhibit improvements in the attack characteristics of synthesized sounds.

  12. The Developmental Trajectory of Spatial Listening Skills in Normal-Hearing Children

    ERIC Educational Resources Information Center

    Lovett, Rosemary Elizabeth Susan; Kitterick, Padraig Thomas; Huang, Shan; Summerfield, Arthur Quentin

    2012-01-01

    Purpose: To establish the age at which children can complete tests of spatial listening and to measure the normative relationship between age and performance. Method: Fifty-six normal-hearing children, ages 1.5-7.9 years, attempted tests of the ability to discriminate a sound source on the left from one on the right, to localize a source, to track…

  13. Predicting speech intelligibility in noise for hearing-critical jobs

    NASA Astrophysics Data System (ADS)

    Soli, Sigfrid D.; Laroche, Chantal; Giguere, Christian

    2003-10-01

Many jobs require auditory abilities such as speech communication, sound localization, and sound detection. An employee for whom these abilities are impaired may constitute a safety risk for himself or herself, for fellow workers, and possibly for the general public. A number of methods have been used to predict these abilities from diagnostic measures of hearing (e.g., the pure-tone audiogram); however, these methods have not proved to be sufficiently accurate for predicting performance in the noise environments where hearing-critical jobs are performed. We have taken an alternative and potentially more accurate approach. A direct measure of speech intelligibility in noise, the Hearing in Noise Test (HINT), is instead used to screen individuals. The screening criteria are validated by establishing the empirical relationship between the HINT score and the auditory abilities of the individual, as measured in laboratory recreations of real-world workplace noise environments. The psychometric properties of the HINT enable screening of individuals with an acceptable amount of error. In this presentation, we will describe the predictive model and report the results of field measurements and laboratory studies used to provide empirical validation of the model. [Work supported by Fisheries and Oceans Canada.]

  14. 75 FR 16700 - Special Local Regulation, Swim Across the Sound, Long Island Sound, Port Jefferson, NY to Captain...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-02

    ...-AA08 Special Local Regulation, Swim Across the Sound, Long Island Sound, Port Jefferson, NY to Captain... permanent Special Local Regulation on the navigable waters of Long Island Sound between Port Jefferson, NY and Captain's Cove Seaport, Bridgeport, CT due to the annual Swim Across the Sound event. The proposed...

  15. Auditory performance in an open sound field

    NASA Astrophysics Data System (ADS)

    Fluitt, Kim F.; Letowski, Tomasz; Mermagen, Timothy

    2003-04-01

Detection and recognition of acoustic sources in an open field are important elements of situational awareness on the battlefield. They are affected by many technical and environmental conditions such as type of sound, distance to a sound source, terrain configuration, meteorological conditions, hearing capabilities of the listener, level of background noise, and the listener's familiarity with the sound source. A limited body of knowledge about auditory perception of sources located over long distances makes it difficult to develop models predicting auditory behavior on the battlefield. The purpose of the present study was to determine listeners' abilities to detect, recognize, localize, and estimate distances to sound sources located 25 to 800 m from the listening position. Data were also collected for meteorological conditions (wind direction and strength, temperature, atmospheric pressure, humidity) and background noise level for each experimental trial. Forty subjects (men and women, ages 18 to 25) participated in the study. Nine types of sounds were presented from six loudspeakers in random order; each series was presented four times. Partial results indicate that both detection and recognition declined at distances greater than approximately 200 m and that listeners grossly underestimated distances. Specific results will be presented.

  16. Sound source localization identification accuracy: Envelope dependencies.

    PubMed

    Yost, William A

    2017-07-01

    Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.
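
    For readers unfamiliar with the two stimulus families named here, the sketch below generates a sinusoidally amplitude-modulated (SAM) tone and a simplified "transposed" stimulus. All parameters (44.1 kHz rate, 4 kHz carrier, 64 Hz modulator, 500 ms duration) are illustrative choices of ours, not the exact stimuli of the study; a full transposed stimulus would also low-pass filter the rectified modulator.

    ```python
    # Minimal sketch of SAM and "transposed" stimuli (assumed parameters).
    import numpy as np

    fs, dur = 44100, 0.5
    t = np.arange(int(fs * dur)) / fs
    fm, fc = 64.0, 4000.0

    # Sinusoidal amplitude modulation (SAM) of a tonal carrier.
    sam = (1 + np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

    # "Transposed" stimulus: a half-wave-rectified low-frequency modulator
    # imposed on a high-frequency carrier, so the envelope carries the
    # temporal fine structure of the low-frequency sound (low-pass filtering
    # of the rectified modulator is omitted here for brevity).
    modulator = np.maximum(np.sin(2 * np.pi * fm * t), 0.0)
    transposed = modulator * np.sin(2 * np.pi * fc * t)
    ```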

  17. An evaluation of talker localization based on direction of arrival estimation and statistical sound source identification

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2002-11-01

It is very important to capture distant-talking speech with high quality for a hands-free speech interface. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple sound source environments not only have difficulty localizing the multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two algorithms. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among localized multiple sound sources. In this paper, we particularly focus on talker localization performance based on the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be accurately identified as "speech" or "non-speech" by the proposed algorithm. [Work supported by ATR and MEXT of Japan.]
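
    As a rough illustration of the CSP (cross-power spectrum phase) coefficient that the DOA stage above builds on, the sketch below estimates the time difference of arrival between two microphone signals from the peak of the phase-whitened cross-correlation. The function name, the far-field conversion, and all parameters are our assumptions, not details of the paper.

    ```python
    # Hedged sketch of a CSP / GCC-PHAT time-delay estimate for one mic pair.
    import numpy as np

    def csp_tdoa(x1, x2, fs):
        """Estimate the time difference of arrival (seconds) between two
        microphone signals; positive means x1 lags x2."""
        n = len(x1) + len(x2)
        X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
        cross = X1 * np.conj(X2)
        csp = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)  # phase transform
        csp = np.roll(csp, n // 2)            # move zero lag to the center
        lag = np.argmax(csp) - n // 2         # peak position -> delay in samples
        return lag / fs

    # Under a far-field assumption, a delay tau between mics spaced d apart
    # maps to a direction of arrival theta via sin(theta) = tau * c / d,
    # with c ~ 343 m/s.
    ```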

  18. Material sound source localization through headphones

    NASA Astrophysics Data System (ADS)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. On the other hand, the Delta sound (click) is generated by using the Adobe Audition software, considering a frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound listened through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.

  19. SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization

    PubMed Central

    Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah

    2014-01-01

    Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 Microelectromechanical systems (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass’s hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m2 anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m2 open field. PMID:24463431
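
    To make the directionality idea concrete, here is a minimal delay-and-sum scan over candidate azimuths of the kind a microphone array like this could run; the function, geometry, and parameters are illustrative assumptions on our part, not the SoundCompass's actual FPGA firmware.

    ```python
    # Hedged sketch: steered delay-and-sum power over candidate azimuths.
    import numpy as np

    def steered_power(signals, mic_xy, fs, angles, c=343.0):
        """Return delay-and-sum output power for each candidate azimuth (rad).
        signals: (n_mics, n_samples); mic_xy: (n_mics, 2) positions in meters."""
        n_mics, n_samp = signals.shape
        spec = np.fft.rfft(signals, axis=1)
        freqs = np.fft.rfftfreq(n_samp, 1 / fs)
        power = np.empty(len(angles))
        for i, a in enumerate(angles):
            u = np.array([np.cos(a), np.sin(a)])        # direction toward source
            delays = mic_xy @ u / c                     # per-mic delay (s)
            phase = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
            beam = (spec * phase).sum(axis=0)           # align and sum mics
            power[i] = np.sum(np.abs(beam) ** 2)
        return power  # the argmax over angles approximates the source azimuth
    ```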

  20. Effects of head movement and proprioceptive feedback in training of sound localization

    PubMed Central

    Honda, Akio; Shibata, Hiroshi; Hidaka, Souta; Gyoba, Jiro; Iwaya, Yukio; Suzuki, Yôiti

    2013-01-01

    We investigated the effects of listeners' head movements and proprioceptive feedback during sound localization practice on the subsequent accuracy of sound localization performance. The effects were examined under both restricted and unrestricted head movement conditions in the practice stage. In both cases, the participants were divided into two groups: a feedback group performed a sound localization drill with accurate proprioceptive feedback; a control group conducted it without the feedback. Results showed that (1) sound localization practice, while allowing for free head movement, led to improvement in sound localization performance and decreased actual angular errors along the horizontal plane, and that (2) proprioceptive feedback during practice decreased actual angular errors in the vertical plane. Our findings suggest that unrestricted head movement and proprioceptive feedback during sound localization training enhance perceptual motor learning by enabling listeners to use variable auditory cues and proprioceptive information. PMID:24349686

  1. A method for evaluating the relation between sound source segregation and masking

    PubMed Central

    Lutfi, Robert A.; Liu, Ching-Ju

    2011-01-01

    Sound source segregation refers to the ability to hear as separate entities two or more sound sources comprising a mixture. Masking refers to the ability of one sound to make another sound difficult to hear. Often in studies, masking is assumed to result from a failure of segregation, but this assumption may not always be correct. Here a method is offered to identify the relation between masking and sound source segregation in studies and an example is given of its application. PMID:21302979

  2. Left-right and front-back spatial hearing with multiple directional microphone configurations in modern hearing aids.

    PubMed

    Carette, Evelyne; Van den Bogaert, Tim; Laureyns, Mark; Wouters, Jan

    2014-10-01

Several studies have demonstrated negative effects of directional microphone configurations on left-right and front-back (FB) sound localization. New processing schemes, such as frequency-dependent directionality and front focus with wireless ear-to-ear communication in recent commercial hearing aids, may preserve the binaural cues necessary for left-right localization and may introduce useful spectral cues necessary for FB disambiguation. In this study, two hearing aids with different processing schemes, which were both designed to preserve the ability to localize sounds in the horizontal plane (left-right and FB), were compared. We compared horizontal (left-right and FB) sound localization performance of hearing aid users fitted with two types of behind-the-ear (BTE) devices. The first type of BTE device had four different programs that provided (1) no directionality, (2-3) symmetric frequency-dependent directionality, and (4) an asymmetric configuration. The second pair of BTE devices was evaluated in its omnidirectional setting. This setting automatically activates a soft forward-oriented directional scheme that mimics the pinna effect. Also, wireless communication between the hearing aids was present in this configuration (5). A broadband stimulus was used as a target signal. The directional hearing abilities of the listeners were also evaluated without hearing aids as a reference. A total of 12 listeners with moderate to severe hearing loss participated in this study. All were experienced hearing-aid users. As a reference, 11 listeners with normal hearing participated. The participants were positioned in a 13-speaker array (left-right, -90°/+90°) or 7-speaker array (FB, 0-180°) and were asked to report the number of the loudspeaker located closest to where the sound was perceived. The root mean square error was calculated for the left-right experiment, and the percentage of FB errors was used as an FB performance measure. Results were analyzed with repeated-measures analysis of variance. For the left-right localization task, no significant differences could be proven between the unaided condition and both partial directional schemes and the omnidirectional scheme. The soft forward-oriented system and the asymmetric system did show a detrimental effect compared with the unaided condition. On average, localization was worst when users used the asymmetric condition. Analysis of the results of the FB experiment showed good performance, similar to unaided, with both the partial directional systems and the asymmetric configuration. Significantly worse performance was found with the omnidirectional and the omnidirectional soft forward-oriented BTE systems compared with the other hearing-aid systems. Bilaterally fitted partial directional systems preserve (part of) the binaural cues necessary for left-right localization and introduce, preserve, or enhance useful spectral cues that allow FB disambiguation. Omnidirectional systems, although good for left-right localization, do not provide the user with enough spectral information for an optimal FB localization performance. American Academy of Audiology.

  3. Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing

    PubMed Central

    2015-01-01

    This intervention study investigated the growth of letter sound reading and growth of consonant–vowel–consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching pre-school children to decode, or read, single letters. The study compared a control group, which received the preschool’s standard letter-sound instruction, to an intervention group which received a 3-step letter-sound instruction intervention. The children’s growth in letter-sound reading and CVC word decoding abilities were assessed at baseline and 2, 4, 6 and 8 weeks. When compared to the control group, the growth of letter-sound reading ability was slightly higher for the intervention group. The rate of increase in letter-sound reading was significantly faster for the intervention group. In both groups, too few children learned to decode any CVC words to allow for analysis. Results of this study support the use of the intervention strategy in preschools for teaching children print-to-sound processing. PMID:26839494

  4. A Pilot Study on the Ability of Young Children and Adults to Identify and Reproduce Novel Speech Sounds.

    ERIC Educational Resources Information Center

    Yeni-Komshian, Grace; And Others

    This study was designed to compare children and adults on their initial ability to identify and reproduce novel speech sounds and to evaluate their performance after receiving several training sessions in producing these sounds. The novel speech sounds used were two voiceless fricatives which are consonant phonemes in Arabic but which are…

  5. Relation of sound intensity and accuracy of localization.

    PubMed

    Farrimond, T

    1989-08-01

    Tests were carried out on 17 subjects to determine the accuracy of monaural sound localization when the head is not free to turn toward the sound source. Maximum accuracy of localization for a constant-volume sound source coincided with the position for maximum perceived intensity of the sound in the front quadrant. There was a tendency for sounds to be perceived more often as coming from a position directly toward the ear. That is, for sounds in the front quadrant, errors of localization tended to be predominantly clockwise (i.e., biased toward a line directly facing the ear). Errors for sounds occurring in the rear quadrant tended to be anticlockwise. The pinna's differential effect on sound intensity between front and rear quadrants would assist in identifying the direction of movement of objects, for example an insect, passing the ear.

  6. Localizing the sources of two independent noises: Role of time varying amplitude differences

    PubMed Central

    Yost, William A.; Brown, Christopher A.

    2013-01-01

    Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region. PMID:23556597

  7. Localizing the sources of two independent noises: role of time varying amplitude differences.

    PubMed

    Yost, William A; Brown, Christopher A

    2013-04-01

    Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region.

  8. Sound localization and auditory response capabilities in round goby (Neogobius melanostomus)

    NASA Astrophysics Data System (ADS)

    Rollo, Audrey K.; Higgs, Dennis M.

    2005-04-01

A fundamental task of vertebrate auditory systems is determining the direction of a sound source. While fish show directional responses to sound, sound localization remains in dispute. The species used in the current study, Neogobius melanostomus (round goby), uses sound in reproductive contexts, with both male and female gobies showing directed movement towards a calling male. A two-choice laboratory experiment (active versus quiet speaker) was used to analyze the behavior of gobies in response to sound stimuli. When conspecific male spawning sounds were played, gobies moved in a direct path to the active speaker, suggesting true localization of sound. Of the animals that responded to conspecific sounds, 85% of the females and 66% of the males moved directly to the sound source. Auditory playback of natural and synthetic sounds showed differential behavioral specificity. Of the gobies that responded, 89% were attracted to the speaker playing Padogobius martensii sounds, 87% to a 100 Hz tone, 62% to white noise, and 56% to Gobius niger sounds. Swimming speed and mean path angle to the speaker will also be reported. The results suggest strong localization of the round goby to a sound source, with some differential sound specificity.

  9. Estrogen and hearing from a clinical point of view; characteristics of auditory function in women with Turner syndrome.

    PubMed

    Hederstierna, Christina; Hultcrantz, Malou; Rosenhall, Ulf

    2009-06-01

Turner syndrome is a chromosomal aberration affecting 1:2000 newborn girls, in which all or part of one X chromosome is absent. This leads to ovarial dysgenesis and little or no endogenous estrogen production. These women have, among many other syndromal features, a high occurrence of ear and hearing problems and neurocognitive dysfunctions, including reduced visual-spatial abilities; it is assumed that estrogen deficiency is at least partially responsible for these problems. In this study, 30 Turner women aged 40-67, with mild to moderate hearing loss, performed a battery of hearing tests aimed at localizing the lesion causing the sensorineural hearing impairment and assessing central auditory function, primarily sound localization. The results of TEOAE, ABR, and speech recognition scores in noise were all indicative of cochlear dysfunction as the cause of the sensorineural impairment. Phase audiometry, a test of sound localization, showed mild disturbances in the Turner women compared to the reference group, suggesting that auditory-spatial dysfunction is another facet of the recognized neurocognitive phenotype in Turner women.

  10. Sound source localization method in an environment with flow based on Amiet-IMACS

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin

    2017-05-01

A sound source localization method is proposed to localize and analyze the sound source in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method and Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources with airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds was conducted. The experiment demonstrates the advantage of Amiet-IMACS in localizing the sound source position more accurately than IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, which further proves its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and its frequency, along with its generation mechanism, is determined and interpreted.

  11. Degraded neural and behavioral processing of speech sounds in a rat model of Rett syndrome

    PubMed Central

    Engineer, Crystal T.; Rahebi, Kimiya C.; Borland, Michael S.; Buell, Elizabeth P.; Centanni, Tracy M.; Fink, Melyssa K.; Im, Kwok W.; Wilson, Linda G.; Kilgard, Michael P.

    2015-01-01

    Individuals with Rett syndrome have greatly impaired speech and language abilities. Auditory brainstem responses to sounds are normal, but cortical responses are highly abnormal. In this study, we used the novel rat Mecp2 knockout model of Rett syndrome to document the neural and behavioral processing of speech sounds. We hypothesized that both speech discrimination ability and the neural response to speech sounds would be impaired in Mecp2 rats. We expected that extensive speech training would improve speech discrimination ability and the cortical response to speech sounds. Our results reveal that speech responses across all four auditory cortex fields of Mecp2 rats were hyperexcitable, responded slower, and were less able to follow rapidly presented sounds. While Mecp2 rats could accurately perform consonant and vowel discrimination tasks in quiet, they were significantly impaired at speech sound discrimination in background noise. Extensive speech training improved discrimination ability. Training shifted cortical responses in both Mecp2 and control rats to favor the onset of speech sounds. While training increased the response to low frequency sounds in control rats, the opposite occurred in Mecp2 rats. Although neural coding and plasticity are abnormal in the rat model of Rett syndrome, extensive therapy appears to be effective. These findings may help to explain some aspects of communication deficits in Rett syndrome and suggest that extensive rehabilitation therapy might prove beneficial. PMID:26321676

  12. Performances of Student Activism: Sound, Silence, Gender, and Dis/ability

    ERIC Educational Resources Information Center

    Pasque, Penny A.; Vargas, Juanita Gamez

    2014-01-01

    This chapter explores the various performances of activism by students through sound, silence, gender, and dis/ability and how these performances connect to social change efforts around issues such as human trafficking, homeless children, hunger, and children with varying abilities.

  13. Restoration of spatial hearing in adult cochlear implant users with single-sided deafness.

    PubMed

    Litovsky, Ruth Y; Moua, Keng; Godar, Shelly; Kan, Alan; Misurelli, Sara M; Lee, Daniel J

    2018-04-14

In recent years, cochlear implants (CIs) have been provided in growing numbers not only to people with bilateral deafness but also to people with unilateral hearing loss, at times in order to alleviate tinnitus. This study presents audiological data from 15 adult participants (ages 48 ± 12 years) with single-sided deafness. Results are presented from 9/15 adults, who received a CI (SSD-CI) in the deaf ear and were tested in Acoustic or Acoustic + CI hearing modes, and 6/15 adults who are planning to receive a CI and were tested in the unilateral condition only. Testing included (1) audiometric measures of threshold, (2) speech understanding for CNC words and AzBio sentences, (3) the tinnitus handicap inventory, (4) sound localization with stationary sound sources, and (5) perceived auditory motion. Results showed that when listening to sentences in quiet, performance was excellent in the Acoustic and Acoustic + CI conditions. In noise, performance was similar between the Acoustic and Acoustic + CI conditions in 4/6 participants tested, and slightly worse with Acoustic + CI in 2/6 participants. In some cases, the CI reduced tinnitus handicap scores. When testing sound localization ability, the Acoustic + CI condition yielded an improved sound localization RMS error of 29.2° (SD: ±6.7°) compared to 56.6° (SD: ±16.5°) in the Acoustic-only condition. Preliminary results suggest that the perception of motion direction, whereby subjects are required to process and compare directional cues across multiple locations, is impaired compared with that of normal-hearing subjects. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Different spatio-temporal electroencephalography features drive the successful decoding of binaural and monaural cues for sound localization.

    PubMed

    Bednar, Adam; Boland, Francis M; Lalor, Edmund C

    2017-03-01

    The human ability to localize sound is essential for monitoring our environment and helps us to analyse complex auditory scenes. Although the acoustic cues mediating sound localization have been established, it remains unknown how these cues are represented in human cortex. In particular, it is still a point of contention whether binaural and monaural cues are processed by the same or distinct cortical networks. In this study, participants listened to a sequence of auditory stimuli from different spatial locations while we recorded their neural activity using electroencephalography (EEG). The stimuli were presented over a loudspeaker array, which allowed us to deliver realistic, free-field stimuli in both the horizontal and vertical planes. Using a multivariate classification approach, we showed that it is possible to decode sound source location from scalp-recorded EEG. Robust and consistent decoding was shown for stimuli that provide binaural cues (i.e. Left vs. Right stimuli). Decoding location when only monaural cues were available (i.e. Front vs. Rear and elevational stimuli) was successful for a subset of subjects and showed less consistency. Notably, the spatio-temporal pattern of EEG features that facilitated decoding differed based on the availability of binaural and monaural cues. In particular, we identified neural processing of binaural cues at around 120 ms post-stimulus and found that monaural cues are processed later between 150 and 200 ms. Furthermore, different spatial activation patterns emerged for binaural and monaural cue processing. These spatio-temporal dissimilarities suggest the involvement of separate cortical mechanisms in monaural and binaural acoustic cue processing. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
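
    A minimal sketch of the multivariate classification approach described here, assuming epoched EEG arranged as a (trials × channels × time) array and a linear discriminant classifier with cross-validation; the study's actual features and classifier may well differ.

    ```python
    # Hedged sketch: decode sound-source location (e.g., Left vs. Right)
    # from spatio-temporal EEG features.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def decode_location(epochs, y, cv=5):
        """epochs: (n_trials, n_channels, n_times); y: location labels."""
        X = epochs.reshape(len(epochs), -1)           # flatten each trial
        clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
        return cross_val_score(clf, X, y, cv=cv)      # chance ~0.5 for 2 classes
    ```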

  15. The effects of experimentally induced conductive hearing loss on spectral and temporal aspects of sound transmission through the ear.

    PubMed

Lupo, J Eric; Koka, Kanthaiah; Thornton, Jennifer L; Tollin, Daniel J

    2011-02-01

    Conductive hearing loss (CHL) is known to produce hearing deficits, including deficits in sound localization ability. The differences in sound intensities and timing experienced between the two tympanic membranes are important cues to sound localization (ILD and ITD, respectively). Although much is known about the effect of CHL on hearing levels, little investigation has been conducted into the actual impact of CHL on sound location cues. This study investigated effects of CHL induced by earplugs on cochlear microphonic (CM) amplitude and timing and their corresponding effect on the ILD and ITD location cues. Acoustic and CM measurements were made in 5 chinchillas before and after earplug insertion, and again after earplug removal using pure tones (500 Hz to 24 kHz). ILDs in the unoccluded condition demonstrated position and frequency dependence where peak far-lateral ILDs approached 30 dB for high frequencies. Unoccluded ear ITD cues demonstrated positional and frequency dependence with increased ITD cue for both decreasing frequency (±420 μs at 500 Hz, ±310 μs for 1-4 kHz) and increasingly lateral sound source locations. Occlusion of the ear canal with foam plugs resulted in a mild, frequency-dependent conductive hearing loss of 10-38 dB (mean 31 ± 3.9 dB) leading to a concomitant frequency dependent increase in ILDs at all source locations. The effective ITDs increased in a frequency dependent manner with ear occlusion as a direct result of the acoustic properties of the plugging material, the latter confirmed via acoustical measurements using a model ear canal with varying volumes of acoustic foam. Upon ear plugging with acoustic foam, a mild CHL is induced. Furthermore, the CHL induced by acoustic foam results in substantial changes in the magnitudes of both the ITD and ILD cues to sound location. Copyright © 2010 Elsevier B.V. All rights reserved.

  16. The effects of experimentally induced conductive hearing loss on spectral and temporal aspects of sound transmission through the ear

    PubMed Central

    Lupo, J. Eric; Koka, Kanthaiah; Thornton, Jennifer L.; Tollin, Daniel J.

    2010-01-01

    Conductive hearing loss (CHL) is known to produce hearing deficits, including deficits in sound localization ability. The differences in sound intensities and timing experienced between the two tympanic membranes are important cues to sound localization (ILD and ITD, respectively). Although much is known about the effect of CHL on hearing levels, little investigation has been conducted into the actual impact of CHL on sound location cues. This study investigated effects of CHL induced by earplugs on cochlear microphonic (CM) amplitude and timing and their corresponding effect on the ILD and ITD location cues. Acoustic and CM measurements were made in 5 chinchillas before and after earplug insertion, and again after earplug removal using pure tones (500 Hz to 24 kHz). ILDs in the unoccluded condition demonstrated position and frequency dependence where peak far-lateral ILDs approached 30 dB for high frequencies. Unoccluded ear ITD cues demonstrated positional and frequency dependence with increased ITD cue for both decreasing frequency (± 420 µs at 500 Hz, ± 310 µs for 1–4 kHz ) and increasingly lateral sound source locations. Occlusion of the ear canal with foam plugs resulted in a mild, frequency-dependent conductive hearing loss of 10–38 dB (mean 31 ± 3.9 dB) leading to a concomitant frequency dependent increase in ILDs at all source locations. The effective ITDs increased in a frequency dependent manner with ear occlusion as a direct result of the acoustic properties of the plugging material, the latter confirmed via acoustical measurements using a model ear canal with varying volumes of acoustic foam. Upon ear plugging with acoustic foam, a mild CHL is induced. Furthermore, the CHL induced by acoustic foam results in substantial changes in the magnitudes of both the ITD and ILD cues to sound location. PMID:21073935

  17. Auditory Localization: An Annotated Bibliography

    DTIC Science & Technology

    1983-11-01

transverse plane, natural sound localization in both horizontal and vertical planes can be performed with nearly the same accuracy as real sound sources...important for unscrambling the competing sounds which so often occur in natural environments. A workable sound sensor has been constructed and empirical

  18. Combined effect of boundary layer recirculation factor and stable energy on local air quality in the Pearl River Delta over southern China.

    PubMed

    Li, Haowen; Wang, Baomin; Fang, Xingqin; Zhu, Wei; Fan, Qi; Liao, Zhiheng; Liu, Jian; Zhang, Asi; Fan, Shaojia

    2018-03-01

The atmospheric boundary layer (ABL) has a significant impact on the spatial and temporal distribution of air pollutants. In order to gain a better understanding of how the ABL affects the variation of air pollutants, atmospheric boundary layer observations were performed at Sanshui in the Pearl River Delta (PRD) region over southern China during the winter of 2013. Two typical types of ABL status that can lead to air pollution were analyzed comparatively: the weak vertical diffusion ability type (WVDAT) and the weak horizontal transportation ability type (WHTAT). Results show that (1) WVDAT was characterized by moderate wind speed, consistent wind direction, and a thick inversion layer at 600~1000 m above ground level (AGL), and air pollutants were restricted to low altitudes due to the stable atmospheric structure; (2) WHTAT was characterized by calm wind, varied wind direction, and a shallow, intense ground inversion layer, and air pollutants accumulated locally because of strong recirculation in the low ABL; (3) recirculation factor (RF) and stable energy (SE) proved to be good indicators of the horizontal transportation ability and vertical diffusion ability of the atmosphere, respectively. Combined use of RF and SE can be very helpful in evaluating the air pollution potential of the ABL. Air quality data from the ground and meteorological data collected by radio sounding at Sanshui in the Pearl River Delta showed that local air quality was poor when wind reversal was pronounced or the temperature stratification was stable. The combination of the horizontal and vertical transportation abilities of the local atmosphere should be taken into consideration when evaluating the local environmental bearing capacity for air pollution.

  19. Ambient Sound-Based Collaborative Localization of Indeterministic Devices

    PubMed Central

    Kamminga, Jacob; Le, Duc; Havinga, Paul

    2016-01-01

    Localization is essential in wireless sensor networks. To our knowledge, no prior work has utilized low-cost devices for collaborative localization based on only ambient sound, without the support of local infrastructure. The reason may be the fact that most low-cost devices are indeterministic and suffer from uncertain input latencies. This uncertainty makes accurate localization challenging. Therefore, we present a collaborative localization algorithm (Cooperative Localization on Android with ambient Sound Sources (CLASS)) that simultaneously localizes the position of indeterministic devices and ambient sound sources without local infrastructure. The CLASS algorithm deals with the uncertainty by splitting the devices into subsets so that outliers can be removed from the time difference of arrival values and localization results. Since Android is indeterministic, we select Android devices to evaluate our approach. The algorithm is evaluated with an outdoor experiment and achieves a mean Root Mean Square Error (RMSE) of 2.18 m with a standard deviation of 0.22 m. Estimated directions towards the sound sources have a mean RMSE of 17.5° and a standard deviation of 2.3°. These results show that it is feasible to simultaneously achieve a relative positioning of both devices and sound sources with sufficient accuracy, even when using non-deterministic devices and platforms, such as Android. PMID:27649176
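
    One simple way to realize the outlier-removal idea described above is a median/MAD screen on repeated TDOA estimates before they enter localization. This is a hedged sketch of that general idea, not the CLASS algorithm's actual subset-splitting procedure; the function name and threshold are our assumptions.

    ```python
    # Hedged sketch: robustly discard TDOA outliers caused by uncertain
    # input latencies before running localization.
    import numpy as np

    def reject_tdoa_outliers(tdoas, k=3.0):
        """Keep TDOA values within k robust standard deviations of the median."""
        tdoas = np.asarray(tdoas)
        med = np.median(tdoas)
        mad = np.median(np.abs(tdoas - med)) + 1e-12   # robust spread estimate
        return tdoas[np.abs(tdoas - med) / (1.4826 * mad) < k]
    ```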

  20. Temporal Processing Ability Is Related to Ear-Asymmetry for Detecting Time Cues in Sound: A Mismatch Negativity (MMN) Study

    ERIC Educational Resources Information Center

    Todd, Juanita; Finch, Brayden; Smith, Ellen; Budd, Timothy W.; Schall, Ulrich

    2011-01-01

    Temporal and spectral sound information is processed asymmetrically in the brain with the left-hemisphere showing an advantage for processing the former and the right-hemisphere for the latter. Using monaural sound presentation we demonstrate a context and ability dependent ear-asymmetry in brain measures of temporal change detection. Our measure…

  1. Local-world and cluster-growing weighted networks with controllable clustering

    NASA Astrophysics Data System (ADS)

    Yang, Chun-Xia; Tang, Min-Xuan; Tang, Hai-Qiang; Deng, Qiang-Qiang

    2014-12-01

We constructed an improved weighted network model by introducing a local-world selection mechanism and a triangle coupling mechanism into the traditional BBV model. The model gives power-law distributions of degree, strength, and edge weight, and presents a linear relationship both between degree and strength and between degree and the clustering coefficient. In particular, the model allows a node's strength to grow faster than its degree. The model is also sounder and more efficient than the original BBV model in tuning the clustering coefficient. Finally, based on our improved model, we analyze the virus spread process and find that reducing the size of the local world has a strong inhibitory effect on virus spread.

  2. Judging sound rotation when listeners and sounds rotate: Sound source localization is a multisystem process.

    PubMed

    Yost, William A; Zhong, Xuan; Najam, Anbar

    2015-11-01

    In four experiments listeners were rotated or were stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate not when listeners rotate. In the everyday world sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about auditory cues used for sound source location and cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypotheses and suggest that sound source localization is not based just on acoustics. It is a multisystem process.

  3. Sound Source Localization and Speech Understanding in Complex Listening Environments by Single-sided Deaf Listeners After Cochlear Implantation.

    PubMed

    Zeitler, Daniel M; Dorman, Michael F; Natale, Sarah J; Loiselle, Louise; Yost, William A; Gifford, Rene H

    2015-09-01

To assess improvements in sound source localization and speech understanding in complex listening environments after unilateral cochlear implantation for single-sided deafness (SSD). Nonrandomized, open, prospective case series. Tertiary referral center. Nine subjects with a unilateral cochlear implant (CI) for SSD (SSD-CI) were tested. Reference groups for the task of sound source localization included young (n = 45) and older (n = 12) normal-hearing (NH) subjects and 27 bilateral CI (BCI) subjects. Unilateral cochlear implantation. Sound source localization was tested with 13 loudspeakers in a 180° arc in front of the subject. Speech understanding was tested with the subject seated in an 8-loudspeaker sound system arrayed in a 360-degree pattern. Directionally appropriate noise, originally recorded in a restaurant, was played from each loudspeaker. Speech understanding in noise was tested using the AzBio sentence test, and sound source localization was quantified using root mean square error. All CI subjects showed poorer-than-normal sound source localization. SSD-CI subjects showed a bimodal distribution of scores: six subjects had scores near the mean of those obtained by BCI subjects, whereas three had scores just outside the 95th percentile of NH listeners. Speech understanding improved significantly in the restaurant environment when the signal was presented to the side of the CI. Cochlear implantation for SSD can offer improved speech understanding in complex listening environments and improved sound source localization in both children and adults. On tasks of sound source localization, SSD-CI patients typically perform as well as BCI patients and, in some cases, achieve scores at the upper boundary of normal performance.
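
    The root mean square error metric used above is simple enough to state in code; the sketch below assumes responses and targets given as loudspeaker azimuths in degrees, with invented example values.

    ```python
    # Sketch of the RMS localization-error metric over identification trials.
    import numpy as np

    def rms_error_deg(responses, targets):
        """responses, targets: per-trial azimuths in degrees."""
        return np.sqrt(np.mean((np.asarray(responses) - np.asarray(targets)) ** 2))

    # Example: a listener who is off by 15 degrees on every trial scores 15.0.
    print(rms_error_deg([0, 15, 30], [-15, 0, 15]))   # -> 15.0
    ```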

  4. The Effect of Microphone Placement on Interaural Level Differences and Sound Localization Across the Horizontal Plane in Bilateral Cochlear Implant Users.

    PubMed

    Jones, Heath G; Kan, Alan; Litovsky, Ruth Y

    2016-01-01

    This study examined the effect of microphone placement on the interaural level differences (ILDs) available to bilateral cochlear implant (BiCI) users, and the subsequent effects on horizontal-plane sound localization. Virtual acoustic stimuli for sound localization testing were created individually for eight BiCI users by making acoustic transfer function measurements for microphones placed in the ear (ITE), behind the ear (BTE), and on the shoulders (SHD). The ILDs across source locations were calculated for each placement to analyze their effect on sound localization performance. Sound localization was tested using a repeated-measures, within-participant design for the three microphone placements. The ITE microphone placement provided significantly larger ILDs compared to BTE and SHD placements, which correlated with overall localization errors. However, differences in localization errors across the microphone conditions were small. The BTE microphones worn by many BiCI users in everyday life do not capture the full range of acoustic ILDs available, and also reduce the change in cue magnitudes for sound sources across the horizontal plane. Acute testing with an ITE placement reduced sound localization errors along the horizontal plane compared to the other placements in some patients. Larger improvements may be observed if patients had more experience with the new ILD cues provided by an ITE placement.
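
    As a concrete reference for the ILD analysis described above, the sketch below computes a broadband ILD in dB from paired left/right recordings; the sign convention (positive = left more intense) and the function name are our assumptions.

    ```python
    # Sketch: broadband interaural level difference from two-channel recordings.
    import numpy as np

    def broadband_ild_db(left, right):
        """Return 20*log10 of the RMS ratio between left and right signals."""
        rms = lambda x: np.sqrt(np.mean(np.square(x)))
        return 20.0 * np.log10(rms(left) / rms(right))
    ```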

  5. Hearing in three dimensions

    NASA Astrophysics Data System (ADS)

    Shinn-Cunningham, Barbara

    2003-04-01

One of the key functions of hearing is to help us monitor and orient to events in our environment (including those outside the line of sight). The ability to compute the spatial location of a sound source is also important for detecting, identifying, and understanding the content of a sound source, especially in the presence of competing sources from other positions. Determining the spatial location of a sound source poses difficult computational challenges; however, we perform this complex task with proficiency, even in the presence of noise and reverberation. This tutorial will review the acoustic, psychoacoustic, and physiological processes underlying spatial auditory perception. First, the tutorial will examine how the many different features of the acoustic signals reaching a listener's ears provide cues for source direction and distance, both in anechoic and reverberant space. Then we will discuss psychophysical studies of three-dimensional sound localization in different environments and the basic neural mechanisms by which spatial auditory cues are extracted. Finally, "virtual reality" approaches for simulating sounds at different directions and distances under headphones will be reviewed. The tutorial will be structured to appeal to a diverse audience with interests in all fields of acoustics and will incorporate concepts from many areas, such as psychological and physiological acoustics, architectural acoustics, and signal processing.

  6. Self-generated sounds of locomotion and ventilation and the evolution of human rhythmic abilities.

    PubMed

    Larsson, Matz

    2014-01-01

    It has been suggested that the basic building blocks of music mimic sounds of moving humans, and because the brain was primed to exploit such sounds, they eventually became incorporated in human culture. However, that raises further questions. Why do genetically close, culturally well-developed apes lack musical abilities? Did our switch to bipedalism influence the origins of music? Four hypotheses are raised: (1) Human locomotion and ventilation can mask critical sounds in the environment. (2) Synchronization of locomotion reduces that problem. (3) Predictable sounds of locomotion may stimulate the evolution of synchronized behavior. (4) Bipedal gait and the associated sounds of locomotion influenced the evolution of human rhythmic abilities. Theoretical models and research data suggest that noise of locomotion and ventilation may mask critical auditory information. People often synchronize steps subconsciously. Human locomotion is likely to produce more predictable sounds than those of non-human primates. Predictable locomotion sounds may have improved our capacity of entrainment to external rhythms and to feel the beat in music. A sense of rhythm could aid the brain in distinguishing among sounds arising from discrete sources and also help individuals to synchronize their movements with one another. Synchronization of group movement may improve perception by providing periods of relative silence and by facilitating auditory processing. The adaptive value of such skills to early ancestors may have been keener detection of prey or stalkers and enhanced communication. Bipedal walking may have influenced the development of entrainment in humans and thereby the evolution of rhythmic abilities.

  7. Musical ability and non-native speech-sound processing are linked through sensitivity to pitch and spectral information.

    PubMed

    Kempe, Vera; Bublitz, Dennis; Brooks, Patricia J

    2015-05-01

    Is the observed link between musical ability and non-native speech-sound processing due to enhanced sensitivity to acoustic features underlying both musical and linguistic processing? To address this question, native English speakers (N = 118) discriminated Norwegian tonal contrasts and Norwegian vowels. Short tones differing in temporal, pitch, and spectral characteristics were used to measure sensitivity to the various acoustic features implicated in musical and speech processing. Musical ability was measured using Gordon's Advanced Measures of Musical Audiation. Results showed that sensitivity to specific acoustic features played a role in non-native speech-sound processing: Controlling for non-verbal intelligence, prior foreign language-learning experience, and sex, sensitivity to pitch and spectral information partially mediated the link between musical ability and discrimination of non-native vowels and lexical tones. The findings suggest that while sensitivity to certain acoustic features partially mediates the relationship between musical ability and non-native speech-sound processing, complex tests of musical ability also tap into other shared mechanisms. © 2014 The British Psychological Society.

Sound-localization experiments with barn owls in virtual space: influence of broadband interaural level difference on head-turning behavior.

    PubMed

    Poganiatz, I; Wagner, H

    2001-04-01

    Interaural level differences play an important role for elevational sound localization in barn owls. The changes of this cue with sound location are complex and frequency dependent. We exploited the opportunities offered by the virtual space technique to investigate the behavioral relevance of the overall interaural level difference by fixing this parameter in virtual stimuli to a constant value or introducing additional broadband level differences to normal virtual stimuli. Frequency-specific monaural cues in the stimuli were not manipulated. We observed an influence of the broadband interaural level differences on elevational, but not on azimuthal sound localization. Since results obtained with our manipulations explained only part of the variance in elevational turning angle, we conclude that frequency-specific cues are also important. The behavioral consequences of changes of the overall interaural level difference in a virtual sound depended on the combined interaural time difference contained in the stimulus, indicating an indirect influence of temporal cues on elevational sound localization as well. Thus, elevational sound localization is influenced by a combination of many spatial cues including frequency-dependent and temporal features.

  9. The invisible cues that guide king penguin chicks home: use of magnetic and acoustic cues during orientation and short-range navigation.

    PubMed

    Nesterova, Anna P; Chiffard, Jules; Couchoux, Charline; Bonadonna, Francesco

    2013-04-15

    King penguins (Aptenodytes patagonicus) live in large and densely populated colonies, where navigation can be challenging because of the presence of many conspecifics that could obstruct locally available cues. Our previous experiments demonstrated that visual cues were important but not essential for king penguin chicks' homing. The main objective of this study was to investigate the importance of non-visual cues, such as magnetic and acoustic cues, for chicks' orientation and short-range navigation. In a series of experiments, the chicks were individually displaced from the colony to an experimental arena where they were released under different conditions. In the magnetic experiments, a strong magnet was attached to the chicks' heads. Trials were conducted in daylight and at night to test the relative importance of visual and magnetic cues. Our results showed that when the geomagnetic field around the chicks was modified, their orientation in the arena and the overall ability to home was not affected. In a low sound experiment we limited the acoustic cues available to the chicks by putting ear pads over their ears, and in a loud sound experiment we provided additional acoustic cues by broadcasting colony sounds on the opposite side of the arena to the real colony. In the low sound experiment, the behavior of the chicks was not affected by the limited sound input. In the loud sound experiment, the chicks reacted strongly to the colony sound. These results suggest that king penguin chicks may use the sound of the colony while orienting towards their home.

  10. Auditory spatial representations of the world are compressed in blind humans.

    PubMed

    Kolarik, Andrew J; Pardhan, Shahina; Cirstea, Silvia; Moore, Brian C J

    2017-02-01

    Compared to sighted listeners, blind listeners often display enhanced auditory spatial abilities such as localization in azimuth. However, less is known about whether blind humans can accurately judge distance in extrapersonal space using auditory cues alone. Using virtualization techniques, we show that auditory spatial representations of the world beyond the peripersonal space of blind listeners are compressed compared to those for normally sighted controls. Blind participants overestimated the distance to nearby sources and underestimated the distance to remote sound sources, in both reverberant and anechoic environments, and for speech, music, and noise signals. Functions relating judged and actual virtual distance were well fitted by compressive power functions, indicating that the absence of visual information regarding the distance of sound sources may prevent accurate calibration of the distance information provided by auditory signals.
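
    The compressive power functions referred to above are straightforward to fit. A minimal sketch, assuming hypothetical distance judgments (the numbers below are illustrative, not the study's data): fit judged = k * d^a in log-log coordinates, where an exponent a < 1 indicates compression.

```python
import numpy as np

# Hypothetical judged distances (m) versus actual virtual distance (m);
# illustrative numbers only, not the study's data.
actual = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
judged = np.array([1.4, 2.3, 3.6, 5.5, 8.2])

# Fit judged = k * actual**a by linear regression in log-log coordinates;
# an exponent a < 1 means the representation is compressive.
a, log_k = np.polyfit(np.log(actual), np.log(judged), 1)
print(round(a, 2), round(np.exp(log_k), 2))   # exponent ~0.64, k ~1.4
```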

  11. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager; increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity beyond conventional sensors. We discuss the hardware architecture, compression strategy, sensing-process model, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by a factor of hundreds. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information in a noisy environment. Accomplishing the same task with engineered systems usually requires multiple detectors, advanced computational algorithms, or artificial intelligence. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate sound localization and selective attention. This research investigates and optimizes the sensing capacity and spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor localizes multiple speakers in both stationary and dynamic auditory scenes, and distinguishes mixed conversations from independent sources with a high audio recognition rate.
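
    The abstract gives no implementation details, but the recovery step that compressive sensing systems of this kind share is sparse reconstruction from underdetermined measurements y = Ax. A minimal sketch using the generic ISTA algorithm (the sensing matrix, problem sizes, and regularization weight below are illustrative assumptions, not the dissertation's design):

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=500):
    """Iterative shrinkage-thresholding: min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz const. of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - A.T @ (A @ x - y) / L             # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(0)
m, n, k = 60, 200, 5                              # m << n: compressive regime
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)      # random sensing matrix
x_hat = ista(A, A @ x_true)
print(np.flatnonzero(x_true))                     # true support
print(np.sort(np.argsort(np.abs(x_hat))[-k:]))    # largest recovered entries
```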

  12. Blind estimation of reverberation time

    NASA Astrophysics Data System (ADS)

    Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; O'Brien, William D.; Lansing, Charissa R.; Feng, Albert S.

    2003-11-01

    The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. Many state-of-the-art audio signal processing algorithms, for example in hearing-aids and telephony, are expected to have the ability to characterize the listening environment, and turn on an appropriate processing strategy accordingly. Thus, a method for characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, a method for estimating RT without prior knowledge of sound sources or room geometry is presented. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time-constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.
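
    A minimal sketch of the estimation idea described above: model the diffusive tail as exponentially damped Gaussian white noise, compute the closed-form ML variance for each candidate time constant, and grid-search the profile log-likelihood. The sampling rate, grid, and synthetic tail below are illustrative; the paper's continuous operation and order-statistics filtering are omitted here.

```python
import numpy as np

def ml_rt60(tail, fs, taus):
    """Grid-search the profile log-likelihood over candidate amplitude time
    constants tau, modeling tail[n] ~ N(0, sigma^2 * exp(-2n/(fs*tau)))."""
    n = np.arange(len(tail))
    best_tau, best_ll = taus[0], -np.inf
    for tau in taus:
        env = np.exp(-n / (fs * tau))              # amplitude envelope
        sigma2 = np.mean((tail / env) ** 2)        # closed-form ML variance
        ll = -0.5 * np.sum(np.log(sigma2 * env**2) + tail**2 / (sigma2 * env**2))
        if ll > best_ll:
            best_tau, best_ll = tau, ll
    return 3.0 * np.log(10.0) * best_tau           # time for a 60 dB energy decay

fs = 16000
rng = np.random.default_rng(1)
n = np.arange(fs // 2)                              # 0.5 s decay tail
tail = np.exp(-n / (fs * 0.08)) * rng.standard_normal(n.size)
print(ml_rt60(tail, fs, np.linspace(0.02, 0.3, 57)))   # ~0.55 s (= 6.9 * 0.08)
```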

  13. Online estimation of room reverberation time

    NASA Astrophysics Data System (ADS)

    Ratnam, Rama; Jones, Douglas L.; Wheeler, Bruce C.; Feng, Albert S.

    2003-04-01

    The reverberation time (RT) is an important parameter for characterizing the quality of an auditory space. Sounds in reverberant environments are subject to coloration. This affects speech intelligibility and sound localization. State-of-the-art signal processing algorithms for hearing aids are expected to have the ability to evaluate the characteristics of the listening environment and turn on an appropriate processing strategy accordingly. Thus, a method for the characterization of room RT based on passively received microphone signals represents an important enabling technology. Current RT estimators, such as Schroeder's method or regression, depend on a controlled sound source, and thus cannot produce an online, blind RT estimate. Here, we describe a method for estimating RT without prior knowledge of sound sources or room geometry. The diffusive tail of reverberation was modeled as an exponentially damped Gaussian white noise process. The time constant of the decay, which provided a measure of the RT, was estimated using a maximum-likelihood procedure. The estimates were obtained continuously, and an order-statistics filter was used to extract the most likely RT from the accumulated estimates. The procedure was illustrated for connected speech. Results obtained for simulated and real room data are in good agreement with the real RT values.

  14. Accurate Sound Localization in Reverberant Environments is Mediated by Robust Encoding of Spatial Cues in the Auditory Midbrain

    PubMed Central

    Devore, Sasha; Ihlefeld, Antje; Hancock, Kenneth; Shinn-Cunningham, Barbara; Delgutte, Bertrand

    2009-01-01

    In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener’s ears, distorting the spatial cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sensitivity of single neurons in the auditory midbrain of anesthetized cats follows a similar time course, although onset dominance in temporal response patterns results in more robust directional sensitivity than expected, suggesting a simple mechanism for improving directional sensitivity in reverberation. In parallel behavioral experiments, we demonstrate that human lateralization judgments are consistent with predictions from a population rate model decoding the observed midbrain responses, suggesting a subcortical origin for robust sound localization in reverberant environments. PMID:19376072

  15. Earth Global Reference Atmospheric Model (Earth-GRAM) GRAM Virtual Meeting

    NASA Technical Reports Server (NTRS)

    White, Patrick

    2017-01-01

    What is Earth-GRAM? It provides monthly means and standard deviations for any point in the atmosphere, with monthly, geographic, and altitude variation. Earth-GRAM is a C++ software package, currently distributed as Earth-GRAM 2016. Atmospheric variables include pressure, density, temperature, horizontal and vertical winds, speed of sound, and atmospheric constituents. It is used by the engineering community because of its ability to create atmospheric dispersions at rapid runtime, and is often embedded in trajectory simulation software. It is not a forecast model and does not readily capture localized atmospheric effects.

  16. Unilateral Hearing Loss: Understanding Speech Recognition and Localization Variability-Implications for Cochlear Implant Candidacy.

    PubMed

    Firszt, Jill B; Reeder, Ruth M; Holden, Laura K

    At a minimum, unilateral hearing loss (UHL) impairs sound localization ability and understanding speech in noisy environments, particularly if the loss is severe to profound. Accompanying the numerous negative consequences of UHL is considerable unexplained individual variability in the magnitude of its effects. Identification of covariables that affect outcome and contribute to variability in UHLs could augment counseling, treatment options, and rehabilitation. Cochlear implantation as a treatment for UHL is on the rise yet little is known about factors that could impact performance or whether there is a group at risk for poor cochlear implant outcomes when hearing is near-normal in one ear. The overall goal of our research is to investigate the range and source of variability in speech recognition in noise and localization among individuals with severe to profound UHL and thereby help determine factors relevant to decisions regarding cochlear implantation in this population. The present study evaluated adults with severe to profound UHL and adults with bilateral normal hearing. Measures included adaptive sentence understanding in diffuse restaurant noise, localization, roving-source speech recognition (words from 1 of 15 speakers in a 140° arc), and an adaptive speech-reception threshold psychoacoustic task with varied noise types and noise-source locations. There were three age-sex-matched groups: UHL (severe to profound hearing loss in one ear and normal hearing in the contralateral ear), normal hearing listening bilaterally, and normal hearing listening unilaterally. Although the normal-hearing-bilateral group scored significantly better and had less performance variability than UHLs on all measures, some UHL participants scored within the range of the normal-hearing-bilateral group on all measures. The normal-hearing participants listening unilaterally had better monosyllabic word understanding than UHLs for words presented on the blocked/deaf side but not the open/hearing side. In contrast, UHLs localized better than the normal-hearing unilateral listeners for stimuli on the open/hearing side but not the blocked/deaf side. This suggests that UHLs had learned strategies for improved localization on the side of the intact ear. The UHL and unilateral normal-hearing participant groups were not significantly different for speech in noise measures. UHL participants with childhood rather than recent hearing loss onset localized significantly better; however, these two groups did not differ for speech recognition in noise. Age at onset in UHL adults appears to affect localization ability differently than understanding speech in noise. Hearing thresholds were significantly correlated with speech recognition for UHL participants but not the other two groups. Auditory abilities of UHLs varied widely and could be explained only in part by hearing threshold levels. Age at onset and length of hearing loss influenced performance on some, but not all measures. Results support the need for a revised and diverse set of clinical measures, including sound localization, understanding speech in varied environments, and careful consideration of functional abilities as individuals with severe to profound UHL are being considered potential cochlear implant candidates.

  17. Annoyance, detection and recognition of wind turbine noise.

    PubMed

    Van Renterghem, Timothy; Bockstael, Annelies; De Weirt, Valentine; Botteldooren, Dick

    2013-07-01

    Annoyance, recognition and detection of noise from a single wind turbine were studied by means of a two-stage listening experiment with 50 participants with normal hearing abilities. In-situ recordings made at close distance from a 1.8-MW wind turbine operating at 22 rpm were mixed with road traffic noise, and processed to simulate indoor sound pressure levels at LAeq 40 dBA. In a first part, where people were unaware of the true purpose of the experiment, samples were played during a quiet leisure activity. Under these conditions, pure wind turbine noise gave annoyance ratings very similar to unmixed highway noise at the same equivalent level, while annoyance by local road traffic noise was significantly higher. In a second experiment, listeners were asked to identify the sample containing wind turbine noise in a paired comparison test. The detection limit of wind turbine noise in the presence of highway noise was estimated to be as low as a signal-to-noise ratio of -23 dBA. When mixed with local road traffic, such a detection limit could not be determined. These findings support the idea that noticing the sound could be an important aspect of wind turbine noise annoyance at the low equivalent levels typically observed indoors in practice. Participants who easily recognized wind-turbine(-like) sounds could detect wind turbine noise better when it was embedded in road traffic noise. Recognition of wind turbine sounds is also linked to higher annoyance. Awareness of the source is therefore a relevant aspect of wind turbine noise perception, which is consistent with previous research. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. [Auditory training in workshops: group therapy option].

    PubMed

    Santos, Juliana Nunes; do Couto, Isabel Cristina Plais; Amorim, Raquel Martins da Costa

    2006-01-01

    AIM: to verify the efficacy of group auditory training, delivered in a workshop environment, in individuals with mental retardation. METHOD: a longitudinal prospective study with 13 mentally retarded individuals from the Associação de Pais e Amigos do Excepcional (APAE) of Congonhas, divided into two groups, case (n=5) and control (n=8), who were submitted to ten auditory training sessions after the integrity of the peripheral auditory system was verified through evoked otoacoustic emissions. Participants were evaluated at the beginning and at the end of the project using a specific protocol covering the auditory abilities (sound localization, auditory identification, memory, sequencing, auditory discrimination, and auditory comprehension). Data entry, processing, and analyses were performed with the Epi Info 6.04 software. RESULTS: the groups did not differ in age (mean = 23.6 years) or gender (40% male). In the first evaluation both groups presented similar performances. In the final evaluation an improvement in the auditory abilities was observed for the individuals in the case group. When comparing the mean number of correct answers obtained by both groups in the first and final evaluations, statistically significant results were obtained for sound localization (p=0.02), auditory sequencing (p=0.006), and auditory discrimination (p=0.03). CONCLUSION: group auditory training demonstrated to be effective in individuals with mental retardation, with an observed improvement in the auditory abilities. More studies, with larger numbers of participants, are necessary to confirm the findings of the present research. These results will help public health professionals to reanalyze the therapy models used, so that they can apply specific methods according to individual needs, such as auditory training workshops.

  19. Embedded System Implementation of Sound Localization in Proximal Region

    NASA Astrophysics Data System (ADS)

    Iwanaga, Nobuyuki; Matsumura, Tomoya; Yoshida, Akihiro; Kobayashi, Wataru; Onoye, Takao

    A sound localization method for the proximal region is proposed, based on a low-cost 3D sound localization algorithm that uses head-related transfer functions (HRTFs). The auditory parallax model is applied to the current algorithm so that more accurate HRTFs can be used for sound localization in the proximal region. In addition, head-shadowing effects based on a rigid-sphere model are reproduced in the proximal region by means of a second-order IIR filter. A subjective listening test demonstrates the effectiveness of the proposed method. An embedded system implementation of the proposed method is also described, showing that the method improves sound effects in the proximal region with only a 5.1% increase in memory capacity and an 8.3% increase in computational cost.
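
    As an illustration of the filtering step named above, applying a biquad (second-order IIR) section to a signal is sketched below. The coefficient values are stable placeholders, not the paper's rigid-sphere fits; a real implementation would derive b and a per source azimuth and distance from the rigid-sphere model.

```python
import numpy as np

def biquad(x, b, a):
    """Second-order IIR filter, direct form II transposed (a[0] == 1)."""
    y = np.empty_like(x)
    z1 = z2 = 0.0
    for i, xn in enumerate(x):
        yn = b[0] * xn + z1
        z1 = b[1] * xn - a[1] * yn + z2
        z2 = b[2] * xn - a[2] * yn
        y[i] = yn
    return y

rng = np.random.default_rng(2)
x = rng.standard_normal(44100)            # 1 s of noise at 44.1 kHz
b = [0.30, 0.25, 0.05]                    # placeholder numerator coefficients
a = [1.00, -0.60, 0.10]                   # placeholder (stable) denominator
shadowed = biquad(x, b, a)                # attenuates high frequencies
```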

  20. Sound localization in noise in hearing-impaired listeners.

    PubMed

    Lorenzi, C; Gatehouse, S; Lever, C

    1999-06-01

    The present study assesses the ability of four listeners with high-frequency, bilateral symmetrical sensorineural hearing loss to localize and detect a broadband click train in the frontal-horizontal plane, in quiet and in the presence of a white noise. The speaker array and stimuli are identical to those described by Lorenzi et al. (in press). The results show that: (1) localization performance is only slightly poorer in hearing-impaired listeners than in normal-hearing listeners when noise is at 0 deg azimuth, (2) localization performance begins to decrease at higher signal-to-noise ratios for hearing-impaired listeners than for normal-hearing listeners when noise is at +/- 90 deg azimuth, and (3) the performance of hearing-impaired listeners is less consistent when noise is at +/- 90 deg azimuth than at 0 deg azimuth. The effects of a high-frequency hearing loss were also studied by measuring the ability of normal-hearing listeners to localize the low-pass filtered version of the clicks. The data reproduce the effects of noise on three out of the four hearing-impaired listeners when noise is at 0 deg azimuth. They reproduce the effects of noise on only two out of the four hearing-impaired listeners when noise is at +/- 90 deg azimuth. The additional effects of a low-frequency hearing loss were investigated by attenuating the low-pass filtered clicks and the noise by 20 dB. The results show that attenuation does not strongly affect localization accuracy for normal-hearing listeners. Measurements of the clicks' detectability indicate that the hearing-impaired listeners who show the poorest localization accuracy also show the poorest ability to detect the clicks. The inaudibility of high frequencies, "distortions," and reduced detectability of the signal are assumed to have caused the poorer-than-normal localization accuracy for hearing-impaired listeners.

  1. Effect of eye position on saccades and neuronal responses to acoustic stimuli in the superior colliculus of the behaving cat.

    PubMed

    Populin, Luis C; Tollin, Daniel J; Yin, Tom C T

    2004-10-01

    We examined the motor error hypothesis of visual and auditory interaction in the superior colliculus (SC), first tested by Jay and Sparks in the monkey. We trained cats to direct their eyes to the location of acoustic sources and studied the effects of eye position on both the ability of cats to localize sounds and the auditory responses of SC neurons with the head restrained. Sound localization accuracy was generally not affected by initial eye position, i.e., accuracy was not proportionally affected by the deviation of the eyes from the primary position at the time of stimulus presentation, showing that eye position is taken into account when orienting to acoustic targets. The responses of most single SC neurons to acoustic stimuli in the intact cat were modulated by eye position in the direction consistent with the predictions of the "motor error" hypothesis, but the shift accounted for only two-thirds of the initial deviation of the eyes. However, when the average horizontal sound localization error, which was approximately 35% of the target amplitude, was taken into account, the magnitude of the horizontal shifts in the SC auditory receptive fields matched the observed behavior. The modulation by eye position was not due to concomitant movements of the external ears, as confirmed by recordings carried out after immobilizing the pinnae of one cat. However, the pattern of modulation after pinnae immobilization was inconsistent with the observations in the intact cat, suggesting that, in the intact animal, information about the position of the pinnae may be taken into account.

  2. Adjustment of interaural time difference in head related transfer functions based on listeners' anthropometry and its effect on sound localization

    NASA Astrophysics Data System (ADS)

    Suzuki, Yôiti; Watanabe, Kanji; Iwaya, Yukio; Gyoba, Jiro; Takane, Shouichi

    2005-04-01

    Because the head-related transfer functions (HRTFs) that govern sound localization show strong individuality, sound localization systems based on HRTF synthesis require suitable HRTFs for individual listeners. However, it is impractical to obtain HRTFs for all listeners by measurement. A practical alternative might be to improve sound localization by adjusting non-individualized HRTFs to a specific listener based on that listener's anthropometry. This study first developed a new method to estimate interaural time differences (ITDs) from HRTFs. Correlations between ITDs and anthropometric parameters were then analyzed using the canonical correlation method. Results indicated that parameters relating to head size and to shoulder and ear positions are significant. Consequently, we attempted to express ITDs based on a listener's anthropometric data. In this process, the change of ITD as a function of azimuth angle was parameterized as a sum of sine functions, and the parameters were analyzed using multiple regression analysis with the anthropometric parameters as explanatory variables. The predicted, or individualized, ITDs were installed in the non-individualized HRTFs to evaluate sound localization performance. Results showed that individualization of ITDs improved horizontal sound localization.
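
    A minimal sketch of the parameterization step described above: ITD as a function of azimuth expressed as a sum of sine harmonics fitted by least squares. The harmonic orders, synthetic ITD values, and noise level are illustrative assumptions, not the authors' measured data.

```python
import numpy as np

# Synthetic ITDs (µs) versus azimuth for one listener, standing in for
# ITDs extracted from measured HRTFs.
az = np.deg2rad(np.arange(0, 360, 15))
rng = np.random.default_rng(3)
itd = 700 * np.sin(az) + 40 * np.sin(3 * az) + rng.normal(0, 10, az.size)

# ITD(azimuth) modeled as a sum of odd sine harmonics and fitted by least
# squares; the harmonic orders used here are a guess, not the paper's.
X = np.column_stack([np.sin(k * az) for k in (1, 3, 5)])
coef, *_ = np.linalg.lstsq(X, itd, rcond=None)
print(np.round(coef, 1))    # ~[700, 40, 0]

# Across many listeners, each coefficient could then be regressed on
# anthropometric parameters (head size, shoulder and ear positions) to
# predict individualized ITDs for a new listener.
```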

  3. Spatial auditory processing in pinnipeds

    NASA Astrophysics Data System (ADS)

    Holt, Marla M.

    Given the biological importance of sound for a variety of activities, pinnipeds must be able to obtain spatial information about their surroundings through acoustic input in the absence of other sensory cues. The three chapters of this dissertation address the spatial auditory processing capabilities of pinnipeds in air, given that these amphibious animals use acoustic signals for reproduction and survival on land. Two chapters are comparative lab-based studies that used psychophysical approaches in an acoustic chamber. Chapter 1 addressed the frequency-dependent sound localization abilities in azimuth of three pinniped species (the harbor seal, Phoca vitulina, the California sea lion, Zalophus californianus, and the northern elephant seal, Mirounga angustirostris). While the performances of the sea lion and harbor seal were consistent with the duplex theory of sound localization, the elephant seal, a low-frequency hearing specialist, showed a decreased ability to localize the highest frequencies tested. Chapter 2 measured spatial release from masking (SRM), which occurs when a signal and masker are spatially separated, resulting in improved signal detectability relative to conditions in which they are co-located, in a harbor seal and a sea lion. Absolute and masked thresholds were measured at three frequencies and azimuths to determine the detection advantages afforded by this type of spatial auditory processing. Results showed that hearing sensitivity was enhanced by up to 19 and 12 dB in the harbor seal and sea lion, respectively, when the signal and masker were spatially separated. Chapter 3 was a field-based study that quantified both sender and receiver variables of the directional properties of male northern elephant seal calls, produced within a communication system that serves to delineate dominance status. This included measuring call directivity patterns, observing male-male vocally mediated interactions, and an acoustic playback study. Results showed that males produce highly directional calls that, together with social status, influence the response of receivers. The playback study confirmed that the isolated acoustic components of this display elicited similar responses among males. These three chapters provide further information about comparative aspects of spatial auditory processing in pinnipeds.

  4. Confusability of Consonant Phonemes in Sound Discrimination Tasks.

    ERIC Educational Resources Information Center

    Rudegeair, Robert E.

    The findings of Marsh and Sherman's 1970 investigation of the speech sound discrimination ability of kindergarten subjects are discussed in this paper. In the study, a comparison was made between performance when speech sounds were presented in isolation and when speech sounds were presented in a word context, using minimal sound contrasts.…

  5. Effects of user training with electronically-modulated sound transmission hearing protectors and the open ear on horizontal localization ability.

    PubMed

    Casali, John G; Robinette, Martin B

    2015-02-01

    To determine if training with electronically-modulated hearing protection (EMHP) and the open ear results in auditory learning on a horizontal localization task. Baseline localization testing was conducted in three listening conditions (open-ear, in-the-ear (ITE) EMHP, and over-the-ear (OTE) EMHP). Participants then wore either an ITE or OTE EMHP for 12, almost daily, one-hour training sessions. After training was complete, participants again underwent localization testing in all three listening conditions. A computer with a custom software and hardware interface presented localization sounds and collected participant responses. Twelve participants were recruited from the student population at Virginia Tech. Audiometric requirements were 35 dBHL at 500, 1000, and 2000 Hz bilaterally, and 55 dBHL at 4000 Hz in at least one ear. Pre-training localization performance with an ITE or OTE EMHP was worse than open-ear performance. After training with any given listening condition, including open-ear, performance in that listening condition improved, in part from a practice effect. However, post-training localization performance showed near equal performance between the open-ear and training EMHP. Auditory learning occurred for the training EMHP, but not for the non-training EMHP; that is, there was no significant training crossover effect between the ITE and the OTE devices. It is evident from this study that auditory learning (improved horizontal localization performance) occurred with the EMHP for which training was performed. However, performance improvements found with the training EMHP were not realized in the non-training EMHP. Furthermore, localization performance in the open-ear condition also benefitted from training on the task.

  6. Dynamic Spatial Hearing by Human and Robot Listeners

    NASA Astrophysics Data System (ADS)

    Zhong, Xuan

    This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sources with 60° spacing were amplitude modulated with consecutively larger phase delays. At lower modulation rates, motion could be perceived by human listeners in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds while the listener was passively rotated in a chair in the middle of the loudspeaker array. The human listeners were better able to localize the sound sources with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair, and motion data were collected. The fourth experiment showed that an Extended Kalman Filter could be used to localize sound sources in a recursive manner. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
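
    The abstract names an Extended Kalman Filter for recursive localization from binaural recordings and motion data. As a deliberately simplified sketch of that fusion idea, a plain linear Kalman filter over a single world-frame azimuth state is shown below; the noise levels and rotation profile are assumptions, and the authors' EKF over full source coordinates is not reproduced here.

```python
import numpy as np

def kf_world_azimuth(rel_bearings, headings, r=np.deg2rad(5.0)):
    """Fuse head-motion data with head-relative binaural bearing estimates.
    The source azimuth in world coordinates is static, so each measurement
    is z = world_azimuth - heading + noise (angles wrapped to [-pi, pi])."""
    az, var = 0.0, np.pi ** 2                 # diffuse prior on world azimuth
    for z, th in zip(rel_bearings, headings):
        innov = np.angle(np.exp(1j * (z - (az - th))))   # wrapped innovation
        gain = var / (var + r ** 2)                       # Kalman gain
        az += gain * innov
        var *= 1 - gain
    return az

rng = np.random.default_rng(4)
true_az = np.deg2rad(40.0)                    # source fixed in the world
headings = np.linspace(0.0, np.pi, 60)        # chair rotates through 180 deg
z = true_az - headings + rng.normal(0, np.deg2rad(5.0), headings.size)
print(np.rad2deg(kf_world_azimuth(z, headings)))   # ~40
```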

  7. Adaptation in sound localization processing induced by interaural time difference in amplitude envelope at high frequencies.

    PubMed

    Kawashima, Takayuki; Sato, Takao

    2012-01-01

    When a second sound follows a long first sound, its location appears to be perceived away from the first one (the localization/lateralization aftereffect). This aftereffect has often been considered to reflect an efficient neural coding of sound locations in the auditory system. To understand determinants of the localization aftereffect, the current study examined whether it is induced by an interaural temporal difference (ITD) in the amplitude envelope of high frequency transposed tones (over 2 kHz), which is known to function as a sound localization cue. In Experiment 1, participants were required to adjust the position of a pointer to the perceived location of test stimuli before and after adaptation. Test and adapter stimuli were amplitude modulated (AM) sounds presented at high frequencies and their positional differences were manipulated solely by the envelope ITD. Results showed that the adapter's ITD systematically affected the perceived position of test sounds to the directions expected from the localization/lateralization aftereffect when the adapter was presented at ±600 µs ITD; a corresponding significant effect was not observed for a 0 µs ITD adapter. In Experiment 2, the observed adapter effect was confirmed using a forced-choice task. It was also found that adaptation to the AM sounds at high frequencies did not significantly change the perceived position of pure-tone test stimuli in the low frequency region (128 and 256 Hz). The findings in the current study indicate that ITD in the envelope at high frequencies induces the localization aftereffect. This suggests that ITD in the high frequency region is involved in adaptive plasticity of auditory localization processing.

  8. The effect of spatial auditory landmarks on ambulation.

    PubMed

    Karim, Adham M; Rumalla, Kavelin; King, Laurie A; Hullar, Timothy E

    2018-02-01

    The maintenance of balance and posture is a result of the collaborative efforts of vestibular, proprioceptive, and visual sensory inputs, but a fourth neural input, audition, may also improve balance. Here, we tested the hypothesis that auditory inputs function as environmental spatial landmarks whose effectiveness depends on sound localization ability during ambulation. Eight blindfolded normal young subjects performed the Fukuda-Unterberger test in three auditory conditions: silence, white noise played through headphones (head-referenced condition), and white noise played through a loudspeaker placed directly in front, 135 centimeters from the ear, at ear height (earth-referenced condition). For the earth-referenced condition, an additional experiment was performed to test the effect of moving the speaker azimuthal position to 45, 90, 135, and 180°. Subjects performed significantly better in the earth-referenced condition than in the head-referenced or silent conditions. Performance progressively decreased over the range from 0° to 135°, but all subjects then improved slightly at 180° compared to 135°. These results suggest that the presence of sound dramatically improves the ability to ambulate when vision is limited, but that sound sources must be located in the external environment in order to improve balance. This supports the hypothesis that they act by providing spatial landmarks against which head and body movement and orientation may be compared and corrected. Balance improvement in the azimuthal plane mirrors sensitivity to sound movement at similar positions, indicating that similar auditory mechanisms may underlie both processes. These results may help optimize the use of auditory cues to improve balance in particular patient populations. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Toward a Nonspeech Test of Auditory Cognition: Semantic Context Effects in Environmental Sound Identification in Adults of Varying Age and Hearing Abilities

    PubMed Central

    Sheft, Stanley; Norris, Molly; Spanos, George; Radasevich, Katherine; Formsma, Paige; Gygi, Brian

    2016-01-01

    Objective Sounds in everyday environments tend to follow one another as events unfold over time. The tacit knowledge of contextual relationships among environmental sounds can influence their perception. We examined the effect of semantic context on the identification of sequences of environmental sounds by adults of varying age and hearing abilities, with an aim to develop a nonspeech test of auditory cognition. Method The familiar environmental sound test (FEST) consisted of 25 individual sounds arranged into ten five-sound sequences: five contextually coherent and five incoherent. After hearing each sequence, listeners identified each sound and arranged them in the presentation order. FEST was administered to young normal-hearing, middle-to-older normal-hearing, and middle-to-older hearing-impaired adults (Experiment 1), and to postlingual cochlear-implant users and young normal-hearing adults tested through vocoder-simulated implants (Experiment 2). Results FEST scores revealed a strong positive effect of semantic context in all listener groups, with young normal-hearing listeners outperforming other groups. FEST scores also correlated with other measures of cognitive ability, and for CI users, with the intelligibility of speech-in-noise. Conclusions Being sensitive to semantic context effects, FEST can serve as a nonspeech test of auditory cognition for diverse listener populations to assess and potentially improve everyday listening skills. PMID:27893791

  10. Geometric Constraints on Human Speech Sound Inventories

    PubMed Central

    Dunbar, Ewan; Dupoux, Emmanuel

    2016-01-01

    We investigate the idea that the languages of the world have developed coherent sound systems in which having one sound increases or decreases the chances of having certain other sounds, depending on shared properties of those sounds. We investigate the geometries of sound systems that are defined by the inherent properties of sounds. We document three typological tendencies in sound system geometries: economy, a tendency for the differences between sounds in a system to be definable on a relatively small number of independent dimensions; local symmetry, a tendency for sound systems to have relatively large numbers of pairs of sounds that differ only on one dimension; and global symmetry, a tendency for sound systems to be relatively balanced. The finding of economy corroborates previous results; the two symmetry properties have not been previously documented. We also investigate the relation between the typology of inventory geometries and the typology of individual sounds, showing that the frequency distribution with which individual sounds occur across languages works in favor of both local and global symmetry. PMID:27462296

  11. The Relationship Between Speech, Language, and Phonological Awareness in Preschool-Age Children With Developmental Disabilities.

    PubMed

    Barton-Hulsey, Andrea; Sevcik, Rose A; Romski, MaryAnn

    2018-05-03

    A number of intrinsic factors, including expressive speech skills, have been suggested to place children with developmental disabilities at risk for limited development of reading skills. This study examines the relationship between these factors, speech ability, and children's phonological awareness skills. A nonexperimental study design was used to examine the relationship between intrinsic skills of speech, language, print, and letter-sound knowledge to phonological awareness in 42 children with developmental disabilities between the ages of 48 and 69 months. Hierarchical multiple regression was done to determine if speech ability accounted for a unique amount of variance in phonological awareness skill beyond what would be expected by developmental skills inclusive of receptive language and print and letter-sound knowledge. A range of skill in all areas of direct assessment was found. Children with limited speech were found to have emerging skills in print knowledge, letter-sound knowledge, and phonological awareness. Speech ability did not predict a significant amount of variance in phonological awareness beyond what would be expected by developmental skills of receptive language and print and letter-sound knowledge. Children with limited speech ability were found to have receptive language and letter-sound knowledge that supported the development of phonological awareness skills. This study provides implications for practitioners and researchers concerning the factors related to early reading development in children with limited speech ability and developmental disabilities.

  12. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds.

    PubMed

    Shinn-Cunningham, Barbara

    2017-10-17

    This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. The results from neuroscience and psychoacoustics are reviewed. In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with "normal hearing." How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. http://cred.pubs.asha.org/article.aspx?articleid=2601617.

  13. Issues in Humanoid Audition and Sound Source Localization by Active Audition

    NASA Astrophysics Data System (ADS)

    Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki

    In this paper, we present an active audition system implemented on the humanoid robot "SIG the humanoid". The audition system of this highly intelligent humanoid localizes sound sources and recognizes auditory events in the auditory scene. Active audition, as reported in this paper, enables SIG to track sources by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning its microphones orthogonal to a sound source and by capturing possible sound sources by vision. However, such active head movement inevitably creates motor noise. The system adaptively cancels motor noise using motor control signals and the cover acoustics. The experimental results demonstrate that active audition, through the integration of audition, vision, and motor control, attains sound source tracking in a variety of conditions.

  14. The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl.

    PubMed

    Baxter, Caitlin S; Nelson, Brian S; Takahashi, Terry T

    2013-02-01

    Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643-655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.

  15. Measuring Young Children's Alphabet Knowledge: Development and Validation of Brief Letter-Sound Knowledge Assessments

    ERIC Educational Resources Information Center

    Piasta, Shayne B.; Phillips, Beth M.; Williams, Jeffrey M.; Bowles, Ryan P.; Anthony, Jason L.

    2016-01-01

    Early childhood teachers are increasingly encouraged to support children's development of letter-sound abilities. Assessment of letter-sound knowledge is key in planning for effective instruction, yet the letter-sound knowledge assessments currently available and suitable for preschool-age children demonstrate significant limitations. The purpose…

  16. Physiological correlates of sound localization in a parasitoid fly, Ormia ochracea

    NASA Astrophysics Data System (ADS)

    Oshinsky, Michael Lee

    A major focus of research in the nervous system is the investigation of neural circuits. The question of how neurons connect to form functional units has driven modern neuroscience research from its inception. From the beginning, the neural circuits of the auditory system, and specifically sound localization, were used as a model system for investigating neural connectivity and computation. Sound localization lends itself to this task because there is no mapping of spatial information onto a receptor sheet as in vision. With only one eye, an animal would still have positional information for objects. Since the receptor sheet in the ear is frequency oriented and not spatially oriented, positional information for a sound source does not exist with only one ear: the nervous system computes the location of a sound source from differences in the physiology of the two ears. In this study, I investigated the neural circuits for sound localization in a fly, Ormia ochracea (Diptera, Tachinidae, Ormiini), which is a parasitoid of crickets. This fly possesses a unique mechanically coupled hearing organ: the two ears are contained in one air sac, and a cuticular bridge with a flexible spring-like structure at its center connects them. This mechanical coupling preprocesses the sound before it is detected by the nervous system and provides the fly with directional information. The subject of this study is the neural coding of the location of sound stimuli by a mechanically coupled auditory system. In chapter 1, I present the natural history of an acoustic parasitoid and review the peripheral processing of sound by the Ormian ear. In chapter 2, I describe the anatomy and physiology of the auditory afferents, presented in the context of sound localization. In chapter 3, I describe the direction-dependent physiology of the thoracic local and ascending acoustic interneurons. In chapter 4, I quantify the threshold and detail the kinematics of the phonotactic walking behavior in Ormia ochracea, and I quantify the angular resolution of the phonotactic turning behavior. Using a model, I show that the temporal coding properties of the afferents provide most of the information required by the fly to localize a singing cricket.

  17. Localization of sound sources in a room with one microphone

    NASA Astrophysics Data System (ADS)

    Peić Tukuljac, Helena; Lissek, Hervé; Vandergheynst, Pierre

    2017-08-01

    Estimation of the location of sound sources is usually done using microphone arrays. Such settings provide an environment where we know the differences between the signals received at different microphones in terms of phase or attenuation, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. Design guidelines and limitations of the sensing matrix are given. The implementation is based on sparsity over the voxels of the room that are occupied by a source. What is especially interesting about our solution is that it localizes sound sources not only in the horizontal plane, but in full 3D coordinates inside the room.
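
    A minimal sketch of the sparse-recovery idea described above, with orthogonal matching pursuit standing in for whichever solver the authors used: the dictionary columns play the role of room transfer functions from candidate voxels to the single microphone. Here they are random stand-ins; a real system would simulate each column from the known room shape and microphone position.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: select k dictionary atoms (candidate
    voxels) whose combination best explains the microphone observation y."""
    idx, resid = [], y.copy()
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ resid))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        resid = y - D[:, idx] @ coef
    return sorted(idx)

rng = np.random.default_rng(5)
n_samples, n_voxels = 400, 125                # e.g. a 5 x 5 x 5 voxel grid
# Stand-in dictionary: one column per voxel. A real system would fill each
# column with the simulated room transfer function from that voxel to the mic.
D = rng.standard_normal((n_samples, n_voxels))
D /= np.linalg.norm(D, axis=0)
active = [17, 92]                             # voxels actually holding sources
y = D[:, active] @ np.array([1.0, 0.7])
print(omp(D, y, k=2))                         # -> [17, 92]
```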

  18. Neural responses to sounds presented on and off the beat of ecologically valid music

    PubMed Central

    Tierney, Adam; Kraus, Nina

    2013-01-01

    The tracking of rhythmic structure is a vital component of speech and music perception. It is known that sequences of identical sounds can give rise to the percept of alternating strong and weak sounds, and that this percept is linked to enhanced cortical and oscillatory responses. The neural correlates of the perception of rhythm elicited by ecologically valid, complex stimuli, however, remain unexplored. Here we report the effects of a stimulus' alignment with the beat on the brain's processing of sound. Human subjects listened to short popular music pieces while simultaneously hearing a target sound. Cortical and brainstem electrophysiological onset responses to the sound were enhanced when it was presented on the beat of the music, as opposed to shifted away from it. Moreover, the size of the effect of alignment with the beat on the cortical response correlated strongly with the ability to tap to a beat, suggesting that the ability to synchronize to the beat of simple isochronous stimuli and the ability to track the beat of complex, ecologically valid stimuli may rely on overlapping neural resources. These results suggest that the perception of musical rhythm may have robust effects on processing throughout the auditory system. PMID:23717268

  19. Position-dependent hearing in three species of bushcrickets (Tettigoniidae, Orthoptera)

    PubMed Central

    Lakes-Harlan, Reinhard; Scherberich, Jan

    2015-01-01

    A primary task of auditory systems is the localization of sound sources in space. Sound source localization in azimuth is usually based on temporal or intensity differences of sounds between the bilaterally arranged ears. In mammals, localization in elevation is possible by transfer functions at the ear, especially the pinnae. Although insects are able to locate sound sources, little attention is given to the mechanisms of acoustic orientation to elevated positions. Here we comparatively analyse the peripheral hearing thresholds of three species of bushcrickets in respect to sound source positions in space. The hearing thresholds across frequencies depend on the location of a sound source in the three-dimensional hearing space in front of the animal. Thresholds differ for different azimuthal positions and for different positions in elevation. This position-dependent frequency tuning is species specific. Largest differences in thresholds between positions are found in Ancylecha fenestrata. Correspondingly, A. fenestrata has a rather complex ear morphology including cuticular folds covering the anterior tympanal membrane. The position-dependent tuning might contribute to sound source localization in the habitats. Acoustic orientation might be a selective factor for the evolution of morphological structures at the bushcricket ear and, speculatively, even for frequency fractioning in the ear. PMID:26543574

  20. Position-dependent hearing in three species of bushcrickets (Tettigoniidae, Orthoptera).

    PubMed

    Lakes-Harlan, Reinhard; Scherberich, Jan

    2015-06-01

    A primary task of auditory systems is the localization of sound sources in space. Sound source localization in azimuth is usually based on temporal or intensity differences of sounds between the bilaterally arranged ears. In mammals, localization in elevation is possible by transfer functions at the ear, especially the pinnae. Although insects are able to locate sound sources, little attention is given to the mechanisms of acoustic orientation to elevated positions. Here we comparatively analyse the peripheral hearing thresholds of three species of bushcrickets in respect to sound source positions in space. The hearing thresholds across frequencies depend on the location of a sound source in the three-dimensional hearing space in front of the animal. Thresholds differ for different azimuthal positions and for different positions in elevation. This position-dependent frequency tuning is species specific. Largest differences in thresholds between positions are found in Ancylecha fenestrata. Correspondingly, A. fenestrata has a rather complex ear morphology including cuticular folds covering the anterior tympanal membrane. The position-dependent tuning might contribute to sound source localization in the habitats. Acoustic orientation might be a selective factor for the evolution of morphological structures at the bushcricket ear and, speculatively, even for frequency fractioning in the ear.

  1. Characterizing the audibility of sound field with diffusion in architectural spaces

    NASA Astrophysics Data System (ADS)

    Utami, Sentagi Sesotya

    The significance of diffusion control in room acoustics is that it attempts to avoid echoes by dispersing reflections while removing less valuable sound energy. Some applications place emphasis on the enhancement of late reflections to promote a sense of envelopment, and on methods required to measure the performance of diffusers. What remains unclear is the impact on audible quality of the diffusion produced by the geometric arrangement of architectural elements. The objective of this research is to characterize the audibility of the sound field with diffusion in architectural space. To address this objective, an approach utilizing various methods and new techniques relevant to room acoustics standards was applied. An array of microphones based on beamforming (i.e., an acoustic camera) was utilized for field measurements in a recording studio, classrooms, auditoriums, concert halls, and sport arenas. Given the ability to combine a visual image with acoustical data, the measured impulse responses were analyzed to identify the impact of diffusive surfaces on the early, late, and reverberant sound fields. The effects of the room geometry and the proportions of diffusive and absorptive surfaces were observed using geometrical room acoustics simulations. The degree of diffuseness in each space was measured by coherences between different measurement positions, along with the acoustical conditions predicted by well-known objective parameters such as T30, EDT, C80, and C50. Noticeable differences in the auditory experience were investigated using computer-based survey techniques, including an immersive virtual environment system, given current software auralization capabilities. The results, based on statistical analysis, demonstrate the users' ability to localize the sound and to distinguish the intensity, clarity, and reverberation created within the virtual environment. The impact of architectural elements on diffusion control is evaluated through design-variable interactions, both objectively and subjectively. The effectiveness of the diffusive surfaces is determined by the echo reduction and the sense of complete immersion in a given room acoustic volume. Applying such a methodology at various stages of design provides the ability to create a better auditory experience for the users. The results of the cases studied have contributed to the development of new acoustical treatments based on diffusion characteristics.
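
    The objective parameters named above (T30, EDT, C80, C50) are all derived from measured impulse responses. A minimal sketch, assuming a synthetic exponentially decaying impulse response: Schroeder backward integration yields the energy decay curve (T30 would then come from a line fit over its -5 to -35 dB span), and clarity is an early-to-late energy ratio.

```python
import numpy as np

def schroeder_db(ir):
    """Energy decay curve in dB via Schroeder backward integration."""
    edc = np.cumsum(ir[::-1] ** 2)[::-1]
    return 10 * np.log10(edc / edc[0])

def clarity_db(ir, fs, t_ms=50):
    """C50 (t_ms=50) or C80 (t_ms=80): early-to-late energy ratio in dB."""
    split = int(fs * t_ms / 1000)
    return 10 * np.log10(np.sum(ir[:split] ** 2) / np.sum(ir[split:] ** 2))

fs = 16000
rng = np.random.default_rng(6)
n = np.arange(int(0.8 * fs))
ir = np.exp(-n / (fs * 0.1)) * rng.standard_normal(n.size)   # synthetic IR
print(clarity_db(ir, fs, 50), clarity_db(ir, fs, 80))        # C50, C80 in dB
print(schroeder_db(ir)[int(0.1 * fs)])                       # decay at 100 ms
```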

  2. Spatial hearing ability of the pigmented Guinea pig (Cavia porcellus): Minimum audible angle and spatial release from masking in azimuth.

    PubMed

    Greene, Nathaniel T; Anbuhl, Kelsey L; Ferber, Alexander T; DeGuzman, Marisa; Allen, Paul D; Tollin, Daniel J

    2018-08-01

    Despite the common use of guinea pigs in investigations of the neural mechanisms of binaural and spatial hearing, their behavioral capabilities in spatial hearing tasks have surprisingly not been thoroughly investigated. To begin to fill this void, we tested the spatial hearing of adult male guinea pigs in several experiments using a paradigm based on the prepulse inhibition (PPI) of the acoustic startle response. In the first experiment, we presented continuous broadband noise from one speaker location and switched to a second speaker location (the "prepulse") along the azimuth prior to presenting a brief, ∼110 dB SPL startle-eliciting stimulus. We found that the startle response amplitude was systematically reduced for larger changes in speaker swap angle (i.e., greater PPI), indicating that using the speaker "swap" paradigm is sufficient to assess stimulus detection of spatially separated sounds. In a second set of experiments, we swapped low- and high-pass noise across the midline to estimate their ability to utilize interaural time- and level-difference cues, respectively. The results reveal that guinea pigs can utilize both binaural cues to discriminate azimuthal sound sources. A third set of experiments examined spatial release from masking using a continuous broadband noise masker and a broadband chirp signal, both presented concurrently at various speaker locations. In general, animals displayed an increase in startle amplitude (i.e., lower PPI) when the masker was presented at speaker locations near that of the chirp signal, and reduced startle amplitudes (increased PPI) indicating lower detection thresholds when the noise was presented from more distant speaker locations. In summary, these results indicate that guinea pigs can: 1) discriminate changes in source location within a hemifield as well as across the midline, 2) discriminate sources of low- and high-pass sounds, demonstrating that they can effectively utilize both low-frequency interaural time and high-frequency level difference sound localization cues, and 3) utilize spatial release from masking to discriminate sound sources. This report confirms the guinea pig as a suitable spatial hearing model and reinforces prior estimates of guinea pig hearing ability from acoustical and physiological measurements. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Enhanced auditory spatial localization in blind echolocators.

    PubMed

    Vercillo, Tiziana; Milne, Jennifer L; Gori, Monica; Goodale, Melvyn A

    2015-01-01

    Echolocation is the extraordinary ability to represent the external environment by using reflected sound waves from self-generated auditory pulses. Blind human expert echolocators show extremely precise spatial acuity and high accuracy in determining the shape and motion of objects by using echoes. In the current study, we investigated whether or not the use of echolocation would improve the representation of auditory space, which is severely compromised in congenitally blind individuals (Gori et al., 2014). The performance of three blind expert echolocators was compared to that of 6 blind non-echolocators and 11 sighted participants. Two tasks were performed: (1) a space bisection task in which participants judged whether the second of a sequence of three sounds was closer in space to the first or the third sound and (2) a minimum audible angle task in which participants reported which of two sounds presented successively was located more to the right. The blind non-echolocating group showed a severe impairment only in the space bisection task compared to the sighted group. Remarkably, the three blind expert echolocators performed both spatial tasks with similar or even better precision and accuracy than the sighted group. These results suggest that echolocation may improve the general sense of auditory space, most likely through a process of sensory calibration. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Multiple sound source localization using gammatone auditory filtering and direct sound componence detection

    NASA Astrophysics Data System (ADS)

    Chen, Huaiyu; Cao, Li

    2017-06-01

    To study multiple-sound-source localization under room reverberation and background noise, we analyze the shortcomings of the traditional broadband MUSIC method and of broadband MUSIC based on ordinary auditory filtering, and then propose a new broadband MUSIC algorithm that combines gammatone auditory filtering with frequency-component selection control and detection of the ascending segment of the direct sound component. The proposed algorithm restricts processing to frequency components within the band of interest at the multichannel bandpass-filtering stage. Detecting the direct sound component of each source suppresses room-reverberation interference; the merits of this step are fast computation and the avoidance of more complex dereverberation algorithms. In addition, the pseudo-spectra of the different frequency channels are weighted by their maximum amplitudes in every speech frame. Experiments in both simulated and real reverberant rooms show that the proposed method performs well. Dynamic multiple-sound-source localization experiments indicate that the proposed algorithm yields a smaller average absolute azimuth error and higher angular resolution in the histogram results.
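
    The core of any MUSIC variant is a pseudo-spectrum built from the noise subspace of the spatial covariance matrix; the method above evaluates this per gammatone channel and weights each channel's pseudo-spectrum by its frame amplitude. A narrowband sketch for a linear microphone array (illustrative only; the array geometry and variable names are assumptions, not the authors' code):

        import numpy as np

        def music_spectrum(X, n_src, mic_pos, freq, c=343.0,
                           angles=np.linspace(-90, 90, 361)):
            # X: (n_mics, n_snapshots) complex STFT snapshots at one frequency bin
            R = X @ X.conj().T / X.shape[1]            # spatial covariance estimate
            w, V = np.linalg.eigh(R)                   # eigenvalues in ascending order
            En = V[:, : X.shape[0] - n_src]            # noise-subspace eigenvectors
            spec = np.empty(angles.size)
            for i, ang in enumerate(np.deg2rad(angles)):
                # Far-field steering vector for the candidate direction
                a = np.exp(-2j * np.pi * freq * mic_pos * np.sin(ang) / c)
                spec[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
            return angles, spec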

  5. Hybrid local piezoelectric and conductive functions for high performance airborne sound absorption

    NASA Astrophysics Data System (ADS)

    Rahimabady, Mojtaba; Statharas, Eleftherios Christos; Yao, Kui; Sharifzadeh Mirshekarloo, Meysam; Chen, Shuting; Tay, Francis Eng Hock

    2017-12-01

    A concept of hybrid local piezoelectric and electrical conductive functions for improving airborne sound absorption is proposed and demonstrated in a composite foam made of porous polar polyvinylidene fluoride (PVDF) mixed with conductive single-walled carbon nanotubes (SWCNTs). According to our hybrid material function design, the local piezoelectric effect in the polar PVDF matrix and the electrical resistive loss of the SWCNTs enhance the conversion of sound energy to electrical energy and of electrical energy to thermal energy, respectively, in addition to the other known sound absorption mechanisms in a porous material. It is found that the overall energy conversion, and hence the sound absorption performance, is maximized when the SWCNT concentration is near the conductivity percolation threshold. For the optimal composition of PVDF/5 wt. % SWCNT, a noise reduction coefficient larger than 0.58 was obtained, with a sound absorption coefficient above 50% at 600 Hz, demonstrating great value for passive noise mitigation even at low frequencies.

  6. Open-Fit Domes and Children with Bilateral High-Frequency Sensorineural Hearing Loss: Benefits and Outcomes.

    PubMed

    Johnstone, Patti M; Yeager, Kelly R; Pomeroy, Marnie L; Hawk, Nicole

    2018-04-01

    Open-fit domes (OFDs) coupled with behind-the-ear (BTE) hearing aids were designed for adult listeners with moderate-to-severe bilateral high-frequency hearing loss (BHFL) with little to no concurrent loss in the lower frequencies. Adult research shows that BHFL degrades sound localization accuracy (SLA) and that BTE hearing aids with conventional earmolds (CEs) make matters worse. In contrast, research has shown that OFDs enhance spatial hearing percepts in adults with BHFL. Although the benefits of OFDs have been studied in adults with BHFL, no published studies to date have investigated the use of OFDs in children with the same hearing loss configuration. This study used SLA measurements to assess the efficacy of bilateral OFDs in children with BHFL, and specifically to determine the extent to which hearing loss, age, duration of CE use, and OFDs affect localization accuracy. A within-participant design with repeated measures was used to determine the effect of OFDs on localization accuracy in children with BHFL; a between-participant design was used to compare localization accuracy between children with BHFL and age-matched controls with normal hearing (NH). Participants were 18 children with BHFL who used CEs and 18 age-matched NH controls. Children in both groups were divided into two age groups: older children (10-16 yr) and younger children (6-9 yr). All testing was done in a sound-treated booth with a horizontal array of 15 loudspeakers (radius of 1 m). The stimulus was a spondee word, "baseball"; the level averaged 60 dB SPL and randomly roved (±8 dB). Each child was asked to identify the location of the sound source, and localization error was calculated across the loudspeaker array for each listening condition. A significant interaction was found between immediate benefit from OFDs and duration of CE use: longer CE use was associated with degraded localization accuracy using OFDs. Regardless of chronological age, children who had used CEs for <6 yr showed immediate localization benefit with OFDs, whereas children who had used CEs for >6 yr showed immediate localization interference with OFDs. Development, however, may play a role in SLA in children with BHFL: when unaided, older children had significantly better localization acuity than younger children with BHFL. Compared with age-matched controls, children with BHFL of all ages showed greater localization error. Nearly all (94% [17/18]) children with BHFL spontaneously reported immediate own-voice improvement when using OFDs. OFDs can provide sound localization benefit to younger children with BHFL, but immediate benefit is reduced by prolonged use of CEs. Although developmental factors may improve localization abilities over time, children with BHFL will rarely match the localization abilities of their peers without early use of minimally disruptive hearing aid technology. Also, the occlusion effect likely impacts children far more than currently thought. American Academy of Audiology.
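
    Localization error of the kind reported here is typically summarized as a root-mean-square deviation between response and source azimuths across the loudspeaker arc. A minimal sketch with made-up responses (the study's exact scoring formula is not given in the abstract):

        import numpy as np

        # Hypothetical trials: true loudspeaker azimuths and one child's responses (deg)
        true_az = np.array([-70, -40, -10, 0, 10, 40, 70], dtype=float)
        resp_az = np.array([-60, -45, -20, 5, 10, 50, 60], dtype=float)

        rms_error = np.sqrt(np.mean((resp_az - true_az) ** 2))
        print(f"RMS localization error: {rms_error:.1f} deg")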

  7. Earth Global Reference Atmospheric Model (GRAM) Overview and Updates: DOLWG Meeting

    NASA Technical Reports Server (NTRS)

    White, Patrick

    2017-01-01

    What is Earth-GRAM (Global Reference Atmospheric Model): it provides monthly means and standard deviations for any point in the atmosphere, capturing monthly, geographic, and altitude variation. Earth-GRAM is a C++ software package, currently distributed as Earth-GRAM 2016. Atmospheric variables include pressure, density, temperature, horizontal and vertical winds, speed of sound, and atmospheric constituents. It is used by the engineering community for its ability to create atmospheric dispersions at rapid runtime and is often embedded in trajectory simulation software. It is not a forecast model and does not readily capture localized atmospheric effects.
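
    To illustrate how mean-plus-standard-deviation output supports dispersion analyses, here is a toy Monte Carlo sketch. The numbers and the independent-Gaussian draw are illustrative assumptions only; Earth-GRAM itself uses a correlated perturbation model and a different interface:

        import numpy as np

        # Hypothetical monthly statistics for density at one altitude/location
        rho_mean, rho_sd = 1.112e-1, 4.0e-3        # kg/m^3, illustrative values

        rng = np.random.default_rng(42)
        rho_samples = rng.normal(rho_mean, rho_sd, 2000)   # dispersion cases

        # A trajectory simulation would draw one perturbed profile per case;
        # here we only report approximate +/-3-sigma dispersion bounds.
        lo, hi = np.percentile(rho_samples, [0.15, 99.85])
        print(f"density dispersion bounds: [{lo:.4e}, {hi:.4e}] kg/m^3")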

  8. Calibration of the R/V Marcus G. Langseth Seismic Array in shallow Cascadia waters using the Multi-Channel Streamer

    NASA Astrophysics Data System (ADS)

    Crone, T. J.; Tolstoy, M.; Carton, H. D.

    2013-12-01

    In the summer of 2012, two multi-channel seismic (MCS) experiments, Cascadia Open-Access Seismic Transects (COAST) and Ridge2Trench, were conducted in the offshore Cascadia region. An area of growing environmental concern with active-source seismic experiments is the potential impact of the received sound on marine mammals, but data relating to this issue are limited. For these surveys, sound-level 'mitigation radii' are established for the protection of marine mammals, based on direct-arrival modeling and previous calibration experiments. Propagation of sound from seismic arrays can be accurately modeled in deep-water environments, but in shallow and sloped environments the complexity of local geology and bathymetry can make it difficult to predict sound levels as a function of distance from the source array. One potential solution to this problem is to measure the received levels in real time using the ship's streamer (Diebold et al., 2010), which would allow the dynamic determination of suitable mitigation radii. We analyzed R/V Langseth streamer data collected on the shelf and slope off the Washington coast during the COAST experiment to measure received levels in situ up to 8 km away from the ship. Our analysis shows that water depth and bathymetric features can affect received levels in shallow-water environments. The establishment of dynamic mitigation radii based on local conditions may help maximize the safety of marine mammals while also maximizing the ability of scientists to conduct seismic research. With increasing scientific and societal focus on subduction zone environments, a better understanding of shallow-water sound propagation is essential for allowing seismic exploration of these hazardous environments to continue. Diebold, J. M., M. Tolstoy, L. Doermann, S. Nooner, S. Webb, and T. J. Crone (2010), R/V Marcus G. Langseth Seismic Source: Modeling and Calibration, Geochemistry, Geophysics, Geosystems, 11, Q12012, doi:10.1029/2010GC003216.
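
    Received levels of the kind analyzed here reduce to windowed RMS sound pressure levels once streamer samples are converted to pascals with the hydrophone sensitivity. A minimal sketch (window length and the prior calibration step are assumptions):

        import numpy as np

        def received_level_db(p, fs, win_s=1.0, p_ref=1e-6):
            # RMS sound pressure level (dB re 1 uPa) in consecutive windows;
            # p must already be calibrated to pascals
            n = int(win_s * fs)
            frames = p[: (p.size // n) * n].reshape(-1, n)
            rms = np.sqrt(np.mean(frames ** 2, axis=1))
            return 20.0 * np.log10(rms / p_ref)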

  9. Unilateral Hearing Loss: Understanding Speech Recognition and Localization Variability - Implications for Cochlear Implant Candidacy

    PubMed Central

    Firszt, Jill B.; Reeder, Ruth M.; Holden, Laura K.

    2016-01-01

    Objectives At a minimum, unilateral hearing loss (UHL) impairs sound localization ability and understanding speech in noisy environments, particularly if the loss is severe to profound. Accompanying the numerous negative consequences of UHL is considerable unexplained individual variability in the magnitude of its effects. Identification of co-variables that affect outcome and contribute to variability in UHLs could augment counseling, treatment options, and rehabilitation. Cochlear implantation as a treatment for UHL is on the rise yet little is known about factors that could impact performance or whether there is a group at risk for poor cochlear implant outcomes when hearing is near-normal in one ear. The overall goal of our research is to investigate the range and source of variability in speech recognition in noise and localization among individuals with severe to profound UHL and thereby help determine factors relevant to decisions regarding cochlear implantation in this population. Design The present study evaluated adults with severe to profound UHL and adults with bilateral normal hearing. Measures included adaptive sentence understanding in diffuse restaurant noise, localization, roving-source speech recognition (words from 1 of 15 speakers in a 140° arc) and an adaptive speech-reception threshold psychoacoustic task with varied noise types and noise-source locations. There were three age-gender-matched groups: UHL (severe to profound hearing loss in one ear and normal hearing in the contralateral ear), normal hearing listening bilaterally, and normal hearing listening unilaterally. Results Although the normal-hearing-bilateral group scored significantly better and had less performance variability than UHLs on all measures, some UHL participants scored within the range of the normal-hearing-bilateral group on all measures. The normal-hearing participants listening unilaterally had better monosyllabic word understanding than UHLs for words presented on the blocked/deaf side but not the open/hearing side. In contrast, UHLs localized better than the normal hearing unilateral listeners for stimuli on the open/hearing side but not the blocked/deaf side. This suggests that UHLs had learned strategies for improved localization on the side of the intact ear. The UHL and unilateral normal hearing participant groups were not significantly different for speech-in-noise measures. UHL participants with childhood rather than recent hearing loss onset localized significantly better; however, these two groups did not differ for speech recognition in noise. Age at onset in UHL adults appears to affect localization ability differently than understanding speech in noise. Hearing thresholds were significantly correlated with speech recognition for UHL participants but not the other two groups. Conclusions Auditory abilities of UHLs varied widely and could be explained only in part by hearing threshold levels. Age at onset and length of hearing loss influenced performance on some, but not all measures. Results support the need for a revised and diverse set of clinical measures, including sound localization, understanding speech in varied environments and careful consideration of functional abilities as individuals with severe to profound UHL are being considered potential cochlear implant candidates. PMID:28067750

  10. Recognition of Frequency Modulated Whistle-Like Sounds by a Bottlenose Dolphin (Tursiops truncatus) and Humans with Transformations in Amplitude, Duration and Frequency.

    PubMed

    Branstetter, Brian K; DeLong, Caroline M; Dziedzic, Brandon; Black, Amy; Bakhtiari, Kimberly

    2016-01-01

    Bottlenose dolphins (Tursiops truncatus) use the frequency contour of whistles produced by conspecifics for individual recognition. Here we tested a bottlenose dolphin's (Tursiops truncatus) ability to recognize frequency modulated whistle-like sounds using a three alternative matching-to-sample paradigm. The dolphin was first trained to select a specific object (object A) in response to a specific sound (sound A) for a total of three object-sound associations. The sounds were then transformed by amplitude, duration, or frequency transposition while still preserving the frequency contour of each sound. For comparison purposes, 30 human participants completed an identical task with the same sounds, objects, and training procedure. The dolphin's ability to correctly match objects to sounds was robust to changes in amplitude with only a minor decrement in performance for short durations. The dolphin failed to recognize sounds that were frequency transposed by plus or minus ½ octaves. Human participants demonstrated robust recognition with all acoustic transformations. The results indicate that this dolphin's acoustic recognition of whistle-like sounds was constrained by absolute pitch. Unlike human speech, which varies considerably in average frequency, signature whistles are relatively stable in frequency, which may have selected for a whistle recognition system invariant to frequency transposition.
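
    Frequency transposition of the kind tested here multiplies every point of the whistle's frequency contour by a constant (2**0.5 for +1/2 octave), which shifts the contour without changing its shape on a log-frequency axis. An illustrative stimulus sketch (not the study's actual stimuli):

        import numpy as np

        def synth_whistle(f_contour, fs):
            # Synthesize a tonal whistle from an instantaneous-frequency contour (Hz)
            phase = 2.0 * np.pi * np.cumsum(f_contour) / fs
            return np.sin(phase)

        fs = 44_100
        t = np.linspace(0.0, 0.5, int(0.5 * fs), endpoint=False)
        contour = 8_000 + 2_000 * np.sin(2 * np.pi * 3 * t)    # FM sweep around 8 kHz

        original = synth_whistle(contour, fs)
        up_half_octave = synth_whistle(contour * 2 ** 0.5, fs)     # +1/2 octave
        down_half_octave = synth_whistle(contour * 2 ** -0.5, fs)  # -1/2 octave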

  11. Recognition of Frequency Modulated Whistle-Like Sounds by a Bottlenose Dolphin (Tursiops truncatus) and Humans with Transformations in Amplitude, Duration and Frequency

    PubMed Central

    Branstetter, Brian K.; DeLong, Caroline M.; Dziedzic, Brandon; Black, Amy; Bakhtiari, Kimberly

    2016-01-01

    Bottlenose dolphins (Tursiops truncatus) use the frequency contour of whistles produced by conspecifics for individual recognition. Here we tested a bottlenose dolphin’s (Tursiops truncatus) ability to recognize frequency modulated whistle-like sounds using a three alternative matching-to-sample paradigm. The dolphin was first trained to select a specific object (object A) in response to a specific sound (sound A) for a total of three object-sound associations. The sounds were then transformed by amplitude, duration, or frequency transposition while still preserving the frequency contour of each sound. For comparison purposes, 30 human participants completed an identical task with the same sounds, objects, and training procedure. The dolphin’s ability to correctly match objects to sounds was robust to changes in amplitude with only a minor decrement in performance for short durations. The dolphin failed to recognize sounds that were frequency transposed by plus or minus ½ octaves. Human participants demonstrated robust recognition with all acoustic transformations. The results indicate that this dolphin’s acoustic recognition of whistle-like sounds was constrained by absolute pitch. Unlike human speech, which varies considerably in average frequency, signature whistles are relatively stable in frequency, which may have selected for a whistle recognition system invariant to frequency transposition. PMID:26863519

  12. Shaking things up: Young infants' use of sound information for object individuation

    PubMed Central

    Wilcox, Teresa

    2013-01-01

    A search task was used to assess 5- to 7-month-olds' ability to use property-rich sounds to individuate objects. Results suggest that infants interpret an occlusion event involving two distinct rattle sounds as involving two objects but are unsure of how to interpret two identical rattle sounds. PMID:22306182

  13. Letter-Sound Reading: Teaching Preschool Children Print-to-Sound Processing

    ERIC Educational Resources Information Center

    Wolf, Gail Marie

    2016-01-01

    This intervention study investigated the growth of letter sound reading and growth of consonant-vowel-consonant (CVC) word decoding abilities for a representative sample of 41 US children in preschool settings. Specifically, the study evaluated the effectiveness of a 3-step letter-sound teaching intervention in teaching preschool children to…

  14. Reverberation impairs brainstem temporal representations of voiced vowel sounds: challenging “periodicity-tagged” segregation of competing speech in rooms

    PubMed Central

    Sayles, Mark; Stasiak, Arkadiusz; Winter, Ian M.

    2015-01-01

    The auditory system typically processes information from concurrently active sound sources (e.g., two voices speaking at once), in the presence of multiple delayed, attenuated and distorted sound-wave reflections (reverberation). Brainstem circuits help segregate these complex acoustic mixtures into “auditory objects.” Psychophysical studies demonstrate a strong interaction between reverberation and fundamental-frequency (F0) modulation, leading to impaired segregation of competing vowels when segregation is on the basis of F0 differences. Neurophysiological studies of complex-sound segregation have concentrated on sounds with steady F0s, in anechoic environments. However, F0 modulation and reverberation are quasi-ubiquitous. We examine the ability of 129 single units in the ventral cochlear nucleus (VCN) of the anesthetized guinea pig to segregate the concurrent synthetic vowel sounds /a/ and /i/, based on temporal discharge patterns under closed-field conditions. We address the effects of added real-room reverberation, F0 modulation, and the interaction of these two factors, on brainstem neural segregation of voiced speech sounds. A firing-rate representation of single-vowels' spectral envelopes is robust to the combination of F0 modulation and reverberation: local firing-rate maxima and minima across the tonotopic array code vowel-formant structure. However, single-vowel F0-related periodicity information in shuffled inter-spike interval distributions is significantly degraded in the combined presence of reverberation and F0 modulation. Hence, segregation of double-vowels' spectral energy into two streams (corresponding to the two vowels), on the basis of temporal discharge patterns, is impaired by reverberation; specifically when F0 is modulated. All unit types (primary-like, chopper, onset) are similarly affected. These results offer neurophysiological insights to perceptual organization of complex acoustic scenes under realistically challenging listening conditions. PMID:25628545
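
    The shuffled inter-spike interval analysis mentioned above computes all-order intervals across repeated presentations rather than within a single spike train, which removes intrinsic refractory structure. A simplified sketch with synthetic spike times (normalization and windowing omitted):

        import numpy as np
        from itertools import combinations

        def shuffled_isi_histogram(trains, max_interval=0.02, bin_w=5e-5):
            # All-order intervals between spikes of *different* trials
            edges = np.arange(0.0, max_interval + bin_w, bin_w)
            hist = np.zeros(edges.size - 1)
            for a, b in combinations(range(len(trains)), 2):
                d = np.abs(trains[a][:, None] - trains[b][None, :]).ravel()
                hist += np.histogram(d, bins=edges)[0]
            return edges[:-1], hist

        rng = np.random.default_rng(1)
        trains = [np.sort(rng.uniform(0.0, 0.4, rng.poisson(40))) for _ in range(5)]
        lags, h = shuffled_isi_histogram(trains)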

  15. The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl

    PubMed Central

    Baxter, Caitlin S.; Takahashi, Terry T.

    2013-01-01

    Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643–655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map. PMID:23175801

  16. Seasonal and ontogenetic changes in movement patterns of sixgill sharks.

    PubMed

    Andrews, Kelly S; Williams, Greg D; Levin, Phillip S

    2010-09-08

    Understanding movement patterns is fundamental to population and conservation biology. The way an animal moves through its environment influences the dynamics of local populations and will determine how susceptible it is to natural or anthropogenic perturbations. It is of particular interest to understand the patterns of movement for species which are susceptible to human activities (e.g. fishing), or that exert a large influence on community structure, such as sharks. We monitored the patterns of movement of 34 sixgill sharks Hexanchus griseus using two large-scale acoustic arrays inside and outside Puget Sound, Washington, USA. Sixgill sharks were residents in Puget Sound for up to at least four years before making large movements out of the estuary. Within Puget Sound, sixgills inhabited sites for several weeks at a time and returned to the same sites annually. Across four years, sixgills had consistent seasonal movements in which they moved to the north from winter to spring and moved to the south from summer to fall. Just prior to leaving Puget Sound, sixgills altered their behavior and moved twice as fast among sites. Nineteen of the thirty-four sixgills were detected leaving Puget Sound for the outer coast. Three of these sharks returned to Puget Sound. For most large marine predators, we have a limited understanding of how they move through their environment, and this clouds our ability to successfully manage their populations and their communities. With detailed movement information, such as that being uncovered with acoustic monitoring, we can begin to quantify the spatial and temporal impacts of large predators within the framework of their ecosystems.

  17. Cortical and Sensory Causes of Individual Differences in Selective Attention Ability Among Listeners With Normal Hearing Thresholds

    PubMed Central

    2017-01-01

    Purpose This review provides clinicians with an overview of recent findings relevant to understanding why listeners with normal hearing thresholds (NHTs) sometimes suffer from communication difficulties in noisy settings. Method The results from neuroscience and psychoacoustics are reviewed. Results In noisy settings, listeners focus their attention by engaging cortical brain networks to suppress unimportant sounds; they then can analyze and understand an important sound, such as speech, amidst competing sounds. Differences in the efficacy of top-down control of attention can affect communication abilities. In addition, subclinical deficits in sensory fidelity can disrupt the ability to perceptually segregate sound sources, interfering with selective attention, even in listeners with NHTs. Studies of variability in control of attention and in sensory coding fidelity may help to isolate and identify some of the causes of communication disorders in individuals presenting at the clinic with “normal hearing.” Conclusions How well an individual with NHTs can understand speech amidst competing sounds depends not only on the sound being audible but also on the integrity of cortical control networks and the fidelity of the representation of suprathreshold sound. Understanding the root cause of difficulties experienced by listeners with NHTs ultimately can lead to new, targeted interventions that address specific deficits affecting communication in noise. Presentation Video http://cred.pubs.asha.org/article.aspx?articleid=2601617 PMID:29049598

  18. Dynamic sound localization in cats

    PubMed Central

    Ruhland, Janet L.; Jones, Amy E.

    2015-01-01

    Sound localization in cats and humans relies on head-centered acoustic cues. Studies have shown that humans are able to localize sounds during rapid head movements that are directed toward the target or other objects of interest. We studied whether cats are able to utilize similar dynamic acoustic cues to localize acoustic targets delivered during rapid eye-head gaze shifts. We trained cats with visual-auditory two-step tasks in which we presented a brief sound burst during saccadic eye-head gaze shifts toward a prior visual target. No consistent or significant differences in accuracy or precision were found between this dynamic task (2-step saccade) and the comparable static task (single saccade when the head is stable) in either horizontal or vertical direction. Cats appear to be able to process dynamic auditory cues and execute complex motor adjustments to accurately localize auditory targets during rapid eye-head gaze shifts. PMID:26063772

  19. The Influence of Environmental Sound Training on the Perception of Spectrally Degraded Speech and Environmental Sounds

    PubMed Central

    Sheft, Stanley; Gygi, Brian; Ho, Kim Thien N.

    2012-01-01

    Perceptual training with spectrally degraded environmental sounds results in improved environmental sound identification, with benefits shown to extend to untrained speech perception as well. The present study extended those findings to examine longer-term training effects as well as effects of mere repeated exposure to sounds over time. Participants received two pretests (1 week apart) prior to a week-long environmental sound training regimen, which was followed by two posttest sessions, separated by another week without training. Spectrally degraded stimuli, processed with a four-channel vocoder, consisted of a 160-item environmental sound test, word and sentence tests, and a battery of basic auditory abilities and cognitive tests. Results indicated significant improvements in all speech and environmental sound scores between the initial pretest and the last posttest with performance increments following both exposure and training. For environmental sounds (the stimulus class that was trained), the magnitude of positive change that accompanied training was much greater than that due to exposure alone, with improvement for untrained sounds roughly comparable to the speech benefit from exposure. Additional tests of auditory and cognitive abilities showed that speech and environmental sound performance were differentially correlated with tests of spectral and temporal-fine-structure processing, whereas working memory and executive function were correlated with speech, but not environmental sound perception. These findings indicate generalizability of environmental sound training and provide a basis for implementing environmental sound training programs for cochlear implant (CI) patients. PMID:22891070
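
    A noise vocoder like the four-channel processor used here splits the signal into frequency bands, extracts each band's amplitude envelope, and re-imposes it on band-limited noise carriers. A minimal sketch (band edges and filter orders are assumptions; envelope smoothing is omitted):

        import numpy as np
        from scipy.signal import butter, sosfilt, hilbert

        def noise_vocode(x, fs, n_ch=4, lo=100.0, hi=8000.0):
            edges = np.geomspace(lo, hi, n_ch + 1)        # log-spaced band edges
            rng = np.random.default_rng(0)
            out = np.zeros(len(x))
            for f1, f2 in zip(edges[:-1], edges[1:]):
                sos = butter(4, [f1, f2], btype="band", fs=fs, output="sos")
                band = sosfilt(sos, x)
                env = np.abs(hilbert(band))               # amplitude envelope
                carrier = sosfilt(sos, rng.standard_normal(len(x)))
                out += env * carrier
            return out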

  20. Sound production in Onuxodon fowleri (Carapidae) and its amplification by the host shell.

    PubMed

    Kéver, Loïc; Colleye, Orphal; Lugli, Marco; Lecchini, David; Lerouvreur, Franck; Herrel, Anthony; Parmentier, Eric

    2014-12-15

    Onuxodon species are well known for living inside pearl oysters. As in other carapids, their anatomy highlights their ability to make sounds but sound production has never been documented in Onuxodon. This paper describes sound production in Onuxodon fowleri as well as the anatomy of the sound production apparatus. Single-pulsed sounds and multiple-pulsed sounds that sometimes last more than 3 s were recorded in the field and in captivity (Makemo Island, French Polynesia). These pulses are characterized by a broadband frequency spectrum from 100 to 1000 Hz. Onuxodon fowleri is mainly characterized by its ability to modulate the pulse period, meaning that this species can produce pulsed sounds and tonal-like sounds using the same mechanism. In addition, the sound can be remarkably amplified by the shell cavity (peak gain can exceed 10 dB for some frequencies). The sonic apparatus of O. fowleri is characterized by a rocker bone in front of the swimbladder, modified vertebrae and epineurals, and two pairs of sonic muscles, one of which (primary sonic muscle) inserts on the rocker bone. The latter structure, which is absent in other carapid genera, appears to be sexually dimorphic suggesting differences in sound production in males and females. Sound production in O. fowleri could be an example of adaptation where an animal exploits features of its environment to enhance communication. © 2014. Published by The Company of Biologists Ltd.

  1. Evolution of directional hearing in moths via conversion of bat detection devices to asymmetric pressure gradient receivers

    PubMed Central

    Reid, Andrew; Marin-Cudraz, Thibaut

    2016-01-01

    Small animals typically localize sound sources by means of complex internal connections and baffles that effectively increase time or intensity differences between the two ears. However, some miniature acoustic species achieve directional hearing without such devices, indicating that other mechanisms have evolved. Using 3D laser vibrometry to measure tympanum deflection, we show that female lesser waxmoths (Achroia grisella) can orient toward the 100-kHz male song, because each ear functions independently as an asymmetric pressure gradient receiver that responds sharply to high-frequency sound arriving from an azimuth angle 30° contralateral to the animal's midline. We found that females presented with a song stimulus while running on a locomotion compensation sphere follow a trajectory 20°–40° to the left or right of the stimulus heading but not directly toward it, movement consistent with the tympanum deflections and suggestive of a monaural mechanism of auditory tracking. Moreover, females losing their track typically regain it by auditory scanning—sudden, wide deviations in their heading—and females initially facing away from the stimulus quickly change their general heading toward it, orientation indicating superior ability to resolve the front–rear ambiguity in source location. X-ray computer-aided tomography (CT) scans of the moths did not reveal any internal coupling between the two ears, confirming that an acoustic insect can localize a sound source based solely on the distinct features of each ear. PMID:27849607

  2. Characteristic sounds facilitate visual search.

    PubMed

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  3. Spherical loudspeaker array for local active control of sound.

    PubMed

    Rafaely, Boaz

    2009-05-01

    Active control of sound has been employed to reduce noise levels around listeners' head using destructive interference from noise-canceling sound sources. Recently, spherical loudspeaker arrays have been studied as multiple-channel sound sources, capable of generating sound fields with high complexity. In this paper, the potential use of a spherical loudspeaker array for local active control of sound is investigated. A theoretical analysis of the primary and secondary sound fields around a spherical sound source reveals that the natural quiet zones for the spherical source have a shell-shape. Using numerical optimization, quiet zones with other shapes are designed, showing potential for quiet zones with extents that are significantly larger than the well-known limit of a tenth of a wavelength for monopole sources. The paper presents several simulation examples showing quiet zones in various configurations.
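
    The numerical-optimization step described above can be reduced, at a single frequency, to a least-squares problem: choose complex driving signals for the array that cancel the primary field at a set of control points. A toy sketch in which random transfer functions stand in for the paper's spherical-source model:

        import numpy as np

        rng = np.random.default_rng(3)
        n_mics, n_src = 16, 8                      # illustrative sizes
        # G[m, l]: transfer function from array driver l to control point m
        G = rng.standard_normal((n_mics, n_src)) + 1j * rng.standard_normal((n_mics, n_src))
        d = rng.standard_normal(n_mics) + 1j * rng.standard_normal(n_mics)  # primary field

        # Least-squares driving signals q minimizing ||d + G q||^2
        q, *_ = np.linalg.lstsq(G, -d, rcond=None)

        residual = d + G @ q
        print("attenuation:",
              20 * np.log10(np.linalg.norm(residual) / np.linalg.norm(d)), "dB")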

  4. Auditory Brainstem Response to Complex Sounds Predicts Self-Reported Speech-in-Noise Performance

    ERIC Educational Resources Information Center

    Anderson, Samira; Parbery-Clark, Alexandra; White-Schwoch, Travis; Kraus, Nina

    2013-01-01

    Purpose: To compare the ability of the auditory brainstem response to complex sounds (cABR) to predict subjective ratings of speech understanding in noise on the Speech, Spatial, and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004) relative to the predictive ability of the Quick Speech-in-Noise test (QuickSIN; Killion, Niquette,…

  5. Loss of urban forest canopy and the related effects on soundscape and human directed attention

    NASA Astrophysics Data System (ADS)

    Laverne, Robert James Paul

    The specific questions addressed in this research are: Will the loss of trees in residential neighborhoods result in a change to the local soundscape? This question leads to a related inquiry: Do the sounds of the environment in which a person is present affect that person's directed attention? An invasive insect pest, the Emerald Ash Borer (Agrilus planipennis), is killing millions of ash trees (genus Fraxinus) throughout North America. As tree canopy is lost, urban ecosystems change (higher summer temperatures, more stormwater runoff, and poorer air quality), with associated changes to human physical and mental health. Previous studies suggest that conditions in urban environments can result in chronic stress in humans and fatigue of directed attention, the ability to focus on tasks and pay attention. Access to nature in cities can help refresh directed attention: the sights and sounds associated with parks, open spaces, and trees can serve as beneficial counterbalances to the irritating conditions associated with cities. This research examines changes to the quantity and quality of sounds in Arlington Heights, Illinois. A series of before-and-after sound recordings was gathered as trees died and were removed between 2013 and 2015. Comparison of recordings using the Raven sound analysis program revealed significant differences in some, but not all, measures of sound attributes as tree canopy decreased. In general, more human-produced mechanical sound (anthrophony) and fewer sounds associated with weather (geophony) were detected; changes in sounds associated with animals (biophony) varied seasonally. Monitoring changes in the proportions of anthrophony, biophony, and geophony can provide insight into changes in biodiversity, environmental health, and quality of life for humans. The before-tree-removal and after-tree-removal sound recordings served as the independent variable for randomly assigned human volunteers as they performed the Stroop Test and the Necker Cube Pattern Control test, both measures of directed attention. The sound treatments were not found to have significant effects on the directed-attention test scores. Future research is needed to investigate the characteristics of urban soundscapes that are detrimental, or potentially conducive, to human cognitive functioning.

  6. Input-Specific Gain Modulation by Local Sensory Context Shapes Cortical and Thalamic Responses to Complex Sounds.

    PubMed

    Williamson, Ross S; Ahrens, Misha B; Linden, Jennifer F; Sahani, Maneesh

    2016-07-20

    Sensory neurons are customarily characterized by one or more linearly weighted receptive fields describing sensitivity in sensory space and time. We show that in auditory cortical and thalamic neurons, the weight of each receptive field element depends on the pattern of sound falling within a local neighborhood surrounding it in time and frequency. Accounting for this change in effective receptive field with spectrotemporal context improves predictions of both cortical and thalamic responses to stationary complex sounds. Although context dependence varies among neurons and across brain areas, there are strong shared qualitative characteristics. In a spectrotemporally rich soundscape, sound elements modulate neuronal responsiveness more effectively when they coincide with sounds at other frequencies, and less effectively when they are preceded by sounds at similar frequencies. This local-context-driven lability in the representation of complex sounds-a modulation of "input-specific gain" rather than "output gain"-may be a widespread motif in sensory processing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Underwater auditory localization by a swimming harbor seal (Phoca vitulina).

    PubMed

    Bodson, Anais; Miersch, Lars; Mauck, Bjoern; Dehnhardt, Guido

    2006-09-01

    The underwater sound localization acuity of a swimming harbor seal (Phoca vitulina) was measured in the horizontal plane at 13 different positions. The stimulus was either a double sound (two 6-kHz pure tones lasting 0.5 s separated by an interval of 0.2 s) or a single continuous sound of 1.2 s. Testing was conducted in a 10-m-diam underwater half-circle arena with hidden loudspeakers installed at the exterior perimeter. The animal was trained to swim along the diameter of the half circle and to change its course toward the sound source as soon as the signal was given. The seal indicated the sound source by touching its assumed position at the board of the half circle. The deviation of the seal's choice from the actual sound source was measured by means of video analysis. In trials with the double sound the seal localized the sound sources with a mean deviation of 2.8 degrees, and in trials with the single sound with a mean deviation of 4.5 degrees. In a second experiment, minimum audible angles for the stationary animal were found to be 9.8 degrees in front of and 9.7 degrees behind the seal's head.

  8. Effects of Temperature on Sound Production and Auditory Abilities in the Striped Raphael Catfish Platydoras armatulus (Family Doradidae)

    PubMed Central

    Papes, Sandra; Ladich, Friedrich

    2011-01-01

    Background Sound production and hearing sensitivity of ectothermic animals are affected by the ambient temperature. This is the first study investigating the influence of temperature on both sound production and on hearing abilities in a fish species, namely the neotropical Striped Raphael catfish Platydoras armatulus. Methodology/Principal Findings Doradid catfishes produce stridulation sounds by rubbing the pectoral spines in the shoulder girdle and drumming sounds by an elastic spring mechanism which vibrates the swimbladder. Eight fish were acclimated for at least three weeks to 22°, then to 30° and again to 22°C. Sounds were recorded in distress situations when fish were hand-held. The stridulation sounds became shorter at the higher temperature, whereas pulse number, maximum pulse period and sound pressure level did not change with temperature. The dominant frequency increased when the temperature was raised to 30°C and the minimum pulse period became longer when the temperature decreased again. The fundamental frequency of drumming sounds increased at the higher temperature. Using the auditory evoked potential (AEP) recording technique, the hearing thresholds were tested at six different frequencies from 0.1 to 4 kHz. The temporal resolution was determined by analyzing the minimum resolvable click period (0.3–5 ms). The hearing sensitivity was higher at the higher temperature and differences were more pronounced at higher frequencies. In general, latencies of AEPs in response to single clicks became shorter at the higher temperature, whereas temporal resolution in response to double-clicks did not change. Conclusions/Significance These data indicate that sound characteristics as well as hearing abilities are affected by temperatures in fishes. Constraints imposed on hearing sensitivity at different temperatures cannot be compensated even by longer acclimation periods. These changes in sound production and detection suggest that acoustic orientation and communication are affected by temperature changes in the neotropical catfish P. armatulus. PMID:22022618

  9. Evaluation of a localization training program for hearing impaired listeners.

    PubMed

    Kuk, Francis; Keenan, Denise M; Lau, Chi; Crose, Bryan; Schumacher, Jennifer

    2014-01-01

    To evaluate the effectiveness of a home-based and a laboratory-based localization training program. This study examined the effectiveness of a localization training program on improving the localization ability of 15 participants with a mild-to-moderately severe hearing loss. These participants had worn the study hearing aids in a previous study. The training consisted of laboratory-based training and home-based training. The participants were divided into three groups: a control group, a group that performed the laboratory training first followed by the home training, and a group that completed the home training first followed by the laboratory training. The participants were evaluated before any training (baseline) and at 2 weeks, 1 month, 2 months, and 3 months after baseline testing. All training was completed by the second month; the participants only wore the study hearing aids between the second and third months. Localization testing and laboratory training were conducted in a sound-treated room with a 360-degree, 12-loudspeaker array. There were three stimuli, each randomly presented three times from each loudspeaker (nine test items per loudspeaker), for a total of 108 items on each test or training trial. The stimuli, a continuous noise, a telephone ring, and the speech passage "Search for the sound from this speaker," were high-pass filtered above 2000 Hz. The test stimuli had a duration of 300 ms, whereas the training stimuli had five durations (3 s, 2 s, 1 s, 500 ms, and 300 ms) and four back-attenuation values (-8, -4, -2, and 0 dB re: front presentation). All stimuli were presented at 30 dB SL or at the participant's most comfortable listening level. Each participant completed six to eight 2-hr laboratory-based training sessions within a month. The home training required a two-loudspeaker computer system using 30 different sounds covering the duration (5) × attenuation (4) combinations. The participants were required to use the home training program for 30 min per day, 5 days per week, for 4 weeks. Localization data were evaluated using a 30-degree error criterion. There was a significant difference in localization scores for sounds that originated from the back between baseline and 3 months for the two groups that received training; the performance of the control group remained the same across the 3-month period. Generalization to other stimuli and to the unaided condition was also seen. There were no significant differences in localization performance from other directions between baseline and 3 months. These results indicate that the training program was effective in improving the localization skills of these listeners under the current test setup. The current study demonstrated that hearing aid wearers can be trained on their front/back localization skills using either a laboratory-based or a home-based training program, and that the effects of training generalized to other acoustic stimuli and to unaided conditions when the stimulus levels were fixed.
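
    Scoring against the 30-degree error criterion described here amounts to counting responses that fall within ±30° of the source on the circular array. A minimal sketch with simulated responses (illustrative only):

        import numpy as np

        def localization_score(resp_deg, target_deg, criterion=30.0):
            # Signed circular difference, then proportion within the criterion
            diff = (np.asarray(resp_deg) - np.asarray(target_deg) + 180.0) % 360.0 - 180.0
            return np.mean(np.abs(diff) <= criterion)

        # 12 loudspeakers at 30-degree spacing, nine items per loudspeaker
        targets = np.repeat(np.arange(0, 360, 30), 9).astype(float)
        responses = targets + np.random.default_rng(7).normal(0, 25, targets.size)
        print(f"proportion correct: {localization_score(responses, targets):.2f}")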

  10. Preschoolers' real-time coordination of vocal and facial emotional information.

    PubMed

    Berman, Jared M J; Chambers, Craig G; Graham, Susan A

    2016-02-01

    An eye-tracking methodology was used to examine the time course of 3- and 5-year-olds' ability to link speech bearing different acoustic cues to emotion (i.e., happy-sounding, neutral, and sad-sounding intonation) to photographs of faces reflecting different emotional expressions. Analyses of saccadic eye movement patterns indicated that, for both 3- and 5-year-olds, sad-sounding speech triggered gaze shifts to a matching (sad-looking) face from the earliest moments of speech processing. However, it was not until approximately 800 ms into a happy-sounding utterance that preschoolers began to use the emotional cues from speech to identify a matching (happy-looking) face. Complementary analyses based on conscious/controlled behaviors (children's explicit points toward the faces) indicated that 5-year-olds, but not 3-year-olds, could successfully match happy-sounding and sad-sounding vocal affect to a corresponding emotional face. Together, the findings clarify developmental patterns in preschoolers' implicit versus explicit ability to coordinate emotional cues across modalities and highlight preschoolers' greater sensitivity to sad-sounding speech as the auditory signal unfolds in time. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Neuromorphic audio-visual sensor fusion on a sound-localizing robot.

    PubMed

    Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André

    2012-01-01

    This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem and an experiment is conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset time. Despite the simplicity of this method and a large number of false visual events in the background, a correct match can be made 75% of the time during the experiment.
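
    In conventional (non-spiking) form, an ITD-based localizer like the one the robot learns can be sketched as a cross-correlation peak search mapped to azimuth; the spiking implementation and the visually supervised adaptation are beyond this illustration:

        import numpy as np

        def estimate_itd(left, right, fs, max_itd=800e-6):
            # ITD from the peak of the interaural cross-correlation (seconds);
            # negative values mean the left-ear signal leads (source to the left)
            max_lag = int(max_itd * fs)
            xc = np.correlate(left, right, mode="full")
            zero = right.size - 1                  # index of zero lag
            window = xc[zero - max_lag : zero + max_lag + 1]
            return (np.argmax(window) - max_lag) / fs

        def itd_to_azimuth(itd, ear_sep=0.14, c=343.0):
            # Simple sine model: itd = ear_sep * sin(azimuth) / c
            return np.degrees(np.arcsin(np.clip(itd * c / ear_sep, -1.0, 1.0)))

        # Example: noise reaching the left ear 10 samples before the right
        sig = np.random.default_rng(5).standard_normal(48_000)
        itd = estimate_itd(sig[10:], sig[:-10], fs=48_000)
        print(f"ITD = {itd * 1e6:.0f} us -> azimuth ~ {itd_to_azimuth(itd):.0f} deg")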

  12. Effects of musical training on sound pattern processing in high-school students.

    PubMed

    Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse

    2009-05-01

    Recognizing melody in music involves detection of both the pitch intervals and the silence between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed-order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited under different stimulus onset asynchrony (SOA) conditions in musicians than in non-musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than their age-matched peers. Musical training thus facilitates detection of auditory patterns, conferring the ability to automatically recognize sequential sound patterns over longer time periods than non-musical counterparts.

  13. Characteristic sounds facilitate visual search

    PubMed Central

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2009-01-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing “meow” did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds. PMID:18567253

  14. Sleep duration predicts behavioral and neural differences in adult speech sound learning.

    PubMed

    Earle, F Sayako; Landi, Nicole; Myers, Emily B

    2017-01-01

    Sleep is important for memory consolidation and contributes to the formation of new perceptual categories. This study examined sleep as a source of variability in typical learners' ability to form new speech sound categories. We trained monolingual English speakers to identify a set of non-native speech sounds at 8PM, and assessed their ability to identify and discriminate between these sounds immediately after training, and at 8AM on the following day. We tracked sleep duration overnight, and found that light sleep duration predicted gains in identification performance, while total sleep duration predicted gains in discrimination ability. Participants obtained an average of less than 6h of sleep, pointing to the degree of sleep deprivation as a potential factor. Behavioral measures were associated with ERP indexes of neural sensitivity to the learned contrast. These results demonstrate that the relative success in forming new perceptual categories depends on the duration of post-training sleep. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Sleep duration predicts behavioral and neural differences in adult speech sound learning

    PubMed Central

    Earle, F. Sayako; Landi, Nicole; Myers, Emily B.

    2016-01-01

    Sleep is important for memory consolidation and contributes to the formation of new perceptual categories. This study examined sleep as a source of variability in typical learners’ ability to form new speech sound categories. We trained monolingual English speakers to identify a set of non-native speech sounds at 8PM, and assessed their ability to identify and discriminate between these sounds immediately after training, and at 8AM on the following day. We tracked sleep duration overnight, and found that light sleep duration predicted gains in identification performance, while total sleep duration predicted gains in discrimination ability. Participants obtained an average of less than 6 hours of sleep, pointing to the degree of sleep deprivation as a potential factor. Behavioral measures were associated with ERP indexes of neural sensitivity to the learned contrast. These results demonstrate that the relative success in forming new perceptual categories depends on the duration of post-training sleep. PMID:27793703

  16. Effects of Bone Vibrator Position on Auditory Spatial Perception Tasks.

    PubMed

    McBride, Maranda; Tran, Phuong; Pollard, Kimberly A; Letowski, Tomasz; McMillan, Garnett P

    2015-12-01

    This study assessed listeners' ability to localize spatially differentiated virtual audio signals delivered by bone conduction (BC) vibrators and circumaural air conduction (AC) headphones. Although the skull offers little intracranial sound wave attenuation, previous studies have demonstrated listeners' ability to localize auditory signals delivered by a pair of BC vibrators coupled to the mandibular condyle bones. The current study extended this research to other BC vibrator locations on the skull. Each participant listened to virtual audio signals originating from 16 different horizontal locations using circumaural headphones or BC vibrators placed in front of, above, or behind the listener's ears. The listener's task was to indicate the signal's perceived direction of origin. Localization accuracy with the BC front and BC top positions was comparable to that with the headphones, but responses for the BC back position were less accurate than both the headphones and BC front position. This study supports the conclusion of previous studies that listeners can localize virtual 3D signals equally well using AC and BC transducers. Based on these results, it is apparent that BC devices could be substituted for AC headphones with little to no localization performance degradation. BC headphones can be used when spatial auditory information needs to be delivered without occluding the ears. Although vibrator placement in front of the ears appears optimal from the localization standpoint, the top or back position may be acceptable from an operational standpoint or if the BC system is integrated into headgear. © 2015, Human Factors and Ergonomics Society.

  17. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners With Bilateral and With Hearing-Preservation Cochlear Implants.

    PubMed

    Loiselle, Louise H; Dorman, Michael F; Yost, William A; Cook, Sarah J; Gifford, Rene H

    2016-08-01

    To assess the role of interaural time differences (ITDs) and interaural level differences (ILDs) in (a) sound-source localization and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Eleven bilateral listeners with MED-EL (Durham, NC) CIs and 8 listeners with hearing-preservation CIs with symmetrical low-frequency acoustic hearing using the MED-EL or Cochlear device were evaluated using 2 tests designed to task binaural hearing: localization and a simulated cocktail party. Access to interaural cues for localization was constrained by the use of low-pass, high-pass, and wideband noise stimuli. Sound-source localization accuracy for listeners with bilateral CIs in response to the high-pass noise stimulus and sound-source localization accuracy for the listeners with hearing-preservation CIs in response to the low-pass noise stimulus did not differ significantly. Speech understanding in a cocktail party listening environment improved for all listeners when interaural cues, either ITDs or ILDs, were available. The findings of the current study indicate that similar degrees of benefit to sound-source localization and speech understanding in complex listening environments are possible with 2 very different rehabilitation strategies: the provision of bilateral CIs and the preservation of hearing.
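
    Each of the two interaural cues contrasted here can be extracted simply: the ILD as a level difference in the high-frequency band where the head shadow is strongest, the ITD as a low-frequency cross-correlation lag (see the sketch in record 11 above). An ILD sketch (cutoff frequency and filter order are assumptions):

        import numpy as np
        from scipy.signal import butter, sosfilt

        def ild_db(left, right, fs, cutoff=1500.0):
            # Interaural level difference in dB above ~1.5 kHz;
            # positive values mean the left ear is louder
            sos = butter(4, cutoff, btype="high", fs=fs, output="sos")
            l_hp, r_hp = sosfilt(sos, left), sosfilt(sos, right)
            rms = lambda s: np.sqrt(np.mean(np.square(s)))
            return 20.0 * np.log10(rms(l_hp) / rms(r_hp))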

  18. Contributions of Morphological Awareness Skills to Word-Level Reading and Spelling in First-Grade Children with and without Speech Sound Disorder

    ERIC Educational Resources Information Center

    Apel, Kenn; Lawrence, Jessika

    2011-01-01

    Purpose: In this study, the authors compared the morphological awareness abilities of children with speech sound disorder (SSD) and children with typical speech skills and examined how morphological awareness ability predicted word-level reading and spelling performance above other known contributors to literacy development. Method: Eighty-eight…

  19. L-type calcium channels refine the neural population code of sound level

    PubMed Central

    Grimsley, Calum Alex; Green, David Brian

    2016-01-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536

  20. Sound Source Localization through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network.

    PubMed

    Beck, Christoph; Garreau, Guillaume; Georgiou, Julius

    2016-01-01

    Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They can detect wave amplitudes on the order of the size of an atom and, thanks to their neuronal anatomy, locate acoustic stimuli to an accuracy of within 13°. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows a smaller localization error than that observed in nature.
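
    The paper's spiking neural model is not reproduced here, but the geometric problem it solves can be illustrated conventionally. Below is a minimal, non-spiking sketch in which arrival-time differences across an eight-microphone circular array are fit to a plane-wave model; the array radius, sample rate, and least-squares decoding are illustrative assumptions, not the authors' method.

```python
# Non-spiking illustration of direction finding on an 8-mic circular array
# via pairwise arrival-time differences (TDOA). NOT the paper's spiking
# model; radius and sample rate are assumed values.
import numpy as np

FS = 48_000          # sample rate (Hz), assumed
C = 343.0            # speed of sound in air (m/s)
R = 0.05             # array radius (m), assumed
ANGLES = np.arange(8) * (2 * np.pi / 8)          # one mic per "foot"
POS = R * np.stack([np.cos(ANGLES), np.sin(ANGLES)], axis=1)

def estimate_azimuth(channels: np.ndarray) -> float:
    """channels: (8, n_samples) array. Returns azimuth in degrees."""
    ref = channels[0]
    taus = []
    for ch in channels:
        # delay of each channel relative to mic 0 from the cross-correlation peak
        xc = np.correlate(ch, ref, mode="full")
        lag = np.argmax(xc) - (len(ref) - 1)
        taus.append(lag / FS)
    taus = np.asarray(taus)
    # plane-wave model: tau_i = -((p_i - p_0) . u) / c; solve for u by least squares
    A = -(POS - POS[0]) / C
    u, *_ = np.linalg.lstsq(A, taus, rcond=None)
    return float(np.degrees(np.arctan2(u[1], u[0])) % 360.0)
```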

  1. Special issues in hearing loss prevention in the Canadian military environment

    NASA Astrophysics Data System (ADS)

    Giguère, Christian; Laroche, Chantal

    2005-04-01

    Noise can be particularly noxious to hearing in the military. The personnel regularly face a wide range of noise-hazardous situations, many of which are seldom encountered in other work environments. High noise levels are associated with the operation of small arms and large caliber weapons, combat vehicles, aircraft, ships and vessels, and industrial equipment. This can induce permanent and temporary hearing loss, compromise speech communication, interfere with the detection and localization of sound sources and warning sounds, and thus can jeopardize the life or safety of personnel. This paper will review the essential elements of a hearing loss prevention program proposed for the Canadian Armed Forces. The ultimate goal is to preserve hearing health as well as all hearing abilities necessary for effective operations. The program has been designed to meet the noise measurement and hazard investigation procedures, limits on noise exposure, use of hearing protection and other regulatory measures contained in the Canadian Occupational Health and Safety (COHS) Regulations (Part VII: Levels of Sound), while addressing the particular nature of the military environment. The paper will focus on issues that are not typically found in other occupational environments (variable work schedules, excessive impulse noise, extended exposures, communication devices).

  2. Binaural Processing of Multiple Sound Sources

    DTIC Science & Technology

    2016-08-18

    Sound Source Localization Identification, and Sound Source Localization When Listeners Move. The CI research was also supported by an NIH grant (“Cochlear Implant Performance in Realistic Listening Environments,” Dr. Michael Dorman, Principal Investigator; Dr. William Yost, unpaid advisor).

  3. 3-D inversion of airborne electromagnetic data parallelized and accelerated by local mesh and adaptive soundings

    NASA Astrophysics Data System (ADS)

    Yang, Dikun; Oldenburg, Douglas W.; Haber, Eldad

    2014-03-01

    Airborne electromagnetic (AEM) methods are highly efficient tools for assessing the Earth's conductivity structures in a large area at low cost. However, the configuration of AEM measurements, which typically have widely distributed transmitter-receiver pairs, makes the rigorous modelling and interpretation extremely time-consuming in 3-D. Excessive overcomputing can occur when working on a large mesh covering the entire survey area and inverting all soundings in the data set. We propose two improvements. The first is to use a locally optimized mesh for each AEM sounding for the forward modelling and calculation of sensitivity. This dedicated local mesh is small with fine cells near the sounding location and coarse cells far away in accordance with EM diffusion and the geometric decay of the signals. Once the forward problem is solved on the local meshes, the sensitivity for the inversion on the global mesh is available through quick interpolation. Using local meshes for AEM forward modelling avoids unnecessary computing on fine cells on a global mesh that are far away from the sounding location. Since local meshes are highly independent, the forward modelling can be efficiently parallelized over an array of processors. The second improvement is random and dynamic down-sampling of the soundings. Each inversion iteration only uses a random subset of the soundings, and the subset is reselected for every iteration. The number of soundings in the random subset, determined by an adaptive algorithm, is tied to the degree of model regularization. This minimizes the overcomputing caused by working with redundant soundings. Our methods are compared against conventional methods and tested with a synthetic example. We also invert a field data set that was previously considered to be too large to be practically inverted in 3-D. These examples show that our methodology can dramatically reduce the processing time of 3-D inversion to a practical level without losing resolution. Any existing modelling technique can be included into our framework of mesh decoupling and adaptive sampling to accelerate large-scale 3-D EM inversions.
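
    The second improvement lends itself to a brief schematic. The sketch below shows the shape of random, dynamic down-sampling: each iteration draws a fresh random subset of soundings, whose size grows as the regularization is cooled. The subset-size rule, cooling schedule, and all sizes are placeholder assumptions, not the authors' algorithm.

```python
# Schematic of random, dynamic down-sampling of soundings per inversion
# iteration. All names and numbers are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_total = 10_000                    # total soundings in the survey (assumed)

def subset_size(beta, beta_max, n_min=200):
    # illustrative adaptive rule: heavy regularization early -> few soundings;
    # light regularization late -> more soundings for a sharper data fit
    frac = 1.0 - beta / beta_max
    return int(n_min + frac * (n_total - n_min))

beta_max = beta = 1e3
model = np.zeros(50_000)            # conductivity model on the global mesh

for iteration in range(20):
    idx = rng.choice(n_total, size=subset_size(beta, beta_max), replace=False)
    # forward-model only the selected soundings, each on its own local mesh,
    # then interpolate sensitivities back to the global mesh (not shown):
    # model = gauss_newton_update(model, idx, beta)
    beta *= 0.7                     # cool the regularization
```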

  4. What is that mysterious booming sound?

    USGS Publications Warehouse

    Hill, David P.

    2011-01-01

    The residents of coastal North Carolina are occasionally treated to sequences of booming sounds of unknown origin. The sounds are often energetic enough to rattle windows and doors. A recent sequence occurred in early January 2011 during clear weather with no evidence of local thunderstorms. Queries by a local reporter (Colin Hackman of the NBC affiliate WETC in Wilmington, North Carolina, personal communication 2011) seemed to eliminate common anthropogenic sources such as sonic booms or quarry blasts. So the commonly asked question, “What's making these booming sounds?” remained (and remains) unanswered.

  5. Seasonal and Ontogenetic Changes in Movement Patterns of Sixgill Sharks

    PubMed Central

    Andrews, Kelly S.; Williams, Greg D.; Levin, Phillip S.

    2010-01-01

    Background Understanding movement patterns is fundamental to population and conservation biology. The way an animal moves through its environment influences the dynamics of local populations and will determine how susceptible it is to natural or anthropogenic perturbations. It is of particular interest to understand the patterns of movement for species which are susceptible to human activities (e.g. fishing), or that exert a large influence on community structure, such as sharks. Methodology/Principal Findings We monitored the patterns of movement of 34 sixgill sharks Hexanchus griseus using two large-scale acoustic arrays inside and outside Puget Sound, Washington, USA. Sixgill sharks were residents in Puget Sound for up to at least four years before making large movements out of the estuary. Within Puget Sound, sixgills inhabited sites for several weeks at a time and returned to the same sites annually. Across four years, sixgills had consistent seasonal movements in which they moved to the north from winter to spring and moved to the south from summer to fall. Just prior to leaving Puget Sound, sixgills altered their behavior and moved twice as fast among sites. Nineteen of the thirty-four sixgills were detected leaving Puget Sound for the outer coast. Three of these sharks returned to Puget Sound. Conclusions/Significance For most large marine predators, we have a limited understanding of how they move through their environment, and this clouds our ability to successfully manage their populations and their communities. With detailed movement information, such as that being uncovered with acoustic monitoring, we can begin to quantify the spatial and temporal impacts of large predators within the framework of their ecosystems. PMID:20838617

  6. Monaural Sound Localization Based on Structure-Induced Acoustic Resonance

    PubMed Central

    Kim, Keonwook; Kim, Youngwoong

    2015-01-01

    A physical structure such as a cylindrical pipe controls the propagated sound spectrum in a predictable way that can be used to localize the sound source. This paper designs a monaural sound localization system based on multiple pyramidal horns around a single microphone. The acoustic resonance within each horn imposes a periodicity on the spectrum known as the fundamental frequency, which is inversely proportional to the radial horn length. Once the system accurately estimates the fundamental frequency, the horn length and the corresponding angle can be derived from this relationship. The modified Cepstrum algorithm is employed to estimate the fundamental frequency. In an anechoic chamber, localization experiments over the azimuthal configuration show that up to 61% of valid signals are recognized correctly, with a 30% misfire rate. With a postulated detection threshold, the system estimates direction with average decision rates of 52% positive-to-positive and 34% negative-to-positive. PMID:25668214
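
    A minimal sketch of cepstrum-based fundamental-frequency estimation, the kind of analysis the paper's modified Cepstrum algorithm performs, is shown below. The closed-pipe relation f0 = c/(4L) in the second function is an assumption for illustration only; the abstract states just that f0 is inversely proportional to the radial horn length.

```python
# Cepstral f0 estimation: real cepstrum = IFFT(log |FFT|), then peak-pick
# in the quefrency band corresponding to the expected f0 range.
import numpy as np

def cepstral_f0(x, fs, fmin=200.0, fmax=4000.0):
    spectrum = np.fft.rfft(x * np.hanning(len(x)))
    log_mag = np.log(np.abs(spectrum) + 1e-12)
    cepstrum = np.fft.irfft(log_mag)
    # quefrency search band corresponding to [fmin, fmax]
    qmin, qmax = int(fs / fmax), int(fs / fmin)
    q_peak = qmin + np.argmax(cepstrum[qmin:qmax])
    return fs / q_peak

def horn_length_from_f0(f0, c=343.0):
    return c / (4.0 * f0)     # assumed quarter-wave resonance, for illustration
```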

  7. Sound localization in the alligator.

    PubMed

    Bierman, Hilary S; Carr, Catherine E

    2015-11-01

    In early tetrapods, it is assumed that the tympana were acoustically coupled through the pharynx and therefore inherently directional, acting as pressure difference receivers. The later closure of the middle ear cavity in turtles, archosaurs, and mammals is a derived condition, and would have changed the ear by decoupling the tympana. Isolation of the middle ears would then have led to selection for structural and neural strategies to compute sound source localization in both archosaurs and mammalian ancestors. In the archosaurs (birds and crocodilians) the presence of air spaces in the skull provided connections between the ears that have been exploited to improve directional hearing, while neural circuits mediating sound localization are well developed. In this review, we will focus primarily on directional hearing in crocodilians, where vocalization and sound localization are thought to be ecologically important, and indicate important issues still awaiting resolution. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Speech-on-speech masking in a front-back dimension and analysis of binaural parameters in rooms using MLS methods

    NASA Astrophysics Data System (ADS)

    Aaronson, Neil L.

    This dissertation deals with questions important to the problem of human sound source localization in rooms, starting with perceptual studies and moving on to physical measurements made in rooms. In Chapter 1, a perceptual study is performed relevant to a specific phenomenon: the effect of speech reflections occurring in the front-back dimension and the ability of humans to segregate that from unreflected speech. Distracters were presented from the same source as the target speech, a loudspeaker directly in front of the listener, and also from a loudspeaker directly behind the listener, delayed relative to the front loudspeaker. Steps were taken to minimize the contributions of binaural difference cues. For all delays within ±32 ms, a release from informational masking of about 2 dB occurred. This suggested that human listeners are able to segregate speech sources based on spatial cues, even with minimal binaural cues. In moving on to physical measurements in rooms, a method was sought for simultaneous measurement of room characteristics such as impulse response (IR) and reverberation time (RT60), and binaural parameters such as interaural time difference (ITD), interaural level difference (ILD), and the interaural cross-correlation function and coherence. Chapter 2 involves investigations into the usefulness of maximum length sequences (MLS) for these purposes. Comparisons to random telegraph noise (RTN) show that MLS performs better in the measurement of stationary and room transfer functions, IR, and RT60 by an order of magnitude in RMS percent error, even after Wiener filtering and exponential time-domain filtering have improved the accuracy of RTN measurements. Measurements were taken in real rooms in an effort to understand how the reverberant characteristics of rooms affect binaural parameters important to sound source localization. Chapter 3 deals with interaural coherence, a parameter important for localization and perception of auditory source width. MLS were used to measure waveform and envelope coherences in two rooms for various source distances and 0° azimuth through a head-and-torso simulator (KEMAR). A relationship is sought that relates these two types of coherence, since envelope coherence, while an important quantity, is generally less accessible than waveform coherence. A power law relationship is shown to exist between the two that works well within and across bands, for any source distance, and is robust to reverberant conditions of the room. Measurements of ITD, ILD, and coherence in rooms give insight into the way rooms affect these parameters, and in turn, the ability of listeners to localize sounds in rooms. Such measurements, along with room properties, are made and analyzed using MLS methods in Chapter 4. It was found that the pinnae cause incoherence for sound sources incident between 30° and 90°. In human listeners, this does not seem to adversely affect performance in lateralization experiments. The cause of poor coherence in rooms was studied as part of Chapter 4 as well. It was found that rooms affect coherence by introducing variance into the ITD spectra within the bands in which it is measured. A mathematical model to predict the interaural coherence within a band given the standard deviation of the ITD spectrum and the center frequency of the band gives an exponential relationship. This is found to work well in predicting measured coherence given ITD spectrum variance. The pinnae seem to affect the ITD spectrum in a similar way at incident sound angles for which coherence is poor in an anechoic environment.
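
    The MLS technique at the heart of Chapters 2–4 is compact enough to sketch: an MLS has a nearly ideal circular autocorrelation, so circularly cross-correlating the recorded response with the excitation recovers the impulse response. The sketch below uses a standard order-15 feedback tap set; sequence order and the averaging strategy are assumptions for illustration.

```python
# Impulse-response measurement with a maximum length sequence (MLS).
import numpy as np

def mls(order=15, taps=(15, 14)):
    """Binary MLS of length 2**order - 1 via a Fibonacci LFSR, mapped to +/-1."""
    state = np.ones(order, dtype=int)
    seq = np.empty(2**order - 1, dtype=float)
    for i in range(seq.size):
        seq[i] = state[-1]
        fb = np.bitwise_xor.reduce(state[[t - 1 for t in taps]])
        state[1:] = state[:-1]
        state[0] = fb
    return 2.0 * seq - 1.0

def impulse_response(recorded, excitation):
    """Circular cross-correlation via FFT. `recorded` is one full period,
    ideally averaged over several repeats beforehand to suppress noise."""
    N = excitation.size
    X = np.fft.rfft(excitation)
    Y = np.fft.rfft(recorded[:N])
    # MLS autocorrelation is ~(N+1) at lag 0 and -1 elsewhere, hence the scaling
    return np.fft.irfft(Y * np.conj(X)) / (N + 1)
```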

  9. The precedence effect and its buildup and breakdown in ferrets and humans

    PubMed Central

    Tolnai, Sandra; Litovsky, Ruth Y.; King, Andrew J.

    2014-01-01

    Although many studies have examined the precedence effect (PE), few have tested whether it shows a buildup and breakdown in nonhuman animals comparable to that seen in humans. These processes are thought to reflect the ability of the auditory system to adjust to a listener's acoustic environment, and their mechanisms are still poorly understood. In this study, ferrets were trained on a two-alternative forced-choice task to discriminate the azimuthal direction of brief sounds. In one experiment, pairs of noise bursts were presented from two loudspeakers at different interstimulus delays (ISDs). Results showed that localization performance changed as a function of ISD in a manner consistent with the PE being operative. A second experiment investigated buildup and breakdown of the PE by measuring the ability of ferrets to discriminate the direction of a click pair following presentation of a conditioning train. Human listeners were also tested using this paradigm. In both species, performance was better when the test clicks and conditioning train had the same ISD but deteriorated following a switch in the direction of the leading and lagging sounds between the conditioning train and test clicks. These results suggest that ferrets, like humans, experience a buildup and breakdown of the PE. PMID:24606278

  10. Narrative Ability of Children with Speech Sound Disorders and the Prediction of Later Literacy Skills

    ERIC Educational Resources Information Center

    Wellman, Rachel L.; Lewis, Barbara A.; Freebairn, Lisa A.; Avrich, Allison A.; Hansen, Amy J.; Stein, Catherine M.

    2011-01-01

    Purpose: The main purpose of this study was to examine how children with isolated speech sound disorders (SSDs; n = 20), children with combined SSDs and language impairment (LI; n = 20), and typically developing children (n = 20), ages 3;3 (years;months) to 6;6, differ in narrative ability. The second purpose was to determine if early narrative…

  11. Speech sound articulation abilities of preschool-age children who stutter.

    PubMed

    Clark, Chagit E; Conture, Edward G; Walden, Tedra A; Lambert, Warren E

    2013-12-01

    The purpose of this study was to assess the association between speech sound articulation and childhood stuttering in a relatively large sample of preschool-age children who do and do not stutter, using the Goldman-Fristoe Test of Articulation-2 (GFTA-2; Goldman & Fristoe, 2000). Participants included 277 preschool-age children who do (CWS; n=128, 101 males) and do not stutter (CWNS; n=149, 76 males). Generalized estimating equations (GEE) were performed to assess between-group (CWS versus CWNS) differences on the GFTA-2. Additionally, within-group correlations were performed to explore the relation between CWS' speech sound articulation abilities and their stuttering frequency and severity, as well as their sound prolongation index (SPI; Schwartz & Conture, 1988). No significant differences were found between the articulation scores of preschool-age CWS and CWNS. However, there was a small gender effect for the 5-year-old age group, with girls generally exhibiting better articulation scores than boys. Additional findings indicated no relation between CWS' speech sound articulation abilities and their stuttering frequency, severity, or SPI. Findings suggest no apparent association between speech sound articulation-as measured by one standardized assessment (GFTA-2)-and childhood stuttering for this sample of preschool-age children (N=277). After reading this article, the reader will be able to: (1) discuss salient issues in the articulation literature relative to children who stutter; (2) compare/contrast the present study's methodologies and main findings to those of previous studies that investigated the association between childhood stuttering and speech sound articulation; (3) identify future research needs relative to the association between childhood stuttering and speech sound development; (4) replicate the present study's methodology to expand this body of knowledge. Copyright © 2013 Elsevier Inc. All rights reserved.

  12. Virtual acoustic displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.

    1991-01-01

    A 3D auditory display can potentially enhance information transfer by combining directional and iconic information in a quite naturalistic representation of dynamic objects in the interface. Another aspect of auditory spatial cues is that, in conjunction with other modalities, they can act as a potentiator of information in the display. For example, visual and auditory cues together can reinforce the information content of the display and provide a greater sense of presence or realism in a manner not readily achievable by either modality alone. This phenomenon will be particularly useful in telepresence applications, such as advanced teleconferencing environments, shared electronic workspaces, and monitoring telerobotic activities in remote or hazardous situations. Thus, the combination of direct spatial cues with good principles of iconic design could provide an extremely powerful and information-rich display which is also quite easy to use. An alternative approach, recently developed at ARC, generates externalized, 3D sound cues over headphones in real time using digital signal processing. Here, the synthesis technique involves the digital generation of stimuli using Head-Related Transfer Functions (HRTF's) measured in the two ear canals of individual subjects. Other similar approaches include an analog system developed by Loomis et al. (1990) and digital systems which make use of transforms derived from normative mannikins and simulations of room acoustics. Such an interface also requires the careful psychophysical evaluation of listeners' ability to accurately localize the virtual or synthetic sound sources. From an applied standpoint, measurement of each potential listener's HRTF's may not be possible in practice. For experienced listeners, localization performance with nonindividualized HRTF's was only slightly degraded compared to a subject's inherent ability. Alternatively, even inexperienced listeners may be able to adapt to a particular set of HRTF's as long as they provide adequate cues for localization. In general, these data suggest that most listeners can obtain useful directional information from an auditory display without requiring the use of individually-tailored HRTF's.
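
    The core of the headphone synthesis technique described here is a convolution of a mono source with the left- and right-ear head-related impulse responses (HRIRs, the time-domain counterpart of HRTFs) for the desired direction. The sketch below illustrates that step; the HRIR arrays are hypothetical placeholders, in practice obtained from ear-canal measurements or a normative mannikin such as KEMAR.

```python
# Binaural rendering by HRIR convolution; hrir_left/hrir_right are
# hypothetical measured impulse responses for one source direction.
import numpy as np

def render_binaural(mono: np.ndarray, hrir_left: np.ndarray,
                    hrir_right: np.ndarray) -> np.ndarray:
    """Return an (n_samples, 2) stereo signal for headphone presentation."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    n = max(left.size, right.size)
    out = np.zeros((n, 2))
    out[:left.size, 0] = left
    out[:right.size, 1] = right
    # normalize to avoid clipping on playback
    return out / np.max(np.abs(out))

# e.g. stereo = render_binaural(signal, hrirs[az_index, 0], hrirs[az_index, 1])
```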

  13. Verbal auditory agnosia in a patient with traumatic brain injury: A case report.

    PubMed

    Kim, Jong Min; Woo, Seung Beom; Lee, Zeeihn; Heo, Sung Jae; Park, Donghwi

    2018-03-01

    Verbal auditory agnosia is the selective inability to recognize verbal sounds. Patients with this disorder lose the ability to understand language, write from dictation, and repeat words, with preserved ability to identify nonverbal sounds. However, to the best of our knowledge, there has been no report of verbal auditory agnosia in an adult patient with traumatic brain injury; we describe such a case here. The patient was able to clearly distinguish between language and nonverbal sounds, and he did not have any difficulty in identifying environmental sounds. However, he did not follow oral commands and could not repeat or dictate words. On the other hand, he had fluent and comprehensible speech, and was able to read and understand written words and sentences. Diagnosis: verbal auditory agnosia. Intervention: he received speech therapy and cognitive rehabilitation during his hospitalization, and he practiced understanding verbal language with written sentences provided alongside. Two months after hospitalization, he regained the ability to understand some spoken words. Six months after hospitalization, his understanding of verbal language had improved to a comprehensible level when the speaker spoke slowly in front of him, but his comprehension of spoken language was still at the word level, not the sentence level. This case teaches that the evaluation of auditory functions, as well as of cognition and language functions, is important for accurate diagnosis and appropriate treatment, because verbal auditory agnosia tends to be easily misdiagnosed as hearing impairment, cognitive dysfunction, or sensory aphasia.

  14. Acoustic metamaterials capable of both sound insulation and energy harvesting

    NASA Astrophysics Data System (ADS)

    Li, Junfei; Zhou, Xiaoming; Huang, Guoliang; Hu, Gengkai

    2016-04-01

    Membrane-type acoustic metamaterials are well known for low-frequency sound insulation. In this work, by introducing a flexible piezoelectric patch, we propose sound-insulation metamaterials with the ability to harvest energy from sound waves. The dual functionality of the metamaterial device has been verified by experimental results, which show an over 20 dB sound transmission loss and a maximum energy conversion efficiency of up to 15.3% simultaneously. This novel property makes the metamaterial device more suitable for noise control applications.

  15. A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS

    NASA Astrophysics Data System (ADS)

    Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto

    At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds which they make. Developing a technique to localize sound sources amidst loud noise will therefore support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique that searches for sounds imperceptible to the ear in loud noise environments. Two loudspeakers simultaneously played generator noise and a voice attenuated by 20 dB (= 1/100 of the power) relative to the generator noise, in an outdoor space where cicadas were making noise. The sound signal was received by a horizontally oriented linear microphone array, 1.05 m in length and consisting of 15 microphones. The direction and distance of the voice were computed by array signal processing, and the voice was extracted and played back as an audible sound.
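
    The abstract does not specify the array algorithm, so the sketch below shows delay-and-sum beamforming as one representative approach for a 15-microphone, 1.05 m linear array: channel signals are time-aligned for each candidate direction, and the direction maximizing the output power is taken as the bearing. The sample rate and whole-sample delay rounding are simplifying assumptions.

```python
# Delay-and-sum bearing estimation on a 15-mic, 1.05 m linear array.
import numpy as np

FS = 48_000                                # sample rate (Hz), assumed
C = 343.0                                  # speed of sound (m/s)
N_MICS = 15
X_POS = np.linspace(0.0, 1.05, N_MICS)     # mic positions along the array (m)

def steer_power(channels: np.ndarray, theta_deg: float) -> float:
    """channels: (15, n). Power of the beam steered to theta (0 deg = broadside)."""
    delays = X_POS * np.sin(np.radians(theta_deg)) / C
    shifts = np.round((delays - delays.min()) * FS).astype(int)
    n = channels.shape[1] - shifts.max()
    beam = sum(ch[s:s + n] for ch, s in zip(channels, shifts))
    return float(np.mean(beam ** 2))

def bearing(channels: np.ndarray) -> float:
    grid = np.arange(-90.0, 90.5, 0.5)
    return grid[int(np.argmax([steer_power(channels, t) for t in grid]))]
```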

  16. L-type calcium channels refine the neural population code of sound level.

    PubMed

    Grimsley, Calum Alex; Green, David Brian; Sivaramakrishnan, Shobhana

    2016-12-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. Copyright © 2016 the American Physiological Society.

  17. Sound localization by echolocating bats

    NASA Astrophysics Data System (ADS)

    Aytekin, Murat

    Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies. The model also reveals how different aspects of sound localization, such as experience-dependent acquisition, adaptation, and extra-auditory influences, can be brought together under a comprehensive framework. This thesis presents a foundation for understanding the representation of auditory space that builds upon acoustic cues, motor control, and learning dynamic associations between action and auditory inputs.

  18. Initial Development of a Spatially Separated Speech-in-Noise and Localization Training Program

    PubMed Central

    Tyler, Richard S.; Witt, Shelley A.; Dunn, Camille C.; Wang, Wenjun

    2010-01-01

    Objective This article describes the initial development of a novel approach for training hearing-impaired listeners to improve their ability to understand speech in the presence of background noise and to also improve their ability to localize sounds. Design Most people with hearing loss, even those well fit with hearing devices, still experience significant problems understanding speech in noise. Prior research suggests that at least some subjects can experience improved speech understanding with training. However, all training systems that we are aware of have one basic, critical limitation. They do not provide spatial separation of the speech and noise, therefore ignoring the potential benefits of training binaural hearing. In this paper we describe our initial experience with a home-based training system that includes spatially separated speech-in-noise and localization training. Results Throughout the development of this system patient input, training and preliminary pilot data from individuals with bilateral cochlear implants were utilized. Positive feedback from subjective reports indicated that some individuals were engaged in the treatment, and formal testing showed benefit. Feedback and practical issues resulted in the reduction of the eight-loudspeaker system to a two-loudspeaker system. Conclusions These preliminary findings suggest we have successfully developed a viable spatial hearing training system that can improve binaural hearing in noise and localization. Applications include, but are not limited to, hearing with hearing aids and cochlear implants. PMID:20701836

  19. A longitudinal study of the bilateral benefit in children with bilateral cochlear implants.

    PubMed

    Asp, Filip; Mäki-Torkko, Elina; Karltorp, Eva; Harder, Henrik; Hergils, Leif; Eskilsson, Gunnar; Stenfelt, Stefan

    2015-02-01

    To study the development of the bilateral benefit in children using bilateral cochlear implants by measurements of speech recognition and sound localization. Bilateral and unilateral speech recognition in quiet, in multi-source noise, and horizontal sound localization was measured at three occasions during a two-year period, without controlling for age or implant experience. Longitudinal and cross-sectional analyses were performed. Results were compared to cross-sectional data from children with normal hearing. Seventy-eight children aged 5.1-11.9 years, with a mean bilateral cochlear implant experience of 3.3 years and a mean age of 7.8 years, at inclusion in the study. Thirty children with normal hearing aged 4.8-9.0 years provided normative data. For children with cochlear implants, bilateral and unilateral speech recognition in quiet was comparable whereas a bilateral benefit for speech recognition in noise and sound localization was found at all three test occasions. Absolute performance was lower than in children with normal hearing. Early bilateral implantation facilitated sound localization. A bilateral benefit for speech recognition in noise and sound localization continues to exist over time for children with bilateral cochlear implants, but no relative improvement is found after three years of bilateral cochlear implant experience.

  20. Local inhibition of GABA affects precedence effect in the inferior colliculus

    PubMed Central

    Wang, Yanjun; Wang, Ningyu; Wang, Dan; Jia, Jun; Liu, Jinfeng; Xie, Yan; Wen, Xiaohui; Li, Xiaoting

    2014-01-01

    The precedence effect is a prerequisite for faithful sound localization in a complex auditory environment, and is a physiological phenomenon in which the auditory system selectively suppresses the directional information from echoes. Here we investigated how neurons in the inferior colliculus respond to the paired sounds that produce precedence-effect illusions, and whether their firing behavior can be modulated through inhibition with gamma-aminobutyric acid (GABA). We recorded extracellularly from 36 neurons in rat inferior colliculus under three conditions: no injection, injection with saline, and injection with gamma-aminobutyric acid. The paired sounds that produced precedence effects were two identical 4-ms noise bursts, which were delivered contralaterally or ipsilaterally to the recording site. The normalized neural responses were measured as a function of different inter-stimulus delays and half-maximal interstimulus delays were acquired. Neuronal responses to the lagging sounds were weak when the inter-stimulus delay was short, but increased gradually as the delay was lengthened. Saline injection produced no changes in neural responses, but after local gamma-aminobutyric acid application, responses to the lagging stimulus were suppressed. Application of gamma-aminobutyric acid affected the normalized response to lagging sounds, independently of whether they or the paired sounds were contralateral or ipsilateral to the recording site. These observations suggest that local inhibition by gamma-aminobutyric acid in the rat inferior colliculus shapes the neural responses to lagging sounds, and modulates the precedence effect. PMID:25206830

  1. Sound synthesis and evaluation of interactive footsteps and environmental sounds rendering for virtual reality applications.

    PubMed

    Nordahl, Rolf; Turchet, Luca; Serafin, Stefania

    2011-09-01

    We propose a system that affords real-time sound synthesis of footsteps on different materials. The system is based on microphones, which detect real footstep sounds from subjects, from which the ground reaction force (GRF) is estimated. Such GRF is used to control a sound synthesis engine based on physical models. Two experiments were conducted. In the first experiment, the ability of subjects to recognize the surface they were exposed to was assessed. In the second experiment, the sound synthesis engine was enhanced with environmental sounds. Results show that, in some conditions, adding a soundscape significantly improves the recognition of the simulated environment.

  2. Sound source tracking device for telematic spatial sound field reproduction

    NASA Astrophysics Data System (ADS)

    Cardenas, Bruno

    This research describes an algorithm that localizes sound sources for use in telematic applications. The localization algorithm is based on amplitude differences between the channels of a microphone array of directional shotgun microphones. The amplitude differences are used to locate multiple performers and to reproduce their voices, recorded at close distance with lavalier microphones, at spatially correct positions using a loudspeaker rendering system. To track multiple sound sources in parallel, the information gained from the lavalier microphones is used to estimate the signal-to-noise ratio between each performer and the concurrent performers.
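
    One way to turn channel amplitude differences from directional microphones into a bearing is sketched below: model each channel's sensitivity with a cardioid pattern and grid-search the azimuth whose predicted level pattern best matches the measured RMS levels. The microphone aiming angles and the cardioid model are assumptions for illustration, not details from this work.

```python
# Azimuth from level differences across directional microphones.
import numpy as np

AIM_DEG = np.array([0.0, 60.0, 120.0, 180.0, 240.0, 300.0])  # assumed aims

def cardioid(angle_diff_rad):
    return 0.5 * (1.0 + np.cos(angle_diff_rad))

def estimate_azimuth(levels_rms: np.ndarray) -> float:
    """levels_rms: one RMS level per microphone channel."""
    meas = levels_rms / np.linalg.norm(levels_rms)
    best, best_err = 0.0, np.inf
    for az in np.arange(0.0, 360.0, 1.0):
        pred = cardioid(np.radians(az - AIM_DEG))
        pred = pred / np.linalg.norm(pred)
        err = np.sum((meas - pred) ** 2)
        if err < best_err:
            best, best_err = az, err
    return best
```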

  3. Active localization of virtual sounds

    NASA Technical Reports Server (NTRS)

    Loomis, Jack M.; Hebert, C.; Cicinelli, J. G.

    1991-01-01

    We describe a virtual sound display built around a 12 MHz 80286 microcomputer and special purpose analog hardware. The display implements most of the primary cues for sound localization in the ear-level plane. Static information about direction is conveyed by interaural time differences and, for frequencies above 1800 Hz, by head sound shadow (interaural intensity differences) and pinna sound shadow. Static information about distance is conveyed by variation in sound pressure (first power law) for all frequencies, by additional attenuation in the higher frequencies (simulating atmospheric absorption), and by the proportion of direct to reverberant sound. When the user actively locomotes, the changing angular position of the source occasioned by head rotations provides further information about direction and the changing angular velocity produced by head translations (motion parallax) provides further information about distance. Judging both from informal observations by users and from objective data obtained in an experiment on homing to virtual and real sounds, we conclude that simple displays such as this are effective in creating the perception of external sounds to which subjects can home with accuracy and ease.
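
    Two of the static cues this display implements admit a compact illustration: an interaural time difference from the spherical-head (Woodworth) approximation, and first-power-law distance attenuation. The head radius and sample rate below are common assumed values; the head/pinna shadow and reverberation cues of the actual display are omitted for brevity.

```python
# Toy ITD + distance-attenuation rendering (Woodworth spherical-head model).
import numpy as np

FS = 44_100      # sample rate (Hz), assumed
A = 0.0875       # spherical-head radius (m), a common approximation
C = 343.0        # speed of sound (m/s)

def woodworth_itd(azimuth_rad: float) -> float:
    """ITD in seconds for a source at azimuth -pi/2..pi/2 (0 = straight ahead)."""
    return (A / C) * (np.sin(azimuth_rad) + azimuth_rad)

def render(mono: np.ndarray, azimuth_rad: float, distance_m: float):
    """Return (left, right) signals carrying ITD and 1/r attenuation."""
    gain = 1.0 / max(distance_m, 0.1)                      # first power law
    lag = int(round(abs(woodworth_itd(azimuth_rad)) * FS))
    near = gain * np.concatenate([mono, np.zeros(lag)])    # leading ear
    far = gain * np.concatenate([np.zeros(lag), mono])     # lagging ear
    # positive azimuth = source on the right, so the right ear leads
    return (far, near) if azimuth_rad >= 0 else (near, far)
```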

  4. Influence of computerized sounding out on spelling performance for children who do and do not rely on AAC.

    PubMed

    McCarthy, Jillian H; Hogan, Tiffany P; Beukelman, David R; Schwarz, Ilsa E

    2015-05-01

    Spelling is an important skill for individuals who rely on augmentative and alternative communication (AAC). The purpose of this study was to investigate how computerized sounding out influenced spelling accuracy of pseudo-words. Computerized sounding out was defined as a word elongated, thus providing an opportunity for a child to hear all the sounds in the word at a slower rate. Seven children with cerebral palsy, four who use AAC and three who do not, participated in a single-subject AB design. The results of the study indicated that the use of computerized sounding out increased the phonologic accuracy of the pseudo-words produced by participants. The study provides preliminary evidence for the use of computerized sounding out during spelling tasks for children with cerebral palsy who do and do not use AAC. Future directions and clinical implications are discussed. We investigated how computerized sounding out influenced spelling accuracy of pseudo-words for children with complex communication needs who did and did not use augmentative and alternative communication (AAC). Results indicated that the use of computerized sounding out increased the phonologic accuracy of the pseudo-words by participants, suggesting that computerized sounding out might assist in more accurate spelling for children who use AAC. Future research is needed to determine how language and reading abilities influence the use of computerized sounding out with children who have a range of speech intelligibility abilities and do and do not use AAC.

  5. Occupational Noise Exposure

    MedlinePlus

    ... induced hearing loss limits your ability to hear high frequency sounds and understand speech, which seriously impairs your ... at the base of the cochlea respond to high-frequency sounds, while those at the apex respond to ...

  6. Contralateral routing of signals disrupts monaural level and spectral cues to sound localisation on the horizontal plane.

    PubMed

    Pedley, Adam J; Kitterick, Pádraig T

    2017-09-01

    Contra-lateral routing of signals (CROS) devices re-route sound between the deaf and hearing ears of unilaterally-deaf individuals. This rerouting would be expected to disrupt access to monaural level cues that can support monaural localisation in the horizontal plane. However, such a detrimental effect has not been confirmed by clinical studies of CROS use. The present study aimed to exercise strict experimental control over the availability of monaural cues to localisation in the horizontal plane and the fitting of the CROS device to assess whether signal routing can impair the ability to locate sources of sound and, if so, whether CROS selectively disrupts monaural level or spectral cues to horizontal location, or both. Unilateral deafness and CROS device use were simulated in twelve normal hearing participants. Monaural recordings of broadband white noise presented from three spatial locations (-60°, 0°, and +60°) were made in the ear canal of a model listener using a probe microphone with and without a CROS device. The recordings were presented to participants via an insert earphone placed in their right ear. The recordings were processed to disrupt either monaural level or spectral cues to horizontal sound location by roving presentation level or the energy across adjacent frequency bands, respectively. Localisation ability was assessed using a three-alternative forced-choice spatial discrimination task. Participants localised above chance levels in all conditions. Spatial discrimination accuracy was poorer when participants only had access to monaural spectral cues compared to when monaural level cues were available. CROS use impaired localisation significantly regardless of whether level or spectral cues were available. For both cues, signal re-routing had a detrimental effect on the ability to localise sounds originating from the side of the deaf ear (-60°). CROS use also impaired the ability to use level cues to localise sounds originating from straight ahead (0°). The re-routing of sounds can restrict access to the monaural cues that provide a basis for determining sound location in the horizontal plane. Perhaps encouragingly, the results suggest that both monaural level and spectral cues may not be disrupted entirely by signal re-routing and that it may still be possible to reliably identify sounds originating on the hearing side. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
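
    The two cue-disruption manipulations described above can be sketched directly: roving the overall presentation level across trials degrades monaural level cues, while roving the energy of adjacent frequency bands degrades monaural spectral cues. The rove ranges and band count below are illustrative assumptions, not the study's parameters.

```python
# Level roving and spectral (band-energy) roving of a trial stimulus.
import numpy as np

rng = np.random.default_rng()

def rove_level(x, max_db=10.0):
    """Apply a random overall gain of +/- max_db dB to a trial."""
    gain_db = rng.uniform(-max_db, max_db)
    return x * 10.0 ** (gain_db / 20.0)

def rove_spectrum(x, fs, n_bands=16, max_db=10.0):
    """Independently scale adjacent frequency bands by random gains."""
    X = np.fft.rfft(x)
    edges = np.linspace(0, X.size, n_bands + 1, dtype=int)
    for lo, hi in zip(edges[:-1], edges[1:]):
        X[lo:hi] *= 10.0 ** (rng.uniform(-max_db, max_db) / 20.0)
    return np.fft.irfft(X, n=x.size)
```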

  7. The hearing benefit of cochlear implantation for individuals with unilateral hearing loss, but no tinnitus.

    PubMed

    Skarzynski, Henryk; Lorens, Artur; Kruszynska, Marika; Obrycka, Anita; Pastuszak, Dorota; Skarzynski, Piotr Henryk

    2017-07-01

    Cochlear implants improve the hearing abilities of individuals with unilateral hearing loss and no tinnitus. The benefit is no different from that seen in patients with unilateral hearing loss and incapacitating tinnitus. To evaluate hearing outcomes after cochlear implantation in individuals with unilateral hearing loss and no tinnitus and compare them to those obtained in a similar group who had incapacitating tinnitus. Six cases who did not experience tinnitus before operation and 15 subjects with pre-operative tinnitus were evaluated with a structured interview, a monosyllabic word test under difficult listening situations, a sound localization test, and an APHAB (abbreviated profile of hearing aid benefit) questionnaire. All subjects used their cochlear implant more than 8 hours a day, 7 days a week. In 'no tinnitus' patients, mean benefit of cochlear implantation was 19% for quiet speech, 15% for speech in noise (with the same signal-to-noise ratio in the implanted and non-implanted ear), and 16% for a more favourable signal-to-noise ratio at the implanted ear. Sound localization error improved by an average of 19°. The global score of APHAB improved by 16%. The benefits across all evaluations did not differ significantly between the 'no tinnitus' and 'tinnitus' groups.

  8. Intrinsic Plasticity Induced by Group II Metabotropic Glutamate Receptors via Enhancement of High Threshold KV Currents in Sound Localizing Neurons

    PubMed Central

    Hamlet, William R.; Lu, Yong

    2016-01-01

    Intrinsic plasticity has emerged as an important mechanism regulating neuronal excitability and output under physiological and pathological conditions. Here, we report a novel form of intrinsic plasticity. Using perforated patch clamp recordings, we examined the modulatory effects of group II metabotropic glutamate receptors (mGluR II) on voltage-gated potassium (KV) currents and the firing properties of neurons in the chicken nucleus laminaris (NL), the first central auditory station where interaural time cues are analyzed for sound localization. We found that activation of mGluR II by synthetic agonists resulted in a selective increase of the high threshold KV currents. More importantly, synaptically released glutamate (with reuptake blocked) also enhanced the high threshold KV currents. The enhancement was frequency-coding region dependent, being more pronounced in low frequency neurons compared to middle and high frequency neurons. The intracellular mechanism involved the Gβγ signaling pathway associated with phospholipase C and protein kinase C. The modulation strengthened membrane outward rectification, sharpened action potentials, and improved the ability of NL neurons to follow high frequency inputs. These data suggest that mGluR II provides a feedforward modulatory mechanism that may regulate temporal processing under the condition of heightened synaptic inputs. PMID:26964678

  9. Musicians show general enhancement of complex sound encoding and better inhibition of irrelevant auditory change in music: an ERP study.

    PubMed

    Kaganovich, Natalya; Kim, Jihyun; Herring, Caryn; Schumaker, Jennifer; Macpherson, Megan; Weber-Fox, Christine

    2013-04-01

    Using electrophysiology, we have examined two questions in relation to musical training - namely, whether it enhances sensory encoding of the human voice and whether it improves the ability to ignore irrelevant auditory change. Participants performed an auditory distraction task, in which they identified each sound as either short (350 ms) or long (550 ms) and ignored a change in timbre of the sounds. Sounds consisted of a male and a female voice saying a neutral sound [a], and of a cello and a French Horn playing an F3 note. In some blocks, musical sounds occurred on 80% of trials, while voice sounds on 20% of trials. In other blocks, the reverse was true. Participants heard naturally recorded sounds in half of experimental blocks and their spectrally-rotated versions in the other half. Regarding voice perception, we found that musicians had a larger N1 event-related potential component not only to vocal sounds but also to their never before heard spectrally-rotated versions. We therefore conclude that musical training is associated with a general improvement in the early neural encoding of complex sounds. Regarding the ability to ignore irrelevant auditory change, musicians' accuracy tended to suffer less from the change in timbre of the sounds, especially when deviants were musical notes. This behavioral finding was accompanied by a marginally larger re-orienting negativity in musicians, suggesting that their advantage may lie in a more efficient disengagement of attention from the distracting auditory dimension. © 2013 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  10. An Overview of the Major Phenomena of the Localization of Sound Sources by Normal-Hearing, Hearing-Impaired, and Aided Listeners

    PubMed Central

    2014-01-01

    Localizing a sound source requires the auditory system to determine its direction and its distance. In general, hearing-impaired listeners do less well in experiments measuring localization performance than normal-hearing listeners, and hearing aids often exacerbate matters. This article summarizes the major experimental effects in direction (and its underlying cues of interaural time differences and interaural level differences) and distance for normal-hearing, hearing-impaired, and aided listeners. Front/back errors and the importance of self-motion are noted. The influence of vision on the localization of real-world sounds is emphasized, such as through the ventriloquist effect or the intriguing link between spatial hearing and visual attention. PMID:25492094

  11. Sound Localization and Speech Perception in Noise of Pediatric Cochlear Implant Recipients: Bimodal Fitting Versus Bilateral Cochlear Implants.

    PubMed

    Choi, Ji Eun; Moon, Il Joon; Kim, Eun Yeon; Park, Hee-Sung; Kim, Byung Kil; Chung, Won-Ho; Cho, Yang-Sun; Brown, Carolyn J; Hong, Sung Hwa

    The aim of this study was to compare binaural performance on an auditory localization task and a speech-perception-in-babble measure between children who use a cochlear implant (CI) in one ear and a hearing aid (HA) in the other (bimodal fitting) and those who use bilateral CIs. Thirteen children (mean age ± SD = 10 ± 2.9 years) with bilateral CIs and 19 children with bimodal fitting were recruited to participate. Sound localization was assessed using a 13-loudspeaker array in a quiet sound-treated booth. Speakers were placed in an arc from -90° azimuth to +90° azimuth (15° intervals) in the horizontal plane. To assess the accuracy of sound location identification, we calculated the absolute error in degrees between the target speaker and the response speaker during each trial. The mean absolute error was computed by dividing the sum of absolute errors by the total number of trials. We also calculated the hemifield identification score to reflect the accuracy of right/left discrimination. Speech-in-babble perception was also measured in the sound field using target speech presented from the front speaker. Eight-talker babble was presented in four different listening conditions: from the front speaker (0°), from one of the two side speakers (+90° or -90°), or from both side speakers (±90°). The speech, spatial, and quality questionnaire was administered. When the two groups of children were directly compared, there was no significant difference in localization accuracy or hemifield identification score under the binaural condition. Performance on the speech perception test was also similar under most babble conditions. However, when the babble was from the first device side (the CI side for children with bimodal stimulation or the first CI side for children with bilateral CIs), speech understanding in babble by bilateral CI users was significantly better than that by bimodal listeners. Speech, spatial, and quality scores were comparable between the two groups. Overall, binaural performance was similar between children who are fit with two CIs (CI + CI) and those who use bimodal stimulation (HA + CI) in most conditions. However, the bilateral CI group showed better speech perception than the bimodal group when babble was from the first device side (the first CI side for bilateral CI users or the CI side for bimodal listeners). Therefore, if bimodal performance is significantly below the mean bilateral CI performance on speech perception in babble, these results suggest that transitioning the child from bimodal stimulation to bilateral CIs should be considered.
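
    The two localization metrics described above are simple enough to write out directly, as in the sketch below. The handling of midline (0°) targets in the hemifield score is an assumption; the abstract does not say how such trials were scored.

```python
# Mean absolute localization error and hemifield (left/right) score.
import numpy as np

def mean_absolute_error(target_deg, response_deg):
    """Mean absolute difference between target and response azimuths (deg)."""
    t = np.asarray(target_deg, dtype=float)
    r = np.asarray(response_deg, dtype=float)
    return float(np.mean(np.abs(t - r)))

def hemifield_score(target_deg, response_deg):
    """Proportion of trials with the correct left/right judgment."""
    t = np.sign(np.asarray(target_deg, dtype=float))
    r = np.sign(np.asarray(response_deg, dtype=float))
    valid = t != 0                      # exclude midline (0 deg) targets, assumed
    return float(np.mean(t[valid] == r[valid]))
```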

  12. Developmental Changes in Locating Voice and Sound in Space

    PubMed Central

    Kezuka, Emiko; Amano, Sachiko; Reddy, Vasudevi

    2017-01-01

    We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment the present study tested 35 infants at 3 ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter’s voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless even at 4 months two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult and was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (only present in the 7 month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. There was no advantage found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year. PMID:28979220

  13. Selective attention in normal and impaired hearing.

    PubMed

    Shinn-Cunningham, Barbara G; Best, Virginia

    2008-12-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.

  14. Selective Attention in Normal and Impaired Hearing

    PubMed Central

    Shinn-Cunningham, Barbara G.; Best, Virginia

    2008-01-01

    A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention. PMID:18974202

  15. Contribution of self-motion perception to acoustic target localization.

    PubMed

    Pettorossi, V E; Brosch, M; Panichi, R; Botti, F; Grassi, S; Troiani, D

    2005-05-01

    The findings of this study suggest that acoustic spatial perception during head movement is achieved by the vestibular system, which is responsible for the correct dynamics of acoustic target pursuit. The ability to localize sounds in space during whole-body rotation relies on the auditory localization system, which recognizes the position of a sound in a head-related frame, and on the sensory systems that perceive head and body movement, chiefly the vestibular system. The aim of this study was to analyse the contribution of head motion cues to the spatial representation of acoustic targets in humans. Healthy subjects standing on a rotating platform in the dark were asked to pursue with a laser pointer an acoustic target that was rotated horizontally while the body was kept stationary, or that was kept stationary while the whole body was rotated. The contribution of head motion to the spatial acoustic representation could be inferred by comparing the gains and phases of pursuit in the two experimental conditions as the frequency was varied. During acoustic target rotation there was a reduction in gain and an increase in phase lag, whereas during whole-body rotation the gain tended to increase and the phase remained constant. The different contributions of the vestibular and acoustic systems were confirmed by analysing acoustic pursuit during asymmetric body rotation. In this condition, in which self-motion perception gradually diminished, an increasing delay in target pursuit was observed.

  16. Letter Names, Letter Sounds and Phonological Awareness: An Examination of Kindergarten Children across Letters and of Letters across Children

    ERIC Educational Resources Information Center

    Evans, Mary Ann; Bell, Michelle; Shaw, Deborah; Moretti, Shelley; Page, Jodi

    2006-01-01

    In this study, 149 kindergarten children were assessed for knowledge of letter names and letter sounds, phonological awareness, and cognitive abilities. The study examined child and letter characteristics influencing the acquisition of alphabetic knowledge in a naturalistic context, as well as the relationship between letter-sound knowledge and…

  17. Developmental differences in auditory detection and localization of approaching vehicles.

    PubMed

    Barton, Benjamin K; Lew, Roger; Kovesdi, Casey; Cottrell, Nicholas D; Ulrich, Thomas

    2013-04-01

    Pedestrian safety is a significant problem in the United States, with thousands being injured each year. Multiple risk factors exist, but one poorly understood factor is pedestrians' ability to attend to vehicles using auditory cues. Auditory information in the pedestrian setting is increasing in importance with the growing number of quieter hybrid and all-electric vehicles on America's roadways that do not emit the sound cues pedestrians expect from an approaching vehicle. Our study explored developmental differences in pedestrians' detection and localization of approaching vehicles. Fifty children ages 6-9 years and 35 adults participated. Participants' performance varied significantly with age and with the speed and direction of the vehicle's approach. Results underscore the importance of understanding children's and adults' use of auditory cues for pedestrian safety and highlight the need for further research.

  18. Sequential Bilateral Cochlear Implantation in a Patient with Bilateral Meniere’s Disease

    PubMed Central

    Holden, Laura K.; Neely, J. Gail; Gotter, Brenda D.; Mispagel, Karen M.; Firszt, Jill B.

    2012-01-01

    This case study describes a 45 year old female with bilateral, profound sensorineural hearing loss due to Meniere’s disease. She received her first cochlear implant in the right ear in 2008 and the second cochlear implant in the left ear in 2010. The case study examines the enhancement to speech recognition, particularly in noise, provided by bilateral cochlear implants. Speech recognition tests were administered prior to obtaining the second implant and at a number of test intervals following activation of the second device. Speech recognition in quiet and noise as well as localization abilities were assessed in several conditions to determine bilateral benefit and performance differences between ears. The results of the speech recognition testing indicated a substantial improvement in the patient’s ability to understand speech in noise and her ability to localize sound when using bilateral cochlear implants compared to using a unilateral implant or an implant and a hearing aid. In addition, the patient reported considerable improvement in her ability to communicate in daily life when using bilateral implants versus a unilateral implant. This case suggests that cochlear implantation is a viable option for patients who have lost their hearing to Meniere’s disease even when a number of medical treatments and surgical interventions have been performed to control vertigo. In the case presented, bilateral cochlear implantation was necessary for this patient to communicate successfully at home and at work. PMID:22463939

  19. Salient sounds activate human visual cortex automatically

    PubMed Central

    McDonald, John J.; Störmer, Viola S.; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A.

    2013-01-01

    Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, the present study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2, 3, and 4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of co-localized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task. PMID:23699530

  20. How the owl tracks its prey – II

    PubMed Central

    Takahashi, Terry T.

    2010-01-01

    Barn owls can capture prey in pitch darkness or by diving into snow, while homing in on the sounds made by their prey. First, the neural mechanisms by which the barn owl localizes a single sound source in an otherwise quiet environment will be explained. The ideas developed for the single source case will then be expanded to environments in which there are multiple sound sources and echoes – environments that are challenging for humans with impaired hearing. Recent controversies regarding the mechanisms of sound localization will be discussed. Finally, the case in which both visual and auditory information are available to the owl will be considered. PMID:20889819

  1. Musicians Show General Enhancement of Complex Sound Encoding and Better Inhibition of Irrelevant Auditory Change in Music: An ERP Study

    PubMed Central

    Kaganovich, Natalya; Kim, Jihyun; Herring, Caryn; Schumaker, Jennifer; MacPherson, Megan; Weber-Fox, Christine

    2012-01-01

    Using electrophysiology, we have examined two questions in relation to musical training – namely, whether it enhances sensory encoding of the human voice and whether it improves the ability to ignore irrelevant auditory change. Participants performed an auditory distraction task, in which they identified each sound as either short (350 ms) or long (550 ms) and ignored a change in sounds’ timbre. Sounds consisted of a male and a female voice saying a neutral sound [a], and of a cello and a French Horn playing an F3 note. In some blocks, musical sounds occurred on 80% of trials, while voice sounds occurred on 20% of trials. In other blocks, the reverse was true. Participants heard naturally recorded sounds in half of the experimental blocks and their spectrally-rotated versions in the other half. Regarding voice perception, we found that musicians had a larger N1 ERP component not only to vocal sounds but also to their never-before-heard spectrally-rotated versions. We therefore conclude that musical training is associated with a general improvement in the early neural encoding of complex sounds. Regarding the ability to ignore irrelevant auditory change, musicians’ accuracy tended to suffer less from the change in sounds’ timbre, especially when deviants were musical notes. This behavioral finding was accompanied by a marginally larger re-orienting negativity in musicians, suggesting that their advantage may lie in a more efficient disengagement of attention from the distracting auditory dimension. PMID:23301775

  2. Novel Application of Glass Fibers Recovered From Waste Printed Circuit Boards as Sound and Thermal Insulation Material

    NASA Astrophysics Data System (ADS)

    Sun, Zhixing; Shen, Zhigang; Ma, Shulin; Zhang, Xiaojing

    2013-10-01

    The aim of this study is to investigate the feasibility of using glass fibers, a material recycled from waste printed circuit boards (WPCB), as a sound absorption and thermal insulation material. Glass fibers were obtained through a fluidized-bed recycling process. Acoustic properties of the recovered glass fibers (RGF) were measured and compared with some commercial sound absorbing materials, such as expanded perlite (EP), expanded vermiculite (EV), and commercial glass fiber. Results show that RGF have good sound absorption ability over the whole tested frequency range (100-6400 Hz). The average sound absorption coefficient of RGF is 0.86, which is superior to those of EP (0.81) and EV (0.73). Noise reduction coefficient analysis indicates that the absorption ability of RGF meets the requirement of a Class II rating for sound-absorbing material according to the national standard. The thermal insulation results show that RGF has a fairly low thermal conductivity (0.046 W/m K), which is comparable to those of some insulation materials (i.e., EV, EP, and rock wool). In addition, an empirical dependence of thermal conductivity on material temperature was determined for RGF. Altogether, the results show that reusing RGF as sound and thermal insulation material provides a promising way to recycle WPCB into highly beneficial products.
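
    For orientation, the noise reduction coefficient cited above is conventionally computed as in the sketch below (a generic illustration following the usual ASTM C423 definition, not the authors' code; the coefficient values are hypothetical):

    ```python
    # Noise reduction coefficient (NRC): the arithmetic mean of the sound
    # absorption coefficients at 250, 500, 1000, and 2000 Hz, rounded to
    # the nearest 0.05.
    def noise_reduction_coefficient(alpha):
        """alpha: dict mapping frequency (Hz) to absorption coefficient (0..1)."""
        mean = sum(alpha[f] for f in (250, 500, 1000, 2000)) / 4.0
        return round(mean * 20) / 20  # round to the nearest 0.05

    # Hypothetical coefficients for a fibrous absorber:
    alpha = {250: 0.55, 500: 0.80, 1000: 0.95, 2000: 0.98}
    print(noise_reduction_coefficient(alpha))  # -> 0.8
    ```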

  3. Hearing Sensation Changes When a Warning Predicts a Loud Sound in the False Killer Whale (Pseudorca crassidens).

    PubMed

    Nachtigall, Paul E; Supin, Alexander Y

    2016-01-01

    Stranded whales and dolphins have sometimes been associated with loud anthropogenic sounds. Echolocating whales produce very loud sounds themselves and have developed the ability to protect their hearing from their own signals. A false killer whale's hearing sensitivity was measured when a faint warning sound was given just before the presentation of an increase in intensity to 170 dB. If the warning occurred within 1-9 s, as opposed to 20-40 s, the whale showed a 13-dB reduction in hearing sensitivity. Warning sounds before loud pulses may help mitigate the effects of loud anthropogenic sounds on wild animals.

  4. Sound absorption study of raw and expanded particulate vermiculites

    NASA Astrophysics Data System (ADS)

    Vašina, Martin; Plachá, Daniela; Mikeska, Marcel; Hružík, Lumír; Martynková, Gražyna Simha

    2016-12-01

    Expanded and raw vermiculite minerals were studied for their ability to absorb sound. Phase and structural characterization was similar for both types, while morphology and surface properties differed. Sound waves reflect within the wedge-like structure, are progressively attenuated, and are eventually absorbed completely. We found that, owing to the porous character of expanded vermiculite, the absorption of sound in its layered morphology is analogous to the sound attenuation achieved in "anechoic chambers." The best sound damping properties of the investigated vermiculites were in general obtained at greater powder bed heights and higher excitation frequencies.

  5. How Do Honeybees Attract Nestmates Using Waggle Dances in Dark and Noisy Hives?

    PubMed Central

    Hasegawa, Yuji; Ikeno, Hidetoshi

    2011-01-01

    It is well known that honeybees share information related to food sources with nestmates using a dance language that is representative of symbolic communication among non-primates. Some honeybee species engage in visually apparent behavior, walking in a figure-eight pattern inside their dark hives. It has been suggested that sounds play an important role in this dance language, even though a variety of wing vibration sounds are produced by honeybee behaviors in hives. It has been shown that dancers emit sounds primarily at about 250–300 Hz, which is in the same frequency range as honeybees' flight sounds. Thus the exact mechanism whereby honeybees attract nestmates using waggle dances in such a dark and noisy hive is as yet unclear. In this study, we used a flight simulator in which honeybees were attached to a torque meter in order to analyze the component of bees' orienting response caused only by sounds, and not by odor or by vibrations sensed by their legs. Using single-sound localization tests, we showed that honeybees preferred sounds around 265 Hz. Furthermore, in sound discrimination tests using sounds of the same frequency, honeybees preferred rhythmic sounds. Our results demonstrate that frequency and rhythmic components play complementary roles in localizing dance sounds. Dance sounds were presumably developed to share information in a dark and noisy environment. PMID:21603608

  6. Newborn human brain identifies repeated auditory feature conjunctions of low sequential probability.

    PubMed

    Ruusuvirta, Timo; Huotilainen, Minna; Fellman, Vineta; Näätänen, Risto

    2004-11-01

    Natural environments are usually composed of multiple sound sources. The sounds might physically differ from one another only as feature conjunctions, and several of them might occur repeatedly in the short term. Nevertheless, the detection of rare sounds requires the identification of the repeated ones. Adults have some limited ability to effortlessly identify repeated sounds in such acoustically complex environments, but the developmental onset of this finite ability is unknown. Sleeping newborn infants were presented with a repeated tone carrying six frequent (P = 0.15 each) and six rare (P ≈ 0.017 each) conjunctions of its frequency, intensity and duration. Event-related potentials recorded from the infants' scalp were found to shift in amplitude towards positive polarity selectively in response to rare conjunctions. This finding suggests that humans are relatively hard-wired to preattentively identify repeated auditory feature conjunctions even when such conjunctions occur rarely among other similar ones.

  7. Traffic noise reduces foraging efficiency in wild owls

    NASA Astrophysics Data System (ADS)

    Senzaki, Masayuki; Yamaura, Yuichi; Francis, Clinton D.; Nakamura, Futoshi

    2016-08-01

    Anthropogenic noise has been increasing globally. Laboratory experiments suggest that noise disrupts foraging behavior across a range of species, but to reveal the full impacts of noise, we must examine the impacts of noise on foraging behavior among species in the wild. Owls are widespread nocturnal top predators and use prey rustling sounds for localizing prey when hunting. We conducted field experiments to examine the effect of traffic noise on owls’ ability to detect prey. Results suggest that foraging efficiency declines with increasing traffic noise levels due to acoustic masking and/or distraction and aversion to traffic noise. Moreover, we estimate that effects of traffic noise on owls’ ability to detect prey reach >120 m from a road, which is larger than the distance estimated from captive studies with bats. Our study provides the first evidence that noise reduces foraging efficiency in wild animals, and highlights the possible pervasive impacts of noise.

  8. Traffic noise reduces foraging efficiency in wild owls.

    PubMed

    Senzaki, Masayuki; Yamaura, Yuichi; Francis, Clinton D; Nakamura, Futoshi

    2016-08-18

    Anthropogenic noise has been increasing globally. Laboratory experiments suggest that noise disrupts foraging behavior across a range of species, but to reveal the full impacts of noise, we must examine the impacts of noise on foraging behavior among species in the wild. Owls are widespread nocturnal top predators and use prey rustling sounds for localizing prey when hunting. We conducted field experiments to examine the effect of traffic noise on owls' ability to detect prey. Results suggest that foraging efficiency declines with increasing traffic noise levels due to acoustic masking and/or distraction and aversion to traffic noise. Moreover, we estimate that effects of traffic noise on owls' ability to detect prey reach >120 m from a road, which is larger than the distance estimated from captive studies with bats. Our study provides the first evidence that noise reduces foraging efficiency in wild animals, and highlights the possible pervasive impacts of noise.

  9. Speech Abilities in Preschool Children with Speech Sound Disorder with and without Co-Occurring Language Impairment

    ERIC Educational Resources Information Center

    Macrae, Toby; Tyler, Ann A.

    2014-01-01

    Purpose: The authors compared preschool children with co-occurring speech sound disorder (SSD) and language impairment (LI) to children with SSD only in their numbers and types of speech sound errors. Method: In this post hoc quasi-experimental study, independent samples t tests were used to compare the groups in the standard score from different…

  10. The Effect of Music and Sound Effects on the Listening Comprehension of Fourth Grade Students.

    ERIC Educational Resources Information Center

    Mann, Raymond E.

    This study was designed to determine if the addition of music and sound effects to recorded stories increased the comprehension and retention of information presented on tape to fourth grade students at three levels of reading ability. Two versions of four narrated stories were recorded, one version with music and sound effects, the other with…

  11. Analysis of Flame Extinguishment and Height in Low Frequency Acoustically Excited Methane Jet Diffusion Flame

    NASA Astrophysics Data System (ADS)

    Zong, Ruowen; Kang, Ruxue; Liu, Chen; Zhang, Zhiyang; Zhi, Youran

    2018-01-01

    The exploration of microgravity conditions in space is increasing, and existing fire extinguishing technology is often inadequate for fire safety in this special environment. As a result, improving the efficiency of portable extinguishers is of growing importance. In this work, a visual study of the effects of low frequency sound waves on methane jet diffusion flames is conducted to assess the extinguishing ability of sound waves. With a small-scale sound wave extinguishing bench, the extinguishing ability of certain frequencies of sound waves is identified, and the response of the flame height is observed and analyzed. Results show that the flame structure changes with disturbance due to low frequency sound waves of 60-100 Hz, and quenches at effective frequencies in the range of 60-90 Hz. In this range, 60 Hz is considered to be the quick extinguishing frequency, while 70-90 Hz is the stable extinguishing frequency range. For a fixed frequency, the flame height decreases with increasing sound pressure level (SPL). The flame height exhibits the greatest sensitivity to the 60 Hz acoustic waves, and the least to the 100 Hz acoustic waves. The flame height decreases almost identically under disturbance by 70-90 Hz acoustic waves.

  12. Analysis of Flame Extinguishment and Height in Low Frequency Acoustically Excited Methane Jet Diffusion Flame

    NASA Astrophysics Data System (ADS)

    Zong, Ruowen; Kang, Ruxue; Liu, Chen; Zhang, Zhiyang; Zhi, Youran

    2018-05-01

    The exploration of microgravity conditions in space is increasing, and existing fire extinguishing technology is often inadequate for fire safety in this special environment. As a result, improving the efficiency of portable extinguishers is of growing importance. In this work, a visual study of the effects of low frequency sound waves on methane jet diffusion flames is conducted to assess the extinguishing ability of sound waves. With a small-scale sound wave extinguishing bench, the extinguishing ability of certain frequencies of sound waves is identified, and the response of the flame height is observed and analyzed. Results show that the flame structure changes with disturbance due to low frequency sound waves of 60-100 Hz, and quenches at effective frequencies in the range of 60-90 Hz. In this range, 60 Hz is considered to be the quick extinguishing frequency, while 70-90 Hz is the stable extinguishing frequency range. For a fixed frequency, the flame height decreases with increasing sound pressure level (SPL). The flame height exhibits the greatest sensitivity to the 60 Hz acoustic waves, and the least to the 100 Hz acoustic waves. The flame height decreases almost identically under disturbance by 70-90 Hz acoustic waves.

  13. Understanding the neurophysiological basis of auditory abilities for social communication: a perspective on the value of ethological paradigms.

    PubMed

    Bennur, Sharath; Tsunada, Joji; Cohen, Yale E; Liu, Robert C

    2013-11-01

    Acoustic communication between animals requires them to detect, discriminate, and categorize conspecific or heterospecific vocalizations in their natural environment. Laboratory studies of the auditory-processing abilities that facilitate these tasks have typically employed a broad range of acoustic stimuli, ranging from natural sounds like vocalizations to "artificial" sounds like pure tones and noise bursts. However, even when using vocalizations, laboratory studies often test abilities like categorization in relatively artificial contexts. Consequently, it is not clear whether neural and behavioral correlates of these tasks (1) reflect extensive operant training, which drives plastic changes in auditory pathways, or (2) the innate capacity of the animal and its auditory system. Here, we review a number of recent studies which suggest that adopting more ethological paradigms utilizing natural communication contexts is scientifically important for elucidating how the auditory system normally processes and learns communication sounds. Additionally, since learning the meaning of communication sounds generally involves social interactions that engage neuromodulatory systems differently than laboratory-based conditioning paradigms, we argue that scientists need to pursue more ethological approaches to more fully inform our understanding of how the auditory system is engaged during acoustic communication. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives".

  14. Psychophysics and Neuronal Bases of Sound Localization in Humans

    PubMed Central

    Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.

    2013-01-01

    Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory “where” pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. PMID:23886698

  15. Approaches to the study of neural coding of sound source location and sound envelope in real environments

    PubMed Central

    Kuwada, Shigeyuki; Bishop, Brian; Kim, Duck O.

    2012-01-01

    The major functions of the auditory system are recognition (what is the sound) and localization (where is the sound). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound location in real environments and ignored the effects of reverberation. Another ignored dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space (VAS) method to efficiently present sounds at different azimuths, different distances and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and alteration of acoustic cues singly or in combination to elucidate neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound field stimulation. Our research is designed to address the following questions: Are inferior colliculus (IC) neurons specialized to process what and where auditory information? How do reverberation and distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are neural mechanisms underlying envelope processing binaural or monaural? PMID:22754505

  16. Sound Science: Taking Action with Acoustics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinha, Dipen

    2014-07-16

    From tin whistles to sonic booms, sound waves interact with each other and with the medium through which they travel. By observing these interactions, we can identify substances that are hidden in sealed containers and obtain images of buried objects. By manipulating the ability of sound to push matter around, we can create novel structures and unique materials. Join the Lab's own sound hound, Dipen Sinha, as he describes how he uses fundamental research in acoustics for solving problems in industry, security and health.

  17. Sound Science: Taking Action with Acoustics

    ScienceCinema

    Sinha, Dipen

    2018-01-16

    From tin whistles to sonic booms, sound waves interact with each other and with the medium through which they travel. By observing these interactions, we can identify substances that are hidden in sealed containers and obtain images of buried objects. By manipulating the ability of sound to push matter around, we can create novel structures and unique materials. Join the Lab's own sound hound, Dipen Sinha, as he describes how he uses fundamental research in acoustics for solving problems in industry, security and health.

  18. New Stethoscope With Extensible Diaphragm.

    PubMed

    Takashina, Tsunekazu; Shimizu, Masashi; Muratake, Torakazu; Mayuzumi, Syuichi

    2016-08-25

    This study compared the diagnostic efficacy of the common suspended diaphragm stethoscope (SDS) with a new extensible diaphragm stethoscope (EDS) for low-frequency heart sounds. The EDS was developed by using an ethylene propylene diene monomer diaphragm. The results showed that the EDS enhanced both the volume and quality of low-frequency heart sounds, and improved the ability of examiners to auscultate such heart sounds. Based on the results of the sound analysis, the EDS is more efficient than the SDS. (Circ J 2016; 80: 2047-2049).

  19. Hearing ability in three clownfish species.

    PubMed

    Parmentier, Eric; Colleye, Orphal; Mann, David

    2009-07-01

    Clownfish live in social groups in which there is a size-based dominance hierarchy. In such a context, sonic cues could play a role in social organisation because the dominant frequency and pulse length of sounds are strongly correlated with fish size. Data on the hearing ability of these fish are, however, needed to show that they have the sensory ability to detect the frequencies in their sounds. The present study determines the hearing sensitivity of three different anemonefish species (Amphiprion frenatus, Amphiprion ocellaris and Amphiprion clarkii), and compares it with the frequencies in their calls. The frequency range over which the three species could detect sounds was 75 to 1800 Hz, and they were most sensitive to frequencies below 200 Hz. During sound production, dominant frequency is clearly related (R = 0.95) to fish size, whatever the species, and extends from 370 to 900 Hz for specimens between 55 and 130 mm in size. The frequencies of best hearing sensitivity in small specimens were found to be lower than the dominant frequency of their own calls, but close to the dominant frequency of larger fishes' calls. For juveniles, the value of this sensitivity lies in localising adults, and thus their location on the reef.

  20. Sound-direction identification with bilateral cochlear implants.

    PubMed

    Neuman, Arlene C; Haravon, Anita; Sislian, Nicole; Waltzman, Susan B

    2007-02-01

    The purpose of this study was to compare the accuracy of sound-direction identification in the horizontal plane by bilateral cochlear implant users when localization was measured with pink noise and with speech stimuli. Eight adults who were bilateral users of Nucleus 24 Contour devices participated in the study. All had received implants in both ears in a single surgery. Sound-direction identification was measured in a large classroom by using a nine-loudspeaker array. Localization was tested in three listening conditions (bilateral cochlear implants, left cochlear implant, and right cochlear implant), using two different stimuli (a speech stimulus and pink noise bursts) in a repeated-measures design. Sound-direction identification accuracy was significantly better when using two implants than when using a single implant. The mean root-mean-square error was 29 degrees for the bilateral condition, 54 degrees for the left cochlear implant, and 46.5 degrees for the right cochlear implant condition. Unilateral accuracy was similar for right cochlear implant and left cochlear implant performance. Sound-direction identification performance was similar for speech and pink noise stimuli. The data obtained in this study add to the growing body of evidence that sound-direction identification with bilateral cochlear implants is better than with a single implant. The similarity in localization performance obtained with the speech and pink noise supports the use of either stimulus for measuring sound-direction identification.
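
    The root-mean-square error metric quoted above can be illustrated with a short sketch (hypothetical target and response angles, not the study's data):

    ```python
    import math

    def rms_error_deg(targets, responses):
        """RMS of the signed response-minus-target azimuth errors, in degrees."""
        errors = [r - t for t, r in zip(targets, responses)]
        return math.sqrt(sum(e * e for e in errors) / len(errors))

    # Hypothetical azimuths for a nine-loudspeaker arc and one listener's responses:
    targets = [-80, -60, -40, -20, 0, 20, 40, 60, 80]
    responses = [-60, -70, -20, -40, 10, 0, 60, 40, 80]
    print(round(rms_error_deg(targets, responses), 1))  # -> 17.0
    ```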

  1. Gravitoinertial force magnitude and direction influence head-centric auditory localization

    NASA Technical Reports Server (NTRS)

    DiZio, P.; Held, R.; Lackner, J. R.; Shinn-Cunningham, B.; Durlach, N.

    2001-01-01

    We measured the influence of gravitoinertial force (GIF) magnitude and direction on head-centric auditory localization to determine whether a true audiogravic illusion exists. In experiment 1, supine subjects adjusted computer-generated dichotic stimuli until they heard a fused sound straight ahead in the midsagittal plane of the head under a variety of GIF conditions generated in a slow-rotation room. The dichotic stimuli were constructed by convolving broadband noise with head-related transfer function pairs that model the acoustic filtering at the listener's ears. These stimuli give rise to the perception of externally localized sounds. When the GIF was increased from 1 to 2 g and rotated 60 degrees rightward relative to the head and body, subjects on average set an acoustic stimulus 7.3 degrees right of their head's median plane to hear it as straight ahead. When the GIF was doubled and rotated 60 degrees leftward, subjects set the sound 6.8 degrees leftward of baseline values to hear it as centered. In experiment 2, increasing the GIF in the median plane of the supine body to 2 g did not influence auditory localization. In experiment 3, tilts up to 75 degrees of the supine body relative to the normal 1 g GIF led to small shifts, 1--2 degrees, of auditory setting toward the up ear to maintain a head-centered sound localization. These results show that head-centric auditory localization is affected by azimuthal rotation and increase in magnitude of the GIF and demonstrate that an audiogravic illusion exists. Sound localization is shifted in the direction opposite GIF rotation by an amount related to the magnitude of the GIF and its angular deviation relative to the median plane.
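
    The dichotic-stimulus construction described above, convolving broadband noise with an HRTF pair, can be sketched generically as follows (the placeholder impulse responses below stand in for measured HRTFs):

    ```python
    import numpy as np

    def dichotic_stimulus(noise, hrir_left, hrir_right):
        """Convolve one noise token with a left/right head-related impulse
        response pair; returns a (samples, 2) binaural signal."""
        left = np.convolve(noise, hrir_left)
        right = np.convolve(noise, hrir_right)
        return np.stack([left, right], axis=1)

    rng = np.random.default_rng(0)
    noise = rng.standard_normal(48000)        # 1 s of broadband noise at 48 kHz
    hrir_l = rng.standard_normal(256) * 0.01  # placeholder HRIRs; real ones come
    hrir_r = rng.standard_normal(256) * 0.01  # from a measured HRTF database
    stimulus = dichotic_stimulus(noise, hrir_l, hrir_r)
    ```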

  2. 77 FR 23119 - Annual Marine Events in the Eighth Coast Guard District, Smoking the Sound; Biloxi Ship Channel...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-18

    ... Marine Events in the Eighth Coast Guard District, Smoking the Sound; Biloxi Ship Channel; Biloxi, MS... enforce Special Local Regulations for the Smoking the Sound boat races in the Biloxi Ship Channel, Biloxi... during the Smoking the Sound boat races. During the enforcement period, entry into, transiting or...

  3. Investigation of spherical loudspeaker arrays for local active control of sound.

    PubMed

    Peleg, Tomer; Rafaely, Boaz

    2011-10-01

    Active control of sound can be employed globally to reduce noise levels in an entire enclosure, or locally around a listener's head. Recently, spherical loudspeaker arrays have been studied as multiple-channel sources for local active control of sound, presenting the fundamental theory and several active control configurations. In this paper, important aspects of using a spherical loudspeaker array for local active control of sound are further investigated. First, the feasibility of creating sphere-shaped quiet zones away from the source is studied both theoretically and numerically, showing that these quiet zones are associated with sound amplification and poor system robustness. To mitigate the latter, the design of shell-shaped quiet zones around the source is investigated. A combination of two spherical sources is then studied with the aim of enlarging the quiet zone. The two sources are employed to generate quiet zones that surround a rigid sphere, investigating the application of active control around a listener's head. A significant improvement in performance is demonstrated in this case over a conventional headrest-type system that uses two monopole secondary sources. Finally, several simulations are presented to support the theoretical work and to demonstrate the performance and limitations of the system.
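
    The idea of a local quiet zone can be made concrete with a toy single-point sketch (a generic free-field monopole model, not the paper's spherical-array formulation): the secondary source is driven so that it exactly cancels the primary pressure at one control point.

    ```python
    import numpy as np

    def monopole_transfer(r, k):
        """Free-field monopole transfer function exp(-jkr) / (4*pi*r)."""
        return np.exp(-1j * k * r) / (4 * np.pi * r)

    f, c = 250.0, 343.0                # frequency (Hz) and speed of sound (m/s)
    k = 2 * np.pi * f / c              # wavenumber
    p_primary = monopole_transfer(3.0, k)    # primary source 3 m from the point
    g_secondary = monopole_transfer(0.3, k)  # secondary source 0.3 m away
    q = -p_primary / g_secondary             # driving signal for cancellation
    print(abs(p_primary + q * g_secondary))  # residual pressure -> ~0
    ```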

  4. Ray-based acoustic localization of cavitation in a highly reverberant environment.

    PubMed

    Chang, Natasha A; Dowling, David R

    2009-05-01

    Acoustic detection and localization of cavitation have inherent advantages over optical techniques because cavitation bubbles are natural sound sources, and acoustic transduction of cavitation sounds does not require optical access to the region of cavitating flow. In particular, near cavitation inception, cavitation bubbles may be visually small and occur infrequently, but may still emit audible sound pulses. In this investigation, direct-path acoustic recordings of cavitation events are made with 16 hydrophones mounted on the periphery of a water tunnel test section containing a low-cavitation-event-rate vortical flow. These recordings are used to localize the events in three dimensions via cross correlations to obtain arrival time differences. Here, bubble localization is hindered by reverberation, background noise, and the fact that both the pulse emission time and waveform are unknown. These hindrances are partially mitigated by a signal-processing scheme that incorporates straight-ray acoustic propagation and Monte-Carlo techniques for compensating ray-path, sound-speed, and hydrophone-location uncertainties. The acoustic localization results are compared to simultaneous optical localization results from dual-camera high-speed digital-video recordings. For 53 bubbles and a peak-signal to noise ratio frequency of 6.7 kHz, the root-mean-square spatial difference between optical and acoustic bubble location results was 1.94 cm. Parametric dependences in acoustic localization performance are also presented.
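
    The arrival-time-difference step described above reduces, in its simplest form, to locating the peak of a cross-correlation between two hydrophone recordings (a generic sketch with synthetic signals, not the study's processing chain):

    ```python
    import numpy as np

    def arrival_time_difference(x, y, fs):
        """Delay of y relative to x, in seconds, from the cross-correlation peak."""
        corr = np.correlate(y, x, mode="full")
        lag = np.argmax(corr) - (len(x) - 1)  # lag in samples
        return lag / fs

    fs = 200_000                               # assumed 200 kHz sampling rate
    t = np.arange(0, 0.002, 1 / fs)
    pulse = np.sin(2 * np.pi * 6700 * t) * np.exp(-t / 2e-4)  # ~6.7 kHz burst
    x = np.concatenate([np.zeros(50), pulse])
    y = np.concatenate([np.zeros(80), pulse])  # same pulse, 30 samples later
    print(arrival_time_difference(x, y, fs))   # -> 1.5e-4 (30 / fs)
    ```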

  5. Atypical vertical sound localization and sound-onset sensitivity in people with autism spectrum disorders.

    PubMed

    Visser, Eelke; Zwiers, Marcel P; Kan, Cornelis C; Hoekstra, Liesbeth; van Opstal, A John; Buitelaar, Jan K

    2013-11-01

    Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs.

  6. Sound stream segregation: a neuromorphic approach to solve the “cocktail party problem” in real-time

    PubMed Central

    Thakur, Chetan Singh; Wang, Runchun M.; Afshar, Saeed; Hamilton, Tara J.; Tapson, Jonathan C.; Shamma, Shihab A.; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the “cocktail party effect.” It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation and speech recognition. PMID:26388721

  7. Sound stream segregation: a neuromorphic approach to solve the "cocktail party problem" in real-time.

    PubMed

    Thakur, Chetan Singh; Wang, Runchun M; Afshar, Saeed; Hamilton, Tara J; Tapson, Jonathan C; Shamma, Shihab A; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the "cocktail party effect." It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation and speech recognition.
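
    A toy sketch of the temporal-coherence grouping principle (an assumed illustration, not the authors' FPGA implementation): band envelopes that are strongly positively correlated with an attended channel are assigned to the foreground stream, while uncorrelated or anti-correlated channels are treated as background.

    ```python
    import numpy as np

    def coherence_mask(envelopes, attended, threshold=0.5):
        """envelopes: (channels, time) array of band envelopes. Returns a
        boolean mask of channels coherent with the attended channel."""
        ref = envelopes[attended]
        corr = np.array([np.corrcoef(ref, env)[0, 1] for env in envelopes])
        return corr > threshold

    rng = np.random.default_rng(0)
    mod_a = rng.random(1000)   # modulation shared by source A's channels
    mod_b = rng.random(1000)   # modulation shared by source B's channels
    envelopes = np.stack([mod_a, 0.8 * mod_a, mod_b, 0.6 * mod_b])
    print(coherence_mask(envelopes, attended=0))  # -> [ True  True False False]
    ```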

  8. Effect of sound level on virtual and free-field localization of brief sounds in the anterior median plane.

    PubMed

    Marmel, Frederic; Marrufo-Pérez, Miriam I; Heeren, Jan; Ewert, Stephan; Lopez-Poveda, Enrique A

    2018-06-14

    The detection of high-frequency spectral notches has been shown to be worse at 70-80 dB sound pressure level (SPL) than at higher levels up to 100 dB SPL. The performance improvement at levels higher than 70-80 dB SPL has been related to an 'ideal observer' comparison of population auditory nerve spike trains to stimuli with and without high-frequency spectral notches. Insofar as vertical localization partly relies on information provided by pinna-based high-frequency spectral notches, we hypothesized that localization would be worse at 70-80 dB SPL than at higher levels. Results from a first experiment using a virtual localization set-up and non-individualized head-related transfer functions (HRTFs) were consistent with this hypothesis, but a second experiment using a free-field set-up showed that vertical localization deteriorates monotonically with increasing level up to 100 dB SPL. These results suggest that listeners use different cues when localizing sound sources in virtual and free-field conditions. In addition, they confirm that the worsening in vertical localization with increasing level continues beyond 70-80 dB SPL, the highest levels tested by previous studies. Further, they suggest that vertical localization, unlike high-frequency spectral notch detection, does not rely on an 'ideal observer' analysis of auditory nerve spike trains.

  9. Localization of virtual sound at 4 Gz.

    PubMed

    Sandor, Patrick M B; McAnally, Ken I; Pellieux, Lionel; Martin, Russell L

    2005-02-01

    Acceleration directed along the body's z-axis (Gz) leads to misperception of the elevation of visual objects (the "elevator illusion"), most probably as a result of errors in the transformation from eye-centered to head-centered coordinates. We have investigated whether the location of sound sources is misperceived under increased Gz. Visually guided localization responses were made, using a remotely controlled laser pointer, to virtual auditory targets under conditions of 1 and 4 Gz induced in a human centrifuge. As these responses would be expected to be affected by the elevator illusion, we also measured the effect of Gz on the accuracy with which subjects could point to the horizon. Horizon judgments were lower at 4 Gz than at 1 Gz, so sound localization responses at 4 Gz were corrected for this error in the transformation from eye-centered to head-centered coordinates. We found that the accuracy and bias of sound localization are not significantly affected by increased Gz. The auditory modality is likely to provide a reliable means of conveying spatial information to operators in dynamic environments in which Gz can vary.

  10. Statistics of natural reverberation enable perceptual separation of sound and space

    PubMed Central

    Traer, James; McDermott, Josh H.

    2016-01-01

    In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us. PMID:27834730

  11. Statistics of natural reverberation enable perceptual separation of sound and space.

    PubMed

    Traer, James; McDermott, Josh H

    2016-11-29

    In everyday listening, sound reaches our ears directly from a source as well as indirectly via reflections known as reverberation. Reverberation profoundly distorts the sound from a source, yet humans can both identify sound sources and distinguish environments from the resulting sound, via mechanisms that remain unclear. The core computational challenge is that the acoustic signatures of the source and environment are combined in a single signal received by the ear. Here we ask whether our recognition of sound sources and spaces reflects an ability to separate their effects and whether any such separation is enabled by statistical regularities of real-world reverberation. To first determine whether such statistical regularities exist, we measured impulse responses (IRs) of 271 spaces sampled from the distribution encountered by humans during daily life. The sampled spaces were diverse, but their IRs were tightly constrained, exhibiting exponential decay at frequency-dependent rates: Mid frequencies reverberated longest whereas higher and lower frequencies decayed more rapidly, presumably due to absorptive properties of materials and air. To test whether humans leverage these regularities, we manipulated IR decay characteristics in simulated reverberant audio. Listeners could discriminate sound sources and environments from these signals, but their abilities degraded when reverberation characteristics deviated from those of real-world environments. Subjectively, atypical IRs were mistaken for sound sources. The results suggest the brain separates sound into contributions from the source and the environment, constrained by a prior on natural reverberation. This separation process may contribute to robust recognition while providing information about spaces around us.
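
    The decay regularity reported above can be illustrated by synthesizing a toy impulse response whose subbands decay exponentially at frequency-dependent rates, mid frequencies slowest (assumed parameters, not the paper's measured IRs):

    ```python
    import numpy as np

    def toy_impulse_response(fs=16000, duration=1.0,
                             bands=((125, 0.3), (1000, 1.0), (6000, 0.4))):
        """bands: (center frequency in Hz, RT60 in s). Returns the sum of
        noise-modulated carriers, each decaying by 60 dB over its RT60."""
        t = np.arange(int(fs * duration)) / fs
        rng = np.random.default_rng(0)
        ir = np.zeros_like(t)
        for fc, rt60 in bands:
            carrier = np.sin(2 * np.pi * fc * t)
            decay = 10 ** (-3 * t / rt60)  # -60 dB of amplitude after rt60 s
            ir += rng.standard_normal(t.size) * carrier * decay
        return ir / np.max(np.abs(ir))

    ir = toy_impulse_response()
    ```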

  12. Influence of double stimulation on sound-localization behavior in barn owls.

    PubMed

    Kettler, Lutz; Wagner, Hermann

    2014-12-01

    Barn owls do not immediately approach a source after they hear a sound, but wait for a second sound before they strike. This represents a gain in striking behavior by avoiding responses to random incidents. However, the first stimulus is also expected to change the threshold for perceiving the subsequent second sound, thus possibly introducing some costs. We mimicked this situation in a behavioral double-stimulus paradigm utilizing saccadic head turns of owls. The first stimulus served as an adapter, was presented in frontal space, and did not elicit a head turn. The second stimulus, emitted from a peripheral source, elicited the head turn. The time interval between both stimuli was varied. Data obtained with double stimulation were compared with data collected with a single stimulus from the same positions as the second stimulus in the double-stimulus paradigm. Sound-localization performance was quantified by the response latency, accuracy, and precision of the head turns. Response latency was increased with double stimuli, while accuracy and precision were decreased. The effect depended on the inter-stimulus interval. These results suggest that waiting for a second stimulus may indeed impose costs on sound localization by adaptation and this reduces the gain obtained by waiting for a second stimulus.

  13. Cross-Modal Correspondences Enhance Performance on a Colour-to-Sound Sensory Substitution Device.

    PubMed

    Hamilton-Fletcher, Giles; Wright, Thomas D; Ward, Jamie

    Visual sensory substitution devices (SSDs) can represent visual characteristics through distinct patterns of sound, allowing a visually impaired user access to visual information. Previous SSDs have generally avoided colour, and those that do encode it have assigned sounds to colours in a largely unprincipled way. This study introduces a new tablet-based SSD termed the ‘Creole’ (so called because it combines tactile scanning with image sonification) and a new algorithm for converting colour to sound that is based on established cross-modal correspondences (intuitive mappings between different sensory dimensions). To test the utility of correspondences, we examined the colour–sound associative memory and object recognition abilities of sighted users whose device was coded either in line with or opposite to sound–colour correspondences. Users with the correspondence-based mappings showed improved colour memory and made fewer colour errors. Interestingly, the colour–sound mappings that provided the greatest improvements during the associative memory task also produced the greatest gains in recognising realistic objects featuring those colours, indicating a transfer of abilities from memory to recognition. These users were also marginally better at matching sounds to images varying in luminance, even though luminance was coded identically across the different versions of the device. These findings are discussed with relevance to both colour and correspondences in sensory substitution use.

  14. Engaging teachers & students in geosciences by exploring local geoheritage sites

    NASA Astrophysics Data System (ADS)

    Gochis, E. E.; Gierke, J. S.

    2014-12-01

    Understanding geoscience concepts and the interactions of Earth system processes in one's own community has the potential to foster sound decision making for environmental, economic and social wellbeing. School-age children are an appropriate target audience for improving Earth Science literacy and attitudes towards scientific practices. However, many teachers charged with geoscience instruction lack awareness of significant local geological examples or the pedagogical ability to integrate place-based examples into their classroom practice. This situation is further complicated because many teachers of Earth science lack a firm background in geoscience coursework. Strategies for effective K-12 teacher professional development programs that promote Earth Science literacy by integrating inquiry-based investigations of local and regional geoheritage sites into standards-based curricula were developed and tested with teachers at a rural school on the Hannahville Indian Reservation located in Michigan's Upper Peninsula. The workshops initiated long-term partnerships between classroom teachers and geoscience experts. We hypothesize that this model of professional development, where teachers of school-age children are prepared to teach local examples of earth system science, will lead to increased engagement in Earth Science content and increased awareness of local geoscience examples by K-12 students and the public.

  15. Modeling of reverberant room responses for two-dimensional spatial sound field analysis and synthesis.

    PubMed

    Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao

    2017-10-01

    A unified framework is proposed for analysis and synthesis of two-dimensional spatial sound field in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on the plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two manners. For sparse-source scenarios, a one-stage algorithm based on compressive sensing algorithm is utilized. Alternatively, a two-stage algorithm can be used, where the minimum power distortionless response beamformer is used to localize the sources and Tikhonov regularization algorithm is used to extract the source amplitudes. In the sound field synthesis (SFS), a 32-element rectangular loudspeaker array is employed to decode the target sound field using pressure matching technique. To establish the room response model, as required in the pressure matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. Choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.
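
    The Tikhonov-regularized pressure-matching step named above amounts, in a generic formulation, to a ridge-regularized least-squares solve for the loudspeaker weights (the transfer matrix below is random placeholder data, not a measured room response):

    ```python
    import numpy as np

    def pressure_matching(G, p, lam=1e-2):
        """Return weights w minimizing ||G w - p||^2 + lam * ||w||^2,
        where G maps loudspeaker weights to pressures at control points."""
        GH = G.conj().T
        return np.linalg.solve(GH @ G + lam * np.eye(G.shape[1]), GH @ p)

    rng = np.random.default_rng(0)
    G = rng.standard_normal((24, 32)) + 1j * rng.standard_normal((24, 32))
    p = rng.standard_normal(24) + 1j * rng.standard_normal(24)
    w = pressure_matching(G, p)
    print(np.linalg.norm(G @ w - p) / np.linalg.norm(p))  # relative match error
    ```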

  16. First and second sound in a strongly interacting Fermi gas

    NASA Astrophysics Data System (ADS)

    Taylor, E.; Hu, H.; Liu, X.-J.; Pitaevskii, L. P.; Griffin, A.; Stringari, S.

    2009-11-01

    Using a variational approach, we solve the equations of two-fluid hydrodynamics for a uniform and trapped Fermi gas at unitarity. In the uniform case, we find that the first and second sound modes are remarkably similar to those in superfluid helium, a consequence of strong interactions. In the presence of harmonic trapping, first and second sound become degenerate at certain temperatures. At these points, second sound hybridizes with first sound and is strongly coupled with density fluctuations, giving a promising way of observing second sound. We also discuss the possibility of exciting second sound by generating local heat perturbations.
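
    For orientation, the textbook Landau two-fluid sound speeds in the decoupled limit are quoted below; these are standard results, not taken from the paper itself, which solves the full coupled hydrodynamic equations variationally. The coupling between the two modes is what produces the hybridization described above.

    ```latex
    % Landau two-fluid sound speeds, decoupled limit (standard results):
    c_1^2 \simeq \left(\frac{\partial P}{\partial \rho}\right)_{\bar{s}},
    \qquad
    c_2^2 \simeq \frac{\rho_s}{\rho_n}\,\frac{T\,\bar{s}^{\,2}}{\bar{c}_v},
    ```

    where \(\bar{s}\) and \(\bar{c}_v\) are the entropy and specific heat per unit mass, and \(\rho_s\), \(\rho_n\) are the superfluid and normal densities.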

  17. NASA sounding rockets, 1958 - 1968: A historical summary

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1971-01-01

    The development and use of sounding rockets is traced from the Wac Corporal through the present generation of rockets. The Goddard Space Flight Center Sounding Rocket Program is discussed, and the use of sounding rockets during the IGY and the 1960's is described. Advantages of sounding rockets are identified as their vehicle and payload simplicity, low cost, payload recoverability, and geographic and temporal flexibility. The disadvantages are restricted time of observation, localized coverage, and payload limitations. Descriptions of major sounding rockets, trends in vehicle usage, and a compendium of NASA sounding rocket firings are also included.

  18. Visual influences on auditory spatial learning

    PubMed Central

    King, Andrew J.

    2008-01-01

    The visual and auditory systems frequently work together to facilitate the identification and localization of objects and events in the external world. Experience plays a critical role in establishing and maintaining congruent visual–auditory associations, so that the different sensory cues associated with targets that can be both seen and heard are synthesized appropriately. For stimulus location, visual information is normally more accurate and reliable and provides a reference for calibrating the perception of auditory space. During development, vision plays a key role in aligning neural representations of space in the brain, as revealed by the dramatic changes produced in auditory responses when visual inputs are altered, and is used throughout life to resolve short-term spatial conflicts between these modalities. However, accurate, and even supra-normal, auditory localization abilities can be achieved in the absence of vision, and the capacity of the mature brain to relearn to localize sound in the presence of substantially altered auditory spatial cues does not require visuomotor feedback. Thus, while vision is normally used to coordinate information across the senses, the neural circuits responsible for spatial hearing can be recalibrated in a vision-independent fashion. Nevertheless, early multisensory experience appears to be crucial for the emergence of an ability to match signals from different sensory modalities and therefore for the outcome of audiovisual-based rehabilitation of deaf patients in whom hearing has been restored by cochlear implantation. PMID:18986967

  19. Sex-Specific Differences in Agonistic Behaviour, Sound Production and Auditory Sensitivity in the Callichthyid Armoured Catfish Megalechis thoracata

    PubMed Central

    Hadjiaghai, Oliwia; Ladich, Friedrich

    2015-01-01

    Background Data on sex-specific differences in sound production, acoustic behaviour and hearing abilities in fishes are rare. Representatives of numerous catfish families are known to produce sounds in agonistic contexts (intraspecific aggression and interspecific disturbance situations) using their pectoral fins. The present study investigates differences in agonistic behaviour, sound production and hearing abilities in males and females of a callichthyid catfish. Methodology/Principal Findings Eight males and nine females of the armoured catfish Megalechis thoracata were investigated. Agonistic behaviour displayed during male-male and female-female dyadic contests and sounds emitted were recorded, sound characteristics analysed and hearing thresholds measured using the auditory evoked potential (AEP) recording technique. Male pectoral spines were on average 1.7-fold longer than those of same-sized females. Visual and acoustic threat displays differed between sexes. Males produced low-frequency harmonic barks at longer distances and thumps at close distances, whereas females emitted broad-band pulsed crackles when close to each other. Female aggressive sounds were significantly shorter than those of males (167 ms versus 219 to 240 ms) and of higher dominant frequency (562 Hz versus 132 to 403 Hz). Sound duration and sound level were positively correlated with body and pectoral spine length, but dominant frequency was inversely correlated only to spine length. Both sexes showed a similar U-shaped hearing curve with lowest thresholds between 0.2 and 1 kHz and a drop in sensitivity above 1 kHz. The main energies of sounds were located at the most sensitive frequencies. Conclusions/Significance Current data demonstrate that both male and female M. thoracata produce aggressive sounds, but the behavioural contexts and sound characteristics differ between sexes. Sexes do not differ in hearing, but it remains to be clarified if this is a general pattern among fish. This is the first study to describe sex-specific differences in agonistic behaviour in fishes. PMID:25775458

  20. Aural localization of silent objects by active human biosonar: neural representations of virtual echo-acoustic space.

    PubMed

    Wallmeier, Ludwig; Kish, Daniel; Wiegrebe, Lutz; Flanagin, Virginia L

    2015-03-01

    Some blind humans have developed the remarkable ability to detect and localize objects through the auditory analysis of self-generated tongue clicks. These echolocation experts show a corresponding increase in 'visual' cortex activity when listening to echo-acoustic sounds. Echolocation in real-life settings involves multiple reflections as well as active sound production, neither of which had been systematically addressed. We developed a virtualization technique that allows participants to actively perform such biosonar tasks in virtual echo-acoustic space during magnetic resonance imaging (MRI). Tongue clicks, emitted in the MRI scanner, are picked up by a microphone, convolved in real time with the binaural impulse responses of a virtual space, and presented via headphones as virtual echoes. In this manner, we investigated brain activity during active echo-acoustic localization tasks. Our data show that, in blind echolocation experts, activations in the calcarine cortex are dramatically enhanced when a single reflector is introduced into otherwise anechoic virtual space. A pattern-classification analysis revealed that, in the blind, calcarine cortex activation patterns could discriminate left-side from right-side reflectors. Both blind experts showed this pattern, although the effect reached significance in only one of them. In sighted controls, 'visual' cortex activations were insignificant, but activation patterns in the planum temporale were sufficient to discriminate left-side from right-side reflectors. Our data suggest that blind experts and echolocation-trained sighted subjects may recruit different neural substrates for the same active-echolocation task. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
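
    The core of the virtualization technique, convolving each emitted click with the binaural room impulse responses (BRIRs) of the virtual space, can be sketched in a few lines. The BRIRs and click below are synthetic placeholders (a single reflection at about 20 ms), not the study's measured responses; in the actual setup this convolution runs in real time on the live microphone feed.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    # Offline sketch of the signal path: click -> BRIR convolution ->
    # two-channel "virtual echo" for headphone presentation.

    fs = 48000
    click = np.zeros(256); click[0] = 1.0              # idealized tongue click
    brir_left = np.zeros(4800); brir_right = np.zeros(4800)
    brir_left[960] = 0.5                               # reflection after 20 ms
    brir_right[1008] = 0.4                             # slightly later, quieter

    echo_left = fftconvolve(click, brir_left)[:len(brir_left)]
    echo_right = fftconvolve(click, brir_right)[:len(brir_right)]
    binaural_echo = np.stack([echo_left, echo_right], axis=1)  # headphone feed
    ```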

  1. The Sound Patterns of Camuno: Description and Explanation in Evolutionary Phonology

    ERIC Educational Resources Information Center

    Cresci, Michela

    2014-01-01

    This dissertation presents a linguistic study of the sound patterns of Camuno framed within Evolutionary Phonology (Blevins, 2004, 2006, to appear). Camuno is a variety of Eastern Lombard, a Romance language of northern Italy, spoken in Valcamonica. Camuno is not a local variety of Italian, but a sister of Italian, a local divergent development of…

  2. [Auditory processing evaluation in children born preterm].

    PubMed

    Gallo, Júlia; Dias, Karin Ziliotto; Pereira, Liliane Desgualdo; Azevedo, Marisa Frasson de; Sousa, Elaine Colombo

    2011-01-01

    To verify the performance of children born preterm on auditory processing evaluation, to correlate the data with the behavioral hearing assessment carried out at 12 months of age, and to compare the results with those of children born full-term. Participants were 30 children with ages between 4 and 7 years, who were divided into two groups: Group 1 (children born preterm) and Group 2 (children born full-term). The auditory processing results of Group 1 were correlated with data obtained from the behavioral auditory evaluation carried out at 12 months of age. The results were compared between groups. Subjects in Group 1 presented at least one risk indicator for hearing loss at birth. In the behavioral auditory assessment carried out at 12 months of age, 38% of the children in Group 1 were at risk for central auditory processing deficits, and 93.75% presented auditory processing deficits in the evaluation. Significant differences were found between the groups for the temporal order test, the PSI test with ipsilateral competitive message, and the speech-in-noise test. The delay in sound localization ability was associated with temporal processing deficits. Children born preterm have worse performance on auditory processing evaluation than children born full-term. Delay in sound localization at 12 months is associated with deficits in the physiological mechanism of temporal processing in the auditory processing evaluation carried out between 4 and 7 years of age.

  3. Postnatal development of echolocation abilities in a bottlenose dolphin (Tursiops truncatus): temporal organization.

    PubMed

    Favaro, Livio; Gnone, Guido; Pessani, Daniela

    2013-03-01

    In spite of all the information available on adult bottlenose dolphin (Tursiops truncatus) biosonar, the ontogeny of its echolocation abilities has been investigated very little. Earlier studies have reported that neonatal dolphins can produce both whistles and burst-pulsed sounds just after birth and that early-pulsed sounds are probably a precursor of echolocation click trains. The aim of this research is to investigate the development of echolocation signals in a captive calf, born in the facilities of the Acquario di Genova. A set of 81 impulsive sounds were collected from birth to the seventh postnatal week and six additional echolocation click trains were recorded when the dolphin was 1 year old. Moreover, behavioral observations, concurring with sound production, were carried out by means of a video camera. For each sound we measured five acoustic parameters: click train duration (CTD), number of clicks per train, minimum, maximum, and mean click repetition rate (CRR). CTD and number of clicks per train were found to increase with age. Maximum and mean CRR followed a decreasing trend with dolphin growth starting from the second postnatal week. The calf's first head scanning movement was recorded 21 days after birth. Our data suggest that in the bottlenose dolphin the early postnatal weeks are essential for the development of echolocation abilities and that the temporal features of the echolocation click trains remain relatively stable from the seventh postnatal week up to the first year of life. © 2013 Wiley Periodicals, Inc.
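
    A minimal sketch of the five temporal measurements named above, computed from the click times of a single train. Note that "mean CRR" is computed here as the mean of the instantaneous rates; the authors' exact definition may differ, and the click times are illustrative.

    ```python
    import numpy as np

    def click_train_stats(click_times):
        """Temporal statistics of one echolocation click train (times in s)."""
        click_times = np.sort(np.asarray(click_times))
        ctd = click_times[-1] - click_times[0]      # click train duration (CTD)
        icis = np.diff(click_times)                 # inter-click intervals
        rates = 1.0 / icis                          # instantaneous CRR (Hz)
        return {
            "CTD_s": ctd,
            "n_clicks": len(click_times),
            "CRR_min_Hz": rates.min(),
            "CRR_max_Hz": rates.max(),
            "CRR_mean_Hz": rates.mean(),
        }

    print(click_train_stats([0.00, 0.02, 0.05, 0.09, 0.14]))
    ```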

  4. Auditory scene analysis in school-aged children with developmental language disorders

    PubMed Central

    Sussman, E.; Steinschneider, M.; Lee, W.; Lawson, K.

    2014-01-01

    Natural sound environments are dynamic, with overlapping acoustic input originating from simultaneously active sources. A key function of the auditory system is to integrate sensory inputs that belong together and segregate those that come from different sources. We hypothesized that this skill is impaired in individuals with phonological processing difficulties. There is considerable disagreement about whether phonological impairments observed in children with developmental language disorders can be attributed to specific linguistic deficits or to more general acoustic processing deficits. However, most tests of general auditory abilities have been conducted with a single set of sounds. We assessed the ability of school-aged children (7–15 years) to parse complex auditory non-speech input, and determined whether the presence of phonological processing impairments was associated with stream perception performance. A key finding was that children with language impairments did not show the same developmental trajectory for stream perception as typically developing children. In addition, children with language impairments required larger frequency separations between sounds to hear distinct streams compared to age-matched peers. Furthermore, phonological processing ability was a significant predictor of stream perception measures, but only in the older age groups. No such association was found in the youngest children. These results indicate that children with language impairments have difficulty parsing speech streams, or identifying individual sound events when there are competing sound sources. We conclude that language group differences may in part reflect fundamental maturational disparities in the analysis of complex auditory scenes. PMID:24548430

  5. The influence of (central) auditory processing disorder in speech sound disorders.

    PubMed

    Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Vilela, Nadia; Carvallo, Renata Mota Mamede; Wertzner, Haydée Fiszbein

    2016-01-01

    Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and the targeting of speech therapy in children with speech sound disorders. To study phonological measures and (central) auditory processing in children with speech sound disorder. Clinical and experimental study, with 21 subjects with speech sound disorder aged between 7.0 and 9.11 years, divided into two groups according to the presence or absence of (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities. The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes for children above 7 years of age. The comparison of the tests between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 demonstrated a strong tendency towards presenting a (central) auditory processing disorder, and this measure was effective in indicating the need for (central) auditory processing evaluation in children with speech sound disorder. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  6. Acoustic transistor: Amplification and switch of sound by sound

    NASA Astrophysics Data System (ADS)

    Liang, Bin; Kan, Wei-wei; Zou, Xin-ye; Yin, Lei-lei; Cheng, Jian-chun

    2014-08-01

    We designed an acoustic transistor to manipulate sound in a manner similar to the manipulation of electric current by its electrical counterpart. The acoustic transistor is a three-terminal device with the essential ability to use a small monochromatic acoustic signal to control a much larger output signal within a broad frequency range. The output and controlling signals have the same frequency, suggesting the possibility of cascading the structure to amplify an acoustic signal. Capable of amplifying and switching sound by sound, acoustic transistors have various potential applications and may open the way to the design of conceptual devices such as acoustic logic gates.

  7. Energy Flux in the Cochlea: Evidence Against Power Amplification of the Traveling Wave.

    PubMed

    van der Heijden, Marcel; Versteegh, Corstiaen P C

    2015-10-01

    Traveling waves in the inner ear exhibit an amplitude peak that shifts with frequency. The peaking is commonly believed to rely on motile processes that amplify the wave by inserting energy. We recorded the vibrations at adjacent positions on the basilar membrane in sensitive gerbil cochleae and tested the putative power amplification in two ways. First, we determined the energy flux of the traveling wave at its peak and compared it to the acoustic power entering the ear, thereby obtaining the net cochlear power gain. For soft sounds, the energy flux at the peak was 1 ± 0.6 dB less than the middle ear input power. For more intense sounds, increasingly smaller fractions of the acoustic power actually reached the peak region. Thus, we found no net power amplification of soft sounds and a strong net attenuation of intense sounds. Second, we analyzed local wave propagation on the basilar membrane. We found that the waves slowed down abruptly when approaching their peak, causing an energy densification that quantitatively matched the amplitude peaking, similar to the growth of sea waves approaching the beach. Thus, we found no local power amplification of soft sounds and strong local attenuation of intense sounds. The most parsimonious interpretation of these findings is that cochlear sensitivity is not realized by amplifying acoustic energy, but by spatially focusing it, and that dynamic compression is realized by adjusting the amount of dissipation to sound intensity.

  8. When listening to rain sounds boosts arithmetic ability

    PubMed Central

    De Benedetto, Francesco; Ferrari, Maria Vittoria; Ferrarini, Giorgia

    2018-01-01

    Studies in the literature have provided conflicting evidence about the effects of background noise or music on concurrent cognitive tasks. Some studies have shown a detrimental effect, while others have shown a beneficial effect of background auditory stimuli. The aim of this study was to investigate the influence of agitating, happy or touching music, as opposed to environmental sounds or silence, on the ability of non-musician subjects to perform arithmetic operations. Fifty university students (25 women and 25 men, 25 introverts and 25 extroverts) volunteered for the study. The participants were administered 180 easy or difficult arithmetic operations (division, multiplication, subtraction and addition) while listening to heavy rain sounds, silence or classical music. Silence was detrimental when participants were faced with difficult arithmetic operations, as it was associated with significantly worse accuracy and slower RTs than music or rain sound conditions. This finding suggests that the benefit of background stimulation was not music-specific but possibly due to an enhanced cerebral alertness level induced by the auditory stimulation. Introverts were always faster than extroverts in solving mathematical problems, except when the latter performed calculations accompanied by the sound of heavy rain, a condition that made them as fast as introverts. While the background auditory stimuli had no effect on the arithmetic ability of either group in the easy condition, it strongly affected extroverts in the difficult condition, with RTs being faster during agitating or joyful music as well as rain sounds, compared to the silent condition. For introverts, agitating music was associated with faster response times than the silent condition. This group difference may be explained on the basis of the notion that introverts have a generally higher arousal level compared to extroverts and would therefore benefit less from the background auditory stimuli. PMID:29466472

  9. When listening to rain sounds boosts arithmetic ability.

    PubMed

    Proverbio, Alice Mado; De Benedetto, Francesco; Ferrari, Maria Vittoria; Ferrarini, Giorgia

    2018-01-01

    Studies in the literature have provided conflicting evidence about the effects of background noise or music on concurrent cognitive tasks. Some studies have shown a detrimental effect, while others have shown a beneficial effect of background auditory stimuli. The aim of this study was to investigate the influence of agitating, happy or touching music, as opposed to environmental sounds or silence, on the ability of non-musician subjects to perform arithmetic operations. Fifty university students (25 women and 25 men, 25 introverts and 25 extroverts) volunteered for the study. The participants were administered 180 easy or difficult arithmetic operations (division, multiplication, subtraction and addition) while listening to heavy rain sounds, silence or classical music. Silence was detrimental when participants were faced with difficult arithmetic operations, as it was associated with significantly worse accuracy and slower RTs than music or rain sound conditions. This finding suggests that the benefit of background stimulation was not music-specific but possibly due to an enhanced cerebral alertness level induced by the auditory stimulation. Introverts were always faster than extroverts in solving mathematical problems, except when the latter performed calculations accompanied by the sound of heavy rain, a condition that made them as fast as introverts. While the background auditory stimuli had no effect on the arithmetic ability of either group in the easy condition, it strongly affected extroverts in the difficult condition, with RTs being faster during agitating or joyful music as well as rain sounds, compared to the silent condition. For introverts, agitating music was associated with faster response times than the silent condition. This group difference may be explained on the basis of the notion that introverts have a generally higher arousal level compared to extroverts and would therefore benefit less from the background auditory stimuli.

  10. Light-induced vibration in the hearing organ

    PubMed Central

    Ren, Tianying; He, Wenxuan; Li, Yizeng; Grosh, Karl; Fridberger, Anders

    2014-01-01

    The exceptional sensitivity of mammalian hearing organs is attributed to an active process, where force produced by sensory cells boosts sound-induced vibrations, making soft sounds audible. This process is thought to be local, with each section of the hearing organ capable of amplifying sound-evoked movement, and nearly instantaneous, since amplification can work for sounds at frequencies up to 100 kHz in some species. To test these fundamental precepts, we developed a method for focally stimulating the living hearing organ with light. Light pulses caused intense and highly damped mechanical responses followed by traveling waves that developed with considerable delay. The delayed response was identical to movements evoked by click-like sounds. This shows that the active process is neither local nor instantaneous, but requires mechanical waves traveling from the cochlear base toward its apex. A physiologically based mathematical model shows that such waves engage the active process, enhancing hearing sensitivity. PMID:25087606

  11. Atypical vertical sound localization and sound-onset sensitivity in people with autism spectrum disorders

    PubMed Central

    Visser, Eelke; Zwiers, Marcel P.; Kan, Cornelis C.; Hoekstra, Liesbeth; van Opstal, A. John; Buitelaar, Jan K.

    2013-01-01

    Background Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. Methods We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Results Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. Limitations The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Conclusion Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs. PMID:24148845

  12. Dispatching demand response transit service maximizing productivity and service quality guidebook : final report, March 2009.

    DOT National Transportation Integrated Search

    2009-03-01

    The ability of transit agencies to staff dispatch effectively and use technology to its full advantage is critical : in responding proactively as service changes occur and in making sound routing decisions. Sound routing : decisions result in improve...

  13. Method for creating an aeronautic sound shield having gas distributors arranged on the engines, wings, and nose of an aircraft

    NASA Technical Reports Server (NTRS)

    Corda, Stephen (Inventor); Smith, Mark Stephen (Inventor); Myre, David Daniel (Inventor)

    2008-01-01

    The present invention blocks and/or attenuates the upstream travel of acoustic disturbances or sound waves from a flight vehicle or components of a flight vehicle traveling at subsonic speed using a local injection of a high molecular weight gas. Additional benefit may also be obtained by lowering the temperature of the gas. Preferably, the invention has a means of distributing the high molecular weight gas from the nose, wing, component, or other structure of the flight vehicle into the upstream or surrounding air flow. Two techniques for distribution are direct gas injection and sublimation of the high molecular weight solid material from the vehicle surface. The high molecular weight and low temperature of the gas significantly decreases the local speed of sound such that a localized region of supersonic flow and possibly shock waves are formed, preventing the upstream travel of sound waves from the flight vehicle.
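
    The underlying physics is the ideal-gas sound-speed relation, which shows why both a high molecular weight and a low temperature reduce the local speed of sound. The gas properties below are textbook values chosen for illustration; the invention does not prescribe a particular gas.

    ```python
    import numpy as np

    # Ideal-gas speed of sound: c = sqrt(gamma * R * T / M). A heavy
    # (large-M) and/or cold gas has a much lower local sound speed, so
    # flow that is subsonic in ambient air can be supersonic within the
    # injected layer, blocking upstream-traveling acoustic disturbances.

    R = 8.314  # J/(mol K)

    def speed_of_sound(gamma, molar_mass_kg, temp_k):
        return np.sqrt(gamma * R * temp_k / molar_mass_kg)

    c_air = speed_of_sound(1.40, 0.029, 293.0)   # ~343 m/s
    c_sf6 = speed_of_sound(1.10, 0.146, 293.0)   # ~135 m/s (sulfur hexafluoride)
    print(f"air: {c_air:.0f} m/s, SF6: {c_sf6:.0f} m/s")
    ```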

  14. Sound localization and word discrimination in reverberant environment in children with developmental dyslexia.

    PubMed

    Castro-Camacho, Wendy; Peñaloza-López, Yolanda; Pérez-Ruiz, Santiago J; García-Pedroza, Felipe; Padilla-Ortiz, Ana L; Poblano, Adrián; Villarruel-Rivas, Concepción; Romero-Díaz, Alfredo; Careaga-Olvera, Aidé

    2015-04-01

    To compare whether localization of sounds and word discrimination in a reverberant environment differ between children with dyslexia and controls. We studied 30 children with dyslexia and 30 controls. Sound and word localization and discrimination were studied at five angles across the left and right auditory fields (−90°, −45°, 0°, +45°, +90°), under reverberant and non-reverberant conditions; correct answers were compared. Spatial localization of words in the non-reverberant test was deficient in children with dyslexia at 0° and +90°. Spatial localization in the reverberant test was altered in children with dyslexia at all angles except −90°. Word discrimination in the non-reverberant test showed poor performance in children with dyslexia at left angles. In the reverberant test, children with dyslexia exhibited deficiencies at the −45°, −90°, and +45° angles. Children with dyslexia may have problems localizing sounds and discriminating words at extreme locations of the horizontal plane in classrooms with reverberation.

  15. The Radio Plasma Imager Investigation on the IMAGE Spacecraft

    NASA Technical Reports Server (NTRS)

    Reinisch, Bodo W.; Haines, D. M.; Bibl, K.; Cheney, G.; Galkin, I. A.; Huang, X.; Myers, S. H.; Sales, G. S.; Benson, R. F.; Fung, S. F.

    1999-01-01

    Radio plasma imaging uses total reflection of electromagnetic waves from plasmas whose plasma frequencies equal the radio sounding frequency and whose electron density gradients are parallel to the wave normals. The Radio Plasma Imager (RPI) has two orthogonal 500-m long dipole antennas in the spin plane for near omni-directional transmission. The third antenna is a 20-m dipole. Echoes from the magnetopause, plasmasphere and cusp will be received with three orthogonal antennas, allowing the determination of their angle-of-arrival. Thus it will be possible to create image fragments of the reflecting density structures. The instrument can execute a large variety of programmable measuring programs operating at frequencies between 3 kHz and 3 MHz. Tuning of the transmit antennas provides optimum power transfer from the 10 W transmitter to the antennas. The instrument can operate in three active sounding modes: (1) remote sounding to probe magnetospheric boundaries, (2) local (relaxation) sounding to probe the local plasma, and (3) whistler stimulation sounding. In addition, there is a passive mode to record natural emissions, and to determine the local electron density and temperature by using a thermal noise spectroscopy technique.
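
    The local (relaxation) sounding mode rests on the standard relation between electron plasma frequency and electron density, f_p ≈ 8.98 kHz × √(n_e/cm⁻³). The short sketch below is generic plasma physics rather than anything specific to RPI; it converts the instrument's stated frequency limits into the range of local densities such sounding can address.

    ```python
    def electron_density_cm3(f_p_hz):
        """Invert f_p [Hz] ~= 8980 * sqrt(n_e [cm^-3]) for the density."""
        return (f_p_hz / 8980.0) ** 2

    # RPI's 3 kHz - 3 MHz sounding range, from the abstract above
    for f_khz in (3, 30, 300, 3000):
        n_e = electron_density_cm3(f_khz * 1e3)
        print(f"{f_khz:5d} kHz -> n_e ~ {n_e:.3g} cm^-3")
    ```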

  16. Radio Sounding Science at High Powers

    NASA Technical Reports Server (NTRS)

    Green, J. L.; Reinisch, B. W.; Song, P.; Fung, S. F.; Benson, R. F.; Taylor, W. W. L.; Cooper, J. F.; Garcia, L.; Markus, T.; Gallagher, D. L.

    2004-01-01

    Future space missions like the Jupiter Icy Moons Orbiter (JIMO) planned to orbit Callisto, Ganymede, and Europa can fully utilize a variable power radio sounder instrument. Radio sounding at 1 kHz to 10 MHz at medium power levels (10 W to kW) will provide long-range magnetospheric sounding (several Jovian radii) like those first pioneered by the radio plasma imager instrument on IMAGE at low power (less than l0 W) and much shorter distances (less than 5 R(sub E)). A radio sounder orbiting a Jovian icy moon would be able to globally measure time-variable electron densities in the moon ionosphere and the local magnetospheric environment. Near-spacecraft resonance and guided echoes respectively allow measurements of local field magnitude and local field line geometry, perturbed both by direct magnetospheric interactions and by induced components from subsurface oceans. JIMO would allow radio sounding transmissions at much higher powers (approx. 10 kW) making subsurface sounding of the Jovian icy moons possible at frequencies above the ionosphere peak plasma frequency. Subsurface variations in dielectric properties, can be probed for detection of dense and solid-liquid phase boundaries associated with oceans and related structures in overlying ice crusts.

  17. Dispersal without errors: symmetrical ears tune into the right frequency for survival.

    PubMed

    Gagliano, Monica; Depczynski, Martial; Simpson, Stephen D; Moore, James A Y

    2008-03-07

    Vertebrate animals localize sounds by comparing differences in the acoustic signal between the two ears and, accordingly, ear structures such as the otoliths of fishes are expected to develop symmetrically. Sound recently emerged as a leading candidate cue for reef fish larvae navigating from open waters back to the reef. Clearly, the integrity of the auditory organ has a direct bearing on what and how fish larvae hear. Yet, the link between otolith symmetry and effective navigation has never been investigated in fishes. We tested whether otolith asymmetry influenced the ability of returning larvae to detect and successfully recruit to favourable reef habitats. Our results suggest that larvae with asymmetrical otoliths not only encountered greater difficulties in detecting suitable settlement habitats, but may also suffer significantly higher rates of mortality. Further, we found that otolith asymmetries arising early in the embryonic stage were not corrected by any compensational growth mechanism during the larval stage. Because these errors persist and phenotypic selection penalizes asymmetrical individuals, asymmetry is likely to play an important role in shaping wild fish populations.

  18. Degradation of Auditory Localization Performance Due to Helmet Ear Coverage: The Effects of Normal Acoustic Reverberation

    DTIC Science & Technology

    2009-07-01

    Therefore, it’s safe to assume that most large errors are due to front-back confusions. Front-back confusions occur in part because the binaural (two-ear) cues that dominate sound localization do not distinguish the front and rear hemispheres. The two binaural cues relied on are interaural...121 (5), 3094–3094. Shinn-Cunningham, B. G.; Kopčo, N.; Martin, T. J. Localizing Nearby Sound Sources in a Classroom: Binaural Room Impulse

  19. Tool-use-associated sound in the evolution of language.

    PubMed

    Larsson, Matz

    2015-09-01

    Proponents of the motor theory of language evolution have primarily focused on the visual domain and communication through observation of movements. In the present paper, it is hypothesized that the production and perception of sound, particularly of incidental sound of locomotion (ISOL) and tool-use sound (TUS), also contributed. Human bipedalism resulted in rhythmic and more predictable ISOL. It has been proposed that this stimulated the evolution of musical abilities, auditory working memory, and abilities to produce complex vocalizations and to mimic natural sounds. Since the human brain proficiently extracts information about objects and events from the sounds they produce, TUS, and mimicry of TUS, might have achieved an iconic function. The prevalence of sound symbolism in many extant languages supports this idea. Self-produced TUS activates multimodal brain processing (motor neurons, hearing, proprioception, touch, vision), and TUS stimulates primate audiovisual mirror neurons, which is likely to stimulate the development of association chains. Tool use and auditory gestures involve motor processing of the forelimbs, which is associated with the evolution of vertebrate vocal communication. The production, perception, and mimicry of TUS may have resulted in a limited number of vocalizations or protowords that were associated with tool use. A new way to communicate about tools, especially when out of sight, would have had selective advantage. A gradual change in acoustic properties and/or meaning could have resulted in arbitrariness and an expanded repertoire of words. Humans have been increasingly exposed to TUS over millions of years, coinciding with the period during which spoken language evolved. ISOL and tool-use-related sound are worth further exploration.

  20. 33 CFR 100.1308 - Special Local Regulation; Hydroplane Races within the Captain of the Port Puget Sound Area of...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...; Hydroplane Races within the Captain of the Port Puget Sound Area of Responsibility. 100.1308 Section 100.1308... SAFETY OF LIFE ON NAVIGABLE WATERS § 100.1308 Special Local Regulation; Hydroplane Races within the... race areas for the purpose of reoccurring hydroplane races: (1) Dyes Inlet. West of Port Orchard, WA to...

  1. 33 CFR 100.1308 - Special Local Regulation; Hydroplane Races within the Captain of the Port Puget Sound Area of...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...; Hydroplane Races within the Captain of the Port Puget Sound Area of Responsibility. 100.1308 Section 100.1308... SAFETY OF LIFE ON NAVIGABLE WATERS § 100.1308 Special Local Regulation; Hydroplane Races within the... race areas for the purpose of reoccurring hydroplane races: (1) Dyes Inlet. West of Port Orchard, WA to...

  2. 33 CFR 100.1308 - Special Local Regulation; Hydroplane Races within the Captain of the Port Puget Sound Area of...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...; Hydroplane Races within the Captain of the Port Puget Sound Area of Responsibility. 100.1308 Section 100.1308... SAFETY OF LIFE ON NAVIGABLE WATERS § 100.1308 Special Local Regulation; Hydroplane Races within the... race areas for the purpose of reoccurring hydroplane races: (1) Dyes Inlet. West of Port Orchard, WA to...

  3. 33 CFR 100.1308 - Special Local Regulation; Hydroplane Races within the Captain of the Port Puget Sound Area of...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...; Hydroplane Races within the Captain of the Port Puget Sound Area of Responsibility. 100.1308 Section 100.1308... SAFETY OF LIFE ON NAVIGABLE WATERS § 100.1308 Special Local Regulation; Hydroplane Races within the... race areas for the purpose of reoccurring hydroplane races: (1) Dyes Inlet. West of Port Orchard, WA to...

  4. Full Spatial Resolution Infrared Sounding Application in the Preconvection Environment

    NASA Astrophysics Data System (ADS)

    Liu, C.; Liu, G.; Lin, T.

    2013-12-01

    Advanced infrared (IR) sounders such as the Atmospheric Infrared Sounder (AIRS) and the Infrared Atmospheric Sounding Interferometer (IASI) provide atmospheric temperature and moisture profiles with high vertical resolution and high accuracy in preconvection environments. Derived atmospheric stability indices such as convective available potential energy (CAPE) and lifted index (LI) from advanced IR soundings can provide critical information 1-6 h before the development of severe convective storms. Three convective storms are selected to evaluate the application of AIRS full spatial resolution soundings and derived products to providing warning information in preconvection environments. In the first case, the AIRS full spatial resolution soundings revealed locally extreme atmospheric instability 3 h ahead of convection on the leading edge of a frontal system, while the second case demonstrates that the extremely high atmospheric instability is associated with the local development of a severe thunderstorm in the following hours. The third case is a local severe storm that occurred on 7-8 August 2010 in Zhou Qu, China, which caused more than 1400 deaths and left another 300 or more people missing. The AIRS full spatial resolution LI product shows the atmospheric instability 3.5 h before storm genesis. The CAPE and LI from AIRS full spatial resolution and operational AIRS/AMSU soundings, along with Geostationary Operational Environmental Satellite (GOES) Sounder derived product image (DPI) products, were analyzed and compared. Case studies show that full spatial resolution AIRS retrievals provide more useful warning information in preconvection environments for determining favorable locations for convective initiation (CI) than do the coarser spatial resolution operational soundings and lower spectral resolution GOES Sounder retrievals. The retrieved soundings are also tested in a regional WRF 3D-Var data assimilation system to evaluate their potential to assist NWP models.
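
    Of the two stability indices named, the lifted index is the simpler to state: LI = T_env(500 hPa) − T_parcel(500 hPa), where the parcel is lifted (dry-, then moist-adiabatically) from the surface to 500 hPa; more negative values indicate greater instability. The sketch below is just this definition with illustrative temperatures and interpretation bands, not the AIRS retrieval chain.

    ```python
    def lifted_index(t_env_500_c, t_parcel_500_c):
        """LI = environment minus lifted-parcel temperature at 500 hPa (deg C)."""
        return t_env_500_c - t_parcel_500_c

    li = lifted_index(t_env_500_c=-14.0, t_parcel_500_c=-8.0)
    band = "very unstable" if li <= -6 else "unstable" if li < 0 else "stable"
    print(f"LI = {li:+.1f} ({band})")   # illustrative thresholds
    ```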

  5. The sound symbolism bootstrapping hypothesis for language acquisition and language evolution

    PubMed Central

    Imai, Mutsumi; Kita, Sotaro

    2014-01-01

    Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language, and most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis, claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem. We further explore the possibility that sound symbolism is deeply related to language evolution, drawing the parallel between historical development of language across generations and ontogenetic development within individuals. Finally, we suggest that sound symbolism bootstrapping is a part of a more general phenomenon of bootstrapping by means of iconic representations, drawing on similarities and close behavioural links between sound symbolism and speech-accompanying iconic gesture. PMID:25092666

  6. Understanding auditory distance estimation by humpback whales: a computational approach.

    PubMed

    Mercado, E; Green, S R; Schneider, J N

    2008-02-01

    Ranging, the ability to judge the distance to a sound source, depends on the presence of predictable patterns of attenuation. We measured long-range sound propagation in coastal waters to assess whether humpback whales might use frequency degradation cues to range singing whales. Two types of neural networks, a multi-layer and a single-layer perceptron, were trained to classify recorded sounds by distance traveled based on their frequency content. The multi-layer network successfully classified received sounds, demonstrating that the distorting effects of underwater propagation on frequency content provide sufficient cues to estimate source distance. Normalizing received sounds with respect to ambient noise levels increased the accuracy of distance estimates by single-layer perceptrons, indicating that familiarity with background noise can potentially improve a listening whale's ability to range. To assess whether frequency patterns predictive of source distance were likely to be perceived by whales, recordings were pre-processed using a computational model of the humpback whale's peripheral auditory system. Although signals processed with this model contained less information than the original recordings, neural networks trained with these physiologically based representations estimated source distance more accurately, suggesting that listening whales should be able to range singers using distance-dependent changes in frequency content.
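
    A toy reconstruction of the classification setup: label received sounds by propagation distance and train a multi-layer perceptron on their frequency content. The synthetic "spectra" below simply attenuate high frequencies faster with distance, a crude stand-in for the measured propagation effects; the network architecture, classes, and feature dimensions are all assumptions.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n_per_class, n_bins = 200, 32
    distances = [1, 5, 10]                     # distance classes (illustrative)
    freqs = np.linspace(0.1, 4.0, n_bins)      # kHz
    X, y = [], []
    for d in distances:
        base = np.exp(-0.05 * d * freqs**2)    # distance-dependent HF roll-off
        X.append(base + 0.05 * rng.standard_normal((n_per_class, n_bins)))
        y += [d] * n_per_class
    X = np.vstack(X)

    X_tr, X_te, y_tr, y_te = train_test_split(X, np.array(y), random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```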

  7. Spatial and identity negative priming in audition: evidence of feature binding in auditory spatial memory.

    PubMed

    Mayr, Susanne; Buchner, Axel; Möller, Malte; Hauke, Robert

    2011-08-01

    Two experiments are reported with identical auditory stimulation in three-dimensional space but with different instructions. Participants localized a cued sound (Experiment 1) or identified a sound at a cued location (Experiment 2). A distractor sound at another location had to be ignored. The prime distractor and the probe target sound were manipulated with respect to sound identity (repeated vs. changed) and location (repeated vs. changed). The localization task revealed a symmetric pattern of partial repetition costs: Participants were impaired on trials with identity-location mismatches between the prime distractor and probe target-that is, when either the sound was repeated but not the location or vice versa. The identification task revealed an asymmetric pattern of partial repetition costs: Responding was slowed down when the prime distractor sound was repeated as the probe target, but at another location; identity changes at the same location were not impaired. Additionally, there was evidence of retrieval of incompatible prime responses in the identification task. It is concluded that feature binding of auditory prime distractor information takes place regardless of whether the task is to identify or locate a sound. Instructions determine the kind of identity-location mismatch that is detected. Identity information predominates over location information in auditory memory.

  8. Numerical calculation of listener-specific head-related transfer functions and sound localization: Microphone model and mesh discretization

    PubMed Central

    Ziegelwanger, Harald; Majdak, Piotr; Kreuzer, Wolfgang

    2015-01-01

    Head-related transfer functions (HRTFs) can be numerically calculated by applying the boundary element method on the geometry of a listener’s head and pinnae. The calculation results are defined by geometrical, numerical, and acoustical parameters like the microphone used in acoustic measurements. The scope of this study was to estimate requirements on the size and position of the microphone model and on the discretization of the boundary geometry as triangular polygon mesh for accurate sound localization. The evaluation involved the analysis of localization errors predicted by a sagittal-plane localization model, the comparison of equivalent head radii estimated by a time-of-arrival model, and the analysis of actual localization errors obtained in a sound-localization experiment. While the average edge length (AEL) of the mesh had a negligible effect on localization performance in the lateral dimension, the localization performance in sagittal planes, however, degraded for larger AELs with the geometrical error as dominant factor. A microphone position at an arbitrary position at the entrance of the ear canal, a microphone size of 1 mm radius, and a mesh with 1 mm AEL yielded a localization performance similar to or better than observed with acoustically measured HRTFs. PMID:26233020
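
    The equivalent-head-radius idea can be illustrated with the classic Woodworth spherical-head approximation, ITD(θ) = (r/c)(θ + sin θ), fit to times of arrival at several azimuths. The study's actual time-of-arrival model may differ; this sketch, with synthetic ITDs, only conveys the principle.

    ```python
    import numpy as np

    C = 343.0  # speed of sound, m/s

    def woodworth_itd(r, theta):
        """Woodworth ITD for a rigid sphere of radius r, azimuth theta (rad)."""
        return (r / C) * (theta + np.sin(theta))

    # Fit r by least squares to (synthetic) ITDs measured at several azimuths.
    thetas = np.deg2rad([10, 30, 50, 70, 90])
    itds = woodworth_itd(0.0875, thetas) \
        + 2e-6 * np.random.default_rng(2).standard_normal(5)
    basis = (thetas + np.sin(thetas)) / C
    r_hat = float(basis @ itds / (basis @ basis))   # closed-form 1-D LSQ
    print(f"equivalent head radius ~ {r_hat * 100:.2f} cm")
    ```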

  9. Identifying local characteristic lengths governing sound wave properties in solid foams

    NASA Astrophysics Data System (ADS)

    Tan Hoang, Minh; Perrot, Camille

    2013-02-01

    Identifying microscopic geometric properties and fluid flow through open-cell and partially closed-cell solid structures is a challenge for material science, in particular for the design of porous media used as sound absorbers in the building and transportation industries. We revisit recent literature data to identify the local characteristic lengths dominating the transport properties and sound-absorbing behavior of polyurethane foam samples by performing numerical homogenization simulations. To determine the characteristic sizes of the model, we need porosity and permeability measurements in conjunction with ligament-length estimates from available scanning electron microscope images. We demonstrate that this description of the porous material, consistent with the critical-path picture following from percolation arguments, is widely applicable. This is an important step towards tuning the sound-proofing properties of complex materials.

  10. An acoustic metamaterial composed of multi-layer membrane-coated perforated plates for low-frequency sound insulation

    NASA Astrophysics Data System (ADS)

    Fan, Li; Chen, Zhe; Zhang, Shu-yi; Ding, Jin; Li, Xiao-juan; Zhang, Hui

    2015-04-01

    Insulating against low-frequency sound (below 500 Hz) remains challenging despite the progress that has been achieved in sound insulation and absorption. In this work, an acoustic metamaterial based on membrane-coated perforated plates is presented for achieving sound insulation in a low-frequency range, even covering the lower audio frequency limit, 20 Hz. Theoretical analysis and finite element simulations demonstrate that this metamaterial can effectively block acoustic waves over a wide low-frequency band regardless of incident angles. Two mechanisms, non-resonance and monopolar resonance, operate in the metamaterial, resulting in a more powerful sound insulation ability than that achieved using periodically arranged multi-layer solid plates.

  11. Optical and Acoustic Sensor-Based 3D Ball Motion Estimation for Ball Sport Simulators †.

    PubMed

    Seo, Sang-Woo; Kim, Myunggyu; Kim, Yejin

    2018-04-25

    Estimation of the motion of ball-shaped objects is essential for the operation of ball sport simulators. In this paper, we propose an estimation system for 3D ball motion, including speed and angle of projection, using acoustic vector and infrared (IR) scanning sensors. Our system comprises three steps to estimate a ball's motion: sound-based ball firing detection, sound source localization, and IR scanning for motion analysis. First, an impulsive sound classification based on the mel-frequency cepstrum and a feed-forward neural network is introduced to detect the ball launch sound. An impulsive sound source localization using a 2D array of microelectromechanical system (MEMS) microphones and delay-and-sum beamforming is presented to estimate the firing position. The time and position of the ball in 3D space are determined by a high-speed infrared scanning method. Our experimental results demonstrate that estimation of ball motion based on sound allows a wider activity area than similar camera-based methods. Thus, it can be practically applied to various simulations in sports such as soccer and baseball.
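
    The delay-and-sum localization step can be sketched as follows: steer the array over candidate directions by applying direction-dependent delays and pick the direction that maximizes the summed output power. The linear four-microphone geometry, sampling rate, and synthetic click below are illustrative assumptions, not the paper's 2D MEMS array.

    ```python
    import numpy as np

    C, FS = 343.0, 48000
    mics = np.array([[0.00, 0.0], [0.05, 0.0], [0.10, 0.0], [0.15, 0.0]])  # m

    def steered_power(signals, az_rad):
        # Relative delay of each mic taken proportional to the projection of
        # its position onto the look direction (same sign convention as the
        # synthesis below); integer-sample shifts for simplicity.
        u = np.array([np.cos(az_rad), np.sin(az_rad)])
        shifts = np.round(((mics @ u) - (mics @ u).min()) / C * FS).astype(int)
        n = signals.shape[1] - shifts.max()
        summed = sum(sig[s:s + n] for sig, s in zip(signals, shifts))
        return np.mean(summed ** 2)

    # Synthesize a click arriving from 60 degrees azimuth, plus noise.
    rng = np.random.default_rng(3)
    true_u = np.array([np.cos(np.deg2rad(60)), np.sin(np.deg2rad(60))])
    d = np.round((mics @ true_u) / C * FS).astype(int); d -= d.min()
    sigs = np.zeros((4, 512))
    for i, di in enumerate(d):
        sigs[i, 100 + di] = 1.0
    sigs += 0.01 * rng.standard_normal(sigs.shape)

    grid = np.deg2rad(np.arange(0, 181, 2))
    best = grid[np.argmax([steered_power(sigs, a) for a in grid])]
    print(f"estimated azimuth ~ {np.degrees(best):.0f} deg")
    ```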

  12. Sound Solutions

    ERIC Educational Resources Information Center

    Starkman, Neal

    2007-01-01

    Poor classroom acoustics are impairing students' hearing and their ability to learn. However, technology has come up with a solution: tools that focus voices in a way that minimizes intrusive ambient noise and gets to the intended receiver--not merely amplifying the sound, but also clarifying and directing it. One provider of classroom audio…

  13. Cross-Modulation Interference with Lateralization of Mixed-Modulated Waveforms

    ERIC Educational Resources Information Center

    Hsieh, I-Hui; Petrosyan, Agavni; Goncalves, Oscar F.; Hickok, Gregory; Saberi, Kourosh

    2010-01-01

    Purpose: This study investigated the ability to use spatial information in mixed-modulated (MM) sounds containing concurrent frequency-modulated (FM) and amplitude-modulated (AM) sounds by exploring patterns of interference when different modulation types originated from different loci as may occur in a multisource acoustic field. Method:…

  14. A precedence effect resolves phantom sound source illusions in the parasitoid fly Ormia ochracea

    PubMed Central

    Lee, Norman; Elias, Damian O.; Mason, Andrew C.

    2009-01-01

    Localizing individual sound sources under reverberant environmental conditions can be a challenge when the original source and its acoustic reflections arrive at the ears simultaneously from different paths that convey ambiguous directional information. The acoustic parasitoid fly Ormia ochracea (Diptera: Tachinidae) relies on a pair of ears exquisitely sensitive to sound direction to localize the 5-kHz tone pulsatile calling song of their host crickets. In nature, flies are expected to encounter a complex sound field with multiple sources and their reflections from acoustic clutter potentially masking temporal information relevant to source recognition and localization. In field experiments, O. ochracea were lured onto a test arena and subjected to small random acoustic asymmetries between 2 simultaneous sources. Most flies successfully localize a single source but some localize a ‘phantom’ source that is a summed effect of both source locations. Such misdirected phonotaxis can be elicited reliably in laboratory experiments that present symmetric acoustic stimulation. By varying onset delay between 2 sources, we test whether hyperacute directional hearing in O. ochracea can function to exploit small time differences to determine source location. Selective localization depends on both the relative timing and location of competing sources. Flies preferred phonotaxis to a forward source. With small onset disparities within a 10-ms temporal window of attention, flies selectively localize the leading source while the lagging source has minimal influence on orientation. These results demonstrate the precedence effect as a mechanism to overcome phantom source illusions that arise from acoustic reflections or competing sources. PMID:19332794

  15. Test of a motor theory of long-term auditory memory

    PubMed Central

    Schulze, Katrin; Vargha-Khadem, Faraneh; Mishkin, Mortimer

    2012-01-01

    Monkeys can easily form lasting central representations of visual and tactile stimuli, yet they seem unable to do the same with sounds. Humans, by contrast, are highly proficient in auditory long-term memory (LTM). These mnemonic differences within and between species raise the question of whether the human ability is supported in some way by speech and language, e.g., through subvocal reproduction of speech sounds and by covert verbal labeling of environmental stimuli. If so, the explanation could be that storing rapidly fluctuating acoustic signals requires assistance from the motor system, which is uniquely organized to chain-link rapid sequences. To test this hypothesis, we compared the ability of normal participants to recognize lists of stimuli that can be easily reproduced, labeled, or both (pseudowords, nonverbal sounds, and words, respectively) versus their ability to recognize a list of stimuli that can be reproduced or labeled only with great difficulty (reversed words, i.e., words played backward). Recognition scores after 5-min delays filled with articulatory-suppression tasks were relatively high (75–80% correct) for all sound types except reversed words; the latter yielded scores that were not far above chance (58% correct), even though these stimuli were discriminated nearly perfectly when presented as reversed-word pairs at short intrapair intervals. The combined results provide preliminary support for the hypothesis that participation of the oromotor system may be essential for laying down the memory of speech sounds and, indeed, that speech and auditory memory may be so critically dependent on each other that they had to coevolve. PMID:22511719

  16. Test of a motor theory of long-term auditory memory.

    PubMed

    Schulze, Katrin; Vargha-Khadem, Faraneh; Mishkin, Mortimer

    2012-05-01

    Monkeys can easily form lasting central representations of visual and tactile stimuli, yet they seem unable to do the same with sounds. Humans, by contrast, are highly proficient in auditory long-term memory (LTM). These mnemonic differences within and between species raise the question of whether the human ability is supported in some way by speech and language, e.g., through subvocal reproduction of speech sounds and by covert verbal labeling of environmental stimuli. If so, the explanation could be that storing rapidly fluctuating acoustic signals requires assistance from the motor system, which is uniquely organized to chain-link rapid sequences. To test this hypothesis, we compared the ability of normal participants to recognize lists of stimuli that can be easily reproduced, labeled, or both (pseudowords, nonverbal sounds, and words, respectively) versus their ability to recognize a list of stimuli that can be reproduced or labeled only with great difficulty (reversed words, i.e., words played backward). Recognition scores after 5-min delays filled with articulatory-suppression tasks were relatively high (75-80% correct) for all sound types except reversed words; the latter yielded scores that were not far above chance (58% correct), even though these stimuli were discriminated nearly perfectly when presented as reversed-word pairs at short intrapair intervals. The combined results provide preliminary support for the hypothesis that participation of the oromotor system may be essential for laying down the memory of speech sounds and, indeed, that speech and auditory memory may be so critically dependent on each other that they had to coevolve.

  17. Developing the Ability for Making Evaluative Judgements

    ERIC Educational Resources Information Center

    Cowan, John

    2010-01-01

    It is suggested that a more specific emphasis should be placed in undergraduate education on the explicit development of the ability to make evaluative judgements. This higher level cognitive ability is highlighted as the foundation for much sound and successful personal and professional development throughout education, and in lifelong…

  18. Shallow characterization of the subsurface for the 2018 Mission to Mars

    NASA Astrophysics Data System (ADS)

    Ciarletti, V.; plettemeier, D.; Vieau, A. J.; Hassen-Khodja, R.; Lustrement, B.; Cais, P.; Clifford, S.

    2012-04-01

    The highest priority scientific objectives of the revised 2018 mission to Mars are (1) to search for evidence of past or present life, (2) to identify the samples that are most likely to preserve potential evidence of life and of the nature of the early Martian environment that might have given rise to it, and (3) to cache them for later retrieval back to Earth for more detailed analyses than can be performed by the rover's onboard analytical laboratory. WISDOM is a ground penetrating radar designed to investigate the near subsurface of Mars down to a depth of ~2-3 m, with a vertical resolution of several centimeters - commensurate with the sampling capabilities of the ExoMars onboard drill. The ability of WISDOM to investigate the geology of the landing site in three dimensions will permit direct correlation of subsurface layers and horizons with those exposed in nearby outcrops and the interiors of impact craters. By combining periodic soundings conducted during a Rover traverse with targeted, high-density grid-type soundings of areas of potential scientific interest, it will be possible to construct a 3-dimensional map of the local radar stratigraphy. Of all the Pasteur Payload instruments, only WISDOM has the ability to investigate and characterize the nature of the subsurface remotely. Moreover, the geoelectrical properties of H2O make WISDOM a powerful tool for understanding the local distribution and state of subsurface H2O, including the potential presence of segregated ground ice and the persistent or transient occurrence of liquid water/brine. A WISDOM prototype, representative of the final flight model, is now being tested. A series of calibrations and verifications has been initiated, and the real performance of the instrument is currently being assessed in various test environments. Results on the achieved resolution and sensitivity are presented, as well as 3D representations of detected subsurface structures. Preliminary estimates of permittivity values are also shown.
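
    For context on the quoted ~2-3 m sounding depth: a ground penetrating radar converts the two-way travel time of an echo into depth through the wave speed in the subsurface, v = c/sqrt(epsilon_r). A minimal sketch of that conversion (the permittivity value and echo time below are illustrative assumptions, not WISDOM measurements):

        # Convert a GPR two-way travel time into depth, assuming a uniform
        # relative permittivity (epsilon_r) for the shallow subsurface.
        C = 299_792_458.0  # speed of light in vacuum, m/s

        def travel_time_to_depth(t_seconds: float, epsilon_r: float) -> float:
            """Depth in meters for a two-way travel time at a given permittivity."""
            v = C / epsilon_r ** 0.5     # wave speed in the medium, m/s
            return v * t_seconds / 2.0   # halve it: the echo travels down and back

        # Example: a 40 ns echo in dry regolith-like material (epsilon_r ~ 4, assumed)
        print(travel_time_to_depth(40e-9, 4.0))  # ~3.0 m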

  19. Diversity of acoustic tracheal system and its role for directional hearing in crickets

    PubMed Central

    2013-01-01

    Background Sound localization in small insects can be a challenging task due to physical constraints on deriving sufficiently large interaural intensity differences (IIDs) between the two ears. In crickets, sound source localization is achieved by a complex type of pressure difference receiver consisting of four potential sound inputs. Sound acts on the external side of the two tympana but additionally reaches the internal tympanal surface via two external sound entrances. Internal sound conduction is realized by the anatomical arrangement of the connecting tracheae. A key structure is a trachea coupling both ears, characterized by an enlarged part at its midline (i.e., the acoustic vesicle) accompanied by a thin membrane (septum). This facilitates directional sensitivity despite an unfavorable relationship between the wavelength of sound and body size. Here we studied the morphological differences of the acoustic tracheal system in 40 cricket species (Gryllidae, Mogoplistidae) and in species of outgroup taxa (Gryllotalpidae, Rhaphidophoridae, Gryllacrididae) of the suborder Ensifera, comprising hearing and non-hearing species. Results We found surprisingly high variation among acoustic tracheal systems, and almost all investigated species using intraspecific acoustic communication were characterized by an acoustic vesicle associated with a medial septum. The relative size of the acoustic vesicle - the structure most crucial for deriving high IIDs - implies an important role in sound localization. Most remarkable in this respect was the size difference of the acoustic vesicle between species; those with a more unfavorable ratio of body size to sound wavelength tend to exhibit a larger acoustic vesicle. On the other hand, secondary loss of acoustic signaling was almost exclusively associated with the absence of both acoustic vesicle and septum. Conclusion The high diversity of acoustic tracheal morphology observed between species might reflect different steps in the evolution of the pressure difference receiver, with a precursor structure already present in ancestral non-hearing species. In addition, morphological transitions of the acoustic vesicle suggest a possible adaptive role in the generation of binaural directional cues. PMID:24131512

  20. 33 CFR 100.100 - Special Local Regulations; Regattas and Boat Races in the Coast Guard Sector Long Island Sound...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...; Regattas and Boat Races in the Coast Guard Sector Long Island Sound Captain of the Port Zone. 100.100... MARINE PARADES SAFETY OF LIFE ON NAVIGABLE WATERS § 100.100 Special Local Regulations; Regattas and Boat... any time it is deemed necessary to ensure the safety of life or property. (i) For all power boat races...

  1. 33 CFR 100.100 - Special Local Regulations; Regattas and Boat Races in the Coast Guard Sector Long Island Sound...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...; Regattas and Boat Races in the Coast Guard Sector Long Island Sound Captain of the Port Zone. 100.100... MARINE PARADES SAFETY OF LIFE ON NAVIGABLE WATERS § 100.100 Special Local Regulations; Regattas and Boat... it is deemed necessary to ensure the safety of life or property. (i) For all power boat races listed...

  2. 33 CFR 100.100 - Special Local Regulations; Regattas and Boat Races in the Coast Guard Sector Long Island Sound...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...; Regattas and Boat Races in the Coast Guard Sector Long Island Sound Captain of the Port Zone. 100.100... MARINE PARADES SAFETY OF LIFE ON NAVIGABLE WATERS § 100.100 Special Local Regulations; Regattas and Boat... any time it is deemed necessary to ensure the safety of life or property. (i) For all power boat races...

  3. Sound credit scores and financial decisions despite cognitive aging.

    PubMed

    Li, Ye; Gao, Jie; Enkavi, A Zeynep; Zaval, Lisa; Weber, Elke U; Johnson, Eric J

    2015-01-06

    Age-related deterioration in cognitive ability may compromise the ability of older adults to make major financial decisions. We explore whether knowledge and expertise accumulated from past decisions can offset cognitive decline to maintain decision quality over the life span. Using a unique dataset that combines measures of cognitive ability (fluid intelligence) and of general and domain-specific knowledge (crystallized intelligence), credit report data, and other measures of decision quality, we show that domain-specific knowledge and expertise provide an alternative route for sound financial decisions. That is, cognitive aging does not spell doom for financial decision-making in domains where the decision maker has developed expertise. These results have important implications for public policy and for the design of effective interventions and decision aids.

  4. Auditory sequence analysis and phonological skill

    PubMed Central

    Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E.; Turton, Stuart; Griffiths, Timothy D.

    2012-01-01

    This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739

  5. On non-local energy transfer via zonal flow in the Dimits shift

    NASA Astrophysics Data System (ADS)

    St-Onge, Denis A.

    2017-10-01

    The two-dimensional Terry-Horton equation is shown to exhibit the Dimits shift when suitably modified to capture both the nonlinear enhancement of zonal/drift-wave interactions and the existence of residual Rosenbluth-Hinton states. This phenomenon persists through numerous simplifications of the equation, including a quasilinear approximation as well as a four-mode truncation. It is shown that the use of an appropriate adiabatic electron response, for which the electrons are not affected by the flux-averaged potential, results in a nonlinearity that can efficiently transfer energy non-locally to length scales of the order of the sound radius. The size of the shift for the nonlinear system is heuristically calculated and found to be in excellent agreement with numerical solutions. The existence of the Dimits shift for this system is then understood as an ability of the unstable primary modes to efficiently couple to stable modes at smaller scales, and the shift ends when these stable modes eventually destabilize as the density gradient is increased. This non-local mechanism of energy transfer is argued to be generically important even for more physically complete systems.

  6. Olivocochlear Efferent Control in Sound Localization and Experience-Dependent Learning

    PubMed Central

    Irving, Samuel; Moore, David R.; Liberman, M. Charles; Sumner, Christian J.

    2012-01-01

    Efferent auditory pathways have been implicated in sound localization and its plasticity. We examined the role of the olivocochlear system (OC) in horizontal sound localization by the ferret and in localization learning following unilateral earplugging. Under anesthesia, adult ferrets underwent olivocochlear bundle section at the floor of the fourth ventricle, either at the midline or laterally (left). Lesioned and control animals were trained to localize 1 s and 40 ms amplitude-roved broadband noise stimuli from one of 12 loudspeakers. Neither type of lesion affected normal localization accuracy. All ferrets then received a left earplug and were tested and trained over 10 days. The plug profoundly disrupted localization. Ferrets in the control and lateral lesion groups improved significantly during subsequent training on the 1 s stimulus. No improvement (learning) occurred in the midline lesion group. Markedly poorer performance and failure to learn were observed with the 40 ms stimulus in all groups. Plug removal resulted in a rapid resumption of normal localization in all animals. Insertion of a subsequent plug in the right ear produced results similar to those of left earplugging. Learning in the lateral lesion group was independent of the side of the lesion relative to the earplug. Lesions in all reported cases were verified histologically. The results suggest that the OC system is not needed for accurate localization, but that it is involved in relearning localization during unilateral conductive hearing loss. PMID:21325517

  7. Prosody Predicts Contest Outcome in Non-Verbal Dialogs

    PubMed Central

    Dreiss, Amélie N.; Chatelain, Philippe G.

    2016-01-01

    Non-verbal communication has important implications for inter-individual relationships and negotiation success. However, to what extent humans can spontaneously use rhythm and prosody as a sole communication tool is largely unknown. We analysed human ability to resolve a conflict without verbal dialogs, independently of semantics. We invited pairs of subjects to communicate non-verbally using whistle sounds. Along with the production of more whistles, participants unwittingly used a subtle prosodic feature to compete over a resource (ice-cream scoops). Winners can be identified by their propensity to accentuate the first whistles blown when replying to their partner, compared to the following whistles. Naive listeners correctly identified this prosodic feature as a key determinant of which whistler won the interaction. These results suggest that in the absence of other communication channels, individuals spontaneously use a subtle variation of sound accentuation (prosody), instead of merely producing exuberant sounds, to impose themselves in a conflict of interest. We discuss the biological and cultural bases of this ability and their link with verbal communication. Our results highlight the human ability to use non-verbal communication in a negotiation process. PMID:27907039

  8. Prosody Predicts Contest Outcome in Non-Verbal Dialogs.

    PubMed

    Dreiss, Amélie N; Chatelain, Philippe G; Roulin, Alexandre; Richner, Heinz

    2016-01-01

    Non-verbal communication has important implications for inter-individual relationships and negotiation success. However, to what extent humans can spontaneously use rhythm and prosody as a sole communication tool is largely unknown. We analysed human ability to resolve a conflict without verbal dialogs, independently of semantics. We invited pairs of subjects to communicate non-verbally using whistle sounds. Along with the production of more whistles, participants unwittingly used a subtle prosodic feature to compete over a resource (ice-cream scoops). Winners can be identified by their propensity to accentuate the first whistles blown when replying to their partner, compared to the following whistles. Naive listeners correctly identified this prosodic feature as a key determinant of which whistler won the interaction. These results suggest that in the absence of other communication channels, individuals spontaneously use a subtle variation of sound accentuation (prosody), instead of merely producing exuberant sounds, to impose themselves in a conflict of interest. We discuss the biological and cultural bases of this ability and their link with verbal communication. Our results highlight the human ability to use non-verbal communication in a negotiation process.

  9. An intelligent artificial throat with sound-sensing ability based on laser induced graphene

    PubMed Central

    Tao, Lu-Qi; Tian, He; Liu, Ying; Ju, Zhen-Yi; Pang, Yu; Chen, Yuan-Quan; Wang, Dan-Yang; Tian, Xiang-Guang; Yan, Jun-Chao; Deng, Ning-Qin; Yang, Yi; Ren, Tian-Ling

    2017-01-01

    In the human hearing range, traditional sound sources and sound detectors are usually independent, discrete devices. To minimize device size and to integrate with wearable electronics, there is an urgent need to realize the functional integration of sound generation and sound detection in a single device. Here we show an intelligent laser-induced graphene artificial throat, which can not only generate sound but also detect sound in a single device. More importantly, the intelligent artificial throat could significantly assist the disabled, because simple throat vibrations from a mute person, such as a hum, cough or scream of varying intensity or frequency, can be detected and converted into controllable sounds. Furthermore, the laser-induced graphene artificial throat has the advantages of one-step fabrication, high efficiency, excellent flexibility and low cost, and it will open up practical applications in voice control, wearable electronics and many other areas. PMID:28232739

  10. An intelligent artificial throat with sound-sensing ability based on laser induced graphene.

    PubMed

    Tao, Lu-Qi; Tian, He; Liu, Ying; Ju, Zhen-Yi; Pang, Yu; Chen, Yuan-Quan; Wang, Dan-Yang; Tian, Xiang-Guang; Yan, Jun-Chao; Deng, Ning-Qin; Yang, Yi; Ren, Tian-Ling

    2017-02-24

    In the human hearing range, traditional sound sources and sound detectors are usually independent, discrete devices. To minimize device size and to integrate with wearable electronics, there is an urgent need to realize the functional integration of sound generation and sound detection in a single device. Here we show an intelligent laser-induced graphene artificial throat, which can not only generate sound but also detect sound in a single device. More importantly, the intelligent artificial throat could significantly assist the disabled, because simple throat vibrations from a mute person, such as a hum, cough or scream of varying intensity or frequency, can be detected and converted into controllable sounds. Furthermore, the laser-induced graphene artificial throat has the advantages of one-step fabrication, high efficiency, excellent flexibility and low cost, and it will open up practical applications in voice control, wearable electronics and many other areas.

  11. An intelligent artificial throat with sound-sensing ability based on laser induced graphene

    NASA Astrophysics Data System (ADS)

    Tao, Lu-Qi; Tian, He; Liu, Ying; Ju, Zhen-Yi; Pang, Yu; Chen, Yuan-Quan; Wang, Dan-Yang; Tian, Xiang-Guang; Yan, Jun-Chao; Deng, Ning-Qin; Yang, Yi; Ren, Tian-Ling

    2017-02-01

    In the human hearing range, traditional sound sources and sound detectors are usually independent, discrete devices. To minimize device size and to integrate with wearable electronics, there is an urgent need to realize the functional integration of sound generation and sound detection in a single device. Here we show an intelligent laser-induced graphene artificial throat, which can not only generate sound but also detect sound in a single device. More importantly, the intelligent artificial throat could significantly assist the disabled, because simple throat vibrations from a mute person, such as a hum, cough or scream of varying intensity or frequency, can be detected and converted into controllable sounds. Furthermore, the laser-induced graphene artificial throat has the advantages of one-step fabrication, high efficiency, excellent flexibility and low cost, and it will open up practical applications in voice control, wearable electronics and many other areas.

  12. Assessing Auditory Discrimination Skill of Malay Children Using Computer-based Method.

    PubMed

    Ting, H; Yunus, J; Mohd Nordin, M Z

    2005-01-01

    The purpose of this paper is to investigate the auditory discrimination skill of Malay children using a computer-based method. Currently, most auditory discrimination assessments are conducted manually by a Speech-Language Pathologist. These conventional tests are general tests of sound discrimination, which do not reflect the client's specific speech sound errors. Thus, we propose a computer-based Malay auditory discrimination test to automate the whole assessment process as well as to customize the test according to the client's specific speech error sounds. The ability to discriminate voiced and unvoiced Malay speech sounds was studied in Malay children aged between 7 and 10 years old. The study showed no major difficulty for the children in discriminating the Malay speech sounds, except in differentiating the /g/-/k/ sounds. On average, children of 7 years old failed to discriminate the /g/-/k/ sounds.

  13. On the relevance of source effects in geomagnetic pulsations for induction soundings

    NASA Astrophysics Data System (ADS)

    Neska, Anne; Tadeusz Reda, Jan; Leszek Neska, Mariusz; Petrovich Sumaruk, Yuri

    2018-03-01

    This study is an attempt to close a gap between recent research on geomagnetic pulsations and their usage as source signals in electromagnetic induction soundings (i.e., magnetotellurics, geomagnetic depth sounding, and magnetovariational sounding). The plane-wave assumption, a precondition for the proper performance of these methods, is partly violated by the local nature of field line resonances, which cause a considerable portion of pulsations at mid latitudes. It is demonstrated that, and explained why, in spite of this the application of remote reference stations at quasi-global distances for the suppression of local correlated-noise effects in induction arrows is possible in the geomagnetic pulsation range. The important role of upstream waves and of the magnetic equatorial region for such applications is emphasized. Furthermore, the principal difference between the application of reference stations for local transfer functions (which result in sounding curves and induction arrows) and for inter-station transfer functions is considered. The preconditions for the latter are much stricter than for the former. Hence a failure to estimate an inter-station transfer function to be interpreted in terms of electromagnetic induction, e.g., because of field line resonances, does not necessarily prohibit use of the station pair for a remote reference estimation of the impedance tensor.
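
    The remote-reference idea mentioned above can be stated compactly for the scalar case: the auto-spectral estimate Z = <E B*>/<B B*> has a denominator biased upward by noise on the local magnetic channel, whereas cross-spectra against a distant station's channel R, Z = <E R*>/<B R*>, stay unbiased as long as the local noise is uncorrelated with R. A toy sketch under those assumptions (scalar transfer function only; the real method estimates the full 2x2 impedance tensor):

        # Scalar remote-reference estimate of a transfer function Z: cross-
        # spectra against a remote channel cancel local uncorrelated noise
        # that biases the auto-spectral estimate.
        import numpy as np

        rng = np.random.default_rng(3)
        n, z_true = 100_000, 2.5 - 1.5j
        source = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # shared field
        noise = lambda: 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
        b_local = source + noise()    # local magnetic channel with sensor noise
        e_local = z_true * source     # local electric channel (noise-free toy)
        b_remote = source + noise()   # remote reference, independent noise

        z_auto = np.vdot(b_local, e_local) / np.vdot(b_local, b_local)  # biased low
        z_rr = np.vdot(b_remote, e_local) / np.vdot(b_remote, b_local)  # ~unbiased
        print(np.round(z_auto, 3), np.round(z_rr, 3))  # z_rr close to 2.5-1.5j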

  14. Reverberation enhances onset dominance in sound localization.

    PubMed

    Stecker, G Christopher; Moore, Travis M

    2018-02-01

    Temporal variation in sensitivity to sound-localization cues was measured in anechoic conditions and in simulated reverberation using the temporal weighting function (TWF) paradigm [Stecker and Hafter (2002). J. Acoust. Soc. Am. 112, 1046-1057]. Listeners judged the locations of Gabor click trains (4 kHz center frequency, 5-ms interclick interval) presented from an array of loudspeakers spanning 360° azimuth. Targets ranged ±56.25° across trials. Individual clicks within each train varied by an additional ±11.25° to allow TWF calculation by multiple regression. In separate conditions, sounds were presented directly or in the presence of simulated reverberation: 13 orders of lateral reflection were computed for a 10 m × 10 m room (RT60 ≈ 300 ms) and mapped to the appropriate locations in the loudspeaker array. Results reveal a marked increase in perceptual weight applied to the initial click in reverberation, along with a reduction in the impact of late-arriving sound. In a second experiment, target stimuli were preceded by trains of "conditioner" sounds with or without reverberation. Effects were modest and limited to the first few clicks in a train, suggesting that impacts of reverberant pre-exposure on localization may be limited to the processing of information from early reflections.
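
    The TWF analysis used above is, at heart, a multiple regression: each click's azimuth perturbation is one predictor of the trial-by-trial localization response, and the fitted coefficients are the perceptual weights. A hedged sketch with simulated data (the onset-dominated listener below is an assumption chosen for illustration, not the study's result):

        # Estimate a temporal weighting function (TWF) by multiple regression:
        # regress trial-wise localization responses on the per-click azimuth
        # perturbations; the fitted coefficients are the perceptual weights.
        import numpy as np

        rng = np.random.default_rng(0)
        n_trials, n_clicks = 500, 16
        # Per-click perturbations, uniform on +/-11.25 deg as in the paradigm.
        perturb = rng.uniform(-11.25, 11.25, size=(n_trials, n_clicks))

        # Simulated listener: onset-dominated weights plus response noise.
        true_w = np.r_[1.0, np.full(n_clicks - 1, 0.2)]
        true_w /= true_w.sum()
        responses = perturb @ true_w + rng.normal(0.0, 2.0, n_trials)

        # Ordinary least squares (with an intercept) recovers the weights.
        X = np.column_stack([np.ones(n_trials), perturb])
        coef, *_ = np.linalg.lstsq(X, responses, rcond=None)
        weights = coef[1:] / coef[1:].sum()   # normalized TWF
        print(np.round(weights, 3))           # large first weight, small rest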

  15. A summary of research investigating echolocation abilities of blind and sighted humans.

    PubMed

    Kolarik, Andrew J; Cirstea, Silvia; Pardhan, Shahina; Moore, Brian C J

    2014-04-01

    There is currently considerable interest in the consequences of loss in one sensory modality on the remaining senses. Much of this work has focused on the development of enhanced auditory abilities among blind individuals, who are often able to use sound to navigate through space. It has now been established that many blind individuals produce sound emissions and use the returning echoes to provide them with information about objects in their surroundings, in a similar manner to bats navigating in the dark. In this review, we summarize current knowledge regarding human echolocation. Some blind individuals develop remarkable echolocation abilities, and are able to assess the position, size, distance, shape, and material of objects using reflected sound waves. After training, normally sighted people are also able to use echolocation to perceive objects, and can develop abilities comparable to, but typically somewhat poorer than, those of blind people. The underlying cues and mechanisms, operable range, spatial acuity and neurological underpinnings of echolocation are described. Echolocation can result in functional real life benefits. It is possible that these benefits can be optimized via suitable training, especially among those with recently acquired blindness, but this requires further study. Areas for further research are identified. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. [The underwater and airborne horizontal localization of sound by the northern fur seal].

    PubMed

    Babushina, E S; Poliakov, M A

    2004-01-01

    The accuracy of the underwater and airborne horizontal localization of different acoustic signals by the northern fur seal was investigated by the method of instrumental conditioned reflexes with food reinforcement. For pure-tone pulsed signals in the frequency range of 0.5-25 kHz, the minimum angles of sound localization at 75% correct responses corresponded to a sound transducer azimuth of 6.5-7.5° ± 0.1-0.4° underwater (at impulse durations of 3-90 ms) and of 3.5-5.5° ± 0.05-0.5° in air (at impulse durations of 3-160 ms). The source of pulsed noise signals (of 3-ms duration) was localized with an accuracy of 3.0° ± 0.2° underwater. The source of continuous (1-s duration) narrow-band (10% of center frequency) noise signals was localized in air with an accuracy of 2-5° ± 0.02-0.4°, and of continuous broadband (1-20 kHz) noise, with an accuracy of 4.5° ± 0.2°.

  17. 77 FR 6954 - Special Local Regulations; Safety and Security Zones; Recurring Events in Captain of the Port...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-10

    ... Events in Captain of the Port Long Island Sound Zone AGENCY: Coast Guard, DHS. ACTION: Final rule... Sector Long Island Sound Captain of the Port (COTP) Zone. These limited access areas include special... Sector Long Island Sound, telephone 203-468-4544, email [email protected]. If you have questions...

  18. 33 CFR 100.121 - Swim Across the Sound, Long Island Sound, Port Jefferson, NY to Captain's Cove Seaport...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY REGATTAS AND MARINE PARADES... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Swim Across the Sound, Long... the Federal Register, separate marine broadcasts and local notice to mariners. [USCG-2009-0395, 75 FR...

  19. Failure of the precedence effect with a noise-band vocoder

    PubMed Central

    Seeber, Bernhard U.; Hafter, Ervin R.

    2011-01-01

    The precedence effect (PE) describes the ability to localize a direct, leading sound correctly when its delayed copy (lag) is present, though not separately audible. The relative contribution of binaural cues in the temporal fine structure (TFS) of lead–lag signals was compared to that of interaural level differences (ILDs) and interaural time differences (ITDs) carried in the envelope. In a localization dominance paradigm participants indicated the spatial location of lead–lag stimuli processed with a binaural noise-band vocoder whose noise carriers introduced random TFS. The PE appeared for noise bursts of 10 ms duration, indicating dominance of envelope information. However, for three test words the PE often failed even at short lead–lag delays, producing two images, one toward the lead and one toward the lag. When interaural correlation in the carrier was increased, the images appeared more centered, but often remained split. Although previous studies suggest dominance of TFS cues, no image is lateralized in accord with the ITD in the TFS. An interpretation in the context of auditory scene analysis is proposed: By replacing the TFS with that of noise the auditory system loses the ability to fuse lead and lag into one object, and thus to show the PE. PMID:21428515
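
    A noise-band vocoder of the kind used above replaces a signal's temporal fine structure while preserving its band envelopes: the signal is split into frequency bands, each band's envelope is extracted, and the envelopes re-modulate noise carriers filtered to the same bands. A minimal monaural sketch (the band edges and envelope cutoff are assumptions, not the study's parameters):

        # Minimal noise-band vocoder: band-split the input, extract each
        # band's envelope, and use it to modulate a noise carrier filtered
        # to the same band; summing the bands discards the original TFS.
        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        def noise_vocoder(x, fs, edges=(100, 400, 1000, 2500, 6000), env_cut=50.0):
            rng = np.random.default_rng(0)
            out = np.zeros_like(x)
            env_sos = butter(2, env_cut, btype="low", fs=fs, output="sos")
            for lo, hi in zip(edges[:-1], edges[1:]):
                band_sos = butter(4, (lo, hi), btype="band", fs=fs, output="sos")
                band = sosfiltfilt(band_sos, x)
                env = sosfiltfilt(env_sos, np.abs(band))   # rectify + low-pass
                carrier = sosfiltfilt(band_sos, rng.standard_normal(len(x)))
                out += np.clip(env, 0, None) * carrier     # re-modulated noise band
            return out

        # Example: vocode one second of a 300 Hz tone at 16 kHz.
        fs = 16000
        t = np.arange(fs) / fs
        y = noise_vocoder(np.sin(2 * np.pi * 300 * t), fs)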

  20. The evidence base for the application of contralateral bone anchored hearing aids in acquired unilateral sensorineural hearing loss in adults.

    PubMed

    Baguley, D M; Bird, J; Humphriss, R L; Prevost, A T

    2006-02-01

    Acquired unilateral sensorineural hearing loss reduces the ability to localize sounds and to discriminate speech in background noise. Four controlled trials have attempted to determine the benefit of contralateral bone anchored hearing aids over contralateral routing of signal (CROS) hearing aids and over the unaided condition. All found no significant improvement in auditory localization with either aid. Speech discrimination in noise and subjective questionnaire measures of auditory abilities showed an advantage for the bone anchored hearing aid (BAHA) > CROS > unaided conditions. All four studies have material shortfalls: (i) the BAHA was always trialled after the CROS aid; (ii) CROS aids were only trialled for 4 weeks; (iii) none used any measure of hearing handicap when selecting subjects; (iv) two studies have a bias in terms of patient selection; (v) all studies were underpowered; and (vi) double reporting of patients occurred. There is a paucity of evidence to support the efficacy of the BAHA in the treatment of acquired unilateral sensorineural hearing loss. Clinicians should proceed with caution and perhaps await a larger randomized trial. It is perhaps only appropriate to insert a BAHA peg at the time of vestibular schwannoma excision in patients with good preoperative hearing, as their hearing handicap increases most.

  1. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments †

    PubMed Central

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G.

    2017-01-01

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators. PMID:29099790

  2. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments.

    PubMed

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Kumon, Makoto; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G

    2017-11-03

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.

  3. Differences in phonetic discrimination stem from differences in psychoacoustic abilities in learning the sounds of a second language: Evidence from ERP research

    PubMed Central

    Mo, Lei

    2017-01-01

    The scientific community has been divided as to the origin of individual differences in perceiving the sounds of a second language (L2). There are two alternative explanations: a general psychoacoustic origin vs. a speech-specific one. A previous study showed that such individual variability is linked to the perceivers’ speech-specific capabilities, rather than the perceivers’ psychoacoustic abilities. However, we assume that the selection of participants and the parameters of the sound stimuli might not have been appropriate. Therefore, we adjusted the sound stimuli and recorded event-related potentials (ERPs) from two groups of early, proficient Cantonese (L1)-Mandarin (L2) bilinguals who differed in their mastery of the Mandarin (L2) phonetic contrast /in-ing/, to explore whether the individual differences in perceiving L2 stem from participants’ ability to discriminate various pure tones (frequency, duration and pattern). To precisely measure the participants’ acoustic discrimination, mismatch negativity (MMN) elicited by the oddball paradigm was recorded in the experiment. The results showed that significant differences between good perceivers (GPs) and poor perceivers (PPs) were found in the three general acoustic conditions (frequency, duration and pattern), and the MMN amplitude for GPs was significantly larger than for PPs. Therefore, our results support a general psychoacoustic origin of individual variability in L2 phonetic mastery. PMID:29176886

  4. Differences in phonetic discrimination stem from differences in psychoacoustic abilities in learning the sounds of a second language: Evidence from ERP research.

    PubMed

    Lin, Yi; Fan, Ruolin; Mo, Lei

    2017-01-01

    The scientific community has been divided as to the origin of individual differences in perceiving the sounds of a second language (L2). There are two alternative explanations: a general psychoacoustic origin vs. a speech-specific one. A previous study showed that such individual variability is linked to the perceivers' speech-specific capabilities, rather than the perceivers' psychoacoustic abilities. However, we assume that the selection of participants and the parameters of the sound stimuli might not have been appropriate. Therefore, we adjusted the sound stimuli and recorded event-related potentials (ERPs) from two groups of early, proficient Cantonese (L1)-Mandarin (L2) bilinguals who differed in their mastery of the Mandarin (L2) phonetic contrast /in-ing/, to explore whether the individual differences in perceiving L2 stem from participants' ability to discriminate various pure tones (frequency, duration and pattern). To precisely measure the participants' acoustic discrimination, mismatch negativity (MMN) elicited by the oddball paradigm was recorded in the experiment. The results showed that significant differences between good perceivers (GPs) and poor perceivers (PPs) were found in the three general acoustic conditions (frequency, duration and pattern), and the MMN amplitude for GPs was significantly larger than for PPs. Therefore, our results support a general psychoacoustic origin of individual variability in L2 phonetic mastery.

  5. Method and apparatus for ultrasonic doppler velocimetry using speed of sound and reflection mode pulsed wideband doppler

    DOEpatents

    Shekarriz, Alireza; Sheen, David M.

    2000-01-01

    According to the present invention, a method and apparatus rely upon tomographic measurement of the speed of sound and fluid velocity in a pipe. The invention provides a more accurate profile of velocity within flow fields where the speed of sound varies within the cross-section of the pipe. This profile is obtained by reconstruction of the velocity profile from the local speed of sound measurement simultaneously with the flow velocity. The method of the present invention is real-time tomographic ultrasonic Doppler velocimetry utilizing a plurality of ultrasonic transmission and reflection measurements along two orthogonal sets of parallel acoustic lines-of-sight. The fluid velocity profile and the acoustic velocity profile are determined by iteration between determining a fluid velocity profile and measuring local acoustic velocity until convergence is reached.
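
    The iteration described above, alternating between a fluid-velocity estimate and a local speed-of-sound estimate until convergence, can be illustrated with a drastically simplified single-path fixed-point sketch (one reflection-mode Doppler shift plus one transmission transit time; the geometry and signal values below are invented for illustration and are not from the patent):

        # Toy fixed-point version of the iteration: alternate between
        # (a) flow speed from the reflection-mode Doppler shift, which needs
        # the sound speed, and (b) sound speed from a transmission transit
        # time, which needs the flow speed, until both stop changing.
        import math

        F0 = 2.0e6                  # transmit frequency, Hz (assumed)
        THETA = math.radians(30.0)  # beam angle to the flow axis (assumed)
        L = 0.10                    # acoustic path length, m (assumed)

        def solve(doppler_shift_hz, transit_time_s, tol=1e-9):
            c, v = 1500.0, 0.0      # initial guesses (water-like sound speed)
            for _ in range(100):
                v_new = c * doppler_shift_hz / (2.0 * F0 * math.cos(THETA))
                c_new = L / transit_time_s - v_new * math.cos(THETA)
                done = abs(v_new - v) < tol and abs(c_new - c) < tol
                c, v = c_new, v_new
                if done:
                    break
            return c, v

        # Synthetic measurements generated for c = 1480 m/s, v = 1.0 m/s.
        print(solve(doppler_shift_hz=2340.6, transit_time_s=6.7528e-5))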

  6. The sound symbolism bootstrapping hypothesis for language acquisition and language evolution.

    PubMed

    Imai, Mutsumi; Kita, Sotaro

    2014-09-19

    Sound symbolism is a non-arbitrary relationship between speech sounds and meaning. We review evidence that, contrary to the traditional view in linguistics, sound symbolism is an important design feature of language, which affects online processing of language, and most importantly, language acquisition. We propose the sound symbolism bootstrapping hypothesis, claiming that (i) pre-verbal infants are sensitive to sound symbolism, due to a biologically endowed ability to map and integrate multi-modal input, (ii) sound symbolism helps infants gain referential insight for speech sounds, (iii) sound symbolism helps infants and toddlers associate speech sounds with their referents to establish a lexical representation and (iv) sound symbolism helps toddlers learn words by allowing them to focus on referents embedded in a complex scene, alleviating Quine's problem. We further explore the possibility that sound symbolism is deeply related to language evolution, drawing the parallel between historical development of language across generations and ontogenetic development within individuals. Finally, we suggest that sound symbolism bootstrapping is a part of a more general phenomenon of bootstrapping by means of iconic representations, drawing on similarities and close behavioural links between sound symbolism and speech-accompanying iconic gesture. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  7. Continuous robust sound event classification using time-frequency features and deep learning

    PubMed Central

    Song, Yan; Xiao, Wei; Phan, Huy

    2017-01-01

    The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high-performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification. PMID:28892478

  8. Continuous robust sound event classification using time-frequency features and deep learning.

    PubMed

    McLoughlin, Ian; Zhang, Haomin; Xie, Zhipeng; Song, Yan; Xiao, Wei; Phan, Huy

    2017-01-01

    The automatic detection and recognition of sound events by computers is a requirement for a number of emerging sensing and human computer interaction technologies. Recent advances in this field have been achieved by machine learning classifiers working in conjunction with time-frequency feature representations. This combination has achieved excellent accuracy for classification of discrete sounds. The ability to recognise sounds under real-world noisy conditions, called robust sound event classification, is an especially challenging task that has attracted recent research attention. Another aspect of real-world conditions is the classification of continuous, occluded or overlapping sounds, rather than classification of short isolated sound recordings. This paper addresses the classification of noise-corrupted, occluded, overlapped, continuous sound recordings. It first proposes a standard evaluation task for such sounds based upon a common existing method for evaluating isolated sound classification. It then benchmarks several high-performing isolated sound classifiers to operate with continuous sound data by incorporating an energy-based event detection front end. Results are reported for each tested system using the new task, to provide the first analysis of their performance for continuous sound event detection. In addition it proposes and evaluates a novel Bayesian-inspired front end for the segmentation and detection of continuous sound recordings prior to classification.
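
    An energy-based event detection front end of the kind benchmarked above can be as simple as short-time energy thresholding: frames whose energy exceeds the estimated noise floor by some margin are merged into candidate event segments and handed to the classifier. A minimal sketch (the frame sizes, threshold, and noise-floor heuristic are assumptions, not the paper's settings):

        # Minimal energy-based event detector: mark frames whose short-time
        # energy exceeds a threshold relative to the estimated noise floor,
        # then merge consecutive active frames into (start, end) segments.
        import numpy as np

        def detect_events(x, sr, frame_ms=25, hop_ms=10, thresh_db=6.0):
            frame = int(sr * frame_ms / 1000)
            hop = int(sr * hop_ms / 1000)
            n = 1 + max(0, (len(x) - frame) // hop)
            energy = np.array([np.sum(x[i*hop:i*hop+frame]**2) for i in range(n)])
            energy_db = 10 * np.log10(energy + 1e-12)
            floor = np.percentile(energy_db, 10)      # crude noise-floor estimate
            active = energy_db > floor + thresh_db
            segments, start = [], None                # merge frames into segments
            for i, a in enumerate(active):
                if a and start is None:
                    start = i
                elif not a and start is not None:
                    segments.append((start * hop / sr, (i * hop + frame) / sr))
                    start = None
            if start is not None:
                segments.append((start * hop / sr, (n * hop + frame) / sr))
            return segments

        # Example: 1 s of noise with a louder burst from 0.4 s to 0.6 s.
        sr = 16000
        rng = np.random.default_rng(1)
        x = 0.01 * rng.standard_normal(sr)
        x[int(0.4*sr):int(0.6*sr)] += 0.2 * rng.standard_normal(int(0.2*sr))
        print(detect_events(x, sr))   # roughly [(0.4, 0.6)]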

  9. To Modulate and Be Modulated: Estrogenic Influences on Auditory Processing of Communication Signals within a Socio-Neuro-Endocrine Framework

    PubMed Central

    Yoder, Kathleen M.; Vicario, David S.

    2012-01-01

    Gonadal hormones modulate behavioral responses to sexual stimuli, and communication signals can also modulate circulating hormone levels. In several species, these combined effects appear to underlie a two-way interaction between circulating gonadal hormones and behavioral responses to socially salient stimuli. Recent work in songbirds has shown that manipulating local estradiol levels in the auditory forebrain produces physiological changes that affect discrimination of conspecific vocalizations and can affect behavior. These studies provide new evidence that estrogens can directly alter auditory processing and indirectly alter the behavioral response to a stimulus. These studies show that: 1. Local estradiol action within an auditory area is necessary for socially-relevant sounds to induce normal physiological responses in the brains of both sexes; 2. These physiological effects occur much more quickly than predicted by the classical time-frame for genomic effects; 3. Estradiol action within the auditory forebrain enables behavioral discrimination among socially-relevant sounds in males; and 4. Estradiol is produced locally in the male brain during exposure to particular social interactions. The accumulating evidence suggests a socio-neuro-endocrinology framework in which estradiol is essential to auditory processing, is increased by a socially relevant stimulus, acts rapidly to shape perception of subsequent stimuli experienced during social interactions, and modulates behavioral responses to these stimuli. Brain estrogens are likely to function similarly in both songbird sexes because aromatase and estrogen receptors are present in both male and female forebrain. Estrogenic modulation of perception in songbirds and perhaps other animals could fine-tune male advertising signals and female ability to discriminate them, facilitating mate selection by modulating behaviors. PMID:22201281

  10. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners with Bilateral and with Hearing-Preservation Cochlear Implants

    ERIC Educational Resources Information Center

    Loiselle, Louise H.; Dorman, Michael F.; Yost, William A.; Cook, Sarah J.; Gifford, Rene H.

    2016-01-01

    Purpose: To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Methods: Eleven bilateral listeners with MED-EL…

  11. The Impact of Masker Fringe and Masker Spatial Uncertainty on Sound Localization

    DTIC Science & Technology

    2010-09-01

    spatial uncertainty on sound localization and to examine how such effects might be related to binaural detection and informational masking. ...results from the binaural detection literature and suggest that a longer duration fringe provides a more robust context against which to judge the...results from the binaural detection literature, which suggest that forward masker fringe provides a greater benefit than backward masker fringe [2].

  12. Dragon Ears airborne acoustic array: CSP analysis applied to cross array to compute real-time 2D acoustic sound field

    NASA Astrophysics Data System (ADS)

    Cerwin, Steve; Barnes, Julie; Kell, Scott; Walters, Mark

    2003-09-01

    This paper describes development and application of a novel method to accomplish real-time solid angle acoustic direction finding using two 8-element orthogonal microphone arrays. The developed prototype system was intended for localization and signature recognition of ground-based sounds from a small UAV. Recent advances in computer speeds have enabled the implementation of microphone arrays in many audio applications. Still, the real-time presentation of a two-dimensional sound field for the purpose of audio target localization is computationally challenging. In order to overcome this challenge, a crosspower spectrum phase (CSP) technique [1] was applied to each 8-element arm of a 16-element cross array to provide audio target localization. In this paper, we describe the technique and compare it with two other commonly used techniques: Cross-Spectral Matrix [2] and MUSIC [3]. The results show that the CSP technique applied to two 8-element orthogonal arrays provides a computationally efficient solution with reasonable accuracy and tolerable artifacts, sufficient for real-time applications. Additional topics include development of a synchronized 16-channel transmitter and receiver to relay the airborne data to the ground-based processor and presentation of test data demonstrating both ground-mounted operation and airborne localization of ground-based gunshots and loud engine sounds.
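
    The crosspower spectrum phase technique applied above is, in essence, a phase-transform cross-correlation (elsewhere called GCC-PHAT): whiten the cross-spectrum of two microphone signals so that only phase, i.e., delay, information survives, then take the lag of the peak of its inverse transform. A hedged two-microphone sketch (the actual system extends this across two 8-element arms):

        # Crosspower spectrum phase (CSP / GCC-PHAT) delay estimate between
        # two microphone channels: whiten the cross-spectrum, inverse-FFT,
        # and take the lag of the peak.
        import numpy as np

        def csp_delay(x1, x2):
            n = len(x1) + len(x2)
            X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
            cross = X1 * np.conj(X2)
            cross /= np.abs(cross) + 1e-12      # phase transform (whitening)
            cc = np.fft.irfft(cross, n)
            cc = np.concatenate((cc[-(n // 2):], cc[:n // 2 + 1]))  # center lag 0
            return np.argmax(cc) - n // 2       # lag in samples (>0: x1 lags x2)

        # Example: the same noise burst arriving 23 samples later at mic 2.
        rng = np.random.default_rng(2)
        s = rng.standard_normal(4096)
        x1, x2 = s, np.roll(s, 23)
        print(csp_delay(x1, x2))   # -23: x1 leads x2 by 23 samples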

  13. Phonological Encoding in Speech-Sound Disorder: Evidence from a Cross-Modal Priming Experiment

    ERIC Educational Resources Information Center

    Munson, Benjamin; Krause, Miriam O. P.

    2017-01-01

    Background: Psycholinguistic models of language production provide a framework for determining the locus of language breakdown that leads to speech-sound disorder (SSD) in children. Aims: To examine whether children with SSD differ from their age-matched peers with typical speech and language development (TD) in the ability phonologically to…

  14. Belief in the Claim of an Argument Increases Perceived Argument Soundness

    ERIC Educational Resources Information Center

    Wolfe, Michael B.; Kurby, Christopher A.

    2017-01-01

    We examined subjects' ability to judge the soundness of informal arguments. The argument claims matched or did not match subject beliefs. In all experiments subjects indicated beliefs about spanking and television violence in a prescreening. Subjects read one-sentence arguments consisting of a claim followed by a reason and then judged the…

  15. Examining Word Factors and Child Factors for Acquisition of Conditional Sound-Spelling Consistencies: A Longitudinal Study

    ERIC Educational Resources Information Center

    Kim, Young-Suk Grace; Petscher, Yaacov; Park, Younghee

    2016-01-01

    It has been suggested that children acquire spelling by picking up conditional sound-spelling consistencies. To examine this hypothesis, we investigated how variation in word characteristics (words that vary systematically in terms of phoneme-grapheme correspondences) and child factors (individual differences in the ability to extract…

  16. Bilateral Capacity for Speech Sound Processing in Auditory Comprehension: Evidence from Wada Procedures

    ERIC Educational Resources Information Center

    Hickok, G.; Okada, K.; Barr, W.; Pa, J.; Rogalsky, C.; Donnelly, K.; Barde, L.; Grant, A.

    2008-01-01

    Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics), nonetheless comprehend isolated…

  17. Clarifying the Associations between Language and Social Development in Autism: A Study of Non-Native Phoneme Recognition

    ERIC Educational Resources Information Center

    Constantino, John N.; Yang, Dan; Gray, Teddi L.; Gross, Maggie M.; Abbacchi, Anna M.; Smith, Sarah C.; Kohn, Catherine E.; Kuhl, Patricia K.

    2007-01-01

    Autism spectrum disorders (ASDs) are characterized by correlated deficiencies in social and language development. This study explored a fundamental aspect of auditory information processing (AIP) that is dependent on social experience and critical to early language development: the ability to compartmentalize close-sounding speech sounds into…

  18. Perception of Spectral Contrast by Hearing-Impaired Listeners

    ERIC Educational Resources Information Center

    Dreisbach, Laura E.; Leek, Marjorie R.; Lentz, Jennifer J.

    2005-01-01

    The ability to discriminate the spectral shapes of complex sounds is critical to accurate speech perception. Part of the difficulty experienced by listeners with hearing loss in understanding speech sounds in noise may be related to a smearing of the internal representation of the spectral peaks and valleys because of the loss of sensitivity and…

  19. Native and Novel Language Prosodic Sensitivity in English-Speaking Children with and without Dyslexia

    ERIC Educational Resources Information Center

    Anderson, Alida; Lin, Candise Y.; Wang, Min

    2013-01-01

    Children with reading disability and normal reading development were compared in their ability to discriminate native (English) and novel language (Mandarin) from nonlinguistic sounds. Children's preference for native versus novel language sounds and for disyllables containing dominant trochaic versus non-dominant iambic stress patterns was also…

  20. Anxiety sensitivity and auditory perception of heartbeat.

    PubMed

    Pollock, R A; Carter, A S; Amir, N; Marks, L E

    2006-12-01

    Anxiety sensitivity (AS) is the fear of sensations associated with autonomic arousal. AS has been associated with the development and maintenance of panic disorder. Given that panic patients often rate cardiac symptoms as the most fear-provoking feature of a panic attack, AS individuals may be especially responsive to cardiac stimuli. Consequently, we developed a signal-in-white-noise detection paradigm to examine the strategies that high and low AS individuals use to detect and discriminate normal and abnormal heartbeat sounds. Compared to low AS individuals, high AS individuals demonstrated a greater propensity to report the presence of normal, but not abnormal, heartbeat sounds. High and low AS individuals did not differ in their ability to perceive normal heartbeat sounds against a background of white noise; however, high AS individuals consistently demonstrated lower ability to discriminate abnormal heartbeats from background noise and between abnormal and normal heartbeats. AS was characterized by an elevated false alarm rate across all tasks. These results suggest that heartbeat sounds may be fear-relevant cues for AS individuals, and may affect their attention and perception in tasks involving threat signals.
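
    In a yes/no detection paradigm of this kind, sensitivity and response bias are conventionally separated with signal detection theory: d' = z(hit rate) - z(false-alarm rate), with criterion c = -(z(H) + z(F))/2, so an elevated false-alarm rate with an unchanged hit rate shows up as a liberal criterion and a lower d'. A brief sketch (the trial counts are invented for illustration, not the study's data):

        # Separate detection sensitivity (d') from response bias (criterion c)
        # using hit and false-alarm rates from a yes/no detection task.
        from statistics import NormalDist

        z = NormalDist().inv_cdf   # inverse standard-normal CDF

        def dprime_and_criterion(hits, misses, fas, crs):
            h = hits / (hits + misses)      # hit rate
            f = fas / (fas + crs)           # false-alarm rate
            d_prime = z(h) - z(f)
            criterion = -(z(h) + z(f)) / 2  # negative = liberal ("yes"-prone)
            return d_prime, criterion

        # Invented counts: same hit rate, but more false alarms in group B.
        print(dprime_and_criterion(hits=80, misses=20, fas=10, crs=90))  # higher d'
        print(dprime_and_criterion(hits=80, misses=20, fas=30, crs=70))  # lower d', liberal c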

  1. Resonant modal group theory of membrane-type acoustical metamaterials for low-frequency sound attenuation

    NASA Astrophysics Data System (ADS)

    Ma, Fuyin; Wu, Jiu Hui; Huang, Meng

    2015-09-01

    In order to overcome the influence of structural resonance on continuous structures and to obtain a lightweight thin-layer structure that can effectively isolate low-frequency noise, an elastic membrane structure was proposed. In the low-frequency range below 500 Hz, the sound transmission loss (STL) of this membrane-type structure is much higher than that of EVA (ethylene-vinyl acetate copolymer), the current vehicle sound insulation material, so it may be possible to replace EVA with the membrane-type metamaterial structure in engineering practice. Based on the band structure, modal shapes, and sound transmission simulations, the sound insulation mechanism of the designed membrane-type acoustic metamaterial was analyzed from a new perspective and validated experimentally. It is suggested that, for this membrane-mass structure in the frequency range above 200 Hz, the sound insulation effect is principally due not to the low-frequency locally resonant mode of the mass block but to the continuous vertical resonant modes of the localized membrane. Based on this physical property, a resonant modal group theory is proposed in this paper. In addition, the sound insulation mechanisms of the membrane-type structure and of the thin-plate structure are combined through a membrane/plate resonance theory.
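
    For scale, the baseline against which any thin sound-insulating layer is judged at low frequency is the limp-panel mass law; membrane-type metamaterials are of interest precisely where they beat this curve without added mass. A sketch of the normal-incidence mass law (the surface density below is an assumed, EVA-like value, not a figure from the paper):

        # Normal-incidence mass law for a limp panel: the baseline
        # transmission loss that membrane-type metamaterials aim to beat
        # at low frequency without adding mass.
        import math

        RHO0_C0 = 415.0   # characteristic impedance of air, Pa*s/m

        def mass_law_tl(freq_hz, surface_density_kg_m2):
            x = math.pi * freq_hz * surface_density_kg_m2 / RHO0_C0
            return 10 * math.log10(1 + x * x)   # transmission loss, dB

        # Example: ~2 kg/m^2 barrier layer (assumed value).
        for f in (100, 200, 500):
            print(f, "Hz:", round(mass_law_tl(f, 2.0), 1), "dB")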

  2. Case studies of pre-engineered and manufactured sound isolation rooms for music practice and radio broadcast

    NASA Astrophysics Data System (ADS)

    Probst, Ron N.; Rypka, Dann

    2005-09-01

    Pre-engineered and manufactured sound isolation rooms were developed to ensure guaranteed sound isolation while offering the unique ability to be disassembled and relocated without loss of acoustic performance. Case studies of pre-engineered sound isolation rooms used for music practice and various radio broadcast purposes are highlighted. Three prominent universities wrestle with the challenges of growth and expansion while responding to the specialized acoustic requirements of these spaces. Reduced state funding for universities requires close examination of all options while ensuring sound isolation requirements are achieved. Changing curriculum, renovation, and new construction make pre-engineered and manufactured rooms with guaranteed acoustical performance good investments now and for the future. An added benefit is the optional integration of active acoustics to provide simulations of other spaces or venues along with the benefit of sound isolation.

  3. A Comparative Study of the Effect of Subliminal Messages on Public Speaking Ability.

    ERIC Educational Resources Information Center

    Schnell, James A.

    A study investigated the effectiveness of subliminal techniques (such as tape recorded programs) for improving public speaking ability. It was hypothesized that students who used subliminal tapes to improve public speaking ability would perform no differently from classmates who listened to identical-sounding placebo tape programs containing no…

  4. Design of laser monitoring and sound localization system

    NASA Astrophysics Data System (ADS)

    Liu, Yu-long; Xu, Xi-ping; Dai, Yu-ming; Qiao, Yang

    2013-08-01

    In this paper, a novel design for a laser monitoring and sound localization system is proposed, which uses a laser to monitor indoor conversation and to locate its position. At present, most laser monitors in China, whether used in laboratories or in instruments, employ a photodiode or phototransistor as the detector. At the receivers of such systems, the light beam is adjusted so that only part of the detector window is illuminated; vibration of the monitored window deflects the reflected beam from its original path, shifting the imaging spot on the photodiode or phototransistor. This method is limited, however, both because it admits considerable stray light into the receiver and because only a single photocurrent output can be obtained. A new method based on a quadrant detector is therefore proposed, which uses the relation of the optical integrals among the four quadrants to locate the imaging spot. This method suppresses background disturbance and acquires two-dimensional spot-vibration data directly. The principle of the system is as follows. A collimated laser beam is reflected from a window vibrating in response to the sound source, so the reflected beam is modulated by that source. The optical signals are collected by quadrant detectors and processed by photoelectric converters and the corresponding circuits, and the speech signal is then reconstructed. In addition, sound source localization is implemented by detecting three different reflected beams simultaneously: indoor mathematical models based on the principle of time difference of arrival (TDOA) are established to calculate the two-dimensional coordinates of the sound source. Experiments showed that the system can monitor an indoor sound source beyond 15 meters with high-quality speech reconstruction and can locate the sound source accurately.
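
    The TDOA localization step described above reduces to a small geometric computation: each pair of detection points yields an arrival-time difference that constrains the source to a hyperbola, and the source position is the point whose predicted differences best match the measured ones. A simplified 2D sketch using a coarse grid search (the sensor layout and source position are invented for illustration, not taken from the paper):

        # 2D TDOA localization sketch: four sensing points, arrival-time
        # differences relative to the first one, and a grid search for the
        # position whose predicted TDOAs best match the measured ones.
        import numpy as np

        C = 343.0  # speed of sound in air, m/s (room temperature assumed)
        sensors = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])

        def predicted_tdoas(p):
            d = np.linalg.norm(sensors - p, axis=1)
            return (d[1:] - d[0]) / C  # time differences vs. sensor 0

        def locate(measured, span=10.0, step=0.05):
            xs = np.arange(-span, span, step)
            grid = np.stack(np.meshgrid(xs, xs, indexing="ij"), axis=-1)
            d = np.linalg.norm(grid[..., None, :] - sensors, axis=-1)
            tdoas = (d[..., 1:] - d[..., :1]) / C
            err = np.sum((tdoas - measured) ** 2, axis=-1)  # least squares
            i, j = np.unravel_index(np.argmin(err), err.shape)
            return xs[i], xs[j]

        # Noiseless synthetic source at (3, 4), recovered to grid resolution.
        true_p = np.array([3.0, 4.0])
        print(locate(predicted_tdoas(true_p)))  # ~(3.0, 4.0)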

  5. Active control of sound transmission through partitions composed of discretely controlled modules

    NASA Astrophysics Data System (ADS)

    Leishman, Timothy W.

    This thesis provides a detailed theoretical and experimental investigation of active segmented partitions (ASPs) for the control of sound transmission. ASPs are physically segmented arrays of interconnected acoustically and structurally small modules that are discretely controlled using electronic controllers. Theoretical analyses of the thesis first address physical principles fundamental to ASP modeling and experimental measurement techniques. Next, they explore specific module configurations, primarily using equivalent circuits. Measured normal-incidence transmission losses and related properties of experimental ASPs are determined using plane wave tubes and the two-microphone transfer function technique. A scanning laser vibrometer is also used to evaluate distributed transmitting surface vibrations. ASPs have the inherent potential to provide excellent active sound transmission control (ASTC) through lightweight structures, using very practical control strategies. The thesis analyzes several unique ASP configurations and evaluates their abilities to produce high transmission losses via global minimization of normal transmitting surface vibrations. A novel dual diaphragm configuration is shown to employ this strategy particularly well. It uses an important combination of acoustical actuation and mechano-acoustical segmentation to produce exceptionally high transmission loss (e.g., 50 to 80 dB) over a broad frequency range, including lower audible frequencies. Such performance is shown to be comparable to that produced by much more massive partitions composed of thick layers of steel or concrete and sand. The configuration uses only simple localized error sensors and actuators, permitting effective use of independent single-channel controllers in a decentralized format. This work counteracts the commonly accepted notion that active vibration control of partitions is an ineffective means of controlling sound transmission. With appropriate construction, actuation, and error sensing, ASPs can achieve high sound transmission loss through efficient global control of transmitting surface vibrations. This approach is applicable to a wide variety of source and receiving spaces, and to both near fields and far fields.

  6. [Usefulness of peristalsis, flatulence and evacuation for predicting oral route tolerance in patients subjected to major abdominal surgery].

    PubMed

    Hernández-Hernández, Betsabé; Figueroa-Gallaga, Luis; Sánchez-Castrillo, Christian; Belmonte-Montes, Carlos

    2007-01-01

    To evaluate the usefulness of the presence of bowel sounds, flatus, and bowel movements for predicting tolerance of oral intake in patients following major abdominal surgery. Nutrition is one of the most important factors in the management of postoperative care, and early oral intake has been shown to contribute to a faster recovery. Traditionally, postoperative feeding after major abdominal surgery is delayed until bowel sounds, flatus, and/or bowel movements are present, although there is not enough medical evidence for their usefulness. We studied 88 patients following major abdominal surgery, registering the presence of bowel sounds, flatus, and bowel movements every 24 hours in the postoperative period, and analyzed the relationship between these signs and the ability to tolerate oral intake. Predictive values, sensitivity, specificity, and ROC curves were calculated. The results showed that bowel sounds have an acceptable sensitivity but a very low specificity for predicting the ability to tolerate oral intake. Unlike bowel sounds, bowel movements showed a low sensitivity and a high specificity, while flatus had intermediate sensitivity and specificity. In this study, none of these signs proved to be a reliable indicator for beginning oral feeding, because their usefulness was moderate to low.
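
    For reference, the screening statistics named in the abstract follow directly from a 2x2 confusion matrix. A minimal sketch with made-up counts, not the study's data:

```python
# Screening metrics from a 2x2 confusion matrix: sensitivity, specificity,
# and predictive values. The counts are invented for illustration only.
def screening_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # P(sign present | tolerated oral intake)
    specificity = tn / (tn + fp)   # P(sign absent  | did not tolerate)
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for "bowel sounds present" vs. oral-intake tolerance.
print(screening_metrics(tp=50, fp=20, fn=5, tn=13))
```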

  7. Broadband Focusing Acoustic Lens Based on Fractal Metamaterials

    PubMed Central

    Song, Gang Yong; Huang, Bei; Dong, Hui Yuan; Cheng, Qiang; Cui, Tie Jun

    2016-01-01

    Acoustic metamaterials are artificial structures which can manipulate sound waves through their unconventional effective properties. Different from the locally resonant elements proposed in earlier studies, we propose an alternate route to realize acoustic metamaterials with both low loss and large refractive indices. We describe a new kind of acoustic metamaterial element with fractal geometry. Due to the self-similar properties of the proposed structure, broadband acoustic responses may arise within a broad frequency range, making it a good candidate for a number of applications, such as super-resolution imaging and acoustic tunneling. A flat acoustic lens is designed and experimentally verified using this approach, showing excellent focusing abilities between 2 kHz and 5 kHz in the measured results. PMID:27782216

  8. Displaying Composite and Archived Soundings in the Advanced Weather Interactive Processing System

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Volkmer, Matthew R.; Blottman, Peter F.; Sharp, David W.

    2008-01-01

    In a previous task, the Applied Meteorology Unit (AMU) developed spatial and temporal climatologies of lightning occurrence based on eight atmospheric flow regimes. The AMU created climatological, or composite, soundings of wind speed and direction, temperature, and dew point temperature at four rawinsonde observation stations at Jacksonville, Tampa, Miami, and Cape Canaveral Air Force Station, for each of the eight flow regimes. The composite soundings were delivered to the National Weather Service (NWS) Melbourne (MLB) office for display using the National version of the Skew-T Hodograph analysis and Research Program (NSHARP) software program. The NWS MLB requested the AMU make the composite soundings available for display in the Advanced Weather Interactive Processing System (AWIPS), so they could be overlaid on current observed soundings. This will allow the forecasters to compare the current state of the atmosphere with climatology. This presentation describes how the AMU converted the composite soundings from NSHARP Archive format to Network Common Data Form (NetCDF) format, so that the soundings could be displayed in AWIPS. The NetCDF is a set of data formats, programming interfaces, and software libraries used to read and write scientific data files. In AWIPS, each meteorological data type, such as soundings or surface observations, has a unique NetCDF format. Each format is described by a NetCDF template file. Although NetCDF files are in binary format, they can be converted to a text format called network Common data form Description Language (CDL). A software utility called ncgen is used to create a NetCDF file from a CDL file, while the ncdump utility is used to create a CDL file from a NetCDF file. AWIPS receives soundings in Binary Universal Form for the Representation of Meteorological data (BUFR) format (http://dss.ucar.edu/docs/formats/bufr/), and then decodes them into NetCDF format. Only two sounding files are generated in AWIPS per day. One file contains all of the soundings received worldwide between 0000 UTC and 1200 UTC, and the other includes all soundings between 1200 UTC and 0000 UTC. In order to add the composite soundings into AWIPS, a procedure was created to configure, or localize, AWIPS. This involved modifying and creating several configuration text files. A unique four-character site identifier was created for each of the 32 soundings so each could be viewed separately. The first three characters were based on the site identifier of the observed sounding, while the last character was based on the flow regime. While researching the localization process for soundings, the AMU discovered a method of archiving soundings so old soundings would not get purged automatically by AWIPS. This method could provide an alternative way of localizing AWIPS for composite soundings. In addition, this would allow forecasters to use archived soundings in AWIPS for case studies. A test sounding file in NetCDF format was written in order to verify the correct format for soundings in AWIPS. After the file was viewed successfully in AWIPS, the AMU wrote a software program in the Tool Command Language/Tool Kit (Tcl/Tk) language to convert the 32 composite soundings from NSHARP Archive to CDL format. The ncgen utility was then used to convert the CDL file to a NetCDF file. The NetCDF file could then be read and displayed in AWIPS.
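
    As a concrete illustration of the NetCDF workflow described above, the sketch below writes a minimal sounding file with the netCDF4-python package. The dimension and variable names are illustrative assumptions; the real AWIPS sounding template defines a much larger set of variables.

```python
# Minimal sketch of writing a sounding to a classic-format NetCDF file.
# Variable names and dimensions are illustrative, not the AWIPS template.
from netCDF4 import Dataset
import numpy as np

nc = Dataset("composite_sounding.nc", "w", format="NETCDF3_CLASSIC")
nc.createDimension("level", 5)

pressure = nc.createVariable("pressure", "f4", ("level",))
temperature = nc.createVariable("temperature", "f4", ("level",))
pressure.units = "hPa"
temperature.units = "K"

pressure[:] = np.array([1000.0, 925.0, 850.0, 700.0, 500.0])
temperature[:] = np.array([298.0, 293.0, 289.0, 280.0, 265.0])
nc.close()
```

    Running ncdump on the resulting file prints its CDL text form, and ncgen converts such CDL text back to binary NetCDF, which is the round trip the presentation relies on.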

  9. Sound credit scores and financial decisions despite cognitive aging

    PubMed Central

    Li, Ye; Gao, Jie; Enkavi, A. Zeynep; Zaval, Lisa; Weber, Elke U.; Johnson, Eric J.

    2015-01-01

    Age-related deterioration in cognitive ability may compromise the ability of older adults to make major financial decisions. We explore whether knowledge and expertise accumulated from past decisions can offset cognitive decline to maintain decision quality over the life span. Using a unique dataset that combines measures of cognitive ability (fluid intelligence) and of general and domain-specific knowledge (crystallized intelligence), credit report data, and other measures of decision quality, we show that domain-specific knowledge and expertise provide an alternative route for sound financial decisions. That is, cognitive aging does not spell doom for financial decision-making in domains where the decision maker has developed expertise. These results have important implications for public policy and for the design of effective interventions and decision aids. PMID:25535381

  10. High-sensitivity acoustic sensors from nanofibre webs.

    PubMed

    Lang, Chenhong; Fang, Jian; Shao, Hao; Ding, Xin; Lin, Tong

    2016-03-23

    Considerable interest has been devoted to converting mechanical energy into electricity using polymer nanofibres. In particular, piezoelectric nanofibres produced by electrospinning have shown remarkable mechanical energy-to-electricity conversion ability. However, there is little data for the acoustic-to-electric conversion of electrospun nanofibres. Here we show that electrospun piezoelectric nanofibre webs have a strong acoustic-to-electric conversion ability. Using poly(vinylidene fluoride) as a model polymer and a sensor device that transfers sound directly to the nanofibre layer, we show that the sensor devices can detect low-frequency sound with a sensitivity as high as 266 mV/Pa. They can precisely distinguish sound waves in the low-to-middle-frequency region. These features make them especially suitable for noise detection. Our nanofibre device has more than five times higher sensitivity than a commercial piezoelectric poly(vinylidene fluoride) film device. Electrospun piezoelectric nanofibres may be useful for developing high-performance acoustic sensors.
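
    To put the quoted sensitivity in perspective, the sketch below converts a sound pressure level to the sensor's expected output voltage; the SPL value is an arbitrary illustration, not a test condition from the paper.

```python
# Back-of-envelope output of a sensor with the quoted 266 mV/Pa sensitivity.
SENSITIVITY_V_PER_PA = 0.266        # 266 mV/Pa
P_REF = 20e-6                       # reference sound pressure in air, Pa

def output_voltage(spl_db):
    """Expected output amplitude for a tone at the given SPL."""
    pressure_pa = P_REF * 10 ** (spl_db / 20)
    return SENSITIVITY_V_PER_PA * pressure_pa

print(f"{output_voltage(94):.3f} V")   # 94 dB SPL is about 1 Pa -> ~0.27 V
```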

  12. Relationships between early literacy and nonlinguistic rhythmic processes in kindergarteners.

    PubMed

    Ozernov-Palchik, Ola; Wolf, Maryanne; Patel, Aniruddh D

    2018-03-01

    A growing number of studies report links between nonlinguistic rhythmic abilities and certain linguistic abilities, particularly phonological skills. The current study investigated the relationship between nonlinguistic rhythmic processing, phonological abilities, and early literacy abilities in kindergarteners. A distinctive aspect of the current work was the exploration of whether processing of different types of rhythmic patterns is differentially related to kindergarteners' phonological and reading-related abilities. Specifically, we examined the processing of metrical versus nonmetrical rhythmic patterns, that is, patterns capable of being subdivided into equal temporal intervals or not (Povel & Essens, 1985). This is an important comparison because most music involves metrical sequences, in which rhythm often has an underlying temporal grid of isochronous units. In contrast, nonmetrical sequences are arguably more typical to speech rhythm, which is temporally structured but does not involve an underlying grid of equal temporal units. A rhythm discrimination app with metrical and nonmetrical patterns was administered to 74 kindergarteners in conjunction with cognitive and preliteracy measures. Findings support a relationship among rhythm perception, phonological awareness, and letter-sound knowledge (an essential precursor of reading). A mediation analysis revealed that the association between rhythm perception and letter-sound knowledge is mediated through phonological awareness. Furthermore, metrical perception accounted for unique variance in letter-sound knowledge above all other language and cognitive measures. These results point to a unique role for temporal regularity processing in the association between musical rhythm and literacy in young children.
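
    The mediation claim above follows the usual regression logic: the effect of rhythm on letter-sound knowledge should shrink once phonological awareness is controlled for. A minimal sketch with simulated data follows; the variable names are illustrative, not the study's actual measures.

```python
# Simple regression-based mediation sketch (X -> M -> Y) with simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 74                                                # cohort size from abstract
rhythm = rng.normal(size=n)                           # predictor X
phono = 0.6 * rhythm + rng.normal(scale=0.8, size=n)  # mediator M
letters = 0.5 * phono + rng.normal(scale=0.8, size=n) # outcome Y

# Total effect of X on Y.
total = sm.OLS(letters, sm.add_constant(rhythm)).fit()
# Direct effect of X controlling for M; a shrunken X coefficient alongside
# a substantial M coefficient is the pattern consistent with mediation.
X = sm.add_constant(np.column_stack([rhythm, phono]))
direct = sm.OLS(letters, X).fit()

print("total effect of rhythm:", round(total.params[1], 3))
print("direct effect controlling for phono:", round(direct.params[1], 3))
```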

  13. Optical microphone

    DOEpatents

    Veligdan, James T.

    2000-01-11

    An optical microphone includes a laser and beam splitter cooperating therewith for splitting a laser beam into a reference beam and a signal beam. A reflecting sensor receives the signal beam and reflects it in a plurality of reflections through sound pressure waves. A photodetector receives both the reference beam and reflected signal beam for heterodyning thereof to produce an acoustic signal for the sound waves. The sound waves vary the local refractive index in the path of the signal beam which experiences a Doppler frequency shift directly analogous with the sound waves.
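
    A toy numerical sketch of the heterodyne principle the patent relies on is given below, with all frequencies scaled far below optical values so the demo is runnable: mixing the phase-modulated signal with quadrature references and low-pass filtering recovers the sound-induced phase.

```python
# Quadrature heterodyne demodulation of a phase-modulated carrier (numpy).
# Frequencies are scaled-down stand-ins for the optical case.
import numpy as np

fs = 1_000_000                      # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)
fc = 100_000                        # stand-in carrier frequency
f_sound = 1_000                     # acoustic tone, Hz
phase_mod = 0.5 * np.sin(2 * np.pi * f_sound * t)   # sound-induced phase

signal = np.cos(2 * np.pi * fc * t + phase_mod)     # beam after the sensor

# Mix with in-phase and quadrature references, low-pass, take the angle.
i_mix = signal * np.cos(2 * np.pi * fc * t)
q_mix = -signal * np.sin(2 * np.pi * fc * t)
kernel = np.ones(200) / 200         # crude moving-average low-pass filter
i_lp = np.convolve(i_mix, kernel, mode="same")
q_lp = np.convolve(q_mix, kernel, mode="same")
recovered = np.arctan2(q_lp, i_lp)  # approximately phase_mod, i.e., the sound
```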

  14. Performance on Tests of Central Auditory Processing by Individuals Exposed to High-Intensity Blasts

    DTIC Science & Technology

    2012-07-01

    percent (gap detected on at least four of the six presentations), with all longer durations receiving a score greater than 50 percent. Binaural ...Processing and Sound Localization Temporal precision of neural firing is also involved in binaural processing and localization of sound in space. The...Masking Level Difference (MLD) test evaluates the integrity of the earliest sites of binaural comparison and sensitivity to interaural phase in the

  15. Speech Understanding and Sound Source Localization by Cochlear Implant Listeners Using a Pinna-Effect Imitating Microphone and an Adaptive Beamformer.

    PubMed

    Dorman, Michael F; Natale, Sarah; Loiselle, Louise

    2018-03-01

    Sentence understanding scores for patients with cochlear implants (CIs) when tested in quiet are relatively high. However, sentence understanding scores for patients with CIs plummet with the addition of noise. To assess, for patients with CIs (MED-EL), (1) the value to speech understanding of two new, noise-reducing microphone settings and (2) the effect of the microphone settings on sound source localization. Single-subject, repeated measures design. For tests of speech understanding, repeated measures on (1) number of CIs (one, two), (2) microphone type (omni, natural, adaptive beamformer), and (3) type of noise (restaurant, cocktail party). For sound source localization, repeated measures on type of signal (low-pass [LP], high-pass [HP], broadband noise). Ten listeners, ranging in age from 48 to 83 yr (mean = 57 yr), participated in this prospective study. Speech understanding was assessed in two noise environments using monaural and bilateral CIs fit with three microphone types. Sound source localization was assessed using three microphone types. In Experiment 1, sentence understanding scores (in terms of percent words correct) were obtained in quiet and in noise. For each patient, noise was first added to the signal to drive performance off of the ceiling in the bilateral CI-omni microphone condition. The other conditions were then administered at that signal-to-noise ratio in quasi-random order. In Experiment 2, sound source localization accuracy was assessed for three signal types using a 13-loudspeaker array over a 180° arc. The dependent measure was root-mean-square error. Both the natural and adaptive microphone settings significantly improved speech understanding in the two noise environments. The magnitude of the improvement varied between 16 and 19 percentage points for tests conducted in the restaurant environment and between 19 and 36 percentage points for tests conducted in the cocktail party environment. In the restaurant and cocktail party environments, both the natural and adaptive settings, when implemented on a single CI, allowed scores that were as good as, or better than, scores in the bilateral omni test condition. Sound source localization accuracy was unaltered by either the natural or adaptive settings for LP, HP, or wideband noise stimuli. The data support the use of the natural microphone setting as a default setting. The natural setting (1) provides better speech understanding in noise than the omni setting, (2) does not impair sound source localization, and (3) retains low-frequency sensitivity to signals from the rear. Moreover, bilateral CIs equipped with adaptive beamforming technology can engender speech understanding scores in noise that fall only a little short of scores for a single CI in quiet.
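
    The localization scoring described above, an RMS error over a 13-loudspeaker 180° arc, can be computed as in the sketch below; the responses are invented to illustrate the computation.

```python
# RMS localization error between presented and reported loudspeaker azimuths.
# The "responses" are simulated, not data from the study.
import numpy as np

speakers = np.linspace(-90, 90, 13)               # azimuths, degrees

presented = np.random.default_rng(1).choice(speakers, size=25)
jitter = np.random.default_rng(2).normal(scale=15, size=25)
reported = np.clip(presented + jitter, -90, 90)   # hypothetical pointing data

rms_error = np.sqrt(np.mean((reported - presented) ** 2))
print(f"RMS localization error: {rms_error:.1f} deg")
```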

  16. Reach on sound: a key to object permanence in visually impaired children.

    PubMed

    Fazzi, Elisa; Signorini, Sabrina Giovanna; Bomba, Monica; Luparia, Antonella; Lanners, Josée; Balottin, Umberto

    2011-04-01

    The capacity to reach for an object presented through a sound cue indicates, in the blind child, the acquisition of object permanence and gives information about his/her cognitive development. To assess cognitive development in congenitally blind children with or without multiple disabilities. Cohort study. Thirty-seven congenitally blind subjects (17 with associated multiple disabilities, 20 mainly blind) were enrolled. We used Bigelow's protocol to evaluate "reach on sound" capacity over time (at 6, 12, 18, 24, and 36 months), and a battery of clinical, neurophysiological and cognitive instruments to assess clinical features. Tasks 1 to 5 were acquired by most of the mainly blind children by 12 months of age. Task 6 coincided with a drop in performance, and the acquisition of the subsequent tasks showed a less age-homogeneous pattern. In blind children with multiple disabilities, task acquisition rates were lower, with the curves dipping in relation to the more complex tasks. The mainly blind subjects managed to overcome Fraiberg's "conceptual problem", i.e., they acquired the ability to attribute identity and substance to an external object even when it manifested its presence through sound only, and thus developed the ability to reach for an object presented through sound. Instead, most of the blind children with multiple disabilities performed poorly on the "reach on sound" protocol and were unable, before 36 months of age, to develop the strategies needed to resolve Fraiberg's "conceptual problem".

  17. Behavioral and Neural Discrimination of Speech Sounds After Moderate or Intense Noise Exposure in Rats

    PubMed Central

    Reed, Amanda C.; Centanni, Tracy M.; Borland, Michael S.; Matney, Chanel J.; Engineer, Crystal T.; Kilgard, Michael P.

    2015-01-01

    Objectives Hearing loss is a commonly experienced disability in a variety of populations including veterans and the elderly and can often cause significant impairment in the ability to understand spoken language. In this study, we tested the hypothesis that neural and behavioral responses to speech will be differentially impaired in an animal model after two forms of hearing loss. Design Sixteen female Sprague–Dawley rats were exposed to one of two types of broadband noise which was either moderate or intense. In nine of these rats, auditory cortex recordings were taken 4 weeks after noise exposure (NE). The other seven were pretrained on a speech sound discrimination task prior to NE and were then tested on the same task after hearing loss. Results Following intense NE, rats had few neural responses to speech stimuli. These rats were able to detect speech sounds but were no longer able to discriminate between speech sounds. Following moderate NE, rats had reorganized cortical maps and altered neural responses to speech stimuli but were still able to accurately discriminate between similar speech sounds during behavioral testing. Conclusions These results suggest that rats are able to adjust to the neural changes after moderate NE and discriminate speech sounds, but they are not able to recover behavioral abilities after intense NE. Animal models could help clarify the adaptive and pathological neural changes that contribute to speech processing in hearing-impaired populations and could be used to test potential behavioral and pharmacological therapies. PMID:25072238

  18. Regeneration of Sensory Hair Cells Requires Localized Interactions between the Notch and Wnt Pathways.

    PubMed

    Romero-Carvajal, Andrés; Navajas Acedo, Joaquín; Jiang, Linjia; Kozlovskaja-Gumbrienė, Agnė; Alexander, Richard; Li, Hua; Piotrowski, Tatjana

    2015-08-10

    In vertebrates, mechano-electrical transduction of sound is accomplished by sensory hair cells. Whereas mammalian hair cells are not replaced when lost, in fish they constantly renew and regenerate after injury. In vivo tracking and cell fate analyses of all dividing cells during lateral line hair cell regeneration revealed that support and hair cell progenitors localize to distinct tissue compartments. Importantly, we find that the balance between self-renewal and differentiation in these compartments is controlled by spatially restricted Notch signaling and its inhibition of Wnt-induced proliferation. The ability to simultaneously study and manipulate individual cell behaviors and multiple pathways in vivo transforms the lateral line into a powerful paradigm to mechanistically dissect sensory organ regeneration. The striking similarities to other vertebrate stem cell compartments uniquely place zebrafish to help elucidate why mammals possess such low capacity to regenerate hair cells.

  19. Using Sound Knowledge to Teach about Noise-Induced Hearing Loss

    ERIC Educational Resources Information Center

    McDonnough, Jacqueline T.; Matkins, Juanita Jo

    2007-01-01

    Throughout our lives we are surrounded by sounds in our environment. Our ability to hear plays an essential part in our everyday existence. Students should develop an understanding of the role technology plays in personal and social decisions. If we are to meet these goals we need to integrate aspects of responsible behavior toward hearing health…

  20. Intersensory Redundancy Facilitates Learning of Arbitrary Relations between Vowel Sounds and Objects in Seven-Month-Old Infants.

    ERIC Educational Resources Information Center

    Gogate, Lakshmi J.; Bahrick, Lorraine E.

    1998-01-01

    Investigated 7-month olds' ability to relate vowel sounds with objects when intersensory redundancy was present versus absent. Found that infants detected a mismatch in the vowel-object pairs in the moving-synchronous condition but not in the still or moving-asynchronous condition, demonstrating that temporal synchrony between vocalizations and…

  1. Evidence for a Familial Speech Sound Disorder Subtype in a Multigenerational Study of Oral and Hand Motor Sequencing Ability

    ERIC Educational Resources Information Center

    Peter, Beate; Raskind, Wendy H.

    2011-01-01

    Purpose: To evaluate phenotypic expressions of speech sound disorder (SSD) in multigenerational families with evidence of familial forms of SSD. Method: Members of five multigenerational families (N = 36) produced rapid sequences of monosyllables and disyllables and tapped computer keys with repetitive and alternating movements. Results: Measures…

  2. The Relationship between Auditory Temporal Processing, Phonemic Awareness, and Reading Disability.

    ERIC Educational Resources Information Center

    Bretherton, Lesley; Holmes, V. M.

    2003-01-01

    Investigated the relationship between auditory temporal processing of nonspeech sounds and phonological awareness ability in 8- to 12-year-olds with a reading disability, placed in groups based on performance on Tallal's tone-order judgment task. Found that a tone-order deficit did not relate to performance on order processing of speech sounds, to…

  3. Video and Sound Production: Flip out! Game on!

    ERIC Educational Resources Information Center

    Hunt, Marc W.

    2013-01-01

    The author started teaching TV and sound production in a career and technical education (CTE) setting six years ago. The first couple months of teaching provided a steep learning curve for him. He is highly experienced in his industry, but teaching the content presented a new set of obstacles. His students had a broad range of abilities,…

  4. The Importance of "What": Infants Use Featural Information to Index Events

    ERIC Educational Resources Information Center

    Kirkham, Natasha Z.; Richardson, Daniel C.; Wu, Rachel; Johnson, Scott P.

    2012-01-01

    Dynamic spatial indexing is the ability to encode, remember, and track the location of complex events. For example, in a previous study, 6-month-old infants were familiarized to a toy making a particular sound in a particular location, and later they fixated that empty location when they heard the sound presented alone ("Journal of Experimental…

  5. Oral and Hand Movement Speeds Are Associated with Expressive Language Ability in Children with Speech Sound Disorder

    ERIC Educational Resources Information Center

    Peter, Beate

    2012-01-01

    This study tested the hypothesis that children with speech sound disorder have generalized slowed motor speeds. It evaluated associations among oral and hand motor speeds and measures of speech (articulation and phonology) and language (receptive vocabulary, sentence comprehension, sentence imitation), in 11 children with moderate to severe SSD…

  6. Energy localization and frequency analysis in the locust ear.

    PubMed

    Malkin, Robert; McDonagh, Thomas R; Mhatre, Natasha; Scott, Thomas S; Robert, Daniel

    2014-01-06

    Animal ears are exquisitely adapted to capture sound energy and perform signal analysis. Studying the ear of the locust, we show how frequency signal analysis can be performed solely by using the structural features of the tympanum. Incident sound waves generate mechanical vibrational waves that travel across the tympanum. These waves shoal in a tsunami-like fashion, resulting in energy localization that focuses vibrations onto the mechanosensory neurons in a frequency-dependent manner. Using finite element analysis, we demonstrate that two mechanical properties of the locust tympanum, distributed thickness and tension, are necessary and sufficient to generate frequency-dependent energy localization.

  7. Joint inversion for transponder localization and sound-speed profile temporal variation in high-precision acoustic surveys.

    PubMed

    Li, Zhao; Dosso, Stan E; Sun, Dajun

    2016-07-01

    This letter develops a Bayesian inversion for localizing underwater acoustic transponders using a surface ship which compensates for sound-speed profile (SSP) temporal variation during the survey. The method is based on dividing observed acoustic travel-time data into time segments and including depth-independent SSP variations for each segment as additional unknown parameters to approximate the SSP temporal variation. SSP variations are estimated jointly with transponder locations, rather than calculated separately as in existing two-step inversions. Simulation and sea-trial results show this localization/SSP joint inversion performs better than two-step inversion in terms of localization accuracy, agreement with measured SSP variations, and computational efficiency.
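
    The joint estimation idea can be sketched as a single least-squares problem over both the transponder position and the per-segment sound-speed offsets. The toy version below uses invented geometry and noiseless synthetic data; the paper's Bayesian formulation is richer than this simplified fit.

```python
# Joint least-squares fit of transponder position and per-segment
# depth-independent sound-speed offsets. Geometry and data are synthetic.
import numpy as np
from scipy.optimize import least_squares

C0 = 1500.0                                   # nominal sound speed, m/s
ship = np.array([[x, 50.0, 0.0] for x in np.linspace(-200, 200, 40)])
segment = np.repeat(np.arange(4), 10)         # 4 time segments of 10 pings

true_xyz = np.array([10.0, -20.0, 300.0])     # transponder on the seafloor
true_dc = np.array([0.0, 1.5, -1.0, 0.5])     # per-segment speed offsets

d = np.linalg.norm(ship - true_xyz, axis=1)
t_obs = d / (C0 + true_dc[segment])           # synthetic travel times

def residuals(p):
    xyz, dc = p[:3], p[3:]
    r = np.linalg.norm(ship - xyz, axis=1)
    return r / (C0 + dc[segment]) - t_obs

p0 = np.concatenate([[0.0, 0.0, 250.0], np.zeros(4)])
fit = least_squares(residuals, p0)
print("position:", fit.x[:3], "speed offsets:", fit.x[3:])
```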

  8. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    PubMed

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.

  9. Atmospheric effects on microphone array analysis of aircraft vortex sound

    DOT National Transportation Integrated Search

    2006-05-08

    This paper provides the basis of a comprehensive analysis of vortex sound propagation through the atmosphere in order to assess real atmospheric effects on acoustic array processing. Such effects may impact vortex localization accuracy and detect...

  10. Oceanographic Measurements Program Review.

    DTIC Science & Technology

    1982-03-01

    prototype Advanced Microstructure Profiler (AMP) was completed and the unit was operationally tested in local waters (Lake Washington and Puget Sound)... Expendables, A.W. Green... The Development of an Air-Launched Expendable Sound Velocimeter (AXSV), Richard Bixby...

  11. Cutting Pattern Identification for Coal Mining Shearer through a Swarm Intelligence–Based Variable Translation Wavelet Neural Network

    PubMed Central

    Xu, Jing; Wang, Zhongbin; Tan, Chao; Liu, Xinhua

    2018-01-01

    As a sound signal has the advantages of non-contact measurement, compact structure, and low power consumption, it has attracted much attention in many fields. In this paper, the sound signal of the coal mining shearer is analyzed to realize accurate online cutting pattern identification and guarantee the safety quality of the working face. The original acoustic signal is first collected through an industrial microphone and decomposed by adaptive ensemble empirical mode decomposition (EEMD). A 13-dimensional set composed of the normalized energy of each level is extracted as the feature vector in the next step. Then, a swarm intelligence optimization algorithm inspired by bat foraging behavior is applied to determine key parameters of the traditional variable translation wavelet neural network (VTWNN). Moreover, a disturbance coefficient is introduced into the basic bat algorithm (BA) to overcome the disadvantages of easily falling into local extrema and limited exploration ability. The VTWNN optimized by the modified BA (VTWNN-MBA) is used as the cutting pattern recognizer. Finally, a simulation example, with an accuracy of 95.25%, and a series of comparisons are conducted to prove the effectiveness and superiority of the proposed method. PMID:29382120
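
    The feature step described above, a 13-dimensional vector of normalized per-level energies, is simple to state in code. In the sketch below, random arrays stand in for a real EEMD decomposition.

```python
# Normalized per-level energy feature vector, as described in the abstract.
# Random arrays stand in for the 13 EEMD decomposition levels.
import numpy as np

rng = np.random.default_rng(0)
imfs = rng.normal(size=(13, 4096))   # 13 levels x N samples (stand-in data)

energy = np.sum(imfs ** 2, axis=1)   # energy of each level
features = energy / energy.sum()     # 13-dimensional normalized feature vector
print(features.round(3))
```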

  12. Using Auditory Cues to Perceptually Extract Visual Data in Collaborative, Immersive Big-Data Display Systems

    NASA Astrophysics Data System (ADS)

    Lee, Wendy

    The advent of multisensory display systems, such as virtual and augmented reality, has fostered a new relationship between humans and space. Not only can these systems mimic real-world environments, they have the ability to create a new space typology made solely of data. In these spaces, two-dimensional information is displayed in three dimensions, requiring human senses to be used to understand virtual, attention-based elements. Studies in the field of big data have predominantly focused on visual representations and extractions of information, with little focus on sound. The goal of this research is to evaluate the most efficient methods of perceptually extracting visual data using auditory stimuli in immersive environments. Using Rensselaer's CRAIVE-Lab, a virtual reality space with 360-degree panorama visuals and an array of 128 loudspeakers, participants were asked questions based on complex visual displays using a variety of auditory cues ranging from sine tones to camera shutter sounds. Analysis of the speed and accuracy of participant responses revealed that auditory cues that were more favorable for localization and were positively perceived were best for data extraction and could help create more user-friendly systems in the future.

  13. Atmospheric Propagation

    NASA Technical Reports Server (NTRS)

    Embleton, Tony F. W.; Daigle, Gilles A.

    1991-01-01

    Reviewed here is the current state of knowledge with respect to each basic mechanism of sound propagation in the atmosphere and how each mechanism changes the spectral or temporal characteristics of the sound received at a distance from the source. Some of the basic processes affecting sound wave propagation which are present in any situation are discussed. They are geometrical spreading, molecular absorption, and turbulent scattering. In geometrical spreading, sound levels decrease with increasing distance from the source; there is no frequency dependence. In molecular absorption, sound energy is converted into heat as the sound wave propagates through the air; there is a strong dependence on frequency. In turbulent scattering, local variations in wind velocity and temperature induce fluctuations in phase and amplitude of the sound waves as they propagate through an inhomogeneous medium; there is a moderate dependence on frequency.
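
    The first two mechanisms can be put into a back-of-envelope loss formula: spherical spreading contributes 20 log10(r/r0) dB, and molecular absorption adds a frequency-dependent term roughly linear in range. The absorption coefficient in the sketch below is a placeholder value, not one computed from the standard (e.g., ISO 9613-1) formulas.

```python
# Toy propagation-loss calculation: geometrical spreading plus a linear
# molecular-absorption term. alpha_db_per_m is an illustrative placeholder.
import math

def propagation_loss_db(r, r_ref=1.0, alpha_db_per_m=0.005):
    """Total loss at range r relative to r_ref, in dB."""
    spreading = 20 * math.log10(r / r_ref)     # 6 dB per doubling of distance
    absorption = alpha_db_per_m * (r - r_ref)  # grows linearly with range
    return spreading + absorption

for r in (10, 100, 1000):
    print(f"{r:5d} m: {propagation_loss_db(r):6.1f} dB")
```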

  14. Experiments of multichannel least-square methods for sound field reproduction inside aircraft mock-up: Objective evaluations

    NASA Astrophysics Data System (ADS)

    Gauthier, P.-A.; Camier, C.; Lebel, F.-A.; Pasco, Y.; Berry, A.; Langlois, J.; Verron, C.; Guastavino, C.

    2016-08-01

    Sound environment reproduction of various flight conditions in aircraft mock-ups is a valuable tool for the study, prediction, demonstration and jury testing of interior aircraft sound quality and annoyance. To provide a faithful reproduced sound environment, time, frequency and spatial characteristics should be preserved. Physical sound field reproduction methods for spatial sound reproduction are mandatory to immerse the listener's body in the proper sound fields so that localization cues are recreated at the listener's ears. Vehicle mock-ups pose specific problems for sound field reproduction. Confined spaces, needs for invisible sound sources and very specific acoustical environment make the use of open-loop sound field reproduction technologies such as wave field synthesis (based on free-field models of monopole sources) not ideal. In this paper, experiments in an aircraft mock-up with multichannel least-square methods and equalization are reported. The novelty is the actual implementation of sound field reproduction with 3180 transfer paths and trim panel reproduction sources in laboratory conditions with a synthetic target sound field. The paper presents objective evaluations of reproduced sound fields using various metrics as well as sound field extrapolation and sound field characterization.
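
    At a single frequency, the multichannel least-squares problem mentioned above has a standard regularized closed form. A toy sketch follows; the dimensions are illustrative and far smaller than the 3180 measured paths reported.

```python
# Regularized least-squares source weights q minimizing ||H q - p||^2 + lam*||q||^2
# at one frequency. Transfer matrix H and target pressures p are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_mics, n_sources = 24, 8
H = rng.normal(size=(n_mics, n_sources)) + 1j * rng.normal(size=(n_mics, n_sources))
p_target = rng.normal(size=n_mics) + 1j * rng.normal(size=n_mics)

lam = 1e-2                                   # Tikhonov regularization weight
A = H.conj().T @ H + lam * np.eye(n_sources)
q = np.linalg.solve(A, H.conj().T @ p_target)

error = np.linalg.norm(H @ q - p_target) / np.linalg.norm(p_target)
print(f"normalized reproduction error: {error:.2f}")
```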

  15. The earthquake disaster risk characteristic and the problem in the earthquake emergency rescue of mountainous southwestern Sichuan

    NASA Astrophysics Data System (ADS)

    Yuan, S.; Xin, C.; Ying, Z.

    2016-12-01

    In recent years, earthquakes have occurred frequently in mainland China, and the secondary disasters they cause are especially serious in mountainous regions. Because of the terrain and geological conditions, the difficulty of earthquake emergency rescue work increases greatly, and rescue forces are stretched thin. Yet earthquake emergency rescue in mountainous regions has been little studied, and there is little research on whether existing equipment can meet the actual needs of local earthquake emergency rescue. This paper intends to discuss and address these problems. Through field research in the mountainous Ganzi and Liangshan prefectures of Sichuan, we investigated the earthquake emergency response process and the deployment of rescue forces after an earthquake, and we collected and collated basic data on local rescue forces. By consulting experts and statistically analyzing the basic data, we identified two main problems. First, the local rescue forces are poorly equipped and lack knowledge of medical aid and of identifying building structures; no sound financial investment and support mechanism has been established; and rescue equipment is not updated or maintained. Second, in the earthquake emergency rescue process, the complicated geological structure of mountainous regions means that traffic and communication may be interrupted by landslides and debris flows after an earthquake; outside rescue forces may not arrive in time, rescue equipment must be transported by hand, and because disaster information is unknown, the local rescue forces are deployed unreasonably. In view of the above, local government workers should analyze the characteristics of earthquake disasters in mountainous regions and study how to improve their earthquake emergency rescue abilities: strengthening and regulating the rescue force structure, enhancing skills and knowledge, training rescue workers, outfitting light and portable rescue equipment, and improving the public's capacity for self-rescue and mutual aid. All these measures will help local governments reach the final goal of reducing earthquake disasters.

  16. Sound waves and resonances in electron-hole plasma

    NASA Astrophysics Data System (ADS)

    Lucas, Andrew

    2016-06-01

    Inspired by the recent experimental signatures of relativistic hydrodynamics in graphene, we investigate theoretically the behavior of hydrodynamic sound modes in such quasirelativistic fluids near charge neutrality, within linear response. Locally driving an electron fluid at a resonant frequency to such a sound mode can lead to large increases in the electrical response at the edges of the sample, a signature, which cannot be explained using diffusive models of transport. We discuss the robustness of this signal to various effects, including electron-acoustic phonon coupling, disorder, and long-range Coulomb interactions. These long-range interactions convert the sound mode into a collective plasmonic mode at low frequencies unless the fluid is charge neutral. At the smallest frequencies, the response in a disordered fluid is quantitatively what is predicted by a "momentum relaxation time" approximation. However, this approximation fails at higher frequencies (which can be parametrically small), where the classical localization of sound waves cannot be neglected. Experimental observation of such resonances is a clear signature of relativistic hydrodynamics, and provides an upper bound on the viscosity of the electron-hole plasma.

  17. Effects of fixed labial orthodontic appliances on speech sound production.

    PubMed

    Paley, Jonathan S; Cisneros, George J; Nicolay, Olivier F; LeBlanc, Etoile M

    2016-05-01

    To explore the impact of fixed labial orthodontic appliances on speech sound production. Speech evaluations were performed on 23 patients with fixed labial appliances. Evaluations were performed immediately prior to appliance insertion, immediately following insertion, and 1 and 2 months post insertion. Baseline dental/skeletal variables were correlated with the ability to accommodate the presence of the appliances. Appliance effects were variable: 44% of the subjects were unaffected, 39% were temporarily affected but adapted within 2 months, and 17% of patients showed persistent sound errors at 2 months. Resolution of acquired sound errors was noted by 8 months post-appliance removal. Maladaptation to appliances was correlated to severity of malocclusion as determined by the Grainger's Treatment Priority Index. Sibilant sounds, most notably /s/, were affected most often. (1) Insertion of fixed labial appliances has an effect on speech sound production. (2) Sibilant and stopped sounds are affected, with /s/ being affected most often. (3) Accommodation to fixed appliances depends on the severity of malocclusion.

  18. Monaural Sound Localization Based on Reflective Structure and Homomorphic Deconvolution

    PubMed Central

    Park, Yeonseok; Choi, Anthony

    2017-01-01

    An asymmetric structure around a receiver imposes a distinct time delay on sound arriving from each direction. This paper presents a monaural sound localization system based on a reflective structure around the microphone. Reflective plates are placed to produce direction-dependent time delays, which are naturally applied to the sound source by convolution. The received signal is processed to estimate the dominant time delay using homomorphic deconvolution, which applies the real cepstrum and inverse cepstrum sequentially to derive the autocorrelation of the propagation response. Once the system accurately estimates this information, the time-delay model computes the corresponding reflection for localization. Because of structural limitations, the localization process estimates range and angle in two stages. A software toolchain spanning propagation physics and algorithm simulation produced the optimal 3D-printed structure. Acoustic experiments in an anechoic chamber show that 79.0% of the range data from the isotropic signal is properly detected by the response value, and 87.5% of the direction data within the studied range is properly estimated by the response time. The product of both rates gives an overall hit rate of 69.1%. PMID:28946625
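
    The cepstral step is easy to illustrate: a reflection at lag D puts a peak near quefrency D in the real cepstrum of the received signal. A minimal sketch with a synthetic signal follows; windowing and liftering details are omitted.

```python
# Echo-delay estimation via the real cepstrum: a reflection at lag D
# appears as a cepstral peak near quefrency D. Signal and delay are synthetic.
import numpy as np

fs = 48_000
rng = np.random.default_rng(0)
source = rng.normal(size=4096)               # stand-in source signal
delay = 120                                  # reflection delay, samples
received = source.copy()
received[delay:] += 0.6 * source[:-delay]    # direct path + one reflection

spectrum = np.fft.rfft(received)
log_mag = np.log(np.abs(spectrum) + 1e-12)
cepstrum = np.fft.irfft(log_mag)             # real cepstrum

peak = np.argmax(cepstrum[20:len(cepstrum) // 2]) + 20   # skip low quefrencies
print(f"estimated delay: {peak} samples ({peak / fs * 1000:.2f} ms)")
```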

  19. A Framework for Speech Activity Detection Using Adaptive Auditory Receptive Fields.

    PubMed

    Carlin, Michael A; Elhilali, Mounya

    2015-12-01

    One of the hallmarks of sound processing in the brain is the ability of the nervous system to adapt to changing behavioral demands and surrounding soundscapes. It can dynamically shift sensory and cognitive resources to focus on relevant sounds. Neurophysiological studies indicate that this ability is supported by adaptively retuning the shapes of cortical spectro-temporal receptive fields (STRFs) to enhance features of target sounds while suppressing those of task-irrelevant distractors. Because an important component of human communication is the ability of a listener to dynamically track speech in noisy environments, the solution obtained by auditory neurophysiology implies a useful adaptation strategy for speech activity detection (SAD). SAD is an important first step in a number of automated speech processing systems, and performance is often reduced in highly noisy environments. In this paper, we describe how task-driven adaptation is induced in an ensemble of neurophysiological STRFs, and show how speech-adapted STRFs reorient themselves to enhance spectro-temporal modulations of speech while suppressing those associated with a variety of nonspeech sounds. We then show how an adapted ensemble of STRFs can better detect speech in unseen noisy environments compared to an unadapted ensemble and a noise-robust baseline. Finally, we use a stimulus reconstruction task to demonstrate how the adapted STRF ensemble better captures the spectrotemporal modulations of attended speech in clean and noisy conditions. Our results suggest that a biologically plausible adaptation framework can be applied to speech processing systems to dynamically adapt feature representations for improving noise robustness.

  20. Factors Associated with Speech-Sound Stimulability.

    ERIC Educational Resources Information Center

    Lof, Gregory L.

    1996-01-01

    This study examined stimulability in 30 children (ages 3 to 5) with articulation impairments. Factors found to relate to stimulability were articulation visibility, the child's age, the family's socioeconomic status, and the child's overall imitative ability. Perception, severity, otitis media history, language abilities, consistency of…

  1. Comparison between bilateral cochlear implants and Neurelec Digisonic(®) SP Binaural cochlear implant: speech perception, sound localization and patient self-assessment.

    PubMed

    Bonnard, Damien; Lautissier, Sylvie; Bosset-Audoit, Amélie; Coriat, Géraldine; Beraha, Max; Maunoury, Antoine; Martel, Jacques; Darrouzet, Vincent; Bébéar, Jean-Pierre; Dauman, René

    2013-01-01

    An alternative to bilateral cochlear implantation is offered by the Neurelec Digisonic(®) SP Binaural cochlear implant, which allows stimulation of both cochleae within a single device. The purpose of this prospective study was to compare a group of Neurelec Digisonic(®) SP Binaural implant users (denoted BINAURAL group, n = 7) with a group of bilateral adult cochlear implant users (denoted BILATERAL group, n = 6) in terms of speech perception, sound localization, and self-assessment of health status and hearing disability. Speech perception was assessed using word recognition at 60 dB SPL in quiet and in a 'cocktail party' noise delivered through five loudspeakers in the hemi-sound field facing the patient (signal-to-noise ratio = +10 dB). The sound localization task was to determine the source of a sound stimulus among five speakers positioned between -90° and +90° from midline. Change in health status was assessed using the Glasgow Benefit Inventory and hearing disability was evaluated with the Abbreviated Profile of Hearing Aid Benefit. Speech perception was not statistically different between the two groups, even though there was a trend in favor of the BINAURAL group (mean percent word recognition in the BINAURAL and BILATERAL groups: 70 vs. 56.7% in quiet, 55.7 vs. 43.3% in noise). There was also no significant difference with regard to performance in sound localization and self-assessment of health status and hearing disability. On the basis of the BINAURAL group's performance in hearing tasks involving the detection of interaural differences, implantation with the Neurelec Digisonic(®) SP Binaural implant may be considered to restore effective binaural hearing. Based on these first comparative results, this device seems to provide benefits similar to those of traditional bilateral cochlear implantation, with a new approach to stimulate both auditory nerves.

  2. On the Locality of Transient Electromagnetic Soundings with a Single-Loop Configuration

    NASA Astrophysics Data System (ADS)

    Barsukov, P. O.; Fainberg, E. B.

    2018-03-01

    The possibilities of reconstructing two-dimensional (2D) cross sections based on the data of the profile soundings by the transient electromagnetic method (TEM) with a single ungrounded loop are illustrated on three-dimensional (3D) models. The process of reconstruction includes three main steps: transformation of the responses in the depth dependence of resistivity ρ(h) measured along the profile, with their subsequent stitching into the 2D pseudo section; point-by-point one-dimensional (1D) inversion of the responses with the starting model constructed based on the transformations; and correction of the 2D cross section with the use of 2.5-dimensional (2.5D) block inversion. It is shown that single-loop TEM soundings allow studying the geological media within a local domain the lateral dimensions of which are commensurate with the depth of the investigation. The structure of the medium beyond this domain insignificantly affects the sounding results. This locality enables the TEM to reconstruct the geoelectrical structure of the medium from the 2D cross sections with the minimal distortions caused by the lack of information beyond the profile of the transient response measurements.

  3. Assessment of auditory and psychosocial handicap associated with unilateral hearing loss among Indian patients.

    PubMed

    Augustine, Ann Mary; Chrysolyte, Shipra B; Thenmozhi, K; Rupa, V

    2013-04-01

    In order to assess psychosocial and auditory handicap in Indian patients with unilateral sensorineural hearing loss (USNHL), a prospective study was conducted on 50 adults with USNHL in the ENT Outpatient clinic of a tertiary care centre. The hearing handicap inventory for adults (HHIA) as well as speech in noise and sound localization tests were administered to patients with USNHL. An equal number of age-matched, normal controls also underwent the speech and sound localization tests. The results showed that HHIA scores ranged from 0 to 60 (mean 20.7). Most patients (84.8 %) had either mild to moderate or no handicap. Emotional subscale scores were higher than social subscale scores (p = 0.01). When the effect of sociodemographic factors on HHIA scores was analysed, educated individuals were found to have higher social subscale scores (p = 0.04). Age, sex, side and duration of hearing loss, occupation and income did not affect HHIA scores. Speech in noise and sound localization were significantly poorer in cases compared to controls (p < 0.001). About 75 % of patients refused a rehabilitative device. We conclude that USNHL in Indian adults does not usually produce severe handicap. When present, the handicap is more emotional than social. USNHL significantly affects sound localization and speech in noise. Yet, affected patients seldom seek a rehabilitative device.

  4. Fit for the frontline? A focus group exploration of auditory tasks carried out by infantry and combat support personnel.

    PubMed

    Bevis, Zoe L; Semeraro, Hannah D; van Besouw, Rachel M; Rowan, Daniel; Lineton, Ben; Allsopp, Adrian J

    2014-01-01

    In order to preserve their operational effectiveness and ultimately their survival, military personnel must be able to detect important acoustic signals and maintain situational awareness. The possession of sufficient hearing ability to perform job-specific auditory tasks is defined as auditory fitness for duty (AFFD). Pure tone audiometry (PTA) is used to assess AFFD in the UK military; however, it is unclear whether PTA is able to accurately predict performance on job-specific auditory tasks. The aim of the current study was to gather information about auditory tasks carried out by infantry personnel on the frontline and the environment these tasks are performed in. The study consisted of 16 focus group interviews with an average of five participants per group. Eighty British army personnel were recruited from five infantry regiments. The focus group guideline included seven open-ended questions designed to elicit information about the auditory tasks performed on operational duty. Content analysis of the data resulted in two main themes: (1) the auditory tasks personnel are expected to perform and (2) situations where personnel felt their hearing ability was reduced. Auditory tasks were divided into subthemes of sound detection, speech communication and sound localization. Reasons for reduced performance included background noise, hearing protection and attention difficulties. The current study provided an important and novel insight to the complex auditory environment experienced by British infantry personnel and identified 17 auditory tasks carried out by personnel on operational duties. These auditory tasks will be used to inform the development of a functional AFFD test for infantry personnel.

  5. An affordable compact humanoid robot for Autism Spectrum Disorder interventions in children.

    PubMed

    Dickstein-Fischer, Laurie; Alexander, Elizabeth; Yan, Xiaoan; Su, Hao; Harrington, Kevin; Fischer, Gregory S

    2011-01-01

    Autism Spectrum Disorder impacts an ever-increasing number of children. The disorder is marked by social functioning that is characterized by impairment in the use of nonverbal behaviors, failure to develop appropriate peer relationships and lack of social and emotional exchanges. Providing early intervention through the modality of play therapy has been effective in improving behavioral and social outcomes for children with autism. Interacting with humanoid robots that provide simple emotional response and interaction has been shown to improve the communication skills of autistic children. In particular, early intervention and continuous care provide significantly better outcomes. Currently, there are no robots capable of meeting these requirements that are both low-cost and available to families of autistic children for in-home use. This paper proposes piloting the use of robotics as an improved diagnostic and early intervention tool for autistic children that is affordable, non-threatening, durable, and capable of interacting with an autistic child. The robot has the ability to track the child with its 3-degree-of-freedom (DOF) eyes and 3-DOF head, open and close its 1-DOF beak and its eyelids (1 DOF each), raise its wings (1 DOF each), play sound, and record sound. These attributes will give it the ability to be used for the diagnosis and treatment of autism. As part of this project, the robot and the electronic and control software have been developed, and integrating semi-autonomous interaction, teleoperation from a remote healthcare provider and initiating trials with children in a local clinic are in progress.

  6. Responses of auditory-cortex neurons to structural features of natural sounds.

    PubMed

    Nelken, I; Rotman, Y; Bar Yosef, O

    1999-01-14

    Sound-processing strategies that use the highly non-random structure of natural sounds may confer evolutionary advantage to many species. Auditory processing of natural sounds has been studied almost exclusively in the context of species-specific vocalizations, although these form only a small part of the acoustic biotope. To study the relationships between properties of natural soundscapes and neuronal processing mechanisms in the auditory system, we analysed sound from a range of different environments. Here we show that for many non-animal sounds and background mixtures of animal sounds, energy in different frequency bands is coherently modulated. Co-modulation of different frequency bands in background noise facilitates the detection of tones in noise by humans, a phenomenon known as co-modulation masking release (CMR). We show that co-modulation also improves the ability of auditory-cortex neurons to detect tones in noise, and we propose that this property of auditory neurons may underlie behavioural CMR. This correspondence may represent an adaptation of the auditory system for the use of an attribute of natural sounds to facilitate real-world processing tasks.

  7. Green symphonies: a call for studies on acoustic communication in plants

    PubMed Central

    2013-01-01

    Sound and its use in communication have significantly contributed to shaping the ecology, evolution, behavior, and ultimately the success of many animal species. Yet, the ability to use sound is not a prerogative of animals. Plants may also use sound, but the ecological and evolutionary implications of sound in a plant’s life remain largely unexplored. Why should plants emit and receive sound, and is there information contained in those sounds? I hypothesize that it would be particularly advantageous for plants to learn about the surrounding environment using sound, as acoustic signals propagate rapidly and with minimal energetic or fitness costs. In fact, both emission and detection of sound may have adaptive value in plants by affecting responses in other organisms, plants, and animals alike. The systematic exploration of the functional, ecological, and evolutionary significance of sound in the life of plants is expected to prompt a reinterpretation of our understanding of these organisms and galvanize the emergence of novel concepts and perspectives on their communicative complexity. PMID:23754865

  8. Influence of aging on human sound localization

    PubMed Central

    Dobreva, Marina S.; O'Neill, William E.

    2011-01-01

    Errors in sound localization, associated with age-related changes in peripheral and central auditory function, can pose threats to self and others in a commonly encountered environment such as a busy traffic intersection. This study aimed to quantify the accuracy and precision (repeatability) of free-field human sound localization as a function of advancing age. Head-fixed young, middle-aged, and elderly listeners localized band-passed targets using visually guided manual laser pointing in a darkened room. Targets were presented in the frontal field by a robotically controlled loudspeaker assembly hidden behind a screen. Broadband targets (0.1–20 kHz) activated all auditory spatial channels, whereas low-pass and high-pass targets selectively isolated interaural time and intensity difference cues (ITDs and IIDs) for azimuth and high-frequency spectral cues for elevation. In addition, to assess the upper frequency limit of ITD utilization across age groups more thoroughly, narrowband targets were presented at 250-Hz intervals from 250 Hz up to ∼2 kHz. Young subjects generally showed horizontal overestimation (overshoot) and vertical underestimation (undershoot) of auditory target location, and this effect varied with frequency band. Accuracy and/or precision worsened in older individuals for broadband, high-pass, and low-pass targets, reflective of peripheral but also central auditory aging. In addition, compared with young adults, middle-aged, and elderly listeners showed pronounced horizontal localization deficiencies (imprecision) for narrowband targets within 1,250–1,575 Hz, congruent with age-related central decline in auditory temporal processing. Findings underscore the distinct neural processing of the auditory spatial cues in sound localization and their selective deterioration with advancing age. PMID:21368004

  9. A real-time biomimetic acoustic localizing system using time-shared architecture

    NASA Astrophysics Data System (ADS)

    Nourzad Karl, Marianne; Karl, Christian; Hubbard, Allyn

    2008-04-01

    In this paper a real-time sound source localizing system is proposed, which is based on previously developed mammalian auditory models. Traditionally, following the models, which use interaural time delay (ITD) estimates, the amount of parallel computation needed by a system to achieve real-time sound source localization is a limiting factor and a design challenge for hardware implementations. Therefore a new approach using a time-shared architecture implementation is introduced. The proposed architecture is a purely sample-driven digital system, and it follows closely the continuous-time approach described in the models. Rather than having dedicated hardware on a per-frequency-channel basis, a specialized core channel, shared across all frequency bands, is used. Because its execution time is much shorter than the system's sampling interval, the proposed time-shared solution allows the same number of virtual channels to be processed as dedicated channels in the traditional approach. Hence, the time-shared approach achieves a highly economical and flexible implementation using minimal silicon area. These aspects are particularly important in efficient hardware implementation of a real-time biomimetic sound source localization system.
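
    As a rough illustration, the sketch below reuses a single filter-plus-correlation core across frequency bands instead of instantiating one processing chain per channel, in the spirit of the time-shared architecture described above. The band edges, filter order, and lag range are illustrative assumptions, not parameters from the paper.

    ```python
    import numpy as np
    from scipy.signal import butter, lfilter

    def time_shared_itd(left, right, fs,
                        bands=((300, 600), (600, 1200), (1200, 2400)),
                        max_lag=40):
        """Estimate a per-band ITD with one shared processing core."""
        itds = {}
        for lo, hi in bands:  # one shared core, iterated over virtual channels
            b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            l, r = lfilter(b, a, left), lfilter(b, a, right)
            xc = np.correlate(l, r, mode="full")   # interaural correlation
            center = len(l) - 1                    # index of zero lag
            window = xc[center - max_lag: center + max_lag + 1]
            itds[(lo, hi)] = (np.argmax(window) - max_lag) / fs  # seconds
        return itds
    ```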

  10. Recovery of Neonatal Head Turning to Decreased Sound Pressure Level.

    ERIC Educational Resources Information Center

    Tarquinio, Nancy; And Others

    1990-01-01

    Investigated newborns' responses to decreased sound pressure level (SPL) by means of a localized head turning habituation procedure. Findings, which demonstrated recovery of neonatal head turning to decreased SPL, were inconsistent with the selective receptor adaptation model. (RH)

  11. Estimating the Intended Sound Direction of the User: Toward an Auditory Brain-Computer Interface Using Out-of-Head Sound Localization

    PubMed Central

    Nambu, Isao; Ebisawa, Masashi; Kogure, Masumi; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro

    2013-01-01

    The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As cues, auditory BCIs can exploit many characteristics of stimuli, such as tone, pitch, and voice. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction, through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify from EEG signals whether the subject attended the direction of a presented stimulus. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables a high-performance, loudspeaker-less portable BCI system. PMID:23437338
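
    A minimal sketch of the offline classification step, assuming EEG epochs have already been extracted per stimulus; the data layout, scikit-learn pipeline, and five-fold cross-validation are assumptions, not the authors' exact analysis.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def attended_direction_accuracy(epochs, labels):
        """Cross-validated accuracy for attended vs. unattended stimuli.

        epochs: (n_trials, n_channels, n_samples) EEG time-locked to each
        virtual-direction stimulus; labels: 1 if the stimulus came from
        the attended direction, 0 otherwise (hypothetical data layout).
        """
        X = epochs.reshape(len(epochs), -1)  # flatten channel x time features
        clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        return cross_val_score(clf, X, labels, cv=5).mean()
    ```

    Averaging the EEG over repeated presentations of the same stimulus before classification, as in the 10-trial averaging reported above, raises the signal-to-noise ratio and hence the accuracy.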

  12. Difficulty in Learning Similar-Sounding Words: A Developmental Stage or a General Property of Learning?

    ERIC Educational Resources Information Center

    Pajak, Bozena; Creel, Sarah C.; Levy, Roger

    2016-01-01

    How are languages learned, and to what extent are learning mechanisms similar in infant native-language (L1) and adult second-language (L2) acquisition? In terms of vocabulary acquisition, we know from the infant literature that the ability to discriminate similar-sounding words at a particular age does not guarantee successful word-meaning…

  13. Operating a Geiger-Muller Tube Using a PC Sound Card

    ERIC Educational Resources Information Center

    Azooz, A. A.

    2009-01-01

    In this paper, a simple MATLAB-based PC program that enables the computer to function as a replacement for the electronic scaler-counter system associated with a Geiger-Muller (GM) tube is described. The program utilizes the ability of MATLAB to acquire data directly from the computer sound card. The signal from the GM tube is applied to the…

  14. Systematic Instruction in Phoneme-Grapheme Correspondence for Students with Reading Disabilities

    ERIC Educational Resources Information Center

    Earle, Gentry A.; Sayeski, Kristin L.

    2017-01-01

    Letter-sound knowledge is a strong predictor of a student's ability to decode words. Approximately 50% of English words can be decoded by following a sound-symbol correspondence rule alone and an additional 36% are spelled with only one error. Many students with reading disabilities or who struggle to learn to read have difficulty with phonology,…

  15. Novel recycling of nonmetal particles from waste printed wiring boards to produce porous composite for sound absorbing application.

    PubMed

    Sun, Zhixing; Shen, Zhigang; Zhang, Xiaojing; Ma, Shulin

    2014-01-01

    Nonmetal materials take up about 70 wt% of waste printed wiring boards (WPWB), which are usually recycled as low-value fillers or even directly disposed of by landfill dumping and incineration. In this research, a novel reuse of the nonmetals to produce porous composites for sound absorbing application was demonstrated. The manufacturing process, absorbing performance and mechanical properties of the composites were studied. The results show that the highly porous structure of the composites leads to an excellent sound absorption ability in a broad-band frequency range. An average absorption coefficient of above 0.4 can be achieved by the composite in the frequency range from 100 to 6400 Hz. When the particle size is larger than 0.2 mm, the absorption ability of the composite is comparable to that of commercial wood-fibre board and urea-formaldehyde foam. Mechanical analysis indicates that the porous composites possess sufficient structural strength for self-sustaining applications. All the results indicate that producing sound absorbing composite with nonmetal particles from WPWB provides an efficient and profitable way of recycling this waste resource and can help resolve both environmental pollution and noise pollution problems.

  16. Top-down modulation of auditory processing: effects of sound context, musical expertise and attentional focus.

    PubMed

    Tervaniemi, M; Kruck, S; De Baene, W; Schröger, E; Alter, K; Friederici, A D

    2009-10-01

    By recording auditory electrical brain potentials, we investigated whether the basic sound parameters (frequency, duration and intensity) are differentially encoded among speech vs. music sounds by musicians and non-musicians during different attentional demands. To this end, a pseudoword and an instrumental sound of comparable frequency and duration were presented. The accuracy of neural discrimination was tested by manipulations of frequency, duration and intensity. Additionally, the subjects' attentional focus was manipulated by instructions to ignore the sounds while watching a silent movie or to attentively discriminate the different sounds. In both musicians and non-musicians, the pre-attentively evoked mismatch negativity (MMN) component was larger to slight changes in music than in speech sounds. The MMN was also larger to intensity changes in music sounds and to duration changes in speech sounds. During attentional listening, all subjects more readily discriminated changes among speech sounds than among music sounds as indexed by the N2b response strength. Furthermore, during attentional listening, musicians displayed larger MMN and N2b than non-musicians for both music and speech sounds. Taken together, the data indicate that the discriminative abilities in human audition differ between music and speech sounds as a function of the sound-change context and the subjective familiarity of the sound parameters. These findings provide clear evidence for top-down modulatory effects in audition. In other words, the processing of sounds is realized by a dynamically adapting network considering type of sound, expertise and attentional demands, rather than by a strictly modularly organized stimulus-driven system.

  17. Sparse representation of Gravitational Sound

    NASA Astrophysics Data System (ADS)

    Rebollo-Neira, Laura; Plastino, A.

    2018-03-01

    Gravitational Sound clips produced by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Massachusetts Institute of Technology (MIT) are considered within the particular context of data reduction. We advance a procedure to this effect and show that these types of signals can be approximated with high quality using significantly fewer elementary components than those required within the standard orthogonal basis framework. Furthermore, a local measure of sparsity is shown to render meaningful information about the variation of a signal along time, by generating a set of local sparsity values which is much smaller than the dimension of the signal. This point is further illustrated by recourse to a more complex signal, generated by Milde Science Communication to disseminate Gravitational Sound in the form of a ring tone.
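
    The flavor of such a sparse approximation can be conveyed with a generic greedy pursuit over a dictionary of waveforms; the sketch below is a standard orthogonal matching pursuit, not the specific procedure advanced in the paper.

    ```python
    import numpy as np

    def greedy_pursuit(D, x, n_atoms):
        """Approximate signal x with n_atoms columns of dictionary D
        (unit-norm atoms) via orthogonal matching pursuit."""
        residual, support = x.astype(float).copy(), []
        for _ in range(n_atoms):
            # pick the atom most correlated with the current residual
            support.append(int(np.argmax(np.abs(D.T @ residual))))
            # re-fit coefficients jointly over the selected support
            coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
            residual = x - D[:, support] @ coeffs
        return support, coeffs
    ```

    The number of atoms needed to reach a target residual energy within each short window of the signal then yields local sparsity values of the kind described above.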

  18. [Functional anatomy of the cochlear nerve and the central auditory system].

    PubMed

    Simon, E; Perrot, X; Mertens, P

    2009-04-01

    The auditory pathways are a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve), which do not merely transmit information but genuinely integrate the sound stimulus at different levels, by analyzing its three fundamental elements: frequency (pitch), intensity, and spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically in relation to the characteristic frequency of the sound signal that they transmit (tonotopy). Coding the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited near the cell of the frequency that is characteristic of the stimulus). Spatial localization of the sound source is possible because of binaural hearing, commissural pathways at each level of the auditory system, and integration of the phase shift and intensity difference between signals coming from the two ears. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea and adjust the peripheral hearing organ to external sound conditions, thus protecting the auditory system or increasing sensitivity by the attention given to the signal.

  19. Behavioural sensitivity to binaural spatial cues in ferrets: evidence for plasticity in the duplex theory of sound localization

    PubMed Central

    Keating, Peter; Nodal, Fernando R; King, Andrew J

    2014-01-01

    For over a century, the duplex theory has guided our understanding of human sound localization in the horizontal plane. According to this theory, the auditory system uses interaural time differences (ITDs) and interaural level differences (ILDs) to localize low-frequency and high-frequency sounds, respectively. Whilst this theory successfully accounts for the localization of tones by humans, some species show very different behaviour. Ferrets are widely used for studying both clinical and fundamental aspects of spatial hearing, but it is not known whether the duplex theory applies to this species or, if so, to what extent the frequency range over which each binaural cue is used depends on acoustical or neurophysiological factors. To address these issues, we trained ferrets to lateralize tones presented over earphones and found that the frequency dependence of ITD and ILD sensitivity broadly paralleled that observed in humans. Compared with humans, however, the transition between ITD and ILD sensitivity was shifted toward higher frequencies. We found that the frequency dependence of ITD sensitivity in ferrets can partially be accounted for by acoustical factors, although neurophysiological mechanisms are also likely to be involved. Moreover, we show that binaural cue sensitivity can be shaped by experience, as training ferrets on a 1-kHz ILD task resulted in significant improvements in thresholds that were specific to the trained cue and frequency. Our results provide new insights into the factors limiting the use of different sound localization cues and highlight the importance of sensory experience in shaping the underlying neural mechanisms. PMID:24256073

  20. Local Application of Sodium Salicylate Enhances Auditory Responses in the Rat’s Dorsal Cortex of the Inferior Colliculus

    PubMed Central

    Patel, Chirag R.; Zhang, Huiming

    2014-01-01

    Sodium salicylate (SS) is a widely used medication with side effects on hearing. In order to understand these side effects, we recorded sound-driven local-field potentials in a neural structure, the dorsal cortex of the inferior colliculus (ICd). Using a microiontophoretic technique, we applied SS at sites of recording and studied how auditory responses were affected by the drug. Furthermore, we studied how the responses were affected by combined local application of SS and an agonist/antagonist of the type-A or type-B γ-aminobutyric acid receptor (GABAA or GABAB receptor). Results revealed that SS applied alone enhanced auditory responses in the ICd, indicating that the drug had local targets in the structure. Simultaneous application of the drug and a GABAergic receptor antagonist synergistically enhanced amplitudes of responses. The synergistic interaction between SS and a GABAA receptor antagonist had a relatively early start in reference to the onset of acoustic stimulation, and the duration of this interaction was independent of sound intensity. The interaction between SS and a GABAB receptor antagonist had a relatively late start, and the duration of this interaction was dependent on sound intensity. Simultaneous application of the drug and a GABAergic receptor agonist produced an effect different from the sum of effects produced by the two drugs released individually. These differences between simultaneous and individual drug applications suggest that SS modified GABAergic inhibition in the ICd. Our results indicate that SS can affect sound-driven activity in the ICd by modulating local GABAergic inhibition. PMID:25452744

  1. Calibration of International Space Station (ISS) Node 1 Vibro-Acoustic Model

    NASA Technical Reports Server (NTRS)

    Zhang, Weiguo; Raveendra, Ravi

    2014-01-01

    Reported here is the use of the Energy Finite Element Method (E-FEM) to predict the vibro-acoustic sound fields within the International Space Station (ISS) Node 1, and a comparison of the predictions with actual measurements of leak sounds made by a one-atmosphere-to-vacuum leak through a small hole in the pressure wall of the Node 1 STA module during its period of storage at Stennis Space Center (SSC). While E-FEM is fundamentally a reverberant sound field calculation, this application also required handling the direct-field effect of the sound generation, as well as computing the sound fields in the ultrasonic frequency range. This report demonstrates the capability of the technology as applied to this type of problem.

  2. Method and apparatus for inspecting conduits

    DOEpatents

    Spisak, Michael J.; Nance, Roy A.

    1997-01-01

    An apparatus and method for ultrasonic inspection of a conduit are provided. The method involves directing a first ultrasonic pulse at a particular area of the conduit at a first angle, receiving the reflected sound from the first ultrasonic pulse, directing, substantially simultaneously or in very close time proximity thereafter, a second ultrasonic pulse at said area of the conduit from a substantially different angle than the first, receiving the reflected sound from the second ultrasonic pulse, and comparing the received sounds to determine whether there is a defect in that area of the conduit. The apparatus of the invention is suitable for carrying out the above-described method. The method and apparatus of the present invention provide the ability to distinguish between sounds reflected by defects in a conduit and sounds reflected by harmless deposits associated with the conduit.

  3. The African cichlid fish Astatotilapia burtoni uses acoustic communication for reproduction: sound production, hearing, and behavioral significance.

    PubMed

    Maruska, Karen P; Ung, Uyhun S; Fernald, Russell D

    2012-01-01

    Sexual reproduction in all animals depends on effective communication between signalers and receivers. Many fish species, especially the African cichlids, are well known for their bright coloration and the importance of visual signaling during courtship and mate choice, but little is known about what role acoustic communication plays during mating and how it contributes to sexual selection in this phenotypically diverse group of vertebrates. Here we examined acoustic communication during reproduction in the social cichlid fish, Astatotilapia burtoni. We characterized the sounds and associated behaviors produced by dominant males during courtship, tested for differences in hearing ability associated with female reproductive state and male social status, and then tested the hypothesis that female mate preference is influenced by male sound production. We show that dominant males produce intentional courtship sounds in close proximity to females, and that sounds are spectrally similar to their hearing abilities. Females were 2-5-fold more sensitive to low frequency sounds in the spectral range of male courtship sounds when they were sexually-receptive compared to during the mouthbrooding parental phase. Hearing thresholds were also negatively correlated with circulating sex-steroid levels in females but positively correlated in males, suggesting a potential role for steroids in reproductive-state auditory plasticity. Behavioral experiments showed that receptive females preferred to affiliate with males that were associated with playback of courtship sounds compared to noise controls, indicating that acoustic information is likely important for female mate choice. These data show for the first time in a Tanganyikan cichlid that acoustic communication is important during reproduction as part of a multimodal signaling repertoire, and that perception of auditory information changes depending on the animal's internal physiological state. Our results highlight the importance of examining non-visual sensory modalities as potential substrates for sexual selection contributing to the incredible phenotypic diversity of African cichlid fishes.

  4. The African Cichlid Fish Astatotilapia burtoni Uses Acoustic Communication for Reproduction: Sound Production, Hearing, and Behavioral Significance

    PubMed Central

    Maruska, Karen P.; Ung, Uyhun S.; Fernald, Russell D.

    2012-01-01

    Sexual reproduction in all animals depends on effective communication between signalers and receivers. Many fish species, especially the African cichlids, are well known for their bright coloration and the importance of visual signaling during courtship and mate choice, but little is known about what role acoustic communication plays during mating and how it contributes to sexual selection in this phenotypically diverse group of vertebrates. Here we examined acoustic communication during reproduction in the social cichlid fish, Astatotilapia burtoni. We characterized the sounds and associated behaviors produced by dominant males during courtship, tested for differences in hearing ability associated with female reproductive state and male social status, and then tested the hypothesis that female mate preference is influenced by male sound production. We show that dominant males produce intentional courtship sounds in close proximity to females, and that sounds are spectrally similar to their hearing abilities. Females were 2–5-fold more sensitive to low frequency sounds in the spectral range of male courtship sounds when they were sexually-receptive compared to during the mouthbrooding parental phase. Hearing thresholds were also negatively correlated with circulating sex-steroid levels in females but positively correlated in males, suggesting a potential role for steroids in reproductive-state auditory plasticity. Behavioral experiments showed that receptive females preferred to affiliate with males that were associated with playback of courtship sounds compared to noise controls, indicating that acoustic information is likely important for female mate choice. These data show for the first time in a Tanganyikan cichlid that acoustic communication is important during reproduction as part of a multimodal signaling repertoire, and that perception of auditory information changes depending on the animal's internal physiological state. Our results highlight the importance of examining non-visual sensory modalities as potential substrates for sexual selection contributing to the incredible phenotypic diversity of African cichlid fishes. PMID:22624055

  5. Neurobiology of Everyday Communication: What Have We Learned From Music?

    PubMed

    Kraus, Nina; White-Schwoch, Travis

    2016-06-09

    Sound is an invisible but powerful force that is central to everyday life. Studies in the neurobiology of everyday communication seek to elucidate the neural mechanisms underlying sound processing, their stability, their plasticity, and their links to language abilities and disabilities. This sound processing lies at the nexus of cognitive, sensorimotor, and reward networks. Music provides a powerful experimental model to understand these biological foundations of communication, especially with regard to auditory learning. We review studies of music training that employ a biological approach to reveal the integrity of sound processing in the brain, the bearing these mechanisms have on everyday communication, and how these processes are shaped by experience. Together, these experiments illustrate that music works in synergistic partnerships with language skills and the ability to make sense of speech in complex, everyday listening environments. The active, repeated engagement with sound demanded by music making augments the neural processing of speech, eventually cascading to listening and language. This generalization from music to everyday communication illustrates both that these auditory brain mechanisms have a profound potential for plasticity and that sound processing is biologically intertwined with listening and language skills. A new wave of studies has pushed neuroscience beyond the traditional laboratory by revealing the effects of community music training in underserved populations. These community-based studies reinforce laboratory work and highlight how the auditory system achieves a remarkable balance between stability and flexibility in processing speech. Moreover, these community studies have the potential to inform health care, education, and social policy by lending a neurobiological perspective to their efficacy. © The Author(s) 2016.

  6. Decoding the neural signatures of emotions expressed through sound.

    PubMed

    Sachs, Matthew E; Habibi, Assal; Damasio, Antonio; Kaplan, Jonas T

    2018-07-01

    Effective social functioning relies in part on the ability to identify emotions from auditory stimuli and respond appropriately. Previous studies have uncovered brain regions engaged by the affective information conveyed by sound. But some of the acoustical properties of sounds that express certain emotions vary remarkably with the instrument used to produce them, for example the human voice or a violin. Do these brain regions respond in the same way to different emotions regardless of the sound source? To address this question, we had participants (N = 38, 20 females) listen to brief audio excerpts produced by the violin, clarinet, and human voice, each conveying one of three target emotions (happiness, sadness, and fear) while brain activity was measured with fMRI. We used multivoxel pattern analysis to test whether emotion-specific neural responses to the voice could predict emotion-specific neural responses to musical instruments and vice versa. A whole-brain searchlight analysis revealed that patterns of activity within the primary and secondary auditory cortex, posterior insula, and parietal operculum were predictive of the affective content of sound both within and across instruments. Furthermore, classification accuracy within the anterior insula was correlated with behavioral measures of empathy. The findings suggest that these brain regions carry emotion-specific patterns that generalize across sounds with different acoustical properties. Also, individuals with greater empathic ability have more distinct neural patterns related to perceiving emotions. These results extend previous knowledge regarding how the human brain extracts emotional meaning from auditory stimuli, an ability that enables us to understand and connect with others effectively. Copyright © 2018 Elsevier Inc. All rights reserved.
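
    The cross-decoding logic can be sketched as follows, assuming voxel patterns have already been extracted per audio excerpt; the variable names and plain linear SVM are stand-ins for the authors' searchlight pipeline.

    ```python
    from sklearn.svm import SVC

    def cross_instrument_accuracy(X_voice, y_voice, X_violin, y_violin):
        """Train an emotion classifier on voice-evoked voxel patterns,
        then test it on violin-evoked patterns. Above-chance accuracy
        implies emotion-specific patterns that generalize across sources."""
        clf = SVC(kernel="linear").fit(X_voice, y_voice)  # y: happy/sad/fear
        return clf.score(X_violin, y_violin)
    ```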

  7. DXL: A Sounding Rocket Mission for the Study of Solar Wind Charge Exchange and Local Hot Bubble X-Ray Emission

    NASA Technical Reports Server (NTRS)

    Galeazzi, M.; Prasai, K.; Uprety, Y.; Chiao, M.; Collier, M. R.; Koutroumpa, D.; Porter, F. S.; Snowden, S.; Cravens, T.; Robertson, I.; hide

    2011-01-01

    The Diffuse X-rays from the Local galaxy (DXL) mission is an approved sounding rocket project with a first launch scheduled around December 2012. Its goal is to identify and separate the X-ray emission generated by solar wind charge exchange from that of the local hot bubble to improve our understanding of both. With 1,000 cm² of proportional counters and a grasp of about 10 cm² sr in both the 1/4 and 3/4 keV bands, DXL will achieve in a 5-minute flight what cannot be achieved by current and future X-ray satellites.

  8. Auditory-musical processing in autism spectrum disorders: a review of behavioral and brain imaging studies.

    PubMed

    Ouimet, Tia; Foster, Nicholas E V; Tryfon, Ana; Hyde, Krista L

    2012-04-01

    Autism spectrum disorder (ASD) is a complex neurodevelopmental condition characterized by atypical social and communication skills, repetitive behaviors, and atypical visual and auditory perception. Studies in vision have reported enhanced detailed ("local") processing but diminished holistic ("global") processing of visual features in ASD. Individuals with ASD also show enhanced processing of simple visual stimuli but diminished processing of complex visual stimuli. Relative to the visual domain, auditory global-local distinctions, and the effects of stimulus complexity on auditory processing in ASD, are less clear. However, one remarkable finding is that many individuals with ASD have enhanced musical abilities, such as superior pitch processing. This review provides a critical evaluation of behavioral and brain imaging studies of auditory processing with respect to current theories in ASD. We have focused on auditory-musical processing in terms of global versus local processing and simple versus complex sound processing. This review contributes to a better understanding of auditory processing differences in ASD. A deeper comprehension of sensory perception in ASD is key to better defining ASD phenotypes and, in turn, may lead to better interventions. © 2012 New York Academy of Sciences.

  9. On non-local energy transfer via zonal flow in the Dimits shift

    DOE PAGES

    St-Onge, Denis A.

    2017-10-10

    The two-dimensional Terry–Horton equation is shown to exhibit the Dimits shift when suitably modified to capture both the nonlinear enhancement of zonal/drift-wave interactions and the existence of residual Rosenbluth–Hinton states. This phenomenon persists through numerous simplifications of the equation, including a quasilinear approximation as well as a four-mode truncation. It is shown that the use of an appropriate adiabatic electron response, for which the electrons are not affected by the flux-averaged potential, results in an $\boldsymbol{E}\times\boldsymbol{B}$ nonlinearity that can efficiently transfer energy non-locally to length scales of the order of the sound radius. The size of the shift for the nonlinear system is heuristically calculated and found to be in excellent agreement with numerical solutions. The existence of the Dimits shift for this system is then understood as an ability of the unstable primary modes to efficiently couple to stable modes at smaller scales, and the shift ends when these stable modes eventually destabilize as the density gradient is increased. This non-local mechanism of energy transfer is argued to be generically important even for more physically complete systems.

  10. The opponent channel population code of sound location is an efficient representation of natural binaural sounds.

    PubMed

    Młynarski, Wiktor

    2015-05-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.

  11. The impact of workload on the ability to localize audible alarms.

    PubMed

    Edworthy, Judy; Reid, Scott; Peel, Katie; Lock, Samantha; Williams, Jessica; Newbury, Chloe; Foster, Joseph; Farrington, Martin

    2018-10-01

    Very little is known about people's ability to localize sound under varying workload conditions, though it would be expected that increasing workload should degrade performance. A set of eight auditory clinical alarms already known to have relatively high localizability (the ease with which their location is identified) when tested alone were tested in six conditions where workload was varied. Participants were required to indicate the location of a series of alarms emanating at random from one of eight speaker locations. Additionally, they were asked to read, carry out mental arithmetic tasks, be exposed to typical ICU noise, or carry out either the reading task or the mental arithmetic task in ICU noise. Performance in the localizability task was best in the control condition (no secondary task) and worst in those conditions which involved both a secondary task and noise. The data therefore demonstrate the typical pattern of increasing workload degrading a primary task, in a domain where little data exist. In addition, the data demonstrate that performance in the control condition results in a missed alarm on one in ten occurrences, whereas performance in the heaviest workload conditions results in a missed alarm on every fourth occurrence. This finding has implications for the understanding of both 'inattentional deafness' and 'alarm fatigue' in clinical environments. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Reduced order modeling of head related transfer functions for virtual acoustic displays

    NASA Astrophysics Data System (ADS)

    Willhite, Joel A.; Frampton, Kenneth D.; Grantham, D. Wesley

    2003-04-01

    The purpose of this work is to improve the computational efficiency of virtual acoustic applications by creating and testing reduced order models of the head related transfer functions used in localizing sound sources. State space models of varying order were generated from zero-elevation Head Related Impulse Responses (HRIRs) using Kung's singular value decomposition (SVD) technique. The inputs to the models are the desired azimuths of the virtual sound sources (from minus 90 deg to plus 90 deg, in 10 deg increments) and the outputs are the left and right ear impulse responses. Trials were conducted in an anechoic chamber in which subjects were exposed to real sounds emitted by individual speakers across a numbered speaker array, phantom sources generated from the original HRIRs, and phantom sound sources generated with the different reduced order state space models. The error in the perceived direction of the phantom sources generated from the reduced order models was compared to errors in localization using the original HRIRs.
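
    A minimal sketch of Kung's SVD-based realization for a single-channel impulse response is given below, under the simplifying assumption of one output; in practice one such model per ear, or a two-output variant, would be fitted, and the model order is a design parameter.

    ```python
    import numpy as np

    def kung_realization(h, order):
        """Fit a discrete state-space model (A, B, C, D) of a given order
        to an impulse response h, via SVD of its Hankel matrix."""
        n = len(h) // 2
        # Hankel matrix of Markov parameters h[1], h[2], ... (h[0] is D)
        H = np.array([[h[i + j + 1] for j in range(n)] for i in range(n)])
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        Obs = U[:, :order] * np.sqrt(s[:order])         # extended observability
        Ctr = np.sqrt(s[:order])[:, None] * Vt[:order]  # extended controllability
        A = np.linalg.pinv(Obs[:-1]) @ Obs[1:]          # shift-invariance property
        B = Ctr[:, :1]
        C = Obs[:1, :]
        D = np.array([[h[0]]])
        return A, B, C, D
    ```

    Truncating the SVD to a small order keeps the dominant Hankel singular values, so the reduced model reproduces the HRIR's main structure at a fraction of the run-time filtering cost.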

  13. The GISS sounding temperature impact test

    NASA Technical Reports Server (NTRS)

    Halem, M.; Ghil, M.; Atlas, R.; Susskind, J.; Quirk, W. J.

    1978-01-01

    The impact of DST 5 and DST 6 satellite sounding data on mid-range forecasting was studied. The GISS temperature sounding technique, the GISS time-continuous four-dimensional assimilation procedure based on optimal statistical analysis, the GISS forecast model, and the verification techniques developed, including impact on local precipitation forecasts, are described. It is found that the impact of sounding data was substantial and beneficial for the winter test period, Jan. 29 to Feb. 21, 1976. Forecasts started from initial states obtained with the aid of satellite data showed a mean improvement of about 4 points in the 48- and 72-hour S1 scores as verified over North America and Europe. This corresponds to an 8- to 12-hour improvement in forecast range at 48 hours. An automated local precipitation forecast model applied to 128 cities in the United States showed on average a 15% improvement when satellite data were used for numerical forecasts. The improvement was 75% in the Midwest.

  14. Evidence for cue-independent spatial representation in the human auditory cortex during active listening.

    PubMed

    Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher

    2017-09-05

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.

  15. Evidence for cue-independent spatial representation in the human auditory cortex during active listening

    PubMed Central

    McLaughlin, Susan A.; Rinne, Teemu; Stecker, G. Christopher

    2017-01-01

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues—particularly interaural time and level differences (ITD and ILD)—that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and—critically—for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues. PMID:28827357

  16. Structural Changes and Lack of HCN1 Channels in the Binaural Auditory Brainstem of the Naked Mole-Rat (Heterocephalus glaber).

    PubMed

    Gessele, Nikodemus; Garcia-Pino, Elisabet; Omerbašić, Damir; Park, Thomas J; Koch, Ursula

    2016-01-01

    Naked mole-rats (Heterocephalus glaber) live in large eusocial, underground colonies in narrow burrows and are exposed to a large repertoire of communication signals but negligible binaural sound localization cues, such as interaural time and intensity differences. We therefore asked whether monaural and binaural auditory brainstem nuclei in the naked mole-rat are differentially adjusted to this acoustic environment. Using antibody stainings against excitatory and inhibitory presynaptic structures, namely the vesicular glutamate transporter VGluT1 and the glycine transporter GlyT2, we identified all major auditory brainstem nuclei except the superior paraolivary nucleus in these animals. Naked mole-rats possess a well structured medial superior olive, with a synaptic arrangement similar to that of interaural-time-difference-encoding animals. The neighboring lateral superior olive, which analyzes interaural intensity differences, is large and elongated, whereas the medial nucleus of the trapezoid body, which provides the contralateral inhibitory input to these binaural nuclei, is reduced in size. In contrast, the cochlear nucleus, the nuclei of the lateral lemniscus and the inferior colliculus are not considerably different when compared to other rodent species. Most interestingly, binaural auditory brainstem nuclei lack the membrane-bound hyperpolarization-activated channel HCN1, a voltage-gated ion channel that greatly contributes to the fast integration times in binaural nuclei of the superior olivary complex in other species. This suggests substantially lengthened membrane time constants and thus prolonged temporal integration of inputs in binaural auditory brainstem neurons, and might be linked to the severely degenerated sound localization abilities in these animals.

  17. Jet-noise reduction through liquid-base foam injection.

    NASA Technical Reports Server (NTRS)

    Manson, L.; Burge, H. L.

    1971-01-01

    An experimental investigation has been made of the sound-absorbing properties of liquid-base foams and of their ability to reduce jet noise. Protein, detergent, and polymer foaming agents were used in water solutions. A method of foam generation was developed to permit systematic variation of the foam density. The investigation included measurements of sound-absorption coefficients for both plane normal incidence waves and diffuse sound fields. The intrinsic acoustic properties of foam, e.g., the characteristic impedance and the propagation constant, were also determined. The sound emitted by a 1-in.-diam cold nitrogen jet was measured for subsonic (300 m/sec) and supersonic (422 m/sec) jets, with and without foam injection. Noise reductions up to 10 PNdB were measured.

  18. The sensorimotor and social sides of the architecture of speech.

    PubMed

    Pezzulo, Giovanni; Barca, Laura; D'Ausilio, Alessando

    2014-12-01

    Speech is a complex skill to master. In addition to sophisticated phono-articulatory abilities, speech acquisition requires neuronal systems configured for vocal learning, with adaptable sensorimotor maps that couple heard speech sounds with motor programs for speech production; imitation and self-imitation mechanisms that can train the sensorimotor maps to reproduce heard speech sounds; and a "pedagogical" learning environment that supports tutor learning.

  19. The Importance of Concept of Word in Text as a Predictor of Sight Word Development in Spanish

    ERIC Educational Resources Information Center

    Ford, Karen L.; Invernizzi, Marcia A.; Meyer, J. Patrick

    2015-01-01

    The goal of the current study was to determine whether Concept of Word in Text (COW-T) predicts later sight word reading achievement in Spanish, as it does in English. COW-T requires that children have beginning sound awareness, automatic recognition of letters and letter sounds, and the ability to coordinate these skills to finger point…

  20. Development of an alarm sound database and simulator.

    PubMed

    Takeuchi, Akihiro; Hirose, Minoru; Shinbo, Toshiro; Imai, Megumi; Mamorita, Noritaka; Ikeda, Noriaki

    2006-10-01

    The purpose of this study was to develop an interactive software package of alarm sounds to present, recognize and share problems about alarm sounds among medical staff and medical manufacturers. The alarm sounds were recorded under various alarm conditions as WAV files. The alarm conditions were arbitrarily induced by modifying attachments of various medical devices. The software package that integrated an alarm sound database and simulator was used to assess the ability of medical staff to identify the monitor that sounded the alarm. Eighty alarm sound files (40 MB in total) were recorded from 41 medical devices made by 28 companies. There were three pairs of similar alarm sounds that could not easily be distinguished, and two alarm sounds that had different priorities (low or high). The alarm sound database was created in an Excel file (ASDB.xls 170 kB, 40 MB with photos), and included a list of file names that were hyperlinked to alarm sound files. An alarm sound simulator (AlmSS) was constructed with two modules, for simultaneously playing alarm sound files and for designing new alarm sounds. The AlmSS was used in the assessment procedure to determine whether 19 clinical engineers could identify 13 alarm sounds only by their distinctive sounds. They were asked to choose from a list of devices and to rate the priority of each alarm. The overall correct identification rate of the alarm sounds was 48%, and six characteristic alarm sounds were correctly recognized by between 63% and 100% of the subjects. The overall recognition rate of the alarm sound priority was only 27%. We have developed an interactive software package of alarm sounds by integrating the database and the alarm sound simulator (URL: http://info.ahs.kitasato-u.ac.jp/tkweb/alarm/asdb.html ). The AlmSS was useful for replaying multiple alarm sounds simultaneously and designing new alarm sounds interactively.

  1. Statistics of natural binaural sounds.

    PubMed

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify a position of a sound source. In natural conditions however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distributions of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as an overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much more weakly across frequency channels and IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions soundwaves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
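
    The ICA step can be sketched with scikit-learn, assuming the binaural waveform is cut into short stacked left/right frames; the frame length, hop, and component count below are illustrative, not the study's settings.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def learn_binaural_basis(left, right, frame=256, hop=128, n_components=32):
        """Learn binaural basis functions from a two-ear recording by
        applying ICA to stacked left+right signal frames."""
        frames = [np.concatenate([left[i:i + frame], right[i:i + frame]])
                  for i in range(0, len(left) - frame, hop)]
        ica = FastICA(n_components=n_components, max_iter=500)
        ica.fit(np.asarray(frames))        # rows: frames, columns: samples
        return ica.mixing_                 # columns: (left|right) basis functions
    ```

    Comparing the left and right halves of each learned column (e.g., their relative delay and amplitude) then gives an empirical picture of the IPD/ILD structure present in each auditory scene.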

  2. Statistics of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify a position of a sound source. In natural conditions however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distributions of cues encountered naturally and their dependence on physical properties of an auditory scene have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We have found that certain properties such as the spread of IPD distributions as well as an overall shape of ILD distributions do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much more weakly across frequency channels and IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of learned basis functions indicate that in natural conditions soundwaves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction. PMID:25285658

  3. Integrating terrestrial and marine records of the LGM in McMurdo Sound, Antarctica: implications for grounded ice expansion, ice flow, and deglaciation of the Ross Sea Embayment

    NASA Astrophysics Data System (ADS)

    Christ, A. J.; Marchant, D. R.

    2017-12-01

    During the LGM, grounded glacier ice filled the Ross Embayment and deposited glacial drift on volcanic islands and peninsulas in McMurdo Sound, as well as along coastal regions of the Transantarctic Mountains (TAM), including the McMurdo Dry Valleys and Royal Society Range. The flow geometry and retreat history of this ice remains debated, with contrasting views yielding divergent implications for both the fundamental cause of Antarctic ice expansion as well as the interaction and behavior of ice derived from East and West Antarctica during late Quaternary time. We present terrestrial geomorphologic evidence that enables the reconstruction of former ice elevations, ice-flow paths, and ice-marginal environments in McMurdo Sound. Radiocarbon dates of fossil algae interbedded with ice-marginal sediments provide a coherent timeline for local ice retreat. These data are integrated with marine-sediment records and multi-beam data to reconstruct late glacial dynamics of grounded ice in McMurdo Sound and the western Ross Sea. The combined dataset suggests a dominance of ice flow toward the TAM in McMurdo Sound during all phases of glaciation, with thick, grounded ice at or near its maximum extent between 19.6 and 12.3 calibrated thousands of years before present (cal. ka). Our data show no significant advance of locally derived ice from the TAM into McMurdo Sound, consistent with the assertion that Late Pleistocene expansion of grounded ice in McMurdo Sound, and throughout the wider Ross Embayment, occurred in response to lower eustatic sea level and the resulting advance of marine-based outlet glaciers and ice streams (and perhaps also reduced oceanic heat flux), rather than local increases in precipitation and ice accumulation. Finally, when combined with allied data across the wider Ross Embayment, which show that widespread deglaciation outside McMurdo Sound did not commence until 13.1 ka, the implication is that retreat of grounded glacier ice in the Ross Embayment did not add significantly to sea-level rise during Meltwater Pulse 1a (14.0-14.5 ka).

  4. Utility of bilateral acoustic hearing in combination with electrical stimulation provided by the cochlear implant.

    PubMed

    Plant, Kerrie; Babic, Leanne

    2016-01-01

    The aim of the study was to quantify the benefit provided by having access to amplified acoustic hearing in the implanted ear for use in combination with contralateral acoustic hearing and the electrical stimulation provided by the cochlear implant. Measures of spatial and non-spatial hearing abilities were obtained to compare performance obtained with different configurations of acoustic hearing in combination with electrical stimulation. In the combined listening condition participants had access to bilateral acoustic hearing whereas the bimodal condition used acoustic hearing contralateral to the implanted ear only. Experience was provided with each of the listening conditions using a repeated-measures A-B-B-A experimental design. Sixteen post-linguistically hearing-impaired adults participated in the study. Group mean benefit was obtained with use of the combined mode on measures of speech recognition in coincident speech in noise, localization ability, subjective ratings of real-world benefit, and musical sound quality ratings. Access to bilateral acoustic hearing after cochlear implantation provides significant benefit on a range of functional measures.

  5. Operating a Geiger Müller tube using a PC sound card

    NASA Astrophysics Data System (ADS)

    Azooz, A. A.

    2009-01-01

    In this paper, a simple MATLAB-based PC program is described that enables the computer to function as a replacement for the electronic scaler-counter system associated with a Geiger-Müller (GM) tube. The program utilizes the ability of MATLAB to acquire data directly from the computer sound card. The signal from the GM tube is applied to the computer sound card via the line-in port. All standard GM experiments, including pulse-shape and statistical-analysis experiments, can be carried out using this system. A new visual demonstration of dead-time effects is also presented.
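
    As a rough illustration of the acquisition-and-counting step, the sketch below is a Python analogue of the MATLAB program described above, not the authors' code; the sampling rate, capture length, and detection threshold are assumed values that would need tuning for a particular GM tube and sound card.

      import numpy as np
      import sounddevice as sd  # assumed audio-capture library; any equivalent works

      FS = 44100         # sound-card sampling rate in Hz (assumed setting)
      SECONDS = 10       # length of the acquisition window
      THRESHOLD = 0.2    # pulse-detection level, normalized amplitude (tune per setup)

      # record the GM-tube signal fed into the line-in port
      x = sd.rec(int(FS * SECONDS), samplerate=FS, channels=1, dtype="float32")
      sd.wait()
      x = np.abs(x[:, 0])  # rectify; pulse polarity depends on the coupling circuit

      # a pulse is counted wherever the rectified signal rises above the threshold
      above = x > THRESHOLD
      onsets = np.flatnonzero(above[1:] & ~above[:-1])
      print(f"{onsets.size} pulses in {SECONDS} s -> {onsets.size / SECONDS:.1f} counts/s")

    The onset indices divided by FS give pulse arrival times, from which interarrival histograms for the statistical and dead-time demonstrations mentioned above can be built.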

  6. The effects of ethnicity, musicianship, and tone language experience on pitch perception.

    PubMed

    Zheng, Yi; Samuel, Arthur G

    2018-02-01

    Language and music are intertwined: music training can facilitate language abilities, and language experiences can also help with some music tasks. Possible language-music transfer effects are explored in two experiments in this study. In Experiment 1, we tested native Mandarin, Korean, and English speakers on a pitch discrimination task with two types of sounds: speech sounds and fundamental frequency (F0) patterns derived from speech sounds. To control for factors that might influence participants' performance, we included cognitive ability tasks testing memory and intelligence. In addition, two music skill tasks were used to examine general transfer effects from language to music. Prior studies showing that tone language speakers have an advantage on pitch tasks have been taken as support for three alternative hypotheses: specific transfer effects, general transfer effects, and an ethnicity effect. In Experiment 1, musicians outperformed non-musicians on both speech and F0 sounds, suggesting a music-to-language transfer effect. Korean and Mandarin speakers performed similarly, and they both outperformed English speakers, providing some evidence for an ethnicity effect. Alternatively, this could be due to population selection bias. In Experiment 2, we recruited Chinese Americans approximating the native English speakers' language background to further test the ethnicity effect. Chinese Americans, regardless of their tone language experiences, performed similarly to their non-Asian American counterparts in all tasks. Therefore, although this study provides additional evidence of transfer effects across music and language, it casts doubt on the contribution of ethnicity to differences observed in pitch perception and general music abilities.

  7. Optimum sensor placement for microphone arrays

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.

    Microphone arrays can be used for high-quality sound pickup in reverberant and noisy environments. Sound capture using conventional single microphone methods suffers severe degradation under these conditions. The beamforming capabilities of microphone array systems allow highly directional sound capture, providing enhanced signal-to-noise ratio (SNR) when compared to single microphone performance. The overall performance of an array system is governed by its ability to locate and track sound sources and its ability to capture sound from desired spatial volumes. These abilities are strongly affected by the spatial placement of microphone sensors. A method is needed to optimize placement for a specified number of sensors in a given acoustical environment. The objective of the optimization is to obtain the greatest average system SNR for sound capture in the region of interest. A two-step sound source location method is presented. In the first step, time delay of arrival (TDOA) estimates for select microphone pairs are determined using a modified version of the Omologo-Svaizer cross-power spectrum phase expression. In the second step, the TDOA estimates are used in a least-mean-squares gradient descent search algorithm to obtain a location estimate. Statistics for TDOA estimate error as a function of microphone pair/sound source geometry and acoustic environment are gathered from a set of experiments. These statistics are used to model position estimation accuracy for a given array geometry. The effectiveness of sound source capture is also dependent on array geometry and the acoustical environment. Simple beamforming and time delay compensation (TDC) methods provide spatial selectivity but suffer performance degradation in reverberant environments. Matched filter array (MFA) processing can mitigate the effects of reverberation. The shape and gain advantage of the capture region for these techniques is described and shown to be highly influenced by the placement of array sensors. A procedure is developed to evaluate a given array configuration based on the above-mentioned metrics. Constrained placement optimizations are performed that maximize SNR for both TDC and MFA capture methods. Results are compared for various acoustic environments and various enclosure sizes. General guidelines are presented for placement strategy and bandwidth dependence, as they relate to reverberation levels, ambient noise, and enclosure geometry. An overall performance function is described based on these metrics. Performance of the microphone array system is also constrained by the design limitations of the supporting hardware. Two newly developed hardware architectures are presented that support the described algorithms. A low-cost 8-channel system with off-the-shelf componentry was designed and its performance evaluated. A massively parallel 512-channel custom-built system is in development; its capabilities and the rationale for its design are described.
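
    The first step of the localization method rests on phase-transform-weighted cross-correlation. Below is a minimal Python sketch of the unmodified cross-power spectrum phase (GCC-PHAT) delay estimator; the author's modified expression is not reproduced here, so treat this as a baseline illustration only.

      import numpy as np

      def gcc_phat_tdoa(x1, x2, fs, max_tau=None):
          """Estimate the delay of x1 relative to x2 (in seconds) via GCC-PHAT."""
          n = len(x1) + len(x2)
          X1, X2 = np.fft.rfft(x1, n=n), np.fft.rfft(x2, n=n)
          R = X1 * np.conj(X2)
          R /= np.abs(R) + 1e-12  # phase transform: keep phase, discard magnitude
          cc = np.fft.irfft(R, n=n)
          max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
          # reorder so negative lags precede positive lags
          cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
          return (np.argmax(np.abs(cc)) - max_shift) / fs

    In practice max_tau is bounded by the microphone spacing divided by the speed of sound, which confines the peak search to physically possible delays.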

  8. Demonstrations of simple and complex auditory psychophysics for multiple platforms and environments

    NASA Astrophysics Data System (ADS)

    Horowitz, Seth S.; Simmons, Andrea M.; Blue, China

    2005-09-01

    Sound is arguably the most widely perceived and pervasive form of energy in our world, and among the least understood, in part due to the complexity of its underlying principles. A series of interactive displays has been developed which demonstrates that the nature of sound involves the propagation of energy through space, and illustrates the definition of psychoacoustics, which is how listeners map the physical aspects of sound and vibration onto their brains. These displays use auditory illusions and commonly experienced music and sound in novel presentations (using interactive computer algorithms) to show that what you hear is not always what you get. The areas covered in these demonstrations range from simple and complex auditory localization, which illustrate why humans are bad at echolocation but excellent at determining the contents of auditory space, to auditory illusions that manipulate fine phase information and make the listener think their head is changing size. Another demonstration shows how auditory and visual localization coincide and how sound can be used to change visual tracking. These demonstrations are designed to run on a wide variety of student-accessible platforms including web pages, stand-alone presentations, or even hardware-based systems for museum displays.

  9. Differential Neural Contributions to Native- and Foreign-Language Talker Identification

    ERIC Educational Resources Information Center

    Perrachione, Tyler K.; Pierrehumbert, Janet B.; Wong, Patrick C. M.

    2009-01-01

    Humans are remarkably adept at identifying individuals by the sound of their voice, a behavior supported by the nervous system's ability to integrate information from voice and speech perception. Talker-identification abilities are significantly impaired when listeners are unfamiliar with the language being spoken. Recent behavioral studies…

  10. Phonological Awareness Training. What Works Clearinghouse Intervention Report

    ERIC Educational Resources Information Center

    What Works Clearinghouse, 2012

    2012-01-01

    Phonological awareness, or the ability to detect or manipulate the sounds in words independent of meaning, has been identified as a key early literacy skill and precursor to reading. For the purposes of this review, "phonological awareness training" refers to any practice targeting young children's phonological awareness abilities.…

  11. The Computer-Assisted Hypnosis Scale: Standardization and Norming of a Computer-Administered Measure of Hypnotic Ability.

    ERIC Educational Resources Information Center

    Grant, Carolyn D.; Nash, Michael R.

    1995-01-01

    In a counterbalanced, within subjects, repeated measures design, 130 undergraduates were administered the Computer-Assisted Hypnosis Scale (CAHS) and the Stanford Hypnotic Susceptibility Scale and were hypnotized. The CAHS was shown to be a psychometrically sound instrument for measuring hypnotic ability. (SLD)

  12. A Method for Assessing Auditory Spatial Analysis in Reverberant Multitalker Environments.

    PubMed

    Weller, Tobias; Best, Virginia; Buchholz, Jörg M; Young, Taegan

    2016-07-01

    Deficits in spatial hearing can have a negative impact on listeners' ability to orient in their environment and follow conversations in noisy backgrounds and may exacerbate the experience of hearing loss as a handicap. However, there are no good tools available for reliably capturing the spatial hearing abilities of listeners in complex acoustic environments containing multiple sounds of interest. The purpose of this study was to explore a new method to measure auditory spatial analysis in a reverberant multitalker scenario. This study was a descriptive case-control study. Ten listeners with normal hearing (NH) aged 20-31 yr and 16 listeners with hearing impairment (HI) aged 52-85 yr participated in the study. The latter group had symmetrical sensorineural hearing losses with a four-frequency average hearing loss of 29.7 dB HL. A large reverberant room was simulated using a loudspeaker array in an anechoic chamber. In this simulated room, 96 scenes comprising between one and six concurrent talkers at different locations were generated. Listeners were presented with 45-sec samples of each scene, and were required to count, locate, and identify the gender of all talkers, using a graphical user interface on an iPad. Performance was evaluated in terms of correctly counting the sources and accuracy in localizing their direction. Listeners with NH were able to reliably analyze scenes with up to four simultaneous talkers, while most listeners with hearing loss demonstrated errors even with two talkers at a time. Localization performance decreased in both groups with increasing number of talkers and was significantly poorer in listeners with HI. Overall performance was significantly correlated with hearing loss. This new method appears to be useful for estimating spatial abilities in realistic multitalker scenes. The method is sensitive to the number of sources in the scene, and to effects of sensorineural hearing loss. Further work will be needed to compare this method to more traditional single-source localization tests.

  13. An exploration of visitor motivations: The search for silence

    NASA Astrophysics Data System (ADS)

    Marin, Lelaina D.

    2011-12-01

    This research aims to study the relationship between visitor motivations for experiencing solitude, sounds of nature, and quiet and a visitor's soundscape experience. Understanding this relationship will improve managers' ability to provide satisfying and diverse experiences for their visitors and "protect" something that is increasingly rare outside of national parks and other protected natural areas: natural sounds and quiet. Chapter 1 focuses on the effect that motivation for a quiet setting can have on acceptability of natural or human-caused sound in Muir Woods National Monument. This study used a dose-response methodology where visitors listened to five audio recordings varying in the percentage of time that human-caused sound was louder than natural sound (percent time above). Visitors were then asked to rate the acceptability of each recording. Three sound-related motivations for visiting Muir Woods were examined: "enjoying peace and quiet", "hearing sounds of nature" and "experiencing solitude." Cluster analysis was used to identify discrete groups with similar motivational profiles (i.e., low, moderate and high motivation for quiet). Results indicated that as percent time above natural sound increased, visitor ratings of human-caused sound decreased. Tolerance for human-caused sound also decreased as motivation for quiet increased. Consensus regarding the acceptability of sound was greatest when the percent time above natural sound was lowest (i.e., quietest sounds). Chapter 2 describes a study of the ability of motivations to predict which of three locations a visitor would most likely choose for recreation. Particular focus was given to sound-related motivations. Data for this study were collected at three sites with varying visitation levels within two national parks: Sequoia National Park-backcountry (low visitation), Sequoia National Park-frontcountry (moderate visitation), and Muir Woods National Monument-frontcountry (high visitation). Survey respondents were asked to rate the importance of six items in their decision to visit the particular park: (a) scenic beauty; (b) experience solitude; (c) time with family and friends; (d) get exercise; (e) experience the sounds of nature; and (f) peace and quiet. Results showed that, of the three study sites, those visitors more motivated to spend time with family and friends and experience the sounds of nature were more likely to visit a frontcountry site, while those motivated for experiencing solitude and getting exercise were more likely to visit a backcountry site. The experience of peace and quiet was not a significant predictor of park location chosen, suggesting that respondents were similarly motivated for quiet across all three sites. Both chapters in this thesis reveal interesting results that may cause managers to consider soundscape management differently in frontcountry and backcountry areas of national parks. For example, these results imply setting acoustic standards, designating management zones, and using education programs to manage for and meet varying levels of motivation for experiencing natural sounds and quiet.

  14. Effects of sound level fluctuations on annoyance caused by aircraft-flyover noise

    NASA Technical Reports Server (NTRS)

    Mccurdy, D. A.

    1979-01-01

    A laboratory experiment was conducted to determine the effects of variations in the rate and magnitude of sound level fluctuations on the annoyance caused by aircraft-flyover noise. The effects of tonal content, noise duration, and sound pressure level on annoyance were also studied. An aircraft-noise synthesis system was used to synthesize 32 aircraft-flyover noise stimuli representing the factorial combinations of 2 tone conditions, 2 noise durations, 2 sound pressure levels, 2 level fluctuation rates, and 2 level fluctuation magnitudes. Thirty-two test subjects made annoyance judgements on a total of 64 stimuli in a subjective listening test facility simulating an outdoor acoustic environment. Variations in the rate and magnitude of level fluctuations were found to have little, if any, effect on annoyance. Tonal content, noise duration, sound pressure level, and the interaction of tonal content with sound pressure level were found to affect the judged annoyance significantly. The addition of tone corrections and/or duration corrections significantly improved the annoyance prediction ability of noise rating scales.

  15. Computer-aided auscultation learning system for nursing technique instruction.

    PubMed

    Hou, Chun-Ju; Chen, Yen-Ting; Hu, Ling-Chen; Chuang, Chih-Chieh; Chiu, Yu-Hsien; Tsai, Ming-Shih

    2008-01-01

    Pulmonary auscultation is a physical assessment skill learned by nursing students for examining the respiratory system. Generally, a mannequin equipped with a sound simulator is used to teach auscultation techniques to groups via classroom demonstration. However, nursing students cannot readily duplicate this learning environment for self-study. The advancement of electronic and digital signal processing technologies facilitates simulating this learning environment. This study aims to develop a computer-aided auscultation learning system for assisting teachers and nursing students in auscultation teaching and learning. This system provides teachers with signal recording and processing of lung sounds and immediate playback of lung sounds for students. A graphical user interface allows teachers to control the measuring device, draw lung sound waveforms, highlight lung sound segments of interest, and include descriptive text. Effects on learning lung sound auscultation were evaluated to verify the feasibility of the system. Fifteen nursing students voluntarily participated in the repeated experiment. The results of a paired t test showed that the auscultative abilities of the students were significantly improved by using the computer-aided auscultation learning system.

  16. [Perception by teenagers and adults of amplitude-varying sound sequences used to model movement of a sound source].

    PubMed

    Andreeva, I G; Vartanian, I A

    2012-01-01

    The ability to judge the direction of amplitude change in sound stimuli was studied in adults and in teenagers aged 11-12 and 15-16 years. The stimuli, sequences of fragments of a 1-kHz tone whose amplitude changed over time, were used as models of approaching and receding sound sources. When judging the direction of amplitude change, the 11-12-year-old teenagers made a significantly higher number of errors than the other two groups, including in repeated experiments. The structure of the errors, i.e., the ratio of errors made on rising versus falling amplitude stimuli, also differed between teenagers and adults. The results are discussed in terms of the effect of nonspecific activation of the cerebral cortex in teenagers on decision-making about complex sound stimuli, including the ability to judge the approach and withdrawal of a sound source.

  17. DETECTION AND IDENTIFICATION OF SPEECH SOUNDS USING CORTICAL ACTIVITY PATTERNS

    PubMed Central

    Centanni, T.M.; Sloan, A.M.; Reed, A.C.; Engineer, C.T.; Rennaker, R.; Kilgard, M.P.

    2014-01-01

    We have developed a classifier capable of locating and identifying speech sounds using activity from rat auditory cortex with an accuracy equivalent to behavioral performance without the need to specify the onset time of the speech sounds. This classifier can identify speech sounds from a large speech set within 40 ms of stimulus presentation. To compare the temporal limits of the classifier to behavior, we developed a novel task that requires rats to identify individual consonant sounds from a stream of distracter consonants. The classifier successfully predicted the ability of rats to accurately identify speech sounds for syllable presentation rates up to 10 syllables per second (up to 17.9 ± 1.5 bits/sec), which is comparable to human performance. Our results demonstrate that the spatiotemporal patterns generated in primary auditory cortex can be used to quickly and accurately identify consonant sounds from a continuous speech stream without prior knowledge of the stimulus onset times. Improved understanding of the neural mechanisms that support robust speech processing in difficult listening conditions could improve the identification and treatment of a variety of speech processing disorders. PMID:24286757

  18. Technology for enhancing chest auscultation in clinical simulation.

    PubMed

    Ward, Jeffrey J; Wattier, Bryan A

    2011-06-01

    The ability to use an acoustic stethoscope to detect lung and/or heart sounds, and then to communicate one's interpretation of those sounds, is an essential skill for many medical professionals. Interpretation of lung and heart sounds, in the context of history and other examination findings, often aids the differential diagnosis. Bedside assessment of changing auscultation findings may also guide treatment. Learning lung and heart auscultation skills typically involves listening to pre-recorded normal and adventitious sounds, often followed by laboratory instruction to guide stethoscope placement, and finally correlating the sounds with the associated pathophysiology and pathology. Recently, medical simulation has become an important tool for teaching prior to clinical practice, and for evaluating bedside auscultation skills. When simulating cardiovascular or pulmonary problems, high-quality lung and heart sounds should be able to accurately corroborate other findings such as vital signs, arterial blood gas values, or imaging. Digital audio technology, the Internet, and high-fidelity simulators have increased opportunities for educators and learners. We review the application of these technologies and describe options for reproducing lung and heart sounds, as well as their advantages and potential limitations.

  19. Preface

    USGS Publications Warehouse

    Baum, Rex L.; Godt, Jonathan W.; Highland, Lynn M.

    2008-01-01

    The idea for Landslides and Engineering Geology of the Seattle, Washington, Area grew out of a major landslide disaster that occurred in the Puget Sound region at the beginning of 1997. Unusually heavy snowfall in late December 1996 followed by warm, intense rainfall on 31 December through 2 January 1997 produced hundreds of damaging landslides in communities surrounding Puget Sound. This disaster resulted in significant efforts of the local geotechnical community and local governments to repair the damage and to mitigate the effects of future landslides. The magnitude of the disaster attracted the attention of the U.S. Geological Survey (USGS), which was just beginning a large multihazards project for Puget Sound. The USGS immediately added a regional study of landslides to that project. Soon a partnership formed between the City of Seattle and the USGS to assess landslide hazards of Seattle.

  20. Near-Field Sound Localization Based on the Small Profile Monaural Structure

    PubMed Central

    Kim, Youngwoong; Kim, Keonwook

    2015-01-01

    The acoustic wave around a sound source in the near-field area presents unconventional properties in the temporal, spectral, and spatial domains due to the propagation mechanism. This paper investigates a near-field sound localizer in a small-profile structure with a single microphone. The asymmetric structure around the microphone provides a distinctive spectral variation that can be recognized by a dedicated algorithm for directional localization. The physical structure consists of ten pipes of different lengths in a vertical fashion and rectangular wings positioned between the pipes in radial directions. The sound from an individual direction travels through the nearest open pipe, which generates a particular fundamental frequency according to its acoustic resonance. A cepstral parameter is modified to evaluate the fundamental frequency. Once the system estimates the fundamental frequency of the received signal, the length of arrival and angle of arrival (AoA) are derived by the designed model. From an azimuthal distance of 3–15 cm from the outer body of the pipes, extensive acoustic experiments with a 3D-printed structure show that the direct and side directions deliver average hit rates of 89% and 73%, respectively. Positions closer to the system yield higher accuracy, and the overall hit rate is 78% up to 15 cm away from the structure body. PMID:26580618
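
    The resonance-reading step can be illustrated with a plain real-cepstrum peak search, sketched below in Python. This is an assumption-laden stand-in: the paper modifies the cepstral parameter, and the search band and window choice here are illustrative defaults rather than published values.

      import numpy as np

      def cepstral_f0(x, fs, f_lo=200.0, f_hi=4000.0):
          """Estimate the fundamental frequency of a pipe response via the real
          cepstrum (inverse FFT of the log magnitude spectrum)."""
          mag = np.abs(np.fft.rfft(x * np.hanning(len(x))))
          cep = np.fft.irfft(np.log(mag + 1e-12))
          q_lo, q_hi = int(fs / f_hi), int(fs / f_lo)  # quefrency search band (samples)
          peak = q_lo + np.argmax(cep[q_lo:q_hi])
          return fs / peak

    The estimated fundamental identifies which pipe the sound entered, and hence its length and the corresponding angle of arrival.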

  1. GABAergic Neural Activity Involved in Salicylate-Induced Auditory Cortex Gain Enhancement

    PubMed Central

    Lu, Jianzhong; Lobarinas, Edward; Deng, Anchun; Goodey, Ronald; Stolzberg, Daniel; Salvi, Richard J.; Sun, Wei

    2011-01-01

    Although high doses of sodium salicylate impair cochlear function, salicylate paradoxically enhances sound-evoked activity in the auditory cortex (AC) and augments acoustic startle reflex responses, neural and behavioral metrics associated with hyperexcitability and hyperacusis. To explore the neural mechanisms underlying salicylate-induced hyperexcitability and “increased central gain”, we examined the effects of γ-aminobutyric acid (GABA) receptor agonists and antagonists on salicylate-induced hyperexcitability in the AC and startle reflex responses. Consistent with our previous findings, local or systemic application of salicylate significantly increased the amplitude of sound-evoked AC neural activity, but generally reduced spontaneous activity in the AC. Systemic injection of salicylate also significantly increased the acoustic startle reflex. S-baclofen or R-baclofen, GABA-B agonists, which suppressed sound-evoked AC neural firing rate and local field potentials, also suppressed the salicylate-induced enhancement of the AC field potential and the acoustic startle reflex. Local application of vigabatrin, which enhances GABA concentration in the brain, suppressed the salicylate-induced enhancement of AC firing rate. Systemic injection of vigabatrin also reduced the salicylate-induced enhancement of the acoustic startle reflex. Collectively, these results suggest that the sound-evoked behavioral and neural hyperactivity induced by salicylate may arise from a salicylate-induced suppression of GABAergic inhibition in the AC. PMID:21664433

  2. How to generate a sound-localization map in fish

    NASA Astrophysics Data System (ADS)

    van Hemmen, J. Leo

    2015-03-01

    How sound localization is represented in the fish brain is a research field largely unbiased by theoretical analysis and computational modeling. Yet, there is experimental evidence that the axes of particle acceleration due to underwater sound are represented through a map in the midbrain of fish, e.g., in the torus semicircularis of the rainbow trout (Wubbels et al. 1997). How does such a map arise? Fish perceive pressure gradients by their three otolithic organs, each of which comprises a dense, calcareous stone that is bathed in endolymph and attached to a sensory epithelium. In rainbow trout, the sensory epithelia of left and right utricle lie in the horizontal plane and consist of hair cells with equally distributed preferred orientations. We model the neuronal response of this system on the basis of Schuijf's vector detection hypothesis (Schuijf et al. 1975) and introduce a temporal spike code of sound direction, where optimality of hair cell orientation θj with respect to the acceleration direction θs is mapped onto spike phases via a von-Mises distribution. By learning to tune in to the earliest synchronized activity, nerve cells in the midbrain generate a map under the supervision of a locally excitatory, yet globally inhibitory visual teacher. Work done in collaboration with Daniel Begovic. Partially supported by BCCN - Munich.
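
    A toy version of the phase-coding idea can be sketched as follows. Everything quantitative here is an assumption for illustration: the abstract specifies only that orientation optimality maps onto spike phases via a von Mises distribution, so the cosine alignment term and the maximum concentration below are invented placeholders.

      import numpy as np

      def hair_cell_spike_phases(theta_j, theta_s, kappa_max=8.0, n=200, seed=None):
          """Draw n spike phases for a hair cell with preferred orientation theta_j
          responding to particle acceleration along axis theta_s; phase locking
          (von Mises concentration) grows with axial alignment."""
          rng = np.random.default_rng(seed)
          kappa = kappa_max * np.abs(np.cos(theta_j - theta_s))  # axial: 180° ambiguity
          return rng.vonmises(0.0, kappa, size=n)

    Well-aligned cells then fire in tight synchrony near phase zero, providing the earliest synchronized activity that the midbrain map is trained to detect.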

  3. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    PubMed Central

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

    Bionic technology provides new inspiration for mobile robot navigation since it explores the way to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other for target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of the microphone array. Furthermore, this paper presents a heading-direction-based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction measured by a magnetoresistive sensor and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, one robot can communicate with the other robots via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while the two hearing robots can quickly localize and track the olfactory robot within 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401
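
    For the hearing robots, once a pairwise time delay has been estimated, a bearing follows from the microphone geometry. A minimal far-field sketch for a two-microphone pair is given below; the spacing and speed of sound are example values, not parameters from the paper.

      import numpy as np

      def bearing_from_tdoa(tau, mic_spacing=0.2, c=343.0):
          """Far-field source bearing (degrees from broadside) for a microphone
          pair separated by mic_spacing meters: sin(theta) = c * tau / d."""
          s = np.clip(c * tau / mic_spacing, -1.0, 1.0)  # guard against noisy tau
          return np.degrees(np.arcsin(s))

    With more than two microphones, several such pairwise estimates can be intersected to resolve the source position rather than only its bearing.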

  4. 33 CFR 154.1125 - Additional response plan requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Prince William Sound, Alaska § 154.1125 Additional response plan requirements. (a) The owner or operator of a TAPAA facility shall include the following information in the Prince William Sound appendix to... for personnel, including local residents and fishermen, from the following locations in Prince William...

  5. Learning-Related Shifts in Generalization Gradients for Complex Sounds

    PubMed Central

    Wisniewski, Matthew G.; Church, Barbara A.; Mercado, Eduardo

    2010-01-01

    Learning to discriminate stimuli can alter how one distinguishes related stimuli. For instance, training an individual to differentiate between two stimuli along a single dimension can alter how that individual generalizes learned responses. In this study, we examined the persistence of shifts in generalization gradients after training with sounds. University students were trained to differentiate two sounds that varied along a complex acoustic dimension. Students subsequently were tested on their ability to recognize a sound they experienced during training when it was presented among several novel sounds varying along this same dimension. Peak shift was observed in Experiment 1 when generalization tests immediately followed training, and in Experiment 2 when tests were delayed by 24 hours. These findings further support the universality of generalization processes across species, modalities, and levels of stimulus complexity. They also raise new questions about the mechanisms underlying learning-related shifts in generalization gradients. PMID:19815929

  6. Lexical processing and distributional knowledge in sound-spelling mapping in a consistent orthography: A longitudinal study of reading and spelling in dyslexic and typically developing children.

    PubMed

    Marinelli, Chiara Valeria; Cellini, Pamela; Zoccolotti, Pierluigi; Angelelli, Paola

    This study examined the ability to master lexical processing and use knowledge of the relative frequency of sound-spelling mappings in both reading and spelling. Twenty-four dyslexic and dysgraphic children and 86 typically developing readers were followed longitudinally in 3rd and 5th grades. Effects of word regularity, word frequency, and probability of sound-spelling mappings were examined in two experimental tasks: (a) spelling to dictation; and (b) orthographic judgment. Dyslexic children showed larger regularity and frequency effects than controls in both tasks. Sensitivity to distributional information of sound-spelling mappings was already detected by third grade, indicating early acquisition even in children with dyslexia. Although with notable differences, knowledge of the relative frequencies of sound-spelling mapping influenced both reading and spelling. Results are discussed in terms of their theoretical and empirical implications.

  7. Expertise with artificial non-speech sounds recruits speech-sensitive cortical regions

    PubMed Central

    Leech, Robert; Holt, Lori L.; Devlin, Joseph T.; Dick, Frederic

    2009-01-01

    Regions of the human temporal lobe show greater activation for speech than for other sounds. These differences may reflect intrinsically specialized domain-specific adaptations for processing speech, or they may be driven by the significant expertise we have in listening to the speech signal. To test the expertise hypothesis, we used a video-game-based paradigm that tacitly trained listeners to categorize acoustically complex, artificial non-linguistic sounds. Before and after training, we used functional MRI to measure how expertise with these sounds modulated temporal lobe activation. Participants’ ability to explicitly categorize the non-speech sounds predicted the change in pre- to post-training activation in speech-sensitive regions of the left posterior superior temporal sulcus, suggesting that emergent auditory expertise may help drive this functional regionalization. Thus, seemingly domain-specific patterns of neural activation in higher cortical regions may be driven in part by experience-based restructuring of high-dimensional perceptual space. PMID:19386919

  8. dTULP, the Drosophila melanogaster Homolog of Tubby, Regulates Transient Receptor Potential Channel Localization in Cilia

    PubMed Central

    Shim, Jaewon; Han, Woongsu; Lee, Jinu; Bae, Yong Chul; Chung, Yun Doo; Kim, Chul Hoon; Moon, Seok Jun

    2013-01-01

    Mechanically gated ion channels convert sound into an electrical signal for the sense of hearing. In Drosophila melanogaster, several transient receptor potential (TRP) channels have been implicated to be involved in this process. TRPN (NompC) and TRPV (Inactive) channels are localized in the distal and proximal ciliary zones of auditory receptor neurons, respectively. This segregated ciliary localization suggests distinct roles in auditory transduction. However, the regulation of this localization is not fully understood. Here we show that the Drosophila Tubby homolog, King tubby (hereafter called dTULP) regulates ciliary localization of TRPs. dTULP-deficient flies show uncoordinated movement and complete loss of sound-evoked action potentials. Inactive and NompC are mislocalized in the cilia of auditory receptor neurons in the dTulp mutants, indicating that dTULP is required for proper cilia membrane protein localization. This is the first demonstration that dTULP regulates TRP channel localization in cilia, and suggests that dTULP is a protein that regulates ciliary neurosensory functions. PMID:24068974

  9. Developing a system for blind acoustic source localization and separation

    NASA Astrophysics Data System (ADS)

    Kulkarni, Raghavendra

    This dissertation presents innovative methodologies for locating, extracting, and separating multiple incoherent sound sources in three-dimensional (3D) space, and applications of the time reversal (TR) algorithm to pinpoint the hyperactive neural activity inside the brain auditory structure that is correlated with tinnitus pathology. Specifically, an acoustic modeling based method is developed for locating arbitrary and incoherent sound sources in 3D space in real time by using a minimal number of microphones, and the Point Source Separation (PSS) method is developed for extracting target signals from directly measured mixed signals. Combining these two approaches leads to a novel technology known as Blind Sources Localization and Separation (BSLS) that enables one to locate multiple incoherent sound signals in 3D space and separate original individual sources simultaneously, based on the directly measured mixed signals. These technologies have been validated through numerical simulations and experiments conducted in various non-ideal environments where there are non-negligible, unspecified sound reflections and reverberation as well as interferences from random background noise. Another innovation presented in this dissertation concerns applications of the TR algorithm to pinpoint the exact locations of hyperactive neurons in the brain auditory structure that are directly correlated with the tinnitus perception. Benchmark tests conducted on normal rats have confirmed the localization results provided by the TR algorithm. Results demonstrate that the spatial resolution of this source localization can be as high as the micrometer level. This high-precision localization may lead to a paradigm shift in tinnitus diagnosis, which may in turn produce a more cost-effective treatment for tinnitus than any of the existing ones.

  10. Auditory Perception of Motor Vehicle Travel Paths

    PubMed Central

    Ashmead, Daniel H.; Grantham, D. Wesley; Maloff, Erin S.; Hornsby, Benjamin; Nakamura, Takabun; Davis, Timothy J.; Pampel, Faith; Rushing, Erin G.

    2012-01-01

    Objective These experiments address concerns that motor vehicles in electric engine mode are so quiet that they pose a risk to pedestrians, especially those with visual impairments. Background The “quiet car” issue has focused on hybrid and electric vehicles, although it also applies to internal combustion engine vehicles. Previous research has focused on detectability of vehicles, mostly in quiet settings. Instead, we focused on the functional ability to perceive vehicle motion paths. Method Participants judged whether simulated vehicles were traveling straight or turning, with emphasis on the impact of background traffic sound. Results In quiet, listeners made the straight-or-turn judgment soon enough in the vehicle’s path to be useful for deciding whether to start crossing the street. This judgment is based largely on sound level cues rather than the spatial direction of the vehicle. With even moderate background traffic sound, the ability to tell straight from turn paths is severely compromised. The signal-to-noise ratio needed for the straight-or-turn judgment is much higher than that needed to detect a vehicle. Conclusion Although a requirement for a minimum vehicle sound level might enhance detection of vehicles in quiet settings, it is unlikely that this requirement would contribute to pedestrian awareness of vehicle movements in typical traffic settings with many vehicles present. Application The findings are relevant to deliberations by government agencies and automobile manufacturers about standards for minimum automobile sounds and, more generally, for solutions to pedestrians’ needs for information about traffic, especially for pedestrians with sensory impairments. PMID:22768645

  11. Auditory perception of motor vehicle travel paths.

    PubMed

    Ashmead, Daniel H; Grantham, D Wesley; Maloff, Erin S; Hornsby, Benjamin; Nakamura, Takabun; Davis, Timothy J; Pampel, Faith; Rushing, Erin G

    2012-06-01

    These experiments address concerns that motor vehicles in electric engine mode are so quiet that they pose a risk to pedestrians, especially those with visual impairments. The "quiet car" issue has focused on hybrid and electric vehicles, although it also applies to internal combustion engine vehicles. Previous research has focused on detectability of vehicles, mostly in quiet settings. Instead, we focused on the functional ability to perceive vehicle motion paths. Participants judged whether simulated vehicles were traveling straight or turning, with emphasis on the impact of background traffic sound. In quiet, listeners made the straight-or-turn judgment soon enough in the vehicle's path to be useful for deciding whether to start crossing the street. This judgment is based largely on sound level cues rather than the spatial direction of the vehicle. With even moderate background traffic sound, the ability to tell straight from turn paths is severely compromised. The signal-to-noise ratio needed for the straight-or-turn judgment is much higher than that needed to detect a vehicle. Although a requirement for a minimum vehicle sound level might enhance detection of vehicles in quiet settings, it is unlikely that this requirement would contribute to pedestrian awareness of vehicle movements in typical traffic settings with many vehicles present. The findings are relevant to deliberations by government agencies and automobile manufacturers about standards for minimum automobile sounds and, more generally, for solutions to pedestrians' needs for information about traffic, especially for pedestrians with sensory impairments.

  12. Neural Correlates of Early Sound Encoding and their Relationship to Speech-in-Noise Perception

    PubMed Central

    Coffey, Emily B. J.; Chepesiuk, Alexander M. P.; Herholz, Sibylle C.; Baillet, Sylvain; Zatorre, Robert J.

    2017-01-01

    Speech-in-noise (SIN) perception is a complex cognitive skill that affects social, vocational, and educational activities. Poor SIN ability particularly affects young and elderly populations, yet varies considerably even among healthy young adults with normal hearing. Although SIN skills are known to be influenced by top-down processes that can selectively enhance lower-level sound representations, the complementary role of feed-forward mechanisms and their relationship to musical training is poorly understood. Using a paradigm that minimizes the main top-down factors that have been implicated in SIN performance, such as working memory, we aimed to better understand how robust encoding of periodicity in the auditory system (as measured by the frequency-following response, FFR) contributes to SIN perception. Using magnetoencephalography, we found that the strength of encoding at the fundamental frequency in the brainstem, thalamus, and cortex is correlated with SIN accuracy. The amplitude of the slower cortical P2 wave was previously also shown to be related to SIN accuracy and FFR strength; we use MEG source localization to show that the P2 wave originates in a temporal region anterior to that of the cortical FFR. We also confirm that the observed enhancements were related to the extent and timing of musicianship. These results are consistent with the hypothesis that basic feed-forward sound encoding affects SIN perception by providing better information to later processing stages, and that modifying this process may be one mechanism through which musical training might enhance the auditory networks that subserve both musical and language functions. PMID:28890684

  13. Remote Characterization of Ice Shelf Surface and Basal Processes: Examples from East Antarctica

    NASA Astrophysics Data System (ADS)

    Greenbaum, J. S.; Blankenship, D. D.; Grima, C.; Schroeder, D. M.; Soderlund, K. M.; Young, D. A.; Kempf, S. D.; Siegert, M. J.; Roberts, J. L.; Warner, R. C.; van Ommen, T. D.

    2017-12-01

    The ability to remotely characterize surface and basal processes of ice shelves has important implications for conducting systematic, repeatable, and safe evaluations of their stability in the context of atmospheric and oceanic forcing. Additionally, techniques developed for terrestrial ice shelves can be adapted to orbital radar sounding datasets planned for forthcoming investigations of icy moons. This has been made possible through recent advances in radar signal processing that enable these data to be used to test hypotheses derived from conceptual and numerical models of ice shelf- and ice shell-ocean interactions. Here, we present several examples of radar sounding-derived characterizations of surface and basal processes underway on ice shelves in East Antarctica. These include percolation of near-surface meltwater in warm austral summers, brine infiltration along ice shelf calving fronts, basal melt rate and distribution, and basal freeze distribution. On Europa, near-surface brines and their migration may impact local geological variability, while basal processes likely control the distribution of melt and freeze. Terrestrially, we emphasize radar-sounding records of the Totten Glacier Ice Shelf which hosts each of these processes as well as the highest known density of basal melt channels of any terrestrial ice shelf. Further, with a maximum floating ice thickness of over 2.5 km, the pressure at Totten's basal interface may be similar to that at Europa's ice-ocean interface; therefore, evaluating surface and basal processes of Totten Glacier and other ice shelves could serve as analogs for understanding melting processes of Europa's ice shell.

  14. Concurrent 3-D sonifications enable the head-up monitoring of two interrelated aircraft navigation instruments.

    PubMed

    Towers, John; Burgess-Limerick, Robin; Riek, Stephan

    2014-12-01

    The aim of this study was to enable the head-up monitoring of two interrelated aircraft navigation instruments by developing a 3-D auditory display that encodes this navigation information within two spatially discrete sonifications. Head-up monitoring of aircraft navigation information utilizing 3-D audio displays, particularly involving concurrently presented sonifications, requires additional research. A flight simulator's head-down waypoint bearing and course deviation instrument readouts were conveyed to participants via a 3-D auditory display. Both readouts were separately represented by a colocated pair of continuous sounds, one fixed and the other varying in pitch, which together encoded the instrument value's deviation from the norm. Each sound pair's position in the listening space indicated the left/right parameter of its instrument's readout. Participants' accuracy in navigating a predetermined flight plan was evaluated while performing a head-up task involving the detection of visual flares in the out-of-cockpit scene. The auditory display significantly improved aircraft heading and course deviation accuracy, head-up time, and flare detections. Head tracking did not improve performance by providing participants with the ability to orient to potentially conflicting sounds, suggesting that the use of integrated localizing cues was successful. A supplementary 3-D auditory display enabled effective head-up monitoring of interrelated navigation information normally attended to through a head-down display. Pilots operating aircraft such as helicopters and unmanned aerial vehicles may benefit from a supplementary auditory display because they navigate in two dimensions while performing head-up, out-of-aircraft, visual tasks.
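
    One plausible reading of the fixed-plus-varying tone pair is a logarithmic pitch mapping of the readout's deviation, sketched below; the reference frequency and scaling are illustrative assumptions, not the study's published mapping.

      def deviation_to_frequency(deviation, f_ref=440.0, semitones_per_unit=2.0):
          """Map an instrument's deviation from its nominal value onto the varying
          tone's frequency, relative to a fixed reference tone at f_ref Hz."""
          return f_ref * 2.0 ** (semitones_per_unit * deviation / 12.0)

      # e.g. a course deviation of +3 units sounds 6 semitones above the fixed tone
      print(deviation_to_frequency(3.0))

    Because the two tones are colocated, the listener reads the deviation from the musical interval between them, while the pair's spatial position carries the left/right parameter.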

  15. 12 CFR 228.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... obtained from community organizations, state, local, and tribal governments, economic development agencies... condition of the bank, the economic climate (national, regional, and local), safety and soundness...

  16. 12 CFR 25.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations...

  17. 12 CFR 345.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations...

  18. Efficacy of extracting indices from large-scale acoustic recordings to monitor biodiversity.

    PubMed

    Buxton, Rachel; McKenna, Megan F; Clapp, Mary; Meyer, Erik; Stabenau, Erik; Angeloni, Lisa M; Crooks, Kevin; Wittemyer, George

    2018-04-20

    Passive acoustic monitoring has the potential to be a powerful approach for assessing biodiversity across large spatial and temporal scales. However, extracting meaningful information from recordings can be prohibitively time consuming. Acoustic indices offer a relatively rapid method for processing acoustic data and are increasingly used to characterize biological communities. We examine the ability of acoustic indices to predict the diversity and abundance of biological sounds within recordings. First, we reviewed the acoustic index literature and found that over 60 indices have been applied to a range of objectives with varying success. We then implemented a subset of the most successful indices on acoustic data collected at 43 sites in temperate terrestrial and tropical marine habitats across the continental U.S., developing a predictive model of the diversity of animal sounds observed in recordings. For terrestrial recordings, random forest models using a suite of acoustic indices as covariates predicted Shannon diversity, richness, and total number of biological sounds with high accuracy (R² ≥ 0.94, mean squared error [MSE] ≤ 170.2). Among the indices assessed, roughness, acoustic activity, and acoustic richness contributed most to the predictive ability of the models. Performance of index models was negatively impacted by insect, weather, and anthropogenic sounds. For marine recordings, random forest models predicted Shannon diversity, richness, and total number of biological sounds with low accuracy (R² ≤ 0.40, MSE ≥ 195), indicating that alternative methods are necessary in marine habitats. Our results suggest that using a combination of relevant indices in a flexible model can accurately predict the diversity of biological sounds in temperate terrestrial acoustic recordings. Thus, acoustic approaches could be an important contribution to biodiversity monitoring in some habitats in the face of accelerating human-caused ecological change.
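
    The terrestrial modeling step can be sketched with scikit-learn. In the sketch below the synthetic matrix merely stands in for the real table of per-recording index values (roughness, acoustic activity, acoustic richness, and so on), which is not reproduced here.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_score

      # X: one row per recording, one column per acoustic index;
      # y: observed diversity of biological sounds in that recording.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 8))                     # placeholder index table
      y = 2.0 * X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=200)

      model = RandomForestRegressor(n_estimators=500, random_state=0)
      print("cross-validated R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())

      model.fit(X, y)
      print("index importances:", model.feature_importances_)  # which indices matter most

    The feature importances play the role of the contribution ranking reported above, where roughness, acoustic activity, and acoustic richness dominated.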

  19. How Internally Coupled Ears Generate Temporal and Amplitude Cues for Sound Localization.

    PubMed

    Vedurmudi, A P; Goulet, J; Christensen-Dalsgaard, J; Young, B A; Williams, R; van Hemmen, J L

    2016-01-15

    In internally coupled ears, displacement of one eardrum creates pressure waves that propagate through air-filled passages in the skull and cause displacement of the opposing eardrum, and conversely. By modeling the membrane, passages, and propagating pressure waves, we show that internally coupled ears generate unique amplitude and temporal cues for sound localization. The magnitudes of both these cues are directionally dependent. The tympanic fundamental frequency segregates a low-frequency regime with constant time-difference magnification from a high-frequency domain with considerable amplitude magnification.

  20. Examination of propeller sound production using large eddy simulation

    NASA Astrophysics Data System (ADS)

    Keller, Jacob; Kumar, Praveen; Mahesh, Krishnan

    2018-06-01

    The flow field of a five-bladed marine propeller operating at design condition, obtained using large eddy simulation, is used to calculate the resulting far-field sound. The results of three acoustic formulations are compared, and the effects of the underlying assumptions are quantified. The integral form of the Ffowcs-Williams and Hawkings (FW-H) equation is solved on the propeller surface, which is discretized into a collection of N radial strips. Further assumptions are made to reduce FW-H to a Curle acoustic analogy and a point-force dipole model. Results show that although the individual blades are strongly tonal in the rotor plane, the tonal sound interferes destructively in the far field; the propeller is found to be acoustically compact for frequencies up to 100 times the rotation rate. The overall far-field acoustic signature is broadband. Locations of maximum sound occur along the axis of rotation, both upstream and downstream. The propeller hub is found to be a source of significant sound to observers in the rotor plane, due to flow separation and interaction with the blade-root wakes. The majority of the propeller sound is generated by localized unsteadiness at the blade tip, which is caused by shedding of the tip vortex. Tonal blade sound is found to be caused by the periodic motion of the loaded blades. Turbulence created in the blade boundary layer is convected past the blade trailing edge, leading to generation of broadband noise along the blade. Acoustic energy is distributed among higher frequencies as the local Reynolds number increases radially along the blades. Sound source correlation and spectra are examined in the context of noise modeling.
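
    For reference, the point-force dipole model mentioned above corresponds, for an acoustically compact body, to Curle's classical far-field result; the textbook form (not a formula quoted from this paper) is

      \[
        p'(\mathbf{x}, t) \;\approx\; \frac{x_i}{4\pi c_0\,|\mathbf{x}|^2}\,
        \frac{\partial F_i}{\partial t}\!\left(t - \frac{|\mathbf{x}|}{c_0}\right),
      \]

    where F_i(t) is the unsteady force the blades exert on the fluid and c_0 is the speed of sound; the factor x_i/|x|² combines the 1/|x| far-field decay with the cosine dipole directivity.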

  1. How far away is plug 'n' play? Assessing the near-term potential of sonification and auditory display

    NASA Technical Reports Server (NTRS)

    Bargar, Robin

    1995-01-01

    The commercial music industry offers a broad range of plug 'n' play hardware and software scaled to music professionals and scaled to a broad consumer market. The principles of sound synthesis utilized in these products are relevant to applications in virtual environments (VE). However, the closed architectures used in commercial music synthesizers are prohibitive to low-level control during real-time rendering, and the algorithms and sounds themselves are not standardized from product to product. To bring sound into VE requires a new generation of open architectures designed for human-controlled performance from interfaces embedded in immersive environments. This presentation addresses the state of the sonic arts in scientific computing and VE, analyzes research challenges facing sound computation, and offers suggestions regarding tools we might expect to become available during the next few years. A list of classes of audio functionality in VE includes sonification -- the use of sound to represent data from numerical models; 3D auditory display (spatialization and localization, also called externalization); navigation cues for positional orientation and for finding items or regions inside large spaces; voice recognition for controlling the computer; external communications between users in different spaces; and feedback to the user concerning his own actions or the state of the application interface. To effectively convey this considerable variety of signals, we apply principles of acoustic design to ensure the messages are neither confusing nor competing. We approach the design of auditory experience through a comprehensive structure for messages and message interplay that we refer to as an Automated Sound Environment. Our research addresses real-time sound synthesis, real-time signal processing and localization, interactive control of high-dimensional systems, and synchronization of sound and graphics.

  2. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for the maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373

  3. Puget Sound Route Learning Test: Examination of the Validity and Reliability of a Novel Route Test in Healthy Older Adults and Alzheimer's Disease Patients

    ERIC Educational Resources Information Center

    Tiernan, Kristine N.; Schenk, Kelli; Swadberg, Danielle; Shimonova, Marianna; Schollaert, Daniel; Boorkman, Patti; Cherrier, Monique M.

    2004-01-01

    The validity and reliability of a novel route learning test were examined to assess the effectiveness of its use in evaluating spatial memory in healthy older adults and patients with Alzheimer's disease (AD). The Puget Sound Route Learning Test was significantly correlated with an existing measure of cognitive ability, the Dementia Rating Scale.…

  4. An Evaluation of the Ability of Navy Hospital Corpsmen to Collect Chest Pain Data from Patients

    DTIC Science & Technology

    1984-01-11

    PREVIOUS CARDIO-RESPIRATORY ILLNESS: (significant illness, either cardiovascular or respiratory) YES (64) NO (65) PREVIOUS MAJOR SURGERY...clavicle to chin - elevated) otherwise circle normal) NORMAL (97) RAISED (98) RESPIRATORY MOVEMENT: (abnormal = the difference between...ABNORMAL (100) HEART SOUNDS: (with a stethoscope listen to the 1st and 2nd heart sounds; normal = lub-dub, lub-dub; abnormal = everything else

  5. Psychomotor Performance Effects Upon Elementary School Children by Sex and Perceptual Speed Ability of Three Compressions of an Instructional Sound Motion Picture.

    ERIC Educational Resources Information Center

    Masterson, James; And Others

    Forty-eight sixth-grade students were studied to determine their response to selected compressions of the narration of an instructional sound motion picture. A 4:10 color film with a 158 wpm recorded narration was shown at 25, 33-1/3 and 50 percent compression rates; performance time and quality were measured immediately and after 12-day…

  6. Imitation of novel conspecific and human speech sounds in the killer whale (Orcinus orca).

    PubMed

    Abramson, José Z; Hernández-Lloreda, Mª Victoria; García, Lino; Colmenares, Fernando; Aboitiz, Francisco; Call, Josep

    2018-01-31

    Vocal imitation is a hallmark of human spoken language, which, along with other advanced cognitive skills, has fuelled the evolution of human culture. Comparative evidence has revealed that although the ability to copy sounds from conspecifics is, among primates, largely unique to humans, a few distantly related taxa of birds and mammals have also independently evolved this capacity. Remarkably, field observations of killer whales have documented the existence of group-differentiated vocal dialects that are often referred to as traditions or cultures and are hypothesized to be acquired non-genetically. Here we use a do-as-I-do paradigm to study the abilities of a killer whale to imitate novel sounds uttered by conspecific (vocal imitative learning) and human models (vocal mimicry). We found that the subject made recognizable copies of all familiar and novel conspecific and human sounds tested and did so relatively quickly (most during the first 10 trials and three in the first attempt). Our results lend support to the hypothesis that the vocal variants observed in natural populations of this species can be socially learned by imitation. The capacity for vocal imitation shown in this study may scaffold the natural vocal traditions of killer whales in the wild. © 2018 The Author(s).

  7. Cherenkov sound on a surface of a topological insulator

    NASA Astrophysics Data System (ADS)

    Smirnov, Sergey

    2013-11-01

    Topological insulators are currently of considerable interest due to peculiar electronic properties originating from helical states on their surfaces. Here we demonstrate that the sound excited by helical particles on surfaces of topological insulators has several exotic properties fundamentally different from sound propagating in nonhelical or even isotropic helical systems. Specifically, the sound may have strictly forward propagation, which is absent for isotropic helical states. Its dependence on the anisotropy of realistic surface states is distinctive and may be used as an alternative experimental tool to measure the anisotropy strength. Fascinating from the fundamental point of view, backward (or anomalous) Cherenkov sound is excited above the critical angle π/2 when the anisotropy exceeds a critical value. Strikingly, at strong anisotropy the sound localizes into a few forward and backward beams propagating along specific directions.

  8. 7 CFR 1779.48 - Collateral.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... and ability of project management, soundness of the project, and the borrower's prospective earnings..., water rights, buildings, machinery, equipment, accounts receivable, contracts, cash, or other accounts...

  9. 7 CFR 1779.48 - Collateral.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... and ability of project management, soundness of the project, and the borrower's prospective earnings..., water rights, buildings, machinery, equipment, accounts receivable, contracts, cash, or other accounts...

  10. 7 CFR 1779.48 - Collateral.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and ability of project management, soundness of the project, and the borrower's prospective earnings..., water rights, buildings, machinery, equipment, accounts receivable, contracts, cash, or other accounts...

  11. 7 CFR 1779.48 - Collateral.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and ability of project management, soundness of the project, and the borrower's prospective earnings..., water rights, buildings, machinery, equipment, accounts receivable, contracts, cash, or other accounts...

  12. Evolutionary trends in directional hearing

    PubMed Central

    Carr, Catherine E.; Christensen-Dalsgaard, Jakob

    2016-01-01

    Tympanic hearing is a true evolutionary novelty that arose in parallel within early tetrapods. We propose that in these tetrapods, selection for sound localization in air acted upon pre-existing directionally sensitive brainstem circuits, similar to those in fishes. Auditory circuits in birds and lizards resemble this ancestral, directionally sensitive framework. Despite this anatomical similarity, coding of sound source location differs between birds and lizards. In birds, brainstem circuits compute sound location from interaural cues. Lizards, however, have coupled ears, and do not need to compute source location in the brain. Thus, their neural processing of sound direction differs, although all show mechanisms for enhancing sound source directionality. Comparisons with mammals reveal similarly complex interactions between coding strategies and evolutionary history. PMID:27448850

  13. Behavior and modeling of two-dimensional precedence effect in head-unrestrained cats

    PubMed Central

    Ruhland, Janet L.; Yin, Tom C. T.

    2015-01-01

    The precedence effect (PE) is an auditory illusion that occurs when listeners localize nearly coincident and similar sounds from different spatial locations, such as a direct sound and its echo. It has mostly been studied in humans and animals with immobile heads in the horizontal plane; speaker pairs were often symmetrically located in the frontal hemifield. The present study examined the PE in head-unrestrained cats for a variety of paired-sound conditions along the horizontal, vertical, and diagonal axes. Cats were trained with operant conditioning to direct their gaze to the perceived sound location. Stereotypical PE-like behaviors were observed for speaker pairs placed in azimuth or diagonally in the frontal hemifield as the interstimulus delay was varied. For speaker pairs in the median sagittal plane, no clear PE-like behavior occurred. Interestingly, when speakers were placed diagonally in front of the cat, certain PE-like behavior emerged along the vertical dimension. However, PE-like behavior was not observed when both speakers were located in the left hemifield. A Hodgkin-Huxley model was used to simulate responses of neurons in the medial superior olive (MSO) to sound pairs in azimuth. The novel simulation incorporated a low-threshold potassium current and frequency mismatches to generate internal delays. The model exhibited distinct PE-like behavior, such as summing localization and localization dominance. The simulation indicated that certain encoding of the PE could have occurred before information reaches the inferior colliculus, and MSO neurons with binaural inputs having mismatched characteristic frequencies may play an important role. PMID:26133795
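
    The paired-sound stimulus at the heart of such experiments is simple to state in code. The sketch below generates an identical transient for a leading and a lagging speaker channel separated by an interstimulus delay; all values are illustrative, not the study's parameters.

        import numpy as np

        def lead_lag_pair(fs=48000, isd_ms=2.0, click_ms=0.1, dur_ms=50.0):
            """Identical transient for two speakers, lag delayed by isd_ms."""
            n = int(fs * dur_ms / 1000)
            click_n = int(fs * click_ms / 1000)
            lead = np.zeros(n)
            lead[:click_n] = 1.0                        # brief rectangular click
            lag = np.zeros(n)
            start = int(fs * isd_ms / 1000)             # interstimulus delay
            lag[start:start + click_n] = 1.0
            return lead, lag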

  14. Short-Latency, Goal-Directed Movements of the Pinnae to Sounds That Produce Auditory Spatial Illusions

    PubMed Central

    Tollin, Daniel J.; McClaine, Elizabeth M.; Yin, Tom C. T.

    2010-01-01

    The precedence effect (PE) is an auditory spatial illusion whereby two identical sounds presented from two separate locations with a delay between them are perceived as a fused single sound source whose position depends on the value of the delay. By training cats using operant conditioning to look at sound sources, we have previously shown that cats experience the PE similarly to humans. For delays less than ±400 μs, cats exhibit summing localization, the perception of a “phantom” sound located between the sources. Consistent with localization dominance, for delays from 400 μs to ∼10 ms, cats orient toward the leading source location only, with little influence of the lagging source. Finally, echo threshold was reached for delays >10 ms, where cats first began to orient to the lagging source. It has been hypothesized by some that the neural mechanisms that produce facets of the PE, such as localization dominance and echo threshold, must likely occur at cortical levels. To test this hypothesis, we measured both pinnae position, which were not under any behavioral constraint, and eye position in cats and found that the pinnae orientations to stimuli that produce each of the three phases of the PE illusion were similar to the gaze responses. Although both eye and pinnae movements behaved in a manner that reflected the PE, because the pinnae moved with strikingly short latencies (∼30 ms), these data suggest a subcortical basis for the PE and that the cortex is not likely to be directly involved. PMID:19889848

  15. Short-latency, goal-directed movements of the pinnae to sounds that produce auditory spatial illusions.

    PubMed

    Tollin, Daniel J; McClaine, Elizabeth M; Yin, Tom C T

    2010-01-01

    The precedence effect (PE) is an auditory spatial illusion whereby two identical sounds presented from two separate locations with a delay between them are perceived as a fused single sound source whose position depends on the value of the delay. By training cats using operant conditioning to look at sound sources, we have previously shown that cats experience the PE similarly to humans. For delays less than ±400 μs, cats exhibit summing localization, the perception of a "phantom" sound located between the sources. Consistent with localization dominance, for delays from 400 μs to approximately 10 ms, cats orient toward the leading source location only, with little influence of the lagging source. Finally, echo threshold was reached for delays >10 ms, where cats first began to orient to the lagging source. It has been hypothesized by some that the neural mechanisms that produce facets of the PE, such as localization dominance and echo threshold, must likely occur at cortical levels. To test this hypothesis, we measured both pinnae position, which were not under any behavioral constraint, and eye position in cats and found that the pinnae orientations to stimuli that produce each of the three phases of the PE illusion were similar to the gaze responses. Although both eye and pinnae movements behaved in a manner that reflected the PE, because the pinnae moved with strikingly short latencies (approximately 30 ms), these data suggest a subcortical basis for the PE and that the cortex is not likely to be directly involved.

  16. Effect of background noise on neuronal coding of interaural level difference cues in rat inferior colliculus

    PubMed Central

    Mokri, Yasamin; Worland, Kate; Ford, Mark; Rajan, Ramesh

    2015-01-01

    Humans can accurately localize sounds even in unfavourable signal-to-noise conditions. To investigate the neural mechanisms underlying this, we studied the effect of background wide-band noise on neural sensitivity to variations in interaural level difference (ILD), the predominant cue for sound localization in azimuth for high-frequency sounds, at the characteristic frequency of cells in rat inferior colliculus (IC). Binaural noise at high levels generally resulted in suppression of responses (55.8%), but at lower levels resulted in enhancement (34.8%) as well as suppression (30.3%). When recording conditions permitted, we then examined if any binaural noise effects were related to selective noise effects at each of the two ears, which we interpreted in light of well-known differences in input type (excitation and inhibition) from each ear shaping particular forms of ILD sensitivity in the IC. At high signal-to-noise ratios (SNR), in most ILD functions (41%), the effect of background noise appeared to be due to effects on inputs from both ears, while for a large percentage (35.8%) appeared to be accounted for by effects on excitatory input. However, as SNR decreased, change in excitation became the dominant contributor to the change due to binaural background noise (63.6%). These novel findings shed light on the IC neural mechanisms for sound localization in the presence of continuous background noise. They also suggest that some effects of background noise on encoding of sound location reported to be emergent in upstream auditory areas can also be observed at the level of the midbrain. PMID:25865218
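
    For concreteness, the ILD cue and the signal-to-noise manipulation can be sketched as follows. This is a toy illustration of the quantities involved, not the recording or stimulus pipeline used in the study.

        import numpy as np

        def rms(x):
            return np.sqrt(np.mean(x ** 2))

        def ild_db(left, right):
            """Interaural level difference in dB (positive = right louder)."""
            return 20 * np.log10(rms(right) / rms(left))

        def add_noise(x, snr_db, rng=None):
            """Add wide-band Gaussian noise at a chosen signal-to-noise ratio."""
            rng = rng or np.random.default_rng(0)
            noise = rng.standard_normal(len(x))
            noise *= rms(x) / (rms(noise) * 10 ** (snr_db / 20))
            return x + noise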

  17. The relation between working memory capacity and auditory lateralization in children with auditory processing disorders.

    PubMed

    Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; Sajedi, Hamed

    2014-11-01

    Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using interaural time differences (ITDs) and interaural intensity differences (IIDs) with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using non-word repetition and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and lateralization tests in the two groups. Children in the APD group had consistently lower scores than typically developing subjects on lateralization and working memory capacity measures. The results showed that working memory capacity had a significantly negative correlation with ITD errors, especially with the high-pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings of this research indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
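
    A toy sketch of the association analysis described, regressing lateralization error on working-memory span. The data are hypothetical, and scipy's linregress is an assumed stand-in for whatever software the authors used.

        import numpy as np
        from scipy.stats import linregress

        memory_span = np.array([3, 4, 4, 5, 6, 6, 7, 8])        # hypothetical scores
        itd_error = np.array([42, 40, 35, 30, 26, 24, 20, 15])  # hypothetical errors

        fit = linregress(memory_span, itd_error)
        print(f"slope={fit.slope:.2f}  r={fit.rvalue:.2f}  p={fit.pvalue:.4f}")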

  18. The role of musical aptitude and language skills in preattentive duration processing in school-aged children.

    PubMed

    Milovanov, Riia; Huotilainen, Minna; Esquef, Paulo A A; Alku, Paavo; Välimäki, Vesa; Tervaniemi, Mari

    2009-08-28

    We examined 10- to 12-year-old elementary school children's ability to preattentively process sound durations in music and speech stimuli. In total, 40 children had either advanced foreign language production skills and higher musical aptitude or less advanced results in both musicality and linguistic tests. Event-related potential (ERP) recordings of the mismatch negativity (MMN) show that duration changes in musical sounds are processed more prominently and accurately than changes in speech sounds. Moreover, children with advanced pronunciation and musicality skills displayed enhanced MMNs to duration changes in both speech and musical sounds. Thus, our study provides further evidence for the claim that musical aptitude and linguistic skills are interconnected and that the musical features of the stimuli could have a preponderant role in preattentive duration processing.

  19. Consonant Differentiation Mediates the Discrepancy between Non-verbal and Verbal Abilities in Children with ASD

    ERIC Educational Resources Information Center

    Key, A. P.; Yoder, P. J.; Stone, W. L.

    2016-01-01

    Background: Many children with autism spectrum disorder (ASD) demonstrate verbal communication disorders reflected in lower verbal than non-verbal abilities. The present study examined the extent to which this discrepancy is associated with atypical speech sound differentiation. Methods: Differences in the amplitude of auditory event-related…

  20. Students' Understanding of Genetics Concepts: The Effect of Reasoning Ability and Learning Approaches

    ERIC Educational Resources Information Center

    Kiliç, Didem; Saglam, Necdet

    2014-01-01

    Students tend to learn genetics by rote and may not realise the interrelationships in daily life. Because reasoning abilities are necessary to construct relationships between concepts and rote learning impedes the students' sound understanding, it was predicted that having a high level of formal reasoning and adopting a meaningful learning orientation…

  1. Speech Discrimination in 11-Month-Old Bilingual and Monolingual Infants: A Magnetoencephalography Study

    ERIC Educational Resources Information Center

    Ferjan Ramírez, Naja; Ramírez, Rey R.; Clarke, Maggie; Taulu, Samu; Kuhl, Patricia K.

    2017-01-01

    Language experience shapes infants' abilities to process speech sounds, with universal phonetic discrimination abilities narrowing in the second half of the first year. Brain measures reveal a corresponding change in neural discrimination as the infant brain becomes selectively sensitive to its native language(s). Whether and how bilingual…

  2. The Impact of New Technologies on the Literacy Attainment of Deaf Children

    ERIC Educational Resources Information Center

    Harris, Margaret

    2015-01-01

    To become successful readers, hearing children require competence in both decoding--the ability to read individual words, underpinned by phonological skills and letter-sound knowledge--and linguistic comprehension--the ability to understand what they read--underpinned by language skills, including vocabulary knowledge. Children who are born with a…

  3. Spatial selective auditory attention in the presence of reverberant energy: individual differences in normal-hearing listeners.

    PubMed

    Ruggles, Dorea; Shinn-Cunningham, Barbara

    2011-06-01

    Listeners can selectively attend to a desired target by directing attention to known target source features, such as location or pitch. Reverberation, however, reduces the reliability of the cues that allow a target source to be segregated and selected from a sound mixture. Given this, it is likely that reverberant energy interferes with selective auditory attention. Anecdotal reports suggest that the ability to focus spatial auditory attention degrades even with early aging, yet there is little evidence that middle-aged listeners have behavioral deficits on tasks requiring selective auditory attention. The current study was designed to look for individual differences in selective attention ability and to see if any such differences correlate with age. Normal-hearing adults, ranging in age from 18 to 55 years, were asked to report a stream of digits located directly ahead in a simulated rectangular room. Simultaneous, competing masker digit streams were simulated at locations 15° left and right of center. The level of reverberation was varied to alter task difficulty by interfering with localization cues (increasing localization blur). Overall, performance was best in the anechoic condition and worst in the high-reverberation condition. Listeners nearly always reported a digit from one of the three competing streams, showing that reverberation did not render the digits unintelligible. Importantly, inter-subject differences were extremely large. These differences, however, were not significantly correlated with age, memory span, or hearing status. These results show that listeners with audiometrically normal pure tone thresholds differ in their ability to selectively attend to a desired source, a task important in everyday communication. Further work is necessary to determine if these differences arise from differences in peripheral auditory function or in more central function.

  4. Investigation of Liner Characteristics in the NASA Langley Curved Duct Test Rig

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.; Brown, Martha C.; Watson, Willie R.; Jones, Michael G.

    2007-01-01

    The Curved Duct Test Rig (CDTR), which is designed to investigate propagation of sound in a duct with flow, has been developed at NASA Langley Research Center. The duct incorporates an adaptive control system to generate a tone in the duct at a specific frequency with a target Sound Pressure Level and a target mode shape. The size of the duct, the ability to isolate higher order modes, and the ability to modify the duct configuration make this rig unique among experimental duct acoustics facilities. An experiment is described in which the facility performance is evaluated by measuring the sound attenuation by a sample duct liner. The liner sample comprises one wall of the liner test section. Sound in tones from 500 to 2400 Hz, with modes that are parallel to the liner surface of order 0 to 5, and that are normal to the liner surface of order 0 to 2, can be generated incident on the liner test section. Tests are performed in which sound is generated without axial flow in the duct and with flow at a Mach number of 0.275. The attenuation of the liner is determined by comparing the sound power in a hard wall section downstream of the liner test section to the sound power in a hard wall section upstream of the liner test section. These experimentally determined attenuations are compared to numerically determined attenuations calculated by means of a finite element analysis code. The code incorporates liner impedance values educed from measured data from the NASA Langley Grazing Incidence Tube, a test rig that is used for investigating liner performance with flow and with the (0,0) mode at grazing incidence. The analytical and experimental results compare favorably, indicating the validity of the finite element method and demonstrating that finite element prediction tools can be used together with experiment to characterize the liner attenuation.
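
    The attenuation measure described, downstream versus upstream sound power in decibels, reduces to one line. The sketch below is a minimal illustration with made-up numbers, not the facility's analysis code.

        import numpy as np

        def liner_attenuation_db(power_upstream_w, power_downstream_w):
            """Attenuation as the dB drop in sound power across the liner."""
            return 10 * np.log10(power_upstream_w / power_downstream_w)

        # e.g. 1.0 W upstream reduced to 0.05 W downstream -> about 13 dB
        print(liner_attenuation_db(1.0, 0.05))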

  5. Cortical activity patterns predict speech discrimination ability

    PubMed Central

    Engineer, Crystal T; Perez, Claudia A; Chen, YeTing H; Carraway, Ryan S; Reed, Amanda C; Shetake, Jai A; Jakkamsetti, Vikram; Chang, Kevin Q; Kilgard, Michael P

    2010-01-01

    Neural activity in the cerebral cortex can explain many aspects of sensory perception. Extensive psychophysical and neurophysiological studies of visual motion and vibrotactile processing show that the firing rate of cortical neurons averaged across 50–500 ms is well correlated with discrimination ability. In this study, we tested the hypothesis that primary auditory cortex (A1) neurons use temporal precision on the order of 1–10 ms to represent speech sounds shifted into the rat hearing range. Neural discrimination was highly correlated with behavioral performance on 11 consonant-discrimination tasks when spike timing was preserved and was not correlated when spike timing was eliminated. This result suggests that spike timing contributes to the auditory cortex representation of consonant sounds. PMID:18425123

  6. Panels with low-Q-factor resonators with theoretically infinite sound-proofing ability at a single frequency

    NASA Astrophysics Data System (ADS)

    Lazarev, L. A.

    2015-07-01

    An infinite panel with two types of resonators regularly installed on it is theoretically considered. Each resonator is an air-filled cavity hermetically closed by a plate, which executes piston vibrations. The plate and air inside the cavity play the roles of mass and elasticity, respectively. Every other resonator is reversed. At a certain ratio between the parameters of the resonators at the tuning frequency of the entire system, the acoustic-pressure force that directly affects the panel can be fully compensated by the action forces of the resonators. In this case, the sound-proofing ability (transmission loss) tends to infinity. The presented calculations show that a complete transmission-loss effect can be achieved even with low-Q resonators.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    St-Onge, Denis A.

    The two-dimensional Terry–Horton equation is shown to exhibit the Dimits shift when suitably modified to capture both the nonlinear enhancement of zonal/drift-wave interactions and the existence of residual Rosenbluth–Hinton states. This phenomenon persists through numerous simplifications of the equation, including a quasilinear approximation as well as a four-mode truncation. It is shown that the use of an appropriate adiabatic electron response, for which the electrons are not affected by the flux-averaged potential, results in an E×B nonlinearity that can efficiently transfer energy non-locally to length scales of the order of the sound radius. The size of the shift for the nonlinear system is heuristically calculated and found to be in excellent agreement with numerical solutions. The existence of the Dimits shift for this system is then understood as an ability of the unstable primary modes to efficiently couple to stable modes at smaller scales, and the shift ends when these stable modes eventually destabilize as the density gradient is increased. This non-local mechanism of energy transfer is argued to be generically important even for more physically complete systems.

  8. Bionic Modeling of Knowledge-Based Guidance in Automated Underwater Vehicles.

    DTIC Science & Technology

    1987-06-24

    bugs and their foraging movements are heard by the sound of rustling leaves or rhythmic wing beats. ASYMMETRY OF EARS The faces of owls have captured...sound source without moving. The barn owl has binaural and monaural cues as well as cues that operate in relative motion when either the target or the owl moves. Table 1 lists the cues. Table 1. Sound Localization Parameters Used by the Barn Owl. BINAURAL PARAMETERS: 1. the

  9. Lummi Bay Marina, Whatcom County, Washington. Draft Detailed Project Report and Draft Environmental Impact Statement.

    DTIC Science & Technology

    1983-12-01

    observations of gray whales from the waters inside of Washington including the eastern Strait of Juan de Fuca, the San Juan Islands, Puget Sound, and Hood...waters in winter. In the North Pacific this species is presently estimated to number about 17,000 animals. One fin whale was pursued in Puget Sound i...owns submerged lands from tideland elevation -4.5 feet MLLW to deep water in Puget Sound. The Lummi Tribe (local sponsor) owns Reservation lands above

  10. Sound levels and their effects on children in a German primary school.

    PubMed

    Eysel-Gosepath, Katrin; Daut, Tobias; Pinger, Andreas; Lehmacher, Walter; Erren, Thomas

    2012-12-01

    Considerable sound levels are produced in primary schools by children's voices and resonance effects. As a consequence, hearing loss and mental impairment may occur. In a Cologne primary school, sound levels were measured in three different classrooms, each with 24 children, 8-10 years old, and one teacher. Sound dosimeters were positioned in the room and near the teacher's ear. Additional measurements were done in one classroom fully equipped with sound-absorbing materials. A questionnaire containing 12 questions about noise at school was distributed to 100 children, 8-10 years old. Measurements were repeated after children had been taught about noise damage and while "noise lights" were used. Mean sound levels over the 5-h daily measuring period were 78 dB(A) near the teacher's ear and 70 dB(A) in the room. The average of all measured maximal 1-s sound levels was 105 dB(A) for teachers and 100 dB(A) for rooms. In the soundproofed classroom, the Leq was 66 dB(A). The questionnaire revealed that the children could judge situations with high sound levels and develop ideas for noise reduction. However, no clear sound level reduction was identified after noise education and use of "noise lights" during lessons. Children and their teachers are equally exposed to high sound levels at school. Early sensitization to noise and the installation of sound-absorbing materials can be important means of preventing noise-associated hearing loss and mental impairment.
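
    For reference, the equivalent continuous level (Leq) reported above is computed from calibrated pressure samples as follows. This minimal sketch omits A-weighting for brevity, and the function name is illustrative.

        import numpy as np

        P_REF = 20e-6  # Pa, reference sound pressure

        def leq_db(pressure_pa):
            """Unweighted equivalent continuous level over the whole record."""
            return 10 * np.log10(np.mean((pressure_pa / P_REF) ** 2))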

  11. Broadband sound blocking in phononic crystals with rotationally symmetric inclusions.

    PubMed

    Lee, Joong Seok; Yoo, Sungmin; Ahn, Young Kwan; Kim, Yoon Young

    2015-09-01

    This paper investigates the feasibility of broadband sound blocking with rotationally symmetric extensible inclusions introduced in phononic crystals. By varying the size of four equally shaped inclusions gradually, the phononic crystal experiences remarkable changes in its band-stop properties, such as shifting/widening of multiple Bragg bandgaps and evolution to resonance gaps. Necessary extensions of the inclusions to block sound effectively can be determined for given incident frequencies by evaluating power transmission characteristics. By arraying finite dissimilar unit cells, the resulting phononic crystal exhibits broadband sound blocking from combinational effects of multiple Bragg scattering and local resonances even with small-numbered cells.

  12. Preliminary laboratory testing on the sound absorption of coupled cavity sonic crystal

    NASA Astrophysics Data System (ADS)

    Kristiani, R.; Yahya, I.; Harjana; Suparmi

    2016-11-01

    This paper focuses on the sound absorption performance of a coupled cavity sonic crystal constructed from a pair of cylindrical tubes of different diameters. A laboratory test procedure following ASTM E1050 was conducted to measure the sound absorption of the sonic crystal elements. The test procedure was applied to a single coupled scatterer and also to a pair of similar structures. The results showed that the paired structure increases the sound absorption over a wider absorption range. It also brings a practical advantage for setting the local Helmholtz resonance to a certain intended frequency.

  13. Source and listener directivity for interactive wave-based sound propagation.

    PubMed

    Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh

    2014-04-01

    We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
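
    The runtime step described, the total field as a weighted sum of precomputed per-SH-source fields, can be sketched in a few lines. Array shapes and names are assumptions for illustration, not the paper's implementation.

        import numpy as np

        def field_at_listener(sh_weights, precomputed_fields):
            """Total pressure at the listener as a weighted sum of SH fields.

            sh_weights: (n_sh,) SH coefficients of the current source
                directivity, recomputed each frame at runtime.
            precomputed_fields: (n_sh,) complex pressures at the listener,
                one per elementary SH source, computed offline.
            """
            return np.dot(sh_weights, precomputed_fields)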

  14. An open-structure sound insulator against low-frequency and wide-band acoustic waves

    NASA Astrophysics Data System (ADS)

    Chen, Zhe; Fan, Li; Zhang, Shu-yi; Zhang, Hui; Li, Xiao-juan; Ding, Jin

    2015-10-01

    To block sound, i.e., the vibration of air, most insulators are based on sealed structures that prevent the flow of air. In this research, an acoustic metamaterial adopting side structures, loops, and labyrinths, arranged along a main tube, is presented. By combining the accurately designed side structures, an extremely wide forbidden band with a low cut-off frequency of 80 Hz is produced, which demonstrates a powerful low-frequency and wide-band sound insulation ability. Moreover, by virtue of the bypass arrangement, the metamaterial is based on an open structure, and thus air flow is allowed while acoustic waves can be insulated.

  15. Towards Dynamic Contrast Specific Ultrasound Tomography

    NASA Astrophysics Data System (ADS)

    Demi, Libertario; van Sloun, Ruud J. G.; Wijkstra, Hessel; Mischi, Massimo

    2016-10-01

    We report on the first study demonstrating the ability of a recently-developed, contrast-enhanced, ultrasound imaging method, referred to as cumulative phase delay imaging (CPDI), to image and quantify ultrasound contrast agent (UCA) kinetics. Unlike standard ultrasound tomography, which exploits changes in speed of sound and attenuation, CPDI is based on a marker specific to UCAs, thus enabling dynamic contrast-specific ultrasound tomography (DCS-UST). For breast imaging, DCS-UST will lead to a more practical, faster, and less operator-dependent imaging procedure compared to standard echo-contrast, while preserving accurate imaging of contrast kinetics. Moreover, a linear relation between CPD values and ultrasound second-harmonic intensity was measured (coefficient of determination = 0.87). DCS-UST can find clinical applications as a diagnostic method for breast cancer localization, adding important features to multi-parametric ultrasound tomography of the breast.

  16. Towards Dynamic Contrast Specific Ultrasound Tomography.

    PubMed

    Demi, Libertario; Van Sloun, Ruud J G; Wijkstra, Hessel; Mischi, Massimo

    2016-10-05

    We report on the first study demonstrating the ability of a recently-developed, contrast-enhanced, ultrasound imaging method, referred to as cumulative phase delay imaging (CPDI), to image and quantify ultrasound contrast agent (UCA) kinetics. Unlike standard ultrasound tomography, which exploits changes in speed of sound and attenuation, CPDI is based on a marker specific to UCAs, thus enabling dynamic contrast-specific ultrasound tomography (DCS-UST). For breast imaging, DCS-UST will lead to a more practical, faster, and less operator-dependent imaging procedure compared to standard echo-contrast, while preserving accurate imaging of contrast kinetics. Moreover, a linear relation between CPD values and ultrasound second-harmonic intensity was measured (coefficient of determination = 0.87). DCS-UST can find clinical applications as a diagnostic method for breast cancer localization, adding important features to multi-parametric ultrasound tomography of the breast.

  17. Towards Dynamic Contrast Specific Ultrasound Tomography

    PubMed Central

    Demi, Libertario; Van Sloun, Ruud J. G.; Wijkstra, Hessel; Mischi, Massimo

    2016-01-01

    We report on the first study demonstrating the ability of a recently-developed, contrast-enhanced, ultrasound imaging method, referred to as cumulative phase delay imaging (CPDI), to image and quantify ultrasound contrast agent (UCA) kinetics. Unlike standard ultrasound tomography, which exploits changes in speed of sound and attenuation, CPDI is based on a marker specific to UCAs, thus enabling dynamic contrast-specific ultrasound tomography (DCS-UST). For breast imaging, DCS-UST will lead to a more practical, faster, and less operator-dependent imaging procedure compared to standard echo-contrast, while preserving accurate imaging of contrast kinetics. Moreover, a linear relation between CPD values and ultrasound second-harmonic intensity was measured (coefficient of determination = 0.87). DCS-UST can find clinical applications as a diagnostic method for breast cancer localization, adding important features to multi-parametric ultrasound tomography of the breast. PMID:27703251

  18. Distinct Correlation Structure Supporting a Rate-Code for Sound Localization in the Owl’s Auditory Forebrain

    PubMed Central

    2017-01-01

    Abstract While a topographic map of auditory space exists in the vertebrate midbrain, it is absent in the forebrain. Yet, both brain regions are implicated in sound localization. The heterogeneous spatial tuning of adjacent sites in the forebrain compared to the midbrain reflects different underlying circuitries, which is expected to affect the correlation structure, i.e., signal (similarity of tuning) and noise (trial-by-trial variability) correlations. Recent studies have drawn attention to the impact of response correlations on the information readout from a neural population. We thus analyzed the correlation structure in midbrain and forebrain regions of the barn owl’s auditory system. Tetrodes were used to record in the midbrain and two forebrain regions, Field L and the downstream auditory arcopallium (AAr), in anesthetized owls. Nearby neurons in the midbrain showed high signal and noise correlations (RNCs), consistent with shared inputs. As previously reported, Field L was arranged in random clusters of similarly tuned neurons. Interestingly, AAr neurons displayed homogeneous monotonic azimuth tuning, while response variability of nearby neurons was significantly less correlated than the midbrain. Using a decoding approach, we demonstrate that low RNC in AAr restricts the potentially detrimental effect it can have on information, assuming a rate code proposed for mammalian sound localization. This study harnesses the power of correlation structure analysis to investigate the coding of auditory space. Our findings demonstrate distinct correlation structures in the auditory midbrain and forebrain, which would be beneficial for a rate-code framework for sound localization in the nontopographic forebrain representation of auditory space. PMID:28674698
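
    A minimal sketch of the correlation structure analysis described: signal correlation compares mean tuning curves, while noise correlation compares trial-by-trial residuals after the stimulus-driven component is removed. Shapes and names are illustrative assumptions, not the study's pipeline.

        import numpy as np

        def signal_noise_correlations(counts_a, counts_b):
            """counts_*: (n_trials, n_positions) spike counts per neuron."""
            tuning_a, tuning_b = counts_a.mean(0), counts_b.mean(0)
            r_signal = np.corrcoef(tuning_a, tuning_b)[0, 1]
            resid_a = (counts_a - tuning_a).ravel()  # stimulus part removed
            resid_b = (counts_b - tuning_b).ravel()
            r_noise = np.corrcoef(resid_a, resid_b)[0, 1]
            return r_signal, r_noise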

  19. Bilateral capacity for speech sound processing in auditory comprehension: evidence from Wada procedures.

    PubMed

    Hickok, G; Okada, K; Barr, W; Pa, J; Rogalsky, C; Donnelly, K; Barde, L; Grant, A

    2008-12-01

    Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics), nonetheless comprehend isolated words better than one would expect if their speech perception system had been largely destroyed (70-80% accuracy). Further, when comprehension fails in such patients their errors are more often semantically-based, than-phonemically based. The question addressed by the present study is whether this ability of the right hemisphere to process speech sounds is a result of plastic reorganization following chronic left hemisphere damage, or whether the ability exists in undamaged language systems. We sought to test these possibilities by studying auditory comprehension in acute left versus right hemisphere deactivation during Wada procedures. A series of 20 patients undergoing clinically indicated Wada procedures were asked to listen to an auditorily presented stimulus word, and then point to its matching picture on a card that contained the target picture, a semantic foil, a phonemic foil, and an unrelated foil. This task was performed under three conditions, baseline, during left carotid injection of sodium amytal, and during right carotid injection of sodium amytal. Overall, left hemisphere injection led to a significantly higher error rate than right hemisphere injection. However, consistent with lesion work, the majority (75%) of these errors were semantic in nature. These findings suggest that auditory comprehension deficits are predominantly semantic in nature, even following acute left hemisphere disruption. This, in turn, supports the hypothesis that the right hemisphere is capable of speech sound processing in the intact brain.

  20. Diversity of fish sound types in the Pearl River Estuary, China

    PubMed Central

    Wang, Zhi-Tao; Nowacek, Douglas P.; Akamatsu, Tomonari; Liu, Jian-Chang; Duan, Guo-Qin; Cao, Han-Jiang

    2017-01-01

    Background: Repetitive species-specific sound enables the identification of the presence and behavior of soniferous species by acoustic means. Passive acoustic monitoring has been widely applied to monitor the spatial and temporal occurrence and behavior of calling species. Methods: Underwater biological sounds in the Pearl River Estuary, China, were collected using passive acoustic monitoring, with special attention paid to fish sounds. A total of 1,408 suspected fish calls comprising 18,942 pulses were qualitatively analyzed using a customized acoustic analysis routine. Results: We identified a diversity of 66 types of fish sounds. In addition to single pulses, the sounds tended to have a pulse train structure. The pulses were characterized by an approximately 8 ms duration, with a peak frequency from 500 to 2,600 Hz and a majority of the energy below 4,000 Hz. The median inter-pulse-peak interval (IPPI) of most call types was 9 or 10 ms. Most call types with median IPPIs of 9 ms and 10 ms were observed at times that were exclusive from each other, suggesting that they might be produced by different species. According to the literature, the two-section signal types 1 + 1 and 1 + N10 might belong to the big-snout croaker (Johnius macrorhynus), and 1 + N19 might be produced by Belanger's croaker (J. belangerii). Discussion: Categorization of the baseline ambient biological sound is an important first step in mapping the spatial and temporal patterns of soniferous fishes. The next step is the identification of the species producing each sound. The distribution pattern of soniferous fishes will be helpful for the protection and management of local fishery resources and in marine environmental impact assessment. Since the local vulnerable Indo-Pacific humpback dolphin (Sousa chinensis) mainly preys on soniferous fishes, the fine-scale distribution pattern of soniferous fishes can aid in the conservation of this species. Additionally, prey and predator relationships can be observed when a database of species-identified sounds is completed. PMID:29085746
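
    A minimal sketch of the kind of pulse analysis described, detecting pulse peaks in a call and measuring inter-pulse-peak intervals (IPPI). The threshold and minimum spacing are illustrative assumptions, and scipy's find_peaks stands in for the customized routine mentioned.

        import numpy as np
        from scipy.signal import find_peaks

        def ippi_ms(waveform, fs, min_spacing_ms=5.0):
            """Inter-pulse-peak intervals (ms) of a pulsed call."""
            env = np.abs(waveform)                       # crude envelope
            peaks, _ = find_peaks(env, height=0.5 * env.max(),
                                  distance=int(fs * min_spacing_ms / 1000))
            return np.diff(peaks) / fs * 1000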
