Sample records for sound localization

  1. 75 FR 34634 - Special Local Regulation; Swim Across the Sound, Long Island Sound, Port Jefferson, NY to Captain...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-18

    ...-AA08 Special Local Regulation; Swim Across the Sound, Long Island Sound, Port Jefferson, NY to Captain... Guard is establishing a permanent Special Local Regulation on the navigable waters of Long Island Sound... Sound event. This special local regulation is necessary to provide for the safety of life by protecting...

  2. Object localization using a biosonar beam: how opening your mouth improves localization.

    PubMed

    Arditi, G; Weiss, A J; Yovel, Y

    2015-08-01

    Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions.

  3. Object localization using a biosonar beam: how opening your mouth improves localization

    PubMed Central

    Arditi, G.; Weiss, A. J.; Yovel, Y.

    2015-01-01

    Determining the location of a sound source is crucial for survival. Both predators and prey usually produce sound while moving, revealing valuable information about their presence and location. Animals have thus evolved morphological and neural adaptations allowing precise sound localization. Mammals rely on the temporal and amplitude differences between the sound signals arriving at their two ears, as well as on the spectral cues available in the signal arriving at a single ear to localize a sound source. Most mammals rely on passive hearing and are thus limited by the acoustic characteristics of the emitted sound. Echolocating bats emit sound to perceive their environment. They can, therefore, affect the frequency spectrum of the echoes they must localize. The biosonar sound beam of a bat is directional, spreading different frequencies into different directions. Here, we analyse mathematically the spatial information that is provided by the beam and could be used to improve sound localization. We hypothesize how bats could improve sound localization by altering their echolocation signal design or by increasing their mouth gape (the size of the sound emitter) as they, indeed, do in nature. Finally, we also reveal a trade-off according to which increasing the echolocation signal's frequency improves the accuracy of sound localization but might result in undesired large localization errors under low signal-to-noise ratio conditions. PMID:26361552
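    The aperture-size trade-off described in this abstract can be illustrated numerically. The sketch below is not the authors' model: it replaces the bat's mouth with an idealized 1-D aperture whose far-field pattern is a sinc function, and shows that doubling the emitter size (or raising the frequency) narrows the half-power beamwidth. All parameter values are assumptions chosen only for illustration.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def beam(theta, aperture, freq):
    # Far-field magnitude pattern of an idealized 1-D aperture;
    # np.sinc(x) is sin(pi*x)/(pi*x), so the argument is a*sin(theta)/lambda.
    lam = C / freq
    return np.abs(np.sinc(aperture * np.sin(theta) / lam))

def half_power_beamwidth(aperture, freq):
    # Scan azimuth and find where the pattern first drops below 1/sqrt(2).
    theta = np.linspace(0.0, np.pi / 2, 20001)
    b = beam(theta, aperture, freq)
    return 2.0 * np.degrees(theta[np.argmax(b < 1 / np.sqrt(2))])

small_gape = half_power_beamwidth(0.006, 35000.0)  # 6 mm emitter at 35 kHz
large_gape = half_power_beamwidth(0.012, 35000.0)  # doubled emitter: narrower beam
```

    A narrower beam concentrates different frequencies into different directions more sharply, which is the spatial information the paper analyses.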

  4. Directional Hearing and Sound Source Localization in Fishes.

    PubMed

    Sisneros, Joseph A; Rogers, Peter H

    2016-01-01

    Evidence suggests that the capacity for sound source localization is common to mammals, birds, reptiles, and amphibians, but surprisingly it is not known whether fish locate sound sources in the same manner (e.g., combining binaural and monaural cues) or what computational strategies they use for successful source localization. Directional hearing and sound source localization in fishes continues to be important topics in neuroethology and in the hearing sciences, but the empirical and theoretical work on these topics have been contradictory and obscure for decades. This chapter reviews the previous behavioral work on directional hearing and sound source localization in fishes including the most recent experiments on sound source localization by the plainfin midshipman fish (Porichthys notatus), which has proven to be an exceptional species for fish studies of sound localization. In addition, the theoretical models of directional hearing and sound source localization for fishes are reviewed including a new model that uses a time-averaged intensity approach for source localization that has wide applicability with regard to source type, acoustic environment, and time waveform.

  5. 75 FR 16700 - Special Local Regulation, Swim Across the Sound, Long Island Sound, Port Jefferson, NY to Captain...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-02

    ...-AA08 Special Local Regulation, Swim Across the Sound, Long Island Sound, Port Jefferson, NY to Captain... permanent Special Local Regulation on the navigable waters of Long Island Sound between Port Jefferson, NY and Captain's Cove Seaport, Bridgeport, CT due to the annual Swim Across the Sound event. The proposed...

  6. Sound source localization identification accuracy: Envelope dependencies.

    PubMed

    Yost, William A

    2017-07-01

    Sound source localization accuracy as measured in an identification procedure in a front azimuth sound field was studied for click trains, modulated noises, and a modulated tonal carrier. Sound source localization accuracy was determined as a function of the number of clicks in a 64 Hz click train and click rate for a 500 ms duration click train. The clicks were either broadband or high-pass filtered. Sound source localization accuracy was also measured for a single broadband filtered click and compared to a similar broadband filtered, short-duration noise. Sound source localization accuracy was determined as a function of sinusoidal amplitude modulation and the "transposed" process of modulation of filtered noises and a 4 kHz tone. Different rates (16 to 512 Hz) of modulation (including unmodulated conditions) were used. Providing modulation for filtered click stimuli, filtered noises, and the 4 kHz tone had, at most, a very small effect on sound source localization accuracy. These data suggest that amplitude modulation, while providing information about interaural time differences in headphone studies, does not have much influence on sound source localization accuracy in a sound field.

  7. An evaluation of talker localization based on direction of arrival estimation and statistical sound source identification

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2002-11-01

Capturing distant-talking speech with high quality is very important for a hands-free speech interface, and a microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple-sound-source environments not only have difficulty localizing the multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two algorithms. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we particularly focus on the talker localization performance based on the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that the proposed algorithm can accurately identify multiple sound signals as "speech" or "non-speech". [Work supported by ATR and MEXT of Japan.]
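    The CSP (cross-power spectrum phase) coefficient underlying the first algorithm is the PHAT-weighted cross-correlation (GCC-PHAT). The following two-microphone sketch is not the authors' implementation; the sampling rate and signal lengths are arbitrary assumptions. Whitening the cross spectrum discards amplitude and keeps only phase, so the inverse transform peaks at the inter-microphone delay.

```python
import numpy as np

FS = 16000  # assumed sampling rate, Hz

def gcc_phat(sig, refsig):
    # Cross-power spectrum phase: whiten the cross spectrum so only
    # phase (i.e. delay) information remains, then find the peak lag.
    n = len(sig) + len(refsig)
    S = np.fft.rfft(sig, n=n)
    R = np.fft.rfft(refsig, n=n)
    cps = S * np.conj(R)
    cps /= np.abs(cps) + 1e-12
    cc = np.fft.irfft(cps, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / FS  # delay of sig relative to refsig, seconds

rng = np.random.default_rng(0)
src = rng.standard_normal(4096)
d = 8  # true delay in samples
mic1 = src
mic2 = np.concatenate((np.zeros(d), src[:-d]))  # src delayed by d samples
tau = gcc_phat(mic2, mic1)
# DOA then follows from geometry: azimuth ~ arcsin(C * tau / mic_spacing)
```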

  8. Material sound source localization through headphones

    NASA Astrophysics Data System (ADS)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. On the other hand, the Delta sound (click) is generated by using the Adobe Audition software, considering a frequency of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound listened through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.

  9. SoundCompass: A Distributed MEMS Microphone Array-Based Sensor for Sound Source Localization

    PubMed Central

    Tiete, Jelmer; Domínguez, Federico; da Silva, Bruno; Segers, Laurent; Steenhaut, Kris; Touhafi, Abdellah

    2014-01-01

    Sound source localization is a well-researched subject with applications ranging from localizing sniper fire in urban battlefields to cataloging wildlife in rural areas. One critical application is the localization of noise pollution sources in urban environments, due to an increasing body of evidence linking noise pollution to adverse effects on human health. Current noise mapping techniques often fail to accurately identify noise pollution sources, because they rely on the interpolation of a limited number of scattered sound sensors. Aiming to produce accurate noise pollution maps, we developed the SoundCompass, a low-cost sound sensor capable of measuring local noise levels and sound field directionality. Our first prototype is composed of a sensor array of 52 Microelectromechanical systems (MEMS) microphones, an inertial measuring unit and a low-power field-programmable gate array (FPGA). This article presents the SoundCompass’s hardware and firmware design together with a data fusion technique that exploits the sensing capabilities of the SoundCompass in a wireless sensor network to localize noise pollution sources. Live tests produced a sound source localization accuracy of a few centimeters in a 25-m2 anechoic chamber, while simulation results accurately located up to five broadband sound sources in a 10,000-m2 open field. PMID:24463431
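    A microphone-array sensor like the one described localizes sound by steering the array across candidate directions and keeping the direction of maximum aligned power. The sketch below is a toy steered-response-power search over azimuth with a 4-microphone circle standing in for the SoundCompass's 52-microphone MEMS ring; geometry, rates, and grid resolution are all assumed for illustration.

```python
import numpy as np

C, FS = 343.0, 16000.0

# Four microphones on a 5 cm circle: a toy stand-in for the
# SoundCompass's ring of 52 MEMS microphones.
ang = np.arange(4) * np.pi / 2
mics = 0.05 * np.stack([np.cos(ang), np.sin(ang)], axis=1)  # (4, 2), metres

def mic_delays(az_rad):
    # Arrival-time offsets (s) of a plane wave from azimuth az_rad,
    # relative to the array centre; mics nearer the source hear it earlier.
    u = np.array([np.cos(az_rad), np.sin(az_rad)])
    return -(mics @ u) / C

def simulate(az_deg, sig):
    # Apply each mic's (fractional) delay in the frequency domain.
    n = len(sig)
    f = np.fft.rfftfreq(n, 1 / FS)
    S = np.fft.rfft(sig)
    return np.stack([np.fft.irfft(S * np.exp(-2j * np.pi * f * d), n=n)
                     for d in mic_delays(np.radians(az_deg))])

def srp_azimuth(x):
    # Steered response power: undo the delays for each candidate
    # azimuth and keep the angle whose aligned sum has most energy.
    n = x.shape[1]
    f = np.fft.rfftfreq(n, 1 / FS)
    X = np.fft.rfft(x, axis=1)
    grid = np.arange(0, 360, 2)
    powers = []
    for az in grid:
        d = mic_delays(np.radians(az))
        aligned = np.sum(X * np.exp(2j * np.pi * f * d[:, None]), axis=0)
        powers.append(np.sum(np.abs(aligned) ** 2))
    return grid[int(np.argmax(powers))]

rng = np.random.default_rng(1)
est = srp_azimuth(simulate(120, rng.standard_normal(2048)))
```

    The real device additionally fuses such directional estimates across a wireless sensor network to triangulate source positions.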

  10. Effects of head movement and proprioceptive feedback in training of sound localization

    PubMed Central

    Honda, Akio; Shibata, Hiroshi; Hidaka, Souta; Gyoba, Jiro; Iwaya, Yukio; Suzuki, Yôiti

    2013-01-01

    We investigated the effects of listeners' head movements and proprioceptive feedback during sound localization practice on the subsequent accuracy of sound localization performance. The effects were examined under both restricted and unrestricted head movement conditions in the practice stage. In both cases, the participants were divided into two groups: a feedback group performed a sound localization drill with accurate proprioceptive feedback; a control group conducted it without the feedback. Results showed that (1) sound localization practice, while allowing for free head movement, led to improvement in sound localization performance and decreased actual angular errors along the horizontal plane, and that (2) proprioceptive feedback during practice decreased actual angular errors in the vertical plane. Our findings suggest that unrestricted head movement and proprioceptive feedback during sound localization training enhance perceptual motor learning by enabling listeners to use variable auditory cues and proprioceptive information. PMID:24349686

  11. Behavioral and modeling studies of sound localization in cats: effects of stimulus level and duration

    PubMed Central

    Ruhland, Janet L.; Yin, Tom C. T.; Tollin, Daniel J.

    2013-01-01

    Sound localization accuracy in elevation can be affected by sound spectrum alteration. Correspondingly, any stimulus manipulation that causes a change in the peripheral representation of the spectrum may degrade localization ability in elevation. The present study examined the influence of sound duration and level on localization performance in cats with the head unrestrained. Two cats were trained using operant conditioning to indicate the apparent location of a sound via gaze shift, which was measured with a search-coil technique. Overall, neither sound level nor duration had a notable effect on localization accuracy in azimuth, except at near-threshold levels. In contrast, localization accuracy in elevation improved as sound duration increased, and sound level also had a large effect on localization in elevation. For short-duration noise, the performance peaked at intermediate levels and deteriorated at low and high levels; for long-duration noise, this “negative level effect” at high levels was not observed. Simulations based on an auditory nerve model were used to explain the above observations and to test several hypotheses. Our results indicated that neither the flatness of sound spectrum (before the sound reaches the inner ear) nor the peripheral adaptation influences spectral coding at the periphery for localization in elevation, whereas neural computation that relies on “multiple looks” of the spectral analysis is critical in explaining the effect of sound duration, but not level. The release of negative level effect observed for long-duration sound could not be explained at the periphery and, therefore, is likely a result of processing at higher centers. PMID:23657278

  12. Relation of sound intensity and accuracy of localization.

    PubMed

    Farrimond, T

    1989-08-01

    Tests were carried out on 17 subjects to determine the accuracy of monaural sound localization when the head is not free to turn toward the sound source. Maximum accuracy of localization for a constant-volume sound source coincided with the position for maximum perceived intensity of the sound in the front quadrant. There was a tendency for sounds to be perceived more often as coming from a position directly toward the ear. That is, for sounds in the front quadrant, errors of localization tended to be predominantly clockwise (i.e., biased toward a line directly facing the ear). Errors for sounds occurring in the rear quadrant tended to be anticlockwise. The pinna's differential effect on sound intensity between front and rear quadrants would assist in identifying the direction of movement of objects, for example an insect, passing the ear.

  13. Localizing the sources of two independent noises: Role of time varying amplitude differences

    PubMed Central

    Yost, William A.; Brown, Christopher A.

    2013-01-01

    Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region. PMID:23556597

  14. Localizing the sources of two independent noises: role of time varying amplitude differences.

    PubMed

    Yost, William A; Brown, Christopher A

    2013-04-01

    Listeners localized the free-field sources of either one or two simultaneous and independently generated noise bursts. Listeners' localization performance was better when localizing one rather than two sound sources. With two sound sources, localization performance was better when the listener was provided prior information about the location of one of them. Listeners also localized two simultaneous noise bursts that had sinusoidal amplitude modulation (AM) applied, in which the modulation envelope was in-phase across the two source locations or was 180° out-of-phase. The AM was employed to investigate a hypothesis as to what process listeners might use to localize multiple sound sources. The results supported the hypothesis that localization of two sound sources might be based on temporal-spectral regions of the combined waveform in which the sound from one source was more intense than that from the other source. The interaural information extracted from such temporal-spectral regions might provide reliable estimates of the sound source location that produced the more intense sound in that temporal-spectral region.
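    The hypothesis in the two records above, that listeners extract interaural information from temporal regions where one source is more intense, can be illustrated with a toy simulation (my sketch, not the authors' analysis). Two noises carry 180°-out-of-phase AM envelopes and opposite interaural lags; in frames where source A's envelope dominates, the interaural lag of the mixture matches A's lag. Here a negative lag means the left ear leads.

```python
import numpy as np

FS = 16000
rng = np.random.default_rng(2)
n = FS  # one second of signal

def delay(x, d):
    return np.concatenate((np.zeros(d), x))[:len(x)]

def env(phase, rate=16.0):
    # Sinusoidal AM envelope in [0, 1]; phase=pi gives the 180-degree
    # out-of-phase counterpart (the two envelopes sum to 1).
    t = np.arange(n) / FS
    return 0.5 * (1 + np.sin(2 * np.pi * rate * t + phase))

a = rng.standard_normal(n) * env(0.0)    # source A
b = rng.standard_normal(n) * env(np.pi)  # source B, envelope out of phase

d = 5  # interaural lag in samples; A leads in the left ear, B in the right
left = a + delay(b, d)
right = delay(a, d) + b

frame = 256
lags_a = []
for i in range(0, n - frame, frame):
    if env(0.0)[i:i + frame].mean() < 0.75:  # keep only A-dominant frames
        continue
    cc = np.correlate(left[i:i + frame], right[i:i + frame], mode="full")
    lags_a.append(np.argmax(cc) - (frame - 1))

est = np.median(lags_a)  # recovers A's interaural lag (left leads: -d)
```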

  15. Sound localization and auditory response capabilities in round goby (Neogobius melanostomus)

    NASA Astrophysics Data System (ADS)

    Rollo, Audrey K.; Higgs, Dennis M.

    2005-04-01

A fundamental role of vertebrate auditory systems is determining the direction of a sound source. While fish show directional responses to sound, sound localization remains in dispute. The species used in the current study, Neogobius melanostomus (round goby), uses sound in reproductive contexts, with both male and female gobies showing directed movement towards a calling male. A two-choice laboratory experiment (active versus quiet speaker) was used to analyze the behavior of gobies in response to sound stimuli. When conspecific male spawning sounds were played, gobies moved in a direct path to the active speaker, suggesting true localization to sound. Of the animals that responded to conspecific sounds, 85% of the females and 66% of the males moved directly to the sound source. Auditory playback of natural and synthetic sounds showed differential behavioral specificity. Of gobies that responded, 89% were attracted to the speaker playing Padogobius martensii sounds, 87% to a 100 Hz tone, 62% to white noise, and 56% to Gobius niger sounds. Swimming speed, as well as mean path angle to the speaker, will also be presented. Results suggest strong localization of the round goby to a sound source, with some differential sound specificity.

  16. Modelling of human low frequency sound localization acuity demonstrates dominance of spatial variation of interaural time difference and suggests uniform just-noticeable differences in interaural time difference.

    PubMed

    Smith, Rosanna C G; Price, Stephen R

    2014-01-01

    Sound source localization is critical to animal survival and for identification of auditory objects. We investigated the acuity with which humans localize low frequency, pure tone sounds using timing differences between the ears. These small differences in time, known as interaural time differences or ITDs, are identified in a manner that allows localization acuity of around 1° at the midline. Acuity, a relative measure of localization ability, displays a non-linear variation as sound sources are positioned more laterally. All species studied localize sounds best at the midline and progressively worse as the sound is located out towards the side. To understand why sound localization displays this variation with azimuthal angle, we took a first-principles, systemic, analytical approach to model localization acuity. We calculated how ITDs vary with sound frequency, head size and sound source location for humans. This allowed us to model ITD variation for previously published experimental acuity data and determine the distribution of just-noticeable differences in ITD. Our results suggest that the best-fit model is one whereby just-noticeable differences in ITDs are identified with uniform or close to uniform sensitivity across the physiological range. We discuss how our results have several implications for neural ITD processing in different species as well as development of the auditory system.
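    The paper's central idea, uniform ITD just-noticeable differences mapped through head geometry, can be sketched with the frequency-independent Woodworth spherical-head formula (a simplification I am assuming here; the authors' model also treats frequency dependence). Because dITD/dθ shrinks laterally, a fixed ITD threshold yields the observed worsening of angular acuity away from the midline.

```python
import numpy as np

R = 0.0875  # assumed head radius, m
C = 343.0   # speed of sound, m/s

def itd(theta):
    # Woodworth's frequency-independent spherical-head approximation;
    # theta is azimuth in radians from the midline.
    return (R / C) * (theta + np.sin(theta))

def acuity_deg(theta, jnd=10e-6):
    # If a single ITD just-noticeable difference (an assumed 10 us)
    # applies at every azimuth, the smallest resolvable angular step
    # is jnd / (dITD/dtheta).
    slope = (R / C) * (1 + np.cos(theta))  # derivative of itd()
    return np.degrees(jnd / slope)

midline = acuity_deg(0.0)              # about a degree at the midline
lateral = acuity_deg(np.radians(80))   # worse: the ITD curve flattens
```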

  17. The effect of brain lesions on sound localization in complex acoustic environments.

    PubMed

    Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg

    2014-05-01

    Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field were directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.

  18. Sound source localization method in an environment with flow based on Amiet-IMACS

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin

    2017-05-01

A sound source localization method is proposed to localize and analyze sound sources in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method and Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources in airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds is conducted. The experiment demonstrates that Amiet-IMACS localizes the sound source position more accurately than IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, which further proves its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and its frequency, along with its generation mechanism, is determined and interpreted.

  19. Spatial hearing in Cope’s gray treefrog: I. Open and closed loop experiments on sound localization in the presence and absence of noise

    PubMed Central

    Caldwell, Michael S.; Bee, Mark A.

    2014-01-01

    The ability to reliably locate sound sources is critical to anurans, which navigate acoustically complex breeding choruses when choosing mates. Yet, the factors influencing sound localization performance in frogs remain largely unexplored. We applied two complementary methodologies, open and closed loop playback trials, to identify influences on localization abilities in Cope’s gray treefrog, Hyla chrysoscelis. We examined localization acuity and phonotaxis behavior of females in response to advertisement calls presented from 12 azimuthal angles, at two signal levels, in the presence and absence of noise, and at two noise levels. Orientation responses were consistent with precise localization of sound sources, rather than binary discrimination between sources on either side of the body (lateralization). Frogs were unable to discriminate between sounds arriving from forward and rearward directions, and accurate localization was limited to forward sound presentation angles. Within this region, sound presentation angle had little effect on localization acuity. The presence of noise and low signal-to-noise ratios also did not strongly impair localization ability in open loop trials, but females exhibited reduced phonotaxis performance consistent with impaired localization during closed loop trials. We discuss these results in light of previous work on spatial hearing in anurans. PMID:24504182

  20. Horizontal sound localization in cochlear implant users with a contralateral hearing aid.

    PubMed

    Veugen, Lidwien C E; Hendrikse, Maartje M E; van Wanrooij, Marc M; Agterberg, Martijn J H; Chalupper, Josef; Mens, Lucas H M; Snik, Ad F M; John van Opstal, A

    2016-06-01

Interaural differences in sound arrival time (ITD) and in level (ILD) enable us to localize sounds in the horizontal plane, and can support source segregation and speech understanding in noisy environments. It is uncertain whether these cues are also available to hearing-impaired listeners who are bimodally fitted, i.e. with a cochlear implant (CI) and a contralateral hearing aid (HA). Here, we assessed sound localization behavior of fourteen bimodal listeners, all using the same Phonak HA and an Advanced Bionics CI processor, matched with respect to loudness growth. We aimed to determine the availability and contribution of binaural (ILDs, temporal fine structure and envelope ITDs) and monaural (loudness, spectral) cues to horizontal sound localization in bimodal listeners, by systematically varying the frequency band, level and envelope of the stimuli. The sound bandwidth had a strong effect on the localization bias of bimodal listeners, although localization performance was typically poor for all conditions. Responses could be systematically changed by adjusting the frequency range of the stimulus, or by simply switching the HA and CI on and off. Localization responses were largely biased to one side, typically the CI side for broadband and high-pass filtered sounds, and occasionally to the HA side for low-pass filtered sounds. HA-aided thresholds better than 45 dB HL in the frequency range of the stimulus appeared to be a prerequisite, but not a guarantee, for the ability to indicate sound source direction. We argue that bimodal sound localization is likely based on ILD cues, even at frequencies below 1500 Hz for which the natural ILDs are small. These cues are typically perturbed in bimodal listeners, leading to a biased localization percept of sounds. The high accuracy of some listeners could result from a combination of sufficient spectral overlap and loudness balance in bimodal hearing.

  1. Auditory and visual localization accuracy in young children and adults.

    PubMed

    Martin, Karen; Johnstone, Patti; Hedrick, Mark

    2015-06-01

    This study aimed to measure and compare sound and light source localization ability in young children and adults who have normal hearing and normal/corrected vision in order to determine the extent to which age, type of stimuli, and stimulus order affects sound localization accuracy. Two experiments were conducted. The first involved a group of adults only. The second involved a group of 30 children aged 3 to 5 years. Testing occurred in a sound-treated booth containing a semi-circular array of 15 loudspeakers set at 10° intervals from -70° to 70° azimuth. Each loudspeaker had a tiny light bulb and a small picture fastened underneath. Seven of the loudspeakers were used to randomly test sound and light source identification. The sound stimulus was the word "baseball". The light stimulus was a flashing of a light bulb triggered by the digital signal of the word "baseball". Each participant was asked to face 0° azimuth, and identify the location of the test stimulus upon presentation. Adults used a computer mouse to click on an icon; children responded by verbally naming or walking toward the picture underneath the corresponding loudspeaker or light. A mixed experimental design using repeated measures was used to determine the effect of age and stimulus type on localization accuracy in children and adults. A mixed experimental design was used to compare the effect of stimulus order (light first/last) and varying or fixed intensity sound on localization accuracy in children and adults. Localization accuracy was significantly better for light stimuli than sound stimuli for children and adults. Children, compared to adults, showed significantly greater localization errors for audition. Three-year-old children had significantly greater sound localization errors compared to 4- and 5-year olds. Adults performed better on the sound localization task when the light localization task occurred first. 
Young children can understand and attend to localization tasks, but show poorer localization accuracy than adults in sound localization. This may be a reflection of differences in sensory modality development and/or central processes in young children, compared to adults.

  2. Auditory Localization: An Annotated Bibliography

    DTIC Science & Technology

    1983-11-01

transverse plane, natural sound localization in both horizontal and vertical planes can be performed with nearly the same accuracy as real sound sources...important for unscrambling the competing sounds which so often occur in natural environments. A workable sound sensor has been constructed and empirical

  3. Hearing in three dimensions: Sound localization

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Kistler, Doris J.

    1990-01-01

    The ability to localize a source of sound in space is a fundamental component of the three-dimensional character of sound. For over a century, scientists have been trying to understand the physical and psychological processes and physiological mechanisms that subserve sound localization. This research has shown that important information about sound source position is provided by interaural differences in time of arrival, interaural differences in intensity, and direction-dependent filtering provided by the pinnae. Progress has been slow, primarily because experiments on localization are technically demanding. Control of stimulus parameters and quantification of the subjective experience are quite difficult problems. Recent advances, such as the ability to simulate a three-dimensional sound field over headphones, seem to offer potential for rapid progress. Research using the new techniques has already produced new information. It now seems that interaural time differences are a much more salient and dominant localization cue than previously believed.
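    The two interaural cues named above can be estimated directly from a pair of ear signals. The sketch below is illustrative only (the signal, delay, and attenuation values are invented): it recovers the interaural time difference from the lag of the cross-correlation peak and the interaural level difference from the RMS ratio.

```python
import numpy as np

def itd_ild(left, right, fs):
    """Estimate interaural time and level differences from a binaural pair."""
    # ITD from the lag of the cross-correlation peak: a positive lag
    # means the right-ear signal is a delayed copy of the left-ear one.
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)
    itd = lag / fs                                   # seconds
    # ILD as the ratio of RMS levels, expressed in dB.
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    ild = 20.0 * np.log10(rms(left) / rms(right))
    return itd, ild

# Synthetic source to the listener's left: the right ear receives a
# delayed (20-sample) and attenuated (0.7x) copy of the left-ear signal.
fs = 44100
t = np.arange(0, 0.05, 1.0 / fs)
sig = np.sin(2 * np.pi * 500 * t)
delay = 20
left = np.concatenate([sig, np.zeros(delay)])
right = 0.7 * np.concatenate([np.zeros(delay), sig])
itd, ild = itd_ild(left, right, fs)   # ~454 us ITD, ~3.1 dB ILD
```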

  4. 3-D localization of virtual sound sources: effects of visual environment, pointing method, and training.

    PubMed

    Majdak, Piotr; Goupell, Matthew J; Laback, Bernhard

    2010-02-01

    The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.

  5. Potential sound production by a deep-sea fish

    NASA Astrophysics Data System (ADS)

    Mann, David A.; Jarvis, Susan M.

    2004-05-01

    Swimbladder sonic muscles of deep-sea fishes were described over 35 years ago, yet until now no recordings of probable deep-sea fish sounds have been published. A sound likely produced by a deep-sea fish has been isolated and localized from an analysis of acoustic recordings made at the AUTEC test range in the Tongue of the Ocean, Bahamas, from four deep-sea hydrophones. This sound is typical of a fish sound in that it is pulsed and relatively low frequency (800-1000 Hz). Using time-of-arrival differences, the sound was localized to 548-696 m depth, where the bottom depth was 1620 m. The ability to localize this sound in real time on the hydrophone range provides a great advantage for identifying the sound producer using a remotely operated vehicle.
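    Localization from time-of-arrival differences, as used on the hydrophone range, amounts to solving a small nonlinear least-squares problem. The sketch below is not the authors' implementation: it reduces the geometry to two dimensions, uses synthetic sensor positions, and solves by Gauss-Newton iteration.

```python
import numpy as np

C = 1500.0  # nominal speed of sound in seawater, m/s (assumed)

def residuals(x, sensors, tdoas):
    """Measured minus predicted range differences (sensor 0 is the reference)."""
    d = np.linalg.norm(sensors - x, axis=1)   # source-to-sensor ranges
    return (d[1:] - d[0]) - C * tdoas

def locate(sensors, tdoas, x0, iters=25):
    """Gauss-Newton solve for the source position."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        r = residuals(x, sensors, tdoas)
        # Numeric Jacobian of the residuals w.r.t. the source position.
        J = np.empty((len(r), len(x)))
        eps = 1e-3
        for k in range(len(x)):
            dx = np.zeros(len(x)); dx[k] = eps
            J[:, k] = (residuals(x + dx, sensors, tdoas) - r) / eps
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]
    return x

# Synthetic check: four hydrophones at the corners of a 2 km square.
sensors = np.array([[0.0, 0.0], [2000.0, 0.0], [0.0, 2000.0], [2000.0, 2000.0]])
true_src = np.array([800.0, 900.0])
toa = np.linalg.norm(sensors - true_src, axis=1) / C   # times of arrival
tdoas = toa[1:] - toa[0]                               # differences vs. sensor 0
est = locate(sensors, tdoas, x0=[1000.0, 1000.0])      # converges to true_src
```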

  6. Ambient Sound-Based Collaborative Localization of Indeterministic Devices

    PubMed Central

    Kamminga, Jacob; Le, Duc; Havinga, Paul

    2016-01-01

    Localization is essential in wireless sensor networks. To our knowledge, no prior work has utilized low-cost devices for collaborative localization based on only ambient sound, without the support of local infrastructure. The reason may be the fact that most low-cost devices are indeterministic and suffer from uncertain input latencies. This uncertainty makes accurate localization challenging. Therefore, we present a collaborative localization algorithm (Cooperative Localization on Android with ambient Sound Sources (CLASS)) that simultaneously localizes the position of indeterministic devices and ambient sound sources without local infrastructure. The CLASS algorithm deals with the uncertainty by splitting the devices into subsets so that outliers can be removed from the time difference of arrival values and localization results. Since Android is indeterministic, we select Android devices to evaluate our approach. The algorithm is evaluated with an outdoor experiment and achieves a mean Root Mean Square Error (RMSE) of 2.18 m with a standard deviation of 0.22 m. Estimated directions towards the sound sources have a mean RMSE of 17.5° and a standard deviation of 2.3°. These results show that it is feasible to simultaneously achieve a relative positioning of both devices and sound sources with sufficient accuracy, even when using non-deterministic devices and platforms, such as Android. PMID:27649176
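    The outlier-removal idea at the heart of CLASS can be illustrated with a simple robust filter. This is not the CLASS algorithm itself (which splits devices into subsets); it is a minimal sketch in which TDOA values corrupted by large input latencies are rejected with a median-absolute-deviation test, using invented measurement values.

```python
import numpy as np

def reject_outliers(tdoas, k=3.0):
    """Keep TDOA measurements within k median-absolute-deviations of the median.

    Indeterministic devices occasionally report TDOAs corrupted by large
    input latencies; these show up as gross outliers against the
    consistent majority and can be discarded before localization.
    """
    tdoas = np.asarray(tdoas, float)
    med = np.median(tdoas)
    mad = np.median(np.abs(tdoas - med))
    keep = np.abs(tdoas - med) <= k * max(mad, 1e-12)
    return tdoas[keep]

# Five consistent measurements plus two latency-corrupted ones (seconds).
measured = [0.0021, 0.0023, 0.0022, 0.0020, 0.0022, 0.0150, -0.0080]
clean = reject_outliers(measured)   # the two gross outliers are dropped
```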

  7. Judging sound rotation when listeners and sounds rotate: Sound source localization is a multisystem process.

    PubMed

    Yost, William A; Zhong, Xuan; Najam, Anbar

    2015-11-01

    In four experiments listeners were rotated or were stationary. Sounds came from a stationary loudspeaker or rotated from loudspeaker to loudspeaker around an azimuth array. When either sounds or listeners rotate, the auditory cues used for sound source localization change, but in the everyday world listeners perceive sound rotation only when sounds rotate, not when listeners rotate. In the everyday world, sound source locations are referenced to positions in the environment (a world-centric reference system). The auditory cues for sound source location indicate locations relative to the head (a head-centric reference system), not locations relative to the world. This paper deals with a general hypothesis that the world-centric location of sound sources requires the auditory system to have information about auditory cues used for sound source location and cues about head position. The use of visual and vestibular information in determining rotating head position in sound rotation perception was investigated. The experiments show that sound rotation perception when sources and listeners rotate was based on acoustic, visual, and, perhaps, vestibular information. The findings are consistent with the general hypothesis and suggest that sound source localization is not based just on acoustics. It is a multisystem process.

  8. Difference in precedence effect between children and adults signifies development of sound localization abilities in complex listening tasks

    PubMed Central

    Litovsky, Ruth Y.; Godar, Shelly P.

    2010-01-01

    The precedence effect refers to the fact that humans are able to localize sound in reverberant environments, because the auditory system assigns greater weight to the direct sound (lead) than the later-arriving sound (lag). In this study, absolute sound localization was studied for single source stimuli and for dual source lead-lag stimuli in 4–5 year old children and adults. Lead-lag delays ranged from 5–100 ms. Testing was conducted in free field, with pink noise bursts emitted from loudspeakers positioned on a horizontal arc in the frontal field. Listeners indicated how many sounds were heard and the perceived location of the first- and second-heard sounds. Results suggest that at short delays (up to 10 ms), the lead dominates sound localization strongly at both ages, and localization errors are similar to those with single-source stimuli. At longer delays errors can be large, stemming from over-integration of the lead and lag, interchanging of perceived locations of the first-heard and second-heard sounds due to temporal order confusion, and dominance of the lead over the lag. The errors are greater for children than adults. Results are discussed in the context of maturation of auditory and non-auditory factors. PMID:20968369

  9. Sound Source Localization and Speech Understanding in Complex Listening Environments by Single-sided Deaf Listeners After Cochlear Implantation.

    PubMed

    Zeitler, Daniel M; Dorman, Michael F; Natale, Sarah J; Loiselle, Louise; Yost, William A; Gifford, Rene H

    2015-09-01

    To assess improvements in sound source localization and speech understanding in complex listening environments after unilateral cochlear implantation for single-sided deafness (SSD). Nonrandomized, open, prospective case series. Tertiary referral center. Nine subjects with a unilateral cochlear implant (CI) for SSD (SSD-CI) were tested. Reference groups for the task of sound source localization included young (n = 45) and older (n = 12) normal-hearing (NH) subjects and 27 bilateral CI (BCI) subjects. Unilateral cochlear implantation. Sound source localization was tested with 13 loudspeakers in a 180-degree arc in front of the subject. Speech understanding was tested with the subject seated in an 8-loudspeaker sound system arrayed in a 360-degree pattern. Directionally appropriate noise, originally recorded in a restaurant, was played from each loudspeaker. Speech understanding in noise was tested using the AzBio sentence test and sound source localization was quantified using root mean square error. All CI subjects showed poorer-than-normal sound source localization. SSD-CI subjects showed a bimodal distribution of scores: six subjects had scores near the mean of those obtained by BCI subjects, whereas three had scores just outside the 95th percentile of NH listeners. Speech understanding improved significantly in the restaurant environment when the signal was presented to the side of the CI. Cochlear implantation for SSD can offer improved speech understanding in complex listening environments and improved sound source localization in both children and adults. On tasks of sound source localization, SSD-CI patients typically perform as well as BCI patients and, in some cases, achieve scores at the upper boundary of normal performance.
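    The root mean square error used to quantify localization here is simple to compute; a minimal sketch with hypothetical azimuths:

```python
import numpy as np

def rms_error_deg(targets, responses):
    """Root-mean-square localization error in degrees.

    Each trial contributes the squared difference between the target
    azimuth and the listener's response; the square root of the mean
    gives a single summary score for the session.
    """
    t = np.asarray(targets, float)
    r = np.asarray(responses, float)
    return np.sqrt(np.mean((r - t) ** 2))

# Hypothetical listener: four correct responses, one 30-degree confusion.
targets = [-60, -30, 0, 30, 60]
responses = [-60, -30, 0, 30, 30]
err = rms_error_deg(targets, responses)   # sqrt(900 / 5) ~ 13.4 degrees
```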

  10. Sound source localization and segregation with internally coupled ears: the treefrog model

    PubMed Central

    Christensen-Dalsgaard, Jakob

    2016-01-01

    Acoustic signaling plays key roles in mediating many of the reproductive and social behaviors of anurans (frogs and toads). Moreover, acoustic signaling often occurs at night, in structurally complex habitats, such as densely vegetated ponds, and in dense breeding choruses characterized by high levels of background noise and acoustic clutter. Fundamental to anuran behavior is the ability of the auditory system to determine accurately the location from where sounds originate in space (sound source localization) and to assign specific sounds in the complex acoustic milieu of a chorus to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating directions for future research on these animals that will require the collaborative efforts of biologists, physicists, and roboticists. PMID:27730384

  11. The Effect of Microphone Placement on Interaural Level Differences and Sound Localization Across the Horizontal Plane in Bilateral Cochlear Implant Users.

    PubMed

    Jones, Heath G; Kan, Alan; Litovsky, Ruth Y

    2016-01-01

    This study examined the effect of microphone placement on the interaural level differences (ILDs) available to bilateral cochlear implant (BiCI) users, and the subsequent effects on horizontal-plane sound localization. Virtual acoustic stimuli for sound localization testing were created individually for eight BiCI users by making acoustic transfer function measurements for microphones placed in the ear (ITE), behind the ear (BTE), and on the shoulders (SHD). The ILDs across source locations were calculated for each placement to analyze their effect on sound localization performance. Sound localization was tested using a repeated-measures, within-participant design for the three microphone placements. The ITE microphone placement provided significantly larger ILDs compared to BTE and SHD placements, which correlated with overall localization errors. However, differences in localization errors across the microphone conditions were small. The BTE microphones worn by many BiCI users in everyday life do not capture the full range of acoustic ILDs available, and also reduce the change in cue magnitudes for sound sources across the horizontal plane. Acute testing with an ITE placement reduced sound localization errors along the horizontal plane compared to the other placements in some patients. Larger improvements may be observed if patients had more experience with the new ILD cues provided by an ITE placement.

  12. Sound-localization experiments with barn owls in virtual space: influence of broadband interaural level difference on head-turning behavior.

    PubMed

    Poganiatz, I; Wagner, H

    2001-04-01

    Interaural level differences play an important role for elevational sound localization in barn owls. The changes of this cue with sound location are complex and frequency dependent. We exploited the opportunities offered by the virtual space technique to investigate the behavioral relevance of the overall interaural level difference by fixing this parameter in virtual stimuli to a constant value or introducing additional broadband level differences to normal virtual stimuli. Frequency-specific monaural cues in the stimuli were not manipulated. We observed an influence of the broadband interaural level differences on elevational, but not on azimuthal sound localization. Since results obtained with our manipulations explained only part of the variance in elevational turning angle, we conclude that frequency-specific cues are also important. The behavioral consequences of changes of the overall interaural level difference in a virtual sound depended on the combined interaural time difference contained in the stimulus, indicating an indirect influence of temporal cues on elevational sound localization as well. Thus, elevational sound localization is influenced by a combination of many spatial cues including frequency-dependent and temporal features.

  13. Spatial localization deficits and auditory cortical dysfunction in schizophrenia

    PubMed Central

    Perrin, Megan A.; Butler, Pamela D.; DiCostanzo, Joanna; Forchelli, Gina; Silipo, Gail; Javitt, Daniel C.

    2014-01-01

    Background Schizophrenia is associated with deficits in the ability to discriminate auditory features such as pitch and duration that localize to primary cortical regions. Lesions of primary vs. secondary auditory cortex also produce differentiable effects on ability to localize and discriminate free-field sound, with primary cortical lesions affecting variability as well as accuracy of response. Variability of sound localization has not previously been studied in schizophrenia. Methods The study compared performance between patients with schizophrenia (n=21) and healthy controls (n=20) on sound localization and spatial discrimination tasks using low frequency tones generated from seven speakers concavely arranged with 30 degrees separation. Results For the sound localization task, patients showed reduced accuracy (p=0.004) and greater overall response variability (p=0.032), particularly in the right hemifield. Performance was also impaired on the spatial discrimination task (p=0.018). On both tasks, poorer accuracy in the right hemifield was associated with greater cognitive symptom severity. Better accuracy in the left hemifield was associated with greater hallucination severity on the sound localization task (p=0.026), but no significant association was found for the spatial discrimination task. Conclusion Patients show impairments in both sound localization and spatial discrimination of sounds presented free-field, with a pattern comparable to that of individuals with right superior temporal lobe lesions that include primary auditory cortex (Heschl’s gyrus). Right primary auditory cortex dysfunction may protect against hallucinations by influencing laterality of functioning. PMID:20619608

  14. Accurate Sound Localization in Reverberant Environments is Mediated by Robust Encoding of Spatial Cues in the Auditory Midbrain

    PubMed Central

    Devore, Sasha; Ihlefeld, Antje; Hancock, Kenneth; Shinn-Cunningham, Barbara; Delgutte, Bertrand

    2009-01-01

    In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener’s ears, distorting the spatial cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sensitivity of single neurons in the auditory midbrain of anesthetized cats follows a similar time course, although onset dominance in temporal response patterns results in more robust directional sensitivity than expected, suggesting a simple mechanism for improving directional sensitivity in reverberation. In parallel behavioral experiments, we demonstrate that human lateralization judgments are consistent with predictions from a population rate model decoding the observed midbrain responses, suggesting a subcortical origin for robust sound localization in reverberant environments. PMID:19376072

  15. Bone Conduction: Anatomy, Physiology, and Communication

    DTIC Science & Technology

    2007-05-01

    7.2 Human Localization Capabilities ... main functions of the pinna are to direct incoming sound toward the EAC and to aid in sound localization. Some animals (e.g., dogs) can move their...pinnae to aid in sound localization, but humans do not typically have this ability. People who may possess the ability to move their pinnae do

  16. Sound localization skills in children who use bilateral cochlear implants and in children with normal acoustic hearing

    PubMed Central

    Grieco-Calub, Tina M.; Litovsky, Ruth Y.

    2010-01-01

    Objectives To measure sound source localization in children who have sequential bilateral cochlear implants (BICIs); to determine if localization accuracy correlates with performance on a right-left discrimination task (i.e., spatial acuity); to determine if there is a measurable bilateral benefit on a sound source identification task (i.e., localization accuracy) by comparing performance under bilateral and unilateral listening conditions; to determine if sound source localization continues to improve with longer durations of bilateral experience. Design Two groups of children participated in this study: a group of 21 children who received BICIs in sequential procedures (5–14 years old) and a group of 7 typically developing children with normal acoustic hearing (5 years old). Testing was conducted in a large sound-treated booth with loudspeakers positioned on a horizontal arc with a radius of 1.2 m. Children participated in two experiments that assessed spatial hearing skills. Spatial hearing acuity was assessed with a discrimination task in which listeners determined if a sound source was presented on the right or left side of center; the smallest angle at which performance on this task was reliably above chance is the minimum audible angle. Sound localization accuracy was assessed with a sound source identification task in which children identified the perceived position of the sound source from a multi-loudspeaker array (7 or 15); errors are quantified using the root-mean-square (RMS) error. Results Sound localization accuracy was highly variable among the children with BICIs, with RMS errors ranging from 19°–56°. Performance of the normal-hearing (NH) group, with RMS errors ranging from 9°–29°, was significantly better. Within the BICI group, in 11/21 children RMS errors were smaller in the bilateral vs. unilateral listening condition, indicating bilateral benefit.
There was a significant correlation between spatial acuity and sound localization accuracy (R² = 0.68, p < 0.01), suggesting that children who achieve small RMS errors tend to have the smallest minimum audible angles (MAAs). Although there was large intersubject variability, testing of 11 children in the BICI group at two sequential visits revealed a subset of children who show improvement in spatial hearing skills over time. Conclusions A subset of children who use sequential BICIs can acquire sound localization abilities, even after long intervals between activation of hearing in the first- and second-implanted ears. This suggests that children with activation of the second implant later in life may be capable of developing spatial hearing abilities. The large variability in performance among the children with BICIs suggests that maturation of sound localization abilities in children with BICIs may be dependent on various individual subject factors such as age of implantation and chronological age. PMID:20592615

  17. Embedded System Implementation of Sound Localization in Proximal Region

    NASA Astrophysics Data System (ADS)

    Iwanaga, Nobuyuki; Matsumura, Tomoya; Yoshida, Akihiro; Kobayashi, Wataru; Onoye, Takao

    A sound localization method in the proximal region is proposed, based on a low-cost 3D sound localization algorithm that uses head-related transfer functions (HRTFs). The auditory parallax model is applied to the current algorithm so that more accurate HRTFs can be used for sound localization in the proximal region. In addition, head-shadowing effects based on a rigid-sphere model are reproduced in the proximal region by means of a second-order IIR filter. A subjective listening test demonstrates the effectiveness of the proposed method. An embedded-system implementation of the proposed method is also described; it improves sound effects in the proximal region with only a 5.1% increase in memory capacity and an 8.3% increase in computational cost.
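    Rigid-sphere head shadowing is often approximated with a low-order IIR filter. The sketch below uses the classic first-order (single-pole, single-zero) spherical-head approximation rather than the paper's second-order design; the head radius and the alpha(theta) mapping are conventional assumed values, not taken from the paper.

```python
import numpy as np

C = 343.0    # speed of sound in air, m/s
A = 0.0875   # assumed head radius, m

def head_shadow_coeffs(theta_deg, fs):
    """First-order digital head-shadow filter for a spherical head.

    High frequencies are boosted toward the ipsilateral ear and cut
    toward the contralateral ear; DC gain is unity in all directions.
    theta_deg is the angle of incidence measured from the ear axis.
    """
    theta = np.radians(theta_deg)
    alpha = 1.0 + np.cos(theta)          # 2 ipsilateral .. 0 contralateral
    w0 = C / A                           # model corner frequency, rad/s
    w, K = 2.0 * w0, 2.0 * fs            # bilinear-transform constants
    b = np.array([w + alpha * K, w - alpha * K]) / (w + K)
    a = np.array([1.0, (w - K) / (w + K)])
    return b, a

def filt(b, a, x):
    """Direct-form first-order IIR: y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]."""
    y = np.zeros_like(x)
    xp = yp = 0.0
    for n, xn in enumerate(x):
        y[n] = b[0] * xn + b[1] * xp - a[1] * yp
        xp, yp = xn, y[n]
    return y

# Fully contralateral ear (theta = 180 deg): the filter acts as a low-pass.
fs = 44100
b, a = head_shadow_coeffs(180.0, fs)
noise = np.random.default_rng(0).standard_normal(fs)
shadowed = filt(b, a, noise)             # high frequencies attenuated
```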

  18. Adjustment of interaural time difference in head related transfer functions based on listeners' anthropometry and its effect on sound localization

    NASA Astrophysics Data System (ADS)

    Suzuki, Yôiti; Watanabe, Kanji; Iwaya, Yukio; Gyoba, Jiro; Takane, Shouichi

    2005-04-01

    Because head-related transfer functions (HRTFs), which govern subjective sound localization, show strong individuality, sound localization systems based on synthesis of HRTFs require HRTFs suited to each listener. However, it is impractical to obtain HRTFs for all listeners based on measurements. Adjusting non-individualized HRTFs to a specific listener based on that listener's anthropometry might therefore be a practical way to improve sound localization. This study first developed a new method to estimate interaural time differences (ITDs) using HRTFs. Then correlations between ITDs and anthropometric parameters were analyzed using the canonical correlation method. Results indicated that parameters relating to head size and to shoulder and ear positions are significant. Consequently, an attempt was made to express ITDs in terms of each listener's anthropometric data. In this process, the change of ITDs as a function of azimuth angle was parameterized as a sum of sine functions. The parameters were then analyzed using multiple regression analysis, with the anthropometric parameters as explanatory variables. The predicted, individualized ITDs were installed in the non-individualized HRTFs to evaluate sound localization performance. Results showed that individualization of ITDs improved horizontal sound localization.
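    A geometric baseline for predicting ITDs from anthropometry is Woodworth's spherical-head formula, in which the only anthropometric parameter is the head radius a: ITD(theta) = (a/c)(theta + sin theta). This is not the paper's regression model, just an illustration of how predicted ITDs scale with a listener's head size.

```python
import numpy as np

C = 343.0  # speed of sound in air, m/s

def woodworth_itd(azimuth_deg, head_radius_m):
    """Woodworth's spherical-head ITD model.

    theta is the azimuth (radians from the median plane); the path-length
    difference around a rigid sphere of radius a gives
    ITD = (a / c) * (theta + sin(theta)).
    """
    theta = np.radians(azimuth_deg)
    return (head_radius_m / C) * (theta + np.sin(theta))

# A 1 cm difference in head radius shifts the fully lateral ITD noticeably.
itd_small = woodworth_itd(90.0, 0.080)   # ~600 us
itd_large = woodworth_itd(90.0, 0.090)   # ~675 us
```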

  19. A Functional Neuroimaging Study of Sound Localization: Visual Cortex Activity Predicts Performance in Early-Blind Individuals

    PubMed Central

    Gougoux, Frédéric; Zatorre, Robert J; Lassonde, Maryse; Voss, Patrice

    2005-01-01

    Blind individuals often demonstrate enhanced nonvisual perceptual abilities. However, the neural substrate that underlies this improved performance remains to be fully understood. An earlier behavioral study demonstrated that some early-blind people localize sounds more accurately than sighted controls using monaural cues. In order to investigate the neural basis of these behavioral differences in humans, we carried out functional imaging studies using positron emission tomography and a speaker array that permitted pseudo-free-field presentations within the scanner. During binaural sound localization, a sighted control group showed decreased cerebral blood flow in the occipital lobe, which was not seen in early-blind individuals. During monaural sound localization (one ear plugged), the subgroup of early-blind subjects who were behaviorally superior at sound localization displayed two activation foci in the occipital cortex. This effect was not seen in blind persons who did not have superior monaural sound localization abilities, nor in sighted individuals. The degree of activation of one of these foci was strongly correlated with sound localization accuracy across the entire group of blind subjects. The results show that those blind persons who perform better than sighted persons recruit occipital areas to carry out auditory localization under monaural conditions. We therefore conclude that computations carried out in the occipital cortex specifically underlie the enhanced capacity to use monaural cues. Our findings shed light not only on intermodal compensatory mechanisms, but also on individual differences in these mechanisms and on inhibitory patterns that differ between sighted individuals and those deprived of vision early in life. PMID:15678166

  20. Dynamic Spatial Hearing by Human and Robot Listeners

    NASA Astrophysics Data System (ADS)

    Zhong, Xuan

    This study consisted of several related projects on dynamic spatial hearing by both human and robot listeners. The first experiment investigated the maximum number of sound sources that human listeners could localize at the same time. Speech stimuli were presented simultaneously from different loudspeakers at multiple time intervals. The maximum number of perceived sound sources was close to four. The second experiment asked whether the amplitude modulation of multiple static sound sources could lead to the perception of auditory motion. On the horizontal and vertical planes, four independent noise sources with 60° spacing were amplitude modulated with consecutively larger phase delays. At lower modulation rates, motion could be perceived by human listeners in both cases. The third experiment asked whether several sources at static positions could serve as "acoustic landmarks" to improve the localization of other sources. Four continuous speech sources were placed on the horizontal plane with 90° spacing and served as the landmarks. The task was to localize a noise that was played for only three seconds while the listener was passively rotated in a chair in the middle of the loudspeaker array. The human listeners were better able to localize the sound sources with landmarks than without. The remaining experiments used an acoustic manikin in an attempt to fuse binaural recordings and motion data to localize sound sources. A dummy head with recording devices was mounted on top of a rotating chair and motion data were collected. The fourth experiment showed that an Extended Kalman Filter could be used to localize sound sources in a recursive manner. The fifth experiment demonstrated the use of a fitting method for separating multiple sound sources.
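    The recursive fusion of rotation data with binaural bearing estimates can be illustrated with a reduced, linear stand-in for the Extended Kalman Filter used in the fourth experiment. Everything below (state model, noise values, trajectory) is invented for the sketch: the state is the source's world-frame azimuth, assumed static, and each measurement is a head-relative bearing such as a binaural front end might supply.

```python
import numpy as np

def kalman_bearing(head_az, bearings, r_var=25.0):
    """Recursive estimate of a static source's world-frame azimuth.

    Measurement model: z = x - head_az + noise, i.e. the bearing relative
    to the current head orientation. All angles in degrees; r_var is the
    assumed bearing-noise variance. With a static source the predict
    step is the identity, so each update just blends in one measurement.
    """
    x, p = 0.0, 1e6                # diffuse prior over the azimuth
    for h, z in zip(head_az, bearings):
        innov = z - (x - h)        # measurement residual
        k = p / (p + r_var)        # Kalman gain
        x += k * innov
        p *= (1.0 - k)
    return x

# Listener rotated in a chair; source fixed at 40 degrees world azimuth.
rng = np.random.default_rng(1)
head = np.linspace(0.0, 180.0, 60)                   # head azimuth over time
meas = 40.0 - head + rng.normal(0.0, 5.0, 60)        # noisy relative bearings
est = kalman_bearing(head, meas)                     # converges near 40
```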

  1. Adaptation in sound localization processing induced by interaural time difference in amplitude envelope at high frequencies.

    PubMed

    Kawashima, Takayuki; Sato, Takao

    2012-01-01

    When a second sound follows a long first sound, its location appears to be perceived away from the first one (the localization/lateralization aftereffect). This aftereffect has often been considered to reflect an efficient neural coding of sound locations in the auditory system. To understand determinants of the localization aftereffect, the current study examined whether it is induced by an interaural temporal difference (ITD) in the amplitude envelope of high frequency transposed tones (over 2 kHz), which is known to function as a sound localization cue. In Experiment 1, participants were required to adjust the position of a pointer to the perceived location of test stimuli before and after adaptation. Test and adapter stimuli were amplitude modulated (AM) sounds presented at high frequencies and their positional differences were manipulated solely by the envelope ITD. Results showed that the adapter's ITD systematically affected the perceived position of test sounds to the directions expected from the localization/lateralization aftereffect when the adapter was presented at ±600 µs ITD; a corresponding significant effect was not observed for a 0 µs ITD adapter. In Experiment 2, the observed adapter effect was confirmed using a forced-choice task. It was also found that adaptation to the AM sounds at high frequencies did not significantly change the perceived position of pure-tone test stimuli in the low frequency region (128 and 256 Hz). The findings in the current study indicate that ITD in the envelope at high frequencies induces the localization aftereffect. This suggests that ITD in the high frequency region is involved in adaptive plasticity of auditory localization processing.

  2. Underwater hearing and sound localization with and without an air interface.

    PubMed

    Shupak, Avi; Sharoni, Zohara; Yanir, Yoav; Keynan, Yoav; Alfie, Yechezkel; Halpern, Pinchas

    2005-01-01

    Underwater hearing acuity and sound localization are improved by the presence of an air interface around the pinnae and inside the external ear canals. Hearing threshold and the ability to localize sound sources are reduced underwater. The resonance frequency of the external ear is lowered when the external ear canal is filled with water, and the impedance-matching ability of the middle ear is significantly reduced due to elevation of the ambient pressure, the water-mass load on the tympanic membrane, and the addition of a fluid-air interface during submersion. Sound lateralization on land is largely explained by the mechanisms of interaural intensity differences and interaural temporal or phase differences. During submersion, these differences are largely lost due to the increase in underwater sound velocity and cancellation of the head's acoustic shadow effect because of the similarity between the impedance of the skull and the surrounding water. Ten scuba divers wearing a regular opaque face mask or an opaque ProEar 2000 (Safe Dive, Ltd., Hofit, Israel) mask that enables the presence of air at ambient pressure in and around the ear made a dive to a depth of 3 m in the open sea. Four underwater speakers arranged on the horizontal plane at 90-degree intervals and at a distance of 5 m from the diver were used for testing pure-tone hearing thresholds (PTHT), the reception threshold for the recorded sound of a rubber-boat engine, and sound localization. For sound localization, the sound of the rubber boat's engine was randomly delivered by one speaker at a time at 40 dB HL above the recorded sound of a rubber-boat engine, and the diver was asked to point to the sound source. The azimuth was measured by the diver's companion using a navigation board. Underwater PTHT with both masks were significantly higher for frequencies of 250 to 6000 Hz when compared with the thresholds on land (p <0.0001). 
No differences were found in the PTHT or the reception threshold for the recorded sound of a rubber-boat engine for dry or wet ear conditions. There was no difference in the sound localization error between the regular mask and the ProEar 2000 mask. The presence of air around the pinna and inside the external ear canal did not improve underwater hearing sensitivity or sound localization. These results support the argument that bone conduction plays the main role in underwater hearing.
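
    The loss of interaural timing cues described above can be put in numbers: in a Woodworth-style spherical-head model, the interaural time difference (ITD) scales inversely with the speed of sound, which rises from roughly 343 m/s in air to about 1500 m/s in seawater. A minimal sketch (Python; the head radius is an assumed textbook value, not a figure from the study):

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, sound_speed_ms=343.0):
    """Woodworth spherical-head approximation of the interaural time
    difference for a distant source at the given azimuth."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / sound_speed_ms) * (theta + math.sin(theta))

# ITD for a source at 90 degrees azimuth, in air vs. in water
itd_air = itd_seconds(90.0)                           # c ~ 343 m/s
itd_water = itd_seconds(90.0, sound_speed_ms=1500.0)  # c ~ 1500 m/s

# The timing cue shrinks by the ratio of sound speeds (about 4.4x),
# one reason interaural timing is of little use to a submerged listener.
print(itd_air, itd_water)
```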

  3. Geometric Constraints on Human Speech Sound Inventories

    PubMed Central

    Dunbar, Ewan; Dupoux, Emmanuel

    2016-01-01

    We investigate the idea that the languages of the world have developed coherent sound systems in which having one sound increases or decreases the chances of having certain other sounds, depending on shared properties of those sounds. We examine the geometries of sound systems that are defined by the inherent properties of sounds. We document three typological tendencies in sound system geometries: economy, a tendency for the differences between sounds in a system to be definable on a relatively small number of independent dimensions; local symmetry, a tendency for sound systems to have relatively large numbers of pairs of sounds that differ only on one dimension; and global symmetry, a tendency for sound systems to be relatively balanced. The finding of economy corroborates previous results; the two symmetry properties have not been previously documented. We also investigate the relation between the typology of inventory geometries and the typology of individual sounds, showing that the frequency distribution with which individual sounds occur across languages works in favor of both local and global symmetry. PMID:27462296
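
    The local-symmetry measure can be made concrete by treating each sound as a vector of binary features and counting pairs that differ on exactly one dimension. A toy sketch (the three-feature inventory below is invented for illustration and is not from the paper's data):

```python
from itertools import combinations

def local_symmetry(inventory):
    """Count pairs of sounds whose feature vectors differ on exactly one
    dimension -- a simple stand-in for the paper's 'local symmetry'."""
    return sum(
        1 for a, b in combinations(inventory, 2)
        if sum(x != y for x, y in zip(a, b)) == 1
    )

# Hypothetical 3-feature inventory: (voiced, labial, nasal)
inventory = [
    (0, 1, 0),  # /p/
    (1, 1, 0),  # /b/
    (0, 0, 0),  # /t/
    (1, 0, 0),  # /d/
]
print(local_symmetry(inventory))  # -> 4 minimal pairs: p-b, p-t, b-d, t-d
```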

  4. Intercepting a sound without vision

    PubMed Central

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2017-01-01

    Visual information is extremely important to generate internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early blind individuals’ performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by matching localization accuracy in a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and a slight bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939

  5. Issues in Humanoid Audition and Sound Source Localization by Active Audition

    NASA Astrophysics Data System (ADS)

    Nakadai, Kazuhiro; Okuno, Hiroshi G.; Kitano, Hiroaki

    In this paper, we present an active audition system which is implemented on the humanoid robot "SIG the humanoid". The audition system for highly intelligent humanoids localizes sound sources and recognizes auditory events in the auditory scene. Active audition reported in this paper enables SIG to track sources by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning microphones orthogonal to the sound source and by capturing the possible sound sources by vision. However, such an active head movement inevitably creates motor noises. The system adaptively cancels motor noises using motor control signals and the cover acoustics. The experimental results demonstrate that active audition by integration of audition, vision, and motor control attains sound source tracking in a variety of conditions.
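
    The interaural time difference (ITD) cue that such a system relies on is commonly estimated by cross-correlating the two microphone signals. A minimal numpy sketch of that step (an illustrative stand-in, not SIG's actual implementation):

```python
import numpy as np

def estimate_delay(left, right):
    """Estimate the lag (in samples) of `right` relative to `left` via
    full cross-correlation -- the core of ITD-based localization."""
    corr = np.correlate(right, left, mode="full")
    return int(np.argmax(corr)) - (len(left) - 1)

rng = np.random.default_rng(0)
sig = rng.standard_normal(1024)
delay = 7  # right microphone lags by 7 samples
right = np.concatenate([np.zeros(delay), sig])[: len(sig)]

print(estimate_delay(sig, right))  # -> 7
```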

  6. The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl.

    PubMed

    Baxter, Caitlin S; Nelson, Brian S; Takahashi, Terry T

    2013-02-01

    Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643-655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.

  7. Monaural Sound Localization Revisited

    NASA Technical Reports Server (NTRS)

    Wightman, Frederic L.; Kistler, Doris J.

    1997-01-01

    Research reported during the past few decades has revealed the importance for human sound localization of the so-called 'monaural spectral cues.' These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.

  8. Monaural sound localization revisited.

    PubMed

    Wightman, F L; Kistler, D J

    1997-02-01

    Research reported during the past few decades has revealed the importance for human sound localization of the so-called "monaural spectral cues." These cues are the result of the direction-dependent filtering of incoming sound waves accomplished by the pinnae. One point of view about how these cues are extracted places great emphasis on the spectrum of the received sound at each ear individually. This leads to the suggestion that an effective way of studying the influence of these cues is to measure the ability of listeners to localize sounds when one of their ears is plugged. Numerous studies have appeared using this monaural localization paradigm. Three experiments are described here which are intended to clarify the results of the previous monaural localization studies and provide new data on how monaural spectral cues might be processed. Virtual sound sources are used in the experiments in order to manipulate and control the stimuli independently at the two ears. Two of the experiments deal with the consequences of the incomplete monauralization that may have contaminated previous work. The results suggest that even very low sound levels in the occluded ear provide access to interaural localization cues. The presence of these cues complicates the interpretation of the results of nominally monaural localization studies. The third experiment concerns the role of prior knowledge of the source spectrum, which is required if monaural cues are to be useful. The results of this last experiment demonstrate that extraction of monaural spectral cues can be severely disrupted by trial-to-trial fluctuations in the source spectrum. The general conclusion of the experiments is that, while monaural spectral cues are important, the monaural localization paradigm may not be the most appropriate way to study their role.

  9. Sound Source Localization Using Non-Conformal Surface Sound Field Transformation Based on Spherical Harmonic Wave Decomposition

    PubMed Central

    Zhang, Lanyue; Ding, Dandan; Yang, Desen; Wang, Jia; Shi, Jie

    2017-01-01

    Spherical microphone arrays have received increasing attention for their ability to locate a sound source at an arbitrary incident angle in three-dimensional space. Low-frequency sound sources are usually located using spherical near-field acoustic holography. In the conventional sound field transformation based on the generalized Fourier transform, the reconstruction surface and holography surface are conformal surfaces. When the sound source lies on a cylindrical surface, it is difficult to locate using the spherical conformal transformation. This paper proposes a non-conformal sound field transformation that constructs a transfer matrix based on spherical harmonic wave decomposition, which can transform a spherical surface into a cylindrical surface using spherical-array data. The theoretical expressions of the proposed method are deduced, and the performance of the method is simulated. Moreover, a sound source localization experiment using a spherical array with randomly and uniformly distributed elements is carried out. Results show that the non-conformal surface sound field transformation from a spherical surface to a cylindrical surface is realized by the proposed method. The localization deviation is around 0.01 m, and the resolution is around 0.3 m. The application of the spherical array is extended, and the localization ability of the spherical array is improved. PMID:28489065
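
    The decomposition step underlying such transformations expands the pressure sampled on the array into spherical harmonic coefficients. A simplified numpy sketch (real harmonics up to order 1 only, fitted by least squares; the 64-element random array geometry is hypothetical):

```python
import numpy as np

def real_sh_basis(theta, phi):
    """Real spherical harmonics up to order 1, evaluated at
    colatitude `theta` and azimuth `phi` (4 basis functions)."""
    return np.stack([
        np.full_like(theta, 0.5 / np.sqrt(np.pi)),               # Y_0^0
        np.sqrt(3 / (4 * np.pi)) * np.sin(theta) * np.sin(phi),  # Y_1^-1
        np.sqrt(3 / (4 * np.pi)) * np.cos(theta),                # Y_1^0
        np.sqrt(3 / (4 * np.pi)) * np.sin(theta) * np.cos(phi),  # Y_1^1
    ], axis=-1)

rng = np.random.default_rng(0)
# Randomly distributed sensing points on a sphere, as in the paper's array
theta = np.arccos(rng.uniform(-1, 1, 64))
phi = rng.uniform(0, 2 * np.pi, 64)

B = real_sh_basis(theta, phi)          # 64 x 4 sampling matrix
true_coeffs = np.array([1.0, -0.3, 0.8, 0.2])
pressure = B @ true_coeffs             # simulated field on the array

# Least-squares spherical harmonic decomposition of the measured field
coeffs, *_ = np.linalg.lstsq(B, pressure, rcond=None)
print(np.allclose(coeffs, true_coeffs))  # -> True
```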

  10. Physiological correlates of sound localization in a parasitoid fly, Ormia ochracea

    NASA Astrophysics Data System (ADS)

    Oshinsky, Michael Lee

    A major focus of research in the nervous system is the investigation of neural circuits. The question of how neurons connect to form functional units has driven modern neuroscience research from its inception. From the beginning, the neural circuits of the auditory system and specifically sound localization were used as a model system for investigating neural connectivity and computation. Sound localization lends itself to this task because there is no mapping of spatial information on a receptor sheet as in vision. With only one eye, an animal would still have positional information for objects. Since the receptor sheet in the ear is frequency oriented and not spatially oriented, positional information for a sound source does not exist with only one ear. The nervous system computes the location of a sound source based on differences in the physiology of the two ears. In this study, I investigated the neural circuits for sound localization in a fly, Ormia ochracea (Diptera, Tachinidae, Ormiini), which is a parasitoid of crickets. This fly possess a unique mechanically coupled hearing organ. The two ears are contained in one air sac and a cuticular bridge, that has a flexible spring-like structure at its center, connects them. This mechanical coupling preprocesses the sound before it is detected by the nervous system and provides the fly with directional information. The subject of this study is the neural coding of the location of sound stimuli by a mechanically coupled auditory system. In chapter 1, I present the natural history of an acoustic parasitoid and I review the peripheral processing of sound by the Ormian ear. In chapter 2, I describe the anatomy and physiology of the auditory afferents. I present this physiology in the context of sound localization. In chapter 3, I describe the directional dependent physiology for the thoracic local and ascending acoustic interneurons. 
In chapter 4, I quantify the threshold and I detail the kinematics of the phonotactic walking behavior in Ormia ochracea. I also quantify the angular resolution of the phonotactic turning behavior. Using a model, I show that the temporal coding properties of the afferents provide most of the information required by the fly to localize a singing cricket.

  11. Localization of sound sources in a room with one microphone

    NASA Astrophysics Data System (ADS)

    Peić Tukuljac, Helena; Lissek, Hervé; Vandergheynst, Pierre

    2017-08-01

    Estimation of the location of sound sources is usually done using microphone arrays. Such settings exploit the known differences, in phase or attenuation, between the signals received at different microphones, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. Design guidelines and limitations of the sensing matrix are given. The implementation is based on sparsity: only a few voxels in the room are occupied by sources. What is especially interesting about our solution is that it localizes sound sources not only in the horizontal plane but in full 3D coordinates inside the room.
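
    Voxel-sparse localization of this kind is often solved with a greedy recovery method such as orthogonal matching pursuit, applied to a dictionary whose columns are per-voxel transfer functions to the microphone. A toy sketch (the random sensing matrix below is a stand-in for a real room transfer function, so the paper's actual sensing-matrix design is not reproduced):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x with y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
    x = np.zeros(A.shape[1])
    x[support] = sol
    return x

rng = np.random.default_rng(0)
n_samples, n_voxels = 128, 400
# Stand-in sensing matrix: column j would hold the room impulse response
# from voxel j to the single microphone (here random, for illustration).
A = rng.standard_normal((n_samples, n_voxels))
A /= np.linalg.norm(A, axis=0)

x_true = np.zeros(n_voxels)
x_true[37] = 1.0            # one active source voxel
y = A @ x_true              # signal at the microphone

x_hat = omp(A, y, k=1)
print(int(np.argmax(np.abs(x_hat))))  # -> 37
```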

  12. How does experience modulate auditory spatial processing in individuals with blindness?

    PubMed

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-jia; Li, Jian-jun; Ting, Kin-hung; Wang, Jun; Lee, Tatia M C

    2015-05-01

    Comparing early- and late-onset blindness in individuals offers a unique model for studying the influence of visual experience on neural processing. This study investigated how prior visual experience would modulate auditory spatial processing among blind individuals. BOLD responses of early- and late-onset blind participants were captured while performing a sound localization task. The task required participants to listen to novel "Bat-ears" sounds, analyze the spatial information embedded in the sounds, and specify out of 15 locations where the sound would have been emitted. In addition to sound localization, participants were assessed on visuospatial working memory and general intellectual abilities. The results revealed common increases in BOLD responses in the middle occipital gyrus, superior frontal gyrus, precuneus, and precentral gyrus during sound localization for both groups. Between-group dissociations, however, were found in the right middle occipital gyrus and left superior frontal gyrus. The BOLD responses in the left superior frontal gyrus were significantly correlated with accuracy on sound localization and visuospatial working memory abilities among the late-onset blind participants. In contrast, the accuracy on sound localization only correlated with BOLD responses in the right middle occipital gyrus among the early-onset counterpart. The findings support the notion that early-onset blind individuals rely more on the occipital areas as a result of cross-modal plasticity for auditory spatial processing, while late-onset blind individuals rely more on the prefrontal areas which subserve visuospatial working memory.

  13. Position-dependent hearing in three species of bushcrickets (Tettigoniidae, Orthoptera)

    PubMed Central

    Lakes-Harlan, Reinhard; Scherberich, Jan

    2015-01-01

    A primary task of auditory systems is the localization of sound sources in space. Sound source localization in azimuth is usually based on temporal or intensity differences of sounds between the bilaterally arranged ears. In mammals, localization in elevation is possible by transfer functions at the ear, especially the pinnae. Although insects are able to locate sound sources, little attention is given to the mechanisms of acoustic orientation to elevated positions. Here we comparatively analyse the peripheral hearing thresholds of three species of bushcrickets with respect to sound source positions in space. The hearing thresholds across frequencies depend on the location of a sound source in the three-dimensional hearing space in front of the animal. Thresholds differ for different azimuthal positions and for different positions in elevation. This position-dependent frequency tuning is species specific. Largest differences in thresholds between positions are found in Ancylecha fenestrata. Correspondingly, A. fenestrata has a rather complex ear morphology including cuticular folds covering the anterior tympanal membrane. The position-dependent tuning might contribute to sound source localization in the habitats. Acoustic orientation might be a selective factor for the evolution of morphological structures at the bushcricket ear and, speculatively, even for frequency fractioning in the ear. PMID:26543574

  14. Position-dependent hearing in three species of bushcrickets (Tettigoniidae, Orthoptera).

    PubMed

    Lakes-Harlan, Reinhard; Scherberich, Jan

    2015-06-01

    A primary task of auditory systems is the localization of sound sources in space. Sound source localization in azimuth is usually based on temporal or intensity differences of sounds between the bilaterally arranged ears. In mammals, localization in elevation is possible by transfer functions at the ear, especially the pinnae. Although insects are able to locate sound sources, little attention is given to the mechanisms of acoustic orientation to elevated positions. Here we comparatively analyse the peripheral hearing thresholds of three species of bushcrickets with respect to sound source positions in space. The hearing thresholds across frequencies depend on the location of a sound source in the three-dimensional hearing space in front of the animal. Thresholds differ for different azimuthal positions and for different positions in elevation. This position-dependent frequency tuning is species specific. Largest differences in thresholds between positions are found in Ancylecha fenestrata. Correspondingly, A. fenestrata has a rather complex ear morphology including cuticular folds covering the anterior tympanal membrane. The position-dependent tuning might contribute to sound source localization in the habitats. Acoustic orientation might be a selective factor for the evolution of morphological structures at the bushcricket ear and, speculatively, even for frequency fractioning in the ear.

  15. Multiple sound source localization using gammatone auditory filtering and direct sound componence detection

    NASA Astrophysics Data System (ADS)

    Chen, Huaiyu; Cao, Li

    2017-06-01

    To study multiple sound source localization under room reverberation and background noise, we analyze the shortcomings of traditional broadband MUSIC and of the ordinary auditory-filtering-based broadband MUSIC method, and propose a new broadband MUSIC algorithm that combines gammatone auditory filtering with frequency-component selection control and detection of the ascending segment of the direct-sound component. The proposed algorithm restricts processing to frequency components within the band of interest at the multichannel bandpass-filter stage. Detection of the direct-sound component is used to suppress room-reverberation interference; its merits are fast computation and avoiding more complex de-reverberation algorithms. In addition, the pseudo-spectra of the different frequency channels are weighted by their maximum amplitude for every speech frame. Simulations and experiments in a real reverberant room show that the proposed method performs well. Dynamic multiple-source localization results indicate that the average absolute azimuth error of the proposed algorithm is smaller and the histogram result has higher angular resolution.
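
    The MUSIC stage at the core of such algorithms can be sketched for a single narrowband uniform linear array; the gammatone filtering, direct-sound detection, and channel weighting described above are omitted, and all array parameters are illustrative:

```python
import numpy as np

def music_spectrum(X, n_sources, angles_deg, d=0.5):
    """Narrowband MUSIC pseudo-spectrum for a uniform linear array with
    element spacing `d` wavelengths; X is (sensors x snapshots)."""
    R = X @ X.conj().T / X.shape[1]          # sample covariance
    _, vecs = np.linalg.eigh(R)              # eigenvalues ascending
    En = vecs[:, : X.shape[0] - n_sources]   # noise subspace
    m = np.arange(X.shape[0])
    spec = []
    for ang in np.radians(angles_deg):
        a = np.exp(-2j * np.pi * d * m * np.sin(ang))  # steering vector
        spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spec)

rng = np.random.default_rng(0)
n_sensors, n_snap = 8, 200
idx = np.arange(n_sensors)
doas = [-20.0, 30.0]
A = np.stack([np.exp(-2j * np.pi * 0.5 * idx * np.sin(np.radians(a)))
              for a in doas], axis=1)
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
noise = (rng.standard_normal((n_sensors, n_snap))
         + 1j * rng.standard_normal((n_sensors, n_snap)))
X = A @ S + 0.01 * noise

grid = np.arange(-90.0, 90.5, 0.5)
spec = music_spectrum(X, n_sources=2, angles_deg=grid)
peaks = grid[np.argsort(spec)[-2:]]   # two largest pseudo-spectrum values
print(sorted(peaks))                  # near the true DOAs of -20 and 30
```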

  16. Hybrid local piezoelectric and conductive functions for high performance airborne sound absorption

    NASA Astrophysics Data System (ADS)

    Rahimabady, Mojtaba; Statharas, Eleftherios Christos; Yao, Kui; Sharifzadeh Mirshekarloo, Meysam; Chen, Shuting; Tay, Francis Eng Hock

    2017-12-01

    A concept of hybrid local piezoelectric and electrical conductive functions for improving airborne sound absorption is proposed and demonstrated in a composite foam made of porous polar polyvinylidene fluoride (PVDF) mixed with conductive single-walled carbon nanotubes (SWCNT). According to our hybrid material function design, the local piezoelectric effect in the polar PVDF matrix and the electrical resistive loss in the SWCNT enhance the conversion of sound energy to electrical energy and subsequently to thermal energy, respectively, in addition to the other known sound absorption mechanisms in a porous material. It is found that the overall energy conversion, and hence the sound absorption performance, is maximized when the SWCNT concentration is around the conductivity percolation threshold. For the optimal composition of PVDF/5 wt. % SWCNT, a sound reduction coefficient larger than 0.58 is obtained, with a sound absorption coefficient above 50% even at the low frequency of 600 Hz, demonstrating great value for passive noise mitigation.

  17. The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl

    PubMed Central

    Baxter, Caitlin S.; Takahashi, Terry T.

    2013-01-01

    Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643–655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map. PMID:23175801

  18. Dynamic sound localization in cats

    PubMed Central

    Ruhland, Janet L.; Jones, Amy E.

    2015-01-01

    Sound localization in cats and humans relies on head-centered acoustic cues. Studies have shown that humans are able to localize sounds during rapid head movements that are directed toward the target or other objects of interest. We studied whether cats are able to utilize similar dynamic acoustic cues to localize acoustic targets delivered during rapid eye-head gaze shifts. We trained cats with visual-auditory two-step tasks in which we presented a brief sound burst during saccadic eye-head gaze shifts toward a prior visual target. No consistent or significant differences in accuracy or precision were found between this dynamic task (2-step saccade) and the comparable static task (single saccade when the head is stable) in either horizontal or vertical direction. Cats appear to be able to process dynamic auditory cues and execute complex motor adjustments to accurately localize auditory targets during rapid eye-head gaze shifts. PMID:26063772

  19. Characteristic sounds facilitate visual search.

    PubMed

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  20. Spherical loudspeaker array for local active control of sound.

    PubMed

    Rafaely, Boaz

    2009-05-01

    Active control of sound has been employed to reduce noise levels around listeners' head using destructive interference from noise-canceling sound sources. Recently, spherical loudspeaker arrays have been studied as multiple-channel sound sources, capable of generating sound fields with high complexity. In this paper, the potential use of a spherical loudspeaker array for local active control of sound is investigated. A theoretical analysis of the primary and secondary sound fields around a spherical sound source reveals that the natural quiet zones for the spherical source have a shell-shape. Using numerical optimization, quiet zones with other shapes are designed, showing potential for quiet zones with extents that are significantly larger than the well-known limit of a tenth of a wavelength for monopole sources. The paper presents several simulation examples showing quiet zones in various configurations.
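
    The quiet-zone design problem reduces to choosing secondary-source strengths that cancel the primary field at control points sampling the zone. A least-squares numpy sketch with free-field monopoles (source positions, frequency, and zone size are illustrative, not taken from the paper):

```python
import numpy as np

def greens(src, pts, k):
    """Free-field monopole transfer function (3-D Green's function)."""
    r = np.linalg.norm(pts - src, axis=1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

k = 2 * np.pi / 0.343                  # wavenumber at 1 kHz, c = 343 m/s
primary = np.array([2.0, 0.0, 0.0])    # noise source to be cancelled
secondaries = [np.array([0.3, 0.0, 0.0]), np.array([0.0, 0.3, 0.0]),
               np.array([-0.3, 0.0, 0.0]), np.array([0.0, -0.3, 0.0])]

# Control points sampling the desired quiet zone around the listener's head
rng = np.random.default_rng(0)
pts = 0.05 * rng.standard_normal((32, 3))

p = greens(primary, pts, k)                        # primary field
G = np.stack([greens(s, pts, k) for s in secondaries], axis=1)

# Least-squares secondary-source strengths that cancel the primary field
q, *_ = np.linalg.lstsq(G, -p, rcond=None)
residual = np.linalg.norm(G @ q + p) / np.linalg.norm(p)
print(residual)   # relative residual pressure inside the zone (< 1)
```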

  1. Simulation and testing of a multichannel system for 3D sound localization

    NASA Astrophysics Data System (ADS)

    Matthews, Edward Albert

    Three-dimensional (3D) audio involves the ability to localize sound anywhere in a three-dimensional space. 3D audio can be used to provide the listener with the perception of moving sounds and can provide a realistic listening experience for applications such as gaming, video conferencing, movies, and concerts. The purpose of this research is to simulate and test 3D audio by incorporating auditory localization techniques in a multi-channel speaker system. The objective is to develop an algorithm that can place an audio event in a desired location by calculating and controlling the gain factors of each speaker. A MATLAB simulation displays the location of the speakers and perceived sound, which is verified through experimentation. The scenario in which the listener is not equidistant from each of the speakers is also investigated and simulated. This research is envisioned to lead to a better understanding of human localization of sound, and will contribute to a more realistic listening experience.
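
    A common way to place a phantom source between a pair of speakers, as the gain-calculation algorithm described above must, is a constant-power pan law, with a 1/r gain correction when the listener is not equidistant. A sketch of this textbook rule (not necessarily the thesis's exact algorithm):

```python
import math

def pan_gains(angle_deg, spread_deg=60.0):
    """Constant-power pan law for a stereo pair placed +/- spread/2
    degrees about the median plane."""
    x = angle_deg / (spread_deg / 2)         # -1 (left) .. +1 (right)
    theta = (x + 1) * math.pi / 4            # 0 .. pi/2
    return math.cos(theta), math.sin(theta)  # (left gain, right gain)

def distance_compensation(gain, distance_m, reference_m=1.0):
    """Scale a speaker's gain when the listener is not equidistant:
    a point source's amplitude falls off as 1/r."""
    return gain * distance_m / reference_m

gl, gr = pan_gains(0.0)              # centered phantom source
print(round(gl, 4), round(gr, 4))    # -> 0.7071 0.7071
print(round(gl * gl + gr * gr, 6))   # -> 1.0 (power preserved)
```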

  2. Input-Specific Gain Modulation by Local Sensory Context Shapes Cortical and Thalamic Responses to Complex Sounds.

    PubMed

    Williamson, Ross S; Ahrens, Misha B; Linden, Jennifer F; Sahani, Maneesh

    2016-07-20

    Sensory neurons are customarily characterized by one or more linearly weighted receptive fields describing sensitivity in sensory space and time. We show that in auditory cortical and thalamic neurons, the weight of each receptive field element depends on the pattern of sound falling within a local neighborhood surrounding it in time and frequency. Accounting for this change in effective receptive field with spectrotemporal context improves predictions of both cortical and thalamic responses to stationary complex sounds. Although context dependence varies among neurons and across brain areas, there are strong shared qualitative characteristics. In a spectrotemporally rich soundscape, sound elements modulate neuronal responsiveness more effectively when they coincide with sounds at other frequencies, and less effectively when they are preceded by sounds at similar frequencies. This local-context-driven lability in the representation of complex sounds, a modulation of "input-specific gain" rather than "output gain," may be a widespread motif in sensory processing. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  3. Underwater auditory localization by a swimming harbor seal (Phoca vitulina).

    PubMed

    Bodson, Anais; Miersch, Lars; Mauck, Bjoern; Dehnhardt, Guido

    2006-09-01

    The underwater sound localization acuity of a swimming harbor seal (Phoca vitulina) was measured in the horizontal plane at 13 different positions. The stimulus was either a double sound (two 6-kHz pure tones lasting 0.5 s separated by an interval of 0.2 s) or a single continuous sound of 1.2 s. Testing was conducted in a 10-m-diam underwater half circle arena with hidden loudspeakers installed at the exterior perimeter. The animal was trained to swim along the diameter of the half circle and to change its course towards the sound source as soon as the signal was given. The seal indicated the sound source by touching its assumed position at the board of the half circle. The deviation of the seal's choice from the actual sound source was measured by means of video analysis. In trials with the double sound, the seal localized the sound sources with a mean deviation of 2.8 degrees, and in trials with the single sound, with a mean deviation of 4.5 degrees. In a second experiment, minimum audible angles of the stationary animal were found to be 9.8 degrees in front and 9.7 degrees in the back of the seal's head.

  4. Neuromorphic audio-visual sensor fusion on a sound-localizing robot.

    PubMed

    Chan, Vincent Yue-Sek; Jin, Craig T; van Schaik, André

    2012-01-01

    This paper presents the first robotic system featuring audio-visual (AV) sensor fusion with neuromorphic sensors. We combine a pair of silicon cochleae and a silicon retina on a robotic platform to allow the robot to learn sound localization through self-motion and visual feedback, using an adaptive ITD-based sound localization algorithm. After training, the robot can localize sound sources (white or pink noise) in a reverberant environment with an RMS error of 4-5° in azimuth. We also investigate the AV source binding problem, and an experiment was conducted to test the effectiveness of matching an audio event with a corresponding visual event based on their onset times. Despite the simplicity of this method and a large number of false visual events in the background, a correct match was made 75% of the time during the experiment.
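    The ITD cue that the robot's adaptive algorithm learns can be illustrated generically: cross-correlate the two channels to find the lag of best alignment, then map that lag to azimuth with a simple sine model. This is a minimal sketch of the cue itself, not the paper's neuromorphic implementation; the 0.15 m ear separation and in-air speed of sound are assumed values.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference in seconds.

    Positive ITD means the sound reached the left channel first.
    """
    corr = np.correlate(left, right, mode="full")
    return ((len(right) - 1) - np.argmax(corr)) / fs

def itd_to_azimuth(itd, ear_distance=0.15, c=343.0):
    """Map an ITD to azimuth (degrees) with a simple sine model."""
    s = np.clip(itd * c / ear_distance, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

# Synthetic check: the right channel lags the left by 5 samples.
fs = 48000
rng = np.random.default_rng(0)
burst = rng.standard_normal(1024)
left = np.concatenate([burst, np.zeros(5)])
right = np.concatenate([np.zeros(5), burst])
itd = estimate_itd(left, right, fs)      # 5 / 48000 s, about 104 µs
```

    Scanning a learned mapping from ITD to motor commands, as the robot does, replaces the closed-form arcsine here; the correlation step is the shared core.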

  5. Characteristic sounds facilitate visual search

    PubMed Central

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2009-01-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing “meow” did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds. PMID:18567253

  6. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners With Bilateral and With Hearing-Preservation Cochlear Implants.

    PubMed

    Loiselle, Louise H; Dorman, Michael F; Yost, William A; Cook, Sarah J; Gifford, Rene H

    2016-08-01

    To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Eleven bilateral listeners with MED-EL (Durham, NC) CIs and 8 listeners with hearing-preservation CIs with symmetrical low-frequency acoustic hearing using the MED-EL or Cochlear device were evaluated using 2 tests designed to task binaural hearing: localization and a simulated cocktail party. Access to interaural cues for localization was constrained by the use of low-pass, high-pass, and wideband noise stimuli. Sound-source localization accuracy for listeners with bilateral CIs in response to the high-pass noise stimulus and sound-source localization accuracy for the listeners with hearing-preservation CIs in response to the low-pass noise stimulus did not differ significantly. Speech understanding in a cocktail party listening environment improved for all listeners when interaural cues, either interaural time difference or interaural level difference, were available. The findings of the current study indicate that similar degrees of benefit to sound-source localization and speech understanding in complex listening environments are possible with 2 very different rehabilitation strategies: the provision of bilateral CIs and the preservation of hearing.

  7. Modeling the utility of binaural cues for underwater sound localization.

    PubMed

    Schneider, Jennifer N; Lloyd, David R; Banks, Patchouly N; Mercado, Eduardo

    2014-06-01

    The binaural cues used by terrestrial animals for sound localization in azimuth may not always suffice for accurate sound localization underwater. The purpose of this research was to examine the theoretical limits of interaural timing and level differences available underwater using computational and physical models. A paired-hydrophone system was used to record sounds transmitted underwater and recordings were analyzed using neural networks calibrated to reflect the auditory capabilities of terrestrial mammals. Estimates of source direction based on temporal differences were most accurate for frequencies between 0.5 and 1.75 kHz, with greater resolution toward the midline (2°), and lower resolution toward the periphery (9°). Level cues also changed systematically with source azimuth, even at lower frequencies than expected from theoretical calculations, suggesting that binaural mechanical coupling (e.g., through bone conduction) might, in principle, facilitate underwater sound localization. Overall, the relatively limited ability of the model to estimate source position using temporal and level difference cues underwater suggests that animals such as whales may use additional cues to accurately localize conspecifics and predators at long distances. Copyright © 2014 Elsevier B.V. All rights reserved.
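    One reason binaural timing cues weaken underwater follows directly from the speed of sound: the largest ITD a receiver pair can produce scales as separation divided by sound speed, so the roughly 4.3-fold higher sound speed in water compresses the usable ITD range by the same factor. A back-of-envelope sketch (the 0.2 m separation is an assumed value, not the paper's hydrophone geometry):

```python
def max_itd_us(separation_m, c_m_per_s):
    """Largest interaural time difference (microseconds) for a source
    directly off to one side of a two-receiver pair."""
    return 1e6 * separation_m / c_m_per_s

d = 0.2                            # hypothetical receiver separation, metres
itd_air = max_itd_us(d, 343.0)     # ~583 µs of usable range in air
itd_water = max_itd_us(d, 1482.0)  # ~135 µs underwater: ~4.3x compression
```

    The same compression applies to any head or array size, which is why the model's temporal-cue resolution underwater is coarser than terrestrial performance would suggest.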

  8. Sound localization in common vampire bats: Acuity and use of the binaural time cue by a small mammal

    PubMed Central

    Heffner, Rickye S.; Koay, Gimseong; Heffner, Henry E.

    2015-01-01

    Passive sound-localization acuity and the ability to use binaural time and intensity cues were determined for the common vampire bat (Desmodus rotundus). The bats were tested using a conditioned suppression/avoidance procedure in which they drank defibrinated blood from a spout in the presence of sounds from their right, but stopped drinking (i.e., broke contact with the spout) whenever a sound came from their left, thereby avoiding a mild shock. The mean minimum audible angle for three bats for a 100-ms noise burst was 13.1°—within the range of thresholds for other bats and near the mean for mammals. Common vampire bats readily localized pure tones of 20 kHz and higher, indicating they could use interaural intensity-differences. They could also localize pure tones of 5 kHz and lower, thereby demonstrating the use of interaural time-differences, despite their very small maximum interaural distance of 60 μs. A comparison of the use of locus cues among mammals suggests several implications for the evolution of sound localization and its underlying anatomical and physiological mechanisms. PMID:25618037

  9. L-type calcium channels refine the neural population code of sound level

    PubMed Central

    Grimsley, Calum Alex; Green, David Brian

    2016-01-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536

  10. Sound Source Localization through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network.

    PubMed

    Beck, Christoph; Garreau, Guillaume; Georgiou, Julius

    2016-01-01

    Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They can detect ground-wave amplitudes the size of an atom and, based on their neuronal anatomy, locate acoustic stimuli to within 13°. We present here a prototype sound source localization system inspired by this impressive performance. The system uses custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows a smaller localization error than that observed in nature.

  11. Binaural Processing of Multiple Sound Sources

    DTIC Science & Technology

    2016-08-18

    Sound Source Localization Identification, and Sound Source Localization When Listeners Move. The CI research was also supported by an NIH grant ("Cochlear Implant Performance in Realistic Listening Environments," Dr. Michael Dorman, Principal Investigator; Dr. William Yost, unpaid advisor).

  12. 3-D inversion of airborne electromagnetic data parallelized and accelerated by local mesh and adaptive soundings

    NASA Astrophysics Data System (ADS)

    Yang, Dikun; Oldenburg, Douglas W.; Haber, Eldad

    2014-03-01

    Airborne electromagnetic (AEM) methods are highly efficient tools for assessing the Earth's conductivity structures in a large area at low cost. However, the configuration of AEM measurements, which typically have widely distributed transmitter-receiver pairs, makes the rigorous modelling and interpretation extremely time-consuming in 3-D. Excessive overcomputing can occur when working on a large mesh covering the entire survey area and inverting all soundings in the data set. We propose two improvements. The first is to use a locally optimized mesh for each AEM sounding for the forward modelling and calculation of sensitivity. This dedicated local mesh is small with fine cells near the sounding location and coarse cells far away in accordance with EM diffusion and the geometric decay of the signals. Once the forward problem is solved on the local meshes, the sensitivity for the inversion on the global mesh is available through quick interpolation. Using local meshes for AEM forward modelling avoids unnecessary computing on fine cells on a global mesh that are far away from the sounding location. Since local meshes are highly independent, the forward modelling can be efficiently parallelized over an array of processors. The second improvement is random and dynamic down-sampling of the soundings. Each inversion iteration only uses a random subset of the soundings, and the subset is reselected for every iteration. The number of soundings in the random subset, determined by an adaptive algorithm, is tied to the degree of model regularization. This minimizes the overcomputing caused by working with redundant soundings. Our methods are compared against conventional methods and tested with a synthetic example. We also invert a field data set that was previously considered to be too large to be practically inverted in 3-D. 
These examples show that our methodology can dramatically reduce the processing time of 3-D inversion to a practical level without losing resolution. Any existing modelling technique can be included into our framework of mesh decoupling and adaptive sampling to accelerate large-scale 3-D EM inversions.

  13. What is that mysterious booming sound?

    USGS Publications Warehouse

    Hill, David P.

    2011-01-01

    The residents of coastal North Carolina are occasionally treated to sequences of booming sounds of unknown origin. The sounds are often energetic enough to rattle windows and doors. A recent sequence occurred in early January 2011 during clear weather with no evidence of local thunderstorms. Queries by a local reporter (Colin Hackman of the NBC affiliate WETC in Wilmington, North Carolina, personal communication 2011) seemed to eliminate common anthropogenic sources such as sonic booms or quarry blasts. So the commonly asked question, “What's making these booming sounds?” remained (and remains) unanswered.

  14. Monaural Sound Localization Based on Structure-Induced Acoustic Resonance

    PubMed Central

    Kim, Keonwook; Kim, Youngwoong

    2015-01-01

    A physical structure such as a cylindrical pipe controls the propagated sound spectrum in a predictable way that can be used to localize the sound source. This paper designs a monaural sound localization system based on multiple pyramidal horns around a single microphone. The acoustic resonance within the horn provides a periodicity in the spectral domain known as the fundamental frequency which is inversely proportional to the radial horn length. Once the system accurately estimates the fundamental frequency, the horn length and corresponding angle can be derived by the relationship. The modified Cepstrum algorithm is employed to evaluate the fundamental frequency. In an anechoic chamber, localization experiments over azimuthal configuration show that up to 61% of the proper signal is recognized correctly with 30% misfire. With a speculated detection threshold, the system estimates direction 52% in positive-to-positive and 34% in negative-to-positive decision rate, on average. PMID:25668214
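    The cepstral step can be sketched generically: take the log-magnitude spectrum, transform it again, and read the dominant quefrency as the period of the spectral ripple. This is a textbook real-cepstrum estimator, not the paper's modified Cepstrum algorithm, and the search band is an assumed parameter.

```python
import numpy as np

def cepstral_f0(x, fs, fmin=200.0, fmax=4000.0):
    """Estimate the spectral periodicity (fundamental) via the real cepstrum."""
    log_mag = np.log(np.abs(np.fft.rfft(x)) + 1e-12)
    ceps = np.fft.irfft(log_mag)
    q_lo, q_hi = int(fs / fmax), int(fs / fmin)   # quefrency search band
    q = q_lo + np.argmax(ceps[q_lo:q_hi])
    return fs / q

# Synthetic check: a decaying pulse train repeating every 32 samples has
# spectral ripple spaced fs/32 = 500 Hz apart.
fs = 16000
x = np.zeros(fs)
x[::32] = 0.95 ** np.arange(500)
f0 = cepstral_f0(x, fs)                           # 500.0 Hz
```

    In the horn system the ripple spacing is set by the resonance of the pyramidal horn, so inverting the estimated fundamental through the horn-length relationship yields the arrival direction.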

  15. Sound localization in the alligator.

    PubMed

    Bierman, Hilary S; Carr, Catherine E

    2015-11-01

    In early tetrapods, it is assumed that the tympana were acoustically coupled through the pharynx and therefore inherently directional, acting as pressure difference receivers. The later closure of the middle ear cavity in turtles, archosaurs, and mammals is a derived condition, and would have changed the ear by decoupling the tympana. Isolation of the middle ears would then have led to selection for structural and neural strategies to compute sound source localization in both archosaurs and mammalian ancestors. In the archosaurs (birds and crocodilians) the presence of air spaces in the skull provided connections between the ears that have been exploited to improve directional hearing, while neural circuits mediating sound localization are well developed. In this review, we will focus primarily on directional hearing in crocodilians, where vocalization and sound localization are thought to be ecologically important, and indicate important issues still awaiting resolution. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Neural Correlates of Sound Localization in Complex Acoustic Environments

    PubMed Central

    Zündorf, Ida C.; Lewald, Jörg; Karnath, Hans-Otto

    2013-01-01

    Listening to and understanding people in a “cocktail-party situation” is a remarkable feature of the human auditory system. Here we investigated the neural correlates of the ability to localize a particular sound among others in an acoustically cluttered environment with healthy subjects. In a sound localization task, five different natural sounds were presented from five virtual spatial locations during functional magnetic resonance imaging (fMRI). Activity related to auditory stream segregation was revealed in posterior superior temporal gyrus bilaterally, anterior insula, supplementary motor area, and frontoparietal network. Moreover, the results indicated critical roles of left planum temporale in extracting the sound of interest among acoustical distracters and the precuneus in orienting spatial attention to the target sound. We hypothesized that the left-sided lateralization of the planum temporale activation is related to the higher specialization of the left hemisphere for analysis of spectrotemporal sound features. Furthermore, the precuneus, a brain area known to be involved in the computation of spatial coordinates across diverse frames of reference for reaching to objects, also seems to be a crucial area for accurately determining locations of auditory targets in an acoustically complex scene of multiple sound sources. The precuneus thus may not only be involved in visuo-motor processes, but may also subserve related functions in the auditory modality. PMID:23691185

  17. Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants

    PubMed Central

    Zheng, Yi; Godar, Shelly P.; Litovsky, Ruth Y.

    2015-01-01

    Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate, and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs) sound localization is known to improve when bilateral CIs (BiCIs) are used compared to when a single CI is used. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify patterns observed in children for mapping acoustic space to a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo transitions from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users. PMID:26288142

  19. A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS

    NASA Astrophysics Data System (ADS)

    Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto

    At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds which they make. Thus developing a technique to localize sound sources amidst loud noise will support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique that searches for imperceptible sounds in loud-noise environments. Two speakers simultaneously played generator noise and a voice attenuated by 20 dB (1/100 of the power) relative to that noise, in an outdoor space where cicadas were calling. The sound was received by a horizontally mounted linear microphone array, 1.05 m in length and consisting of 15 microphones. The direction and distance of the voice were computed, and the voice was extracted and played back as an audible sound by array signal processing.

  20. L-type calcium channels refine the neural population code of sound level.

    PubMed

    Grimsley, Calum Alex; Green, David Brian; Sivaramakrishnan, Shobhana

    2016-12-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1-1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. Copyright © 2016 the American Physiological Society.

  1. Sound localization by echolocating bats

    NASA Astrophysics Data System (ADS)

    Aytekin, Murat

    Echolocating bats emit ultrasonic vocalizations and listen to echoes reflected back from objects in the path of the sound beam to build a spatial representation of their surroundings. Important to understanding the representation of space through echolocation are detailed studies of the cues used for localization, the sonar emission patterns and how this information is assembled. This thesis includes three studies, one on the directional properties of the sonar receiver, one on the directional properties of the sonar transmitter, and a model that demonstrates the role of action in building a representation of auditory space. The general importance of this work to a broader understanding of spatial localization is discussed. Investigations of the directional properties of the sonar receiver reveal that interaural level difference and monaural spectral notch cues are both dependent on sound source azimuth and elevation. This redundancy allows flexibility that an echolocating bat may need when coping with complex computational demands for sound localization. Using a novel method to measure bat sonar emission patterns from freely behaving bats, I show that the sonar beam shape varies between vocalizations. Consequently, the auditory system of a bat may need to adapt its computations to accurately localize objects using changing acoustic inputs. Extra-auditory signals that carry information about pinna position and beam shape are required for auditory localization of sound sources. The auditory system must learn associations between extra-auditory signals and acoustic spatial cues. Furthermore, the auditory system must adapt to changes in acoustic input that occur with changes in pinna position and vocalization parameters. These demands on the nervous system suggest that sound localization is achieved through the interaction of behavioral control and acoustic inputs. 
A sensorimotor model demonstrates how an organism can learn space through auditory-motor contingencies. The model also reveals how different aspects of sound localization, such as experience-dependent acquisition, adaptation, and extra-auditory influences, can be brought together under a comprehensive framework. This thesis presents a foundation for understanding the representation of auditory space that builds upon acoustic cues, motor control, and learning dynamic associations between action and auditory inputs.

  2. Psychophysical investigation of an auditory spatial illusion in cats: the precedence effect.

    PubMed

    Tollin, Daniel J; Yin, Tom C T

    2003-10-01

    The precedence effect (PE) describes several spatial perceptual phenomena that occur when similar sounds are presented from two different locations and separated by a delay. The mechanisms that produce the effect are thought to be responsible for the ability to localize sounds in reverberant environments. Although the physiological bases for the PE have been studied, little is known about how these sounds are localized by species other than humans. Here we used the search coil technique to measure the eye positions of cats trained to saccade to the apparent locations of sounds. To study the PE, brief broadband stimuli were presented from two locations, with a delay between their onsets; the delayed sound was meant to simulate a single reflection. Although the cats accurately localized single sources, the apparent locations of the paired sources depended on the delay. First, the cats exhibited summing localization, the perception of a "phantom" sound located between the sources, for delays < ±400 µs for sources positioned in azimuth along the horizontal plane, but not for sources positioned in elevation along the sagittal plane. Second, consistent with localization dominance, for delays from 400 µs to about 10 ms, the cats oriented toward the leading source location only, with little influence of the lagging source, both for horizontally and vertically placed sources. Finally, the echo threshold was reached for delays >10 ms, where the cats first began to orient to the lagging source on some trials. These data reveal that cats experience the PE phenomena similarly to humans.

  3. A longitudinal study of the bilateral benefit in children with bilateral cochlear implants.

    PubMed

    Asp, Filip; Mäki-Torkko, Elina; Karltorp, Eva; Harder, Henrik; Hergils, Leif; Eskilsson, Gunnar; Stenfelt, Stefan

    2015-02-01

    To study the development of the bilateral benefit in children using bilateral cochlear implants by measurements of speech recognition and sound localization. Bilateral and unilateral speech recognition in quiet, in multi-source noise, and horizontal sound localization was measured at three occasions during a two-year period, without controlling for age or implant experience. Longitudinal and cross-sectional analyses were performed. Results were compared to cross-sectional data from children with normal hearing. Seventy-eight children aged 5.1-11.9 years, with a mean bilateral cochlear implant experience of 3.3 years and a mean age of 7.8 years, at inclusion in the study. Thirty children with normal hearing aged 4.8-9.0 years provided normative data. For children with cochlear implants, bilateral and unilateral speech recognition in quiet was comparable whereas a bilateral benefit for speech recognition in noise and sound localization was found at all three test occasions. Absolute performance was lower than in children with normal hearing. Early bilateral implantation facilitated sound localization. A bilateral benefit for speech recognition in noise and sound localization continues to exist over time for children with bilateral cochlear implants, but no relative improvement is found after three years of bilateral cochlear implant experience.

  4. Local inhibition of GABA affects precedence effect in the inferior colliculus

    PubMed Central

    Wang, Yanjun; Wang, Ningyu; Wang, Dan; Jia, Jun; Liu, Jinfeng; Xie, Yan; Wen, Xiaohui; Li, Xiaoting

    2014-01-01

    The precedence effect is a prerequisite for faithful sound localization in a complex auditory environment, and is a physiological phenomenon in which the auditory system selectively suppresses the directional information from echoes. Here we investigated how neurons in the inferior colliculus respond to the paired sounds that produce precedence-effect illusions, and whether their firing behavior can be modulated through inhibition with gamma-aminobutyric acid (GABA). We recorded extracellularly from 36 neurons in rat inferior colliculus under three conditions: no injection, injection with saline, and injection with gamma-aminobutyric acid. The paired sounds that produced precedence effects were two identical 4-ms noise bursts, which were delivered contralaterally or ipsilaterally to the recording site. The normalized neural responses were measured as a function of different inter-stimulus delays and half-maximal interstimulus delays were acquired. Neuronal responses to the lagging sounds were weak when the inter-stimulus delay was short, but increased gradually as the delay was lengthened. Saline injection produced no changes in neural responses, but after local gamma-aminobutyric acid application, responses to the lagging stimulus were suppressed. Application of gamma-aminobutyric acid affected the normalized response to lagging sounds, independently of whether they or the paired sounds were contralateral or ipsilateral to the recording site. These observations suggest that local inhibition by gamma-aminobutyric acid in the rat inferior colliculus shapes the neural responses to lagging sounds, and modulates the precedence effect. PMID:25206830

  5. Sound source tracking device for telematic spatial sound field reproduction

    NASA Astrophysics Data System (ADS)

    Cardenas, Bruno

    This research describes an algorithm that localizes sound sources for use in telematic applications. The localization algorithm is based on amplitude differences between the various channels of a microphone array of directional shotgun microphones. The amplitude differences will be used to locate multiple performers and to reproduce their voices, which were recorded at close distance with lavalier microphones, spatially corrected using a loudspeaker rendering system. In order to track multiple sound sources in parallel, the information gained from the lavalier microphones will be utilized to estimate the signal-to-noise ratio between each performer and the concurrent performers.
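    Amplitude-difference localization over a ring of directional microphones can be sketched as picking the loudest channel and refining between its neighbours. The parabolic refinement and the cardioid gain pattern in the check below are illustrative assumptions, not the algorithm described here.

```python
import numpy as np

def localize_by_level(levels_db, mic_angles_deg):
    """Estimate source bearing from per-channel levels on a uniform ring:
    take the loudest microphone, then refine with a parabolic fit over
    its two neighbours' levels."""
    n = len(levels_db)
    i = int(np.argmax(levels_db))
    l, c0, r = levels_db[(i - 1) % n], levels_db[i], levels_db[(i + 1) % n]
    denom = l - 2.0 * c0 + r
    offset = 0.0 if denom == 0 else 0.5 * (l - r) / denom
    spacing = 360.0 / n
    return (mic_angles_deg[i] + offset * spacing) % 360.0

# Check with cardioid-patterned levels for a source at 100 degrees.
angles = np.arange(0.0, 360.0, 45.0)
gains = 0.5 + 0.5 * np.cos(np.radians(angles - 100.0))
levels = 20.0 * np.log10(gains + 1e-6)
est = localize_by_level(levels, angles)   # close to 100 degrees
```

    A per-channel SNR estimate, as the lavalier signals provide, would weight or gate these level readings when several performers are active at once.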

  6. Active localization of virtual sounds

    NASA Technical Reports Server (NTRS)

    Loomis, Jack M.; Hebert, C.; Cicinelli, J. G.

    1991-01-01

    We describe a virtual sound display built around a 12 MHz 80286 microcomputer and special purpose analog hardware. The display implements most of the primary cues for sound localization in the ear-level plane. Static information about direction is conveyed by interaural time differences and, for frequencies above 1800 Hz, by head sound shadow (interaural intensity differences) and pinna sound shadow. Static information about distance is conveyed by variation in sound pressure (first power law) for all frequencies, by additional attenuation in the higher frequencies (simulating atmospheric absorption), and by the proportion of direct to reverberant sound. When the user actively locomotes, the changing angular position of the source occasioned by head rotations provides further information about direction and the changing angular velocity produced by head translations (motion parallax) provides further information about distance. Judging both from informal observations by users and from objective data obtained in an experiment on homing to virtual and real sounds, we conclude that simple displays such as this are effective in creating the perception of external sounds to which subjects can home with accuracy and ease.
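    The static cues listed above have simple textbook forms. A sketch of two of them, using Woodworth's spherical-head approximation for interaural time difference and the first-power pressure law for distance; the display's analog hardware is not specified, so these are standard approximations, not its exact implementation:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c_m_s=343.0):
    """Interaural time difference for a source at the given azimuth,
    via Woodworth's spherical-head model: (a/c) * (theta + sin theta)."""
    th = math.radians(azimuth_deg)
    return (head_radius_m / c_m_s) * (th + math.sin(th))

def distance_attenuation_db(r_m, ref_m=1.0):
    """First-power pressure law: level drop relative to ref_m,
    about 6 dB per doubling of distance."""
    return 20.0 * math.log10(r_m / ref_m)
```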

  7. Hearing in alpacas (Vicugna pacos): audiogram, localization acuity, and use of binaural locus cues.

    PubMed

    Heffner, Rickye S; Koay, Gimseong; Heffner, Henry E

    2014-02-01

    Behavioral audiograms and sound localization abilities were determined for three alpacas (Vicugna pacos). Their hearing at a level of 60 dB sound pressure level (SPL) (re 20 μPa) extended from 40 Hz to 32.8 kHz, a range of 9.7 octaves. They were most sensitive at 8 kHz, with an average threshold of -0.5 dB SPL. The minimum audible angle around the midline for 100-ms broadband noise was 23°, indicating relatively poor localization acuity and potentially supporting the finding that animals with broad areas of best vision have poorer sound localization acuity. The alpacas were able to localize low-frequency pure tones, indicating that they can use the binaural phase cue, but they were unable to localize pure tones above the frequency of phase ambiguity, thus indicating complete inability to use the binaural intensity-difference cue. In contrast, the alpacas relied on their high-frequency hearing for pinna cues; they could discriminate front-back sound sources using 3-kHz high-pass noise, but not 3-kHz low-pass noise. These results are compared to those of other hoofed mammals and to mammals more generally.

  8. An Overview of the Major Phenomena of the Localization of Sound Sources by Normal-Hearing, Hearing-Impaired, and Aided Listeners

    PubMed Central

    2014-01-01

    Localizing a sound source requires the auditory system to determine its direction and its distance. In general, hearing-impaired listeners do less well in experiments measuring localization performance than normal-hearing listeners, and hearing aids often exacerbate matters. This article summarizes the major experimental effects in direction (and its underlying cues of interaural time differences and interaural level differences) and distance for normal-hearing, hearing-impaired, and aided listeners. Front/back errors and the importance of self-motion are noted. The influence of vision on the localization of real-world sounds is emphasized, such as through the ventriloquist effect or the intriguing link between spatial hearing and visual attention. PMID:25492094

  9. Developmental Changes in Locating Voice and Sound in Space

    PubMed Central

    Kezuka, Emiko; Amano, Sachiko; Reddy, Vasudevi

    2017-01-01

    We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment, the present study tested 35 infants at 3 ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter’s voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless, even at 4 months, two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult and was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (present only in the 7-month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. No advantage was found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year. PMID:28979220

  10. Salient sounds activate human visual cortex automatically

    PubMed Central

    McDonald, John J.; Störmer, Viola S.; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A.

    2013-01-01

    Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, the present study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2, 3, and 4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of co-localized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task. PMID:23699530

  11. How the owl tracks its prey – II

    PubMed Central

    Takahashi, Terry T.

    2010-01-01

    Barn owls can capture prey in pitch darkness or by diving into snow, while homing in on the sounds made by their prey. First, the neural mechanisms by which the barn owl localizes a single sound source in an otherwise quiet environment will be explained. The ideas developed for the single source case will then be expanded to environments in which there are multiple sound sources and echoes – environments that are challenging for humans with impaired hearing. Recent controversies regarding the mechanisms of sound localization will be discussed. Finally, the case in which both visual and auditory information are available to the owl will be considered. PMID:20889819

  12. A Spiking Neural Network Model of the Medial Superior Olive Using Spike Timing Dependent Plasticity for Sound Localization

    PubMed Central

    Glackin, Brendan; Wall, Julie A.; McGinnity, Thomas M.; Maguire, Liam P.; McDaid, Liam J.

    2010-01-01

    Sound localization can be defined as the ability to identify the position of an input sound source and is considered a powerful aspect of mammalian perception. For low-frequency sounds, i.e., in the range 270 Hz–1.5 kHz, the mammalian auditory pathway achieves this by extracting the Interaural Time Difference between the sound signals received by the left and right ears. This processing is performed in a region of the brain known as the Medial Superior Olive (MSO). This paper presents a Spiking Neural Network (SNN) based model of the MSO. The network model is trained with the Spike Timing Dependent Plasticity learning rule on experimentally observed Head Related Transfer Function data from an adult domestic cat. The results presented demonstrate how the proposed SNN model is able to perform sound localization with an accuracy of 91.82% when an error tolerance of ±10° is used. For angular resolutions down to 2.5°, it is demonstrated that software-based simulations of the model incur significant computation times. The paper thus also addresses preliminary implementation on a Field Programmable Gate Array-based hardware platform to accelerate system performance. PMID:20802855
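    The interaural-time-difference extraction that the MSO is thought to perform can be sketched as a Jeffress-style delay-line search; this is an illustrative conventional model, not the paper's SNN:

```python
import numpy as np

def best_itd(left, right, fs_hz, max_itd_s=0.0007):
    """Jeffress-style delay-line search: try each internal delay within
    the physiological range and return the one (in seconds) that
    maximizes coincidence between the left- and right-ear signals.
    Assumes left and right are equal-length sample arrays."""
    n = len(left)
    max_lag = int(max_itd_s * fs_hz)

    def coincidence(lag):
        if lag >= 0:
            return float(np.dot(left[lag:], right[:n - lag]))
        return float(np.dot(left[:n + lag], right[-lag:]))

    best = max(range(-max_lag, max_lag + 1), key=coincidence)
    return best / fs_hz
```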

  13. How Do Honeybees Attract Nestmates Using Waggle Dances in Dark and Noisy Hives?

    PubMed Central

    Hasegawa, Yuji; Ikeno, Hidetoshi

    2011-01-01

    It is well known that honeybees share information related to food sources with nestmates using a dance language that is representative of symbolic communication among non-primates. Some honeybee species engage in visually apparent behavior, walking in a figure-eight pattern inside their dark hives. It has been suggested that sounds play an important role in this dance language, even though a variety of wing-vibration sounds are produced by honeybee behaviors in hives. It has been shown that dances emit sounds primarily at about 250–300 Hz, which is in the same frequency range as honeybees' flight sounds. Thus the exact mechanism whereby honeybees attract nestmates using waggle dances in such a dark and noisy hive is as yet unclear. In this study, we used a flight simulator in which honeybees were attached to a torque meter in order to analyze the component of the bees' orienting response caused only by sounds, and not by odor or by vibrations sensed by their legs. In single-sound localization tests, we showed that honeybees preferred sounds around 265 Hz. Furthermore, in sound discrimination tests using sounds of the same frequency, honeybees preferred rhythmic sounds. Our results demonstrate that frequency and rhythmic components play complementary roles in localizing dance sounds. Dance sounds were presumably developed to share information in a dark and noisy environment. PMID:21603608

  14. Effect of Blast Injury on Auditory Localization in Military Service Members.

    PubMed

    Kubli, Lina R; Brungart, Douglas; Northern, Jerry

    Among the many advantages of binaural hearing are the abilities to localize sounds in space and to attend to one sound in the presence of many sounds. Binaural hearing provides benefits for all listeners, but it may be especially critical for military personnel who must maintain situational awareness in complex tactical environments with multiple speech and noise sources. There is concern that Military Service Members who have been exposed to one or more high-intensity blasts during their tour of duty may have difficulty with binaural and spatial ability due to degradation in auditory and cognitive processes. The primary objective of this study was to assess the ability of blast-exposed Military Service Members to localize speech sounds in quiet and in multisource environments with one or two competing talkers. Participants were presented with one, two, or three topic-related (e.g., sports, food, travel) sentences under headphones and required to attend to, and then locate the source of, the sentence pertaining to a prespecified target topic within a virtual space. The listener's head position was monitored by a head-mounted tracking device that continuously updated the apparent spatial location of the target and competing speech sounds as the subject turned within the virtual space. Measurements of auditory localization ability included mean absolute error in locating the source of the target sentence, the time it took to locate the target sentence within 30 degrees, target/competitor confusion errors, response time, and cumulative head motion. Twenty-one blast-exposed Active-Duty or Veteran Military Service Members (blast-exposed group) and 33 non-blast-exposed Service Members and beneficiaries (control group) were evaluated. In general, the blast-exposed group performed as well as the control group if the task involved localizing the source of a single speech target. However, if the task involved two or three simultaneous talkers, localization ability was compromised for some participants in the blast-exposed group. Blast-exposed participants were less accurate in their localization responses and required more exploratory head movements to find the location of the target talker. Results suggest that blast-exposed participants have more difficulty than non-blast-exposed participants in localizing sounds in complex acoustic environments. This apparent deficit in spatial hearing ability highlights the need to develop new diagnostic tests using complex listening tasks that involve multiple sound sources that require speech segregation and comprehension.

  15. Psychophysics and Neuronal Bases of Sound Localization in Humans

    PubMed Central

    Ahveninen, Jyrki; Kopco, Norbert; Jääskeläinen, Iiro P.

    2013-01-01

    Localization of sound sources is a considerable computational challenge for the human brain. Whereas the visual system can process basic spatial information in parallel, the auditory system lacks a straightforward correspondence between external spatial locations and sensory receptive fields. Consequently, the question of how different acoustic features supporting spatial hearing are represented in the central nervous system is still open. Functional neuroimaging studies in humans have provided evidence for a posterior auditory “where” pathway that encompasses non-primary auditory cortex areas, including the planum temporale (PT) and posterior superior temporal gyrus (STG), which are strongly activated by horizontal sound direction changes, distance changes, and movement. However, these areas are also activated by a wide variety of other stimulus features, posing a challenge for the interpretation that the underlying areas are purely spatial. This review discusses behavioral and neuroimaging studies on sound localization, and some of the competing models of representation of auditory space in humans. PMID:23886698

  16. Approaches to the study of neural coding of sound source location and sound envelope in real environments

    PubMed Central

    Kuwada, Shigeyuki; Bishop, Brian; Kim, Duck O.

    2012-01-01

    The major functions of the auditory system are recognition (what is the sound) and localization (where is the sound). Although each of these has received considerable attention, rarely are they studied in combination. Furthermore, the stimuli used in the bulk of studies did not represent sound location in real environments and ignored the effects of reverberation. Another ignored dimension is the distance of a sound source. Finally, there is a scarcity of studies conducted in unanesthetized animals. We illustrate a set of efficient methods that overcome these shortcomings. We use the virtual auditory space (VAS) method to efficiently present sounds at different azimuths, different distances, and in different environments. Additionally, this method allows for efficient switching between binaural and monaural stimulation and alteration of acoustic cues singly or in combination to elucidate neural mechanisms underlying localization and recognition. Such procedures cannot be performed with real sound field stimulation. Our research is designed to address the following questions: Are IC neurons specialized to process “what” and “where” auditory information? How do reverberation and distance of the sound source affect this processing? How do IC neurons represent sound source distance? Are the neural mechanisms underlying envelope processing binaural or monaural? PMID:22754505

  17. Sound-direction identification with bilateral cochlear implants.

    PubMed

    Neuman, Arlene C; Haravon, Anita; Sislian, Nicole; Waltzman, Susan B

    2007-02-01

    The purpose of this study was to compare the accuracy of sound-direction identification in the horizontal plane by bilateral cochlear implant users when localization was measured with pink noise and with speech stimuli. Eight adults who were bilateral users of Nucleus 24 Contour devices participated in the study. All had received implants in both ears in a single surgery. Sound-direction identification was measured in a large classroom by using a nine-loudspeaker array. Localization was tested in three listening conditions (bilateral cochlear implants, left cochlear implant, and right cochlear implant), using two different stimuli (a speech stimulus and pink noise bursts) in a repeated-measures design. Sound-direction identification accuracy was significantly better when using two implants than when using a single implant. The mean root-mean-square error was 29 degrees for the bilateral condition, 54 degrees for the left cochlear implant, and 46.5 degrees for the right cochlear implant condition. Unilateral accuracy was similar for right cochlear implant and left cochlear implant performance. Sound-direction identification performance was similar for speech and pink noise stimuli. The data obtained in this study add to the growing body of evidence that sound-direction identification with bilateral cochlear implants is better than with a single implant. The similarity in localization performance obtained with the speech and pink noise supports the use of either stimulus for measuring sound-direction identification.
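    The root-mean-square error statistic reported above summarizes per-trial localization accuracy; a minimal sketch of its computation from response and target angles:

```python
import math

def rms_error_deg(responses_deg, targets_deg):
    """Root-mean-square localization error in degrees: square each
    response-minus-target error, average, and take the square root."""
    errs = [(r - t) ** 2 for r, t in zip(responses_deg, targets_deg)]
    return math.sqrt(sum(errs) / len(errs))
```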

  18. Gravitoinertial force magnitude and direction influence head-centric auditory localization

    NASA Technical Reports Server (NTRS)

    DiZio, P.; Held, R.; Lackner, J. R.; Shinn-Cunningham, B.; Durlach, N.

    2001-01-01

    We measured the influence of gravitoinertial force (GIF) magnitude and direction on head-centric auditory localization to determine whether a true audiogravic illusion exists. In experiment 1, supine subjects adjusted computer-generated dichotic stimuli until they heard a fused sound straight ahead in the midsagittal plane of the head under a variety of GIF conditions generated in a slow-rotation room. The dichotic stimuli were constructed by convolving broadband noise with head-related transfer function pairs that model the acoustic filtering at the listener's ears. These stimuli give rise to the perception of externally localized sounds. When the GIF was increased from 1 to 2 g and rotated 60 degrees rightward relative to the head and body, subjects on average set an acoustic stimulus 7.3 degrees right of their head's median plane to hear it as straight ahead. When the GIF was doubled and rotated 60 degrees leftward, subjects set the sound 6.8 degrees leftward of baseline values to hear it as centered. In experiment 2, increasing the GIF in the median plane of the supine body to 2 g did not influence auditory localization. In experiment 3, tilts up to 75 degrees of the supine body relative to the normal 1 g GIF led to small shifts, 1-2 degrees, of auditory setting toward the up ear to maintain a head-centered sound localization. These results show that head-centric auditory localization is affected by azimuthal rotation and increase in magnitude of the GIF and demonstrate that an audiogravic illusion exists. Sound localization is shifted in the direction opposite GIF rotation by an amount related to the magnitude of the GIF and its angular deviation relative to the median plane.
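    The dichotic stimuli of experiment 1 are built by convolving one noise token with a left/right head-related impulse response pair. A minimal sketch; the function and argument names are illustrative, not the authors' code:

```python
import numpy as np

def dichotic_stimulus(noise, hrir_left, hrir_right):
    """Render an externally localized dichotic stimulus by convolving
    one broadband noise token with a left/right head-related impulse
    response (HRIR) pair, i.e. the time-domain form of an HRTF pair."""
    return (np.convolve(noise, hrir_left),
            np.convolve(noise, hrir_right))
```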

  19. 77 FR 23119 - Annual Marine Events in the Eighth Coast Guard District, Smoking the Sound; Biloxi Ship Channel...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-18

    ... Marine Events in the Eighth Coast Guard District, Smoking the Sound; Biloxi Ship Channel; Biloxi, MS... enforce Special Local Regulations for the Smoking the Sound boat races in the Biloxi Ship Channel, Biloxi... during the Smoking the Sound boat races. During the enforcement period, entry into, transiting or...

  20. Investigation of spherical loudspeaker arrays for local active control of sound.

    PubMed

    Peleg, Tomer; Rafaely, Boaz

    2011-10-01

    Active control of sound can be employed globally to reduce noise levels in an entire enclosure, or locally around a listener's head. Recently, spherical loudspeaker arrays have been studied as multiple-channel sources for local active control of sound, presenting the fundamental theory and several active control configurations. In this paper, important aspects of using a spherical loudspeaker array for local active control of sound are further investigated. First, the feasibility of creating sphere-shaped quiet zones away from the source is studied both theoretically and numerically, showing that these quiet zones are associated with sound amplification and poor system robustness. To mitigate the latter, the design of shell-shaped quiet zones around the source is investigated. A combination of two spherical sources is then studied with the aim of enlarging the quiet zone. The two sources are employed to generate quiet zones that surround a rigid sphere, investigating the application of active control around a listener's head. A significant improvement in performance is demonstrated in this case over a conventional headrest-type system that uses two monopole secondary sources. Finally, several simulations are presented to support the theoretical work and to demonstrate the performance and limitations of the system. © 2011 Acoustical Society of America
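    For comparison with the monopole-based headrest system mentioned above, the single-point cancellation idea can be sketched with free-field monopoles; this is a textbook model, not the paper's spherical-array formulation:

```python
import numpy as np

def cancelling_source_strength(r_primary_m, r_secondary_m, k_rad_m):
    """Complex strength of a secondary monopole that nulls the field of
    a unit primary monopole at one observation point, given the two
    source-to-point distances and the wavenumber k."""
    def monopole(r):
        # Free-field pressure of a unit monopole at distance r
        return np.exp(-1j * k_rad_m * r) / (4.0 * np.pi * r)

    # Require monopole(r_primary) + q * monopole(r_secondary) = 0
    return -monopole(r_primary_m) / monopole(r_secondary_m)
```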

  1. Ray-based acoustic localization of cavitation in a highly reverberant environment.

    PubMed

    Chang, Natasha A; Dowling, David R

    2009-05-01

    Acoustic detection and localization of cavitation have inherent advantages over optical techniques because cavitation bubbles are natural sound sources, and acoustic transduction of cavitation sounds does not require optical access to the region of cavitating flow. In particular, near cavitation inception, cavitation bubbles may be visually small and occur infrequently, but may still emit audible sound pulses. In this investigation, direct-path acoustic recordings of cavitation events are made with 16 hydrophones mounted on the periphery of a water tunnel test section containing a low-cavitation-event-rate vortical flow. These recordings are used to localize the events in three dimensions via cross correlations to obtain arrival time differences. Here, bubble localization is hindered by reverberation, background noise, and the fact that both the pulse emission time and waveform are unknown. These hindrances are partially mitigated by a signal-processing scheme that incorporates straight-ray acoustic propagation and Monte-Carlo techniques for compensating ray-path, sound-speed, and hydrophone-location uncertainties. The acoustic localization results are compared to simultaneous optical localization results from dual-camera high-speed digital-video recordings. For 53 bubbles and a peak signal-to-noise-ratio frequency of 6.7 kHz, the root-mean-square spatial difference between optical and acoustic bubble location results was 1.94 cm. Parametric dependences in acoustic localization performance are also presented.
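    The core arrival-time-difference step can be sketched with a plain cross-correlation between two hydrophone channels; the paper's full method additionally compensates ray-path, sound-speed, and hydrophone-location uncertainties, which this sketch omits:

```python
import numpy as np

def tdoa_seconds(sig_a, sig_b, fs_hz):
    """Arrival-time difference between two channels from the peak of
    their cross-correlation. Positive result: sig_a arrives later than
    sig_b. Works even though the emitted pulse time and waveform are
    unknown, because the common waveform cancels out of the lag."""
    xc = np.correlate(sig_a, sig_b, mode="full")
    lag = int(np.argmax(xc)) - (len(sig_b) - 1)
    return lag / fs_hz
```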

  2. Atypical vertical sound localization and sound-onset sensitivity in people with autism spectrum disorders.

    PubMed

    Visser, Eelke; Zwiers, Marcel P; Kan, Cornelis C; Hoekstra, Liesbeth; van Opstal, A John; Buitelaar, Jan K

    2013-11-01

    Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs.

  3. Effect of sound level on virtual and free-field localization of brief sounds in the anterior median plane.

    PubMed

    Marmel, Frederic; Marrufo-Pérez, Miriam I; Heeren, Jan; Ewert, Stephan; Lopez-Poveda, Enrique A

    2018-06-14

    The detection of high-frequency spectral notches has been shown to be worse at 70-80 dB sound pressure level (SPL) than at higher levels up to 100 dB SPL. The performance improvement at levels higher than 70-80 dB SPL has been related to an 'ideal observer' comparison of population auditory nerve spike trains to stimuli with and without high-frequency spectral notches. Insofar as vertical localization partly relies on information provided by pinna-based high-frequency spectral notches, we hypothesized that localization would be worse at 70-80 dB SPL than at higher levels. Results from a first experiment using a virtual localization set-up and non-individualized head-related transfer functions (HRTFs) were consistent with this hypothesis, but a second experiment using a free-field set-up showed that vertical localization deteriorates monotonically with increasing level up to 100 dB SPL. These results suggest that listeners use different cues when localizing sound sources in virtual and free-field conditions. In addition, they confirm that the worsening in vertical localization with increasing level continues beyond 70-80 dB SPL, the highest levels tested by previous studies. Further, they suggest that vertical localization, unlike high-frequency spectral notch detection, does not rely on an 'ideal observer' analysis of auditory nerve spike trains. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Localization of virtual sound at 4 Gz.

    PubMed

    Sandor, Patrick M B; McAnally, Ken I; Pellieux, Lionel; Martin, Russell L

    2005-02-01

    Acceleration directed along the body's z-axis (Gz) leads to misperception of the elevation of visual objects (the "elevator illusion"), most probably as a result of errors in the transformation from eye-centered to head-centered coordinates. We have investigated whether the location of sound sources is misperceived under increased Gz. Visually guided localization responses were made, using a remotely controlled laser pointer, to virtual auditory targets under conditions of 1 and 4 Gz induced in a human centrifuge. As these responses would be expected to be affected by the elevator illusion, we also measured the effect of Gz on the accuracy with which subjects could point to the horizon. Horizon judgments were lower at 4 Gz than at 1 Gz, so sound localization responses at 4 Gz were corrected for this error in the transformation from eye-centered to head-centered coordinates. We found that the accuracy and bias of sound localization are not significantly affected by increased Gz. The auditory modality is likely to provide a reliable means of conveying spatial information to operators in dynamic environments in which Gz can vary.

  5. Influence of double stimulation on sound-localization behavior in barn owls.

    PubMed

    Kettler, Lutz; Wagner, Hermann

    2014-12-01

    Barn owls do not immediately approach a source after they hear a sound, but wait for a second sound before they strike. This represents a gain in striking behavior by avoiding responses to random incidents. However, the first stimulus is also expected to change the threshold for perceiving the subsequent second sound, thus possibly introducing some costs. We mimicked this situation in a behavioral double-stimulus paradigm utilizing saccadic head turns of owls. The first stimulus served as an adapter, was presented in frontal space, and did not elicit a head turn. The second stimulus, emitted from a peripheral source, elicited the head turn. The time interval between both stimuli was varied. Data obtained with double stimulation were compared with data collected with a single stimulus from the same positions as the second stimulus in the double-stimulus paradigm. Sound-localization performance was quantified by the response latency, accuracy, and precision of the head turns. Response latency was increased with double stimuli, while accuracy and precision were decreased. The effect depended on the inter-stimulus interval. These results suggest that waiting for a second stimulus may indeed impose costs on sound localization by adaptation and this reduces the gain obtained by waiting for a second stimulus.

  6. Modeling of reverberant room responses for two-dimensional spatial sound field analysis and synthesis.

    PubMed

    Bai, Mingsian R; Li, Yi; Chiang, Yi-Hao

    2017-10-01

    A unified framework is proposed for the analysis and synthesis of two-dimensional spatial sound fields in reverberant environments. In the sound field analysis (SFA) phase, an unbaffled 24-element circular microphone array is utilized to encode the sound field based on plane-wave decomposition. Depending on the sparsity of the sound sources, the SFA stage can be implemented in two manners. For sparse-source scenarios, a one-stage algorithm based on compressive sensing is utilized. Alternatively, a two-stage algorithm can be used, where the minimum power distortionless response beamformer is used to localize the sources and a Tikhonov regularization algorithm is used to extract the source amplitudes. In the sound field synthesis (SFS), a 32-element rectangular loudspeaker array is employed to decode the target sound field using a pressure-matching technique. To establish the room response model, as required in the pressure-matching step of the SFS phase, an SFA technique for nonsparse-source scenarios is utilized. The choice of regularization parameters is vital to the reproduced sound field. In the SFS phase, three SFS approaches are compared in terms of localization performance and voice reproduction quality. Experimental results obtained in a reverberant room are presented and reveal that an accurate room response model is vital to immersive rendering of the reproduced sound field.
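    The Tikhonov-regularized pressure-matching step reduces to a damped least-squares solve. A sketch under stated assumptions (the matrix names and regularization value are mine, not the paper's):

```python
import numpy as np

def pressure_matching_weights(G, p_target, reg=1e-2):
    """Tikhonov-regularized least-squares loudspeaker weights.
    G: (control points x loudspeakers) transfer matrix from the room
    response model; p_target: desired pressures at the control points.
    Solves (G^H G + reg*I) w = G^H p, where the regularizer trades
    reproduction accuracy against array effort and robustness."""
    n_speakers = G.shape[1]
    A = G.conj().T @ G + reg * np.eye(n_speakers)
    return np.linalg.solve(A, G.conj().T @ p_target)
```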

  7. First and second sound in a strongly interacting Fermi gas

    NASA Astrophysics Data System (ADS)

    Taylor, E.; Hu, H.; Liu, X.-J.; Pitaevskii, L. P.; Griffin, A.; Stringari, S.

    2009-11-01

    Using a variational approach, we solve the equations of two-fluid hydrodynamics for a uniform and trapped Fermi gas at unitarity. In the uniform case, we find that the first and second sound modes are remarkably similar to those in superfluid helium, a consequence of strong interactions. In the presence of harmonic trapping, first and second sound become degenerate at certain temperatures. At these points, second sound hybridizes with first sound and is strongly coupled with density fluctuations, giving a promising way of observing second sound. We also discuss the possibility of exciting second sound by generating local heat perturbations.

  8. NASA sounding rockets, 1958 - 1968: A historical summary

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1971-01-01

    The development and use of sounding rockets is traced from the Wac Corporal through the present generation of rockets. The Goddard Space Flight Center Sounding Rocket Program is discussed, and the use of sounding rockets during the IGY and the 1960's is described. Advantages of sounding rockets are identified as their vehicle and payload simplicity, low costs, payload recoverability, geographic flexibility, and temporal flexibility. The disadvantages are restricted time of observation, localized coverage, and payload limitations. Descriptions of major sounding rockets, trends in vehicle usage, and a compendium of NASA sounding rocket firings are also included.

  9. The Sound Patterns of Camuno: Description and Explanation in Evolutionary Phonology

    ERIC Educational Resources Information Center

    Cresci, Michela

    2014-01-01

    This dissertation presents a linguistic study of the sound patterns of Camuno framed within Evolutionary Phonology (Blevins, 2004, 2006, to appear). Camuno is a variety of Eastern Lombard, a Romance language of northern Italy, spoken in Valcamonica. Camuno is not a local variety of Italian, but a sister of Italian, a local divergent development of…

  10. Energy Flux in the Cochlea: Evidence Against Power Amplification of the Traveling Wave.

    PubMed

    van der Heijden, Marcel; Versteegh, Corstiaen P C

    2015-10-01

    Traveling waves in the inner ear exhibit an amplitude peak that shifts with frequency. The peaking is commonly believed to rely on motile processes that amplify the wave by inserting energy. We recorded the vibrations at adjacent positions on the basilar membrane in sensitive gerbil cochleae and tested the putative power amplification in two ways. First, we determined the energy flux of the traveling wave at its peak and compared it to the acoustic power entering the ear, thereby obtaining the net cochlear power gain. For soft sounds, the energy flux at the peak was 1 ± 0.6 dB less than the middle ear input power. For more intense sounds, increasingly smaller fractions of the acoustic power actually reached the peak region. Thus, we found no net power amplification of soft sounds and a strong net attenuation of intense sounds. Second, we analyzed local wave propagation on the basilar membrane. We found that the waves slowed down abruptly when approaching their peak, causing an energy densification that quantitatively matched the amplitude peaking, similar to the growth of sea waves approaching the beach. Thus, we found no local power amplification of soft sounds and strong local attenuation of intense sounds. The most parsimonious interpretation of these findings is that cochlear sensitivity is not realized by amplifying acoustic energy, but by spatially focusing it, and that dynamic compression is realized by adjusting the amount of dissipation to sound intensity.
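    The densification argument above follows from conservation of energy flux: if the flux J = E · v_g is constant while the group velocity drops, the energy density must rise and the amplitude grows as the square root of it. A toy calculation with invented round numbers (not the paper's measured values) illustrates the mechanism:

    ```python
    # Toy illustration of energy densification in a slowing wave:
    # conserved flux J = E * v_g, so E rises as v_g drops, and the
    # amplitude scales as sqrt(E). All numbers are invented.
    J = 1.0                        # conserved energy flux (arbitrary units)
    v_base, v_peak = 10.0, 0.4     # wave slows 25-fold approaching its peak (illustrative)

    E_base = J / v_base            # energy density away from the peak
    E_peak = J / v_peak            # energy density at the peak
    amp_gain = (E_peak / E_base) ** 0.5   # amplitude gain from slowing alone
    ```

    Here a 25-fold slowdown alone yields a 5-fold amplitude peak with no energy injected, which is the sea-waves-at-the-beach analogy in the abstract.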

  11. Light-induced vibration in the hearing organ

    PubMed Central

    Ren, Tianying; He, Wenxuan; Li, Yizeng; Grosh, Karl; Fridberger, Anders

    2014-01-01

    The exceptional sensitivity of mammalian hearing organs is attributed to an active process, in which force produced by sensory cells boosts sound-induced vibrations, making soft sounds audible. This process is thought to be local, with each section of the hearing organ capable of amplifying sound-evoked movement, and nearly instantaneous, since amplification can work for sounds at frequencies up to 100 kHz in some species. To test these fundamental precepts, we developed a method for focally stimulating the living hearing organ with light. Light pulses caused intense and highly damped mechanical responses followed by traveling waves that developed with considerable delay. The delayed response was identical to movements evoked by click-like sounds. This shows that the active process is neither local nor instantaneous, but requires mechanical waves traveling from the cochlear base toward its apex. A physiologically based mathematical model shows that such waves engage the active process, enhancing hearing sensitivity. PMID:25087606

  12. Sound source localization inspired by the ears of the Ormia ochracea

    NASA Astrophysics Data System (ADS)

    Kuntzman, Michael L.; Hall, Neal A.

    2014-07-01

    The parasitoid fly Ormia ochracea has the remarkable ability to locate crickets using audible sound. This ability is remarkable because the fly's hearing mechanism spans only 1.5 mm, roughly 50× smaller than the wavelength of the sound emitted by the cricket. The hearing mechanism is, for all practical purposes, a point in space, with no significant interaural time or level differences to draw on. It has been discovered that evolution has equipped the fly with a hearing mechanism that utilizes multiple vibration modes to amplify interaural time and level differences. Here, we present a fully integrated, man-made mimic of the Ormia's hearing mechanism capable of replicating the remarkable sound localization ability of this specialized fly. A silicon-micromachined prototype is presented which uses multiple piezoelectric sensing ports to simultaneously transduce two orthogonal vibration modes of the sensing structure, thereby enabling simultaneous measurement of sound pressure and pressure gradient.
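    The scale problem the fly solves can be checked with back-of-envelope acoustics. The values below are standard textbook numbers (the human interaural distance of 21.5 cm is an assumption, not a figure from this work); only the 1.5 mm ear span and the 5 kHz cricket song come from the text:

    ```python
    # Why a 1.5 mm interaural distance yields almost no usable time cue.
    c = 343.0              # speed of sound in air, m/s
    d_fly = 1.5e-3         # Ormia ear separation, m (from the text)
    d_human = 0.215        # assumed human interaural distance, m

    itd_fly = d_fly / c        # maximum ITD, source at 90 degrees: a few microseconds
    itd_human = d_human / c    # roughly two orders of magnitude larger
    ratio = (c / 5000.0) / d_fly   # cricket-song wavelength (5 kHz) vs. ear span
    ```

    The maximum available ITD for the fly is about 4 microseconds, versus over 600 microseconds for a human head, which is why mechanical amplification of the cue is needed.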

  13. Atypical vertical sound localization and sound-onset sensitivity in people with autism spectrum disorders

    PubMed Central

    Visser, Eelke; Zwiers, Marcel P.; Kan, Cornelis C.; Hoekstra, Liesbeth; van Opstal, A. John; Buitelaar, Jan K.

    2013-01-01

    Background Autism spectrum disorders (ASDs) are associated with auditory hyper- or hyposensitivity; atypicalities in central auditory processes, such as speech-processing and selective auditory attention; and neural connectivity deficits. We sought to investigate whether the low-level integrative processes underlying sound localization and spatial discrimination are affected in ASDs. Methods We performed 3 behavioural experiments to probe different connecting neural pathways: 1) horizontal and vertical localization of auditory stimuli in a noisy background, 2) vertical localization of repetitive frequency sweeps and 3) discrimination of horizontally separated sound stimuli with a short onset difference (precedence effect). Results Ten adult participants with ASDs and 10 healthy control listeners participated in experiments 1 and 3; sample sizes for experiment 2 were 18 adults with ASDs and 19 controls. Horizontal localization was unaffected, but vertical localization performance was significantly worse in participants with ASDs. The temporal window for the precedence effect was shorter in participants with ASDs than in controls. Limitations The study was performed with adult participants and hence does not provide insight into the developmental aspects of auditory processing in individuals with ASDs. Conclusion Changes in low-level auditory processing could underlie degraded performance in vertical localization, which would be in agreement with recently reported changes in the neuroanatomy of the auditory brainstem in individuals with ASDs. The results are further discussed in the context of theories about abnormal brain connectivity in individuals with ASDs. PMID:24148845

  14. Better protection from blasts without sacrificing situational awareness.

    PubMed

    Killion, Mead C; Monroe, Tim; Drambarean, Viorel

    2011-03-01

    A large number of soldiers returning from war report hearing loss and/or tinnitus. Many deployed soldiers decline to wear their hearing protection devices (HPDs) because they feel that earplugs interfere with their ability to detect and localize the enemy and their friends. The detection problem is easily handled in electronic devices with low-noise microphones; the localization problem is not as easy. In this paper, the factors that reduce situational awareness--hearing loss and restricted bandwidth in HPDs--are discussed in light of available data, followed by a review of the cues to localization. Two electronic blast-plug earplugs with 16-kHz bandwidth are described. Both provide subjectively transparent sound with regard to sound quality and localization, i.e., they sound almost as if nothing is in the ears, while protecting the ears from blasts. Finally, two formal experiments are described that compared localization performance with that of popular existing military HPDs and the open ear. The tested earplugs performed well in maintaining situational awareness. Detection-distance and acceptance studies are underway.

  15. Method for creating an aeronautic sound shield having gas distributors arranged on the engines, wings, and nose of an aircraft

    NASA Technical Reports Server (NTRS)

    Corda, Stephen (Inventor); Smith, Mark Stephen (Inventor); Myre, David Daniel (Inventor)

    2008-01-01

    The present invention blocks and/or attenuates the upstream travel of acoustic disturbances or sound waves from a flight vehicle or components of a flight vehicle traveling at subsonic speed using a local injection of a high molecular weight gas. Additional benefit may also be obtained by lowering the temperature of the gas. Preferably, the invention has a means of distributing the high molecular weight gas from the nose, wing, component, or other structure of the flight vehicle into the upstream or surrounding air flow. Two techniques for distribution are direct gas injection and sublimation of the high molecular weight solid material from the vehicle surface. The high molecular weight and low temperature of the gas significantly decreases the local speed of sound such that a localized region of supersonic flow and possibly shock waves are formed, preventing the upstream travel of sound waves from the flight vehicle.
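    The physical premise of the invention is the ideal-gas relation a = sqrt(γRT/M): the local speed of sound falls with higher molar mass and lower temperature. A minimal check follows; SF6 is used purely as an illustrative heavy gas, since the patent text does not name a specific gas:

    ```python
    import math

    # Ideal-gas speed of sound: a = sqrt(gamma * R * T / M).
    # Higher molar mass M or lower temperature T lowers a, which is the
    # mechanism the invention exploits. Gas choice here is illustrative.
    R = 8.314  # universal gas constant, J/(mol K)

    def speed_of_sound(gamma, molar_mass_kg, temp_k):
        """Ideal-gas speed of sound in m/s."""
        return math.sqrt(gamma * R * temp_k / molar_mass_kg)

    a_air = speed_of_sound(1.40, 0.0290, 293.0)   # air at ~20 C
    a_sf6 = speed_of_sound(1.09, 0.1460, 293.0)   # SF6 at the same temperature
    ```

    A heavy gas like SF6 cuts the local speed of sound from roughly 343 m/s to about 135 m/s, so flow that is subsonic in air becomes locally supersonic in the injected layer, blocking upstream travel of sound waves.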

  16. Sound localization and word discrimination in reverberant environment in children with developmental dyslexia.

    PubMed

    Castro-Camacho, Wendy; Peñaloza-López, Yolanda; Pérez-Ruiz, Santiago J; García-Pedroza, Felipe; Padilla-Ortiz, Ana L; Poblano, Adrián; Villarruel-Rivas, Concepción; Romero-Díaz, Alfredo; Careaga-Olvera, Aidé

    2015-04-01

    The aim was to compare whether localization of sounds and word discrimination in a reverberant environment differ between children with dyslexia and controls. We studied 30 children with dyslexia and 30 controls. Sound and word localization and discrimination were studied at five angles across the left and right auditory fields (-90°, -45°, 0°, +45°, +90°), under reverberant and non-reverberant conditions, and correct answers were compared. Spatial localization of words in the non-reverberant test was deficient in children with dyslexia at 0° and +90°. Spatial localization in the reverberant test was altered in children with dyslexia at all angles except -90°. Word discrimination in the non-reverberant test was poor in children with dyslexia at left angles. In the reverberant test, children with dyslexia exhibited deficiencies at the -45°, -90°, and +45° angles. Children with dyslexia may thus have problems localizing sounds and discriminating words at extreme locations of the horizontal plane in reverberant classrooms.

  17. The Radio Plasma Imager Investigation on the IMAGE Spacecraft

    NASA Technical Reports Server (NTRS)

    Reinisch, Bodo W.; Haines, D. M.; Bibl, K.; Cheney, G.; Galkin, I. A.; Huang, X.; Myers, S. H.; Sales, G. S.; Benson, R. F.; Fung, S. F.

    1999-01-01

    Radio plasma imaging uses total reflection of electromagnetic waves from plasmas whose plasma frequencies equal the radio sounding frequency and whose electron density gradients are parallel to the wave normals. The Radio Plasma Imager (RPI) has two orthogonal 500-m long dipole antennas in the spin plane for near omni-directional transmission. The third antenna is a 20-m dipole. Echoes from the magnetopause, plasmasphere and cusp will be received with three orthogonal antennas, allowing the determination of their angle-of-arrival. Thus it will be possible to create image fragments of the reflecting density structures. The instrument can execute a large variety of programmable measuring programs operating at frequencies between 3 kHz and 3 MHz. Tuning of the transmit antennas provides optimum power transfer from the 10 W transmitter to the antennas. The instrument can operate in three active sounding modes: (1) remote sounding to probe magnetospheric boundaries, (2) local (relaxation) sounding to probe the local plasma, and (3) whistler stimulation sounding. In addition, there is a passive mode to record natural emissions, and to determine the local electron density and temperature by using a thermal noise spectroscopy technique.
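    The sounding principle stated above, total reflection where the wave frequency equals the local plasma frequency, fixes the density range the instrument can probe. The relation below is the standard one; the derived density bounds are inferred here from the 3 kHz-3 MHz range and are not figures quoted in the text:

    ```python
    # Standard plasma-frequency relation: f_p [Hz] ~ 8980 * sqrt(n_e [cm^-3]),
    # inverted to give the electron density reflecting a given sounding frequency.
    def electron_density(f_hz):
        """Electron density (cm^-3) whose plasma frequency equals f_hz."""
        return (f_hz / 8980.0) ** 2

    n_low = electron_density(3e3)    # lowest sounding frequency: ~0.1 cm^-3
    n_high = electron_density(3e6)   # highest sounding frequency: ~1.1e5 cm^-3
    ```

    The 3 kHz to 3 MHz sweep thus spans about six decades of electron density, covering both the tenuous outer magnetosphere and the denser plasmasphere.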

  18. Radio Sounding Science at High Powers

    NASA Technical Reports Server (NTRS)

    Green, J. L.; Reinisch, B. W.; Song, P.; Fung, S. F.; Benson, R. F.; Taylor, W. W. L.; Cooper, J. F.; Garcia, L.; Markus, T.; Gallagher, D. L.

    2004-01-01

    Future space missions like the Jupiter Icy Moons Orbiter (JIMO), planned to orbit Callisto, Ganymede, and Europa, could fully utilize a variable-power radio sounder instrument. Radio sounding at 1 kHz to 10 MHz at medium power levels (10 W to kW levels) will provide long-range magnetospheric sounding (several Jovian radii) like that first pioneered by the radio plasma imager instrument on IMAGE at low power (less than 10 W) and much shorter distances (less than 5 R(sub E)). A radio sounder orbiting a Jovian icy moon would be able to globally measure time-variable electron densities in the moon's ionosphere and the local magnetospheric environment. Near-spacecraft resonances and guided echoes respectively allow measurements of local field magnitude and local field-line geometry, perturbed both by direct magnetospheric interactions and by induced components from subsurface oceans. JIMO would allow radio sounding transmissions at much higher powers (approx. 10 kW), making subsurface sounding of the Jovian icy moons possible at frequencies above the ionospheric peak plasma frequency. Subsurface variations in dielectric properties can be probed to detect dense and solid-liquid phase boundaries associated with oceans and related structures in overlying ice crusts.

  19. Degradation of Auditory Localization Performance Due to Helmet Ear Coverage: The Effects of Normal Acoustic Reverberation

    DTIC Science & Technology

    2009-07-01

    Therefore, it’s safe to assume that most large errors are due to front-back confusions. Front-back confusions occur in part because the binaural ...two ear) cues that dominate sound localization do not distinguish the front and rear hemispheres. The two binaural cues relied on are interaural...121 (5), 3094–3094. Shinn-Cunningham, B. G.; Kopčo, N.; Martin, T. J. Localizing Nearby Sound Sources in a Classroom: Binaural Room Impulse

  20. 33 CFR 100.1308 - Special Local Regulation; Hydroplane Races within the Captain of the Port Puget Sound Area of...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...; Hydroplane Races within the Captain of the Port Puget Sound Area of Responsibility. 100.1308 Section 100.1308... SAFETY OF LIFE ON NAVIGABLE WATERS § 100.1308 Special Local Regulation; Hydroplane Races within the... race areas for the purpose of reoccurring hydroplane races: (1) Dyes Inlet. West of Port Orchard, WA to...

  1. 33 CFR 100.1308 - Special Local Regulation; Hydroplane Races within the Captain of the Port Puget Sound Area of...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...; Hydroplane Races within the Captain of the Port Puget Sound Area of Responsibility. 100.1308 Section 100.1308... SAFETY OF LIFE ON NAVIGABLE WATERS § 100.1308 Special Local Regulation; Hydroplane Races within the... race areas for the purpose of reoccurring hydroplane races: (1) Dyes Inlet. West of Port Orchard, WA to...

  2. 33 CFR 100.1308 - Special Local Regulation; Hydroplane Races within the Captain of the Port Puget Sound Area of...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...; Hydroplane Races within the Captain of the Port Puget Sound Area of Responsibility. 100.1308 Section 100.1308... SAFETY OF LIFE ON NAVIGABLE WATERS § 100.1308 Special Local Regulation; Hydroplane Races within the... race areas for the purpose of reoccurring hydroplane races: (1) Dyes Inlet. West of Port Orchard, WA to...

  3. 33 CFR 100.1308 - Special Local Regulation; Hydroplane Races within the Captain of the Port Puget Sound Area of...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...; Hydroplane Races within the Captain of the Port Puget Sound Area of Responsibility. 100.1308 Section 100.1308... SAFETY OF LIFE ON NAVIGABLE WATERS § 100.1308 Special Local Regulation; Hydroplane Races within the... race areas for the purpose of reoccurring hydroplane races: (1) Dyes Inlet. West of Port Orchard, WA to...

  4. Full Spatial Resolution Infrared Sounding Application in the Preconvection Environment

    NASA Astrophysics Data System (ADS)

    Liu, C.; Liu, G.; Lin, T.

    2013-12-01

    Advanced infrared (IR) sounders such as the Atmospheric Infrared Sounder (AIRS) and Infrared Atmospheric Sounding Interferometer (IASI) provide atmospheric temperature and moisture profiles with high vertical resolution and high accuracy in preconvection environments. Derived atmospheric stability indices such as convective available potential energy (CAPE) and the lifted index (LI) from advanced IR soundings can provide critical information 1-6 h before the development of severe convective storms. Three convective storms are selected to evaluate the application of AIRS full-spatial-resolution soundings and derived products to providing warning information in preconvection environments. In the first case, the AIRS full-spatial-resolution soundings revealed locally extreme atmospheric instability 3 h ahead of convection on the leading edge of a frontal system, while the second case demonstrates that extreme atmospheric instability is associated with the local development of a severe thunderstorm in the following hours. The third case is a local severe storm that occurred on 7-8 August 2010 in Zhou Qu, China, which caused more than 1400 deaths and left another 300 or more people missing. The AIRS full-spatial-resolution LI product shows the atmospheric instability 3.5 h before storm genesis. The CAPE and LI from AIRS full-spatial-resolution and operational AIRS/AMSU soundings, along with Geostationary Operational Environmental Satellite (GOES) Sounder derived product image (DPI) products, were analyzed and compared. Case studies show that full-spatial-resolution AIRS retrievals provide more useful warning information in preconvection environments for determining favorable locations for convective initiation (CI) than do the coarser-spatial-resolution operational soundings and lower-spectral-resolution GOES Sounder retrievals. The retrieved soundings are also tested in a regional WRF 3D-Var data assimilation system to evaluate their potential to assist NWP models.
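    The lifted index mentioned above is, in its standard definition, a simple temperature difference evaluated at 500 hPa. A minimal sketch with invented temperatures (not values from the three case studies):

    ```python
    # Lifted index (LI), standard definition: environmental 500 hPa temperature
    # minus the temperature of a parcel lifted adiabatically to 500 hPa.
    # Negative LI signals instability. The temperatures below are invented.
    t_env_500 = -12.0      # deg C, observed environment at 500 hPa
    t_parcel_500 = -7.0    # deg C, lifted parcel at 500 hPa

    li = t_env_500 - t_parcel_500   # negative -> parcel warmer than environment -> unstable
    ```

    A parcel warmer than its environment at 500 hPa gives a negative LI, the kind of preconvective signal the AIRS full-spatial-resolution product is shown to capture hours before storm genesis.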

  5. Spatial and identity negative priming in audition: evidence of feature binding in auditory spatial memory.

    PubMed

    Mayr, Susanne; Buchner, Axel; Möller, Malte; Hauke, Robert

    2011-08-01

    Two experiments are reported with identical auditory stimulation in three-dimensional space but with different instructions. Participants localized a cued sound (Experiment 1) or identified a sound at a cued location (Experiment 2). A distractor sound at another location had to be ignored. The prime distractor and the probe target sound were manipulated with respect to sound identity (repeated vs. changed) and location (repeated vs. changed). The localization task revealed a symmetric pattern of partial repetition costs: Participants were impaired on trials with identity-location mismatches between the prime distractor and probe target - that is, when either the sound was repeated but not the location, or vice versa. The identification task revealed an asymmetric pattern of partial repetition costs: Responding was slowed when the prime distractor sound was repeated as the probe target but at another location; identity changes at the same location did not slow responding. Additionally, there was evidence of retrieval of incompatible prime responses in the identification task. It is concluded that feature binding of auditory prime distractor information takes place regardless of whether the task is to identify or locate a sound. Instructions determine the kind of identity-location mismatch that is detected. Identity information predominates over location information in auditory memory.

  6. Numerical calculation of listener-specific head-related transfer functions and sound localization: Microphone model and mesh discretization

    PubMed Central

    Ziegelwanger, Harald; Majdak, Piotr; Kreuzer, Wolfgang

    2015-01-01

    Head-related transfer functions (HRTFs) can be numerically calculated by applying the boundary element method to the geometry of a listener’s head and pinnae. The results depend on geometrical, numerical, and acoustical parameters such as the microphone used in the acoustic measurements. The scope of this study was to estimate requirements on the size and position of the microphone model and on the discretization of the boundary geometry as a triangular polygon mesh for accurate sound localization. The evaluation involved the analysis of localization errors predicted by a sagittal-plane localization model, the comparison of equivalent head radii estimated by a time-of-arrival model, and the analysis of actual localization errors obtained in a sound-localization experiment. While the average edge length (AEL) of the mesh had a negligible effect on localization performance in the lateral dimension, localization performance in sagittal planes degraded for larger AELs, with the geometrical error as the dominant factor. A microphone position at an arbitrary point at the entrance of the ear canal, a microphone size of 1 mm radius, and a mesh with 1 mm AEL yielded localization performance similar to or better than that observed with acoustically measured HRTFs. PMID:26233020
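    The AEL metric studied here is just the mean length of the mesh's unique (undirected) edges. A minimal sketch on a two-triangle toy mesh, not one of the study's head meshes:

    ```python
    import numpy as np

    # Average edge length (AEL) of a triangular mesh: collect each undirected
    # edge once (shared edges are not double-counted), then average lengths.
    # The unit-square toy mesh below is illustrative only.
    vertices = np.array([[0.0, 0.0, 0.0],
                         [1.0, 0.0, 0.0],
                         [1.0, 1.0, 0.0],
                         [0.0, 1.0, 0.0]])
    triangles = [(0, 1, 2), (0, 2, 3)]

    edges = set()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            edges.add(tuple(sorted(e)))

    lengths = [np.linalg.norm(vertices[i] - vertices[j]) for i, j in edges]
    ael = float(np.mean(lengths))
    ```

    For a head mesh the same computation runs over thousands of triangles; the study's finding is that keeping this value near 1 mm preserves the spectral cues needed for sagittal-plane localization.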

  7. Identifying local characteristic lengths governing sound wave properties in solid foams

    NASA Astrophysics Data System (ADS)

    Tan Hoang, Minh; Perrot, Camille

    2013-02-01

    Identifying microscopic geometric properties and fluid flow through open-cell and partially closed-cell solid structures is a challenge for materials science, in particular for the design of porous media used as sound absorbers in the building and transportation industries. We revisit recent literature data to identify the local characteristic lengths dominating the transport properties and sound absorbing behavior of polyurethane foam samples by performing numerical homogenization simulations. To determine the characteristic sizes of the model, we need porosity and permeability measurements in conjunction with ligament-length estimates from available scanning electron microscope images. We demonstrate that this description of the porous material, consistent with the critical-path picture following from percolation arguments, is widely applicable. This is an important step toward tuning the sound-proofing properties of complex materials.

  8. Auditory Space Perception in Left- and Right-Handers

    ERIC Educational Resources Information Center

    Ocklenburg, Sebastian; Hirnstein, Marco; Hausmann, Markus; Lewald, Jorg

    2010-01-01

    Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via…

  9. Optical and Acoustic Sensor-Based 3D Ball Motion Estimation for Ball Sport Simulators †.

    PubMed

    Seo, Sang-Woo; Kim, Myunggyu; Kim, Yejin

    2018-04-25

    Estimation of the motion of ball-shaped objects is essential for the operation of ball sport simulators. In this paper, we propose an estimation system for 3D ball motion, including speed and angle of projection, using acoustic vector and infrared (IR) scanning sensors. Our system comprises three steps to estimate ball motion: sound-based ball-firing detection, sound source localization, and IR scanning for motion analysis. First, an impulsive-sound classification based on the mel-frequency cepstrum and a feed-forward neural network is introduced to detect the ball launch sound. An impulsive sound source localization using 2D microelectromechanical system (MEMS) microphones and delay-and-sum beamforming is presented to estimate the firing position. The time and position of the ball in 3D space are determined by a high-speed infrared scanning method. Our experimental results demonstrate that the estimation of ball motion based on sound allows a wider activity area than similar camera-based methods. Thus, it can be practically applied to various sports simulations, such as soccer and baseball.
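    Delay-and-sum beamforming, named above as the localization step, steers candidate delays across the array and picks the direction maximizing the summed output power. A hedged one-dimensional sketch follows; the two-microphone geometry, tone, and angles are illustrative inventions, not the paper's 2D MEMS setup:

    ```python
    import numpy as np

    # Delay-and-sum beamforming on a simulated two-microphone recording.
    # A far-field tone arrives from 30 degrees; steering delays that undo the
    # true propagation delays maximize the summed output power.
    fs, c, f0 = 48000.0, 343.0, 4000.0
    mic_x = np.array([0.0, 0.04])            # two mics 4 cm apart (under half a wavelength)

    theta_true = np.deg2rad(30.0)
    delays = mic_x * np.sin(theta_true) / c  # plane-wave arrival delay per mic
    t = np.arange(1024) / fs
    x = np.stack([np.sin(2 * np.pi * f0 * (t - d)) for d in delays])

    angles = np.deg2rad(np.linspace(-90.0, 90.0, 181))
    powers = []
    for th in angles:
        d = mic_x * np.sin(th) / c
        aligned = [np.interp(t, t - di, ch) for di, ch in zip(d, x)]  # undo each delay
        powers.append(float(np.sum(np.sum(aligned, axis=0) ** 2)))
    theta_hat = float(np.rad2deg(angles[int(np.argmax(powers))]))
    ```

    With two microphones the power peak is broad; the paper's 2D array sharpens it and resolves azimuth and elevation simultaneously.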

  10. A precedence effect resolves phantom sound source illusions in the parasitoid fly Ormia ochracea

    PubMed Central

    Lee, Norman; Elias, Damian O.; Mason, Andrew C.

    2009-01-01

    Localizing individual sound sources under reverberant environmental conditions can be a challenge when the original source and its acoustic reflections arrive at the ears simultaneously from different paths that convey ambiguous directional information. The acoustic parasitoid fly Ormia ochracea (Diptera: Tachinidae) relies on a pair of ears exquisitely sensitive to sound direction to localize the 5-kHz tone pulsatile calling song of their host crickets. In nature, flies are expected to encounter a complex sound field with multiple sources and their reflections from acoustic clutter potentially masking temporal information relevant to source recognition and localization. In field experiments, O. ochracea were lured onto a test arena and subjected to small random acoustic asymmetries between 2 simultaneous sources. Most flies successfully localize a single source but some localize a ‘phantom’ source that is a summed effect of both source locations. Such misdirected phonotaxis can be elicited reliably in laboratory experiments that present symmetric acoustic stimulation. By varying onset delay between 2 sources, we test whether hyperacute directional hearing in O. ochracea can function to exploit small time differences to determine source location. Selective localization depends on both the relative timing and location of competing sources. Flies preferred phonotaxis to a forward source. With small onset disparities within a 10-ms temporal window of attention, flies selectively localize the leading source while the lagging source has minimal influence on orientation. These results demonstrate the precedence effect as a mechanism to overcome phantom source illusions that arise from acoustic reflections or competing sources. PMID:19332794

  11. Diversity of acoustic tracheal system and its role for directional hearing in crickets

    PubMed Central

    2013-01-01

    Background Sound localization in small insects can be a challenging task due to physical constraints in deriving sufficiently large interaural intensity differences (IIDs) between the two ears. In crickets, sound source localization is achieved by a complex type of pressure difference receiver consisting of four potential sound inputs. Sound acts on the external side of the two tympana but additionally reaches the internal tympanal surface via two external sound entrances. Conduction of internal sound is realized by the anatomical arrangement of the connecting tracheae. A key structure is a trachea coupling both ears, characterized by an enlarged part at its midline (the acoustic vesicle) accompanied by a thin membrane (septum). This facilitates directional sensitivity despite an unfavorable relationship between the wavelength of sound and body size. Here we studied the morphological differences of the acoustic tracheal system in 40 cricket species (Gryllidae, Mogoplistidae) and in species of outgroup taxa (Gryllotalpidae, Rhaphidophoridae, Gryllacrididae) of the suborder Ensifera, comprising hearing and non-hearing species. Results We found a surprisingly high variation of acoustic tracheal systems, and almost all investigated species using intraspecific acoustic communication were characterized by an acoustic vesicle associated with a medial septum. The relative size of the acoustic vesicle - a structure most crucial for deriving high IIDs - implies an important role in sound localization. Most remarkable in this respect was the size difference of the acoustic vesicle between species; those with a more unfavorable ratio of body size to sound wavelength tend to exhibit a larger acoustic vesicle. On the other hand, secondary loss of acoustic signaling was almost exclusively associated with the absence of both the acoustic vesicle and the septum.
Conclusion The high diversity of acoustic tracheal morphology observed between species might reflect different steps in the evolution of the pressure difference receiver; with a precursor structure already present in ancestral non-hearing species. In addition, morphological transitions of the acoustic vesicle suggest a possible adaptive role for the generation of binaural directional cues. PMID:24131512

  12. 33 CFR 100.100 - Special Local Regulations; Regattas and Boat Races in the Coast Guard Sector Long Island Sound...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...; Regattas and Boat Races in the Coast Guard Sector Long Island Sound Captain of the Port Zone. 100.100... MARINE PARADES SAFETY OF LIFE ON NAVIGABLE WATERS § 100.100 Special Local Regulations; Regattas and Boat... any time it is deemed necessary to ensure the safety of life or property. (i) For all power boat races...

  13. 33 CFR 100.100 - Special Local Regulations; Regattas and Boat Races in the Coast Guard Sector Long Island Sound...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...; Regattas and Boat Races in the Coast Guard Sector Long Island Sound Captain of the Port Zone. 100.100... MARINE PARADES SAFETY OF LIFE ON NAVIGABLE WATERS § 100.100 Special Local Regulations; Regattas and Boat... it is deemed necessary to ensure the safety of life or property. (i) For all power boat races listed...

  14. 33 CFR 100.100 - Special Local Regulations; Regattas and Boat Races in the Coast Guard Sector Long Island Sound...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...; Regattas and Boat Races in the Coast Guard Sector Long Island Sound Captain of the Port Zone. 100.100... MARINE PARADES SAFETY OF LIFE ON NAVIGABLE WATERS § 100.100 Special Local Regulations; Regattas and Boat... any time it is deemed necessary to ensure the safety of life or property. (i) For all power boat races...

  15. Olivocochlear Efferent Control in Sound Localization and Experience-Dependent Learning

    PubMed Central

    Irving, Samuel; Moore, David R.; Liberman, M. Charles; Sumner, Christian J.

    2012-01-01

    Efferent auditory pathways have been implicated in sound localization and its plasticity. We examined the role of the olivocochlear (OC) system in horizontal sound localization by the ferret and in localization learning following unilateral earplugging. Under anesthesia, adult ferrets underwent olivocochlear bundle section at the floor of the fourth ventricle, either at the midline or laterally (left). Lesioned and control animals were trained to localize 1-s and 40-ms amplitude-roved broadband noise stimuli from one of 12 loudspeakers. Neither type of lesion affected normal localization accuracy. All ferrets then received a left earplug and were tested and trained over 10 d. The plug profoundly disrupted localization. Ferrets in the control and lateral lesion groups improved significantly during subsequent training on the 1-s stimulus. No improvement (learning) occurred in the midline lesion group. Markedly poorer performance and failure to learn were observed with the 40-ms stimulus in all groups. Plug removal resulted in a rapid resumption of normal localization in all animals. Insertion of a subsequent plug in the right ear produced similar results to left earplugging. Learning in the lateral lesion group was independent of the side of the lesion relative to the earplug. Lesions in all reported cases were verified histologically. The results suggest the OC system is not needed for accurate localization, but that it is involved in relearning localization during unilateral conductive hearing loss. PMID:21325517

  16. On the relevance of source effects in geomagnetic pulsations for induction soundings

    NASA Astrophysics Data System (ADS)

    Neska, Anne; Tadeusz Reda, Jan; Leszek Neska, Mariusz; Petrovich Sumaruk, Yuri

    2018-03-01

    This study attempts to close a gap between recent research on geomagnetic pulsations and their usage as source signals in electromagnetic induction soundings (i.e., magnetotellurics, geomagnetic depth sounding, and magnetovariational sounding). The plane-wave assumption, a precondition for the proper performance of these methods, is partly violated by the local nature of field line resonances, which account for a considerable portion of pulsations at mid-latitudes. It is demonstrated that, and explained why, the application of remote reference stations at quasi-global distances for the suppression of local correlated-noise effects in induction arrows is nevertheless possible in the geomagnetic pulsation range. The important roles of upstream waves and of the magnetic equatorial region for such applications are emphasized. Furthermore, the principal difference between applying reference stations to local transfer functions (which yield sounding curves and induction arrows) and to inter-station transfer functions is considered. The preconditions for the latter are much stricter than for the former. Hence a failure to estimate an inter-station transfer function that can be interpreted in terms of electromagnetic induction, e.g., because of field line resonances, does not necessarily prohibit use of the station pair for a remote-reference estimation of the impedance tensor.

  17. Reverberation enhances onset dominance in sound localization.

    PubMed

    Stecker, G Christopher; Moore, Travis M

    2018-02-01

    Temporal variation in sensitivity to sound-localization cues was measured in anechoic conditions and in simulated reverberation using the temporal weighting function (TWF) paradigm [Stecker and Hafter (2002). J. Acoust. Soc. Am. 112, 1046-1057]. Listeners judged the locations of Gabor click trains (4 kHz center frequency, 5 ms interclick interval) presented from an array of loudspeakers spanning 360° azimuth. Target locations ranged over ±56.25° across trials. Individual clicks within each train varied by an additional ±11.25° to allow TWF calculation by multiple regression. In separate conditions, sounds were presented directly or in the presence of simulated reverberation: 13 orders of lateral reflection were computed for a 10 m × 10 m room (RT60 ≈ 300 ms) and mapped to the appropriate locations in the loudspeaker array. Results reveal a marked increase in the perceptual weight applied to the initial click in reverberation, along with a reduction in the impact of late-arriving sound. In a second experiment, target stimuli were preceded by trains of "conditioner" sounds with or without reverberation. Effects were modest and limited to the first few clicks in a train, suggesting that impacts of reverberant pre-exposure on localization may be limited to the processing of information from early reflections.
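
    The TWF estimation step, regressing judged azimuth on the per-click azimuths, can be illustrated with simulated data (a sketch assuming NumPy; the weights, jitter range, and trial count below are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated TWF experiment: each trial is a 5-click train whose clicks
# jitter independently about the target azimuth; the simulated listener
# overweights the first click (onset dominance).
n_trials, n_clicks = 500, 5
true_w = np.array([0.5, 0.2, 0.1, 0.1, 0.1])          # assumed weights
click_az = rng.uniform(-11.25, 11.25, size=(n_trials, n_clicks))
judged = click_az @ true_w + rng.normal(0.0, 0.5, n_trials)

# The TWF is recovered by multiple regression of the judged azimuth
# on the per-click azimuths.
w_hat, *_ = np.linalg.lstsq(click_az, judged, rcond=None)
```

    With enough trials the fitted weights reproduce the onset-dominant profile; in the study the same regression is applied to listeners' real responses rather than simulated ones.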

  18. [The underwater and airborne horizontal localization of sound by the northern fur seal].

    PubMed

    Babushina, E S; Poliakov, M A

    2004-01-01

    The accuracy of underwater and airborne horizontal localization of different acoustic signals by the northern fur seal was investigated by the method of instrumental conditioned reflexes with food reinforcement. For pure-tone pulsed signals in the frequency range 0.5-25 kHz, the minimum sound-localization angles at 75% correct responses corresponded to sound-transducer azimuths of 6.5-7.5 ± 0.1-0.4 degrees underwater (at pulse durations of 3-90 ms) and 3.5-5.5 ± 0.05-0.5 degrees in air (at pulse durations of 3-160 ms). The source of pulsed noise signals (3 ms duration) was localized with an accuracy of 3.0 ± 0.2 degrees underwater. The source of continuous (1 s duration) narrow-band (10% of center frequency) noise signals was localized in air with an accuracy of 2-5 ± 0.02-0.4 degrees, and of continuous broadband (1-20 kHz) noise with an accuracy of 4.5 ± 0.2 degrees.

  19. 77 FR 6954 - Special Local Regulations; Safety and Security Zones; Recurring Events in Captain of the Port...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-10

    ... Events in Captain of the Port Long Island Sound Zone AGENCY: Coast Guard, DHS. ACTION: Final rule... Sector Long Island Sound Captain of the Port (COTP) Zone. These limited access areas include special... Sector Long Island Sound, telephone 203-468-4544, email [email protected]. If you have questions...

  20. 33 CFR 100.121 - Swim Across the Sound, Long Island Sound, Port Jefferson, NY to Captain's Cove Seaport...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY REGATTAS AND MARINE PARADES... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Swim Across the Sound, Long... the Federal Register, separate marine broadcasts and local notice to mariners. [USCG-2009-0395, 75 FR...

  1. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments †

    PubMed Central

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G.

    2017-01-01

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators. PMID:29099790

  2. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments.

    PubMed

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Kumon, Makoto; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G

    2017-11-03

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.

  3. Method and apparatus for ultrasonic doppler velocimetry using speed of sound and reflection mode pulsed wideband doppler

    DOEpatents

    Shekarriz, Alireza; Sheen, David M.

    2000-01-01

    According to the present invention, a method and apparatus rely upon tomographic measurement of the speed of sound and fluid velocity in a pipe. The invention provides a more accurate profile of velocity within flow fields where the speed of sound varies within the cross-section of the pipe. This profile is obtained by reconstructing the velocity profile from local speed-of-sound measurements made simultaneously with the flow velocity. The method of the present invention is real-time tomographic ultrasonic Doppler velocimetry utilizing a plurality of ultrasonic transmission and reflection measurements along two orthogonal sets of parallel acoustic lines-of-sight. The fluid velocity profile and the acoustic velocity profile are determined by iterating between determining a fluid velocity profile and measuring local acoustic velocity until convergence is reached.
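
    The single-path relation underlying such measurements is simple to state. The sketch below recovers sound speed and path-averaged flow velocity from up/downstream transit times along one line of sight (generic transit-time algebra with invented numbers, not the patent's full tomographic iteration):

```python
def sound_speed_and_velocity(L, t_down, t_up):
    """Invert t_down = L/(c + v) and t_up = L/(c - v) for the local
    sound speed c and the path-averaged flow velocity v."""
    c = 0.5 * L * (1.0 / t_down + 1.0 / t_up)
    v = 0.5 * L * (1.0 / t_down - 1.0 / t_up)
    return c, v

# Assumed example: a 0.1 m path in water (c = 1480 m/s) flowing at 2 m/s.
c, v = sound_speed_and_velocity(0.1, 0.1 / 1482.0, 0.1 / 1478.0)
```

    The tomographic method repeats this along two orthogonal sets of parallel paths and iterates between the velocity and sound-speed profiles until they converge.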

  4. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners with Bilateral and with Hearing-Preservation Cochlear Implants

    ERIC Educational Resources Information Center

    Loiselle, Louise H.; Dorman, Michael F.; Yost, William A.; Cook, Sarah J.; Gifford, Rene H.

    2016-01-01

    Purpose: To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Methods: Eleven bilateral listeners with MED-EL…

  5. The Impact of Masker Fringe and Masker Spatial Uncertainty on Sound Localization

    DTIC Science & Technology

    2010-09-01

    spatial uncertainty on sound localization and to examine how such effects might be related to binaural detection and informational masking. 2 Methods...results from the binaural detection literature and suggest that a longer duration fringe provides a more robust context against which to judge the...results from the binaural detection literature, which suggest that forward masker fringe provides a greater benefit than backward masker fringe [2]. The

  6. Dragon Ears airborne acoustic array: CSP analysis applied to cross array to compute real-time 2D acoustic sound field

    NASA Astrophysics Data System (ADS)

    Cerwin, Steve; Barnes, Julie; Kell, Scott; Walters, Mark

    2003-09-01

    This paper describes the development and application of a novel method to accomplish real-time solid-angle acoustic direction finding using two 8-element orthogonal microphone arrays. The developed prototype system was intended for localization and signature recognition of ground-based sounds from a small UAV. Recent advances in computer speeds have enabled the implementation of microphone arrays in many audio applications. Still, the real-time presentation of a two-dimensional sound field for the purpose of audio target localization is computationally challenging. To overcome this challenge, a cross-power spectrum phase (CSP) technique was applied to each 8-element arm of a 16-element cross array to provide audio target localization. In this paper, we describe the technique and compare it with two other commonly used techniques, Cross-Spectral Matrix and MUSIC. The results show that the CSP technique applied to two 8-element orthogonal arrays provides a computationally efficient solution with reasonable accuracy and tolerable artifacts, sufficient for real-time applications. Additional topics include the development of a synchronized 16-channel transmitter and receiver to relay the airborne data to the ground-based processor and the presentation of test data demonstrating both ground-mounted operation and airborne localization of ground-based gunshots and loud engine sounds.
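
    For a single microphone pair, the CSP idea, whitening the cross-spectrum so that only phase (i.e., delay) information survives before peak-picking the inverse transform, can be sketched as follows (this is the generic GCC-PHAT formulation and assumes NumPy; the paper's 8-element arm processing and real-time pipeline are not reproduced):

```python
import numpy as np

def csp_delay(x, y, fs):
    """Cross-power spectrum phase delay estimate: a positive result
    means y lags x by that many seconds."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    R = np.conj(X) * Y
    R /= np.maximum(np.abs(R), 1e-12)   # phase transform: discard magnitude
    cc = np.fft.irfft(R, n=n)           # generalized cross-correlation
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (int(np.argmax(np.abs(cc))) - max_shift) / fs
```

    With a known microphone spacing d, the arrival angle then follows from delay = d·sin(θ)/c; applying this per arm of a cross array yields the two angles of a solid-angle estimate.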

  7. Resonant modal group theory of membrane-type acoustical metamaterials for low-frequency sound attenuation

    NASA Astrophysics Data System (ADS)

    Ma, Fuyin; Wu, Jiu Hui; Huang, Meng

    2015-09-01

    To overcome the influence of structural resonance on continuous structures and obtain a lightweight thin-layer structure that can effectively isolate low-frequency noise, an elastic membrane structure is proposed. In the low-frequency range below 500 Hz, the sound transmission loss (STL) of this membrane-type structure is considerably higher than that of EVA (ethylene-vinyl acetate copolymer), a sound-insulation material currently used in vehicles, so the membrane-type metamaterial structure could replace EVA in engineering practice. Based on the band structure, modal shapes, and sound transmission simulations, the sound insulation mechanism of the designed membrane-type acoustic metamaterial was analyzed from a new perspective and validated experimentally. It is suggested that, in the frequency range above 200 Hz for this membrane-mass structure, the sound insulation effect is principally due not to the low-level locally resonant mode of the mass block, but to the continuous vertical resonant modes of the localized membrane. Based on this physical property, a resonant modal group theory is initially proposed in this paper. In addition, the sound insulation mechanisms of the membrane-type structure and the thin-plate structure are combined by the membrane/plate resonance theory.

  8. Design of laser monitoring and sound localization system

    NASA Astrophysics Data System (ADS)

    Liu, Yu-long; Xu, Xi-ping; Dai, Yu-ming; Qiao, Yang

    2013-08-01

    In this paper, a novel laser monitoring and sound localization system is proposed that uses a laser to monitor indoor conversation and locate its position. Most laser monitors currently in use employ a photodiode or phototransistor as the detector. At the receivers of such instruments, the light beam is adjusted so that only part of the detector window is illuminated; vibration of the monitored window deflects the reflected beam from its original path, shifting the imaging spot on the detector. This approach is limited, however, both because it admits considerable stray light into the receiver and because only a single photocurrent output is available. A new method based on a quadrant detector is therefore proposed. It uses the relation of the optical integrals among the four quadrants to locate the imaging spot, which suppresses background disturbance and yields two-dimensional spot-vibration data. The system works as follows: a collimated laser beam is reflected from a window vibrating in response to the sound source, so the reflected beam is modulated by that source. The optical signal is collected by the quadrant detector, processed by a photoelectric converter and associated circuits, and the speech signal is reconstructed. Sound source localization is implemented by detecting three reflected beams simultaneously; an indoor mathematical model based on the Time Difference Of Arrival (TDOA) principle is established to calculate the two-dimensional coordinates of the sound source. Experiments showed that the system can monitor an indoor sound source beyond 15 m with high-quality speech reconstruction and can locate the sound source position accurately.
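
    The TDOA step described above can be sketched numerically. The brute-force grid search below (plain Python, with an assumed speed of sound of 343 m/s) stands in for whatever solver the authors used to intersect the TDOA hyperbolae:

```python
import math

C = 343.0  # assumed speed of sound in air, m/s

def tdoa_residual(p, mics, tdoas):
    """Sum of squared differences between predicted and measured
    time differences of arrival (each mic relative to mic 0)."""
    d = [math.dist(p, m) for m in mics]
    return sum(((d[i] - d[0]) / C - tdoas[i - 1]) ** 2
               for i in range(1, len(mics)))

def locate(mics, tdoas, extent=20.0, step=0.1):
    """Pick the grid point over an extent-by-extent square (centred
    on the origin) that best explains the measured TDOAs."""
    best, best_err = None, float("inf")
    n = int(extent / step)
    for ix in range(n + 1):
        for iy in range(n + 1):
            p = (-extent / 2 + ix * step, -extent / 2 + iy * step)
            err = tdoa_residual(p, mics, tdoas)
            if err < best_err:
                best, best_err = p, err
    return best
```

    In practice a closed-form or least-squares TDOA solver would replace the grid search; the sketch only illustrates that three sensors and two time differences suffice for a two-dimensional position.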

  9. Blind people are more sensitive than sighted people to binaural sound-location cues, particularly inter-aural level differences.

    PubMed

    Nilsson, Mats E; Schenkman, Bo N

    2016-02-01

    Blind people use auditory information to locate sound sources and sound-reflecting objects (echolocation). Sound source localization benefits from the hearing system's ability to suppress distracting sound reflections, whereas echolocation would benefit from "unsuppressing" these reflections. To clarify how these potentially conflicting aspects of spatial hearing interact in blind versus sighted listeners, we measured discrimination thresholds for two binaural location cues: inter-aural level differences (ILDs) and inter-aural time differences (ITDs). The ILDs or ITDs were present in single clicks, in the leading component of click pairs, or in the lagging component of click pairs, exploiting processes related to both sound source localization and echolocation. We tested 23 blind (mean age = 54 y), 23 sighted-age-matched (mean age = 54 y), and 42 sighted-young (mean age = 26 y) listeners. The results suggested greater ILD sensitivity for blind than for sighted listeners. The blind group's superiority was particularly evident for ILD-lag-click discrimination, suggesting not only enhanced ILD sensitivity in general but also increased ability to unsuppress lagging clicks. This may be related to the blind person's experience of localizing reflected sounds, for which ILDs may be more efficient than ITDs. On the ITD-discrimination tasks, the blind listeners performed better than the sighted age-matched listeners, but not better than the sighted young listeners. ITD sensitivity declines with age, and the equal performance of the blind listeners compared to a group of substantially younger listeners is consistent with the notion that blind people's experience may offset age-related decline in ITD sensitivity.

  10. Displaying Composite and Archived Soundings in the Advanced Weather Interactive Processing System

    NASA Technical Reports Server (NTRS)

    Barrett, Joe H., III; Volkmer, Matthew R.; Blottman, Peter F.; Sharp, David W.

    2008-01-01

    In a previous task, the Applied Meteorology Unit (AMU) developed spatial and temporal climatologies of lightning occurrence based on eight atmospheric flow regimes. The AMU created climatological, or composite, soundings of wind speed and direction, temperature, and dew point temperature at four rawinsonde observation stations at Jacksonville, Tampa, Miami, and Cape Canaveral Air Force Station, for each of the eight flow regimes. The composite soundings were delivered to the National Weather Service (NWS) Melbourne (MLB) office for display using the National version of the Skew-T Hodograph analysis and Research Program (NSHARP) software program. The NWS MLB requested the AMU make the composite soundings available for display in the Advanced Weather Interactive Processing System (AWIPS), so they could be overlaid on current observed soundings. This will allow the forecasters to compare the current state of the atmosphere with climatology. This presentation describes how the AMU converted the composite soundings from NSHARP Archive format to Network Common Data Form (NetCDF) format, so that the soundings could be displayed in AWIPS. The NetCDF is a set of data formats, programming interfaces, and software libraries used to read and write scientific data files. In AWIPS, each meteorological data type, such as soundings or surface observations, has a unique NetCDF format. Each format is described by a NetCDF template file. Although NetCDF files are in binary format, they can be converted to a text format called network Common data form Description Language (CDL). A software utility called ncgen is used to create a NetCDF file from a CDL file, while the ncdump utility is used to create a CDL file from a NetCDF file. AWIPS receives soundings in Binary Universal Form for the Representation of Meteorological data (BUFR) format (http://dss.ucar.edu/docs/formats/bufr/), and then decodes them into NetCDF format. Only two sounding files are generated in AWIPS per day. 
One file contains all of the soundings received worldwide between 0000 UTC and 1200 UTC, and the other includes all soundings between 1200 UTC and 0000 UTC. In order to add the composite soundings into AWIPS, a procedure was created to configure, or localize, AWIPS. This involved modifying and creating several configuration text files. A unique four-character site identifier was created for each of the 32 soundings so each could be viewed separately. The first three characters were based on the site identifier of the observed sounding, while the last character was based on the flow regime. While researching the localization process for soundings, the AMU discovered a method of archiving soundings so old soundings would not get purged automatically by AWIPS. This method could provide an alternative way of localizing AWIPS for composite soundings. In addition, this would allow forecasters to use archived soundings in AWIPS for case studies. A test sounding file in NetCDF format was written in order to verify the correct format for soundings in AWIPS. After the file was viewed successfully in AWIPS, the AMU wrote a software program in the Tool Command Language/Tool Kit (Tcl/Tk) language to convert the 32 composite soundings from NSHARP Archive to CDL format. The ncgen utility was then used to convert the CDL file to a NetCDF file. The NetCDF file could then be read and displayed in AWIPS.

  11. Optical microphone

    DOEpatents

    Veligdan, James T.

    2000-01-11

    An optical microphone includes a laser and beam splitter cooperating therewith for splitting a laser beam into a reference beam and a signal beam. A reflecting sensor receives the signal beam and reflects it in a plurality of reflections through sound pressure waves. A photodetector receives both the reference beam and reflected signal beam for heterodyning thereof to produce an acoustic signal for the sound waves. The sound waves vary the local refractive index in the path of the signal beam which experiences a Doppler frequency shift directly analogous with the sound waves.

  12. Performance on Tests of Central Auditory Processing by Individuals Exposed to High-Intensity Blasts

    DTIC Science & Technology

    2012-07-01

    percent (gap detected on at least four of the six presentations), with all longer durations receiving a score greater than 50 percent. Binaural ...Processing and Sound Localization Temporal precision of neural firing is also involved in binaural processing and localization of sound in space. The...Masking Level Difference (MLD) test evaluates the integrity of the earliest sites of binaural comparison and sensitivity to interaural phase in the

  13. Speech Understanding and Sound Source Localization by Cochlear Implant Listeners Using a Pinna-Effect Imitating Microphone and an Adaptive Beamformer.

    PubMed

    Dorman, Michael F; Natale, Sarah; Loiselle, Louise

    2018-03-01

    Sentence understanding scores for patients with cochlear implants (CIs) when tested in quiet are relatively high. However, sentence understanding scores for patients with CIs plummet with the addition of noise. To assess, for patients with CIs (MED-EL), (1) the value to speech understanding of two new, noise-reducing microphone settings and (2) the effect of the microphone settings on sound source localization. Single-subject, repeated measures design. For tests of speech understanding, repeated measures on (1) number of CIs (one, two), (2) microphone type (omni, natural, adaptive beamformer), and (3) type of noise (restaurant, cocktail party). For sound source localization, repeated measures on type of signal (low-pass [LP], high-pass [HP], broadband noise). Ten listeners, ranging in age from 48 to 83 yr (mean = 57 yr), participated in this prospective study. Speech understanding was assessed in two noise environments using monaural and bilateral CIs fit with three microphone types. Sound source localization was assessed using three microphone types. In Experiment 1, sentence understanding scores (in terms of percent words correct) were obtained in quiet and in noise. For each patient, noise was first added to the signal to drive performance off of the ceiling in the bilateral CI-omni microphone condition. The other conditions were then administered at that signal-to-noise ratio in quasi-random order. In Experiment 2, sound source localization accuracy was assessed for three signal types using a 13-loudspeaker array over a 180° arc. The dependent measure was root-mean-square error. Both the natural and adaptive microphone settings significantly improved speech understanding in the two noise environments. The magnitude of the improvement varied between 16 and 19 percentage points for tests conducted in the restaurant environment and between 19 and 36 percentage points for tests conducted in the cocktail party environment. 
In the restaurant and cocktail party environments, both the natural and adaptive settings, when implemented on a single CI, allowed scores that were as good as, or better than, scores in the bilateral omni test condition. Sound source localization accuracy was unaltered by either the natural or adaptive settings for LP, HP, or wideband noise stimuli. The data support the use of the natural microphone setting as a default setting. The natural setting (1) provides better speech understanding in noise than the omni setting, (2) does not impair sound source localization, and (3) retains low-frequency sensitivity to signals from the rear. Moreover, bilateral CIs equipped with adaptive beamforming technology can engender speech understanding scores in noise that fall only a little short of scores for a single CI in quiet.

  14. Energy localization and frequency analysis in the locust ear.

    PubMed

    Malkin, Robert; McDonagh, Thomas R; Mhatre, Natasha; Scott, Thomas S; Robert, Daniel

    2014-01-06

    Animal ears are exquisitely adapted to capture sound energy and perform signal analysis. Studying the ear of the locust, we show how frequency signal analysis can be performed solely by using the structural features of the tympanum. Incident sound waves generate mechanical vibrational waves that travel across the tympanum. These waves shoal in a tsunami-like fashion, resulting in energy localization that focuses vibrations onto the mechanosensory neurons in a frequency-dependent manner. Using finite element analysis, we demonstrate that two mechanical properties of the locust tympanum, distributed thickness and tension, are necessary and sufficient to generate frequency-dependent energy localization.

  15. Joint inversion for transponder localization and sound-speed profile temporal variation in high-precision acoustic surveys.

    PubMed

    Li, Zhao; Dosso, Stan E; Sun, Dajun

    2016-07-01

    This letter develops a Bayesian inversion for localizing underwater acoustic transponders using a surface ship which compensates for sound-speed profile (SSP) temporal variation during the survey. The method is based on dividing observed acoustic travel-time data into time segments and including depth-independent SSP variations for each segment as additional unknown parameters to approximate the SSP temporal variation. SSP variations are estimated jointly with transponder locations, rather than calculated separately as in existing two-step inversions. Simulation and sea-trial results show this localization/SSP joint inversion performs better than two-step inversion in terms of localization accuracy, agreement with measured SSP variations, and computational efficiency.

  16. Improvements of sound localization abilities by the facial ruff of the barn owl (Tyto alba) as demonstrated by virtual ruff removal.

    PubMed

    Hausmann, Laura; von Campenhausen, Mark; Endler, Frank; Singheiser, Martin; Wagner, Hermann

    2009-11-05

    When sound arrives at the eardrum it has already been filtered by the body, head, and outer ear. This process is mathematically described by the head-related transfer functions (HRTFs), which are characteristic for the spatial position of a sound source and for the individual ear. HRTFs in the barn owl (Tyto alba) are also shaped by the facial ruff, a specialization that alters interaural time differences (ITD), interaural intensity differences (ILD), and the frequency spectrum of the incoming sound to improve sound localization. Here we created novel stimuli to simulate the removal of the barn owl's ruff in a virtual acoustic environment, thus creating a situation similar to passive listening in other animals, and used these stimuli in behavioral tests. HRTFs were recorded from an owl before and after removal of the ruff feathers. Normal and ruff-removed conditions were created by filtering broadband noise with the HRTFs. Under normal virtual conditions, no differences in azimuthal head-turning behavior between individualized and non-individualized HRTFs were observed. The owls were able to respond differently to stimuli from the back than to stimuli from the front having the same ITD. By contrast, such a discrimination was not possible after the virtual removal of the ruff. Elevational head-turn angles were (slightly) smaller with non-individualized than with individualized HRTFs. The removal of the ruff resulted in a large decrease in elevational head-turning amplitudes. The facial ruff a) improves azimuthal sound localization by increasing the ITD range and b) improves elevational sound localization in the frontal field by introducing a shift of iso-ILD lines out of the midsagittal plane, which causes ILDs to increase with increasing stimulus elevation. The changes at the behavioral level could be related to the changes in the binaural physical parameters that occurred after the virtual removal of the ruff. 
These data provide new insights into the function of external hearing structures and open up the possibility of applying the results to autonomous agents, to the creation of virtual auditory environments for humans, or to hearing aids.

  17. Atmospheric effects on microphone array analysis of aircraft vortex sound

    DOT National Transportation Integrated Search

    2006-05-08

    This paper provides the basis of a comprehensive analysis of vortex sound propagation : through the atmosphere in order to assess real atmospheric effects on acoustic array : processing. Such effects may impact vortex localization accuracy and detect...

  18. Oceanographic Measurements Program Review.

    DTIC Science & Technology

    1982-03-01

    prototype Advanced Microstructure Profiler (AMP) was completed and the unit was operationally tested in local waters (Lake Washington and Puget Sound ...Expendables ... 21 A.W. Green ... The Development of an Air-Launched Expendable Sound Velocimeter (AXSV) ... 25 R. Bixby ... THE DEVELOPMENT OF AN AIR-LAUNCHED EXPENDABLE SOUND VELOCIMETER (AXSV) Richard Bixby

  19. Atmospheric Propagation

    NASA Technical Reports Server (NTRS)

    Embleton, Tony F. W.; Daigle, Gilles A.

    1991-01-01

    Reviewed here is the current state of knowledge with respect to each basic mechanism of sound propagation in the atmosphere and how each mechanism changes the spectral or temporal characteristics of the sound received at a distance from the source. Some of the basic processes affecting sound wave propagation which are present in any situation are discussed. They are geometrical spreading, molecular absorption, and turbulent scattering. In geometrical spreading, sound levels decrease with increasing distance from the source; there is no frequency dependence. In molecular absorption, sound energy is converted into heat as the sound wave propagates through the air; there is a strong dependence on frequency. In turbulent scattering, local variations in wind velocity and temperature induce fluctuations in phase and amplitude of the sound waves as they propagate through an inhomogeneous medium; there is a moderate dependence on frequency.
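
    The first two mechanisms can be combined into a simple level budget (a textbook sketch; the absorption coefficient is passed in as a parameter rather than computed from temperature and humidity, which is what makes molecular absorption frequency-dependent in practice):

```python
import math

def received_level(source_level_db, r, r0=1.0, alpha_db_per_m=0.0):
    """Sound level at range r, given the level at reference range r0:
    spherical spreading loses 20*log10(r/r0) dB (frequency-independent),
    while molecular absorption loses alpha dB per metre travelled."""
    spreading = 20.0 * math.log10(r / r0)
    absorption = alpha_db_per_m * (r - r0)
    return source_level_db - spreading - absorption
```

    Doubling the distance costs about 6 dB from spreading alone; the absorption term grows linearly with range and dominates at high frequencies. Turbulent scattering, the third mechanism, adds fluctuations rather than a deterministic loss and is not captured by such a budget.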

  20. Experiments of multichannel least-square methods for sound field reproduction inside aircraft mock-up: Objective evaluations

    NASA Astrophysics Data System (ADS)

    Gauthier, P.-A.; Camier, C.; Lebel, F.-A.; Pasco, Y.; Berry, A.; Langlois, J.; Verron, C.; Guastavino, C.

    2016-08-01

    Sound environment reproduction of various flight conditions in aircraft mock-ups is a valuable tool for the study, prediction, demonstration and jury testing of interior aircraft sound quality and annoyance. To provide a faithful reproduced sound environment, time, frequency and spatial characteristics should be preserved. Physical sound field reproduction methods for spatial sound reproduction are mandatory to immerse the listener's body in the proper sound fields so that localization cues are recreated at the listener's ears. Vehicle mock-ups pose specific problems for sound field reproduction: confined spaces, the need for invisible sound sources, and a very specific acoustical environment make open-loop sound field reproduction technologies such as wave field synthesis (based on free-field models of monopole sources) less than ideal. In this paper, experiments in an aircraft mock-up with multichannel least-square methods and equalization are reported. The novelty is the actual implementation of sound field reproduction with 3180 transfer paths and trim panel reproduction sources in laboratory conditions with a synthetic target sound field. The paper presents objective evaluations of the reproduced sound fields using various metrics, as well as sound field extrapolation and sound field characterization.
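
    A minimal sketch of the least-squares core of such methods, per frequency, with illustrative sizes rather than the paper's 3180 transfer paths; `H`, `beta`, and the random test data are assumptions for the example:

```python
import numpy as np

def reproduction_sources(H, p_target, beta):
    """Regularized least-squares reproduction at one frequency: find
    source strengths q minimizing ||H q - p_target||^2 + beta ||q||^2,
    where H holds the measured transfer paths (microphones x sources)
    and p_target is the desired sound field at the microphones."""
    n_src = H.shape[1]
    lhs = H.conj().T @ H + beta * np.eye(n_src)
    return np.linalg.solve(lhs, H.conj().T @ p_target)

# If the target field is reachable, a small beta recovers it closely:
rng = np.random.default_rng(1)
H = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))
q_true = rng.standard_normal(4)
p = H @ q_true
q = reproduction_sources(H, p, beta=1e-9)
```

    The regularization weight `beta` trades reproduction accuracy against source effort, which matters in practice when `H` is ill-conditioned.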

  1. Sound waves and resonances in electron-hole plasma

    NASA Astrophysics Data System (ADS)

    Lucas, Andrew

    2016-06-01

    Inspired by the recent experimental signatures of relativistic hydrodynamics in graphene, we investigate theoretically the behavior of hydrodynamic sound modes in such quasirelativistic fluids near charge neutrality, within linear response. Locally driving an electron fluid at a resonant frequency of such a sound mode can lead to large increases in the electrical response at the edges of the sample, a signature that cannot be explained using diffusive models of transport. We discuss the robustness of this signal to various effects, including electron-acoustic phonon coupling, disorder, and long-range Coulomb interactions. These long-range interactions convert the sound mode into a collective plasmonic mode at low frequencies unless the fluid is charge neutral. At the smallest frequencies, the response in a disordered fluid is quantitatively what is predicted by a "momentum relaxation time" approximation. However, this approximation fails at higher frequencies (which can be parametrically small), where the classical localization of sound waves cannot be neglected. Experimental observation of such resonances is a clear signature of relativistic hydrodynamics, and provides an upper bound on the viscosity of the electron-hole plasma.

  2. Monaural Sound Localization Based on Reflective Structure and Homomorphic Deconvolution

    PubMed Central

    Park, Yeonseok; Choi, Anthony

    2017-01-01

    An asymmetric structure around a receiver imposes a particular time delay on each incoming propagation direction. This paper presents a monaural sound localization system based on a reflective structure around the microphone. Reflective plates are placed to produce direction-dependent time delays, which are naturally combined with the sound source by convolution. The received signal is analyzed to estimate the dominant time delay using homomorphic deconvolution, which applies the real cepstrum and inverse cepstrum sequentially to derive the autocorrelation of the propagation response. Once this delay is estimated accurately, the time-delay model identifies the corresponding reflection for localization. Because of structural limitations, the localization proceeds in two stages, estimating range and then angle. A software toolchain spanning propagation physics and algorithm simulation was used to derive the optimal 3D-printed structure. Acoustic experiments in an anechoic chamber show that 79.0% of the range data from the isotropic signal is correctly detected from the response value, and 87.5% of the direction data from the range signal is correctly estimated from the response time. The product of the two rates gives an overall hit rate of 69.1%. PMID:28946625
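
    The cepstral idea can be illustrated on a synthetic signal (the delay, attenuation, and signal length below are arbitrary choices, not the paper's measured setup): convolution with a delayed reflection becomes an additive peak in the real cepstrum at the delay lag.

```python
import numpy as np

def real_cepstrum(x):
    # c = IFFT(log |FFT(x)|): the homomorphic step that turns the
    # convolution with the reflective path into an additive component
    spectrum = np.fft.fft(x)
    return np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)).real

# Toy signal: a source plus one attenuated reflection delayed by 40 samples
rng = np.random.default_rng(0)
src = rng.standard_normal(1024)
delay, a = 40, 0.6
received = src.copy()
received[delay:] += a * src[:-delay]

cep = real_cepstrum(received)
# The reflection shows up as a cepstral peak at the delay lag
est = np.argmax(np.abs(cep[1:200])) + 1
```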

  3. Comparison between bilateral cochlear implants and Neurelec Digisonic(®) SP Binaural cochlear implant: speech perception, sound localization and patient self-assessment.

    PubMed

    Bonnard, Damien; Lautissier, Sylvie; Bosset-Audoit, Amélie; Coriat, Géraldine; Beraha, Max; Maunoury, Antoine; Martel, Jacques; Darrouzet, Vincent; Bébéar, Jean-Pierre; Dauman, René

    2013-01-01

    An alternative to bilateral cochlear implantation is offered by the Neurelec Digisonic(®) SP Binaural cochlear implant, which allows stimulation of both cochleae within a single device. The purpose of this prospective study was to compare a group of Neurelec Digisonic(®) SP Binaural implant users (denoted BINAURAL group, n = 7) with a group of bilateral adult cochlear implant users (denoted BILATERAL group, n = 6) in terms of speech perception, sound localization, and self-assessment of health status and hearing disability. Speech perception was assessed using word recognition at 60 dB SPL in quiet and in a 'cocktail party' noise delivered through five loudspeakers in the hemi-sound field facing the patient (signal-to-noise ratio = +10 dB). The sound localization task was to determine the source of a sound stimulus among five speakers positioned between -90° and +90° from midline. Change in health status was assessed using the Glasgow Benefit Inventory and hearing disability was evaluated with the Abbreviated Profile of Hearing Aid Benefit. Speech perception was not statistically different between the two groups, even though there was a trend in favor of the BINAURAL group (mean percent word recognition in the BINAURAL and BILATERAL groups: 70 vs. 56.7% in quiet, 55.7 vs. 43.3% in noise). There was also no significant difference with regard to performance in sound localization and self-assessment of health status and hearing disability. On the basis of the BINAURAL group's performance in hearing tasks involving the detection of interaural differences, implantation with the Neurelec Digisonic(®) SP Binaural implant may be considered to restore effective binaural hearing. Based on these first comparative results, this device seems to provide benefits similar to those of traditional bilateral cochlear implantation, with a new approach to stimulate both auditory nerves. Copyright © 2013 S. Karger AG, Basel.

  4. On the Locality of Transient Electromagnetic Soundings with a Single-Loop Configuration

    NASA Astrophysics Data System (ADS)

    Barsukov, P. O.; Fainberg, E. B.

    2018-03-01

    The possibilities of reconstructing two-dimensional (2D) cross sections from profile soundings by the transient electromagnetic method (TEM) with a single ungrounded loop are illustrated on three-dimensional (3D) models. The reconstruction process includes three main steps: transformation of the responses into the depth dependence of resistivity ρ(h) measured along the profile, with subsequent stitching into a 2D pseudo-section; point-by-point one-dimensional (1D) inversion of the responses, with the starting model constructed from the transformations; and correction of the 2D cross section using 2.5-dimensional (2.5D) block inversion. It is shown that single-loop TEM soundings allow studying the geological medium within a local domain whose lateral dimensions are commensurate with the depth of investigation. The structure of the medium beyond this domain affects the sounding results only insignificantly. This locality enables the TEM to reconstruct the geoelectrical structure of the medium from 2D cross sections with minimal distortions caused by the lack of information beyond the profile of the transient response measurements.

  5. Assessment of auditory and psychosocial handicap associated with unilateral hearing loss among Indian patients.

    PubMed

    Augustine, Ann Mary; Chrysolyte, Shipra B; Thenmozhi, K; Rupa, V

    2013-04-01

    In order to assess psychosocial and auditory handicap in Indian patients with unilateral sensorineural hearing loss (USNHL), a prospective study was conducted on 50 adults with USNHL in the ENT Outpatient clinic of a tertiary care centre. The hearing handicap inventory for adults (HHIA) as well as speech in noise and sound localization tests were administered to patients with USNHL. An equal number of age-matched, normal controls also underwent the speech and sound localization tests. The results showed that HHIA scores ranged from 0 to 60 (mean 20.7). Most patients (84.8 %) had either mild to moderate or no handicap. Emotional subscale scores were higher than social subscale scores (p = 0.01). When the effect of sociodemographic factors on HHIA scores was analysed, educated individuals were found to have higher social subscale scores (p = 0.04). Age, sex, side and duration of hearing loss, occupation and income did not affect HHIA scores. Speech in noise and sound localization were significantly poorer in cases compared to controls (p < 0.001). About 75 % of patients refused a rehabilitative device. We conclude that USNHL in Indian adults does not usually produce severe handicap. When present, the handicap is more emotional than social. USNHL significantly affects sound localization and speech in noise. Yet, affected patients seldom seek a rehabilitative device.

  6. Influence of aging on human sound localization

    PubMed Central

    Dobreva, Marina S.; O'Neill, William E.

    2011-01-01

    Errors in sound localization, associated with age-related changes in peripheral and central auditory function, can pose threats to self and others in a commonly encountered environment such as a busy traffic intersection. This study aimed to quantify the accuracy and precision (repeatability) of free-field human sound localization as a function of advancing age. Head-fixed young, middle-aged, and elderly listeners localized band-passed targets using visually guided manual laser pointing in a darkened room. Targets were presented in the frontal field by a robotically controlled loudspeaker assembly hidden behind a screen. Broadband targets (0.1–20 kHz) activated all auditory spatial channels, whereas low-pass and high-pass targets selectively isolated interaural time and intensity difference cues (ITDs and IIDs) for azimuth and high-frequency spectral cues for elevation. In addition, to assess the upper frequency limit of ITD utilization across age groups more thoroughly, narrowband targets were presented at 250-Hz intervals from 250 Hz up to ∼2 kHz. Young subjects generally showed horizontal overestimation (overshoot) and vertical underestimation (undershoot) of auditory target location, and this effect varied with frequency band. Accuracy and/or precision worsened in older individuals for broadband, high-pass, and low-pass targets, reflective of peripheral but also central auditory aging. In addition, compared with young adults, middle-aged, and elderly listeners showed pronounced horizontal localization deficiencies (imprecision) for narrowband targets within 1,250–1,575 Hz, congruent with age-related central decline in auditory temporal processing. Findings underscore the distinct neural processing of the auditory spatial cues in sound localization and their selective deterioration with advancing age. PMID:21368004

  7. A real-time biomimetic acoustic localizing system using time-shared architecture

    NASA Astrophysics Data System (ADS)

    Nourzad Karl, Marianne; Karl, Christian; Hubbard, Allyn

    2008-04-01

    In this paper a real-time sound source localization system is proposed, based on previously developed mammalian auditory models. Traditionally, following the models, which use interaural time delay (ITD) estimates, the amount of parallel computation needed by a system to achieve real-time sound source localization is a limiting factor and a design challenge for hardware implementations. Therefore a new approach using a time-shared architecture is introduced. The proposed architecture is a purely sample-driven digital system, and it closely follows the continuous-time approach described in the models. Rather than dedicating hardware to each frequency channel, a specialized core channel shared across all frequency bands is used. With an optimized execution time much shorter than the system's sample period, the time-shared solution processes the same number of virtual channels as the dedicated channels of the traditional approach. Hence, the time-shared approach achieves a highly economical and flexible implementation using minimal silicon area. These aspects are particularly important for efficient hardware implementation of a real-time biomimetic sound source localization system.
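
    The channel-sharing idea can be sketched in software; the one-pole smoothers below are toy stand-ins for the model's cochlear channels, not the actual auditory filters:

```python
import numpy as np

def time_shared_filterbank(x, coeffs):
    """Sketch of the time-shared idea: a single core (the inner loop)
    updates every frequency channel's state once per incoming sample,
    instead of one dedicated filter running per channel. coeffs holds
    a one-pole smoothing coefficient per virtual channel."""
    n_ch = len(coeffs)
    state = np.zeros(n_ch)
    out = np.zeros((len(x), n_ch))
    for n, sample in enumerate(x):     # for each incoming sample...
        for ch in range(n_ch):         # ...the shared core visits each virtual channel
            state[ch] = coeffs[ch] * state[ch] + (1 - coeffs[ch]) * sample
            out[n, ch] = state[ch]
    return out

# Impulse input: each virtual channel produces its own decaying response
x = np.zeros(16)
x[0] = 1.0
responses = time_shared_filterbank(x, [0.5, 0.9, 0.99])
```

    The hardware analogue is that the inner loop must complete for all channels within one sample period, which is the execution-time constraint the paper's architecture optimizes.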

  8. Recovery of Neonatal Head Turning to Decreased Sound Pressure Level.

    ERIC Educational Resources Information Center

    Tarquinio, Nancy; And Others

    1990-01-01

    Investigated newborns' responses to decreased sound pressure level (SPL) by means of a localized head turning habituation procedure. Findings, which demonstrated recovery of neonatal head turning to decreased SPL, were inconsistent with the selective receptor adaptation model. (RH)

  9. Estimating the Intended Sound Direction of the User: Toward an Auditory Brain-Computer Interface Using Out-of-Head Sound Localization

    PubMed Central

    Nambu, Isao; Ebisawa, Masashi; Kogure, Masumi; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro

    2013-01-01

    The auditory Brain-Computer Interface (BCI) using electroencephalograms (EEG) is a subject of intensive study. As a cue, auditory BCIs can deal with many of the characteristics of stimuli such as tone, pitch, and voices. Spatial information on auditory stimuli also provides useful information for a BCI. However, in a portable system, virtual auditory stimuli have to be presented spatially through earphones or headphones, instead of loudspeakers. We investigated the possibility of an auditory BCI using the out-of-head sound localization technique, which enables us to present virtual auditory stimuli to users from any direction, through earphones. The feasibility of a BCI using this technique was evaluated in an EEG oddball experiment and offline analysis. A virtual auditory stimulus was presented to the subject from one of six directions. Using a support vector machine, we were able to classify whether the subject attended the direction of a presented stimulus from EEG signals. The mean accuracy across subjects was 70.0% in the single-trial classification. When we used trial-averaged EEG signals as inputs to the classifier, the mean accuracy across seven subjects reached 89.5% (for 10-trial averaging). Further analysis showed that the P300 event-related potential responses from 200 to 500 ms in central and posterior regions of the brain contributed to the classification. In comparison with the results obtained from a loudspeaker experiment, we confirmed that stimulus presentation by out-of-head sound localization achieved similar event-related potential responses and classification performances. These results suggest that out-of-head sound localization enables us to provide a high-performance and loudspeaker-less portable BCI system. PMID:23437338

  10. Relevance of Spectral Cues for Auditory Spatial Processing in the Occipital Cortex of the Blind

    PubMed Central

    Voss, Patrice; Lepore, Franco; Gougoux, Frédéric; Zatorre, Robert J.

    2011-01-01

    We have previously shown that some blind individuals can localize sounds more accurately than their sighted counterparts when one ear is obstructed, and that this ability is strongly associated with occipital cortex activity. Given that spectral cues are important for monaurally localizing sounds when one ear is obstructed, and that blind individuals are more sensitive to small spectral differences, we hypothesized that enhanced use of spectral cues via occipital cortex mechanisms could explain the better performance of blind individuals in monaural localization. Using positron-emission tomography (PET), we scanned blind and sighted persons as they discriminated between sounds originating from a single spatial position, but with different spectral profiles that simulated different spatial positions based on head-related transfer functions. We show here that a sub-group of early blind individuals showing superior monaural sound localization abilities performed significantly better than any other group on this spectral discrimination task. For all groups, performance was best for stimuli simulating peripheral positions, consistent with the notion that spectral cues are more helpful for discriminating peripheral sources. PET results showed that all blind groups showed cerebral blood flow increases in the occipital cortex; but this was also the case in the sighted group. A voxel-wise covariation analysis showed that more occipital recruitment was associated with better performance across all blind subjects but not the sighted. An inter-regional covariation analysis showed that the occipital activity in the blind covaried with that of several frontal and parietal regions known for their role in auditory spatial processing. 
Overall, these results support the notion that the superior ability of a sub-group of early-blind individuals to localize sounds is mediated by their superior ability to use spectral cues, and that this ability is subserved by cortical processing in the occipital cortex. PMID:21716600

  11. Sparse representation of Gravitational Sound

    NASA Astrophysics Data System (ADS)

    Rebollo-Neira, Laura; Plastino, A.

    2018-03-01

    Gravitational Sound clips produced by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Massachusetts Institute of Technology (MIT) are considered within the particular context of data reduction. We advance a procedure to this effect and show that these types of signals can be approximated with high quality using significantly fewer elementary components than those required within the standard orthogonal basis framework. Furthermore, a local measure of sparsity is shown to render meaningful information about the variation of a signal along time, by generating a set of local sparsity values which is much smaller than the dimension of the signal. This point is further illustrated by recourse to a more complex signal, generated by Milde Science Communication to divulge Gravitational Sound in the form of a ring tone.
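
    As an illustration of a windowed sparsity profile (the l1/l2 measure below is a common stand-in, not necessarily the measure used in this paper):

```python
import numpy as np

def local_sparsity(signal, win):
    """Windowed sparsity profile: for each non-overlapping window, the
    l1/l2 ratio, which is 1 for a single nonzero coefficient and
    sqrt(win) for a flat (maximally dense) window. The profile has
    far fewer values than the signal has samples."""
    vals = []
    for start in range(0, len(signal) - win + 1, win):
        seg = np.abs(signal[start:start + win])
        l2 = np.linalg.norm(seg)
        vals.append(np.sum(seg) / l2 if l2 > 0 else 0.0)
    return np.array(vals)

# A lone spike gives a sparse window (value 1); a constant block gives
# dense windows (value sqrt(256) = 16); silence gives 0.
x = np.zeros(1024)
x[100] = 1.0
x[512:1024] = 1.0
prof = local_sparsity(x, 256)   # 4 values summarizing 1024 samples
```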

  12. [Functional anatomy of the cochlear nerve and the central auditory system].

    PubMed

    Simon, E; Perrot, X; Mertens, P

    2009-04-01

    The auditory pathways are a system of afferent fibers (through the cochlear nerve) and efferent fibers (through the vestibular nerve), which are not limited to a simple information transmitting system but create a veritable integration of the sound stimulus at the different levels, by analyzing its three fundamental elements: frequency (pitch), intensity, and spatial localization of the sound source. From the cochlea to the primary auditory cortex, the auditory fibers are organized anatomically in relation to the characteristic frequency of the sound signal that they transmit (tonotopy). Coding the intensity of the sound signal is based on temporal recruitment (the number of action potentials) and spatial recruitment (the number of inner hair cells recruited near the cell of the frequency that is characteristic of the stimulus). Because of binaural hearing, commissural pathways at each level of the auditory system and integration of the phase shift and the difference in intensity between signals coming from both ears, spatial localization of the sound source is possible. Finally, through the efferent fibers in the vestibular nerve, higher centers exercise control over the activity of the cochlea and adjust the peripheral hearing organ to external sound conditions, thus protecting the auditory system or increasing sensitivity by the attention given to the signal.

  13. Behavioural sensitivity to binaural spatial cues in ferrets: evidence for plasticity in the duplex theory of sound localization

    PubMed Central

    Keating, Peter; Nodal, Fernando R; King, Andrew J

    2014-01-01

    For over a century, the duplex theory has guided our understanding of human sound localization in the horizontal plane. According to this theory, the auditory system uses interaural time differences (ITDs) and interaural level differences (ILDs) to localize low-frequency and high-frequency sounds, respectively. Whilst this theory successfully accounts for the localization of tones by humans, some species show very different behaviour. Ferrets are widely used for studying both clinical and fundamental aspects of spatial hearing, but it is not known whether the duplex theory applies to this species or, if so, to what extent the frequency range over which each binaural cue is used depends on acoustical or neurophysiological factors. To address these issues, we trained ferrets to lateralize tones presented over earphones and found that the frequency dependence of ITD and ILD sensitivity broadly paralleled that observed in humans. Compared with humans, however, the transition between ITD and ILD sensitivity was shifted toward higher frequencies. We found that the frequency dependence of ITD sensitivity in ferrets can partially be accounted for by acoustical factors, although neurophysiological mechanisms are also likely to be involved. Moreover, we show that binaural cue sensitivity can be shaped by experience, as training ferrets on a 1-kHz ILD task resulted in significant improvements in thresholds that were specific to the trained cue and frequency. Our results provide new insights into the factors limiting the use of different sound localization cues and highlight the importance of sensory experience in shaping the underlying neural mechanisms. PMID:24256073
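
    The ITD half of the duplex theory can be illustrated with a simple cross-correlation estimate (the sampling rate, tone frequency, and delay below are illustrative):

```python
import numpy as np

def itd_estimate(left, right, fs):
    """Time by which the right-ear signal lags the left, estimated as
    the lag maximizing the interaural cross-correlation."""
    corr = np.correlate(right, left, mode="full")
    lag = np.argmax(corr) - (len(left) - 1)
    return lag / fs

fs = 48_000
t = np.arange(0, 0.05, 1 / fs)
sig = np.sin(2 * np.pi * 500 * t)   # low-frequency tone: the ITD regime
delay = 12                          # samples (~250 us): source off to one side
right = np.roll(sig, delay)         # right ear receives a delayed copy
itd = itd_estimate(sig, right, fs)
```

    For high-frequency tones the carrier ITD becomes ambiguous (the correlation peaks repeat every period), which is one reason the auditory system switches to level differences there.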

  14. Local Application of Sodium Salicylate Enhances Auditory Responses in the Rat’s Dorsal Cortex of the Inferior Colliculus

    PubMed Central

    Patel, Chirag R.; Zhang, Huiming

    2014-01-01

    Sodium salicylate (SS) is a widely used medication with side effects on hearing. In order to understand these side effects, we recorded sound-driven local-field potentials in a neural structure, the dorsal cortex of the inferior colliculus (ICd). Using a microiontophoretic technique, we applied SS at sites of recording and studied how auditory responses were affected by the drug. Furthermore, we studied how the responses were affected by combined local application of SS and an agonists/antagonist of the type-A or type-B γ-aminobutyric acid receptor (GABAA or GABAB receptor). Results revealed that SS applied alone enhanced auditory responses in the ICd, indicating that the drug had local targets in the structure. Simultaneous application of the drug and a GABAergic receptor antagonist synergistically enhanced amplitudes of responses. The synergistic interaction between SS and a GABAA receptor antagonist had a relatively early start in reference to the onset of acoustic stimulation and the duration of this interaction was independent of sound intensity. The interaction between SS and a GABAB receptor antagonist had a relatively late start, and the duration of this interaction was dependent on sound intensity. Simultaneous application of the drug and a GABAergic receptor agonist produced an effect different from the sum of effects produced by the two drugs released individually. These differences between simultaneous and individual drug applications suggest that SS modified GABAergic inhibition in the ICd. Our results indicate that SS can affect sound-driven activity in the ICd by modulating local GABAergic inhibition. PMID:25452744

  15. DXL: A Sounding Rocket Mission for the Study of Solar Wind Charge Exchange and Local Hot Bubble X-Ray Emission

    NASA Technical Reports Server (NTRS)

    Galeazzi, M.; Prasai, K.; Uprety, Y.; Chiao, M.; Collier, M. R.; Koutroumpa, D.; Porter, F. S.; Snowden, S.; Cravens, T.; Robertson, I.; hide

    2011-01-01

    The Diffuse X-rays from the Local galaxy (DXL) mission is an approved sounding rocket project with a first launch scheduled around December 2012. Its goal is to identify and separate the X-ray emission generated by solar wind charge exchange from that of the Local Hot Bubble to improve our understanding of both. With 1,000 square centimeters of proportional counters and a grasp of about 10 square centimeter steradians in both the 1/4 and 3/4 keV bands, DXL will achieve in a 5-minute flight what cannot be achieved by current and future X-ray satellites.

  16. Psychoacoustics

    NASA Astrophysics Data System (ADS)

    Moore, Brian C. J.

    Psychoacoustics is concerned with the relationships between the physical characteristics of sounds and their perceptual attributes. This chapter describes: the absolute sensitivity of the auditory system for detecting weak sounds and how that sensitivity varies with frequency; the frequency selectivity of the auditory system (the ability to resolve or hear out the sinusoidal components in a complex sound) and its characterization in terms of an array of auditory filters; the processes that influence the masking of one sound by another; the range of sound levels that can be processed by the auditory system; the perception and modeling of loudness; level discrimination; the temporal resolution of the auditory system (the ability to detect changes over time); the perception and modeling of pitch for pure and complex tones; the perception of timbre for steady and time-varying sounds; the perception of space and sound localization; and the mechanisms underlying auditory scene analysis that allow the construction of percepts corresponding to individual sound sources when listening to complex mixtures of sounds.

  17. Re-Sonification of Objects, Events, and Environments

    NASA Astrophysics Data System (ADS)

    Fink, Alex M.

    Digital sound synthesis allows the creation of a great variety of sounds. Focusing on interesting or ecologically valid sounds for music, simulation, aesthetics, or other purposes limits the otherwise vast digital audio palette. Tools for creating such sounds vary from arbitrary methods of altering recordings to precise simulations of vibrating objects. In this work, methods of sound synthesis by re-sonification are considered. Re-sonification, herein, refers to the general process of analyzing, possibly transforming, and resynthesizing or reusing recorded sounds in meaningful ways, to convey information. Applied to soundscapes, re-sonification is presented as a means of conveying activity within an environment. Applied to the sounds of objects, this work examines modeling the perception of objects as well as their physical properties and the ability to simulate interactive events with such objects. To create soundscapes to re-sonify geographic environments, a method of automated soundscape design is presented. Using recorded sounds that are classified based on acoustic, social, semantic, and geographic information, this method produces stochastically generated soundscapes to re-sonify selected geographic areas. Drawing on prior knowledge, local sounds and those deemed similar comprise a locale's soundscape. In the context of re-sonifying events, this work examines processes for modeling and estimating the excitations of sounding objects. These include plucking, striking, rubbing, and any interaction that imparts energy into a system, affecting the resultant sound. A method of estimating a linear system's input, constrained to a signal-subspace, is presented and applied toward improving the estimation of percussive excitations for re-sonification. To work toward robust recording-based modeling and re-sonification of objects, new implementations of banded waveguide (BWG) models are proposed for object modeling and sound synthesis. 
Previous implementations of BWGs use arbitrary model parameters and may produce a range of simulations that do not match digital waveguide or modal models of the same design. Subject to linear excitations, some models proposed here behave identically to other equivalently designed physical models. Under nonlinear interactions, such as bowing, many of the proposed implementations exhibit improvements in the attack characteristics of synthesized sounds.

  18. The opponent channel population code of sound location is an efficient representation of natural binaural sounds.

    PubMed

    Młynarski, Wiktor

    2015-05-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. Peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a "panoramic" code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left and right ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. Spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.

  19. Reduced order modeling of head related transfer functions for virtual acoustic displays

    NASA Astrophysics Data System (ADS)

    Willhite, Joel A.; Frampton, Kenneth D.; Grantham, D. Wesley

    2003-04-01

    The purpose of this work is to improve the computational efficiency of virtual acoustic applications by creating and testing reduced order models of the head related transfer functions used in localizing sound sources. State space models of varying order were generated from zero-elevation Head Related Impulse Responses (HRIRs) using Kung's singular value decomposition (SVD) technique. The inputs to the models are the desired azimuths of the virtual sound sources (from minus 90 deg to plus 90 deg, in 10 deg increments) and the outputs are the left and right ear impulse responses. Trials were conducted in an anechoic chamber in which subjects were exposed to real sounds emitted by individual speakers across a numbered speaker array, phantom sources generated from the original HRIRs, and phantom sound sources generated with the different reduced order state space models. The error in the perceived direction of the phantom sources generated from the reduced order models was compared to errors in localization using the original HRIRs.
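
    The reduction step can be sketched with the Hankel-matrix SVD at the core of Kung's method (a single-input, single-output toy example; the paper's models map desired azimuths to two-ear outputs):

```python
import numpy as np

def kung_reduce(h, order):
    """Fit a discrete state-space model (A, B, C, D) of the given order
    to an impulse response h, via SVD of its Hankel matrix: the core
    idea of Kung's method / the Eigensystem Realization Algorithm."""
    n = (len(h) - 1) // 2
    H0 = np.array([[h[i + j + 1] for j in range(n)] for i in range(n)])
    H1 = np.array([[h[i + j + 2] for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(H0)
    sqrt_s = np.sqrt(s[:order])
    obs = U[:, :order] * sqrt_s          # observability factor  O = U S^(1/2)
    con = (Vt[:order].T * sqrt_s).T      # controllability factor Q = S^(1/2) V^T
    A = np.linalg.pinv(obs) @ H1 @ np.linalg.pinv(con)
    B = con[:, :1]
    C = obs[:1, :]
    return A, B, C, h[0]

# Toy check: a rank-2 impulse response is recovered by a 2nd-order model.
A0 = np.array([[0.8, 0.2], [-0.1, 0.7]])
B0 = np.array([[1.0], [0.0]])
C0 = np.array([[1.0, 0.5]])
h = np.array([0.0] + [(C0 @ np.linalg.matrix_power(A0, k) @ B0).item()
                      for k in range(40)])
A, B, C, D = kung_reduce(h, order=2)
```

    For a real HRIR the singular values decay gradually rather than dropping to zero, and the chosen order trades model size against localization fidelity, which is exactly what the listening trials evaluate.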

  20. The GISS sounding temperature impact test

    NASA Technical Reports Server (NTRS)

    Halem, M.; Ghil, M.; Atlas, R.; Susskind, J.; Quirk, W. J.

    1978-01-01

    The impact of DST 5 and DST 6 satellite sounding data on mid-range forecasting was studied. The GISS temperature sounding technique, the GISS time-continuous four-dimensional assimilation procedure based on optimal statistical analysis, the GISS forecast model, and the verification techniques developed, including the impact on local precipitation forecasts, are described. It is found that the impact of sounding data was substantial and beneficial for the winter test period, Jan. 29 - Feb. 21, 1976. Forecasts started from initial states obtained with the aid of satellite data showed a mean improvement of about 4 points in the 48- and 72-hour S1 scores as verified over North America and Europe. This corresponds to an 8 to 12 hour improvement in forecast range at 48 hours. An automated local precipitation forecast model applied to 128 cities in the United States showed on average a 15% improvement when satellite data were used for numerical forecasts. The improvement was 75% in the Midwest.

  1. Understanding and mimicking the dual optimality of the fly ear

    NASA Astrophysics Data System (ADS)

    Liu, Haijun; Currano, Luke; Gee, Danny; Helms, Tristan; Yu, Miao

    2013-08-01

    The fly Ormia ochracea has the remarkable ability, given an eardrum separation of only 520 μm, to pinpoint the 5 kHz chirp of its cricket host. Previous research showed that the two eardrums are mechanically coupled, which amplifies the directional cues. We have now performed a mechanics and optimization analysis which reveals that the right coupling strength is key: it results in simultaneously optimized directional sensitivity and directional cue linearity at 5 kHz. We next demonstrated that this dual optimality is replicable in a synthetic device and can be tailored for a desired frequency. Finally, we demonstrated a miniature sensor endowed with this dual optimality at 8 kHz, with unparalleled sound-localization performance. This work provides a quantitative and mechanistic explanation for the fly's sound-localization ability from a new perspective, and it provides a framework for the development of fly-ear-inspired sensors to overcome a previously insurmountable size constraint in engineered sound-localization systems.
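
    The amplification mechanism can be illustrated with the classic two-degree-of-freedom model of the Ormia ear: two driven mass-spring-damper eardrums joined by a coupling spring and damper. The sketch below uses normalized units and arbitrary illustrative parameter values (not fitted to the fly); it shows how the coupling converts a tiny interaural phase difference into an amplitude difference.

```python
import numpy as np

def eardrum_response(omega, tau, m=1.0, k=1.0, c=0.05, k3=0.5, c3=0.05):
    """Steady-state complex displacements (x1, x2) of two eardrums, each a
    mass-spring-damper (m, k, c), joined by coupling spring k3 and damper c3.
    The drive on ear 2 lags by the interaural delay tau; the two drives have
    equal amplitude, so any amplitude difference between x1 and x2 is
    produced by the mechanical coupling."""
    F1, F2 = 1.0, np.exp(-1j * omega * tau)
    Z = -m * omega**2 + 1j * omega * (c + c3) + (k + k3)  # diagonal impedance
    Cc = 1j * omega * c3 + k3                             # coupling impedance
    det = Z**2 - Cc**2
    x1 = (Z * F1 + Cc * F2) / det
    x2 = (Cc * F1 + Z * F2) / det
    return x1, x2
```

    With the coupling removed (k3 = c3 = 0) the two magnitudes are equal regardless of tau, so the model reproduces the paper's central point: the coupling itself creates the directional amplitude cue, and its strength governs how large that cue becomes.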

  2. Statistics of natural binaural sounds.

    PubMed

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, in which interaural phase (IPD) and level (ILD) disparities in narrowly tuned frequency channels are used to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping, sources. The statistics of binaural cues therefore depend on the acoustic properties and spatial configuration of the environment. The distributions of naturally encountered cues and their dependence on the physical properties of an auditory scene have not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed the empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values, than can be predicted from head filtering properties. To understand the complexity of the binaural hearing task in the natural environment, the sound waveforms were analyzed by Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
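
    The two cue types can be computed per frequency bin from a binaural recording. The single-frame FFT sketch below is a minimal stand-in for the narrowly tuned channels mentioned above; names and parameters are illustrative.

```python
import numpy as np

def binaural_cues(left, right, fs, nfft=1024):
    """Estimate interaural level differences (ILD, dB) and interaural phase
    differences (IPD, rad, wrapped to (-pi, pi]) per frequency bin from one
    windowed frame of a binaural recording."""
    win = np.hanning(nfft)
    L = np.fft.rfft(left[:nfft] * win)
    R = np.fft.rfft(right[:nfft] * win)
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    ild = 20 * np.log10((np.abs(L) + 1e-12) / (np.abs(R) + 1e-12))
    ipd = np.angle(L * np.conj(R))
    return freqs, ild, ipd
```

    Histograms of these per-bin values over many frames of a recording give empirical cue distributions of the kind the study analyzes.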

  3. Statistics of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, in which interaural phase (IPD) and level (ILD) disparities in narrowly tuned frequency channels are used to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping, sources. The statistics of binaural cues therefore depend on the acoustic properties and spatial configuration of the environment. The distributions of naturally encountered cues and their dependence on the physical properties of an auditory scene have not been studied before. In the present work we analyzed the statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed the empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values, than can be predicted from head filtering properties. To understand the complexity of the binaural hearing task in the natural environment, the sound waveforms were analyzed by Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction. PMID:25285658

  4. Integrating terrestrial and marine records of the LGM in McMurdo Sound, Antarctica: implications for grounded ice expansion, ice flow, and deglaciation of the Ross Sea Embayment

    NASA Astrophysics Data System (ADS)

    Christ, A. J.; Marchant, D. R.

    2017-12-01

    During the LGM, grounded glacier ice filled the Ross Embayment and deposited glacial drift on volcanic islands and peninsulas in McMurdo Sound, as well as along coastal regions of the Transantarctic Mountains (TAM), including the McMurdo Dry Valleys and Royal Society Range. The flow geometry and retreat history of this ice remain debated, with contrasting views yielding divergent implications both for the fundamental cause of Antarctic ice expansion and for the interaction and behavior of ice derived from East and West Antarctica during late Quaternary time. We present terrestrial geomorphologic evidence that enables the reconstruction of former ice elevations, ice-flow paths, and ice-marginal environments in McMurdo Sound. Radiocarbon dates of fossil algae interbedded with ice-marginal sediments provide a coherent timeline for local ice retreat. These data are integrated with marine-sediment records and multi-beam data to reconstruct the late glacial dynamics of grounded ice in McMurdo Sound and the western Ross Sea. The combined dataset suggests a dominance of ice flow toward the TAM in McMurdo Sound during all phases of glaciation, with thick, grounded ice at or near its maximum extent between 19.6 and 12.3 calibrated thousands of years before present (cal. ka). Our data show no significant advance of locally derived ice from the TAM into McMurdo Sound, consistent with the assertion that the Late Pleistocene expansion of grounded ice in McMurdo Sound, and throughout the wider Ross Embayment, occurred in response to lower eustatic sea level and the resulting advance of marine-based outlet glaciers and ice streams (and perhaps also reduced oceanic heat flux), rather than to local increases in precipitation and ice accumulation. Finally, when combined with allied data across the wider Ross Embayment, which show that widespread deglaciation outside McMurdo Sound did not commence until 13.1 ka, the implication is that retreat of grounded glacier ice in the Ross Embayment did not contribute significantly to sea-level rise during Meltwater Pulse 1a (14.0-14.5 ka).

  5. A review of the perceptual effects of hearing loss for frequencies above 3 kHz.

    PubMed

    Moore, Brian C J

    2016-12-01

    Hearing loss caused by exposure to intense sounds usually has its greatest effects on audiometric thresholds at 4 and 6 kHz. However, in several countries compensation for occupational noise-induced hearing loss is calculated using the average of audiometric thresholds for selected frequencies up to 3 kHz, based on the implicit assumption that hearing loss for frequencies above 3 kHz has no material adverse consequences. This paper assesses whether this assumption is correct. Studies are reviewed that evaluate the role of hearing for frequencies above 3 kHz. Several studies show that frequencies above 3 kHz are important for the perception of speech, especially when background sounds are present. Hearing at high frequencies is also important for sound localization, especially for resolving front-back confusions. Hearing for frequencies above 3 kHz is important for the ability to understand speech in background sounds and for the ability to localize sounds. The audiometric threshold at 4 kHz and perhaps 6 kHz should be taken into account when assessing hearing in a medico-legal context.

  6. Demonstrations of simple and complex auditory psychophysics for multiple platforms and environments

    NASA Astrophysics Data System (ADS)

    Horowitz, Seth S.; Simmons, Andrea M.; Blue, China

    2005-09-01

    Sound is arguably the most widely perceived and pervasive form of energy in our world, and among the least understood, in part due to the complexity of its underlying principles. A series of interactive displays has been developed that demonstrates how sound propagates energy through space and illustrates psychoacoustics: how listeners map the physical aspects of sound and vibration onto perception. These displays use auditory illusions and commonly experienced music and sound in novel presentations (using interactive computer algorithms) to show that what you hear is not always what you get. The demonstrations range from simple and complex auditory localization, illustrating why humans are bad at echolocation but excellent at determining the contents of auditory space, to auditory illusions that manipulate fine phase information and make listeners think their head is changing size. Another demonstration shows how auditory and visual localization coincide and how sound can be used to change visual tracking. These demonstrations are designed to run on a wide variety of student-accessible platforms, including web pages, stand-alone presentations, and even hardware-based systems for museum displays.

  7. Evaluation of Speech Intelligibility and Sound Localization Abilities with Hearing Aids Using Binaural Wireless Technology.

    PubMed

    Ibrahim, Iman; Parsa, Vijay; Macpherson, Ewan; Cheesman, Margaret

    2013-01-02

    Wireless synchronization of digital signal processing (DSP) features between the two hearing aids in a bilateral fitting is a fairly new technology. This technology is expected to preserve the differences in time and intensity between the two ears by co-ordinating bilateral DSP features such as multichannel compression, noise reduction, and adaptive directionality. The purpose of this study was to evaluate the benefits of wireless communication as implemented in two commercially available hearing aids. More specifically, this study measured the speech intelligibility and sound localization abilities of normal-hearing and hearing-impaired listeners using bilateral hearing aids with wireless synchronization of multichannel Wide Dynamic Range Compression (WDRC). Twenty subjects participated; 8 had normal hearing and 12 had bilaterally symmetrical sensorineural hearing loss. Each individual completed the Hearing in Noise Test (HINT) and a sound localization test with two types of stimuli. No specific benefit from wireless WDRC synchronization was observed for the HINT; however, hearing-impaired listeners had better localization with wireless synchronization. Binaural wireless technology in hearing aids may improve localization abilities, although the effect appears to be small at the initial fitting. With adaptation, hearing aids with synchronized signal processing may lead to an improvement in localization and speech intelligibility. Further research is required to demonstrate the effect of adaptation to hearing aids with synchronized signal processing on different aspects of auditory performance.

  8. Preface

    USGS Publications Warehouse

    Baum, Rex L.; Godt, Jonathan W.; Highland, Lynn M.

    2008-01-01

    The idea for Landslides and Engineering Geology of the Seattle, Washington, Areagrew out of a major landslide disaster that occurred in the Puget Sound region at the beginning of 1997. Unusually heavy snowfall in late December 1996 followed by warm, intense rainfall on 31 December through 2 January 1997 produced hundreds of damaging landslides in communities surrounding Puget Sound. This disaster resulted in significant efforts of the local geotechnical community and local governments to repair the damage and to mitigate the effects of future landslides. The magnitude of the disaster attracted the attention of the U.S. Geological Survey (USGS), which was just beginning a large multihazards project for Puget Sound. The USGS immediately added a regional study of landslides to that project. Soon a partnership formed between the City of Seattle and the USGS to assess landslide hazards of Seattle.

  9. Near-Field Sound Localization Based on the Small Profile Monaural Structure

    PubMed Central

    Kim, Youngwoong; Kim, Keonwook

    2015-01-01

    The acoustic wave around a sound source in the near-field area presents unconventional properties in the temporal, spectral, and spatial domains due to the propagation mechanism. This paper investigates a near-field sound localizer with a small-profile structure and a single microphone. The asymmetric structure around the microphone provides a distinctive spectral variation that can be recognized by a dedicated algorithm for directional localization. The physical structure consists of ten vertical pipes of different lengths, with rectangular wings positioned between the pipes in radial directions. Sound from an individual direction travels through the nearest open pipe, which generates a particular fundamental frequency according to its acoustic resonance. A modified cepstral parameter is used to evaluate the fundamental frequency. Once the system estimates the fundamental frequency of the received signal, the length of arrival and angle of arrival (AoA) are derived by the designed model. At azimuthal distances of 3–15 cm from the outer body of the pipes, extensive acoustic experiments with a 3D-printed structure show that the direct and side directions deliver average hit rates of 89% and 73%, respectively. Positions closer to the system yield higher accuracy, and the overall hit rate is 78% up to 15 cm away from the structure body. PMID:26580618
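
    The resonance readout can be illustrated with the textbook real-cepstrum fundamental-frequency estimator (the paper uses a modified cepstral parameter; the plain form below is only a sketch, with illustrative names and search limits):

```python
import numpy as np

def cepstral_f0(signal, fs, fmin=60.0, fmax=1000.0):
    """Estimate the fundamental frequency via the real cepstrum: the peak
    quefrency q (in samples) within [fs/fmax, fs/fmin] maps to f0 = fs / q."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    cepstrum = np.fft.irfft(np.log(spectrum + 1e-12))
    qmin, qmax = int(fs / fmax), int(fs / fmin)
    peak = qmin + int(np.argmax(cepstrum[qmin:qmax]))
    return fs / peak
```

    In the paper's setting, the estimated fundamental identifies which pipe resonated, and hence which direction the sound entered from.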

  10. GABAergic Neural Activity Involved in Salicylate-Induced Auditory Cortex Gain Enhancement

    PubMed Central

    Lu, Jianzhong; Lobarinas, Edward; Deng, Anchun; Goodey, Ronald; Stolzberg, Daniel; Salvi, Richard J.; Sun, Wei

    2011-01-01

    Although high doses of sodium salicylate impair cochlear function, it paradoxically enhances sound-evoked activity in the auditory cortex (AC) and augments acoustic startle reflex responses, neural and behavioral metrics associated with hyperexcitability and hyperacusis. To explore the neural mechanisms underlying salicylate-induced hyperexcitability and “increased central gain”, we examined the effects of γ-aminobutyric acid (GABA) receptor agonists and antagonists on salicylate-induced hyperexcitability in the AC and startle reflex responses. Consistent with our previous findings, local or systemic application of salicylate significantly increased the amplitude of sound-evoked AC neural activity, but generally reduced spontaneous activity in the AC. Systemic injection of salicylate also significantly increased the acoustic startle reflex. S-baclofen or R-baclofen, GABA-B agonists, which suppressed sound-evoked AC neural firing rate and local field potentials, also suppressed the salicylate-induced enhancement of the AC field potential and the acoustic startle reflex. Local application of vigabatrin, which enhances GABA concentration in the brain, suppressed the salicylate-induced enhancement of AC firing rate. Systemic injection of vigabatrin also reduced the salicylate-induced enhancement of the acoustic startle reflex. Collectively, these results suggest that the sound-evoked behavioral and neural hyperactivity induced by salicylate may arise from a salicylate-induced suppression of GABAergic inhibition in the AC. PMID:21664433

  11. How to generate a sound-localization map in fish

    NASA Astrophysics Data System (ADS)

    van Hemmen, J. Leo

    2015-03-01

    How sound localization is represented in the fish brain is a research field largely untouched by theoretical analysis and computational modeling. Yet there is experimental evidence that the axes of particle acceleration due to underwater sound are represented through a map in the midbrain of fish, e.g., in the torus semicircularis of the rainbow trout (Wubbels et al. 1997). How does such a map arise? Fish perceive pressure gradients with their three otolithic organs, each of which comprises a dense, calcareous stone that is bathed in endolymph and attached to a sensory epithelium. In rainbow trout, the sensory epithelia of the left and right utricle lie in the horizontal plane and consist of hair cells with equally distributed preferred orientations. We model the neuronal response of this system on the basis of Schuijf's vector detection hypothesis (Schuijf et al. 1975) and introduce a temporal spike code of sound direction, where the optimality of hair cell orientation θj with respect to the acceleration direction θs is mapped onto spike phases via a von Mises distribution. By learning to tune in to the earliest synchronized activity, nerve cells in the midbrain generate a map under the supervision of a locally excitatory, yet globally inhibitory, visual teacher. Work done in collaboration with Daniel Begovic. Partially supported by BCCN - Munich.
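
    The phase-mapping idea can be caricatured in a few lines: the better a hair cell's preferred orientation matches the acceleration axis, the closer to zero phase its spikes fall, with von Mises jitter. This is a toy reading of the proposed code, not the authors' model; the concentration parameter, seed, and names are illustrative choices. The angle is doubled because an acceleration axis has period π.

```python
import numpy as np

def spike_phases(theta_hair, theta_source, kappa=4.0, n_spikes=1000, seed=0):
    """Draw spike phases for a hair cell with preferred orientation theta_hair
    driven by particle acceleration along the axis theta_source. The mean
    phase equals the orientation mismatch (angle doubled, so the code is
    insensitive to axis sign); best-matched cells therefore fire earliest."""
    rng = np.random.default_rng(seed)
    mismatch = abs(np.angle(np.exp(2j * (theta_hair - theta_source))))
    return rng.vonmises(mu=mismatch, kappa=kappa, size=n_spikes)
```

    A downstream neuron tuned to the earliest synchronized volley then inherits a preference for one acceleration axis, which is the raw material for the supervised map-learning step described above.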

  12. Experimental localization of an acoustic sound source in a wind-tunnel flow by using a numerical time-reversal technique.

    PubMed

    Padois, Thomas; Prax, Christian; Valeau, Vincent; Marx, David

    2012-10-01

    The possibility of using the time-reversal technique to localize acoustic sources in a wind-tunnel flow is investigated. While the technique is widespread, it has scarcely been used in aeroacoustics up to now. The proposed method consists of two steps: in a first, experimental step, the acoustic pressure fluctuations are recorded over a linear array of microphones; in a second, numerical step, the experimental data are time-reversed and used as input for a numerical code solving the linearized Euler equations. The simulation achieves the back-propagation of the waves from the array to the source and takes into account the effect of the mean flow on sound propagation. The ability of the method to localize a sound source in a typical wind-tunnel flow is first demonstrated using simulated data. A generic experiment is then set up in an anechoic wind tunnel to validate the proposed method with a flow at Mach number 0.11. Monopole sources are first considered, either monochromatic or with narrow- or wide-band frequency content. The source position is estimated well, with an error smaller than the wavelength. An application to a dipole sound source shows that this type of source is also very satisfactorily characterized.

  13. Olfaction and Hearing Based Mobile Robot Navigation for Odor/Sound Source Search

    PubMed Central

    Song, Kai; Liu, Qi; Wang, Qi

    2011-01-01

    Bionic technology provides new inspiration for mobile robot navigation, since it explores ways to imitate biological senses. In the present study, the challenging problem was how to fuse different biological senses and guide distributed robots to cooperate with each other in target searching. This paper integrates smell, hearing and touch to design an odor/sound tracking multi-robot system. The olfactory robot tracks the chemical odor plume step by step through information fusion from gas sensors and airflow sensors, while two hearing robots localize the sound source by time delay estimation (TDE) and the geometrical position of the microphone array. Furthermore, this paper presents a heading-direction-based mobile robot navigation algorithm, by which the robot can automatically and stably adjust its velocity and direction according to the deviation between the current heading direction, measured by a magnetoresistive sensor, and the expected heading direction acquired through the odor/sound localization strategies. Simultaneously, each robot can communicate with the others via a wireless sensor network (WSN). Experimental results show that the olfactory robot can pinpoint the odor source within a distance of 2 m, while the two hearing robots can localize and track the olfactory robot within 2 min. The devised multi-robot system can achieve target search with a considerable success ratio and high stability. PMID:22319401
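
    The hearing robots' TDE step can be sketched with the standard GCC-PHAT estimator plus the far-field bearing formula for a two-microphone pair. Microphone spacing, sampling rate, and names below are illustrative, not the paper's hardware.

```python
import numpy as np

def gcc_phat_delay(x, y, fs):
    """Delay (seconds) by which signal y lags signal x, estimated with the
    phase transform (GCC-PHAT): whiten the cross-spectrum so that only its
    phase, i.e. the time delay, shapes the correlation peak."""
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    R = Y * np.conj(X)
    cc = np.fft.irfft(R / (np.abs(R) + 1e-12), n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (int(np.argmax(np.abs(cc))) - max_shift) / fs

def doa_from_delay(tau, mic_distance, c=343.0):
    """Far-field bearing (rad, 0 = broadside) from the inter-microphone delay."""
    return np.arcsin(np.clip(tau * c / mic_distance, -1.0, 1.0))
```

    Two such bearings from spatially separated microphone pairs intersect at the source position, which is how an array geometry turns delays into a location.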

  14. 33 CFR 154.1125 - Additional response plan requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Prince William Sound, Alaska § 154.1125 Additional response plan requirements. (a) The owner or operator of a TAPAA facility shall include the following information in the Prince William Sound appendix to... for personnel, including local residents and fishermen, from the following locations in Prince William...

  15. dTULP, the Drosophila melanogaster Homolog of Tubby, Regulates Transient Receptor Potential Channel Localization in Cilia

    PubMed Central

    Shim, Jaewon; Han, Woongsu; Lee, Jinu; Bae, Yong Chul; Chung, Yun Doo; Kim, Chul Hoon; Moon, Seok Jun

    2013-01-01

    Mechanically gated ion channels convert sound into an electrical signal for the sense of hearing. In Drosophila melanogaster, several transient receptor potential (TRP) channels have been implicated in this process. TRPN (NompC) and TRPV (Inactive) channels are localized in the distal and proximal ciliary zones of auditory receptor neurons, respectively. This segregated ciliary localization suggests distinct roles in auditory transduction. However, the regulation of this localization is not fully understood. Here we show that the Drosophila Tubby homolog, King tubby (hereafter called dTULP), regulates the ciliary localization of TRPs. dTULP-deficient flies show uncoordinated movement and a complete loss of sound-evoked action potentials. Inactive and NompC are mislocalized in the cilia of auditory receptor neurons in dTulp mutants, indicating that dTULP is required for proper ciliary membrane protein localization. This is the first demonstration that dTULP regulates TRP channel localization in cilia, and it suggests that dTULP regulates ciliary neurosensory functions. PMID:24068974

  16. Developing a system for blind acoustic source localization and separation

    NASA Astrophysics Data System (ADS)

    Kulkarni, Raghavendra

    This dissertation presents innovative methodologies for locating, extracting, and separating multiple incoherent sound sources in three-dimensional (3D) space, and applications of the time reversal (TR) algorithm to pinpoint the hyperactive neural activity inside the brain auditory structure that is correlated with tinnitus pathology. Specifically, an acoustic-modeling-based method is developed for locating arbitrary and incoherent sound sources in 3D space in real time using a minimal number of microphones, and the Point Source Separation (PSS) method is developed for extracting target signals from directly measured mixed signals. Combining these two approaches leads to a novel technology known as Blind Sources Localization and Separation (BSLS) that enables one to locate multiple incoherent sound signals in 3D space and separate the original individual sources simultaneously, based on the directly measured mixed signals. These technologies have been validated through numerical simulations and experiments conducted in various non-ideal environments with non-negligible, unspecified sound reflections and reverberation as well as interference from random background noise. Another innovation presented in this dissertation concerns applications of the TR algorithm to pinpoint the exact locations of hyperactive neurons in the brain auditory structure that are directly correlated with tinnitus perception. Benchmark tests conducted on normal rats have confirmed the localization results provided by the TR algorithm. The results demonstrate that the spatial resolution of this source localization can be as high as the micrometer level. This high-precision localization may lead to a paradigm shift in tinnitus diagnosis, which may in turn produce a more cost-effective treatment for tinnitus than any existing one.

  17. 12 CFR 228.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... obtained from community organizations, state, local, and tribal governments, economic development agencies... condition of the bank, the economic climate (national, regional, and local), safety and soundness...

  18. 12 CFR 25.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations...

  19. 12 CFR 345.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations...

  20. How Internally Coupled Ears Generate Temporal and Amplitude Cues for Sound Localization.

    PubMed

    Vedurmudi, A P; Goulet, J; Christensen-Dalsgaard, J; Young, B A; Williams, R; van Hemmen, J L

    2016-01-15

    In internally coupled ears, displacement of one eardrum creates pressure waves that propagate through air-filled passages in the skull and cause displacement of the opposing eardrum, and conversely. By modeling the membrane, passages, and propagating pressure waves, we show that internally coupled ears generate unique amplitude and temporal cues for sound localization. The magnitudes of both these cues are directionally dependent. The tympanic fundamental frequency segregates a low-frequency regime with constant time-difference magnification from a high-frequency domain with considerable amplitude magnification.

  1. Examination of propeller sound production using large eddy simulation

    NASA Astrophysics Data System (ADS)

    Keller, Jacob; Kumar, Praveen; Mahesh, Krishnan

    2018-06-01

    The flow field of a five-bladed marine propeller operating at design condition, obtained using large eddy simulation, is used to calculate the resulting far-field sound. The results of three acoustic formulations are compared, and the effects of the underlying assumptions are quantified. The integral form of the Ffowcs Williams and Hawkings (FW-H) equation is solved on the propeller surface, which is discretized into a collection of N radial strips. Further assumptions are made to reduce FW-H to a Curle acoustic analogy and a point-force dipole model. Results show that although the individual blades are strongly tonal in the rotor plane, the propeller is acoustically compact at low frequency and the tonal sound interferes destructively in the far field. The propeller is found to be acoustically compact for frequencies up to 100 times the rotation rate. The overall far-field acoustic signature is broadband. The locations of maximum sound occur along the axis of rotation, both upstream and downstream. The propeller hub is found to be a significant sound source for observers in the rotor plane, due to flow separation and interaction with the blade-root wakes. The majority of the propeller sound is generated by localized unsteadiness at the blade tip, caused by shedding of the tip vortex. Tonal blade sound is found to be caused by the periodic motion of the loaded blades. Turbulence created in the blade boundary layer is convected past the blade trailing edge, generating broadband noise along the blade. Acoustic energy is distributed among higher frequencies as the local Reynolds number increases radially along the blades. Sound source correlation and spectra are examined in the context of noise modeling.
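
    For reference, the point-force dipole model named above reduces, for an acoustically compact body, to the standard Curle-type far-field expression (textbook form, not quoted from the paper):

```latex
% Far-field pressure radiated by a compact body exerting unsteady force F_i
% on the fluid; c_0 is the sound speed and |x|/c_0 the retarded-time delay.
p'(\mathbf{x}, t) \approx \frac{x_i}{4\pi c_0 |\mathbf{x}|^2}
  \frac{\partial F_i}{\partial t}\left(t - \frac{|\mathbf{x}|}{c_0}\right),
\qquad \text{compactness: } \frac{\omega L}{c_0} \ll 1,
```

    where L is the source dimension. The paper's finding that the propeller remains compact up to 100 times the rotation rate bounds the frequency range over which such a reduction is justified.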

  2. How far away is plug 'n' play? Assessing the near-term potential of sonification and auditory display

    NASA Technical Reports Server (NTRS)

    Bargar, Robin

    1995-01-01

    The commercial music industry offers a broad range of plug 'n' play hardware and software, scaled both to music professionals and to a broad consumer market. The principles of sound synthesis utilized in these products are relevant to applications in virtual environments (VE). However, the closed architectures used in commercial music synthesizers prohibit low-level control during real-time rendering, and the algorithms and sounds themselves are not standardized from product to product. To bring sound into VE requires a new generation of open architectures designed for human-controlled performance from interfaces embedded in immersive environments. This presentation addresses the state of the sonic arts in scientific computing and VE, analyzes research challenges facing sound computation, and offers suggestions regarding tools we might expect to become available during the next few years. Classes of audio functionality in VE include sonification -- the use of sound to represent data from numerical models; 3D auditory display (spatialization and localization, also called externalization); navigation cues for positional orientation and for finding items or regions inside large spaces; voice recognition for controlling the computer; external communications between users in different spaces; and feedback to the user concerning his own actions or the state of the application interface. To effectively convey this considerable variety of signals, we apply principles of acoustic design to ensure the messages are neither confusing nor competing. We approach the design of auditory experience through a comprehensive structure for messages and message interplay that we refer to as an Automated Sound Environment. Our research addresses real-time sound synthesis, real-time signal processing and localization, interactive control of high-dimensional systems, and synchronization of sound and graphics.

  3. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    PubMed Central

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. The peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373

  4. Cherenkov sound on a surface of a topological insulator

    NASA Astrophysics Data System (ADS)

    Smirnov, Sergey

    2013-11-01

    Topological insulators are currently of considerable interest due to the peculiar electronic properties originating from helical states on their surfaces. Here we demonstrate that the sound excited by helical particles on surfaces of topological insulators has several exotic properties fundamentally different from sound propagating in nonhelical or even isotropic helical systems. Specifically, the sound may have strictly forward propagation, which is absent for isotropic helical states. Its dependence on the anisotropy of the realistic surface states is distinctive and may be used as an alternative experimental tool to measure the anisotropy strength. Backward, or anomalous, Cherenkov sound, fascinating from a fundamental point of view, is excited above the critical angle π/2 when the anisotropy exceeds a critical value. Strikingly, at strong anisotropy the sound localizes into a few forward and backward beams propagating along specific directions.

  5. Evolutionary trends in directional hearing

    PubMed Central

    Carr, Catherine E.; Christensen-Dalsgaard, Jakob

    2016-01-01

    Tympanic hearing is a true evolutionary novelty that arose in parallel within early tetrapods. We propose that in these tetrapods, selection for sound localization in air acted upon pre-existing directionally sensitive brainstem circuits, similar to those in fishes. Auditory circuits in birds and lizards resemble this ancestral, directionally sensitive framework. Despite this anatomical similarity, coding of sound source location differs between birds and lizards. In birds, brainstem circuits compute sound location from interaural cues. Lizards, however, have coupled ears and do not need to compute source location in the brain. Thus their neural processing of sound direction differs, although all show mechanisms for enhancing sound source directionality. Comparisons with mammals reveal similarly complex interactions between coding strategies and evolutionary history. PMID:27448850

  6. Behavior and modeling of two-dimensional precedence effect in head-unrestrained cats

    PubMed Central

    Ruhland, Janet L.; Yin, Tom C. T.

    2015-01-01

    The precedence effect (PE) is an auditory illusion that occurs when listeners localize nearly coincident and similar sounds from different spatial locations, such as a direct sound and its echo. It has mostly been studied in humans and animals with immobile heads in the horizontal plane; speaker pairs were often symmetrically located in the frontal hemifield. The present study examined the PE in head-unrestrained cats for a variety of paired-sound conditions along the horizontal, vertical, and diagonal axes. Cats were trained with operant conditioning to direct their gaze to the perceived sound location. Stereotypical PE-like behaviors were observed for speaker pairs placed in azimuth or diagonally in the frontal hemifield as the interstimulus delay was varied. For speaker pairs in the median sagittal plane, no clear PE-like behavior occurred. Interestingly, when speakers were placed diagonally in front of the cat, certain PE-like behavior emerged along the vertical dimension. However, PE-like behavior was not observed when both speakers were located in the left hemifield. A Hodgkin-Huxley model was used to simulate responses of neurons in the medial superior olive (MSO) to sound pairs in azimuth. The novel simulation incorporated a low-threshold potassium current and frequency mismatches to generate internal delays. The model exhibited distinct PE-like behavior, such as summing localization and localization dominance. The simulation indicated that certain encoding of the PE could have occurred before information reaches the inferior colliculus, and MSO neurons with binaural inputs having mismatched characteristic frequencies may play an important role. PMID:26133795

  7. Short-Latency, Goal-Directed Movements of the Pinnae to Sounds That Produce Auditory Spatial Illusions

    PubMed Central

    McClaine, Elizabeth M.; Yin, Tom C. T.

    2010-01-01

    The precedence effect (PE) is an auditory spatial illusion whereby two identical sounds presented from two separate locations with a delay between them are perceived as a fused single sound source whose position depends on the value of the delay. By training cats using operant conditioning to look at sound sources, we have previously shown that cats experience the PE similarly to humans. For delays less than ±400 μs, cats exhibit summing localization, the perception of a “phantom” sound located between the sources. Consistent with localization dominance, for delays from 400 μs to ∼10 ms, cats orient toward the leading source location only, with little influence of the lagging source. Finally, echo threshold was reached for delays >10 ms, where cats first began to orient to the lagging source. It has been hypothesized by some that the neural mechanisms that produce facets of the PE, such as localization dominance and echo threshold, must likely occur at cortical levels. To test this hypothesis, we measured both pinnae positions, which were not under any behavioral constraint, and eye position in cats and found that the pinnae orientations to stimuli that produce each of the three phases of the PE illusion were similar to the gaze responses. Although both eye and pinnae movements behaved in a manner that reflected the PE, because the pinnae moved with strikingly short latencies (∼30 ms), these data suggest a subcortical basis for the PE and that the cortex is not likely to be directly involved. PMID:19889848

  8. Short-latency, goal-directed movements of the pinnae to sounds that produce auditory spatial illusions.

    PubMed

    Tollin, Daniel J; McClaine, Elizabeth M; Yin, Tom C T

    2010-01-01

    The precedence effect (PE) is an auditory spatial illusion whereby two identical sounds presented from two separate locations with a delay between them are perceived as a fused single sound source whose position depends on the value of the delay. By training cats using operant conditioning to look at sound sources, we have previously shown that cats experience the PE similarly to humans. For delays less than ±400 μs, cats exhibit summing localization, the perception of a "phantom" sound located between the sources. Consistent with localization dominance, for delays from 400 μs to approximately 10 ms, cats orient toward the leading source location only, with little influence of the lagging source. Finally, echo threshold was reached for delays >10 ms, where cats first began to orient to the lagging source. It has been hypothesized by some that the neural mechanisms that produce facets of the PE, such as localization dominance and echo threshold, must likely occur at cortical levels. To test this hypothesis, we measured both pinnae positions, which were not under any behavioral constraint, and eye position in cats and found that the pinnae orientations to stimuli that produce each of the three phases of the PE illusion were similar to the gaze responses. Although both eye and pinnae movements behaved in a manner that reflected the PE, because the pinnae moved with strikingly short latencies (approximately 30 ms), these data suggest a subcortical basis for the PE and that the cortex is not likely to be directly involved.

  9. Effect of background noise on neuronal coding of interaural level difference cues in rat inferior colliculus

    PubMed Central

    Mokri, Yasamin; Worland, Kate; Ford, Mark; Rajan, Ramesh

    2015-01-01

    Humans can accurately localize sounds even in unfavourable signal-to-noise conditions. To investigate the neural mechanisms underlying this, we studied the effect of background wide-band noise on neural sensitivity to variations in interaural level difference (ILD), the predominant cue for sound localization in azimuth for high-frequency sounds, at the characteristic frequency of cells in rat inferior colliculus (IC). Binaural noise at high levels generally resulted in suppression of responses (55.8%), but at lower levels resulted in enhancement (34.8%) as well as suppression (30.3%). When recording conditions permitted, we then examined if any binaural noise effects were related to selective noise effects at each of the two ears, which we interpreted in light of well-known differences in input type (excitation and inhibition) from each ear shaping particular forms of ILD sensitivity in the IC. At high signal-to-noise ratios (SNR), in most ILD functions (41%), the effect of background noise appeared to be due to effects on inputs from both ears, while for a large percentage (35.8%) appeared to be accounted for by effects on excitatory input. However, as SNR decreased, change in excitation became the dominant contributor to the change due to binaural background noise (63.6%). These novel findings shed light on the IC neural mechanisms for sound localization in the presence of continuous background noise. They also suggest that some effects of background noise on encoding of sound location reported to be emergent in upstream auditory areas can also be observed at the level of the midbrain. PMID:25865218
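
    The ILD cue itself is simple to compute, and common background noise compresses it. The sketch below is illustrative only; the 6 dB head-shadow attenuation and noise level are assumptions, not values from the study:

```python
import numpy as np

def ild_db(left, right):
    """Interaural level difference in dB; positive when the left ear is more intense."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20.0 * np.log10(rms(left) / rms(right))

rng = np.random.default_rng(1)
sig = rng.standard_normal(4096)

# Hypothetical head shadow: the far (right) ear is attenuated by 6 dB.
left, right = sig, sig * 10 ** (-6.0 / 20.0)
ild_clean = ild_db(left, right)

# Adding the same wide-band background noise to both ears compresses the
# measured ILD, degrading the cue as signal-to-noise ratio falls.
noise = rng.standard_normal(4096) * 0.5
ild_noisy = ild_db(left + noise, right + noise)

print(f"quiet: {ild_clean:.1f} dB, in noise: {ild_noisy:.1f} dB")
```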

  10. Bionic Modeling of Knowledge-Based Guidance in Automated Underwater Vehicles.

    DTIC Science & Technology

    1987-06-24

    bugs and their foraging movements are heard by the sound of rustling leaves or rhythmic wing beats. ASYMMETRY OF EARS The faces of owls have captured...sound source without moving. The barn owl has binaural and monaural cues as well as cues that operate in relative motion when either the target or the...owl moves. Table 1 lists the cues. Table 1. Sound Localization Parameters Used by the Barn Owl. BINAURAL PARAMETERS: 1. the

  11. Lummi Bay Marina, Whatcom County, Washington. Draft Detailed Project Report and Draft Environmental Impact Statement.

    DTIC Science & Technology

    1983-12-01

    observations of gray whales from the waters inside of Washington including the eastern Strait of Juan de Fuca, the San Juan Islands, Puget Sound, and Hood...waters in winter. In the North Pacific this species is presently estimated to number about 17,000 animals. One fin whale was pursued in Puget Sound i...owns submerged lands from tideland elevation -4.5 feet MLLW to deep water in Puget Sound. The Lummi Tribe (local sponsor) owns Reservation lands above

  12. Broadband sound blocking in phononic crystals with rotationally symmetric inclusions.

    PubMed

    Lee, Joong Seok; Yoo, Sungmin; Ahn, Young Kwan; Kim, Yoon Young

    2015-09-01

    This paper investigates the feasibility of broadband sound blocking with rotationally symmetric, extensible inclusions introduced in phononic crystals. By gradually varying the size of four equally shaped inclusions, the phononic crystal experiences remarkable changes in its band-stop properties, such as shifting/widening of multiple Bragg bandgaps and evolution to resonance gaps. The extensions of the inclusions needed to block sound effectively can be determined for given incident frequencies by evaluating power transmission characteristics. By arraying a finite number of dissimilar unit cells, the resulting phononic crystal exhibits broadband sound blocking through the combined effects of multiple Bragg scattering and local resonances, even with a small number of cells.
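
    For orientation, the center frequency of the first Bragg gap scales inversely with the lattice constant. A back-of-envelope estimate, with a purely hypothetical lattice constant not taken from the paper:

```python
# First Bragg band-gap center for normal incidence along a lattice direction:
# half a wavelength fits in one lattice period, so f ~ c / (2a).
c_air = 343.0   # speed of sound in air, m/s
a = 0.05        # lattice constant, m (assumed for illustration)
f_bragg = c_air / (2.0 * a)
print(f"first Bragg gap near {f_bragg:.0f} Hz")
```

    Growing the inclusions shifts and widens such gaps, which is what the variable-extension design exploits.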

  13. Preliminary laboratory testing on the sound absorption of coupled cavity sonic crystal

    NASA Astrophysics Data System (ADS)

    Kristiani, R.; Yahya, I.; Harjana; Suparmi

    2016-11-01

    This paper focuses on the sound absorption performance of a coupled-cavity sonic crystal constructed from pairs of cylindrical tubes of different diameters. A laboratory test procedure following ASTM E1050 was conducted to measure the sound absorption of the sonic crystal elements. The test procedure was applied to a single coupled scatterer and also to a pair of similar structures. The results showed that the paired structure increases sound absorption over a wider absorption range. It also brings a practical advantage in setting the local Helmholtz resonance to an intended frequency.

  14. Source and listener directivity for interactive wave-based sound propagation.

    PubMed

    Mehra, Ravish; Antani, Lakulish; Kim, Sujeong; Manocha, Dinesh

    2014-04-01

    We present an approach to model dynamic, data-driven source and listener directivity for interactive wave-based sound propagation in virtual environments and computer games. Our directional source representation is expressed as a linear combination of elementary spherical harmonic (SH) sources. In the preprocessing stage, we precompute and encode the propagated sound fields due to each SH source. At runtime, we perform the SH decomposition of the varying source directivity interactively and compute the total sound field at the listener position as a weighted sum of precomputed SH sound fields. We propose a novel plane-wave decomposition approach based on higher-order derivatives of the sound field that enables dynamic HRTF-based listener directivity at runtime. We provide a generic framework to incorporate our source and listener directivity in any offline or online frequency-domain wave-based sound propagation algorithm. We have integrated our sound propagation system in Valve's Source game engine and use it to demonstrate realistic acoustic effects such as sound amplification, diffraction low-passing, scattering, localization, externalization, and spatial sound, generated by wave-based propagation of directional sources and listener in complex scenarios. We also present results from our preliminary user study.
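
    The decomposition step can be sketched for the axisymmetric case: project a directivity pattern onto real spherical harmonics by least squares, then reconstruct it as a weighted sum, mirroring how the runtime field is a weighted sum of precomputed per-harmonic fields. The cardioid pattern and two-term basis below are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Polar angles sampled over the sphere (axisymmetric directivity assumed).
theta = np.linspace(0.0, np.pi, 181)

# Real spherical harmonics of order m = 0 (the axisymmetric ones).
Y00 = np.full_like(theta, 0.5 / np.sqrt(np.pi))
Y10 = np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(theta)

# A cardioid pattern as a rough stand-in for a directional source.
D = 0.5 * (1.0 + np.cos(theta))

# Least-squares projection onto the SH basis (the "SH decomposition" step).
B = np.column_stack([Y00, Y10])
coeffs, *_ = np.linalg.lstsq(B, D, rcond=None)
recon = B @ coeffs
print("max reconstruction error:", np.max(np.abs(recon - D)))
```

    A cardioid lies exactly in the span of the monopole and dipole terms, so two coefficients reconstruct it to machine precision; real measured directivities need higher SH orders.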

  15. Distinct Correlation Structure Supporting a Rate-Code for Sound Localization in the Owl’s Auditory Forebrain

    PubMed Central

    2017-01-01

    While a topographic map of auditory space exists in the vertebrate midbrain, it is absent in the forebrain. Yet, both brain regions are implicated in sound localization. The heterogeneous spatial tuning of adjacent sites in the forebrain compared to the midbrain reflects different underlying circuitries, which is expected to affect the correlation structure, i.e., signal (similarity of tuning) and noise (trial-by-trial variability) correlations. Recent studies have drawn attention to the impact of response correlations on the information readout from a neural population. We thus analyzed the correlation structure in midbrain and forebrain regions of the barn owl’s auditory system. Tetrodes were used to record in the midbrain and two forebrain regions, Field L and the downstream auditory arcopallium (AAr), in anesthetized owls. Nearby neurons in the midbrain showed high signal and noise correlations (RNCs), consistent with shared inputs. As previously reported, Field L was arranged in random clusters of similarly tuned neurons. Interestingly, AAr neurons displayed homogeneous monotonic azimuth tuning, while response variability of nearby neurons was significantly less correlated than the midbrain. Using a decoding approach, we demonstrate that low RNC in AAr restricts the potentially detrimental effect it can have on information, assuming a rate code proposed for mammalian sound localization. This study harnesses the power of correlation structure analysis to investigate the coding of auditory space. Our findings demonstrate distinct correlation structures in the auditory midbrain and forebrain, which would be beneficial for a rate-code framework for sound localization in the nontopographic forebrain representation of auditory space. PMID:28674698

  16. Evaluation of Speech Intelligibility and Sound Localization Abilities with Hearing Aids Using Binaural Wireless Technology

    PubMed Central

    Ibrahim, Iman; Parsa, Vijay; Macpherson, Ewan; Cheesman, Margaret

    2012-01-01

    Wireless synchronization of the digital signal processing (DSP) features between two hearing aids in a bilateral hearing aid fitting is a fairly new technology. This technology is expected to preserve the differences in time and intensity between the two ears by co-ordinating the bilateral DSP features such as multichannel compression, noise reduction, and adaptive directionality. The purpose of this study was to evaluate the benefits of wireless communication as implemented in two commercially available hearing aids. More specifically, this study measured speech intelligibility and sound localization abilities of normal hearing and hearing impaired listeners using bilateral hearing aids with wireless synchronization of multichannel Wide Dynamic Range Compression (WDRC). Twenty subjects participated; 8 had normal hearing and 12 had bilaterally symmetrical sensorineural hearing loss. Each individual completed the Hearing in Noise Test (HINT) and a sound localization test with two types of stimuli. No specific benefit from wireless WDRC synchronization was observed for the HINT; however, hearing impaired listeners had better localization with the wireless synchronization. Binaural wireless technology in hearing aids may improve localization abilities although the possible effect appears to be small at the initial fitting. With adaptation, the hearing aids with synchronized signal processing may lead to an improvement in localization and speech intelligibility. Further research is required to demonstrate the effect of adaptation to the hearing aids with synchronized signal processing on different aspects of auditory performance. PMID:26557339

  17. Diversity of fish sound types in the Pearl River Estuary, China

    PubMed Central

    Wang, Zhi-Tao; Nowacek, Douglas P.; Akamatsu, Tomonari; Liu, Jian-Chang; Duan, Guo-Qin; Cao, Han-Jiang

    2017-01-01

    Background: Repetitive species-specific sound enables the identification of the presence and behavior of soniferous species by acoustic means. Passive acoustic monitoring has been widely applied to monitor the spatial and temporal occurrence and behavior of calling species. Methods: Underwater biological sounds in the Pearl River Estuary, China, were collected using passive acoustic monitoring, with special attention paid to fish sounds. A total of 1,408 suspected fish calls comprising 18,942 pulses were qualitatively analyzed using a customized acoustic analysis routine. Results: We identified a diversity of 66 types of fish sounds. In addition to single pulses, the sounds tended to have a pulse-train structure. The pulses were characterized by an approximately 8 ms duration, with a peak frequency from 500 to 2,600 Hz and a majority of the energy below 4,000 Hz. The median inter-pulse peak interval (IPPI) of most call types was 9 or 10 ms. Most call types with median IPPIs of 9 ms and 10 ms were observed at times that were mutually exclusive, suggesting that they might be produced by different species. According to the literature, the two-section signal types 1 + 1 and 1 + N10 might belong to big-snout croaker (Johnius macrorhynus), and 1 + N19 might be produced by Belanger’s croaker (J. belangerii). Discussion: Categorization of the baseline ambient biological sound is an important first step in mapping the spatial and temporal patterns of soniferous fishes. The next step is the identification of the species producing each sound. The distribution pattern of soniferous fishes will be helpful for the protection and management of local fishery resources and in marine environmental impact assessment. Since the local vulnerable Indo-Pacific humpback dolphin (Sousa chinensis) mainly preys on soniferous fishes, the fine-scale distribution pattern of soniferous fishes can aid in the conservation of this species. 
Additionally, prey and predator relationships can be observed when a database of species-identified sounds is completed. PMID:29085746

  18. Diversity of fish sound types in the Pearl River Estuary, China.

    PubMed

    Wang, Zhi-Tao; Nowacek, Douglas P; Akamatsu, Tomonari; Wang, Ke-Xiong; Liu, Jian-Chang; Duan, Guo-Qin; Cao, Han-Jiang; Wang, Ding

    2017-01-01

    Repetitive species-specific sound enables the identification of the presence and behavior of soniferous species by acoustic means. Passive acoustic monitoring has been widely applied to monitor the spatial and temporal occurrence and behavior of calling species. Underwater biological sounds in the Pearl River Estuary, China, were collected using passive acoustic monitoring, with special attention paid to fish sounds. A total of 1,408 suspected fish calls comprising 18,942 pulses were qualitatively analyzed using a customized acoustic analysis routine. We identified a diversity of 66 types of fish sounds. In addition to single pulses, the sounds tended to have a pulse-train structure. The pulses were characterized by an approximately 8 ms duration, with a peak frequency from 500 to 2,600 Hz and a majority of the energy below 4,000 Hz. The median inter-pulse peak interval (IPPI) of most call types was 9 or 10 ms. Most call types with median IPPIs of 9 ms and 10 ms were observed at times that were mutually exclusive, suggesting that they might be produced by different species. According to the literature, the two-section signal types 1 + 1 and 1 + N10 might belong to big-snout croaker (Johnius macrorhynus), and 1 + N19 might be produced by Belanger's croaker (J. belangerii). Categorization of the baseline ambient biological sound is an important first step in mapping the spatial and temporal patterns of soniferous fishes. The next step is the identification of the species producing each sound. The distribution pattern of soniferous fishes will be helpful for the protection and management of local fishery resources and in marine environmental impact assessment. Since the local vulnerable Indo-Pacific humpback dolphin (Sousa chinensis) mainly preys on soniferous fishes, the fine-scale distribution pattern of soniferous fishes can aid in the conservation of this species. 
Additionally, prey and predator relationships can be observed when a database of species-identified sounds is completed.
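
    The core IPPI measurement can be sketched with a simple peak-picking routine. This toy version (the synthetic pulse train, Gaussian pulse shape, and threshold are all assumptions, not the customized routine used in the study) recovers the 10 ms interval typical of the calls described:

```python
import numpy as np

def median_ippi(signal, fs, threshold):
    """Median inter-pulse peak interval (ms) of a pulse train.

    Pulses are detected as local maxima above `threshold`: a simplified
    stand-in for the study's customized analysis routine.
    """
    s = signal
    peaks = np.flatnonzero((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]) &
                           (s[1:-1] > threshold)) + 1
    return float(np.median(np.diff(peaks))) / fs * 1000.0

fs = 48_000  # Hz, assumed
t = np.arange(int(0.1 * fs)) / fs

# Synthetic croaker-like call: narrow pulses repeating every 10 ms.
pulse_times = np.arange(0.005, 0.095, 0.010)
signal = np.zeros_like(t)
for pt in pulse_times:
    signal += np.exp(-0.5 * ((t - pt) / 0.001) ** 2)  # Gaussian envelope

ippi_ms = median_ippi(signal, fs, 0.5)
print(f"median IPPI: {ippi_ms:.1f} ms")
```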

  19. 77 FR 37600 - Safety Zone; Arctic Drilling and Support Vessels, Puget Sound, WA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-22

    ... made local inquiries and chartered a vessel to observe the mobile offshore drilling unit (MODU) KULLUK... 1625-AA00 Safety Zone; Arctic Drilling and Support Vessels, Puget Sound, WA AGENCY: Coast Guard, DHS... nineteen vessels associated with Arctic drilling as well as their lead towing vessels while those vessels...

  20. 76 FR 42542 - Special Local Regulations for Marine Events, Bogue Sound; Morehead City, NC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-19

    .... The likely combination of large numbers of recreational vessels, powerboats traveling at high speeds... Bogue Sound, adjacent to Morehead City from the southern tip of Sugar Loaf Island approximate position...'' N, longitude 076[deg]42'12'' W, thence westerly to the southern tip of Sugar Loaf Island the point...

  1. Acoustic-tactile rendering of visual information

    NASA Astrophysics Data System (ADS)

    Silva, Pubudu Madhawa; Pappas, Thrasyvoulos N.; Atkins, Joshua; West, James E.; Hartmann, William M.

    2012-03-01

    In previous work, we have proposed a dynamic, interactive system for conveying visual information via hearing and touch. The system is implemented with a touch screen that allows the user to interrogate a two-dimensional (2-D) object layout by active finger scanning while listening to spatialized auditory feedback. Sound is used as the primary source of information for object localization and identification, while touch is used both for pointing and for kinesthetic feedback. Our previous work considered shape and size perception of simple objects via hearing and touch. The focus of this paper is on the perception of a 2-D layout of simple objects with identical size and shape. We consider the selection and rendition of sounds for object identification and localization. We rely on the head-related transfer function for rendering sound directionality, and consider variations of sound intensity and tempo as two alternative approaches for rendering proximity. Subjective experiments with visually-blocked subjects are used to evaluate the effectiveness of the proposed approaches. Our results indicate that intensity outperforms tempo as a proximity cue, and that the overall system for conveying a 2-D layout is quite promising.
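
    The two proximity renderings compared above can be sketched as simple mappings from distance to an audio parameter. The specific gain law and click rates below are illustrative assumptions, not the mappings used in the experiments:

```python
import numpy as np

def proximity_cues(distance, ref=1.0):
    """Two hypothetical renderings of object proximity for an auditory display:
    intensity (inverse-distance gain in dB re the reference distance, roughly
    -6 dB per doubling) and tempo (click repetition rate, faster when closer)."""
    d = max(distance, 1e-3)                     # avoid log(0) at the target
    gain_db = -20.0 * np.log10(d / ref)
    rate_hz = 8.0 / d                           # clicks per second
    return gain_db, rate_hz

for dist in (0.5, 1.0, 2.0):
    g, r = proximity_cues(dist)
    print(f"d={dist} m: gain {g:+.1f} dB, click rate {r:.1f} Hz")
```

    Under the study's finding, the gain mapping would be the preferred cue, with tempo as the weaker alternative.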

  2. Complex auditory behaviour emerges from simple reactive steering

    NASA Astrophysics Data System (ADS)

    Hedwig, Berthold; Poulet, James F. A.

    2004-08-01

    The recognition and localization of sound signals is fundamental to acoustic communication. Complex neural mechanisms are thought to underlie the processing of species-specific sound patterns even in animals with simple auditory pathways. In female crickets, which orient towards the male's calling song, current models propose pattern recognition mechanisms based on the temporal structure of the song. Furthermore, it is thought that localization is achieved by comparing the output of the left and right recognition networks, which then directs the female to the pattern that most closely resembles the species-specific song. Here we show, using a highly sensitive method for measuring the movements of female crickets, that when walking and flying each sound pulse of the communication signal releases a rapid steering response. Thus auditory orientation emerges from reactive motor responses to individual sound pulses. Although the reactive motor responses are not based on the song structure, a pattern recognition process may modulate the gain of the responses on a longer timescale. These findings are relevant to concepts of insect auditory behaviour and to the development of biologically inspired robots performing cricket-like auditory orientation.

  3. Integrating sensorimotor systems in a robot model of cricket behavior

    NASA Astrophysics Data System (ADS)

    Webb, Barbara H.; Harrison, Reid R.

    2000-10-01

    The mechanisms by which animals manage sensorimotor integration and the coordination of different behaviors can be investigated in robot models. In previous work the first author has built a robot that localizes sound based on close modeling of the auditory and neural system of the cricket. It is known that the cricket combines its response to sound with other sensorimotor activities such as an optomotor reflex and reactions to mechanical stimulation of the antennae and cerci. Behavioral evidence suggests some ways these behaviors may be integrated. We have tested the addition of an optomotor response, using an analog VLSI circuit developed by the second author, to the sound-localizing behavior and have shown that it can, as in the cricket, improve the directness of the robot's path to sound. In particular, it substantially improves behavior when the robot is subject to a motor disturbance. Our aim is to better understand how the insect brain functions in controlling complex combinations of behavior, with the hope that this will also suggest novel mechanisms for sensory integration on robots.

  4. The natural history of sound localization in mammals--a story of neuronal inhibition.

    PubMed

    Grothe, Benedikt; Pecka, Michael

    2014-01-01

    Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as the coding strategies between avian and mammalian model systems. Here we argue that mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds.

  5. Depth dependence of wind-driven, broadband ambient noise in the Philippine Sea.

    PubMed

    Barclay, David R; Buckingham, Michael J

    2013-01-01

    In 2009, as part of PhilSea09, the instrument platform known as Deep Sound was deployed in the Philippine Sea, descending under gravity to a depth of 6000 m, where it released a drop weight, allowing buoyancy to return it to the surface. On the descent and ascent, at a speed of 0.6 m/s, Deep Sound continuously recorded broadband ambient noise on two vertically aligned hydrophones separated by 0.5 m. For frequencies between 1 and 10 kHz, essentially all the noise was found to be downward traveling, exhibiting a depth-independent directional density function having the simple form cos θ, where θ ≤ 90° is the polar angle measured from the zenith. The spatial coherence and cross-spectral density of the noise show no change in character in the vicinity of the critical depth, consistent with a local, wind-driven surface-source distribution. The coherence function accurately matches that predicted by a simple model of deep-water, wind-generated noise, provided that the theoretical coherence is evaluated using the local sound speed. A straightforward inversion procedure is introduced for recovering the sound speed profile from the cross-correlation function of the noise, returning sound speeds with a root-mean-square error relative to an independently measured profile of 8.2 m/s.
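
    The basic idea behind the inversion, reading the inter-phone travel time off the peak of the noise cross-correlation and converting it to a sound speed, can be sketched as follows. The purely downward-travelling synthetic noise and the sampling rate are assumptions; the paper's procedure inverts the full cross-correlation function rather than a single peak:

```python
import numpy as np

fs = 96_000           # sampling rate, Hz (assumed)
d = 0.5               # hydrophone separation, m (as on Deep Sound)
c_true = 1500.0       # local sound speed, m/s (assumed for the synthetic data)

rng = np.random.default_rng(2)
noise = rng.standard_normal(1 << 15)

# Purely downward-travelling noise: the upper phone leads the lower by d/c.
delay = round(d / c_true * fs)
upper = noise
lower = np.concatenate([np.zeros(delay), noise[:-delay]])

# The peak lag of the cross-correlation gives the travel time between the
# phones, hence the local sound speed c = d / tau.
corr = np.correlate(lower, upper, mode="full")
lag = np.argmax(corr) - (len(upper) - 1)
c_est = d / (lag / fs)
print(f"estimated sound speed: {c_est:.0f} m/s")
```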

  6. The natural history of sound localization in mammals – a story of neuronal inhibition

    PubMed Central

    Grothe, Benedikt; Pecka, Michael

    2014-01-01

    Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as the coding strategies between avian and mammalian model systems. Here we argue that mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds. PMID:25324726

  7. The contribution of two ears to the perception of vertical angle in sagittal planes.

    PubMed

    Morimoto, M

    2001-04-01

Because the input signals to the left and right ears are not identical, it is important to clarify the role of these signals in the perception of the vertical angle of a sound source at any position in the upper hemisphere. To obtain basic findings on upper hemisphere localization, this paper investigates the contribution of each pinna to the perception of vertical angle. Tests measured localization of the vertical angle in five planes parallel to the median plane. In the localization tests, the pinna cavities of one or both ears were occluded. Results showed that pinna cavities of both the near and far ears play a role in determining the perceived vertical angle of a sound source in any plane, including the median plane. As a sound source shifts laterally away from the median plane, the contribution of the near ear increases and, conversely, that of the far ear decreases. For sagittal planes at azimuths greater than 60 degrees from midline, the far ear no longer contributes measurably to the determination of vertical angle.

  8. Slow-wave metamaterial open panels for efficient reduction of low-frequency sound transmission

    NASA Astrophysics Data System (ADS)

    Yang, Jieun; Lee, Joong Seok; Lee, Hyeong Rae; Kang, Yeon June; Kim, Yoon Young

    2018-02-01

    Sound transmission reduction is typically governed by the mass law, requiring thicker panels to handle lower frequencies. When open holes must be inserted in panels for heat transfer, ventilation, or other purposes, the efficient reduction of sound transmission through holey panels becomes difficult, especially in the low-frequency ranges. Here, we propose slow-wave metamaterial open panels that can dramatically lower the working frequencies of sound transmission loss. Global resonances originating from slow waves realized by multiply inserted, elaborately designed subwavelength rigid partitions between two thin holey plates contribute to sound transmission reductions at lower frequencies. Owing to the dispersive characteristics of the present metamaterial panels, local resonances that trap sound in the partitions also occur at higher frequencies, exhibiting negative effective bulk moduli and zero effective velocities. As a result, low-frequency broadened sound transmission reduction is realized efficiently in the present metamaterial panels. The theoretical model of the proposed metamaterial open panels is derived using an effective medium approach and verified by numerical and experimental investigations.
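The mass law invoked above can be made concrete: for a limp, non-resonant panel at normal incidence, transmission loss rises by roughly 6 dB per doubling of frequency or surface density. A small sketch (the function and constant names are ours):

```python
import math

RHO_C = 415.0   # characteristic impedance of air, rho0 * c, in rayl (approx., 20 C)

def mass_law_tl(f_hz, m_kg_per_m2):
    """Normal-incidence transmission loss (dB) of a limp, non-resonant panel:
    TL = 10*log10(1 + (omega * m / (2 * rho0 * c))**2).
    In the mass-law regime this grows ~6 dB per doubling of f or m."""
    x = math.pi * f_hz * m_kg_per_m2 / RHO_C   # omega * m / (2 * rho0 * c)
    return 10.0 * math.log10(1.0 + x * x)
```

The ~6 dB/doubling slope is exactly why handling lower frequencies ordinarily demands heavier panels, and why the metamaterial approach above targets the low-frequency range instead.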

  9. Salient sounds activate human visual cortex automatically.

    PubMed

    McDonald, John J; Störmer, Viola S; Martinez, Antigona; Feng, Wenfeng; Hillyard, Steven A

    2013-05-22

    Sudden changes in the acoustic environment enhance perceptual processing of subsequent visual stimuli that appear in close spatial proximity. Little is known, however, about the neural mechanisms by which salient sounds affect visual processing. In particular, it is unclear whether such sounds automatically activate visual cortex. To shed light on this issue, this study examined event-related brain potentials (ERPs) that were triggered either by peripheral sounds that preceded task-relevant visual targets (Experiment 1) or were presented during purely auditory tasks (Experiments 2-4). In all experiments the sounds elicited a contralateral ERP over the occipital scalp that was localized to neural generators in extrastriate visual cortex of the ventral occipital lobe. The amplitude of this cross-modal ERP was predictive of perceptual judgments about the contrast of colocalized visual targets. These findings demonstrate that sudden, intrusive sounds reflexively activate human visual cortex in a spatially specific manner, even during purely auditory tasks when the sounds are not relevant to the ongoing task.

  10. A Numerical Experiment on the Role of Surface Shear Stress in the Generation of Sound

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Wang, Meng; Merriam, Marshal (Technical Monitor)

    1996-01-01

The sound generated due to a localized flow over an infinite flat surface is considered. It is known that the unsteady surface pressure, while appearing in a formal solution to the Lighthill equation, does not constitute a source of sound but rather represents the effect of image quadrupoles. The question of whether a similar surface shear stress term constitutes a true source of dipole sound is less settled. Some have boldly assumed it is a true source while others have argued that, like the surface pressure, it depends on the sound field (via an acoustic boundary layer) and is therefore not a true source. A numerical experiment based on the viscous, compressible Navier-Stokes equations was undertaken to investigate the issue. A small region of a wall was oscillated tangentially. The directly computed sound field was found to agree with an acoustic-analogy-based calculation which regards the surface shear as an acoustically compact dipole source of sound.

  11. Influence of airfoil thickness on convected gust interaction noise

    NASA Technical Reports Server (NTRS)

    Kerschen, E. J.; Tsai, C. T.

    1989-01-01

    The case of a symmetric airfoil at zero angle of attack is considered in order to determine the influence of airfoil thickness on sound generated by interaction with convected gusts. The analysis is based on a linearization of the Euler equations about the subsonic mean flow past the airfoil. Primary sound generation is found to occur in a local region surrounding the leading edge, with the size of the local region scaling on the gust wavelength. For a parabolic leading edge, moderate leading edge thickness is shown to decrease the noise level in the low Mach number limit.

  12. Impedance measurement of non-locally reactive samples and the influence of the assumption of local reaction.

    PubMed

    Brandão, Eric; Mareze, Paulo; Lenzi, Arcanjo; da Silva, Andrey R

    2013-05-01

    In this paper, the measurement of the absorption coefficient of non-locally reactive sample layers of thickness d1 backed by a rigid wall is investigated. The investigation is carried out with the aid of real and theoretical experiments, which assume a monopole sound source radiating sound above an infinite non-locally reactive layer. A literature search revealed that the number of papers devoted to this matter is rather limited in comparison to those which address the measurement of locally reactive samples. Furthermore, the majority of papers published describe the use of two or more microphones whereas this paper focuses on the measurement with the pressure-particle velocity sensor (PU technique). For these reasons, the assumption that the sample is locally reactive is initially explored, so that the associated measurement errors can be quantified. Measurements in the impedance tube and in a semi-anechoic room are presented to validate the theoretical experiment. For samples with a high non-local reaction behavior, for which the measurement errors tend to be high, two different algorithms are proposed in order to minimize the associated errors.

  13. A Neural Mechanism for Time-Window Separation Resolves Ambiguity of Adaptive Coding

    PubMed Central

    Hildebrandt, K. Jannis; Ronacher, Bernhard; Hennig, R. Matthias; Benda, Jan

    2015-01-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task—namely, the reliable encoding of the pattern of an acoustic signal—but detrimental for another—the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097

  14. 12 CFR 345.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations... in Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  15. 12 CFR 25.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations... Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  16. 12 CFR 345.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations... in Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  17. 12 CFR 345.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations... in Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  18. 12 CFR 228.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... obtained from community organizations, state, local, and tribal governments, economic development agencies... condition of the bank, the economic climate (national, regional, and local), safety and soundness... in Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  19. 12 CFR 345.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations... in Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  20. 12 CFR 25.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations... Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  1. 12 CFR 228.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... obtained from community organizations, state, local, and tribal governments, economic development agencies... condition of the bank, the economic climate (national, regional, and local), safety and soundness... in Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  2. 12 CFR 25.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations... Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  3. 12 CFR 228.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... obtained from community organizations, state, local, and tribal governments, economic development agencies... condition of the bank, the economic climate (national, regional, and local), safety and soundness... in Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  4. 12 CFR 228.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... obtained from community organizations, state, local, and tribal governments, economic development agencies... condition of the bank, the economic climate (national, regional, and local), safety and soundness... in Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  5. 12 CFR 25.21 - Performance tests, standards, and ratings, in general.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... from community organizations, state, local, and tribal governments, economic development agencies, or... of the bank, the economic climate (national, regional, and local), safety and soundness limitations... Lending Act (15 U.S.C. 1650(a)(7)) (including a loan under a state or local education loan program...

  6. The influence of sea ice, wind speed and marine mammals on Southern Ocean ambient sound.

    PubMed

    Menze, Sebastian; Zitterbart, Daniel P; van Opzeeland, Ilse; Boebel, Olaf

    2017-01-01

This paper describes the natural variability of ambient sound in the Southern Ocean, an acoustically pristine marine mammal habitat. Over a 3-year period, two autonomous recorders were moored along the Greenwich meridian to collect underwater passive acoustic data. Ambient sound levels were strongly affected by the annual variation of the sea-ice cover, which decouples local wind speed and sound levels during austral winter. With increasing sea-ice concentration, area and thickness, sound levels decreased while the contribution of distant sources increased. Marine mammal sounds formed a substantial part of the overall acoustic environment, comprising calls produced by Antarctic blue whales (Balaenoptera musculus intermedia), fin whales (Balaenoptera physalus), Antarctic minke whales (Balaenoptera bonaerensis) and leopard seals (Hydrurga leptonyx). The combined sound energy of a group or population vocalizing during extended periods contributed species-specific peaks to the ambient sound spectra. The temporal and spatial variation in the contribution of marine mammals to ambient sound suggests annual patterns in migration and behaviour. The Antarctic blue and fin whale contributions were loudest in austral autumn, whereas the Antarctic minke whale contribution was loudest during austral winter and repeatedly showed a diel pattern that coincided with the diel vertical migration of zooplankton.

  7. The influence of sea ice, wind speed and marine mammals on Southern Ocean ambient sound

    NASA Astrophysics Data System (ADS)

    Menze, Sebastian; Zitterbart, Daniel P.; van Opzeeland, Ilse; Boebel, Olaf

    2017-01-01

    This paper describes the natural variability of ambient sound in the Southern Ocean, an acoustically pristine marine mammal habitat. Over a 3-year period, two autonomous recorders were moored along the Greenwich meridian to collect underwater passive acoustic data. Ambient sound levels were strongly affected by the annual variation of the sea-ice cover, which decouples local wind speed and sound levels during austral winter. With increasing sea-ice concentration, area and thickness, sound levels decreased while the contribution of distant sources increased. Marine mammal sounds formed a substantial part of the overall acoustic environment, comprising calls produced by Antarctic blue whales (Balaenoptera musculus intermedia), fin whales (Balaenoptera physalus), Antarctic minke whales (Balaenoptera bonaerensis) and leopard seals (Hydrurga leptonyx). The combined sound energy of a group or population vocalizing during extended periods contributed species-specific peaks to the ambient sound spectra. The temporal and spatial variation in the contribution of marine mammals to ambient sound suggests annual patterns in migration and behaviour. The Antarctic blue and fin whale contributions were loudest in austral autumn, whereas the Antarctic minke whale contribution was loudest during austral winter and repeatedly showed a diel pattern that coincided with the diel vertical migration of zooplankton.

  8. An ultrasound look at Korotkoff sounds: the role of pulse wave velocity and flow turbulence.

    PubMed

    Benmira, Amir; Perez-Martin, Antonia; Schuster, Iris; Veye, Florent; Triboulet, Jean; Berron, Nicolas; Aichoun, Isabelle; Coudray, Sarah; Laurent, Jérémy; Bereksi-Reguig, Fethi; Dauzat, Michel

    2017-04-01

    The aim of this study was to analyze the temporal relationships between pressure, flow, and Korotkoff sounds, providing clues for their comprehensive interpretation. When measuring blood pressure in a group of 23 volunteers, we used duplex Doppler ultrasonography to assess, under the arm-cuff, the brachial artery flow, diameter changes, and local pulse wave velocity (PWV), while recording Korotkoff sounds 10 cm downstream together with cuff pressure and ECG. The systolic (SBP) and diastolic (DBP) blood pressures were 118.8±17.7 and 65.4±10.4 mmHg, respectively (n=23). The brachial artery lumen started opening when cuff pressure decreased below the SBP and opened for an increasing length of time until cuff pressure reached the DBP, and then remained open but pulsatile. A high-energy low-frequency Doppler signal, starting a few milliseconds before flow, appeared and disappeared together with Korotkoff sounds at the SBP and DBP, respectively. Its median duration was 42.7 versus 41.1 ms for Korotkoff sounds (P=0.54; n=17). There was a 2.20±1.54 ms/mmHg decrement in the time delay between the ECG R-wave and the Korotkoff sounds during cuff deflation (n=18). The PWV was 10±4.48 m/s at null cuff pressure and showed a 0.62% decrement per mmHg when cuff pressure increased (n=13). Korotkoff sounds are associated with a high-energy low-frequency Doppler signal of identical duration, typically resulting from wall vibrations, followed by flow turbulence. Local arterial PWV decreases when cuff pressure increases. Exploiting these changes may help improve SBP assessment, which remains a challenge for oscillometric techniques.

  9. TABLE D - WMO AND LOCAL (NCEP) DESCRIPTORS AS WELL AS THOSE AWAITING

    Science.gov Websites

[Flattened fragment of a WMO Table D descriptor listing; the recoverable category entries are: 3 05 Meteorological or hydrological sequences; Vertical sounding sequences (conventional data); 3 10 Vertical sounding sequences (satellite data); 3 13 Sequences common to image data; 3 14 Reserved; 3 15 Oceanographic report.]

  10. A Place for Sound: Raising Children's Awareness of Their Sonic Environment

    ERIC Educational Resources Information Center

    Deans, Jan; Brown, Robert; Dilkes, Helen

    2005-01-01

    This paper reports on an experiential project that involved a group of children aged four to five years and their teachers in an investigation of sounds in their local environment. It describes the key elements of an eight-week teaching and learning program that encouraged children to experience and re-experience their surrounding sound…

  11. Recovery monitoring of pigeon guillemot populations in Prince William Sound, Alaska. Restoration project 94173. Exxon Valdez oil spill restoration project final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayes, D.L.

    1995-05-01

The population of pigeon guillemots in Prince William Sound decreased from about 15,000 (1970s) to about 5,000 (present). Some local populations were affected by the T/V Exxon Valdez oil spill in 1989, but there is evidence suggesting the Sound-wide population was already declining. Predation was the cause of numerous nesting failures. Abandonment of eggs was also high. Changes in the relative proportions of benthic and schooling fish in the diet of guillemot chicks might represent a key change in the ecosystem that is affecting other species of marine birds and mammals in the Sound.

  12. Combined visualization for noise mapping of industrial facilities based on ray-tracing and thin plate splines

    NASA Astrophysics Data System (ADS)

    Ovsiannikov, Mikhail; Ovsiannikov, Sergei

    2017-01-01

The paper presents a combined approach to noise mapping and visualization of industrial facilities' sound pollution using the forward ray-tracing method and thin-plate spline interpolation. It is suggested to cluster the industrial area into separate zones with similar sound levels. An equivalent local source is defined for computing the extent of sanitary zones, based on a ray-tracing algorithm. Computation of sound pressure levels within the clustered zones is based on two-dimensional spline interpolation of data measured on the perimeter and inside each zone.
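The two-dimensional thin-plate spline step can be sketched with SciPy's `RBFInterpolator`; the coordinates and levels below are invented stand-ins for the measured perimeter and interior data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical measured sound levels (dB) at perimeter and interior points
# of one zone; coordinates in metres. Stand-ins for real campaign data.
points = np.array([[0, 0], [50, 0], [50, 50], [0, 50], [25, 25], [10, 40]], dtype=float)
levels = np.array([62.0, 65.0, 71.0, 60.0, 78.0, 64.0])

# Thin-plate spline interpolant (exact at the data points when smoothing=0)
tps = RBFInterpolator(points, levels, kernel="thin_plate_spline")

# Evaluate on a regular grid to produce the zone's noise map
xx, yy = np.meshgrid(np.linspace(0, 50, 11), np.linspace(0, 50, 11))
noise_map = tps(np.column_stack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
```

With zero smoothing the spline honours every measurement exactly, which suits sparse perimeter-plus-interior data; a small positive `smoothing` would instead trade fidelity for a smoother map.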

  13. Relative size of auditory pathways in symmetrically and asymmetrically eared owls.

    PubMed

    Gutiérrez-Ibáñez, Cristián; Iwaniuk, Andrew N; Wylie, Douglas R

    2011-01-01

    Owls are highly efficient predators with a specialized auditory system designed to aid in the localization of prey. One of the most unique anatomical features of the owl auditory system is the evolution of vertically asymmetrical ears in some species, which improves their ability to localize the elevational component of a sound stimulus. In the asymmetrically eared barn owl, interaural time differences (ITD) are used to localize sounds in azimuth, whereas interaural level differences (ILD) are used to localize sounds in elevation. These two features are processed independently in two separate neural pathways that converge in the external nucleus of the inferior colliculus to form an auditory map of space. Here, we present a comparison of the relative volume of 11 auditory nuclei in both the ITD and the ILD pathways of 8 species of symmetrically and asymmetrically eared owls in order to investigate evolutionary changes in the auditory pathways in relation to ear asymmetry. Overall, our results indicate that asymmetrically eared owls have much larger auditory nuclei than owls with symmetrical ears. In asymmetrically eared owls we found that both the ITD and ILD pathways are equally enlarged, and other auditory nuclei, not directly involved in binaural comparisons, are also enlarged. We suggest that the hypertrophy of auditory nuclei in asymmetrically eared owls likely reflects both an improved ability to precisely locate sounds in space and an expansion of the hearing range. Additionally, our results suggest that the hypertrophy of nuclei that compute space may have preceded that of the expansion of the hearing range and evolutionary changes in the size of the auditory system occurred independently of phylogeny. Copyright © 2011 S. Karger AG, Basel.

  14. Mesoscale temperature and moisture fields from satellite infrared soundings

    NASA Technical Reports Server (NTRS)

    Hillger, D. W.; Vonderhaar, T. H.

    1976-01-01

The combined use of radiosonde and satellite infrared soundings can provide mesoscale temperature and moisture fields at the time of satellite coverage. Radiance data from the Vertical Temperature Profile Radiometer (VTPR) on NOAA polar-orbiting satellites can be used along with a radiosonde sounding as an initial guess in an iterative retrieval algorithm. The mesoscale temperature and moisture fields at local 9-10 a.m., which are produced by retrieving temperature profiles at each VTPR scan spot (every 70 km), can be used for analysis or as a forecasting tool for subsequent weather events during the day. The better horizontal resolution of satellite soundings can be coupled with the radiosonde temperature and moisture profile, both as a best initial-guess profile and as a means of eliminating problems due to the limited vertical resolution of satellite soundings.

  15. A Sparsity-Based Approach to 3D Binaural Sound Synthesis Using Time-Frequency Array Processing

    NASA Astrophysics Data System (ADS)

Cobos, Maximo; Lopez, Jose J.; Spors, Sascha

    2010-12-01

    Localization of sounds in physical space plays a very important role in multiple audio-related disciplines, such as music, telecommunications, and audiovisual productions. Binaural recording is the most commonly used method to provide an immersive sound experience by means of headphone reproduction. However, it requires a very specific recording setup using high-fidelity microphones mounted in a dummy head. In this paper, we present a novel processing framework for binaural sound recording and reproduction that avoids the use of dummy heads, which is specially suitable for immersive teleconferencing applications. The method is based on a time-frequency analysis of the spatial properties of the sound picked up by a simple tetrahedral microphone array, assuming source sparseness. The experiments carried out using simulations and a real-time prototype confirm the validity of the proposed approach.
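As a simpler illustration of extracting a spatial cue from a microphone pair (the paper itself performs a per-time-frequency-bin analysis of a tetrahedral array, which is not reproduced here), a standard GCC-PHAT time-delay estimator looks like:

```python
import numpy as np

def gcc_phat_tdoa(x, y, fs):
    """Time difference of arrival between two microphone signals via
    GCC-PHAT; a positive result means x is delayed relative to y.
    (A standard pairwise technique, not the tetrahedral-array method
    used in the paper.)"""
    n = 2 * max(len(x), len(y))                 # zero-pad to avoid wraparound
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    R = X * np.conj(Y)
    R /= np.maximum(np.abs(R), 1e-12)           # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))  # lags -max..+max
    shift = int(np.argmax(np.abs(cc))) - max_shift
    return shift / fs
```

The PHAT whitening makes the correlation peak sharp regardless of source spectrum, which is why variants of this estimator are common in teleconferencing arrays.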

  16. An observation of LHR noise with banded structure by the sounding rocket S29 Barium-GEOS

    NASA Technical Reports Server (NTRS)

    Koskinen, H. E. J.; Holmgren, G.; Kintner, P. M.

    1982-01-01

The measurement of electrostatic and evidently locally produced noise near the lower hybrid frequency made by the sounding rocket S29 Barium-GEOS is reported. The noise is strongly related to the spin of the rocket and reaches well below the local lower hybrid resonance frequency. Above an altitude of 300 km the noise shows banded structure roughly organized by the hydrogen cyclotron frequency. Simultaneously with the banded structure, a signal near the hydrogen cyclotron frequency is detected. This signal is also spin related. The characteristics of the noise suggest that it is locally generated by the rocket payload disturbing the plasma. If this interpretation is correct, we expect plasma wave experiments on other spacecraft, e.g., the space shuttle, to observe similar phenomena.

  17. Effect of Dual Sensory Loss on Auditory Localization: Implications for Intervention

    PubMed Central

    Simon, Helen J.; Levitt, Harry

    2007-01-01

    Our sensory systems are remarkable in several respects. They are extremely sensitive, they each perform more than one function, and they interact in a complementary way, thereby providing a high degree of redundancy that is particularly helpful should one or more sensory systems be impaired. In this article, the problem of dual hearing and vision loss is addressed. A brief description is provided on the use of auditory cues in vision loss, the use of visual cues in hearing loss, and the additional difficulties encountered when both sensory systems are impaired. A major focus of this article is the use of sound localization by normal hearing, hearing impaired, and blind individuals and the special problem of sound localization in people with dual sensory loss. PMID:18003869

  18. Activity in Human Auditory Cortex Represents Spatial Separation Between Concurrent Sounds.

    PubMed

    Shiell, Martha M; Hausfeld, Lars; Formisano, Elia

    2018-05-23

    The primary and posterior auditory cortex (AC) are known for their sensitivity to spatial information, but how this information is processed is not yet understood. AC that is sensitive to spatial manipulations is also modulated by the number of auditory streams present in a scene (Smith et al., 2010), suggesting that spatial and nonspatial cues are integrated for stream segregation. We reasoned that, if this is the case, then it is the distance between sounds rather than their absolute positions that is essential. To test this hypothesis, we measured human brain activity in response to spatially separated concurrent sounds with fMRI at 7 tesla in five men and five women. Stimuli were spatialized amplitude-modulated broadband noises recorded for each participant via in-ear microphones before scanning. Using a linear support vector machine classifier, we investigated whether sound location and/or location plus spatial separation between sounds could be decoded from the activity in Heschl's gyrus and the planum temporale. The classifier was successful only when comparing patterns associated with the conditions that had the largest difference in perceptual spatial separation. Our pattern of results suggests that the representation of spatial separation is not merely the combination of single locations, but rather is an independent feature of the auditory scene. SIGNIFICANCE STATEMENT Often, when we think of auditory spatial information, we think of where sounds are coming from-that is, the process of localization. However, this information can also be used in scene analysis, the process of grouping and segregating features of a soundwave into objects. Essentially, when sounds are further apart, they are more likely to be segregated into separate streams. 
Here, we provide evidence that activity in the human auditory cortex represents the spatial separation between sounds rather than their absolute locations, indicating that scene analysis and localization processes may be independent. Copyright © 2018 the authors.

  19. The effect of local circulations on the variation of atmospheric pollutants in the northwestern Taiwan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pay-Liam Lin; Hsin-Chih Lai

    1996-12-31

    A field experiment was held in northwestern Taiwan as part of a long-term research program for studying Taiwan's local circulation, named the Taiwan Regional-circulation Experiment (TREX). The particular goal of this research is to investigate the characteristics of the boundary layer and local circulation and their impact on the distribution and variation of pollutants in northwestern Taiwan during the Mei-Yu season. It has been known for quite some time that the land-sea breeze is very pronounced under hot and humid conditions. The extensive network includes 11 pilot balloon stations, 3 acoustic sounding sites, and 14 surface stations in an area of about 20 km by 20 km centered at National Central University, Chung-Li. In addition, there are ground temperature measurements at 3 sites, an Integrated Sounding System (ISS) at NCU, airplane observations, a tracer experiment with 10 collecting stations, 3 background upper-air sounding stations, 2 towers, etc. NOAA and GMS satellite data, sea surface temperature, radar, and precipitation data are collected. Local circulations such as land/sea breezes and mountain/valley winds, induced by thermal and topographical effects, often play an important role in transporting, redistributing and transforming atmospheric pollutants. This study documents the effects of the development of local circulations and the accompanying evolution of the boundary layer on the distribution and variation of atmospheric pollutants in northwestern Taiwan during the Mei-Yu season.

  20. Modulation of electrocortical brain activity by attention in individuals with and without tinnitus.

    PubMed

    Paul, Brandon T; Bruce, Ian C; Bosnyak, Daniel J; Thompson, David C; Roberts, Larry E

    2014-01-01

    Age and hearing-level matched tinnitus and control groups were presented with a 40 Hz AM sound using a carrier frequency of either 5 kHz (in the tinnitus frequency region of the tinnitus subjects) or 500 Hz (below this region). On attended blocks subjects pressed a button after each sound indicating whether a single 40 Hz AM pulse of variable increased amplitude (target, probability 0.67) had or had not occurred. On passive blocks subjects rested and ignored the sounds. The amplitude of the 40 Hz auditory steady-state response (ASSR) localizing to primary auditory cortex (A1) increased with attention in control groups probed at 500 Hz and 5 kHz and in the tinnitus group probed at 500 Hz, but not in the tinnitus group probed at 5 kHz (128 channel EEG). N1 amplitude (this response localizing to nonprimary cortex, A2) increased with attention at both sound frequencies in controls but at neither frequency in tinnitus. We suggest that tinnitus-related neural activity occurring in the 5 kHz but not the 500 Hz region of tonotopic A1 disrupted attentional modulation of the 5 kHz ASSR in tinnitus subjects, while tinnitus-related activity in A1 distributing nontonotopically in A2 impaired modulation of N1 at both sound frequencies.

  1. Modulation of Electrocortical Brain Activity by Attention in Individuals with and without Tinnitus

    PubMed Central

    Paul, Brandon T.; Bruce, Ian C.; Bosnyak, Daniel J.; Thompson, David C.; Roberts, Larry E.

    2014-01-01

    Age and hearing-level matched tinnitus and control groups were presented with a 40 Hz AM sound using a carrier frequency of either 5 kHz (in the tinnitus frequency region of the tinnitus subjects) or 500 Hz (below this region). On attended blocks subjects pressed a button after each sound indicating whether a single 40 Hz AM pulse of variable increased amplitude (target, probability 0.67) had or had not occurred. On passive blocks subjects rested and ignored the sounds. The amplitude of the 40 Hz auditory steady-state response (ASSR) localizing to primary auditory cortex (A1) increased with attention in control groups probed at 500 Hz and 5 kHz and in the tinnitus group probed at 500 Hz, but not in the tinnitus group probed at 5 kHz (128 channel EEG). N1 amplitude (this response localizing to nonprimary cortex, A2) increased with attention at both sound frequencies in controls but at neither frequency in tinnitus. We suggest that tinnitus-related neural activity occurring in the 5 kHz but not the 500 Hz region of tonotopic A1 disrupted attentional modulation of the 5 kHz ASSR in tinnitus subjects, while tinnitus-related activity in A1 distributing nontonotopically in A2 impaired modulation of N1 at both sound frequencies. PMID:25024849

  2. Cue Reliability Represented in the Shape of Tuning Curves in the Owl's Sound Localization System

    PubMed Central

    Fischer, Brian J.; Peña, Jose L.

    2016-01-01

    Optimal use of sensory information requires that the brain estimates the reliability of sensory cues, but the neural correlate of cue reliability relevant for behavior is not well defined. Here, we addressed this issue by examining how the reliability of spatial cue influences neuronal responses and behavior in the owl's auditory system. We show that the firing rate and spatial selectivity changed with cue reliability due to the mechanisms generating the tuning to the sound localization cue. We found that the correlated variability among neurons strongly depended on the shape of the tuning curves. Finally, we demonstrated that the change in the neurons' selectivity was necessary and sufficient for a network of stochastic neurons to predict behavior when sensory cues were corrupted with noise. This study demonstrates that the shape of tuning curves can stand alone as a coding dimension of environmental statistics. SIGNIFICANCE STATEMENT In natural environments, sensory cues are often corrupted by noise and are therefore unreliable. To make the best decisions, the brain must estimate the degree to which a cue can be trusted. The behaviorally relevant neural correlates of cue reliability are debated. In this study, we used the barn owl's sound localization system to address this question. We demonstrated that the mechanisms that account for spatial selectivity also explained how neural responses changed with degraded signals. This allowed for the neurons' selectivity to capture cue reliability, influencing the population readout commanding the owl's sound-orienting behavior. PMID:26888922

  3. Disruption of Spelling-to-Sound Correspondence Mapping during Single-Word Reading in Patients with Temporal Lobe Epilepsy

    ERIC Educational Resources Information Center

    Ledoux, Kerry; Gordon, Barry

    2011-01-01

    Processing and/or hemispheric differences in the neural bases of word recognition were examined in patients with long-standing, medically-intractable epilepsy localized to the left (N = 18) or right (N = 7) temporal lobe. Participants were asked to read words that varied in the frequency of their spelling-to-sound correspondences. For the right…

  4. 76 FR 36438 - Special Local Regulations; Safety and Security Zones; Recurring Events in Captain of the Port...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-22

    ... Events in Captain of the Port Long Island Sound Zone AGENCY: Coast Guard, DHS. ACTION: Notice of proposed... security zone in the Coast Guard Sector Long Island Sound Captain of the Port (COTP) Zone. When these..., call or e-mail Petty Officer Joseph Graun, Waterways Management Division at Coast Guard Sector Long...

  5. Communication Sciences Laboratory Quarterly Progress Report, Volume 9, Number 3: Research Programs of Some of the Newer Members of CSL.

    ERIC Educational Resources Information Center

    Feinstein, Stephen H.; And Others

    The research reported in these papers covers a variety of communication problems. The first paper covers research on sound navigation by the blind and involves echo perception research and relevant aspects of underwater sound localization. The second paper describes a research program in acoustic phonetics and concerns such related issues as…

  6. Observations of shallow water marine ambient sound: the low frequency underwater soundscape of the central Oregon coast.

    PubMed

    Haxel, Joseph H; Dziak, Robert P; Matsumoto, Haru

    2013-05-01

    A year-long experiment (March 2010 to April 2011) measuring ambient sound at a shallow water site (50 m) on the central OR coast near the Port of Newport provides important baseline information for comparisons with future measurements associated with resource development along the inner continental shelf of the Pacific Northwest. Ambient levels in frequencies affected by surf-generated noise (f < 100 Hz) characterize the site as a high-energy end member within the spectrum of shallow water coastal areas influenced by breaking waves. Dominant sound sources include locally generated ship noise (66% of total hours contain local ship noise), breaking surf, wind induced wave breaking and baleen whale vocalizations. Additionally, an increase in spectral levels for frequencies ranging from 35 to 100 Hz is attributed to noise radiated from distant commercial ship commerce. One-second root mean square (rms) sound pressure level (SPLrms) estimates calculated across the 10-840 Hz frequency band for the entire year long deployment show minimum, mean, and maximum values of 84 dB, 101 dB, and 152 dB re 1 μPa.

  7. Localizing nearby sound sources in a classroom: Binaural room impulse responses

    NASA Astrophysics Data System (ADS)

    Shinn-Cunningham, Barbara G.; Kopco, Norbert; Martin, Tara J.

    2005-05-01

    Binaural room impulse responses (BRIRs) were measured in a classroom for sources at different azimuths and distances (up to 1 m) relative to a manikin located in four positions in a classroom. When the listener is far from all walls, reverberant energy distorts signal magnitude and phase independently at each frequency, altering monaural spectral cues, interaural phase differences, and interaural level differences. For the tested conditions, systematic distortion (comb-filtering) from an early intense reflection is only evident when a listener is very close to a wall, and then only in the ear facing the wall. Especially for a nearby source, interaural cues grow less reliable with increasing source laterality and monaural spectral cues are less reliable in the ear farther from the sound source. Reverberation reduces the magnitude of interaural level differences at all frequencies; however, the direct-sound interaural time difference can still be recovered from the BRIRs measured in these experiments. Results suggest that bias and variability in sound localization behavior may vary systematically with listener location in a room as well as source location relative to the listener, even for nearby sources where there is relatively little reverberant energy.

  8. Localizing nearby sound sources in a classroom: binaural room impulse responses.

    PubMed

    Shinn-Cunningham, Barbara G; Kopco, Norbert; Martin, Tara J

    2005-05-01

    Binaural room impulse responses (BRIRs) were measured in a classroom for sources at different azimuths and distances (up to 1 m) relative to a manikin located in four positions in a classroom. When the listener is far from all walls, reverberant energy distorts signal magnitude and phase independently at each frequency, altering monaural spectral cues, interaural phase differences, and interaural level differences. For the tested conditions, systematic distortion (comb-filtering) from an early intense reflection is only evident when a listener is very close to a wall, and then only in the ear facing the wall. Especially for a nearby source, interaural cues grow less reliable with increasing source laterality and monaural spectral cues are less reliable in the ear farther from the sound source. Reverberation reduces the magnitude of interaural level differences at all frequencies; however, the direct-sound interaural time difference can still be recovered from the BRIRs measured in these experiments. Results suggest that bias and variability in sound localization behavior may vary systematically with listener location in a room as well as source location relative to the listener, even for nearby sources where there is relatively little reverberant energy.
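    The recoverability of the direct-sound interaural time difference (ITD) noted in this abstract can be illustrated with a toy computation: cross-correlating the left and right channels of a synthetic BRIR around the direct-path arrival. All delays and amplitudes below are invented for illustration; real BRIRs would be measured on a manikin as in the study.

```python
import numpy as np

# Sketch: recover the direct-sound ITD from a synthetic binaural impulse
# response by cross-correlating the left and right channels. Negative ITD
# here means the left ear leads (sound arrives at the left ear first).
def itd_from_brir(left, right, fs):
    """Return the ITD in seconds from the peak of the cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # lag in samples
    return lag / fs

fs = 48000
n = 1024
pulse = np.zeros(n)
pulse[0] = 1.0
# Direct sound reaches the left ear first; the right ear 24 samples (0.5 ms)
# later and slightly attenuated, plus a weak later reflection in both ears.
left = np.roll(pulse, 100) + 0.3 * np.roll(pulse, 500)
right = 0.8 * np.roll(pulse, 124) + 0.3 * np.roll(pulse, 520)
itd = itd_from_brir(left, right, fs)           # -24 / 48000 s = -0.5 ms
```

The dominant direct-path peak survives the weak reflection, which is the intuition behind the abstract's claim that the direct-sound ITD remains recoverable under mild reverberation.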

  9. A Robust Sound Source Localization Approach for Microphone Array with Model Errors

    NASA Astrophysics Data System (ADS)

    Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong

    In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band, near-field array model is proposed. It takes array gain and phase perturbations into account, is based on the actual positions of the elements, and can be used with arbitrary planar array geometries. Second, a subspace model-error estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-error estimation algorithm estimates the unknown parameters of the array model, i.e., the gain and phase perturbations and the positions of the elements, with high accuracy; its performance improves as the SNR or the number of snapshots increases. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. Together, these two algorithms compose the robust sound source localization approach, and the more accurate steering vectors they provide can feed further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
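    As a much-simplified illustration of the MUSIC family that W2D-MUSIC extends, the sketch below estimates a single far-field source direction with a uniform linear array and a perfectly known model. The array geometry, SNR, and angle grid are invented; the paper's near-field, broad-band, and model-error calibration aspects are all omitted.

```python
import numpy as np

# Toy narrowband MUSIC sketch (illustrative only): one far-field source on a
# uniform linear array. W2D-MUSIC additionally weights a 2-D near-field
# search and uses calibrated gain/phase/position estimates, none shown here.
def music_spectrum(snapshots, n_sources, mic_positions, wavelength, angles):
    """MUSIC pseudo-spectrum over candidate arrival angles (radians)."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    _, eigvecs = np.linalg.eigh(R)             # eigenvalues in ascending order
    En = eigvecs[:, :-n_sources]               # noise-subspace eigenvectors
    spectrum = np.empty(len(angles))
    for k, theta in enumerate(angles):
        # Steering vector of a plane wave arriving from angle theta.
        a = np.exp(-2j * np.pi * mic_positions * np.sin(theta) / wavelength)
        spectrum[k] = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
    return spectrum

# Simulate one source at +20 degrees on an 8-microphone, half-wavelength array.
rng = np.random.default_rng(0)
wavelength = 1.0
mics = np.arange(8) * wavelength / 2
a_true = np.exp(-2j * np.pi * mics * np.sin(np.deg2rad(20.0)) / wavelength)
sig = rng.standard_normal(200) + 1j * rng.standard_normal(200)
noise = 0.05 * (rng.standard_normal((8, 200)) + 1j * rng.standard_normal((8, 200)))
X = np.outer(a_true, sig) + noise

angles = np.deg2rad(np.linspace(-90.0, 90.0, 361))   # 0.5-degree grid
P = music_spectrum(X, 1, mics, wavelength, angles)
estimate = np.rad2deg(angles[np.argmax(P)])          # peak location, degrees
```

When gain, phase, or position errors corrupt the steering vectors, the pseudo-spectrum peak degrades, which is precisely the problem the paper's calibration stage addresses.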

  10. Sounds Activate Visual Cortex and Improve Visual Discrimination

    PubMed Central

    Störmer, Viola S.; Martinez, Antigona; McDonald, John J.; Hillyard, Steven A.

    2014-01-01

    A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. PMID:25031419

  11. NPSNET: Aural cues for virtual world immersion

    NASA Astrophysics Data System (ADS)

    Dahl, Leif A.

    1992-09-01

    NPSNET is a low-cost visual and aural simulation system designed and implemented at the Naval Postgraduate School. NPSNET is an example of a virtual world simulation environment that incorporates real-time aural cues through software-hardware interaction. In the current implementation of NPSNET, a graphics workstation functions in the sound server role, which involves sending and receiving networked sound message packets across a Local Area Network composed of multiple graphics workstations. The network messages contain sound file identification information that is transmitted from the sound server across an RS-422 communication line to a serial-to-MIDI (Musical Instrument Digital Interface) converter. The MIDI converter, in turn, relays the sound byte to a sampler, an electronic recording and playback device. The sampler maps the hexadecimal input to a specific note or stored sound and sends it as an audio signal to speakers via an amplifier. The realism of a simulation is improved by involving multiple participant senses and removing external distractions. This thesis describes the incorporation of sound as aural cues and the enhancement they provide in the virtual simulation environment of NPSNET.

  12. Principal cells of the brainstem's interaural sound level detector are temporal differentiators rather than integrators.

    PubMed

    Franken, Tom P; Joris, Philip X; Smith, Philip H

    2018-06-14

    The brainstem's lateral superior olive (LSO) is thought to be crucial for localizing high-frequency sounds by coding interaural sound level differences (ILD). Its neurons weigh contralateral inhibition against ipsilateral excitation, making their firing rate a function of the azimuthal position of a sound source. Since the very first in vivo recordings, LSO principal neurons have been reported to give sustained and temporally integrating 'chopper' responses to sustained sounds. Neurons with transient responses were observed but largely ignored and even considered a sign of pathology. Using the Mongolian gerbil as a model system, we have obtained the first in vivo patch clamp recordings from labeled LSO neurons and find that principal LSO neurons, the most numerous projection neurons of this nucleus, only respond at sound onset and show fast membrane features suggesting an importance for timing. These results provide a new framework to interpret previously puzzling features of this circuit. © 2018, Franken et al.

  13. Local dynamic stability of lower extremity joints in lower limb amputees during slope walking.

    PubMed

    Chen, Jin-Ling; Gu, Dong-Yun

    2013-01-01

    Lower limb amputees have a higher fall risk during slope walking compared with non-amputees. However, studies of amputees' slope walking have not been well addressed. The aim of this study was to identify the differences in slope walking between amputees and non-amputees. The Lyapunov exponent λS was used to estimate the local dynamic stability of the lower extremity joint kinematics of 7 transtibial amputees and 7 controls during uphill and downhill walking. Compared with the controls, amputees exhibited significantly lower λS in the hip (P=0.04) and ankle (P=0.01) joints of the sound limb and the hip joint (P=0.01) of the prosthetic limb during uphill walking, while they exhibited significantly lower λS in the knee (P=0.02) and ankle (P=0.03) joints of the sound limb and the hip joint (P=0.03) of the prosthetic limb during downhill walking. Compared with level walking, amputees exhibited significantly lower λS in the ankle joint of the sound limb during both uphill (P=0.01) and downhill (P=0.01) walking. We hypothesized that the better local dynamic stability of the amputees was caused by a compensation strategy during slope walking.

  14. Protons at the speed of sound: Predicting specific biological signaling from physics.

    PubMed

    Fichtl, Bernhard; Shrivastava, Shamit; Schneider, Matthias F

    2016-05-24

    Local changes in pH are known to significantly alter the state and activity of proteins and enzymes. pH variations induced by pulses propagating along soft interfaces (e.g. membranes) would therefore constitute an important pillar of a physical mechanism of biological signaling. Here we investigate the pH-induced physical perturbation of a lipid interface and the physicochemical nature of the subsequent acoustic propagation. Pulses are stimulated by local acidification and propagate, in analogy to sound, at velocities controlled by the interface's compressibility. With transient local pH changes of 0.6 directly observed at the interface and velocities up to 1.4 m/s, this represents the fastest protonic communication observed to date. Furthermore, simultaneously propagating mechanical and electrical changes in the lipid interface are detected, exposing the thermodynamic nature of these pulses. Finally, these pulses are excitable only beyond a threshold for protonation, determined by the pKa of the lipid head groups. This protonation transition, together with the existence of an enzymatic pH optimum, offers a physical basis for intra- and intercellular signaling via sound waves at interfaces, where not molecular structure and mechano-enzymatic couplings, but interface thermodynamics and thermodynamic transitions, are the origin of the observations.

  15. The Sound Generated by Mid-Ocean Ridge Black Smoker Hydrothermal Vents

    PubMed Central

    Crone, Timothy J.; Wilcock, William S.D.; Barclay, Andrew H.; Parsons, Jeffrey D.

    2006-01-01

    Hydrothermal flow through seafloor black smoker vents is typically turbulent and vigorous, with speeds often exceeding 1 m/s. Although theory predicts that these flows will generate sound, the prevailing view has been that black smokers are essentially silent. Here we present the first unambiguous field recordings showing that these vents radiate significant acoustic energy. The sounds contain a broadband component and narrowband tones which are indicative of resonance. The amplitude of the broadband component shows tidal modulation which is indicative of discharge rate variations related to the mechanics of tidal loading. Vent sounds will provide researchers with new ways to study flow through sulfide structures, and may provide some local organisms with behavioral or navigational cues. PMID:17205137

  16. Simulated seal scarer sounds scare porpoises, but not seals: species-specific responses to 12 kHz deterrence sounds

    PubMed Central

    Hermannsen, Line; Beedholm, Kristian

    2017-01-01

    Acoustic harassment devices (AHD) or ‘seal scarers’ are used extensively, not only to deter seals from fisheries, but also as mitigation tools to deter marine mammals from potentially harmful sound sources, such as offshore pile driving. To test the effectiveness of AHDs, we conducted two studies with similar experimental set-ups on two key species: harbour porpoises and harbour seals. We exposed animals to 500 ms tone bursts at 12 kHz simulating that of an AHD (Lofitech), but with reduced output levels (source peak-to-peak level of 165 dB re 1 µPa). Animals were localized with a theodolite before, during and after sound exposures. In total, 12 sound exposures were conducted to porpoises and 13 exposures to seals. Porpoises were found to exhibit avoidance reactions out to ranges of 525 m from the sound source. Contrary to this, seal observations increased during sound exposure within 100 m of the loudspeaker. We thereby demonstrate that porpoises and seals respond very differently to AHD sounds. This has important implications for application of AHDs in multi-species habitats, as sound levels required to deter less sensitive species (seals) can lead to excessive and unwanted large deterrence ranges on more sensitive species (porpoises). PMID:28791155

  17. The influence of sea ice, wind speed and marine mammals on Southern Ocean ambient sound

    PubMed Central

    van Opzeeland, Ilse; Boebel, Olaf

    2017-01-01

    This paper describes the natural variability of ambient sound in the Southern Ocean, an acoustically pristine marine mammal habitat. Over a 3-year period, two autonomous recorders were moored along the Greenwich meridian to collect underwater passive acoustic data. Ambient sound levels were strongly affected by the annual variation of the sea-ice cover, which decouples local wind speed and sound levels during austral winter. With increasing sea-ice concentration, area and thickness, sound levels decreased while the contribution of distant sources increased. Marine mammal sounds formed a substantial part of the overall acoustic environment, comprising calls produced by Antarctic blue whales (Balaenoptera musculus intermedia), fin whales (Balaenoptera physalus), Antarctic minke whales (Balaenoptera bonaerensis) and leopard seals (Hydrurga leptonyx). The combined sound energy of a group or population vocalizing during extended periods contributed species-specific peaks to the ambient sound spectra. The temporal and spatial variation in the contribution of marine mammals to ambient sound suggests annual patterns in migration and behaviour. The Antarctic blue and fin whale contributions were loudest in austral autumn, whereas the Antarctic minke whale contribution was loudest during austral winter and repeatedly showed a diel pattern that coincided with the diel vertical migration of zooplankton. PMID:28280544

  18. A non-local model of fractional heat conduction in rigid bodies

    NASA Astrophysics Data System (ADS)

    Borino, G.; di Paola, M.; Zingales, M.

    2011-03-01

    In recent years, several applications of fractional differential calculus have been proposed in physics and chemistry as well as in engineering fields. Fractional-order integrals and derivatives extend the well-known definitions of integer-order primitives and derivatives of the ordinary differential calculus to real-order operators. Engineering applications of fractional operators range from viscoelastic models and stochastic dynamics to thermoelasticity. In this latter field, one of the main attractions of fractional operators is their capability to interpolate between the heat flux and its time-rate of change, which is related to the well-known second-sound effect. In other recent studies, a fractional, non-local thermoelastic model has been proposed as a particular case of the non-local, integral thermoelasticity introduced in the mid-seventies. In this study the authors aim to introduce a different non-local model of extended irreversible thermodynamics to account for the second-sound effect. A long-range heat flux is defined that involves the integral part of the spatial Marchaud fractional derivatives of the temperature field, whereas the second-sound effect is accounted for by introducing the time derivative of the heat flux in the transport equation. It is shown that the proposed model does not suffer from the pathological problems of non-homogeneous boundary conditions. Moreover, the proposed model coalesces with the Povstenko fractional models in unbounded domains.

  19. Sound localization with communications headsets: comparison of passive and active systems.

    PubMed

    Abel, Sharon M; Tsang, Suzanne; Boyne, Stephen

    2007-01-01

    Studies have demonstrated that conventional hearing protectors interfere with sound localization. This research examines possible benefits from advanced communications devices. Horizontal plane sound localization was compared in normal-hearing males with the ears unoccluded and fitted with Peltor H10A passive attenuation earmuffs, Racal Slimgard II communications muffs in active noise reduction (ANR) and talk-through-circuitry (TTC) modes and Nacre QUIETPRO TM communications earplugs in off (passive attenuation) and push-to-talk (PTT) modes. Localization was assessed using an array of eight loudspeakers, two in each spatial quadrant. The stimulus was 75 dB SPL, 300-ms broadband noise. One block of 120 forced-choice loudspeaker identification trials was presented in each condition. Subjects responded using a laptop response box with a set of eight microswitches in the same configuration as the speaker array. A repeated measures ANOVA was applied to the dataset. The results reveal that the overall percent correct response was highest in the unoccluded condition (94%). A significant reduction of 24% was observed for the communications devices in TTC and PTT modes and a reduction of 49% for the passive muff and plug and muff with ANR. Disruption in performance was due to an increase in front-back reversal errors for mirror image spatial positions. The results support the conclusion that communications devices with advanced technologies are less detrimental to directional hearing than conventional, passive, limited amplification and ANR devices.

  20. Simulation of prenatal maternal sounds in NICU incubators: a pilot safety and feasibility study.

    PubMed

    Panagiotidis, John; Lahav, Amir

    2010-10-01

    This pilot study evaluated the safety and feasibility of an innovative audio system for transmitting maternal sounds to NICU incubators. A sample of biological sounds, consisting of voice and heartbeat, were recorded from a mother of a premature infant admitted to our unit. The maternal sounds were then played back inside an unoccupied incubator via a specialized audio system originated and compiled in our lab. We performed a series of evaluations to determine the safety and feasibility of using this system in NICU incubators. The proposed audio system was found to be safe and feasible, meeting criteria for humidity and temperature resistance, as well as for safe noise levels. Simulation of maternal sounds using this system seems achievable and applicable and received local support from medical staff. Further research and technology developments are needed to optimize the design of the NICU incubators to preserve the acoustic environment of the womb.

  1. Heart sound segmentation of pediatric auscultations using wavelet analysis.

    PubMed

    Castro, Ana; Vinhoza, Tiago T V; Mattos, Sandra S; Coimbra, Miguel T

    2013-01-01

    Auscultation is widely applied in clinical activity; nonetheless, sound interpretation is dependent on clinician training and experience. Heart sound features such as spatial loudness, relative amplitude, murmurs, and the localization of each component may be indicative of pathology. In this study we propose a segmentation algorithm to extract the heart sound components (S1 and S2) based on their time and frequency characteristics. The algorithm takes advantage of knowledge of the heart cycle times (systolic and diastolic periods) and of the spectral characteristics of each component, through wavelet analysis. Data collected in a clinical environment and annotated by a clinician were used to assess the algorithm's performance. Heart sound components were correctly identified in 99.5% of the annotated events. S1 and S2 detection rates were 90.9% and 93.3%, respectively. The median difference between annotated and detected events was 33.9 ms.
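    The timing idea behind S1/S2 segmentation (a short systole between S1 and S2, a longer diastole after S2) can be illustrated with a toy pipeline on a synthetic phonocardiogram. This sketch substitutes a Shannon-energy envelope and greedy peak picking for the paper's wavelet analysis; every signal parameter below is invented.

```python
import numpy as np

# Toy heart-sound event detection: Shannon-energy envelope plus peak picking.
# Stand-in for the wavelet-based method in the abstract; parameters invented.
def shannon_energy_envelope(x, win=200):
    x = x / (np.max(np.abs(x)) + 1e-12)          # normalize to [-1, 1]
    se = -x**2 * np.log(x**2 + 1e-12)            # Shannon energy per sample
    return np.convolve(se, np.ones(win) / win, mode="same")  # smooth envelope

def pick_peaks(env, min_gap, threshold):
    """Greedy picking: strongest envelope samples above threshold, min_gap apart."""
    order = np.argsort(env)[::-1]
    peaks = []
    for i in order:
        if env[i] < threshold:
            break
        if all(abs(i - p) >= min_gap for p in peaks):
            peaks.append(i)
    return sorted(peaks)

# Synthetic phonocardiogram: 3 cycles, S1 then S2 separated by a 0.3 s systole.
fs = 2000
t = np.arange(0, 3.0, 1 / fs)
x = np.zeros_like(t)
for beat in np.arange(0.2, 3.0, 1.0):            # one cardiac cycle per second
    for onset, f in ((beat, 60.0), (beat + 0.3, 90.0)):   # S1 burst, S2 burst
        idx = (t >= onset) & (t < onset + 0.08)
        x[idx] += np.sin(2 * np.pi * f * (t[idx] - onset)) * np.hanning(idx.sum())

env = shannon_energy_envelope(x)
peaks = pick_peaks(env, min_gap=int(0.2 * fs), threshold=0.2 * env.max())
# Expect 6 events: S1 and S2 for each of the 3 cycles.
```

Because systole is reliably shorter than diastole, the shorter of the two inter-peak intervals identifies which event is S1, which is the cycle-timing knowledge the paper's algorithm exploits.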

  2. Echolocation of insects using intermittent frequency-modulated sounds.

    PubMed

    Matsuo, Ikuo; Takanashi, Takuma

    2015-09-01

    Using echolocation influenced by Doppler shift, bats can capture flying insects in real three-dimensional space. On the basis of this principle, a model that estimates object locations using frequency modulated (FM) sound was proposed. However, no investigation was conducted to verify whether the model can localize flying insects from their echoes. This study applied the model to estimate the range and direction of flying insects by extracting temporal changes from the time-frequency pattern and interaural range difference, respectively. The results obtained confirm that a living insect's position can be estimated using this model with echoes measured while emitting intermittent FM sounds.

  3. Radar soundings of the ionosphere of Mars.

    PubMed

    Gurnett, D A; Kirchner, D L; Huff, R L; Morgan, D D; Persoon, A M; Averkamp, T F; Duru, F; Nielsen, E; Safaeinili, A; Plaut, J J; Picardi, G

    2005-12-23

    We report the first radar soundings of the ionosphere of Mars with the MARSIS (Mars Advanced Radar for Subsurface and Ionosphere Sounding) instrument on board the orbiting Mars Express spacecraft. Several types of ionospheric echoes are observed, ranging from vertical echoes caused by specular reflection from the horizontally stratified ionosphere to a wide variety of oblique and diffuse echoes. The oblique echoes are believed to arise mainly from ionospheric structures associated with the complex crustal magnetic fields of Mars. Echoes at the electron plasma frequency and the cyclotron period also provide measurements of the local electron density and magnetic field strength.

  4. Sound velocity in five-component air mixtures of various densities

    NASA Astrophysics Data System (ADS)

    Bogdanova, N. V.; Rydalevskaya, M. A.

    2018-05-01

    Local-equilibrium flows of five-component air mixtures are considered. Gas-dynamic equations are derived from the kinetic equations for aggregate values of collision invariants. It is shown that the traditional formula for the sound velocity remains valid in the air mixtures considered, with chemical reactions and internal degrees of freedom taken into account. This formula connects the square of the sound velocity with pressure and density; however, the adiabatic coefficient is not constant under the conditions considered. An analytical expression for this coefficient is obtained, and examples of its calculation in air mixtures of various densities are presented.
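The traditional formula referred to above, c² = γp/ρ, can be evaluated directly once the adiabatic coefficient is known; a minimal sketch (the sea-level example values are ours, not the paper's):

```python
import math

# The classical relation linking the squared sound speed to pressure and
# density via the adiabatic coefficient gamma: c^2 = gamma * p / rho.
# In the paper gamma varies with conditions; here it is simply a parameter.

def sound_speed(gamma, pressure_pa, density_kg_m3):
    """Sound speed (m/s) from adiabatic coefficient, pressure, and density."""
    return math.sqrt(gamma * pressure_pa / density_kg_m3)

# Sea-level air (gamma ~ 1.4, p ~ 101325 Pa, rho ~ 1.225 kg/m^3) -> ~340 m/s.
print(round(sound_speed(1.4, 101325.0, 1.225), 1))  # 340.3
```

The paper's contribution is the analytical expression for γ itself in reacting five-component mixtures; with that expression in hand, the formula above applies unchanged.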

  5. 77 FR 33967 - Special Local Regulations; OPSAIL 2012 Connecticut, Niantic Bay, Long Island Sound, Thames River...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-08

    ..., Operation Sail, Inc., is planning to publish information on the event in local newspapers, internet sites... areas for viewing the ``Parade of Sail'' have been established to allow for maximum use of the waterways... sponsoring organization, Operation Sail, Inc., is planning to publish information of the event in local...

  6. Local and large-scale climate forcing of Puget Sound oceanographic properties on seasonal to interdecadal timescales

    Treesearch

    Stephanie K. Moore; Nathan J. Mantua; Jonathan P. Kellogg; Jan A. Newton

    2008-01-01

    The influence of climate on Puget Sound oceanographic properties is investigated on seasonal to interannual timescales using continuous profile data at 16 stations from 1993 to 2002 and records of sea surface temperature (SST) and sea surface salinity (SSS) from 1951 to 2002. Principal components analyses of profile data identify indices representing 42%, 58%, and 56%...

  7. Challenges to the successful implementation of 3-D sound

    NASA Astrophysics Data System (ADS)

    Begault, Durand R.

    1991-11-01

    The major challenges for the successful implementation of 3-D audio systems involve minimizing reversals, intracranially heard sound, and localization error for listeners. Designers of 3-D audio systems are faced with additional challenges in data reduction and low-frequency response characteristics. The relationship of the head-related transfer function (HRTF) to these challenges is shown, along with some preliminary psychoacoustic results gathered at NASA-Ames.

  8. Hoeren unter Wasser: Absolute Reizschwellen und Richtungswahrnehmung (Underwater Hearing: Absolute Thresholds and Sound Localization),

    DTIC Science & Technology

    The article deals first with the theoretical foundations of underwater hearing, and the effects of the acoustical characteristics of water on hearing...lead to the conclusion that, in water, man can locate the direction of sound at low and at very high tonal frequencies of the audio range, but this ability probably vanishes in the middle range of frequencies. (Author)

  9. Evaluation of auditory functions for Royal Canadian Mounted Police officers.

    PubMed

    Vaillancourt, Véronique; Laroche, Chantal; Giguère, Christian; Beaulieu, Marc-André; Legault, Jean-Pierre

    2011-06-01

    Auditory fitness for duty (AFFD) testing is an important element in an assessment of workers' ability to perform job tasks safely and effectively. Functional hearing is particularly critical to job performance in law enforcement. Most often, assessment is based on pure-tone detection thresholds; however, its validity can be questioned and challenged in court. In an attempt to move beyond the pure-tone audiogram, some organizations like the Royal Canadian Mounted Police (RCMP) are incorporating additional testing to supplement audiometric data in their AFFD protocols, such as measurements of speech recognition in quiet and/or in noise, and sound localization. This article reports on the assessment of RCMP officers wearing hearing aids in speech recognition and sound localization tasks. The purpose was to quantify individual performance in different domains of hearing identified as necessary components of fitness for duty, and to document the type of hearing aids prescribed in the field and their benefit for functional hearing. The data are to help RCMP in making more informed decisions regarding AFFD in officers wearing hearing aids. The proposed new AFFD protocol included unaided and aided measures of speech recognition in quiet and in noise using the Hearing in Noise Test (HINT) and sound localization in the left/right (L/R) and front/back (F/B) horizontal planes. Sixty-four officers were identified and selected by the RCMP to take part in this study on the basis of hearing thresholds exceeding current audiometrically based criteria. This article reports the results of 57 officers wearing hearing aids. Based on individual results, 49% of officers were reclassified from nonoperational status to operational with limitations on fine hearing duties, given their unaided and/or aided performance. 
Group data revealed that hearing aids (1) improved speech recognition thresholds on the HINT, the effects being most prominent in Quiet and in conditions of spatial separation between target and noise (Noise Right and Noise Left) and least considerable in Noise Front; (2) neither significantly improved nor impeded L/R localization; and (3) substantially increased F/B errors in localization in a number of cases. Additional analyses also pointed to the poor ability of threshold data to predict functional abilities for speech in noise (r² = 0.26 to 0.33) and sound localization (r² = 0.03 to 0.28). Only speech in quiet (r² = 0.68 to 0.85) is predicted adequately from threshold data. Combined with previous findings, results indicate that the use of hearing aids can considerably affect F/B localization abilities in a number of individuals. Moreover, speech understanding in noise and sound localization abilities were poorly predicted from pure-tone thresholds, demonstrating the need to specifically test these abilities, both unaided and aided, when assessing AFFD. Finally, further work is needed to develop empirically based hearing criteria for the RCMP and identify best practices in hearing aid fittings for optimal functional hearing abilities. American Academy of Audiology.

  10. Subband Approach to Bandlimited Crosstalk Cancellation System in Spatial Sound Reproduction

    NASA Astrophysics Data System (ADS)

    Bai, Mingsian R.; Lee, Chih-Chung

    2006-12-01

    A crosstalk cancellation system (CCS) plays a vital role in spatial sound reproduction using multichannel loudspeakers. However, the technique is still not in widespread practical use because of its heavy computational load. To reduce this load, a bandlimited CCS based on a subband filtering approach is presented in this paper. A pseudo-quadrature mirror filter (QMF) bank is employed in the implementation of the CCS filters, which are bandlimited to 6 kHz, where human localization is most sensitive. In addition, a frequency-dependent regularization scheme is adopted in designing the CCS inverse filters. To justify the proposed system, subjective listening experiments were undertaken in an anechoic room, comprising a source localization test and a sound quality test. Analysis of variance (ANOVA) was applied to assess the statistical significance of the subjective results. The results indicate that the bandlimited CCS performed comparably to the fullband CCS, while the computational load was reduced by approximately 80%.
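Regularized CCS inversion of the kind mentioned above is commonly computed per frequency bin as C = Hᴴ(HHᴴ + βI)⁻¹, where H is the 2×2 matrix of loudspeaker-to-ear transfer functions and β the regularization weight. The sketch below uses made-up matrix values and is an illustration of that formula, not the authors' implementation:

```python
# Regularized inverse of a 2x2 complex plant matrix, per frequency bin:
# C = H^H * (H * H^H + beta * I)^(-1). All matrix values are invented.

def ccs_inverse(H, beta):
    """Regularized inverse of a 2x2 complex matrix H (list of two rows)."""
    (a, b), (c, d) = H
    # M = H * H^H + beta * I  (H^H is the conjugate transpose of H)
    m11 = a * a.conjugate() + b * b.conjugate() + beta
    m12 = a * c.conjugate() + b * d.conjugate()
    m21 = c * a.conjugate() + d * b.conjugate()
    m22 = c * c.conjugate() + d * d.conjugate() + beta
    det = m11 * m22 - m12 * m21
    inv = [[m22 / det, -m12 / det], [-m21 / det, m11 / det]]
    # C = H^H * M^(-1)
    Hh = [[a.conjugate(), c.conjugate()], [b.conjugate(), d.conjugate()]]
    return [[sum(Hh[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# With beta -> 0 and an invertible H, C approaches the exact inverse H^-1,
# so C * H is close to the identity matrix.
H = [[1.0 + 0j, 0.3 + 0.1j], [0.3 - 0.1j, 1.0 + 0j]]
C = ccs_inverse(H, 1e-9)
```

Making β grow outside the 6 kHz band (or at ill-conditioned frequencies) is what turns this into the frequency-dependent regularization scheme the abstract describes.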

  11. Features of the energy structure of acoustic fields in the ocean with two-dimensional random inhomogeneities

    NASA Astrophysics Data System (ADS)

    Gulin, O. E.; Yaroshchuk, I. O.

    2017-03-01

    The paper is devoted to the analytic study and numerical simulation of mid-frequency acoustic signal propagation in a two-dimensional randomly inhomogeneous shallow-water medium. The study was carried out by the cross-section method (local modes). We present original theoretical estimates for the behavior of the average acoustic field intensity and show that, at different distances, the features of the propagation loss are determined by the intensity of the fluctuations and their horizontal scale, and depend on the initial regular parameters, such as the emission frequency and the magnitude of sound losses in the bottom. We establish analytically that, for the waveguide and sound frequency parameters considered, the mode-coupling effect is local in character and weakly influences the statistics. We also establish that the specific form of the spatial spectrum of the sound velocity inhomogeneities is insignificant for the statistical patterns of the field intensity at the shallow-water distances of practical interest.

  12. Interaural Level Difference Dependent Gain Control and Synaptic Scaling Underlying Binaural Computation

    PubMed Central

    Xiong, Xiaorui R.; Liang, Feixue; Li, Haifu; Mesik, Lukas; Zhang, Ke K.; Polley, Daniel B.; Tao, Huizhong W.; Xiao, Zhongju; Zhang, Li I.

    2013-01-01

    Binaural integration in the central nucleus of the inferior colliculus (ICC) plays a critical role in sound localization. However, its arithmetic nature and underlying synaptic mechanisms remain unclear. Here, we showed in mouse ICC neurons that contralateral dominance is created by a “push-pull”-like mechanism, with contralaterally dominant excitation and more bilaterally balanced inhibition. Importantly, the binaural spiking response is apparently generated by an ipsilaterally mediated scaling of the contralateral response, leaving frequency tuning unchanged. This scaling is attributed to a divisive attenuation of contralaterally evoked synaptic excitation onto ICC neurons, with their inhibition largely unaffected. Thus, a gain control mediates the linear transformation from monaural to binaural spike responses. The gain value is modulated by interaural level difference (ILD), primarily through scaling excitation to different levels. The ILD-dependent synaptic scaling and gain adjustment allow ICC neurons to dynamically encode interaural sound localization cues while maintaining an invariant representation of other independent sound attributes. PMID:23972599
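The scaling arithmetic described above can be caricatured in a few lines (our illustration, with an invented gain law, not the authors' fitted model): the binaural response is the contralateral tuning curve divided by an ILD-dependent factor, which changes the firing rate but not the preferred frequency.

```python
# Toy divisive gain control: the binaural response is the contralateral
# frequency-tuning curve scaled by an ILD-dependent gain, so the tuning
# shape (preferred frequency) is preserved. The gain law is invented.

def binaural_response(contra_tuning, ild_db, slope=0.05):
    """Scale a contralateral tuning curve by an ILD-dependent divisive gain."""
    gain = 1.0 / (1.0 + max(0.0, slope * ild_db))  # stronger ipsi drive -> smaller gain
    return [gain * r for r in contra_tuning]

tuning = [2.0, 8.0, 20.0, 8.0, 2.0]        # spikes/s across five test frequencies
scaled = binaural_response(tuning, ild_db=10.0)
# The best frequency (index of the peak) is unchanged by the scaling.
print(scaled.index(max(scaled)) == tuning.index(max(tuning)))  # True
```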

  13. Impact of Hearing Aid Technology on Outcomes in Daily Life III: Localization.

    PubMed

    Johnson, Jani A; Xu, Jingjing; Cox, Robyn M

    Compared to basic-feature hearing aids, premium-feature hearing aids have more advanced technologies and sophisticated features. The objective of this study was to explore the difference between premium-feature and basic-feature hearing aids in horizontal sound localization in both laboratory and daily life environments. We hypothesized that premium-feature hearing aids would yield better localization performance than basic-feature hearing aids. Exemplars of premium-feature and basic-feature hearing aids from two major manufacturers were evaluated. Forty-five older adults (mean age 70.3 years) with essentially symmetrical mild to moderate sensorineural hearing loss were bilaterally fitted with each of the four pairs of hearing aids. Each pair of hearing aids was worn during a 4-week field trial and then evaluated using laboratory localization tests and a standardized questionnaire. Laboratory localization tests were conducted in a sound-treated room with a 360°, 24-loudspeaker array. Test stimuli were high frequency and low frequency filtered short sentences. The localization test in quiet was designed to assess the accuracy of front/back localization, while the localization test in noise was designed to assess the accuracy of locating sound sources throughout a 360° azimuth in the horizontal plane. Laboratory data showed that unaided localization was not significantly different from aided localization when all hearing aids were combined. Questionnaire data showed that aided localization was significantly better than unaided localization in everyday situations. Regarding the difference between premium-feature and basic-feature hearing aids, laboratory data showed that, overall, the premium-feature hearing aids yielded more accurate localization than the basic-feature hearing aids when high-frequency stimuli were used, and the listening environment was quiet. 
Otherwise, the premium-feature and basic-feature hearing aids yielded essentially the same performance in other laboratory tests and in daily life. The findings were consistent for both manufacturers. Laboratory tests for the two manufacturers studied showed that premium-feature hearing aids yielded better localization performance than basic-feature hearing aids in one out of four laboratory conditions. There was no difference between the two feature levels in self-reported everyday localization. Effectiveness research with different hearing aid technologies is necessary, and more research with other manufacturers' products is needed. Furthermore, these results confirm previous observations that research findings in laboratory conditions might not translate to everyday life.

  14. Local feedback control of light honeycomb panels.

    PubMed

    Hong, Chinsuk; Elliott, Stephen J

    2007-01-01

    This paper summarizes theoretical and experimental work on the feedback control of sound radiation from honeycomb panels using piezoceramic actuators. It is motivated by the problem of sound transmission in aircraft, specifically the active control of trim panels. Trim panels are generally honeycomb structures designed to meet requirements of low weight and high stiffness. They are resiliently mounted to the fuselage for the passive reduction of noise transmission. Local coupling of the closely spaced sensor and actuator was observed experimentally and modeled using a single-degree-of-freedom system. The effect of the local coupling was to roll off the response between the actuator and sensor at high frequencies, so that a feedback control system can have high gain margins. Unfortunately, only relatively poor global performance is then achieved because the reduction is localized around the actuator. This localization prompted the investigation of a multichannel active control system. Global reduction was predicted using a model of 12-channel direct velocity feedback control. The multichannel system, however, does not appear to yield a significant improvement in performance because of the decreased gain margin.

  15. Theoretical foundations of the sound analog membrane potential that underlies coincidence detection in the barn owl

    PubMed Central

    Ashida, Go; Funabiki, Kazuo; Carr, Catherine E.

    2013-01-01

    A wide variety of neurons encode temporal information via phase-locked spikes. In the avian auditory brainstem, neurons in the cochlear nucleus magnocellularis (NM) send phase-locked synaptic inputs to coincidence detector neurons in the nucleus laminaris (NL) that mediate sound localization. Previous modeling studies suggested that converging phase-locked synaptic inputs may give rise to a periodic oscillation in the membrane potential of their target neuron. Recent physiological recordings in vivo revealed that owl NL neurons changed their spike rates almost linearly with the amplitude of this oscillatory potential. The oscillatory potential was termed the sound analog potential, because of its resemblance to the waveform of the stimulus tone. The amplitude of the sound analog potential recorded in NL varied systematically with the interaural time difference (ITD), which is one of the most important cues for sound localization. In order to investigate the mechanisms underlying ITD computation in the NM-NL circuit, we provide detailed theoretical descriptions of how phase-locked inputs form oscillating membrane potentials. We derive analytical expressions that relate presynaptic, synaptic, and postsynaptic factors to the signal and noise components of the oscillation in both the synaptic conductance and the membrane potential. Numerical simulations demonstrate the validity of the theoretical formulations for the entire frequency ranges tested (1–8 kHz) and potential effects of higher harmonics on NL neurons with low best frequencies (<2 kHz). PMID:24265616
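One ingredient of the signal component described above can be sketched simply: the amplitude of the membrane-potential oscillation at the stimulus frequency grows with the vector strength of the phase-locked input spikes. The phase values below are invented for illustration; this is not the authors' full synaptic model.

```python
import cmath

# Vector strength of a set of spike phases: the length of the mean phase
# vector, 1.0 for perfect phase locking, near 0 for random phases. Tighter
# phase locking of NM inputs yields a larger sound analog oscillation in NL.

def vector_strength(spike_phases):
    """Vector strength of spike phases (radians)."""
    n = len(spike_phases)
    return abs(sum(cmath.exp(1j * p) for p in spike_phases)) / n

tight = [0.0, 0.1, -0.1, 0.05, -0.05]   # tightly phase-locked inputs
loose = [0.0, 2.0, 4.0, 1.0, 5.0]       # weakly phase-locked inputs
print(vector_strength(tight) > vector_strength(loose))  # True
```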

  16. A randomized trial of nature scenery and sounds versus urban scenery and sounds to reduce pain in adults undergoing bone marrow aspirate and biopsy.

    PubMed

    Lechtzin, Noah; Busse, Anne M; Smith, Michael T; Grossman, Stuart; Nesbit, Suzanne; Diette, Gregory B

    2010-09-01

    Bone marrow aspiration and biopsy (BMAB) is painful when performed with only local anesthetic. Our objective was to determine whether viewing nature scenes and listening to nature sounds can reduce pain during BMAB. This was a randomized, controlled clinical trial. Adult patients undergoing outpatient BMAB with only local anesthetic were assigned to use either a nature scene with accompanying nature sounds, city scene with city sounds, or standard care. The primary outcome was a visual analog scale (0-10) of pain. Prespecified secondary analyses included categorizing pain as mild and moderate to severe and using multiple logistic regression to adjust for potential confounding variables. One hundred and twenty (120) subjects were enrolled: 44 in the Nature arm, 39 in the City arm, and 37 in the Standard Care arm. The mean pain scores, which were the primary outcome, were not significantly different between the three arms. A higher proportion in the Standard Care arm had moderate-to-severe pain (pain rating ≥4) than in the Nature arm (78.4% versus 60.5%), though this was not statistically significant (p = 0.097). This difference was statistically significant after adjusting for differences in the operators who performed the procedures (odds ratio = 3.71, p = 0.02). We confirmed earlier findings showing that BMAB is poorly tolerated. While mean pain scores were not significantly different between the study arms, secondary analyses suggest that viewing a nature scene while listening to nature sounds is a safe, inexpensive method that may reduce pain during BMAB. This approach should be considered to alleviate pain during invasive procedures.

  17. Heart sounds as a result of acoustic dipole radiation of heart valves

    NASA Astrophysics Data System (ADS)

    Kasoev, S. G.

    2005-11-01

    Heart sounds are associated with impulses of force acting on heart valves at the moment they close under the action of blood-pressure difference. A unified model for all the valves represents this impulse as an acoustic dipole. The near pressure field of this dipole creates a distribution of the normal velocity on the breast surface with features typical of auscultation practice: a pronounced localization of heart sound audibility areas, an individual area for each of the valves, and a noncoincidence of these areas with the projections of the valves onto the breast surface. In the framework of the dipole theory, the optimum size of the stethoscope’s bell is found and the spectrum of the heart sounds is estimated. The estimates are compared with the measured spectrum.
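The localization of audibility areas follows from the dipole's near-field falloff, p ∝ cos θ / r². A toy evaluation with invented depth and offset values (not the paper's anatomy, and with all constants dropped):

```python
import math

# Relative near-field pressure of an acoustic dipole at a surface point:
# p ~ cos(theta) / r^2, so audibility is strongly localized above the valve.

def dipole_near_pressure(depth, lateral):
    """Relative near-field dipole pressure at a chest-surface point.

    depth:   distance from the dipole (valve) to the surface along its axis, m
    lateral: offset along the surface from the point directly above it, m
    """
    r = math.hypot(depth, lateral)
    cos_theta = depth / r
    return cos_theta / r ** 2

on_axis = dipole_near_pressure(0.05, 0.0)    # directly over the valve
off_axis = dipole_near_pressure(0.05, 0.10)  # 10 cm to the side
print(on_axis / off_axis)  # the on-axis point is roughly an order of magnitude louder
```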

  18. Cue Reliability Represented in the Shape of Tuning Curves in the Owl's Sound Localization System.

    PubMed

    Cazettes, Fanny; Fischer, Brian J; Peña, Jose L

    2016-02-17

    Optimal use of sensory information requires that the brain estimates the reliability of sensory cues, but the neural correlate of cue reliability relevant for behavior is not well defined. Here, we addressed this issue by examining how the reliability of spatial cue influences neuronal responses and behavior in the owl's auditory system. We show that the firing rate and spatial selectivity changed with cue reliability due to the mechanisms generating the tuning to the sound localization cue. We found that the correlated variability among neurons strongly depended on the shape of the tuning curves. Finally, we demonstrated that the change in the neurons' selectivity was necessary and sufficient for a network of stochastic neurons to predict behavior when sensory cues were corrupted with noise. This study demonstrates that the shape of tuning curves can stand alone as a coding dimension of environmental statistics. In natural environments, sensory cues are often corrupted by noise and are therefore unreliable. To make the best decisions, the brain must estimate the degree to which a cue can be trusted. The behaviorally relevant neural correlates of cue reliability are debated. In this study, we used the barn owl's sound localization system to address this question. We demonstrated that the mechanisms that account for spatial selectivity also explained how neural responses changed with degraded signals. This allowed for the neurons' selectivity to capture cue reliability, influencing the population readout commanding the owl's sound-orienting behavior. Copyright © 2016 the authors 0270-6474/16/362101-10$15.00/0.

  19. The effects of experimentally induced conductive hearing loss on spectral and temporal aspects of sound transmission through the ear.

    PubMed

    Lupo, J Eric; Koka, Kanthaiah; Thornton, Jennifer L; Tollin, Daniel J

    2011-02-01

    Conductive hearing loss (CHL) is known to produce hearing deficits, including deficits in sound localization ability. The differences in sound intensities and timing experienced between the two tympanic membranes are important cues to sound localization (ILD and ITD, respectively). Although much is known about the effect of CHL on hearing levels, little investigation has been conducted into the actual impact of CHL on sound location cues. This study investigated effects of CHL induced by earplugs on cochlear microphonic (CM) amplitude and timing and their corresponding effect on the ILD and ITD location cues. Acoustic and CM measurements were made in 5 chinchillas before and after earplug insertion, and again after earplug removal using pure tones (500 Hz to 24 kHz). ILDs in the unoccluded condition demonstrated position and frequency dependence where peak far-lateral ILDs approached 30 dB for high frequencies. Unoccluded ear ITD cues demonstrated positional and frequency dependence with increased ITD cue for both decreasing frequency (±420 μs at 500 Hz, ±310 μs for 1-4 kHz) and increasingly lateral sound source locations. Occlusion of the ear canal with foam plugs resulted in a mild, frequency-dependent conductive hearing loss of 10-38 dB (mean 31 ± 3.9 dB) leading to a concomitant frequency dependent increase in ILDs at all source locations. The effective ITDs increased in a frequency dependent manner with ear occlusion as a direct result of the acoustic properties of the plugging material, the latter confirmed via acoustical measurements using a model ear canal with varying volumes of acoustic foam. Upon ear plugging with acoustic foam, a mild CHL is induced. Furthermore, the CHL induced by acoustic foam results in substantial changes in the magnitudes of both the ITD and ILD cues to sound location. Copyright © 2010 Elsevier B.V. All rights reserved.
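For context, the geometric origin of the ITD cue can be approximated with the classic spherical-head (Woodworth) formula, ITD = (a/c)(θ + sin θ); the head radius below is an arbitrary example value, not measured chinchilla anatomy:

```python
import math

# Woodworth spherical-head approximation of the interaural time difference.
# Assumed example values: head radius 2 cm, speed of sound 343 m/s.

def woodworth_itd(azimuth_deg, head_radius_m=0.02, c=343.0):
    """Interaural time difference (s) for a source at the given azimuth."""
    th = math.radians(azimuth_deg)
    return head_radius_m / c * (th + math.sin(th))

print(round(woodworth_itd(90.0) * 1e6, 1), "microseconds at 90 degrees")
```

This frequency-independent geometric picture is only a baseline; as the study shows, the measured ITD grows at low frequencies and changes further when the canal is plugged.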

  1. Sounds activate visual cortex and improve visual discrimination.

    PubMed

    Feng, Wenfeng; Störmer, Viola S; Martinez, Antigona; McDonald, John J; Hillyard, Steven A

    2014-07-16

    A recent study in humans (McDonald et al., 2013) found that peripheral, task-irrelevant sounds activated contralateral visual cortex automatically as revealed by an auditory-evoked contralateral occipital positivity (ACOP) recorded from the scalp. The present study investigated the functional significance of this cross-modal activation of visual cortex, in particular whether the sound-evoked ACOP is predictive of improved perceptual processing of a subsequent visual target. A trial-by-trial analysis showed that the ACOP amplitude was markedly larger preceding correct than incorrect pattern discriminations of visual targets that were colocalized with the preceding sound. Dipole modeling of the scalp topography of the ACOP localized its neural generators to the ventrolateral extrastriate visual cortex. These results provide direct evidence that the cross-modal activation of contralateral visual cortex by a spatially nonpredictive but salient sound facilitates the discriminative processing of a subsequent visual target event at the location of the sound. Recordings of event-related potentials to the targets support the hypothesis that the ACOP is a neural consequence of the automatic orienting of visual attention to the location of the sound. Copyright © 2014 the authors 0270-6474/14/349817-08$15.00/0.

  2. Rolling ball sifting algorithm for the augmented visual inspection of carotid bruit auscultation

    NASA Astrophysics Data System (ADS)

    Huang, Adam; Lee, Chung-Wei; Liu, Hon-Man

    2016-07-01

    Carotid bruits are systolic sounds associated with turbulent blood flow through atherosclerotic stenosis in the neck. They are audible, intermittent high-frequency (above 200 Hz) sounds mixed with background noise and transmitted low-frequency (below 100 Hz) heart sounds that wax and wane periodically. It is a nontrivial task to extract both bruits and heart sounds with high fidelity for further computer-aided auscultation and diagnosis. In this paper we propose a rolling ball sifting algorithm capable of filtering signals with a sharper frequency selectivity mechanism in the time domain. Two balls of a suitable radius (one above and one below the signal) are rolled along it; they are large enough to roll over bruits yet small enough to ride on heart sound waveforms. The high-frequency bruits can then be extracted, according to a tangibility criterion, from the local extrema touched by the balls. Similarly, the low-frequency heart sounds can be acquired with a larger radius. By visualizing the periodicity information of both the extracted heart sounds and bruits, the proposed visual inspection method can potentially improve carotid bruit diagnosis accuracy.
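A flat-window morphological opening (moving minimum followed by moving maximum) is a common one-dimensional stand-in for the rolling-ball idea and gives the flavor of the method: a window wide enough to pass over narrow spikes but not the broad underlying waveform. This is our illustration, not the authors' tangibility-criterion algorithm, and all signal values are invented.

```python
# Flat-window grayscale opening as a 1-D rolling-ball stand-in: subtracting
# the opening from the signal isolates narrow upward spikes ("bruits") from
# the broad underlying hump ("heart sound").

def opening(signal, radius):
    """Grayscale opening with a flat window of half-width `radius`."""
    n = len(signal)
    def moving(vals, op):
        return [op(vals[max(0, i - radius):i + radius + 1]) for i in range(n)]
    return moving(moving(signal, min), max)  # erosion, then dilation

slow = [0, 1, 2, 3, 4, 4, 3, 2, 1, 0]                            # broad hump
spiky = [v + (6 if i == 5 else 0) for i, v in enumerate(slow)]   # narrow spike added
baseline = opening(spiky, radius=1)
bruit = [s - b for s, b in zip(spiky, baseline)]
print(bruit)  # [0, 0, 0, 0, 1, 7, 0, 0, 0, 0] -- the spike is isolated
```

A true rolling-ball filter uses a curved (disc-shaped) structuring element rather than a flat window, which follows curved waveforms more faithfully; the separation principle is the same.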

  3. On cortical coding of vocal communication sounds in primates

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoqin

    2000-10-01

    Understanding how the brain processes vocal communication sounds is one of the most challenging problems in neuroscience. Our understanding of how the cortex accomplishes this unique task should greatly facilitate our understanding of cortical mechanisms in general. Perception of species-specific communication sounds is an important aspect of the auditory behavior of many animal species and is crucial for their social interactions, reproductive success, and survival. The principles of neural representations of these behaviorally important sounds in the cerebral cortex have direct implications for the neural mechanisms underlying human speech perception. Our progress in this area has been relatively slow, compared with our understanding of other auditory functions such as echolocation and sound localization. This article discusses previous and current studies in this field, with emphasis on nonhuman primates, and proposes a conceptual platform to further our exploration of this frontier. It is argued that the prerequisite condition for understanding cortical mechanisms underlying communication sound perception and production is an appropriate animal model. Three issues are central to this work: (i) neural encoding of statistical structure of communication sounds, (ii) the role of behavioral relevance in shaping cortical representations, and (iii) sensory-motor interactions between vocal production and perception systems.

  4. [Expression of NR2A in rat auditory cortex after sound insulation and auditory plasticity].

    PubMed

    Xia, Yin; Long, Haishan; Han, Demin; Gong, Shusheng; Lei, Li; Shi, Jinfeng; Fan, Erzhong; Li, Ying; Zhao, Qing

    2009-06-01

    To study the changes in N-methyl-D-aspartate (NMDA) receptor subunit 2A (NR2A) expression at local synapses in the auditory cortex after early postnatal sound insulation and tone exposure. We prepared highly purified synaptosomes from primary auditory cortex by Optiprep flotation gradient centrifugation, and compared NR2A expression by Western blotting after sound insulation at PND14, PND28, and PND42, and after tone exposure following 7 days of sound insulation. The NR2A protein expression at PND14 and PND28 decreased significantly (P<0.05). After tone exposure following 7 days of sound insulation, the NR2A protein level increased significantly (P<0.05), showing bidirectional regulation of the NR2A protein. No significant effects of sound insulation and tone exposure were found on the relative NR2A expression level at PND42 (P>0.05). The results indicate that sound insulation and experience can modify the protein expression level of NR2A during the critical period of rat postnatal development. These findings provide important data for the study of the mechanisms of the developmental plasticity of sensory functions.

  5. Opto-Acoustic Telephone Study.

    DTIC Science & Technology

    1983-01-01

    ...a small quantity of suspended carbonized fiber in which sound is generated by the local gas/fiber expansion and contraction. Sound is coupled from the small absorptive chamber to...matrix of carbonized cotton fibers suspended in air. This combination may be regarded as a "pseudo gas." To model the photoacoustic effect in the

  6. Ocean Basin Impact of Ambient Noise on Marine Mammal Detectability, Distribution, and Acoustic Communication - YIP

    DTIC Science & Technology

    2013-09-30

...soundscape into frequency categories and sound level percentiles allowed for detailed examination of the acoustic environment that would not have been... patterns and trends across sound level parameters and frequency at a single location; it is recommended that the soundscape of any region be... joined to better understand the contribution and variation of distant shipping noise in local soundscapes (Ainslie & Miksis-Olds, 2013).

  7. Physiological and Psychophysical Modeling of the Precedence Effect

    PubMed Central

    Xia, Jing; Brughera, Andrew; Colburn, H. Steven

    2010-01-01

    Many past studies of sound localization explored the precedence effect (PE), in which a pair of brief, temporally close sounds from different directions is perceived as coming from a location near that of the first-arriving sound. Here, a computational model of low-frequency inferior colliculus (IC) neurons accounts for both physiological and psychophysical responses to PE click stimuli. In the model, IC neurons have physiologically plausible inputs, receiving excitation from the ipsilateral medial superior olive (MSO) and long-lasting inhibition from both ipsilateral and contralateral MSOs, relayed through the dorsal nucleus of the lateral lemniscus. In this model, physiological suppression of the lagging response depends on the inter-stimulus delay (ISD) between the lead and lag as well as their relative locations. Psychophysical predictions are generated from a population of model neurons. At all ISDs, predicted lead localization is good. At short ISDs, the estimated location of the lag is near that of the lead, consistent with subjects perceiving both lead and lag from the lead location. As ISD increases, the estimated lag location moves closer to the true lag location, consistent with listeners’ perception of two sounds from separate locations. Together, these simulations suggest that location-dependent suppression in IC neurons can explain the behavioral phenomenon known as the precedence effect. PMID:20358242

  8. Sound field reconstruction within an entire cavity by plane wave expansions using a spherical microphone array.

    PubMed

    Wang, Yan; Chen, Kean

    2017-10-01

A spherical microphone array has proved effective in reconstructing an enclosed sound field by a superposition of spherical wave functions in the Fourier domain. It allows successful reconstruction in the region surrounding the array, but accuracy degrades with distance. In order to extend effective reconstruction to the entire cavity, a plane-wave basis in the space domain is used, owing to its non-decaying propagating character, and compared with the conventional spherical-wave-function method in a low-frequency sound field within a cylindrical cavity. The sensitivity to measurement noise and the effects of the number of plane waves and of the measurement positions are discussed. Simulations show that, under the same measurement conditions, the plane-wave method is superior in terms of reconstruction accuracy and data-processing efficiency: the entire sound field can be imaged in a single calculation, instead of translating local sets of coefficients, obtained at every measurement position, into a global set. An experiment was conducted inside an aircraft-cabin mock-up for validation. Additionally, this method provides an alternative way to recover the coefficients of high-order spherical wave functions in a global coordinate system without coordinate translations with respect to local origins.
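The fitting step behind such a plane-wave expansion can be sketched as a least-squares problem: measure pressures at known microphone positions, solve for the amplitudes of a set of candidate plane waves, then evaluate the expansion anywhere in the cavity. The sketch below is purely illustrative (array geometry, frequency, direction sampling, and noise level are all assumptions, not the authors' setup):

```python
import numpy as np

def plane_wave_basis(positions, directions, k):
    """Plane waves exp(-i k d.r) at each position; one column per direction."""
    return np.exp(-1j * k * positions @ directions.T)

rng = np.random.default_rng(0)
k = 2 * np.pi * 500 / 343.0  # wavenumber at 500 Hz, c = 343 m/s (illustrative)

# Hypothetical spherical array: 32 microphones on a 0.1 m radius sphere.
mics = rng.normal(size=(32, 3))
mics = 0.1 * mics / np.linalg.norm(mics, axis=1, keepdims=True)

# Candidate plane-wave directions (random unit vectors here; a real
# implementation would sample the sphere near-uniformly).
dirs = rng.normal(size=(64, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Synthetic "measured" pressures: one incident plane wave plus noise.
p_meas = plane_wave_basis(mics, dirs[:1], k)[:, 0]
p_meas += 0.01 * (rng.normal(size=32) + 1j * rng.normal(size=32))

# Least-squares fit of the plane-wave coefficients to the array data.
A = plane_wave_basis(mics, dirs, k)
coeffs, *_ = np.linalg.lstsq(A, p_meas, rcond=None)

# Because plane waves do not decay, the fitted expansion can be evaluated
# at points well away from the array, not only in its immediate vicinity.
p_far = plane_wave_basis(np.array([[1.0, 0.5, -0.2]]), dirs, k) @ coeffs
```

The single `lstsq` call is the "one-time calculation" contrast the abstract draws: the coefficients are global to the cavity, so no per-position translation of local expansions is needed.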

  9. Tonotopic tuning in a sound localization circuit.

    PubMed

    Slee, Sean J; Higgs, Matthew H; Fairhall, Adrienne L; Spain, William J

    2010-05-01

    Nucleus laminaris (NL) neurons encode interaural time difference (ITD), the cue used to localize low-frequency sounds. A physiologically based model of NL input suggests that ITD information is contained in narrow frequency bands around harmonics of the sound frequency. This suggested a theory, which predicts that, for each tone frequency, there is an optimal time course for synaptic inputs to NL that will elicit the largest modulation of NL firing rate as a function of ITD. The theory also suggested that neurons in different tonotopic regions of NL require specialized tuning to take advantage of the input gradient. Tonotopic tuning in NL was investigated in brain slices by separating the nucleus into three regions based on its anatomical tonotopic map. Patch-clamp recordings in each region were used to measure both the synaptic and the intrinsic electrical properties. The data revealed a tonotopic gradient of synaptic time course that closely matched the theoretical predictions. We also found postsynaptic band-pass filtering. Analysis of the combined synaptic and postsynaptic filters revealed a frequency-dependent gradient of gain for the transformation of tone amplitude to NL firing rate modulation. Models constructed from the experimental data for each tonotopic region demonstrate that the tonotopic tuning measured in NL can improve ITD encoding across sound frequencies.

  10. Objective function analysis for electric soundings (VES), transient electromagnetic soundings (TEM) and joint inversion VES/TEM

    NASA Astrophysics Data System (ADS)

    Bortolozo, Cassiano Antonio; Bokhonok, Oleg; Porsani, Jorge Luís; Monteiro dos Santos, Fernando Acácio; Diogo, Liliana Alcazar; Slob, Evert

    2017-11-01

Ambiguities in geophysical inversion results are always present; how these ambiguities appear is, in most cases, open to interpretation, and it is interesting to investigate them with regard to the parameters of the models under study. The Residual Function Dispersion Map (RFDM) can be used to differentiate between global ambiguities and local minima in the objective function. We apply RFDM to Vertical Electrical Sounding (VES) and Transient Electromagnetic (TEM) sounding inversion results. Through topographic analysis of the objective function, we evaluate the advantages and limitations of electrical sounding data compared with TEM sounding data, and the benefits of joint inversion in comparison with the individual methods. The RFDM analysis proved to be a very interesting tool for understanding the joint VES/TEM inversion. The applicability of RFDM analysis to real data is also explored, to demonstrate not only how the objective function of real data behaves but also how the approach performs in real cases. With the analysis of the results, it is possible to understand how joint inversion can reduce the ambiguity of the individual methods.
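The general idea of mapping an objective function over parameter space to expose ambiguity can be illustrated with a toy two-parameter forward model. Everything below (the exponential model, the grid, the synthetic data) is an illustrative stand-in, not the authors' VES/TEM physics or the actual RFDM definition:

```python
import numpy as np

# Toy forward model with two parameters (think: a layer amplitude and a
# decay scale); purely illustrative.
def forward(p1, p2, x):
    return p1 * np.exp(-x / p2)

x = np.linspace(0.1, 5, 40)
data = forward(2.0, 1.5, x)  # noiseless synthetic "observed" sounding

# Residual (objective) function evaluated over a parameter grid. Flat
# valleys or multiple low regions in this map are the ambiguities the
# topographic analysis is after.
P1, P2 = np.meshgrid(np.linspace(0.5, 4, 80), np.linspace(0.5, 4, 80))
rfd = np.array([[np.sum((forward(a, b, x) - data) ** 2)
                 for a, b in zip(r1, r2)] for r1, r2 in zip(P1, P2)])

# Grid point with the smallest misfit (should sit near the true (2.0, 1.5)).
i, j = np.unravel_index(np.argmin(rfd), rfd.shape)
```

Joint inversion corresponds to summing misfit maps from two methods: where one map has an elongated valley (trade-off between parameters), the combined map is typically better peaked.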

  11. Sound transmission through a microperforated-panel structure with subdivided air cavities.

    PubMed

    Toyoda, Masahiro; Takahashi, Daiji

    2008-12-01

    The absorption characteristics of a microperforated-panel (MPP) absorber have been widely investigated, and MPPs are recognized as a next-generation absorbing material due to their fiber-free nature and attractive appearance. Herein, further possibilities of MPPs are investigated theoretically from a sound transmission viewpoint. Employing an analytical model composed of a typical MPP and a back wall with an infinite extent, transmission loss through the structure is obtained. Although MPP structures generally have great potential for sound absorption, an improvement in the transmission loss at midfrequencies, which is important for architectural sound insulation, is not sufficient when using a backing cavity alone. Hence, to improve transmission loss at midfrequencies, an air-cavity-subdivision technique is applied to MPP structures. By subdividing the air cavity with partitions, each cell can create a local one-dimensional sound field as well as lead to a normal incidence into the apertures, which is the most effective condition for Helmholtz-type resonance absorption. Moreover, by providing the same motion as the back wall to the MPP, the sound-insulation performance can be further improved at midfrequencies.

  12. Temporal Organization of Sound Information in Auditory Memory.

    PubMed

    Song, Kun; Luo, Huan

    2017-01-01

Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address this issue, we designed an auditory memory-transfer study, combining a previously developed unsupervised white-noise memory paradigm with a reversed-sound manipulation method. Specifically, across seven experiments we systematically measured the memory transfer from a random white-noise sound to its locally temporally reversed version at various temporal scales. We demonstrate a U-shaped memory-transfer pattern with the minimum value around a temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulation's temporal scale can account for the memory-transfer results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured as discrete temporal chunks in long-term auditory memory representation.

  13. Control of boundary layer transition location and plate vibration in the presence of an external acoustic field

    NASA Technical Reports Server (NTRS)

    Maestrello, L.; Grosveld, F. W.

    1991-01-01

The experiment is aimed at controlling the boundary layer transition location and the plate vibration when excited by a flow and an upstream sound source. Sound has been found to affect the flow at the leading edge and the response of a flexible plate in a boundary layer. Because the sound induces early transition, the panel vibration is acoustically coupled to the turbulent boundary layer by the upstream radiation. Localized surface heating at the leading edge delays the transition location downstream of the flexible plate. The response of the plate excited by a turbulent boundary layer (without sound) shows that the plate is forced to vibrate at different frequencies and with different amplitudes as the flow velocity changes, indicating that the plate is driven by the convective waves of the boundary layer. The acoustic disturbances induced by the upstream sound dominate the response of the plate when the boundary layer is either turbulent or laminar. Active vibration control was used to reduce the sound-induced displacement amplitude of the plate.

  14. Extraterrestrial sound for planetaria: A pedagogical study.

    PubMed

    Leighton, T G; Banda, N; Berges, B; Joseph, P F; White, P R

    2016-08-01

The purpose of this project was to supply an acoustical simulation device to a local planetarium for use in live shows aimed at engaging and inspiring children in science and engineering. The device plays audio simulations of estimates of the sounds produced by natural phenomena to accompany audio-visual presentations and live shows about Venus, Mars, and Titan. Amongst the simulated sounds are thunder, wind, and cryo-volcanoes. The device can also modify the speech of the presenter (or audience member) in accordance with the underlying physics to reproduce those vocalizations as if they had been produced on the world under discussion. Given that no time-series recordings exist of sounds from other worlds, these sounds had to be simulated. The goal was to ensure that the audio simulations were delivered in time for the planetarium's launch show to enable the requested outreach to children. The exercise has also allowed an explanation of the science and engineering behind the creation of the sounds. This has been achieved for young children, and also for older students and undergraduates, who could then debate the limitations of the method.

  15. Auditory orientation in crickets: Pattern recognition controls reactive steering

    NASA Astrophysics Data System (ADS)

    Poulet, James F. A.; Hedwig, Berthold

    2005-10-01

Many groups of insects are specialists in exploiting sensory cues to locate food resources or conspecifics. To achieve orientation, bees and ants analyze the polarization pattern of the sky, male moths orient along the females' odor plume, and cicadas, grasshoppers, and crickets use acoustic signals to locate singing conspecifics. In comparison with olfactory and visual orientation, where learning is involved, the auditory processing underlying orientation in insects appears to be more hardwired and genetically determined. In each of these examples, however, orientation requires a recognition process, identifying the crucial sensory pattern, to interact with a localization process directing the animal's locomotor activity. Here, we characterize this interaction. Using a sensitive trackball system, we show that, during cricket auditory behavior, the recognition process that is tuned toward the species-specific song pattern controls the amplitude of auditory evoked steering responses. Females perform small reactive steering movements toward any sound patterns. Hearing the male's calling song increases the gain of auditory steering within 2-5 s, and the animals even steer toward nonattractive sound patterns inserted into the species-specific pattern. This gain control mechanism in the auditory-to-motor pathway allows crickets to pursue species-specific sound patterns temporarily corrupted by environmental factors and may reflect the organization of recognition and localization networks in insects.

  16. Underwater passive acoustic localization of Pacific walruses in the northeastern Chukchi Sea.

    PubMed

    Rideout, Brendan P; Dosso, Stan E; Hannay, David E

    2013-09-01

    This paper develops and applies a linearized Bayesian localization algorithm based on acoustic arrival times of marine mammal vocalizations at spatially-separated receivers which provides three-dimensional (3D) location estimates with rigorous uncertainty analysis. To properly account for uncertainty in receiver parameters (3D hydrophone locations and synchronization times) and environmental parameters (water depth and sound-speed correction), these quantities are treated as unknowns constrained by prior estimates and prior uncertainties. Unknown scaling factors on both the prior and arrival-time uncertainties are estimated by minimizing Akaike's Bayesian information criterion (a maximum entropy condition). Maximum a posteriori estimates for sound source locations and times, receiver parameters, and environmental parameters are calculated simultaneously using measurements of arrival times for direct and interface-reflected acoustic paths. Posterior uncertainties for all unknowns incorporate both arrival time and prior uncertainties. Monte Carlo simulation results demonstrate that, for the cases considered here, linearization errors are small and the lack of an accurate sound-speed profile does not cause significant biases in the estimated locations. A sequence of Pacific walrus vocalizations, recorded in the Chukchi Sea northwest of Alaska, is localized using this technique, yielding a track estimate and uncertainties with an estimated speed comparable to normal walrus swim speeds.
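The core of such an arrival-time localization can be sketched as a Gauss-Newton solve for source position and emission time from direct-path arrivals at known receivers. The sketch below is a simplified stand-in: hydrophone positions, sound speed, and noise level are hypothetical, and unlike the paper's full Bayesian treatment it does not estimate receiver or environmental parameters, priors, or uncertainty scalings.

```python
import numpy as np

C = 1480.0  # nominal seawater sound speed, m/s (assumption)

def arrival_times(src, t0, receivers):
    """Direct-path arrival time at each receiver for a source at src emitted at t0."""
    return t0 + np.linalg.norm(receivers - src, axis=1) / C

# Hypothetical 3D hydrophone positions (metres).
rx = np.array([[0, 0, -10], [500, 0, -12], [0, 500, -11],
               [500, 500, -9], [250, 250, -50]], dtype=float)

true_src = np.array([180.0, 320.0, -20.0])
rng = np.random.default_rng(1)
t_meas = arrival_times(true_src, 0.0, rx) + rng.normal(0, 1e-4, len(rx))

# Gauss-Newton on the unknowns (x, y, z, t0): linearize the travel-time
# model about the current estimate and solve the update in least squares.
x = np.array([250.0, 250.0, -30.0, 0.0])  # initial guess
for _ in range(20):
    src, t0 = x[:3], x[3]
    d = np.linalg.norm(rx - src, axis=1)
    resid = t_meas - (t0 + d / C)
    J = np.empty((len(rx), 4))
    J[:, :3] = -(rx - src) / (C * d[:, None])  # d(arrival time)/d(src)
    J[:, 3] = 1.0                              # d(arrival time)/d(t0)
    dx, *_ = np.linalg.lstsq(J, resid, rcond=None)
    x += dx
```

In the linearized Bayesian version, the same Jacobian also propagates arrival-time and prior uncertainties into posterior location uncertainties.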

  17. Audio Spatial Representation Around the Body

    PubMed Central

    Aggius-Vella, Elena; Campus, Claudio; Finocchietti, Sara; Gori, Monica

    2017-01-01

Studies have found that portions of space around our body are differently coded by our brain. Numerous works have investigated visual and auditory spatial representation, focusing mostly on the spatial representation of stimuli presented at head level, especially in the frontal space. Only a few studies have investigated spatial representation around the entire body and its relationship with motor activity. Moreover, it is still not clear whether the space surrounding us is represented as a unitary dimension or whether it is split up into different portions, differently shaped by our senses and motor activity. To clarify these points, we investigated audio localization of dynamic and static sounds at different body levels. In order to understand the role of a motor action in auditory space representation, we asked subjects to localize sounds by pointing with the hand or the foot, or by giving a verbal answer. We found that audio sound localization differed depending on the body part considered. Moreover, a different pattern of response was observed when subjects were asked to make actions compared with the verbal responses. These results suggest that the audio space around our body is split into various spatial portions, which are perceived differently: front, back, around chest, and around foot, suggesting that these four areas could be differently modulated by our senses and our actions. PMID:29249999

  18. Solar Wind Charge Exchange and Local Hot Bubble X-Ray Emission with the DXL Sounding Rocket Experiment

    NASA Technical Reports Server (NTRS)

    Galeazzi, M.; Collier, M. R.; Cravens, T.; Koutroumpa, D.; Kuntz, K. D.; Lepri, S.; McCammon, D.; Porter, F. S.; Prasai, K.; Robertson, I.; hide

    2012-01-01

The Diffuse X-ray emission from the Local Galaxy (DXL) sounding rocket is a NASA-approved mission with a scheduled first launch in December 2012. Its goal is to identify and separate the X-ray emission of solar wind charge exchange (SWCX) from that of the Local Hot Bubble (LHB) to improve our understanding of both. To separate the SWCX contribution from the LHB, DXL will use the SWCX signature due to the helium focusing cone at l = 185 deg, b = -18 deg. DXL uses large-area proportional counters, with an area of 1,000 sq cm and a grasp of about 10 sq cm sr in both the 1/4 and 3/4 keV bands. Thanks to the large grasp, DXL will achieve in a 5 minute flight what cannot be achieved by current and future X-ray satellites.

  19. Instrumentation for measurement of aircraft noise and sonic boom

    NASA Technical Reports Server (NTRS)

    Zuckerwar, A. J. (Inventor)

    1975-01-01

    A jet aircraft noise and sonic boom measuring device which converts sound pressure into electric current is described. An electric current proportional to the sound pressure level at a condenser microphone is produced and transmitted over a cable, amplified by a zero drive amplifier and recorded on magnetic tape. The converter is comprised of a local oscillator, a dual-gate field-effect transistor (FET) mixer and a voltage regulator/impedance translator. A carrier voltage that is applied to one of the gates of the FET mixer is generated by the local oscillator. The microphone signal is mixed with the carrier to produce an electrical current at the frequency of vibration of the microphone diaphragm by the FET mixer. The voltage of the local oscillator and mixer stages is regulated, the carrier at the output is eliminated, and a low output impedance at the cable terminals is provided by the voltage regulator/impedance translator.

  20. Effects of hydrokinetic turbine sound on the behavior of four species of fish within an experimental mesocosm

    DOE PAGES

    Schramm, Michael P.; Bevelhimer, Mark; Scherelis, Constantin

    2017-02-04

The development of hydrokinetic energy technologies (e.g., tidal turbines) has raised concern over the potential impacts of underwater sound produced by hydrokinetic turbines on fish species likely to encounter these turbines. To assess the potential for behavioral impacts, we exposed four species of fish to varying intensities of recorded hydrokinetic turbine sound in a semi-natural environment. Although we tested freshwater species (redhorse suckers [Moxostoma spp], freshwater drum [Aplodinotus grunniens], largemouth bass [Micropterus salmoides], and rainbow trout [Oncorhynchus mykiss]), these species are also representative of the hearing physiology and sensitivity of estuarine species that would be affected at tidal energy sites. Here, we evaluated changes in fish position relative to different intensities of turbine sound, as well as trends in location over time, with linear mixed-effects and generalized additive mixed models. We also evaluated changes in the proportion of near-source detections relative to sound intensity and exposure time with generalized linear mixed models and generalized additive models. Models indicated that redhorse suckers may respond to sustained turbine sound by increasing distance from the sound source. Freshwater drum models suggested a mixed response to turbine sound, and largemouth bass and rainbow trout models did not indicate any likely responses to turbine sound. Lastly, the findings highlight the importance for future research of utilizing accurate localization systems, different species, and validated sound transmission distances, and of considering different types of behavioral responses to different turbine designs and to the cumulative sound of arrays of multiple turbines.

  2. Contribution of the AIRS Shortwave Sounding Channels to Retrieval Accuracy

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis

    2006-01-01

AIRS contains 2376 high spectral resolution channels between 650/cm and 2665/cm, including channels in both the 15 micron (near 667/cm) and 4.2 micron (near 2400/cm) CO2 sounding bands. Use of temperature sounding channels in the 15 micron CO2 band has considerable heritage in infrared remote sensing. Channels in the 4.2 micron CO2 band have potential advantages for temperature sounding purposes because they are essentially insensitive to absorption by water vapor and ozone, and also have considerably sharper lower-tropospheric temperature sounding weighting functions than do the 15 micron temperature sounding channels. Potential drawbacks with regard to use of 4.2 micron channels arise from the effects on the observed radiances of solar radiation reflected by the surface and clouds, as well as effects of non-local thermodynamic equilibrium on shortwave observations during the day. These are of no practical consequence, however, when properly accounted for. We show results of experiments utilizing different spectral regions of AIRS, conducted with the AIRS Science Team candidate Version 5 algorithm. Experiments were performed using temperature sounding channels within the entire AIRS spectral coverage, within only the spectral region 650/cm to 1614/cm, and within only the spectral region 1000/cm to 2665/cm. These show the relative importance, with regard to sounding accuracy, of utilizing only the 15 micron temperature sounding channels, only the 4.2 micron temperature sounding channels, and both. The spectral region 2380/cm to 2400/cm is shown to contribute significantly to improved sounding accuracy in the lower troposphere, both day and night.

  3. Forebrain pathway for auditory space processing in the barn owl.

    PubMed

    Cohen, Y E; Miller, G L; Knudsen, E I

    1998-02-01

The forebrain plays an important role in many aspects of sound localization behavior. Yet, the forebrain pathway that processes auditory spatial information is not known for any species. Using standard anatomical labeling techniques, we took a "top-down" approach to trace the flow of auditory spatial information from an output area of the forebrain sound localization pathway (the auditory archistriatum, AAr), back through the forebrain, and into the auditory midbrain. Previous work has demonstrated that AAr units are specialized for auditory space processing. The results presented here show that the AAr receives afferent input from Field L both directly and indirectly via the caudolateral neostriatum. Afferent input to Field L originates mainly in the auditory thalamus, nucleus ovoidalis, which, in turn, receives input from the central nucleus of the inferior colliculus. In addition, we confirmed previously reported projections of the AAr to the basal ganglia, the external nucleus of the inferior colliculus (ICX), the deep layers of the optic tectum, and various brain stem nuclei. A series of inactivation experiments demonstrated that the sharp tuning of AAr sites for binaural spatial cues depends on Field L input but not on input from the auditory space map in the midbrain ICX: pharmacological inactivation of Field L completely eliminated auditory responses in the AAr, whereas bilateral ablation of the midbrain ICX had no appreciable effect on AAr responses. We conclude, therefore, that the forebrain sound localization pathway can process auditory spatial information independently of the midbrain localization pathway.

  4. Open-Fit Domes and Children with Bilateral High-Frequency Sensorineural Hearing Loss: Benefits and Outcomes.

    PubMed

    Johnstone, Patti M; Yeager, Kelly R; Pomeroy, Marnie L; Hawk, Nicole

    2018-04-01

Open-fit domes (OFDs) coupled with behind-the-ear (BTE) hearing aids were designed for adult listeners with moderate-to-severe bilateral high-frequency hearing loss (BHFL) with little to no concurrent loss in the lower frequencies. Adult research shows that BHFL degrades sound localization accuracy (SLA) and that BTE hearing aids with conventional earmolds (CEs) make matters worse. In contrast, research has shown that OFDs enhance spatial hearing percepts in adults with BHFL. Although the benefits of OFDs have been studied in adults with BHFL, no published studies to date have investigated the use of OFDs in children with the same hearing loss configuration. This study seeks to use SLA measurements to assess efficacy of bilateral OFDs in children with BHFL. To measure SLA in children with BHFL to determine the extent to which hearing loss, age, duration of CE use, and OFDs affect localization accuracy. A within-participant experimental design using repeated measures was used to determine the effect of OFDs on localization accuracy in children with BHFL. A between-participant experimental design was used to compare localization accuracy between children with BHFL and age-matched controls with normal hearing (NH). Eighteen children with BHFL who used CEs and 18 age-matched NH controls. Children in both groups were divided into two age groups: older children (10-16 yr) and younger children (6-9 yr). All testing was done in a sound-treated booth with a horizontal array of 15 loudspeakers (radius of 1 m). The stimulus was a spondee word, "baseball"; the level averaged 60 dB SPL and randomly roved (±8 dB). Each child was asked to identify the location of a sound source. Localization error was calculated across the loudspeaker array for each listening condition. A significant interaction was found between immediate benefit from OFDs and duration of CE usage. Longer CE usage was associated with degraded localization accuracy using OFDs. Regardless of chronological age, children who had used CEs for <6 yr showed immediate localization benefit using OFDs, whereas children who had used CEs for >6 yr showed immediate localization interference using OFDs. Development, however, may play a role in SLA in children with BHFL. When unaided, older children had significantly better localization acuity than younger children with BHFL. When compared with age-matched controls, children with BHFL of all ages showed greater localization error. Nearly all (94% [17/18]) children with BHFL spontaneously reported immediate own-voice improvement when using OFDs. OFDs can provide sound localization benefit to younger children with BHFL. However, immediate benefit from OFDs is reduced by prolonged use of CEs. Although developmental factors may play a role in improving localization abilities over time, children with BHFL will rarely equal their peers without early use of minimally disruptive hearing aid technology. Also, the occlusion effect likely impacts children far more than currently thought. American Academy of Audiology.

  5. Sound exposure during outdoor music festivals.

    PubMed

    Tronstad, Tron V; Gelderblom, Femke B

    2016-01-01

Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival's duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals, where only one was regulated by the Norwegian guideline for concerts and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals, and tested them against the Norwegian guideline and the World Health Organization's recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization's recommendations. The results also show that front-of-house measurements reliably predict participant exposure.

  6. Cross-modal orienting of visual attention.

    PubMed

    Hillyard, Steven A; Störmer, Viola S; Feng, Wenfeng; Martinez, Antigona; McDonald, John J

    2016-03-01

    This article reviews a series of experiments that combined behavioral and electrophysiological recording techniques to explore the hypothesis that salient sounds attract attention automatically and facilitate the processing of visual stimuli at the sound's location. This cross-modal capture of visual attention was found to occur even when the attracting sound was irrelevant to the ongoing task and was non-predictive of subsequent events. A slow positive component in the event-related potential (ERP) that was localized to the visual cortex was found to be closely coupled with the orienting of visual attention to a sound's location. This neural sign of visual cortex activation was predictive of enhanced perceptual processing and was paralleled by a desynchronization (blocking) of the ongoing occipital alpha rhythm. Further research is needed to determine the nature of the relationship between the slow positive ERP evoked by the sound and the alpha desynchronization and to understand how these electrophysiological processes contribute to improved visual-perceptual processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Cross-Polarization Optical Coherence Tomography with Active Maintenance of the Circular Polarization of a Sounding Wave in a Common Path System

    NASA Astrophysics Data System (ADS)

    Gelikonov, V. M.; Romashov, V. N.; Shabanov, D. V.; Ksenofontov, S. Yu.; Terpelov, D. A.; Shilyagin, P. A.; Gelikonov, G. V.; Vitkin, I. A.

    2018-05-01

    We consider a cross-polarization optical coherence tomography system with a common path for the sounding and reference waves and active maintenance of the circular polarization of the sounding wave. The system is based on the formation of birefringent characteristics of the total optical path that are equivalent to a quarter-wave plate with a 45° orientation of its optical axes with respect to the linearly polarized reference wave. Conditions under which any light-polarization state can be produced using a two-element phase controller are derived. The dependence of the local cross-scattering coefficient of light in a model medium and in biological tissue on the sounding-wave polarization state is demonstrated. Active maintenance of the circular polarization of the sounding wave in this common-path system (including a flexible probe) is shown to be necessary to realize uniformly optimal conditions for cross-polarization studies of biological tissue.
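
    The quarter-wave-plate condition above can be illustrated with elementary Jones calculus. The following is a toy sketch of ours (matrix written up to a global phase; variable names are invented, and this is not the authors' controller): a quarter-wave plate with its fast axis at 45° to a linearly polarized input yields a circularly polarized output.

```python
import cmath
import math

def apply_jones(matrix, vec):
    """Apply a 2x2 Jones matrix to a Jones vector (Ex, Ey)."""
    (a, b), (c, d) = matrix
    ex, ey = vec
    return (a * ex + b * ey, c * ex + d * ey)

# Quarter-wave plate with its fast axis at 45 degrees (global phase dropped).
s = 1 / math.sqrt(2)
QWP_45 = ((s, -1j * s),
          (-1j * s, s))

linear = (1.0, 0.0)                      # linearly polarized reference wave
circular = apply_jones(QWP_45, linear)   # sounding wave after the plate

# Equal component magnitudes and a 90-degree phase offset: circular polarization.
mag_x, mag_y = abs(circular[0]), abs(circular[1])
phase_diff = cmath.phase(circular[1]) - cmath.phase(circular[0])
print(mag_x, mag_y, math.degrees(phase_diff))
```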

  8. Sound Exposure During Outdoor Music Festivals

    PubMed Central

    Tronstad, Tron V.; Gelderblom, Femke B.

    2016-01-01

    Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert's or festival's duration. In Norway, where such a guideline exists, it is up to the local authorities to impose the regulations. The need to prevent hearing loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals, of which only one was regulated by the Norwegian guideline for concerts and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals and tested them against the Norwegian guideline and the World Health Organization's recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization's recommendations. The results also show that front-of-house measurements reliably predict participant exposure. PMID:27569410
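
    The dose comparisons above rest on energy-equivalent averaging of measured levels. As a minimal sketch (the sample values and the equal-interval assumption are invented; real dose meters also apply A-weighting and time weighting), the equivalent continuous level L_Aeq over N equal intervals is 10·log10 of the mean of 10^(L_i/10):

```python
import math

def equivalent_level(levels_db):
    """Energy-average a list of sound pressure levels (dB) sampled
    over equal time intervals, giving the equivalent continuous level."""
    mean_energy = sum(10 ** (level / 10) for level in levels_db) / len(levels_db)
    return 10 * math.log10(mean_energy)

# One-minute samples from a hypothetical concert (values are made up).
samples = [92, 95, 99, 101, 97, 94]
laeq = equivalent_level(samples)
print(round(laeq, 1))  # -> 97.4
```

    Note that the energy average is dominated by the loudest intervals, which is why short loud passages matter so much for the total dose.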

  9. The privileged status of locality in consonant harmony

    PubMed Central

    Finley, Sara

    2011-01-01

    While the vast majority of linguistic processes apply locally, consonant harmony appears to be an exception. In this phonological process, consonants share the same value of a phonological feature, such as secondary place of articulation. In sibilant harmony, [s] and [ʃ] (‘sh’) alternate such that if a word contains the sound [ʃ], all [s] sounds become [ʃ]. This can apply locally as a first-order pattern or non-locally as a second-order pattern. In the first-order case, no consonants intervene between the two sibilants (e.g., [pisasu], [piʃaʃu]). In the second-order case, a consonant may intervene (e.g., [sipasu], [ʃipaʃu]). The fact that there are languages that allow second-order non-local agreement of consonant features has led some to question whether locality constraints apply to consonant harmony. This paper presents the results of two artificial grammar learning experiments that demonstrate the privileged role of locality constraints, even in patterns that allow second-order non-local interactions. In Experiment 1, we show that learners do not extend first-order non-local relationships in consonant harmony to second-order non-local relationships. In Experiment 2, we show that learners will extend a consonant harmony pattern with second-order long-distance relationships to a consonant harmony pattern with first-order long-distance relationships. Because second-order non-local application implies first-order non-local application, but first-order non-local application does not imply second-order non-local application, we establish that local constraints are privileged even in consonant harmony. PMID:21686094
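
    The sibilant-harmony pattern described above is easy to state procedurally. A toy sketch (our own ASCII encoding, using 'S' for [ʃ] and 's' for [s]): if a word contains [ʃ], every [s] assimilates, no matter how much material intervenes.

```python
def sibilant_harmony(word):
    """Second-order (non-local) sibilant harmony: if the word contains
    'S' (standing in for [ʃ]), every 's' (standing in for [s]) becomes
    'S', regardless of intervening consonants or vowels."""
    return word.replace("s", "S") if "S" in word else word

print(sibilant_harmony("sipaSu"))  # -> SipaSu  (cf. [sipasu] ~ [ʃipaʃu])
print(sibilant_harmony("pisasu"))  # -> pisasu  (no [ʃ] trigger present)
```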

  10. DCL System Using Deep Learning Approaches for Land-Based or Ship-Based Real-Time Recognition and Localization of Marine Mammals

    DTIC Science & Technology

    2013-09-30

    A method has been successfully implemented to automatically detect and recognize pulse trains from minke whales (songs) and sperm whales (Physeter...). Reported activities include: 1. Workshops, conferences and data challenges; 2. Enhancements of the ASR algorithm for frequency-modulated sounds: Right Whale Study; 3. Enhancements of the ASR algorithm for pulse trains: Minke Whale Study; 4. Mining Big Data Sound Archives using High Performance Computing software and hardware.

  11. Target-depth estimation in active sonar: Cramer-Rao bounds for a bilinear sound-speed profile.

    PubMed

    Mours, Alexis; Ioana, Cornel; Mars, Jérôme I; Josso, Nicolas F; Doisy, Yves

    2016-09-01

    This paper develops a localization method to estimate the depth of a target at long ranges in the context of active sonar. The target depth is tactical information for both strategy and classification purposes. The Cramer-Rao lower bounds for the target position, in range and depth, are derived for a bilinear sound-speed profile. The influence of sonar parameters on the standard deviations of the target range and depth is studied. A localization method based on ray back-propagation with a probabilistic approach is then investigated. Monte-Carlo simulations applied to a summer Mediterranean sound-speed profile are performed to evaluate the efficiency of the estimator. This method is finally validated on data from an experimental tank.
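
    The Cramer-Rao machinery used in the paper can be demonstrated on a much smaller problem. As a hedged sketch (a single travel-time observation in an isovelocity channel, not the paper's bilinear-profile derivation), the bound on range follows from the Fisher information of t = r/c with Gaussian timing noise:

```python
import math

# Toy Cramer-Rao bound: estimating source range r from one travel-time
# measurement t = r / c corrupted by Gaussian noise of std sigma_t.
# Fisher information I(r) = (dt/dr)^2 / sigma_t^2 = 1 / (c^2 * sigma_t^2),
# so any unbiased estimator satisfies std(r_hat) >= c * sigma_t.
c = 1500.0        # nominal sound speed in water, m/s
sigma_t = 1e-3    # timing accuracy, s (illustrative)

fisher = (1.0 / c) ** 2 / sigma_t ** 2
range_std_bound = 1.0 / math.sqrt(fisher)
print(round(range_std_bound, 6))  # -> 1.5 (metres)
```

    The paper's two-parameter (range, depth) bound follows the same recipe with a 2x2 Fisher information matrix built from ray travel times through the bilinear profile.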

  12. Observations of LHR noise with banded structure by the sounding rocket S29 barium-GEOS

    NASA Technical Reports Server (NTRS)

    Koskinen, H. E. J.; Holmgren, G.; Kintner, P. M.

    1983-01-01

    The measurement of electrostatic noise near the lower hybrid frequency made by the sounding rocket S29 barium-GEOS is reported. The noise is related to the spin of the rocket and reaches well below the local lower hybrid resonance frequency. Above an altitude of 300 km the noise shows banded structure roughly organized by the hydrogen cyclotron frequency. Simultaneously with the banded structure, a signal near the hydrogen cyclotron frequency is detected. This signal is also spin modulated. The character of the noise strongly suggests that it is locally generated by the rocket payload disturbing the plasma. If this interpretation is correct, plasma wave experiments on other spacecraft are expected to observe similar phenomena.

  13. Intermittent large amplitude internal waves observed in Port Susan, Puget Sound

    NASA Astrophysics Data System (ADS)

    Harris, J. C.; Decker, L.

    2017-07-01

    A previously unreported internal tidal bore, which evolves into solitary internal wave packets, was observed in Port Susan, Puget Sound, and the timing, speed, and amplitude of the waves were measured by CTD and visual observation. Acoustic Doppler current profiler (ADCP) measurements were attempted but were unsuccessful. The waves appear to be generated with the ebb flow along the tidal flats of the Stillaguamish River, and the speed and width of the resulting waves can be predicted from second-order KdV theory. Their eventual dissipation may contribute significantly to surface mixing locally, particularly in comparison with the local dissipation due to the tides. Visually, the waves appear in fair weather as a strong foam front, which is less visible the farther they propagate.

  14. Using the structure of natural scenes and sounds to predict neural response properties in the brain

    NASA Astrophysics Data System (ADS)

    Deweese, Michael

    2014-03-01

    The natural scenes and sounds we encounter in the world are highly structured. The fact that animals and humans are so efficient at processing these sensory signals compared with the latest algorithms running on the fastest modern computers suggests that our brains can exploit this structure. We have developed a sparse mathematical representation of speech that minimizes the number of active model neurons needed to represent typical speech sounds. The model learns several well-known acoustic features of speech such as harmonic stacks, formants, onsets and terminations, but we also find more exotic structures in the spectrogram representation of sound such as localized checkerboard patterns and frequency-modulated excitatory subregions flanked by suppressive sidebands. Moreover, several of these novel features resemble neuronal receptive fields reported in the Inferior Colliculus (IC), as well as auditory thalamus (MGBv) and primary auditory cortex (A1), and our model neurons exhibit the same tradeoff in spectrotemporal resolution as has been observed in IC. To our knowledge, this is the first demonstration that receptive fields of neurons in the ascending mammalian auditory pathway beyond the auditory nerve can be predicted based on coding principles and the statistical properties of recorded sounds. We have also developed a biologically inspired neural network model of primary visual cortex (V1) that can learn a sparse representation of natural scenes using spiking neurons and strictly local plasticity rules. The representation learned by our model is in good agreement with measured receptive fields in V1, demonstrating that sparse sensory coding can be achieved in a realistic biological setting.
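
    The "minimal number of active model neurons" objective described above is the familiar L1-regularized sparse coding problem. A deliberately tiny sketch (an orthonormal dictionary is assumed, so the optimum reduces to elementwise soft thresholding; the authors' model instead learns an overcomplete dictionary from speech spectrograms):

```python
def soft_threshold(v, t):
    """Proximal operator of the L1 norm: shrink values toward zero."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

# With an orthonormal dictionary, the code minimizing ||x - a||^2 + lam*|a|_1
# is elementwise soft thresholding of the analysis coefficients, leaving only
# a few 'active' coefficients -- the sparsity principle the model relies on.
x = [0.05, -1.2, 0.3, 2.0, -0.1]   # toy coefficients (invented)
lam = 0.5
code = [soft_threshold(v, lam / 2) for v in x]
active = sum(1 for a in code if a != 0.0)
print([round(a, 2) for a in code], active)  # -> [0.0, -0.95, 0.05, 1.75, 0.0] 3
```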

  15. Restoration of spatial hearing in adult cochlear implant users with single-sided deafness.

    PubMed

    Litovsky, Ruth Y; Moua, Keng; Godar, Shelly; Kan, Alan; Misurelli, Sara M; Lee, Daniel J

    2018-04-14

    In recent years, cochlear implants (CIs) have been provided in growing numbers not only to people with bilateral deafness but also to people with unilateral hearing loss, at times in order to alleviate tinnitus. This study presents audiological data from 15 adult participants (ages 48 ± 12 years) with single-sided deafness. Results are presented from 9/15 adults, who received a CI (SSD-CI) in the deaf ear and were tested in Acoustic or Acoustic + CI hearing modes, and from 6/15 adults who are planning to receive a CI and were tested in the unilateral condition only. Testing included (1) audiometric measures of threshold, (2) speech understanding for CNC words and AzBio sentences, (3) the tinnitus handicap inventory, (4) sound localization with stationary sound sources, and (5) perceived auditory motion. Results showed that when listening to sentences in quiet, performance was excellent in the Acoustic and Acoustic + CI conditions. In noise, performance was similar between the Acoustic and Acoustic + CI conditions in 4/6 participants tested, and slightly worse in the Acoustic + CI condition in 2/6 participants. In some cases, the CI reduced tinnitus handicap scores. When testing sound localization ability, the Acoustic + CI condition yielded an improved sound localization RMS error of 29.2° (SD: ±6.7°) compared to 56.6° (SD: ±16.5°) in the Acoustic-only condition. Preliminary results suggest that the perception of motion direction, whereby subjects are required to process and compare directional cues across multiple locations, is impaired when compared with that of normal-hearing subjects. Copyright © 2018 Elsevier B.V. All rights reserved.
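
    The RMS localization errors quoted above are computed from per-trial angular errors. A short sketch with invented trial data (the real study used loudspeaker arrays and many more trials per listener):

```python
import math

def rms_error(errors_deg):
    """Root-mean-square localization error in degrees."""
    return math.sqrt(sum(e ** 2 for e in errors_deg) / len(errors_deg))

# Hypothetical per-trial angular errors (degrees) for one listener,
# without and with the cochlear implant switched on.
acoustic_only = [60, -45, 70, -50, 55]
acoustic_plus_ci = [30, -25, 35, -28, 27]
print(round(rms_error(acoustic_only), 1),
      round(rms_error(acoustic_plus_ci), 1))  # -> 56.7 29.2
```

    Squaring before averaging means large misses (e.g. front-back confusions) dominate the score, which is why RMS error is the conventional localization metric.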

  16. Sound Is Sound: Film Sound Techniques and Infrasound Data Array Processing

    NASA Astrophysics Data System (ADS)

    Perttu, A. B.; Williams, R.; Taisne, B.; Tailpied, D.

    2017-12-01

    A multidisciplinary collaboration between earth scientists and a sound designer/composer was established to explore the possibilities of audification analysis of infrasound array data. Through the process of audification of the infrasound we began to experiment with techniques and processes borrowed from cinema to manipulate the noise content of the signal. The results posed the question: "Would the accuracy of infrasound data array processing be enhanced by employing these techniques?" A new area of research was thus born from this collaboration, highlighting the value of such interactions and the unintended paths that can arise from them. Using a reference event database, infrasound data were processed using these new techniques and the results were compared with those of existing techniques to assess whether there was any improvement in detection capability for the array. With just under one thousand volcanoes, and a high probability of eruption, Southeast Asia offers a unique opportunity to develop and test techniques for regional monitoring of volcanoes with different technologies. While these volcanoes are monitored locally (e.g. seismometer, infrasound, geodetic and geochemistry networks) and remotely (e.g. satellite and infrasound), there are challenges and limitations to the current monitoring capability. Not only is there a high fraction of cloud cover in the region, making plume observation more difficult via satellite, but there have also been examples of local monitoring networks and telemetry being destroyed early in the eruptive sequence. The success of local infrasound studies in identifying explosions at volcanoes, and in calculating plume heights from these signals, has led to an interest in retrieving source parameters for the purpose of ash modeling with a regional network independent of cloud cover.

  17. Electrophysiological correlates of cocktail-party listening.

    PubMed

    Lewald, Jörg; Getzmann, Stephan

    2015-10-01

    Detecting, localizing, and selectively attending to a particular sound source of interest in complex auditory scenes composed of multiple competing sources is a remarkable capacity of the human auditory system. The neural basis of this so-called "cocktail-party effect" has remained largely unknown. Here, we studied the cortical network engaged in solving the "cocktail-party" problem, using event-related potentials (ERPs) in combination with two tasks demanding horizontal localization of a naturalistic target sound presented either in silence or in the presence of multiple competing sound sources. Presentation of multiple sound sources, as compared to single sources, induced an increased P1 amplitude, a reduction in N1, and a strong N2 component, resulting in a pronounced negativity in the ERP difference waveform (N2d) around 260 ms after stimulus onset. About 100 ms later, the anterior contralateral N2 subcomponent (N2ac) occurred in the multiple-sources condition, as computed from the amplitude difference for targets in the left minus right hemispaces. Cortical source analyses of the ERP modulation, resulting from the contrast of multiple vs. single sources, generally revealed an initial enhancement of electrical activity in right temporo-parietal areas, including auditory cortex, by multiple sources (at P1) that is followed by a reduction, with the primary sources shifting from right inferior parietal lobule (at N1) to left dorso-frontal cortex (at N2d). Thus, cocktail-party listening, as compared to single-source localization, appears to be based on a complex chronology of successive electrical activities within a specific cortical network involved in spatial hearing in complex situations. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Different spatio-temporal electroencephalography features drive the successful decoding of binaural and monaural cues for sound localization.

    PubMed

    Bednar, Adam; Boland, Francis M; Lalor, Edmund C

    2017-03-01

    The human ability to localize sound is essential for monitoring our environment and helps us to analyse complex auditory scenes. Although the acoustic cues mediating sound localization have been established, it remains unknown how these cues are represented in human cortex. In particular, it is still a point of contention whether binaural and monaural cues are processed by the same or distinct cortical networks. In this study, participants listened to a sequence of auditory stimuli from different spatial locations while we recorded their neural activity using electroencephalography (EEG). The stimuli were presented over a loudspeaker array, which allowed us to deliver realistic, free-field stimuli in both the horizontal and vertical planes. Using a multivariate classification approach, we showed that it is possible to decode sound source location from scalp-recorded EEG. Robust and consistent decoding was shown for stimuli that provide binaural cues (i.e. Left vs. Right stimuli). Decoding location when only monaural cues were available (i.e. Front vs. Rear and elevational stimuli) was successful for a subset of subjects and showed less consistency. Notably, the spatio-temporal pattern of EEG features that facilitated decoding differed based on the availability of binaural and monaural cues. In particular, we identified neural processing of binaural cues at around 120 ms post-stimulus and found that monaural cues are processed later between 150 and 200 ms. Furthermore, different spatial activation patterns emerged for binaural and monaural cue processing. These spatio-temporal dissimilarities suggest the involvement of separate cortical mechanisms in monaural and binaural acoustic cue processing. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
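
    The multivariate classification approach can be sketched in miniature. Everything below is synthetic (Gaussian "feature" clusters and a nearest-centroid rule stand in for real multichannel EEG and the authors' classifier), but it shows the decode-location-from-features pattern:

```python
import math
import random

random.seed(0)

def centroid(rows):
    """Mean feature vector of a set of trials."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def nearest(x, centroids):
    """Index of the closest class centroid (Euclidean distance)."""
    dists = [math.dist(x, c) for c in centroids]
    return dists.index(min(dists))

# Synthetic 'EEG features' for Left vs. Right stimuli: two Gaussian
# clusters in a 4-dimensional feature space (purely illustrative).
def sample(mean):
    return [random.gauss(m, 0.5) for m in mean]

left_mean, right_mean = [1, 0, 1, 0], [0, 1, 0, 1]
centroids = [centroid([sample(left_mean) for _ in range(50)]),
             centroid([sample(right_mean) for _ in range(50)])]

test_set = [(sample(left_mean), 0) for _ in range(25)] + \
           [(sample(right_mean), 1) for _ in range(25)]
accuracy = sum(nearest(x, centroids) == y for x, y in test_set) / len(test_set)
print(accuracy)
```

    Well-separated clusters decode almost perfectly; the study's interesting cases are the monaural-cue conditions, where the clusters overlap and decoding succeeds only for some subjects.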

  19. Egocentric and allocentric representations in auditory cortex

    PubMed Central

    Brimijoin, W. Owen; Bizley, Jennifer K.

    2017-01-01

    A key function of the brain is to provide a stable representation of an object’s location in the world. In hearing, sound azimuth and elevation are encoded by neurons throughout the auditory system, and auditory cortex is necessary for sound localization. However, the coordinate frame in which neurons represent sound space remains undefined: classical spatial receptive fields in head-fixed subjects can be explained either by sensitivity to sound source location relative to the head (egocentric) or relative to the world (allocentric encoding). This coordinate frame ambiguity can be resolved by studying freely moving subjects; here we recorded spatial receptive fields in the auditory cortex of freely moving ferrets. We found that most spatially tuned neurons represented sound source location relative to the head across changes in head position and direction. In addition, we also recorded a small number of neurons in which sound location was represented in a world-centered coordinate frame. We used measurements of spatial tuning across changes in head position and direction to explore the influence of sound source distance and speed of head movement on auditory cortical activity and spatial tuning. Modulation depth of spatial tuning increased with distance for egocentric but not allocentric units, whereas, for both populations, modulation was stronger at faster movement speeds. Our findings suggest that early auditory cortex primarily represents sound source location relative to ourselves but that a minority of cells can represent sound location in the world independent of our own position. PMID:28617796
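
    The coordinate-frame distinction at the heart of the study is a one-line transform. In this sketch (our own sign conventions), an egocentric neuron's preferred bearing follows the head-centred azimuth as the head turns, while an allocentric neuron's follows the world-centred azimuth:

```python
def egocentric_azimuth(world_az_deg, head_dir_deg):
    """Convert a world-centred source bearing to a head-centred one,
    wrapped into (-180, 180]. An egocentric unit's spatial tuning tracks
    this value across head turns; an allocentric unit's tuning tracks
    world_az_deg regardless of head direction."""
    return (world_az_deg - head_dir_deg + 180) % 360 - 180

print(egocentric_azimuth(90, 30))    # -> 60
print(egocentric_azimuth(-170, 30))  # -> 160 (wraps around the rear)
```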

  20. Localizing semantic interference from distractor sounds in picture naming: A dual-task study.

    PubMed

    Mädebach, Andreas; Kieseler, Marie-Luise; Jescheniak, Jörg D

    2017-10-13

    In this study we explored the locus of semantic interference in a novel picture-sound interference task in which participants name pictures while ignoring environmental distractor sounds. In a previous study using this task (Mädebach, Wöhner, Kieseler, & Jescheniak, in Journal of Experimental Psychology: Human Perception and Performance, 43, 1629-1646, 2017), we showed that semantically related distractor sounds (e.g., BARKING dog ) interfere with a picture-naming response (e.g., "horse") more strongly than unrelated distractor sounds do (e.g., DRUMMING drum ). In the experiment reported here, we employed the psychological refractory period (PRP) approach to explore the locus of this effect. We combined a geometric form classification task (square vs. circle; Task 1) with the picture-sound interference task (Task 2). The stimulus onset asynchrony (SOA) between the tasks was systematically varied (0 vs. 500 ms). There were three central findings. First, the semantic interference effect from distractor sounds was replicated. Second, picture naming (in Task 2) was slower with the short than with the long task SOA. Third, both effects were additive-that is, the semantic interference effects were of similar magnitude at both task SOAs. This suggests that the interference arises during response selection or later stages, not during early perceptual processing. This finding corroborates the theory that semantic interference from distractor sounds reflects a competitive selection mechanism in word production.

  1. Interactive Sound Propagation using Precomputation and Statistical Approximations

    NASA Astrophysics Data System (ADS)

    Antani, Lakulish

    Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise due to repeated interactions between sound waves and objects in the environment. In interactive applications, these effects must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects. The matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques---Ambient Reverberance, and Aural Proxies---to provide approximate sound propagation effects in real time, based on only the portion of the environment immediately visible to the listener. These approaches lie at opposite ends of a spectrum of techniques for modeling sound propagation effects in interactive applications. The first approach emphasizes accuracy by modeling acoustic interactions between all parts of the scene; the second approach emphasizes efficiency by only taking the local environment of the listener into account. These methods have been used to efficiently generate acoustic walkthroughs of architectural models. They have also been integrated into a modern game engine, and can enable realistic, interactive sound propagation on commodity desktop PCs.
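
    The precompute/runtime split behind Precomputed Acoustic Radiance Transfer can be caricatured with a tiny energy-transport matrix. The numbers and the truncated Neumann series below are our own illustration, not the thesis's operator: repeated inter-object bounces are baked into one matrix T ≈ I + R + R² + ... offline, leaving only a cheap matrix-vector product at run time.

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def matvec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

# 'R' couples acoustic energy between two surface patches per bounce
# (values are made up; a real solver derives them from scene geometry).
R = [[0.2, 0.3],
     [0.1, 0.4]]

# Precompute (once): accumulate multiple bounces T = I + R + R^2 + ...
I = [[1.0, 0.0], [0.0, 1.0]]
T, power = I, I
for _ in range(20):
    power = matmul(power, R)
    T = [[T[i][j] + power[i][j] for j in range(2)] for i in range(2)]

# Run time (every frame): one matrix-vector product as the source moves.
direct = [1.0, 0.0]          # energy arriving directly at each patch
total = matvec(T, direct)
print([round(v, 3) for v in total])  # -> [1.333, 0.222]
```

    The series converges because each bounce absorbs energy (spectral radius of R below 1), which is what makes the one-off precomputation finite.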

  2. Source levels of social sounds in migrating humpback whales (Megaptera novaeangliae).

    PubMed

    Dunlop, Rebecca A; Cato, Douglas H; Noad, Michael J; Stokes, Dale M

    2013-07-01

    The source level of an animal sound is important in communication, since it affects the distance over which the sound is audible. Several measurements of source levels of whale sounds have been reported, but the accuracy of many is limited because the distance to the source and the acoustic transmission loss were estimated rather than measured. This paper presents measurements of source levels of social sounds (surface-generated and vocal sounds) of humpback whales from a sample of 998 sounds recorded from 49 migrating humpback whale groups. Sources were localized using a wide baseline five hydrophone array and transmission loss was measured for the site. Social vocalization source levels were found to range from 123 to 183 dB re 1 μPa @ 1 m with a median of 158 dB re 1 μPa @ 1 m. Source levels of surface-generated social sounds ("breaches" and "slaps") were narrower in range (133 to 171 dB re 1 μPa @ 1 m) but slightly higher in level (median of 162 dB re 1 μPa @ 1 m) compared to vocalizations. The data suggest that group composition has an effect on group vocalization source levels in that singletons and mother-calf-singing escort groups tend to vocalize at higher levels compared to other group compositions.
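
    Back-calculating a source level from a received level is simple bookkeeping once transmission loss is known. A sketch under a spherical-spreading assumption (the study measured transmission loss at the site rather than assuming a model, which is precisely why its estimates are more accurate; the numbers below are contrived):

```python
import math

def source_level(received_db, range_m, alpha_db_per_km=0.0):
    """Back out source level SL = RL + TL, modelling transmission loss
    here as spherical spreading (20 log10 r) plus linear absorption.
    This is only a common first approximation to measured TL."""
    tl = 20 * math.log10(range_m) + alpha_db_per_km * range_m / 1000.0
    return received_db + tl

# A social sound received at 98 dB re 1 uPa from a whale localized 1 km away:
print(round(source_level(98.0, 1000.0), 1))  # -> 158.0 (dB re 1 uPa @ 1 m)
```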

  3. Bottom-up approach for microstructure optimization of sound absorbing materials.

    PubMed

    Perrot, Camille; Chevillotte, Fabien; Panneton, Raymond

    2008-08-01

    Results from a numerical study examining micro-/macrorelations linking local geometry parameters to sound absorption properties are presented. For a hexagonal structure of solid fibers, the porosity phi, the thermal characteristic length Lambda('), the static viscous permeability k(0), the tortuosity alpha(infinity), the viscous characteristic length Lambda, and the sound absorption coefficient are computed. Numerical solutions of the steady Stokes and electrical equations are employed to provide k(0), alpha(infinity), and Lambda. Hybrid estimates based on direct numerical evaluation of phi, Lambda('), k(0), alpha(infinity), Lambda, and the analytical model derived by Johnson, Allard, and Champoux are used to relate varying (i) throat size, (ii) pore size, and (iii) fibers' cross-section shapes to the sound absorption spectrum. The result of this paper tends to demonstrate the important effect of throat size in the sound absorption level, cell size in the sound absorption frequency selectivity, and fibers' cross-section shape in the porous material weight reduction. In a hexagonal porous structure with solid fibers, the sound absorption level will tend to be maximized with a 48+/-10 microm throat size corresponding to an intermediate resistivity, a 13+/-8 microm fiber radius associated with relatively small interfiber distances, and convex triangular cross-section shape fibers allowing weight reduction.

  4. Evaluation of a low-cost 3D sound system for immersive virtual reality training systems.

    PubMed

    Doerr, Kai-Uwe; Rademacher, Holger; Huesgen, Silke; Kubbat, Wolfgang

    2007-01-01

    Since Head Mounted Displays (HMD), datagloves, tracking systems, and powerful computer graphics resources are nowadays in an affordable price range, the usage of PC-based "Virtual Training Systems" becomes very attractive. However, due to the limited field of view of HMD devices, additional modalities have to be provided to benefit from 3D environments. A 3D sound simulation can improve the capabilities of VR systems dramatically. Unfortunately, realistic 3D sound simulations are expensive and demand a tremendous amount of computational power to calculate reverberation, occlusion, and obstruction effects. To use 3D sound in a PC-based training system as a way to direct and guide trainees to observe specific events in 3D space, a cheaper alternative has to be provided, so that a broader range of applications can take advantage of this modality. To address this issue, we focus in this paper on the evaluation of a low-cost 3D sound simulation that is capable of providing traceable 3D sound events. We describe our experimental system setup using conventional stereo headsets in combination with a tracked HMD device and present our results with regard to precision, speed, and used signal types for localizing simulated sound events in a virtual training environment.
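
    A low-cost localizable sound event over stereo headphones ultimately comes down to synthesizing interaural time and level differences. A minimal sketch (Woodworth's spherical-head ITD approximation plus a constant-power pan; the head radius and conventions are our assumptions, not the evaluated system's):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air
HEAD_RADIUS = 0.0875     # m, a commonly used average

def itd_seconds(azimuth_deg):
    """Woodworth's spherical-head approximation of the interaural time
    difference for a distant source, valid for azimuths in [-90, 90]."""
    az = math.radians(azimuth_deg)
    return HEAD_RADIUS / SPEED_OF_SOUND * (az + math.sin(az))

def stereo_gains(azimuth_deg):
    """Constant-power pan as a crude interaural level-difference cue;
    returns (left_gain, right_gain) for azimuths in [-90, 90]."""
    theta = math.radians((azimuth_deg + 90) / 2)  # map [-90, 90] -> [0, 90]
    return math.cos(theta), math.sin(theta)

print(round(itd_seconds(90) * 1e6))  # -> 656 (microseconds, source at right)
print(stereo_gains(0))               # equal gains for a frontal source
```

    Delaying and scaling the two headphone channels by these values gives a traceable (if front-back ambiguous) direction cue at a tiny fraction of the cost of full reverberation and occlusion modelling.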

  5. Music Basic Skills.

    ERIC Educational Resources Information Center

    Kentucky State Dept. of Education, Frankfort.

    This document is a statement of the basic music skills that Kentucky students should develop. This skills list does not replace any locally developed curriculum. It is intended as a guide for local school districts in Kentucky in their development of a detailed K-12 curriculum. The skills presented are considered basic to a sound education program…

  6. 77 FR 55138 - Special Local Regulation: Hydroplane Races in Lake Sammamish, WA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-07

    ... Local Regulation: Hydroplane Races in Lake Sammamish, WA AGENCY: Coast Guard, DHS. ACTION: Notice of... Races within the Captain of the Port Puget Sound Area of Responsibility for the 2012 Fall Championship... September 30, 2012. This action is necessary to restrict vessel movement in the vicinity of the race courses...

  7. Usefulness of chest radiographs in first asthma attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gershel, J.C.; Goldman, H.S.; Stein, R.E.K.

    1983-08-11

    To assess the value of routine chest radiography during acute first attacks of asthma, we studied 371 consecutive children over one year of age who presented with an initial episode of wheezing. Three hundred fifty children (94.3%) had radiographic findings that were compatible with uncomplicated asthma and were considered negative. Twenty-one (5.7%) had positive findings: atelectasis and pneumonia were noted in seven, segmental atelectasis in six, pneumonia in five, multiple areas of subsegmental atelectasis in two, and pneumomediastinum in one. The patients with positive films were more likely to have a respiratory rate above 60 or a pulse rate abovemore » 160 (P < 0.001), localized rales or localized decreased breath sounds before treatment (P < 0.01), and localized rales (P < 0.005) and localized wheezing (P < 0.02) after treatment; also, these patients were admitted to the hospital more often (P < 0.001). Ninety-five percent (20 of 21) of the children with positive films could be identified before treatment on the basis of a combination of tachypnea, tachycardia, fever, and localized rales or localized decreased breath sounds. Most first-time wheezers will not have positive radiographs; careful clinical evaluation should reveal which patients will have abnormal radiographs and will therefore benefit from the procedure. 20 references, 3 tables.« less

  8. Applications of Hilbert Spectral Analysis for Speech and Sound Signals

    NASA Technical Reports Server (NTRS)

    Huang, Norden E.

    2003-01-01

    A new method for analyzing nonlinear and nonstationary data has been developed, and its natural applications are to speech and sound signals. The key part of the method is the Empirical Mode Decomposition method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMF). An IMF is defined as any function having the same number of zero-crossings and extrema, and also having symmetric envelopes defined by the local maxima and minima, respectively. An IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time, which give sharp identifications of embedded structures. This method can be used to process all acoustic signals. Specifically, it can process speech signals for speech synthesis, speaker identification and verification, speech recognition, and sound signal enhancement and filtering. Additionally, the acoustic signals from machinery are essentially the way machines talk to us: whether transmitted through the air or as vibration on the machine itself, they can tell us the operating condition of the machine. Thus, we can use acoustic signals to diagnose the problems of machines.
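
    The IMF criterion quoted above (numbers of zero crossings and extrema equal, or differing by at most one) is easy to check numerically. A sketch on a pure sinusoid, which trivially satisfies it (the full sifting procedure that extracts IMFs from arbitrary data is much more involved):

```python
import math

def count_extrema_and_zero_crossings(x):
    """Count strict local extrema and sign changes in a sampled signal."""
    extrema = sum(1 for i in range(1, len(x) - 1)
                  if (x[i] - x[i - 1]) * (x[i + 1] - x[i]) < 0)
    zero_crossings = sum(1 for i in range(1, len(x))
                         if x[i - 1] * x[i] < 0)
    return extrema, zero_crossings

# Five cycles of a sinusoid over 1000 samples: 10 extrema, 9 interior
# sign changes, so the counts differ by at most one -- an IMF.
n = 1000
signal = [math.sin(2 * math.pi * 5 * i / n) for i in range(n)]
extrema, zeros = count_extrema_and_zero_crossings(signal)
print(extrema, zeros, abs(extrema - zeros) <= 1)  # -> 10 9 True
```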

  9. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds

    PubMed Central

    Dietz, Mathias; Marquardt, Torsten; Salminen, Nelli H.; McAlpine, David

    2013-01-01

    The ability to locate the direction of a target sound in a background of competing sources is critical to the survival of many species and important for human communication. Nevertheless, the brain mechanisms that provide such accurate localization abilities remain poorly understood. In particular, it remains unclear how the auditory brain is able to extract reliable spatial information directly from the source when competing sounds and reflections dominate all but the earliest moments of the sound wave reaching each ear. We developed a stimulus mimicking the mutual relationship of sound amplitude and binaural cues characteristic of reverberant speech. This stimulus, named the amplitude-modulated binaural beat, allows for a parametric and isolated change of modulation frequency and phase relations. Employing magnetoencephalography and psychoacoustics, it is demonstrated that the auditory brain uses binaural information in the stimulus fine structure only during the rising portion of each modulation cycle, rendering spatial information recoverable in an otherwise unlocalizable sound. The data suggest that amplitude modulation provides a means of “glimpsing” low-frequency spatial cues in a manner that benefits listening in noisy or reverberant environments. PMID:23980161
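
    A stimulus in the spirit of the amplitude-modulated binaural beat can be generated directly: one carrier per ear, offset by the beat frequency so the interaural phase difference cycles continuously, with both ears sharing one AM envelope. All parameter values and names below are illustrative, not the paper's:

```python
import math

FS = 48000  # sample rate, Hz

def am_binaural_beat(f_carrier, f_beat, f_mod, dur):
    """Left ear gets the carrier, right ear gets carrier + beat frequency,
    so the interaural phase difference cycles once per beat period; both
    channels share a raised-cosine amplitude-modulation envelope."""
    n = int(FS * dur)
    left, right = [], []
    for i in range(n):
        t = i / FS
        env = 0.5 * (1 - math.cos(2 * math.pi * f_mod * t))
        left.append(env * math.sin(2 * math.pi * f_carrier * t))
        right.append(env * math.sin(2 * math.pi * (f_carrier + f_beat) * t))
    return left, right

# 500 Hz carrier, 2 Hz binaural beat, 8 Hz amplitude modulation, 0.5 s.
left, right = am_binaural_beat(500, 2, 8, 0.5)
print(len(left), max(abs(v) for v in left) <= 1.0)
```

    Because the interaural phase sweeps continuously, the binaural cue sampled at each envelope maximum differs from cycle to cycle, which is what lets the paradigm isolate when in the modulation cycle the brain reads the cue.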

  10. Auditory performance in an open sound field

    NASA Astrophysics Data System (ADS)

    Fluitt, Kim F.; Letowski, Tomasz; Mermagen, Timothy

    2003-04-01

    Detection and recognition of acoustic sources in an open field are important elements of situational awareness on the battlefield. They are affected by many technical and environmental conditions, such as the type of sound, the distance to the sound source, terrain configuration, meteorological conditions, the hearing capabilities of the listener, the level of background noise, and the listener's familiarity with the sound source. A limited body of knowledge about auditory perception of sources located at long distances makes it difficult to develop models predicting auditory behavior on the battlefield. The purpose of the present study was to determine listeners' abilities to detect, recognize, localize, and estimate distances to sound sources from 25 to 800 m from the listening position. Data were also collected on meteorological conditions (wind direction and strength, temperature, atmospheric pressure, humidity) and background noise level for each experimental trial. Forty subjects (men and women, ages 18 to 25) participated in the study. Nine types of sounds were presented from six loudspeakers in random order; each series was presented four times. Partial results indicate that both detection and recognition declined at distances greater than approximately 200 m and that listeners grossly underestimated distances. Specific results will be presented.

  11. Acoustic localization at large scales: a promising method for grey wolf monitoring.

    PubMed

    Papin, Morgane; Pichenot, Julian; Guérold, François; Germain, Estelle

    2018-01-01

    The grey wolf ( Canis lupus ) is naturally recolonizing its former habitats in Europe, where it was extirpated during the previous two centuries. The management of this protected species is often controversial, and its monitoring is a challenge for conservation purposes. This elusive carnivore can disperse over long distances in various natural contexts, making its monitoring difficult. Moreover, methods used for collecting signs of presence are usually time-consuming and/or costly. Currently, new acoustic recording tools are contributing to the development of passive acoustic methods as alternative approaches for detecting, monitoring, or identifying species that produce sounds in nature, such as the grey wolf. In the present study, we conducted field experiments to investigate the possibility of using a low-density microphone array to localize wolves at a large scale in two contrasting natural environments in north-eastern France. For scientific and social reasons, the experiments were based on a synthetic sound with acoustic properties similar to those of howls. This sound was broadcast at several sites, and localization estimates and their accuracy were calculated. Finally, linear mixed-effects models were used to identify the factors that influenced localization accuracy. Among 354 nocturnal broadcasts in total, 269 were recorded by at least one autonomous recorder, demonstrating the potential of this tool. In addition, 59 broadcasts were recorded by at least four microphones and used for acoustic localization. The broadcast sites were localized with an overall mean accuracy of 315 ± 617 (standard deviation) m. After setting a threshold for the temporal error value associated with the estimated coordinates, some unreliable values were excluded and the mean accuracy improved to 167 ± 308 m. The number of broadcasts recorded was higher in the lowland environment, but the localization accuracy was similar in both environments, although it varied significantly among different nights in each study area. Our results confirm the potential of acoustic methods for localizing wolves with high accuracy, in different natural environments and at large spatial scales. Passive acoustic methods are suitable for monitoring the dynamics of grey wolf recolonization and will thus contribute to enhancing conservation and management plans.
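
    Localization with four or more recorders of the kind described rests on time differences of arrival (TDOA). A minimal 2-D sketch under idealized assumptions (known speed of sound, exact arrival times, hypothetical recorder positions — not the study's actual array) is:

```python
import numpy as np
from scipy.optimize import least_squares

c = 343.0  # speed of sound (m/s)
# Hypothetical 2-D recorder positions (m) on a low-density grid.
mics = np.array([[0.0, 0.0], [800.0, 0.0], [0.0, 800.0], [800.0, 800.0]])
source = np.array([250.0, 420.0])

# Simulated arrival times at each recorder.
toa = np.linalg.norm(mics - source, axis=1) / c
# Time differences of arrival relative to the first recorder.
tdoa = toa[1:] - toa[0]

def residuals(p):
    """Mismatch between predicted and observed TDOAs for candidate p."""
    d = np.linalg.norm(mics - p, axis=1)
    return (d[1:] - d[0]) / c - tdoa

est = least_squares(residuals, x0=np.array([400.0, 400.0])).x
# Should recover approximately (250, 420).
```

    In the field, timing jitter between autonomous recorders dominates the error budget, which is why the study's accuracy improves once high temporal-error solutions are excluded.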

  12. Aversive learning shapes neuronal orientation tuning in human visual cortex.

    PubMed

    McTeague, Lisa M; Gruss, L Forest; Keil, Andreas

    2015-07-28

    The responses of sensory cortical neurons are shaped by experience. As a result, perceptual biases evolve, selectively facilitating the detection and identification of sensory events that are relevant for adaptive behaviour. Here we examine the involvement of human visual cortex in the formation of learned perceptual biases. We use classical aversive conditioning to associate one out of a series of oriented gratings with a noxious sound stimulus. After as few as two grating-sound pairings, visual cortical responses to the sound-paired grating show selective amplification. Furthermore, as learning progresses, responses to the orientations with greatest similarity to the sound-paired grating are increasingly suppressed, suggesting inhibitory interactions between orientation-selective neuronal populations. Changes in cortical connectivity between occipital and fronto-temporal regions mirror the changes in visuo-cortical response amplitudes. These findings suggest that short-term, behaviourally driven retuning of human visual cortical neurons involves distal top-down projections as well as local inhibitory interactions.

  13. Evaluation of a localization training program for hearing impaired listeners.

    PubMed

    Kuk, Francis; Keenan, Denise M; Lau, Chi; Crose, Bryan; Schumacher, Jennifer

    2014-01-01

    To evaluate the effectiveness of a home-based and a laboratory-based localization training program, this study examined their effect on the localization ability of 15 participants with a mild-to-moderately severe hearing loss. These participants had worn the study hearing aids in a previous study. The training consisted of laboratory-based training and home-based training. The participants were divided into three groups: a control group, a group that performed the laboratory training first followed by the home training, and a group that completed the home training first followed by the laboratory training. The participants were evaluated before any training (baseline) and at 2 weeks, 1 month, 2 months, and 3 months after baseline testing. All training was completed by the second month, and the participants only wore the study hearing aids between the second and third months. Localization testing and laboratory training were conducted in a sound-treated room with a 360-degree, 12-loudspeaker array. Three stimuli were each randomly presented three times from each loudspeaker (nine test items per loudspeaker) for a total of 108 items on each test or training trial. The stimuli, comprising a continuous noise, a telephone ring, and the speech passage "Search for the sound from this speaker", were high-pass filtered above 2000 Hz. The test stimuli had a duration of 300 ms, whereas the training stimuli had five durations (3 s, 2 s, 1 s, 500 ms, and 300 ms) and four back-attenuation values (-8, -4, -2, and 0 dB re: front presentation). All stimuli were presented at 30 dB SL or at the participant's most comfortable listening level. Each participant completed 6 to 8 two-hour laboratory-based training sessions within a month. The home training used a two-loudspeaker computer system with 30 different sounds in the same duration (5) by attenuation (4) combinations.
    The participants were required to use the home training program for 30 min per day, 5 days per week, for 4 weeks. Localization data were evaluated using a 30-degree error criterion. There was a significant difference in localization scores for sounds that originated from the back between baseline and 3 months for the two groups that received training, whereas the performance of the control group remained the same across the 3-month period. Generalization to other stimuli and to the unaided condition was also seen. There were no significant differences in localization performance from other directions between baseline and 3 months. These results indicate that the training program was effective in improving the localization skills of these listeners under the current test set-up. The study demonstrated that hearing aid wearers can be trained on their front/back localization skills using either a laboratory-based or a home-based training program, and that the effectiveness of the training generalized to other acoustic stimuli and to the unaided condition when stimulus levels were fixed.

  14. Humpback whale bioacoustics: From form to function

    NASA Astrophysics Data System (ADS)

    Mercado, Eduardo, III

    This thesis investigates how humpback whales produce, perceive, and use sounds from a comparative and computational perspective. Biomimetic models are developed within a systems-theoretic framework and then used to analyze the properties of humpback whale sounds. First, sound transmission is considered in terms of possible production mechanisms and the propagation characteristics of shallow water environments frequented by humpback whales. A standard source-filter model (used to describe human sound production) is shown to be well suited for characterizing sound production by humpback whales. Simulations of sound propagation based on normal mode theory reveal that optimal frequencies for long range propagation are higher than the frequencies used most often by humpbacks, and that sounds may contain spectral information indicating how far they have propagated. Next, sound reception is discussed. A model of human auditory processing is modified to emulate humpback whale auditory processing as suggested by cochlear anatomical dimensions. This auditory model is used to generate visual representations of humpback whale sounds that more clearly reveal what features are likely to be salient to listening whales. Additionally, the possibility that an unusual sensory organ (the tubercle) plays a role in acoustic processing is assessed. Spatial distributions of tubercles are described that suggest tubercles may be useful for localizing sound sources. Finally, these models are integrated with self-organizing feature maps to create a biomimetic sound classification system, and a detailed analysis of individual sounds and sound patterns in humpback whale 'songs' is performed. This analysis provides evidence that song sounds and sound patterns vary substantially in terms of detectability and propagation potential, suggesting that they do not all serve the same function. 
New quantitative techniques are also presented that allow for more objective characterization of the long-term acoustic features of songs. The quantitative framework developed in this thesis provides a basis for theoretical consideration of how humpback whales (and other cetaceans) might use sound. Evidence is presented suggesting that vocalizing humpbacks could use sounds not only to convey information to other whales, but also to collect information about other whales. In particular, it is suggested that some sounds currently believed to be primarily communicative signals might in fact serve primarily as sonar signals. This theoretical framework is shown to be generalizable to other baleen whales and to toothed whales.

  15. Conceptual Sound System Design for Clifford Odets' "GOLDEN BOY"

    NASA Astrophysics Data System (ADS)

    Yang, Yen Chun

    There are two different aspects in the process of sound design, "Arts" and "Science". In my opinion, sound design should engage both aspects strongly and in interaction with each other. I started the process of designing the sound for GOLDEN BOY by building the city soundscape of New York City in 1937. The scenic design for this piece is in the round, putting the audience all around the stage; this gave me a great opportunity to use surround and spatialization techniques to transform the space into a different sonic world. My spatialization design is composed of two subsystems -- one is the four (4) speaker center cluster diffusing towards the four (4) sections of audience, and the other is the four (4) speakers on the four (4) corners of the theatre. The outside ring provides rich sound source localization and the inside ring provides more support for control of the spatialization details. In my design, four (4) lavalier microphones are hung under the center iron cage from the four (4) corners of the stage, each ten (10) feet above the stage. The signal from each microphone is sent to the two (2) center speakers in the cluster diagonally opposite the microphone. With appropriate level adjustment of the microphones, the audience will not notice the amplification of the voices; however, through my spatialization system, the presence and location of the voices of all actors are clearly preserved for the entire audience. With such vocal reinforcement provided by the microphones, I no longer need to worry about the underscoring overwhelming the dialogue on stage. A successful sound system design should not only provide a functional system, but also take on the responsibility of bringing the actors' voices to the audience and engaging the audience with the world that we create on stage. 
By designing a system which reinforces the actors' voices while at the same time providing control over localization of movement of sound effects, I was able not only to make the text present and clear for the audiences, but also to support the storyline strongly through my composed music, environmental soundscapes, and underscoring.

  16. Characterizing the 3-D atmosphere with NUCAPS sounding products from multiple platforms

    NASA Astrophysics Data System (ADS)

    Barnet, C. D.; Smith, N.; Gambacorta, A.; Wheeler, A. A.; Sjoberg, W.; Goldberg, M.

    2017-12-01

    The JPSS Proving Ground and Risk Reduction (PGRR) Program launched the Sounding Initiative in 2014 to develop operational applications that use 3-D satellite soundings. These are near-global daily swaths of vertical atmospheric profiles of temperature, moisture, and trace gas species. When high-vertical-resolution satellite soundings first became available, their assimilation into user applications was slow: forecasters familiar with 2-D satellite imagery or 1-D radiosondes had neither the technical capability nor the product knowledge to readily ingest satellite soundings. Similarly, the satellite sounding developer community lacked the wherewithal to understand the many challenges forecasters face in their real-time decision-making. It took the PGRR Sounding Initiative to bring these two communities together and develop novel applications that now depend on NUCAPS soundings. NUCAPS - the NOAA Unique Combined Atmospheric Processing System - is platform agnostic and generates satellite soundings from measurements made by infrared and microwave sounder pairs on the MetOp (IASI/AMSU) and Suomi NPP (CrIS/ATMS) polar-orbiting platforms. We highlight here three new applications developed under the PGRR Sounding Initiative: (i) aviation: NUCAPS identifies cold air "blobs" that cause jet fuel to freeze; (ii) severe weather: NUCAPS identifies areas of convective initiation; and (iii) air quality: NUCAPS identifies stratospheric intrusions and tracks long-range transport of biomass burning plumes. The value of NUCAPS being platform agnostic will become apparent with the JPSS-1 launch. NUCAPS soundings from Suomi NPP and JPSS-1, being about 50 min apart, could capture fast-changing weather events and, together with NUCAPS soundings from the two MetOp platforms (about 4 hours earlier in the day than JPSS), could characterize diurnal cycles. In this paper, we will summarize key accomplishments and assess whether NUCAPS maintains enough continuity in its sounding products from multiple platforms to sufficiently characterize atmospheric evolution at localized scales. In doing so, we will address one of the primary data requirements that emerged in the Sounding Initiative, namely the need for a time sequence of satellite sounding products.

  17. Comparing near-regional and local measurements of infrasound from Mount Erebus, Antarctica: Implications for monitoring

    NASA Astrophysics Data System (ADS)

    Dabrowa, A. L.; Green, D. N.; Johnson, J. B.; Phillips, J. C.; Rust, A. C.

    2014-11-01

    Local (100s of metres from the vent) monitoring of volcanic infrasound is a common tool at volcanoes characterized by frequent low-magnitude eruptions, but it is generally not safe or practical to have sensors so close to the vent during more intense eruptions. To investigate the potential and limitations of monitoring at near-regional ranges (10s of km), we studied infrasound detection and propagation at Mount Erebus, Antarctica. This site has both a good local monitoring network and an additional International Monitoring System infrasound array, IS55, located 25 km away. We compared data recorded at IS55 with a set of 117 known Strombolian events that were recorded with the local network in January 2006. 75% of these events were identified at IS55 by an analyst looking for a pressure transient coincident with an F-statistic detection, which identifies coherent infrasound signals. With the data from January 2006, we developed and calibrated an automated signal-detection algorithm based on threshold values of both the F-statistic and the correlation coefficient. Application of the algorithm across IS55 data for all of 2006 identified infrasonic signals expected to be Strombolian explosions, and proved reliable for indicating trends in eruption frequency. However, detectability at IS55 of known Strombolian events depended strongly on the local signal amplitude: 90% of events with local amplitudes > 25 Pa were identified at IS55, compared to only 26% of events with local amplitudes < 25 Pa. Event detection was also affected by considerable variation in amplitude decay rates between the local and near-regional sensors. Amplitudes recorded at IS55 varied between 3% and 180% of the amplitude expected assuming hemispherical spreading, indicating that amplitudes recorded at near-regional ranges to Erebus are unreliable indicators of event magnitude. Comparing amplitude decay rates with locally collected radiosonde data indicates a close relationship between recorded amplitude and lower-atmosphere effective sound speed structure. At times of increased sound speed gradient, higher amplitude decay rates are observed, consistent with increased upward refraction of acoustic energy along the propagation path. This study indicates that whilst monitoring activity levels at near-regional ranges can be successful, variable amplitude decay rates mean that quantitative analysis of infrasound data for eruption intensity and magnitude is not advisable without consideration of the local atmospheric sound speed structure.
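
    The hemispherical-spreading baseline against which the 3%-180% range is expressed simply scales amplitude by 1/r. A sketch with hypothetical numbers (not measurements from the study):

```python
def expected_amplitude(p_local, r_local, r_far):
    """Amplitude at range r_far assuming hemispherical (1/r) spreading
    from an amplitude p_local measured at range r_local."""
    return p_local * (r_local / r_far)

# Hypothetical numbers: 25 Pa measured 700 m from the vent,
# predicted at a 25 km array.
p_pred = expected_amplitude(25.0, 700.0, 25000.0)
# An observed 0.5 Pa expressed as a fraction of the prediction,
# analogous to the paper's 3%-180% comparison.
ratio = 0.5 / p_pred
print(round(p_pred, 2), round(ratio * 100))  # ~0.7 Pa expected, ~71%
```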

  18. Deep-sea fan deposition of the lower Tertiary Orca Group, eastern Prince William Sound, Alaska

    USGS Publications Warehouse

    Winkler, Gary R.

    1976-01-01

    The Orca Group is a thick, complexly deformed, sparsely fossiliferous sequence of flysch-like sedimentary and tholeiitic volcanic rocks of middle or late Paleocene age that crops out over an area of roughly 21,000 km2 in the Prince William Sound region and the adjacent Chugach Mountains. The Orca Group also probably underlies a large part of the Gulf of Alaska Tertiary province and the continental shelf south of the outcrop belt; coextensive rocks to the southwest on Kodiak Island are called the Ghost Rocks and Sitkalidak Formations. The Orca Group was pervasively faulted, tightly folded, and metamorphosed regionally to laumontite and prehnite-pumpellyite facies prior to, and perhaps concurrently with, intrusion of early Eocene granodiorite and quartz monzonite plutons. In eastern Prince William Sound, 95% of the Orca sedimentary rocks are interbedded feldspathic and lithofeldspathic sandstone, siltstone, and mudstone turbidites. Lithic components vary widely in abundance and composition, but labile sedimentary and volcanic grains dominate. A widespread yet minor amount of the mudstone is hemipelagic or pelagic, with scattered foraminifers. Pebbly mudstone with rounded clasts of exotic lithologies and locally conglomerate with angular blocks of deformed sandstone identical to the enclosing matrix are interbedded with the turbidites. Thick and thin tabular bodies of altered tholeiitic basalt are locally and regionally conformable with the sedimentary rocks, and constitute 15-20% of Orca outcrops in eastern Prince William Sound. The basalt consists chiefly of pillowed and nonpillowed flows, but also includes minor pillow breccia, tuff, and intrusive rocks. Nonvolcanic turbidites are interbedded with the basalt; lenticular bioclastic limestone, red and green mudstone, chert, and conglomerate locally overlie the basalt, but are supplanted upward by turbidites. From west to east, basalts within the Orca Group become increasingly fragmental and amygdaloidal. 
Such textural changes probably indicate shallower water to the east. A radial distribution of paleocurrents and distinctive associations of turbidite facies within the sedimentary rocks suggest that the Orca Group in eastern Prince William Sound was deposited on a westward-sloping, complex deep-sea fan. Detritus was derived primarily from 'tectonized' sedimentary, volcanic, and plutonic rocks. Coeval submarine volcanism resulted in intercalation of basalt within prisms of terrigenous sediment.

  19. Corneal-Reflection Eye-Tracking Technique for the Assessment of Horizontal Sound Localization Accuracy from 6 Months of Age.

    PubMed

    Asp, Filip; Olofsson, Åke; Berninger, Erik

    2016-01-01

    The evaluation of sound localization accuracy (SLA) requires precise behavioral responses from the listener. Such responses are not always possible to elicit in infants and young children, and procedures for the assessment of SLA are time consuming. The aim of this study was to develop a fast, valid, and objective method for the assessment of SLA from 6 months of age. To this end, pupil positions toward spatially distributed continuous auditory and visual stimuli were recorded. Twelve children (29 to 157 weeks of age) who passed the universal newborn hearing screening and eight adults (18 to 40 years of age) who had pure-tone thresholds ≤20 dB HL in both ears participated in this study. Horizontal SLA was measured in a sound field with 12 loudspeaker/display (LD)-pairs placed in an audiological test room at 10 degrees intervals in the frontal horizontal plane (±55 degrees azimuth). An ongoing auditory-visual stimulus was presented at 63 dB SPL(A) and shifted to randomized loudspeakers simultaneously with pauses of the visual stimulus. The visual stimulus was automatically reintroduced at the azimuth of the sounding loudspeaker after a sound-only period of 1.6 sec. A corneal-reflection eye-tracking technique allowed the acquisition of the subjects' pupil positions relative to the LD-pairs. The perceived azimuth was defined as the median of the intersections between gaze and LD-pairs during the final 500 msec of the sound-only period. Overall SLA was quantified by an Error Index (EI), where EI = 0 corresponded to perfect match between perceived and presented azimuths, whereas EI = 1 corresponded to chance. SLA was rapidly measured in children (mean = 168 sec, n = 12) and adults (mean = 162 sec, n = 8). Visual inspection of gaze data indicated that gaze shifts occurred in sound-only periods. The medians of the perceived sound-source azimuths either coincided with the presenting sound-source azimuth or were offset by a maximum of 20 degrees in children. 
In contrast, adults revealed a perfect match from -55 to 55 degrees, except at 15 degrees azimuth (median = 20 degrees), with 9/12 of the quartile ranges = 0 degrees. Children showed a mean (SD) EI of 0.42 (0.17), which was significantly higher than that in adults (p < 0.0001). However, children revealed a distinct age-related EI improvement of 16 percentage points per year (r = -0.68, p = 0.015, n = 12), suggesting an ongoing maturation of SLA in the studied age range (29 to 157 weeks). The eight adults showed high SLA and high reliability as demonstrated by the low mean (SD) EI (0.054 [0.021]) and the low variability in test-retest differences (95% confidence interval = -0.020 to 0.046). Corneal-reflection eye-tracking provides an objective and fast assessment of horizontal SLA from about 6 months of age and may enable gaze to be used as an objective measure for sound localization in this age group. Infant SLA is immature and improvements are related to increasing age. Adults show high overall SLA and low intra- and intersubject variability in SLA. The technique may be used as a clinical tool for the evaluation of very early intervention in a young, preverbal population and throughout the life span.

  20. Consistent modelling of wind turbine noise propagation from source to receiver.

    PubMed

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; Dag, Kaya O; Moriarty, Patrick

    2017-11-01

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  1. Air Temperature Distribution Measurement Using Asynchronous-Type Sound Probe

    NASA Astrophysics Data System (ADS)

    Katano, Yosuke; Wakatsuki, Naoto; Mizutani, Koichi

    2009-07-01

    In conventional temperature measurement using a sound probe, the start of operation of the two acoustic sensors must be precisely synchronized to measure the time of flight (TOF), tf, because synchronization precision determines TOF measurement accuracy. A wireless local area network (LAN) is convenient for constructing a sensing grid; however, it introduces millisecond-order fluctuations in delay and therefore cannot provide sufficient precision for synchronizing acoustic sensors. In previous studies, synchronization was achieved by a trigger line using a coaxial cable; however, the cable reduces the flexibility of a wireless sensing grid, especially in larger-scale measurements. In this study, an asynchronous-type sound probe is devised to compensate for the millisecond-order delay caused by the network. The validity of the probe was examined, and the air temperature distribution was measured using this method, with a matrix method employed to obtain the distribution. Similar results were observed using asynchronous-type sound probes and thermocouples. This demonstrates the validity of using a sensing grid with asynchronous-type sound probes for temperature distribution measurement even when the trigger line is omitted.
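
    Once the TOF over a known path is measured, the path-averaged air temperature follows from the temperature dependence of the speed of sound. A sketch using the common ideal-gas approximation c ≈ 20.05·sqrt(T[K]); the 1 m path length is illustrative, not a value from the record:

```python
import math

def temperature_from_tof(distance_m, tof_s):
    """Estimate path-averaged air temperature (deg C) from time of flight
    over a known path, using c ~ 20.05 * sqrt(T[K]) for dry air."""
    c = distance_m / tof_s          # measured speed of sound (m/s)
    return (c / 20.05) ** 2 - 273.15

# Round-trip check: at 20 deg C, c ~ 343.4 m/s, so 1 m takes ~2.91 ms.
c20 = 20.05 * math.sqrt(293.15)
print(round(temperature_from_tof(1.0, 1.0 / c20), 1))  # 20.0
```

    A matrix (tomographic) reconstruction then combines many such path averages along crossing paths to recover the spatial temperature distribution.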

  2. Time frequency analysis of sound from a maneuvering rotorcraft

    NASA Astrophysics Data System (ADS)

    Stephenson, James H.; Tinney, Charles E.; Greenwood, Eric; Watts, Michael E.

    2014-10-01

    The acoustic signatures produced by a full-scale Bell 430 helicopter during steady level flight and transient roll-right maneuvers are analyzed by way of time-frequency analysis. The roll-right maneuvers comprise both a medium and a fast roll rate. Data are acquired using a single ground-based microphone and are analyzed by way of the Morlet wavelet transform to extract the spectral properties and sound pressure levels as functions of time. The findings show that during maneuvering operations of the helicopter, both the overall sound pressure level and the blade-vortex interaction (BVI) sound pressure level are greatest when the roll rate of the vehicle is at its maximum. The reduced inflow in the region of the rotor disk where blade-vortex interaction noise originates is determined to be the cause of the increase in noise: a local decrease in inflow reduces the miss distance of the tip vortex and thereby increases the BVI noise signature. Blade loading and advance ratios are also investigated as possible mechanisms for increased sound production, but are shown to be fairly constant throughout the maneuvers.
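
    A Morlet wavelet transform of the kind used here can be sketched directly as correlation with scaled complex Morlet wavelets; the two-tone test signal and all parameters below are illustrative, not data from the study.

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w=6.0):
    """Continuous wavelet transform of x using a complex Morlet wavelet
    with roughly w oscillations inside its Gaussian envelope."""
    out = np.empty((len(freqs), len(x)), dtype=complex)
    for i, f in enumerate(freqs):
        s = w * fs / (2.0 * np.pi * f)              # scale in samples
        n = np.arange(-int(4 * s), int(4 * s) + 1)  # wavelet support
        wavelet = (np.exp(2j * np.pi * f * n / fs)
                   * np.exp(-n**2 / (2.0 * s**2)) / np.sqrt(s))
        out[i] = np.convolve(x, np.conj(wavelet), mode="same")
    return out

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
# A 60 Hz tone switching to 120 Hz mimics a time-varying rotor signature.
x = np.where(t < 0.5, np.sin(2 * np.pi * 60 * t), np.sin(2 * np.pi * 120 * t))

freqs = np.arange(20.0, 201.0, 2.0)
power = np.abs(morlet_cwt(x, fs, freqs)) ** 2
f_early = freqs[np.argmax(power[:, 100:400].mean(axis=1))]
f_late = freqs[np.argmax(power[:, 600:900].mean(axis=1))]
# Expect the dominant frequency near 60 Hz early and 120 Hz late.
```

    The magnitude of the transform gives the time-varying spectrum; squaring and scaling it yields the sound-pressure-level-versus-time maps used in analyses like this one.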

  3. Consistent modelling of wind turbine noise propagation from source to receiver

    DOE PAGES

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong; ...

    2017-11-28

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  4. Consistent modelling of wind turbine noise propagation from source to receiver

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barlas, Emre; Zhu, Wei Jun; Shen, Wen Zhong

    The unsteady nature of wind turbine noise is a major reason for annoyance. The variation of far-field sound pressure levels is not only caused by the continuous change in wind turbine noise source levels but also by the unsteady flow field and the ground characteristics between the turbine and receiver. To take these phenomena into account, a consistent numerical technique that models the sound propagation from the source to receiver is developed. Large eddy simulation with an actuator line technique is employed for the flow modelling and the corresponding flow fields are used to simulate sound generation and propagation. The local blade relative velocity, angle of attack, and turbulence characteristics are input to the sound generation model. Time-dependent blade locations and the velocity between the noise source and receiver are considered within a quasi-3D propagation model. Long-range noise propagation of a 5 MW wind turbine is investigated. Sound pressure level time series evaluated at the source time are studied for varying wind speeds, surface roughness, and ground impedances within a 2000 m radius from the turbine.

  5. A theoretical study for the propagation of rolling noise over a porous road pavement

    NASA Astrophysics Data System (ADS)

    Lui, Wai Keung; Li, Kai Ming

    2004-07-01

    A simplified model based on the study of sound diffracted by a sphere is proposed for investigating the propagation of noise in a hornlike geometry between porous road surfaces and rolling tires. The simplified model is verified by comparing its predictions with the published numerical and experimental results of studies on the horn amplification of sound over a road pavement. In a parametric study, a point monopole source is assumed to be localized on the surface of a tire. In the frequency range of interest, a porous road pavement can effectively reduce the level of amplified sound due to the horn effect. It has been shown that an increase in the thickness and porosity of a porous layer, or the use of a double layer of porous road pavement, attenuates the horn amplification of sound. However, a decrease in the flow resistivity of a porous road pavement does little to reduce the horn amplification of sound. It has also been demonstrated that the horn effect over a porous road pavement is less dependent on the angular position of the source on the surface of tires.

  6. Acoustic investigation of wall jet over a backward-facing step using a microphone phased array

    NASA Astrophysics Data System (ADS)

    Perschke, Raimund F.; Ramachandran, Rakesh C.; Raman, Ganesh

    2015-02-01

    The acoustic properties of a wall jet over a hard-walled backward-facing step of aspect ratios 6, 3, 2, and 1.5 are studied using a 24-channel microphone phased array at Mach numbers up to M = 0.6. The Reynolds number based on inflow velocity and step height ranges from Re_h = 3.0 × 10^4 to 7.2 × 10^5. Flow without and with side walls is considered. The experimental setup is open in the wall-normal direction and the expansion ratio is effectively 1. In the case of flow through a duct, symmetry of the flow in the spanwise direction is lost downstream of separation at all but the largest aspect ratio, as revealed by oil paint flow visualization. Hydrodynamic scattering of turbulence from the trailing edge of the step contributes significantly to the radiated sound. Reflection of acoustic waves from the bottom plate results in a modulation of power spectral densities. Acoustic source localization has been conducted using a 24-channel microphone phased array. Convective mean-flow effects on the apparent source origin have been assessed by placing a loudspeaker underneath a perforated flat plate and evaluating the displacement of the beamforming peak with inflow Mach number. Two source mechanisms are found near the step. One is due to the interaction of the turbulent wall jet with the convex edge of the step. Free-stream turbulence sound is found to peak downstream of the step. The presence of side walls increases free-stream sound. Results of the flow visualization are correlated with acoustic source maps. Trailing-edge sound and free-stream turbulence sound can be discriminated using source localization.

  7. Three-dimensional interpretation of TEM soundings

    NASA Astrophysics Data System (ADS)

    Barsukov, P. O.; Fainberg, E. B.

    2013-07-01

    We describe the approach to the interpretation of electromagnetic (EM) sounding data which iteratively adjusts the three-dimensional (3D) model of the environment by local one-dimensional (1D) transformations and inversions and reconstructs the geometrical skeleton of the model. The final 3D inversion is carried out with the minimal number of the sought parameters. At each step of the interpretation, the model of the medium is corrected according to the geological information. The practical examples of the suggested method are presented.

  8. Brief Report: Suboptimal Auditory Localization in Autism Spectrum Disorder--Support for the Bayesian Account of Sensory Symptoms

    ERIC Educational Resources Information Center

    Skewes, Joshua C.; Gebauer, Line

    2016-01-01

    Convergent research suggests that people with ASD have difficulties localizing sounds in space. These difficulties have implications for communication, the development of social behavior, and quality of life. Recently, a theory has emerged which treats perceptual symptoms in ASD as the product of impairments in implicit Bayesian inference; as…

  9. 76 FR 59898 - Special Local Regulation, Hydroplane Races, Lake Sammamish, WA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-28

    ... Local Regulation, Hydroplane Races, Lake Sammamish, WA AGENCY: Coast Guard, DHS. ACTION: Notice of... hydroplane event in Lake Sammamish, WA from 11 a.m. until 4:30 p.m. from September 30, 2011 through October 2... within the Captain of the Port Puget Sound Area of Responsibility 33 CFR 100.1308. The Lake Sammamish...

  10. Regional Science and Technology (RS&T) Organizations

    EPA Pesticide Factsheets

    EPA’s RS&T Organizations perform analytical and other work that: practices sound science, implements the principles of environmental protection, and promotes partnerships with states, Indian Nations, and local governments.

  11. Assessment on transient sound radiation of a vibrating steel bridge due to traffic loading

    NASA Astrophysics Data System (ADS)

    Zhang, He; Xie, Xu; Jiang, Jiqing; Yamashita, Mikio

    2015-02-01

    Structure-borne noise induced by vehicle-bridge coupling vibration is harmful to human health and the living environment. Investigating the sound pressure level and the radiation mechanism of structure-borne noise is of great significance for the assessment of environmental noise pollution and for noise control. In this paper, the transient noise induced by vehicle-bridge coupling vibration is investigated by employing the hybrid finite element method (FEM) and boundary element method (BEM). The effect of local vibration of the bridge deck is taken into account and the sound response of the structure-borne noise in the time domain is obtained. The precision of the proposed method is validated by comparing numerical results to the on-site measurements of a steel girder-plate bridge in service. This implies that the sound pressure level and its distribution in both the time and frequency domains may be predicted by the hybrid FEM-BEM approach with satisfactory accuracy. Numerical results indicate that the vibrating steel bridge radiates high-level noise because of its extreme flexibility and large surface area for sound radiation. Impact effects on the sound pressure as the vehicle leaves the bridge are observed. The shape of the contour lines in the area around the bridge deck can be explained by the mode shapes of the bridge. The moving speed of the vehicle only affects the sound pressure components with frequencies lower than 10 Hz.

  12. Speech training alters consonant and vowel responses in multiple auditory cortex fields

    PubMed Central

    Engineer, Crystal T.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Fink, Melyssa K.; Kilgard, Michael P.

    2015-01-01

    Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. PMID:25827927

  13. Verification of the helioseismology travel-time measurement technique and the inversion procedure for sound speed using artificial data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parchevsky, K. V.; Zhao, J.; Hartlep, T.

    We performed three-dimensional numerical simulations of the solar surface acoustic wave field for the quiet Sun and for three models with different localized sound-speed perturbations in the interior with deep, shallow, and two-layer structures. We used the simulated data generated by two solar acoustics codes that employ the same standard solar model as a background model, but utilize different integration techniques and different models of stochastic wave excitation. Acoustic travel times were measured using a time-distance helioseismology technique, and compared with predictions from ray theory frequently used for helioseismic travel-time inversions. It is found that the measured travel-time shifts agree well with the helioseismic theory for sound-speed perturbations, and for the measurement procedure with and without phase-speed filtering of the oscillation signals. This testing verifies the whole measuring-filtering-inversion procedure for static sound-speed anomalies with small amplitude inside the Sun outside regions of strong magnetic field. It is shown that the phase-speed filtering, frequently used to extract specific wave packets and improve the signal-to-noise ratio, does not introduce significant systematic errors. Results of the sound-speed inversion procedure show good agreement with the perturbation models in all cases. Due to its smoothing nature, the inversion procedure may overestimate sound-speed variations in regions with sharp gradients of the sound-speed profile.
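The ray-theory comparison above rests on the standard first-order relation between a small sound-speed perturbation and the resulting travel-time shift along the unperturbed ray path (a textbook form stated for orientation, not quoted from the paper):

```latex
% First-order (ray-approximation) travel-time shift: a localized perturbation
% \delta c of the background sound speed c changes the travel time along the
% unperturbed ray path \Gamma by
\delta\tau = -\int_{\Gamma} \frac{\delta c}{c^{2}}\, \mathrm{d}s
% so a positive sound-speed anomaly shortens the travel time (\delta\tau < 0).
```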

  14. Left-right and front-back spatial hearing with multiple directional microphone configurations in modern hearing aids.

    PubMed

    Carette, Evelyne; Van den Bogaert, Tim; Laureyns, Mark; Wouters, Jan

    2014-10-01

    Several studies have demonstrated negative effects of directional microphone configurations on left-right and front-back (FB) sound localization. New processing schemes, such as frequency-dependent directionality and front focus with wireless ear-to-ear communication in recent, commercial hearing aids may preserve the binaural cues necessary for left-right localization and may introduce useful spectral cues necessary for FB disambiguation. In this study, two hearing aids with different processing schemes, which were both designed to preserve the ability to localize sounds in the horizontal plane (left-right and FB), were compared. We compared horizontal (left-right and FB) sound localization performance of hearing aid users fitted with two types of behind-the-ear (BTE) devices. The first type of BTE device had four different programs that provided (1) no directionality, (2-3) symmetric frequency-dependent directionality, and (4) an asymmetric configuration. The second pair of BTE devices was evaluated in its omnidirectional setting. This setting automatically activates a soft forward-oriented directional scheme that mimics the pinna effect. Also, wireless communication between the hearing aids was present in this configuration (5). A broadband stimulus was used as a target signal. The directional hearing abilities of the listeners were also evaluated without hearing aids as a reference. A total of 12 listeners with moderate to severe hearing loss participated in this study. All were experienced hearing-aid users. As a reference, 11 listeners with normal hearing participated. The participants were positioned in a 13-speaker array (left-right, -90°/+90°) or 7-speaker array (FB, 0-180°) and were asked to report the number of the loudspeaker located the closest to where the sound was perceived. The root mean square error was calculated for the left-right experiment, and the percentage of FB errors was used as a FB performance measure. 
Results were analyzed with repeated-measures analysis of variance. For the left-right localization task, no significant differences could be proven between the unaided condition and both partial directional schemes and the omnidirectional scheme. The soft forward-oriented system and the asymmetric system did show a detrimental effect compared with the unaided condition. On average, localization was worst with the asymmetric configuration. Analysis of the results of the FB experiment showed good performance, similar to unaided, with both the partial directional systems and the asymmetric configuration. Significantly worse performance was found with the omnidirectional and the omnidirectional soft forward-oriented BTE systems compared with the other hearing-aid systems. Bilaterally fitted partial directional systems preserve (part of) the binaural cues necessary for left-right localization and introduce, preserve, or enhance useful spectral cues that allow FB disambiguation. Omnidirectional systems, although good for left-right localization, do not provide the user with enough spectral information for optimal FB localization performance. American Academy of Audiology.

  15. Bottom currents and sediment transport in Long Island Sound: A modeling study

    USGS Publications Warehouse

    Signell, R.P.; List, J.H.; Farris, A.S.

    2000-01-01

    A high resolution (300-400 m grid spacing), process oriented modeling study was undertaken to elucidate the physical processes affecting the characteristics and distribution of sea-floor sedimentary environments in Long Island Sound. Simulations using idealized forcing and high-resolution bathymetry were performed using a three-dimensional circulation model ECOM (Blumberg and Mellor, 1987) and a stationary shallow water wave model HISWA (Holthuijsen et al., 1989). The relative contributions of tide-, density-, wind- and wave-driven bottom currents are assessed and related to observed characteristics of the sea-floor environments, and simple bedload sediment transport simulations are performed. The fine grid spacing allows features with scales of several kilometers to be resolved. The simulations clearly show physical processes that affect the observed sea-floor characteristics at both regional and local scales. Simulations of near-bottom tidal currents reveal a strong gradient in the funnel-shaped eastern part of the Sound, which parallels an observed gradient in sedimentary environments from erosion or nondeposition, through bedload transport and sediment sorting, to fine-grained deposition. A simulation of estuarine flow driven by the along-axis gradient in salinity shows generally westward bottom currents of 2-4 cm/s that are locally enhanced to 6-8 cm/s along the axial depression of the Sound. Bottom wind-driven currents flow downwind along the shallow margins of the basin, but flow against the wind in the deeper regions. These bottom flows (in opposition to the wind) are strongest in the axial depression and add to the estuarine flow when winds are from the west. The combination of enhanced bottom currents due to both estuarine circulation and the prevailing westerly winds provide an explanation for the relatively coarse sediments found along parts of the axial depression. 
Climatological simulations of wave-driven bottom currents show that frequent high-energy events occur along the shallow margins of the Sound, explaining the occurrence of relatively coarse sediments in these regions. Bedload sediment transport calculations show that the estuarine circulation coupled with the oscillatory tidal currents result in a net westward transport of sand in much of the eastern Sound. Local departures from this regional westward trend occur around topographic and shoreline irregularities, and there is strong predicted convergence of bedload transport over most of the large, linear sand ridges in the eastern Sound, providing a mechanism which prevents their decay. The strong correlation between the near-bottom current intensity based on the model results and the sediment response, as indicated by the distribution of sedimentary environments, provides a framework for predicting the long-term effects of anthropogenic activities.

  16. Three dimensional volcano-acoustic source localization at Karymsky Volcano, Kamchatka, Russia

    NASA Astrophysics Data System (ADS)

    Rowell, Colin

    We test two methods of 3-D acoustic source localization on volcanic explosions and small-scale jetting events at Karymsky Volcano, Kamchatka, Russia. Recent infrasound studies have provided evidence that volcanic jets produce low-frequency aerodynamic sound (jet noise) similar to that from man-made jet engines. Man-made jets are known to produce sound through turbulence along the jet axis, but discrimination of sources along the axis of a volcanic jet requires a network of sufficient topographic relief to attain resolution in the vertical dimension. At Karymsky Volcano, the topography of an eroded edifice adjacent to the active cone provided a platform for the atypical deployment of five infrasound sensors with intra-network relief of ~600 m in July 2012. A novel 3-D inverse localization method, srcLoc, is tested and compared against a more common grid-search semblance technique. Simulations using synthetic signals indicate that srcLoc is capable of determining vertical source locations for this network configuration to within ±150 m or better. However, srcLoc locations for explosions and jetting at Karymsky Volcano show a persistent overestimation of source elevation and underestimation of sound speed by an average of ~330 m and 25 m/s, respectively. The semblance method is able to produce more realistic source locations by fixing the sound speed to expected values of 335-340 m/s. The consistency of location errors for both explosions and jetting activity over a wide range of wind and temperature conditions points to the influence of topography. Explosion waveforms exhibit amplitude relationships and waveform distortion strikingly similar to those theorized by modeling studies of wave diffraction around the crater rim. We suggest delay of signals and apparent elevated source locations are due to altered raypaths and/or crater diffraction effects. 
Our results suggest the influence of topography in the vent region must be accounted for when attempting 3-D volcano acoustic source localization. Though the data presented here are insufficient to resolve noise sources for these jets, which are much smaller in scale than those of previous volcanic jet noise studies, similar techniques may be successfully applied to large volcanic jets in the future.
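The grid-search semblance technique mentioned above can be sketched in one dimension: back-shift each sensor's trace by the travel time from a candidate source and score the coherence of the stack; the semblance peak marks the source. Sensor geometry, sound speed, and signals below are illustrative, not the Karymsky network.

```python
import numpy as np

def semblance(traces, shifts):
    """Coherence of the stack after undoing per-channel delays (1.0 = aligned)."""
    aligned = np.array([np.roll(tr, -s) for tr, s in zip(traces, shifts)])
    return np.sum(np.sum(aligned, axis=0) ** 2) / (len(traces) * np.sum(aligned ** 2))

fs, c = 1000, 340.0                              # sample rate [Hz], sound speed [m/s]
pulse = np.random.default_rng(2).standard_normal(50)
sensors = np.array([0.0, 200.0, 500.0])          # sensor positions along a line [m]
src = 300.0                                      # true source position [m]
delays = np.round(np.abs(sensors - src) / c * fs).astype(int)
traces = np.array([np.roll(np.pad(pulse, (0, 1950)), d) for d in delays])

grid = np.arange(0.0, 501.0, 10.0)               # candidate source positions [m]
scores = [semblance(traces, np.round(np.abs(sensors - x) / c * fs).astype(int))
          for x in grid]
print(grid[int(np.argmax(scores))])  # → 300.0
```

Fixing the sound speed, as the abstract describes, amounts to holding `c` constant while searching only over position.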

  17. Acoustic Localization of Breakdown in Radio Frequency Accelerating Cavities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lane, Peter Gwin

    Current designs for muon accelerators require high-gradient radio frequency (RF) cavities to be placed in solenoidal magnetic fields. These fields help contain and efficiently reduce the phase space volume of source muons in order to create a usable muon beam for collider and neutrino experiments. In this context and in general, the use of RF cavities in strong magnetic fields has its challenges. It has been found that placing normal conducting RF cavities in strong magnetic fields reduces the threshold at which RF cavity breakdown occurs. To aid the effort to study RF cavity breakdown in magnetic fields, it would be helpful to have a diagnostic tool which can localize the source of breakdown sparks inside the cavity. These sparks generate thermal shocks to small regions of the inner cavity wall that can be detected and localized using microphones attached to the outer cavity surface. Details on RF cavity sound sources as well as the hardware, software, and algorithms used to localize the source of sound emitted from breakdown thermal shocks are presented. In addition, results from simulations and experiments on three RF cavities, namely the Aluminum Mock Cavity, the High-Pressure Cavity, and the Modular Cavity, are also given. These results demonstrate the validity and effectiveness of the described technique for acoustic localization of breakdown.

  18. Acoustic localization of breakdown in radio frequency accelerating cavities

    NASA Astrophysics Data System (ADS)

    Lane, Peter

    Current designs for muon accelerators require high-gradient radio frequency (RF) cavities to be placed in solenoidal magnetic fields. These fields help contain and efficiently reduce the phase space volume of source muons in order to create a usable muon beam for collider and neutrino experiments. In this context and in general, the use of RF cavities in strong magnetic fields has its challenges. It has been found that placing normal conducting RF cavities in strong magnetic fields reduces the threshold at which RF cavity breakdown occurs. To aid the effort to study RF cavity breakdown in magnetic fields, it would be helpful to have a diagnostic tool which can localize the source of breakdown sparks inside the cavity. These sparks generate thermal shocks to small regions of the inner cavity wall that can be detected and localized using microphones attached to the outer cavity surface. Details on RF cavity sound sources as well as the hardware, software, and algorithms used to localize the source of sound emitted from breakdown thermal shocks are presented. In addition, results from simulations and experiments on three RF cavities, namely the Aluminum Mock Cavity, the High-Pressure Cavity, and the Modular Cavity, are also given. These results demonstrate the validity and effectiveness of the described technique for acoustic localization of breakdown.
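The microphone-based spark localization described in records 17 and 18 can be sketched as a grid search over candidate source points that minimizes time-difference-of-arrival residuals. Geometry, wave speed, and grid below are made up for illustration; this is a generic TDOA sketch, not the thesis's algorithm.

```python
import numpy as np

def tdoa_localize(mic_pos, arrivals, candidates, c=3000.0):
    """Grid search minimizing time-difference-of-arrival residuals.
    mic_pos: (M, 3) microphone positions [m]; arrivals: (M,) arrival times [s];
    candidates: (N, 3) candidate source points; c: assumed wave speed [m/s]."""
    t_obs = arrivals - arrivals[0]                   # TDOAs relative to mic 0
    best, best_err = None, np.inf
    for p in candidates:
        d = np.linalg.norm(mic_pos - p, axis=1)
        err = np.sum((t_obs - (d - d[0]) / c) ** 2)  # observed-vs-predicted misfit
        if err < best_err:
            best, best_err = p, err
    return best

# usage: exact arrival times from a known point recover it on the grid
mics = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
true = np.array([0.3, 0.4, 0.2])
t = np.linalg.norm(mics - true, axis=1) / 3000.0
grid = np.array([[x, y, z] for x in np.linspace(0, 1, 11)
                 for y in np.linspace(0, 1, 11)
                 for z in np.linspace(0, 1, 11)])
print(tdoa_localize(mics, t, grid))  # close to [0.3, 0.4, 0.2]
```

Using differences relative to one reference microphone removes the unknown spark time, which is why at least four sensors are needed for a 3-D position.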

  19. Comparisons of predicted steady-state levels in rooms with extended- and local-reaction bounding surfaces

    NASA Astrophysics Data System (ADS)

    Hodgson, Murray; Wareing, Andrew

    2008-01-01

    A combined beam-tracing and transfer-matrix model for predicting steady-state sound-pressure levels in rooms with multilayer bounding surfaces was used to compare the effect of extended- and local-reaction surfaces, and the accuracy of the local-reaction approximation. Three rooms—an office, a corridor and a workshop—with one or more multilayer test surfaces were considered. The test surfaces were a single-glass panel, a double-drywall panel, a carpeted floor, a suspended-acoustical ceiling, a double-steel panel, and glass fibre on a hard backing. Each test surface was modeled as of extended or of local reaction. Sound-pressure levels were predicted and compared to determine the significance of the surface-reaction assumption. The main conclusions were that the difference between modeling a room surface as of extended or of local reaction is not significant when the surface is a single plate or a single layer of material (solid or porous) with a hard backing. The difference is significant when the surface consists of multilayers of solid or porous material and includes a layer of fluid with a large thickness relative to the other layers. The results are partially explained by considering the surface-reflection coefficients at the first-reflection angles.
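The local-reaction assumption discussed above has a compact consequence: the normalized surface impedance of a locally reacting surface is independent of incidence angle, so the plane-wave reflection coefficient follows in one line. The impedance value below is illustrative, not one of the paper's test surfaces.

```python
import numpy as np

def reflection_coefficient(Z, theta):
    """Plane-wave R for a locally reacting surface; Z is normalized by rho*c,
    theta is the incidence angle from the surface normal [rad]."""
    return (Z * np.cos(theta) - 1.0) / (Z * np.cos(theta) + 1.0)

Z = 3.0 + 4.0j                         # illustrative normalized surface impedance
angles = np.radians([0, 30, 60, 85])
print(np.abs(reflection_coefficient(Z, angles)))
```

For an extended-reaction (multilayer) surface, Z itself would vary with theta, which is exactly the difference the paper's transfer-matrix model captures.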

  20. Cross-correlation, triangulation, and curved-wavefront focusing of coral reef sound using a bi-linear hydrophone array.

    PubMed

    Freeman, Simon E; Buckingham, Michael J; Freeman, Lauren A; Lammers, Marc O; D'Spain, Gerald L

    2015-01-01

    A seven-element, bi-linear hydrophone array was deployed over a coral reef in the Papahānaumokuākea Marine National Monument, Northwest Hawaiian Islands, in order to investigate the spatial, temporal, and spectral properties of biological sound in an environment free of anthropogenic influences. Local biological sound sources, including snapping shrimp and other organisms, produced curved-wavefront acoustic arrivals at the array, allowing source location via focusing to be performed over an area of 1600 m². Initially, however, a rough estimate of source location was obtained from triangulation of pair-wise cross-correlations of the sound. Refinements to these initial source locations, and source frequency information, were then obtained using two techniques, conventional and adaptive focusing. It was found that most of the sources were situated on or inside the reef structure itself, rather than over adjacent sandy areas. Snapping-shrimp-like sounds, all with similar spectral characteristics, originated from individual sources predominantly in one area to the east of the array. To the west, the spectral and spatial distributions of the sources were more varied, suggesting the presence of a multitude of heterogeneous biological processes. In addition to the biological sounds, some low-frequency noise due to distant breaking waves was received from end-fire north of the array.
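The pair-wise cross-correlation step that seeds the triangulation can be sketched as follows: the lag of the cross-correlation peak between two hydrophones gives the inter-sensor delay of a snap. The signals and sampling rate are synthetic.

```python
import numpy as np

def xcorr_delay(x, y, fs):
    """Delay (s) of y relative to x, from the peak of the full cross-correlation."""
    lag = np.argmax(np.correlate(y, x, mode="full")) - (len(x) - 1)
    return lag / fs

fs = 10_000
snap = np.random.default_rng(0).standard_normal(200)   # a broadband "snap"
x = np.zeros(2000); x[500:700] = snap                  # arrival at hydrophone 1
y = np.zeros(2000); y[530:730] = snap                  # same snap, 30 samples later
print(xcorr_delay(x, y, fs))  # → 0.003
```

With delays for several hydrophone pairs and a known sound speed, each delay constrains the source to a hyperbola, and their intersection gives the rough location that focusing then refines.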

  1. The effect of hearing impairment on localization dominance for single-word stimuli

    PubMed Central

    Akeroyd, Michael A; Guy, Fiona H.

    2012-01-01

    Localization dominance (one of the phenomena of the “precedence effect”) was measured in a large number of normal hearing and hearing-impaired individuals and related to self-reported difficulties in everyday listening. The stimuli (single words) were made up of a “lead” followed 4 ms later by an equal-level “lag” from a different direction. The stimuli were presented from a circular ring of loudspeakers, either in quiet or in a background of spatially-diffuse babble. Listeners were required to identify the loudspeaker from which they heard the sound. Localization dominance was quantified by the weighting factor c [B.G. Shinn-Cunningham et al., J. Acoust. Soc. Am. 93, 2923-2932 (1993)]. The results demonstrated large individual differences: some listeners showed near-perfect localization dominance (c near 1) but many showed a much reduced effect. Two thirds (64/93) of listeners gave a value of c of at least 0.75. There was a significant correlation with hearing loss, such that better-hearing listeners showed better localization dominance. One of the items of the self-report questionnaire (“Do you have the impression of sounds being exactly where you would expect them to be?”) showed a significant correlation with the experimental results. This suggests that reductions in localization dominance may affect everyday auditory perception. PMID:21786901
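The weighting factor c can be recovered from lead/lag azimuths and reported azimuths by least squares on the Shinn-Cunningham model theta_resp ≈ c*theta_lead + (1 - c)*theta_lag. The listener data below are synthetic, and the closed-form fit is a generic least-squares estimate, not necessarily the paper's exact procedure.

```python
import numpy as np

def fit_c(theta_lead, theta_lag, theta_resp):
    """Least-squares c for theta_resp ≈ c*theta_lead + (1 - c)*theta_lag."""
    num = np.sum((theta_resp - theta_lag) * (theta_lead - theta_lag))
    return num / np.sum((theta_lead - theta_lag) ** 2)

rng = np.random.default_rng(1)
lead = rng.uniform(-90, 90, 50)                        # lead azimuths [deg]
lag = rng.uniform(-90, 90, 50)                         # lag azimuths [deg]
resp = 0.8 * lead + 0.2 * lag + rng.normal(0, 2, 50)   # a listener with c ≈ 0.8
print(round(fit_c(lead, lag, resp), 2))                # close to 0.8
```

A listener with perfect localization dominance would yield c near 1 (responses track the lead); c near 0.5 means lead and lag are weighted equally.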

  2. Effectiveness of Interaural Delays Alone as Cues During Dynamic Sound Localization

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The contribution of interaural time differences (ITDs) to the localization of virtual sound sources with and without head motion was examined. Listeners estimated the apparent azimuth, elevation and distance of virtual sources presented over headphones. Stimuli (3 sec., white noise) were synthesized from minimum-phase representations of nonindividualized head-related transfer functions (HRTFs); binaural magnitude spectra were derived from the minimum phase estimates and ITDs were represented as a pure delay. During dynamic conditions, listeners were encouraged to move their heads; head position was tracked and stimuli were synthesized in real time using a Convolvotron to simulate a stationary external sound source. Two synthesis conditions were tested: (1) both interaural level differences (ILDs) and ITDs correctly correlated with source location and head motion, (2) ITDs correct, no ILDs (flat magnitude spectrum). Head movements reduced azimuth confusions primarily when interaural cues were correctly correlated, although a smaller effect was also seen for ITDs alone. Externalization was generally poor for ITD-only conditions and was enhanced by head motion only for normal HRTFs. Overall the data suggest that, while ITDs alone can provide a significant cue for azimuth, the errors most commonly associated with virtual sources are reduced by location-dependent magnitude cues.
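The ITD-only condition can be illustrated with the classic Woodworth spherical-head approximation as a stand-in for the measured HRTFs used in the study; the head radius and the positive-azimuth-is-right convention below are assumptions of this sketch, and the ITD is applied as a pure whole-sample delay.

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """ITD in seconds for a source at the given azimuth (spherical-head model)."""
    th = np.radians(azimuth_deg)
    return head_radius / c * (th + np.sin(th))

def itd_only_stimulus(noise, azimuth_deg, fs):
    """Return (left, right) channels: identical magnitude spectra, ITD as a pure delay."""
    d = int(round(woodworth_itd(abs(azimuth_deg)) * fs))   # delay in samples
    near = np.pad(noise, (0, d))       # ear nearer the source hears it first
    far = np.pad(noise, (d, 0))        # far ear: same waveform, delayed
    return (far, near) if azimuth_deg >= 0 else (near, far)

print(round(woodworth_itd(90.0) * 1e6))  # → 656 (microseconds)
```

Because both channels share one magnitude spectrum, such stimuli carry no interaural level or spectral cues, which is consistent with the poor externalization the abstract reports for ITD-only conditions.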

  3. Decoding auditory spatial and emotional information encoding using multivariate versus univariate techniques.

    PubMed

    Kryklywy, James H; Macpherson, Ewan A; Mitchell, Derek G V

    2018-04-01

    Emotion can have diverse effects on behaviour and perception, modulating function in some circumstances, and sometimes having little effect. Recently, it was identified that part of the heterogeneity of emotional effects could be due to a dissociable representation of emotion in dual pathway models of sensory processing. Our previous fMRI experiment using traditional univariate analyses showed that emotion modulated processing in the auditory 'what' but not 'where' processing pathway. The current study aims to further investigate this dissociation using a more recently emerging multi-voxel pattern analysis searchlight approach. While undergoing fMRI, participants localized sounds of varying emotional content. A searchlight multi-voxel pattern analysis was conducted to identify activity patterns predictive of sound location and/or emotion. Relative to the prior univariate analysis, MVPA indicated larger overlapping spatial and emotional representations of sound within early secondary regions associated with auditory localization. However, consistent with the univariate analysis, these two dimensions were increasingly segregated in late secondary and tertiary regions of the auditory processing streams. These results, while complementary to our original univariate analyses, highlight the utility of multiple analytic approaches for neuroimaging, particularly for neural processes with known representations dependent on population coding.

  4. Steep, Transient Density Gradients in the Martian Ionosphere Similar to the Ionopause at Venus

    NASA Astrophysics Data System (ADS)

    Duru, Firdevs; Gurnett, Donald; Frahm, Rudy; Winningham, D. L.; Morgan, David; Howes, Gregory

    Using Mars Advanced Radar for Subsurface and Ionospheric Sounding (MARSIS) on the Mars Express (MEX) spacecraft, the electron density can be measured by two methods: from the excitation of local plasma oscillations and from remote sounding. A study of the local electron density versus time for 1664 orbits revealed that in 132 orbits very sharp gradients in the electron density occurred that are similar to the ionopause boundary commonly observed at Venus. In 40 of these cases, remote sounding data have also confirmed identical locations of steep ionopause-like density gradients. Measurements from the Analyzer of Space Plasma and Energetic Atoms (ASPERA-3) Electron Spectrometer (ELS) and Ion Mass Analyzer (IMA) instruments (also on Mars Express) verify that these sharp decreases in the electron density occur somewhere between the end of the region where ionospheric photoelectrons are dominant and the magnetosheath. Combined studies of the two experiments reveal that the steep density gradients define a boundary where the magnetic fields change from open to closed. This study shows that, although the individual cases are from a wide range of altitudes, the average altitude of the boundary as a function of solar zenith angle is almost constant. The average altitude is approximately 500 km up to solar zenith angles of 60°, after which it shows a slight increase. The average thickness of the boundary is about 22 km according to remote sounding measurements. The altitude of the steep gradients shows an increase at locations with strong crustal magnetic fields.

  5. Relating large-scale climate variability to local species abundance: ENSO forcing and shrimp in Breton Sound, Louisiana, USA

    USGS Publications Warehouse

    Piazza, Bryan P.; LaPeyre, Megan K.; Keim, B.D.

    2010-01-01

    Climate creates environmental constraints (filters) that affect the abundance and distribution of species. In estuaries, these constraints often result from variability in water flow properties and environmental conditions (i.e. water flow, salinity, water temperature) and can have significant effects on the abundance and distribution of commercially important nekton species. We investigated links between large-scale climate variability and juvenile brown shrimp Farfantepenaeus aztecus abundance in Breton Sound estuary, Louisiana (USA). Our goals were to (1) determine if a teleconnection exists between local juvenile brown shrimp abundance and the El Niño Southern Oscillation (ENSO) and (2) relate that linkage to environmental constraints that may affect juvenile brown shrimp recruitment to, and survival in, the estuary. Our results identified a teleconnection between winter ENSO conditions and juvenile brown shrimp abundance in Breton Sound estuary the following spring. The physical connection results from the impact of ENSO on winter weather conditions in Breton Sound (air pressure, temperature, and precipitation). Juvenile brown shrimp abundance effects lagged ENSO by 3 mo: lower than average abundances of juvenile brown shrimp were caught in springs following winter El Niño events, and higher than average abundances of brown shrimp were caught in springs following La Niña winters. Salinity was the dominant ENSO-forced environmental filter for juvenile brown shrimp. Spring salinity was cumulatively forced by winter river discharge, winter wind forcing, and spring precipitation. Thus, predicting brown shrimp abundance requires incorporating climate variability into models.

  6. Preoperative planning of calcium deposit removal in calcifying tendinitis of the rotator cuff - possible contribution of computed tomography, ultrasound and conventional X-Ray.

    PubMed

    Izadpanah, Kaywan; Jaeger, Martin; Maier, Dirk; Südkamp, Norbert P; Ogon, Peter

    2014-11-20

    The purpose of the present study was to investigate the accuracy of ultrasound (US), conventional X-ray (CX) and computed tomography (CT) in estimating the total count, localization, morphology and consistency of calcium deposits (CDs) in the rotator cuff. US, CX and CT imaging was performed pre-operatively in 151 patients who underwent arthroscopic removal of CDs from the rotator cuff. In all procedures: (1) total CD counts were determined, (2) the appearance of the CDs in each imaging modality was correlated with the intraoperative consistency and (3) the CDs were localized relative to the acromion using US, CX and CT. Using US, 158 CDs were identified; using CT, 188; and using CX, 164. Reliable localization of the CDs was possible with all diagnostic modalities used. CT revealed 49% of the CDs to be septated, of which 85% were uni- and 15% multiseptated. CX was not suitable for predicting CD consistency. US reliably predicted viscous-solid CD consistency only when the deposits presented with full sound extinction (PPV 84.6%). CT had high positive and negative predictive values for detection of liquid-soft (PPV 92.9%) and viscous-solid (PPV 87.8%) CDs. US and CX are sufficient for preoperative planning of CD removal with regard to localization and prediction of consistency if the deposits present with full sound extinction, which is the case in the majority of patients. However, in patients with missing sound extinction, CT can be recommended if the consistency of the deposits must be determined. Satellite deposits or septations are regularly present, which is of importance if complete CD removal is intended.

  7. Do top predators cue on sound production by mesopelagic prey?

    NASA Astrophysics Data System (ADS)

    Baumann-Pickering, S.; Checkley, D. M., Jr.; Demer, D. A.

    2016-02-01

    Deep-scattering layer (DSL) organisms, comprising a variety of mesopelagic fishes, squids, siphonophores, crustaceans, and other invertebrates, are preferred prey for numerous large marine predators, e.g. cetaceans, seabirds, and fishes. Some of the DSL species migrate from depth during daylight to feed near the surface at night, transitioning during dusk and dawn. We investigated whether any DSL organisms create sound, particularly during the crepuscular periods. Over several nights in summer 2015, underwater sound was recorded in the San Diego Trough using a high-frequency acoustic recording package (HARP, 10 Hz to 100 kHz) suspended from a drifting surface float. Acoustic backscatter from the DSL was monitored nearby using a calibrated multiple-frequency (38, 70, 120, and 200 kHz) split-beam echosounder (Simrad EK60) on a small boat. DSL organisms produced sound between 300 and 1000 Hz, and the received levels were highest when the animals migrated past the recorder during ascent and descent. The DSL are globally present, so the observed acoustic phenomenon, if also ubiquitous, has wide-reaching implications. Sound travels farther than light or chemicals and thus can be sensed at greater distances by predators, prey, and mates. If sound is a characteristic feature of pelagic ecosystems, it likely plays a role in predator-prey relationships and overall ecosystem dynamics. Our new finding inspires numerous questions, such as: Which DSL organisms have evolved to create sound, how and why, and under what circumstances do they use it? Is sound production by DSL organisms truly ubiquitous, or does it depend on the local environment and species composition? How may sound production and perception be adapted to a changing environment? Do predators react to changes in sound? Can sound be used to quantify the composition of mixed-species assemblages, component densities and abundances, and hence be used in stock assessment or predictive modeling?

  8. Organizing Districts for Better Schools: A Summary of School District Reorganization Policies and Procedures. Bulletin, 1958, No. 9

    ERIC Educational Resources Information Center

    Fitzwater, C. O.

    1958-01-01

    The establishment of soundly organized local districts for administering the schools has long been a major problem in American education. Methods of dealing with this problem have varied greatly ranging from compulsory reorganization of districts by legislative decree to dependence upon highly permissive laws to be used or ignored as local people…

  9. Categorizing Sounds

    DTIC Science & Technology

    1989-12-01

    psychophysical study. These have been called Class A and Class B, or sensory and perceptual, or local and global, and probably 18 other terms. Among...Class A studies, detection, two-choice dis- criminability and other local measures reveal differential sensi- tivities of receptor or sensory systems...Eds.), Percepcion del Obieto: Estructura y Procesos, 553-596. Universidad Nacional de Educacion a Distancia. Lisanby, S. H., & Lockhead, G. R. (accepted

  10. Improvements to Passive Acoustic Tracking Methods for Marine Mammal Monitoring

    DTIC Science & Technology

    2016-05-02

    individual animals. SUBJECT TERMS: Marine mammal; Passive acoustic monitoring; Localization; Tracking; Multiple source; Sparse array ...al. 2004; Thode 2005; Nosal 2007] to localize animals in situations where straight-line propagation assumptions made by conventional marine mammal... Objective 1: Invert for sound speed profiles, hydrophone position, and hydrophone timing offset in addition to animal position. Almost all marine mammal

  11. Say what? Coral reef sounds as indicators of community assemblages and reef conditions

    NASA Astrophysics Data System (ADS)

    Mooney, T. A.; Kaplan, M. B.

    2016-02-01

    Coral reefs host some of the highest diversity of life on the planet. Unfortunately, reef health and biodiversity are declining or threatened as a result of climate change and human influences. Tracking these changes is necessary for effective resource management, yet estimating marine biodiversity and tracking trends in ecosystem health is a challenging and expensive task, especially in many pristine reefs which are remote and difficult to access. Many fishes, mammals and invertebrates make sound. These sounds are reflective of a number of vital biological processes and are a cue for settling reef larvae. Biological sounds may be a means to quantify ecosystem health and biodiversity; however, the relationship between coral reef soundscapes and the actual taxa present remains largely unknown. This study presents a comparative evaluation of the soundscapes of multiple reefs, naturally differing in benthic cover and fish diversity, in the U.S. Virgin Islands National Park. Using multiple recorders per reef, we characterized spatio-temporal variation in biological sound production within and among reefs. Analyses of sounds recorded over 4 summer months indicated diel trends in both fish and snapping shrimp acoustic frequency bands, with crepuscular peaks at all reefs. There were small but statistically significant acoustic differences among sites on a given reef, raising the possibility of localized acoustic habitats. The strength of diel trends in the lower, fish-frequency bands was correlated with coral cover and fish density, yet no such relationship was found with shrimp sounds, suggesting that fish sounds may be of higher relevance for tracking certain coral reef conditions. These findings indicate that, in spite of considerable variability within reef soundscapes, diel trends in low-frequency sound production reflect reef community assemblages. Further, monitoring soundscapes may be an efficient means of establishing and monitoring reef conditions.

  12. VISSR Atmospheric Sounder (VAS) simulation experiment for a severe storm environment

    NASA Technical Reports Server (NTRS)

    Chesters, D.; Uccellini, L. W.; Mostek, A.

    1981-01-01

    Radiance fields were simulated for prethunderstorm environments in Oklahoma to demonstrate three points: (1) significant moisture gradients can be seen directly in images of the VISSR Atmospheric Sounder (VAS) channels; (2) temperature and moisture profiles can be retrieved from VAS radiances with sufficient accuracy to be useful for mesoscale analysis of a severe storm environment; and (3) the quality of VAS mesoscale soundings improves with conditioning by local weather statistics. The results represent the optimum retrievability of mesoscale information from VAS radiances without the use of ancillary data. The simulations suggest that VAS data will yield the best soundings when a human analyst classifies the scene, picks relatively clear areas for retrieval, and applies a "local" statistical data base to resolve the ambiguities of satellite observations in favor of the most probable atmospheric structure.

  13. Design and analysis of ultrasonic monaural audio guiding device for the visually impaired.

    PubMed

    Kim, Keonwook; Kim, Hyunjai; Yun, Gihun; Kim, Myungsoo

    2009-01-01

    A novel Audio Guiding Device (AGD) based on ultrasound, named SonicID, has been developed to help visually impaired users localize points of interest. SonicID requires an infrastructure of transmitters that broadcast location information over an ultrasonic carrier. A user wearing an ultrasonic headset receives this information at an amplitude that varies with the user's location and orientation, owing to the propagation characteristics of ultrasound and the modulation method. This paper proposes a monaural headset form factor for SonicID, which is less intrusive in the beneficiary's daily life than the previous version, which occupied both ears. Experimental results comparing SonicID, Bluetooth, and audible sound show that SonicID achieves localization performance comparable to audible sound while remaining silent to bystanders.

  14. The Relative Contribution of Interaural Time and Magnitude Cues to Dynamic Sound Localization

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    This paper presents preliminary data from a study examining the relative contribution of interaural time differences (ITDs) and interaural level differences (ILDs) to the localization of virtual sound sources both with and without head motion. The listeners' task was to estimate the apparent direction and distance of virtual sources (broadband noise) presented over headphones. Stimuli were synthesized from minimum phase representations of nonindividualized directional transfer functions; binaural magnitude spectra were derived from the minimum phase estimates and ITDs were represented as a pure delay. During dynamic conditions, listeners were encouraged to move their heads; the position of the listener's head was tracked and the stimuli were synthesized in real time using a Convolvotron to simulate a stationary external sound source. ILDs and ITDs were either correctly or incorrectly correlated with head motion: (1) both ILDs and ITDs correctly correlated, (2) ILDs correct, ITD fixed at 0 deg azimuth and 0 deg elevation, (3) ITDs correct, ILDs fixed at 0 deg, 0 deg. Similar conditions were run for static conditions except that none of the cues changed with head motion. The data indicated that, compared to static conditions, head movements helped listeners to resolve confusions primarily when ILDs were correctly correlated, although a smaller effect was also seen for correct ITDs. Together with the results for static conditions, the data suggest that localization tends to be dominated by the cue that is most reliable or consistent, when reliability is defined by consistency over time as well as across frequency bands.
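
    The pure-delay representation of ITDs used in this synthesis can be illustrated with a standard spherical-head approximation. The Woodworth formula, head radius, and sample rate below are common textbook values, not parameters taken from the study, and the integer-sample delay is a simplification of proper fractional-delay filtering.

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Approximate interaural time difference (s) for a source at a given
    azimuth, using Woodworth's spherical-head model:
    ITD = (a / c) * (theta + sin(theta)), theta in radians."""
    theta = np.radians(azimuth_deg)
    return (head_radius / c) * (theta + np.sin(theta))

def apply_itd(signal, itd_s, fs=44100):
    """Render a pure-delay ITD by delaying the far-ear channel an integer
    number of samples; returns a (left, right) stereo pair."""
    delay = int(round(abs(itd_s) * fs))
    far = np.concatenate([np.zeros(delay), signal])[:len(signal)]
    return (signal, far) if itd_s >= 0 else (far, signal)

itd_90 = woodworth_itd(90.0)   # source directly to one side
left, right = apply_itd(np.random.default_rng(1).normal(size=1024), itd_90)
```

    At 90° azimuth this model gives roughly 0.66 ms, close to the commonly quoted maximum human ITD, which is why a fixed-at-0°/0° ITD cue (as in the study's mismatched conditions) removes essentially all of that lateral timing information.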

  15. Efficacy of Bone-Anchored Hearing Aids in Single-Sided Deafness: A Systematic Review.

    PubMed

    Kim, Gaeun; Ju, Hyun Mi; Lee, Sun Hee; Kim, Hee-Soon; Kwon, Jeong A; Seo, Young Joon

    2017-04-01

    Bone-anchored hearing aids (BAHAs) have been known to partially restore some of the functions lost in subjects with single-sided deafness (SSD). Our aims in this systematic review were to analyze the present capabilities of BAHAs in the context of SSD, and to evaluate the efficacy of BAHAs in improving speech recognition in noisy conditions, sound localization, and subjective outcomes. A systematic search was undertaken until August 2015 by two independent reviewers, with disagreements resolved by consensus. Among 286 references, we analyzed 14 studies that used both subjective and objective indicators to assess the outcomes of a total of 296 patients in the unaided and aided situations. Although there was "no benefit" of BAHA implantation for sound localization, BAHAs certainly improved subjects' speech discrimination in noisy circumstances. In the six studies that dealt with sound localization, no significant difference was found after the implantation. Twelve studies showed the benefits of BAHAs for speech discrimination in noise. Regarding subjective outcomes of using the prosthesis in patients with SSD (the abbreviated profile of hearing aid benefit [APHAB], the Glasgow hearing aid benefit profile [GHABP], etc.), we noticed an improvement in the quality of life. This systematic review has indicated that BAHAs may successfully rehabilitate patients with SSD by alleviating the hearing handicap to a certain degree, which could improve patients' quality of life. This report has presented additional evidence of effective auditory rehabilitation for SSD and will be helpful to clinicians counseling patients regarding treatment options for SSD.

  16. The invisible cues that guide king penguin chicks home: use of magnetic and acoustic cues during orientation and short-range navigation.

    PubMed

    Nesterova, Anna P; Chiffard, Jules; Couchoux, Charline; Bonadonna, Francesco

    2013-04-15

    King penguins (Aptenodytes patagonicus) live in large and densely populated colonies, where navigation can be challenging because of the presence of many conspecifics that could obstruct locally available cues. Our previous experiments demonstrated that visual cues were important but not essential for king penguin chicks' homing. The main objective of this study was to investigate the importance of non-visual cues, such as magnetic and acoustic cues, for chicks' orientation and short-range navigation. In a series of experiments, the chicks were individually displaced from the colony to an experimental arena where they were released under different conditions. In the magnetic experiments, a strong magnet was attached to the chicks' heads. Trials were conducted in daylight and at night to test the relative importance of visual and magnetic cues. Our results showed that when the geomagnetic field around the chicks was modified, their orientation in the arena and the overall ability to home was not affected. In a low sound experiment we limited the acoustic cues available to the chicks by putting ear pads over their ears, and in a loud sound experiment we provided additional acoustic cues by broadcasting colony sounds on the opposite side of the arena to the real colony. In the low sound experiment, the behavior of the chicks was not affected by the limited sound input. In the loud sound experiment, the chicks reacted strongly to the colony sound. These results suggest that king penguin chicks may use the sound of the colony while orienting towards their home.

  17. Contaminant distribution and accumulation in the surface sediments of Long Island Sound

    USGS Publications Warehouse

    Mecray, E.L.; Buchholtz ten Brink, Marilyn R.

    2000-01-01

    The distribution of contaminants in surface sediments has been measured and mapped as part of a U.S. Geological Survey study of the sediment quality and dynamics of Long Island Sound. Surface samples from 219 stations were analyzed for trace (Ag, Ba, Cd, Cr, Cu, Hg, Ni, Pb, V, Zn and Zr) and major (Al, Fe, Mn, Ca, and Ti) elements, grain size, and Clostridium perfringens spores. Principal Components Analysis was used to identify metals that may covary as a function of common sources or geochemistry. The metallic elements generally have higher concentrations in fine-grained deposits, and their transport and depositional patterns mimic those of small particles. Fine-grained particles are remobilized and transported from areas of high bottom energy and deposited in less dynamic regions of the Sound. Metal concentrations in bottom sediments are high in the western part of the Sound and low in the bottom-scoured regions of the eastern Sound. The sediment chemistry was compared to model results (Signell et al., 1998) and maps of sedimentary environments (Knebel et al., 1999) to better understand the processes responsible for contaminant distribution across the Sound. Metal concentrations were normalized to grain-size and the resulting ratios are uniform in the depositional basins of the Sound and show residual signals in the eastern end as well as in some local areas. The preferential transport of fine-grained material from regions of high bottom stress is probably the dominant factor controlling the metal concentrations in different regions of Long Island Sound. This physical redistribution has implications for environmental management in the region.

  18. Deltas, freshwater discharge, and waves along the Young Sound, NE Greenland.

    PubMed

    Kroon, Aart; Abermann, Jakob; Bendixen, Mette; Lund, Magnus; Sigsgaard, Charlotte; Skov, Kirstine; Hansen, Birger Ulf

    2017-02-01

    A wide range of delta morphologies occurs along the fringes of the Young Sound in Northeast Greenland due to spatial heterogeneity of delta regimes. In general, the delta regime is related to catchment and basin characteristics (geology, topography, drainage pattern, sediment availability, and bathymetry), fluvial discharges and associated sediment load, and processes by waves and currents. Main factors steering the Arctic fluvial discharges into the Young Sound are the snow and ice melt and precipitation in the catchment, and extreme events like glacier lake outburst floods (GLOFs). Waves are subordinate and only rework fringes of the delta plain forming sandy bars if the exposure and fetch are optimal. Spatial gradients and variability in driving forces (snow and precipitation) and catchment characteristics (amount of glacier coverage, sediment characteristics) as well as the strong and local influence of GLOFs in a specific catchment impede a simple upscaling of sediment fluxes from individual catchments toward a total sediment flux into the Young Sound.

  19. Does the Sound of a Barking Dog Activate its Corresponding Visual Form? An fMRI Investigation of Modality-Specific Semantic Access

    PubMed Central

    Reilly, Jamie; Garcia, Amanda; Binney, Richard J.

    2016-01-01

    Much remains to be learned about the neural architecture underlying word meaning. Fully distributed models of semantic memory predict that the sound of a barking dog will conjointly engage a network of distributed sensorimotor spokes. An alternative framework holds that modality-specific features additionally converge within transmodal hubs. Participants underwent functional MRI while covertly naming familiar objects versus newly learned novel objects from only one of their constituent semantic features (visual form, characteristic sound, or point-light motion representation). Relative to the novel object baseline, familiar concepts elicited greater activation within association regions specific to that presentation modality. Furthermore, visual form elicited activation within high-level auditory association cortex. Conversely, environmental sounds elicited activation in regions proximal to visual association cortex. Both conditions commonly engaged a putative hub region within lateral anterior temporal cortex. These results support hybrid semantic models in which local hubs and distributed spokes are dually engaged in service of semantic memory. PMID:27289210

  20. Emission Enhancement of Sound Emitters using an Acoustic Metamaterial Cavity

    PubMed Central

    Song, Kyungjun; Lee, Seong-Hyun; Kim, Kiwon; Hur, Shin; Kim, Jedo

    2014-01-01

    The emission enhancement of sound without electronic components has wide applications in a variety of remote systems, especially when highly miniaturized (smaller than wavelength) structures can be used. The recent advent of acoustic metamaterials has made it possible to realize this. In this study, we propose, design, and demonstrate a new class of acoustic cavity using a double-walled metamaterial structure operating at an extremely low frequency. Periodic zigzag elements which exhibit Fabry-Perot resonant behavior below the phononic band-gap are used to yield strong sound localization within the subwavelength gap, thus providing highly effective emission enhancement. We show, both theoretically and experimentally, 10 dB sound emission enhancement near 1060 Hz that corresponds to a wavelength approximately 30 times that of the periodicity. We also provide a general guideline for the independent tuning of the quality factor and effective volume of acoustic metamaterials. This approach shows the flexibility of our design in the efficient control of the enhancement rate. PMID:24584552

  1. Sound source localization on an axial fan at different operating points

    NASA Astrophysics Data System (ADS)

    Zenger, Florian J.; Herold, Gert; Becker, Stefan; Sarradj, Ennes

    2016-08-01

    A generic fan with unskewed fan blades is investigated using a microphone array method. The relative motion of the fan with respect to the stationary microphone array is compensated by interpolating the microphone data to a virtual rotating array with the same rotational speed as the fan. Hence, beamforming algorithms with deconvolution, in this case CLEAN-SC, could be applied. Sound maps and integrated spectra of sub-components are evaluated for five operating points. At selected frequency bands, the presented method yields sound maps featuring a clear circular source pattern corresponding to the nine fan blades. Depending on the adjusted operating point, sound sources are located on the leading or trailing edges of the fan blades. Integrated spectra show that in most cases leading edge noise is dominant for the low-frequency part and trailing edge noise for the high-frequency part. The shift from leading to trailing edge noise is strongly dependent on the operating point and frequency range considered.
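
    Conventional delay-and-sum beamforming, the starting point on which deconvolution methods such as CLEAN-SC build, can be sketched for a stationary array. The array geometry, source position, and tone below are invented for illustration, and the virtual-rotating-array interpolation step of the study is omitted.

```python
import numpy as np

c = 343.0          # speed of sound, m/s
fs = 48000         # sample rate, Hz
f0 = 2000.0        # tonal source frequency, Hz

# Hypothetical line array of 8 microphones and a single point source
mics = np.stack([np.linspace(-0.35, 0.35, 8),
                 np.zeros(8), np.zeros(8)], axis=1)
src = np.array([0.1, 0.0, 1.0])

t = np.arange(2048) / fs
delays = np.linalg.norm(mics - src, axis=1) / c
signals = np.sin(2 * np.pi * f0 * (t[None, :] - delays[:, None]))

def das_power(focus, mics, signals, fs, c):
    """Delay-and-sum: advance each channel by its propagation delay to the
    focus point and measure the power of the coherent sum."""
    d = np.linalg.norm(mics - focus, axis=1) / c
    shifts = np.round(d * fs).astype(int)
    n = signals.shape[1] - shifts.max()
    summed = sum(signals[m, shifts[m]:shifts[m] + n]
                 for m in range(len(mics)))
    return np.mean(summed ** 2)

# Scan focus points along a line at the source depth; the power map
# (a 1-D "sound map") should peak near the true source at x = 0.1 m
xs = np.linspace(-0.5, 0.5, 41)
powers = [das_power(np.array([x, 0.0, 1.0]), mics, signals, fs, c)
          for x in xs]
x_peak = xs[int(np.argmax(powers))]
```

    Deconvolution such as CLEAN-SC then iteratively removes the array's point-spread function from a map like `powers`, sharpening the broad delay-and-sum main lobe into the clear per-blade source pattern reported above.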

  2. Spiking Models for Level-Invariant Encoding

    PubMed Central

    Brette, Romain

    2012-01-01

    Levels of ecological sounds vary over several orders of magnitude, but the firing rate and membrane potential of a neuron are much more limited in range. In binaural neurons of the barn owl, tuning to interaural delays is independent of level differences. Yet a monaural neuron with a fixed threshold should fire earlier in response to louder sounds, which would disrupt the tuning of these neurons. How could spike timing be independent of input level? Here I derive theoretical conditions for a spiking model to be insensitive to input level. The key property is a dynamic change in spike threshold. I then show how level invariance can be physiologically implemented, with specific ionic channel properties. It appears that these ingredients are indeed present in monaural neurons of the sound localization pathway of birds and mammals. PMID:22291634
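
    The key property identified above, a dynamically changing spike threshold, can be demonstrated with a minimal leaky-integrator sketch. The model below (step input, Euler integration, arbitrary time constants) is a simplification invented for illustration, not the paper's model: when the threshold is itself a linear functional of the input, membrane and threshold scale together with level and the first crossing time is level-invariant, whereas a fixed threshold makes louder inputs spike earlier.

```python
def first_spike_latency(amplitude, adaptive=True, dt=1e-5, t_end=0.1,
                        tau_v=0.010, tau_th=0.001, gain=0.8,
                        fixed_theta=0.5):
    """Leaky membrane v driven by a step input of the given amplitude.
    adaptive=True: threshold theta is a faster leaky integral of the same
    input, so v and theta both scale linearly with level and the crossing
    time is level-invariant. adaptive=False: fixed threshold."""
    v, theta = 0.0, 0.0
    for i in range(1, int(t_end / dt) + 1):
        v += dt / tau_v * (amplitude - v)
        if adaptive:
            theta += dt / tau_th * (gain * amplitude - theta)
        else:
            theta = fixed_theta
        if v >= theta:
            return i * dt   # first spike time, seconds
    return None

t_soft = first_spike_latency(1.0)
t_loud = first_spike_latency(10.0)               # 20 dB louder input
t_soft_fix = first_spike_latency(1.0, adaptive=False)
t_loud_fix = first_spike_latency(10.0, adaptive=False)
```

    With the adaptive threshold, `t_soft` and `t_loud` coincide despite the tenfold level difference, while the fixed-threshold variant fires an order of magnitude earlier for the loud input, which is the timing disruption the paper's mechanism avoids.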

  3. Locally resonant sonic materials

    PubMed

    Liu; Zhang; Mao; Zhu; Yang; Chan; Sheng

    2000-09-08

    We have fabricated sonic crystals, based on the idea of localized resonant structures, that exhibit spectral gaps with a lattice constant two orders of magnitude smaller than the relevant wavelength. Disordered composites made from such localized resonant structures behave as a material with effective negative elastic constants and a total wave reflector within certain tunable sonic frequency ranges. A 2-centimeter slab of this composite material is shown to break the conventional mass-density law of sound transmission by one or more orders of magnitude at 400 hertz.
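
    For context, the "mass-density law" that these composites are said to break is the standard normal-incidence mass law for a limp panel of surface density $m$ (a textbook acoustics result, stated here as background rather than taken from the paper):

```latex
\mathrm{TL} \;=\; 10\log_{10}\!\left[\,1+\left(\frac{\pi f m}{\rho c}\right)^{2}\right]
\;\approx\; 20\log_{10}\!\left(\frac{\pi f m}{\rho c}\right),
```

    where $f$ is frequency and $\rho c$ is the characteristic impedance of air. Transmission loss thus rises only about 6 dB per doubling of mass or frequency, which is why a 2-centimeter slab attenuating low-frequency sound by one or more orders of magnitude beyond this prediction at 400 Hz is remarkable.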

  4. High intensity tone generation by axisymmetric ring cavities on training projectiles

    NASA Technical Reports Server (NTRS)

    Parthasarathy, S. P.; Cho, Y. I.; Back, L. H.

    1984-01-01

    An experimental investigation has been carried out on the production of high-intensity tones by axisymmetric ring cavities. Maximum sound production occurs during a double resonance at Strouhal numbers that depend only on the local flow velocity, independent of cavity location. Sound pressure levels of about 115 dB at a distance of 1 meter can be generated by axisymmetric ring cavities on projectiles moving at a relatively low flight speed of 65 m/s. Frequencies in the audible range up to several kilohertz can be generated aeroacoustically.
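
    The Strouhal scaling mentioned above ties tone frequency to flow speed through St = f·L/U. The Strouhal number and cavity dimension below are assumed round values for illustration, not the study's measured ones; only the 65 m/s flight speed comes from the abstract.

```python
def cavity_tone_frequency(strouhal, flow_speed, cavity_length):
    """Tone frequency (Hz) from the Strouhal relation St = f * L / U,
    i.e. f = St * U / L."""
    return strouhal * flow_speed / cavity_length

# Assumed St ~ 0.7 and a 2 cm cavity at the quoted 65 m/s flight speed
f = cavity_tone_frequency(strouhal=0.7, flow_speed=65.0, cavity_length=0.02)
```

    This lands in the low-kilohertz audible range, consistent with the frequencies reported for the training projectiles.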

  5. Sound propagation in a duct of periodic wall structure. [numerical analysis

    NASA Technical Reports Server (NTRS)

    Kurze, U.

    1978-01-01

    A boundary condition, which accounts for the coupling in the sections behind the duct boundary, is given for a sound-absorbing duct with a periodically structured wall lining and regular partition walls. The sound field in the duct is suitably described by the method of differences. For locally reacting walls, this yields an explicit approximate solution for the propagation constant. Coupling may also be accounted for by the method of differences in a transparent manner. Numerical results agree with measurements and yield information that has technical applications.

  6. Boundary-layer receptivity of sound with roughness

    NASA Technical Reports Server (NTRS)

    Saric, William S.; Hoos, Jon A.; Radeztsky, Ronald H.

    1991-01-01

    An experimental study of receptivity was carried out using an acoustical disturbance in the freestream. The receptivity was enhanced by using a uniform two-dimensional roughness strip (tape). The roughness strip generated the local adjustment in the flow needed to couple the long-wavelength sound wave with the short-wavelength T-S wave. The method proved to be highly sensitive, with slight changes in the forcing frequency or in the height of the 2D roughness element having a strong effect on the amplitude of the observed T-S wave.

  7. Untangling the roles of wind, run-off and tides in Prince William Sound

    NASA Astrophysics Data System (ADS)

    Colas, François; Wang, Xiaochun; Capet, Xavier; Chao, Yi; McWilliams, James C.

    2013-07-01

    Prince William Sound (PWS) oceanic circulation is driven by a combination of local wind, large run-off and strong tides. Using a regional oceanic model of the Gulf of Alaska, adequately resolving the mean circulation and mesoscale eddies, we configure a series of three nested domains. The inner domain zooms in on Prince William Sound with a 1-km horizontal grid resolution. We analyze a set of four experiments with different combinations of run-off, wind and tides to demonstrate the relative influence of these forcing on the central Sound mean circulation cell and its seasonal variability. The mean circulation in the central PWS region is generally characterized by a cyclonic cell. When forced only by the wind, the circulation is cyclonic in winter and fall and strongly anticyclonic in summer. The addition of freshwater run-off greatly enhances the eddy kinetic energy in PWS partly through near-surface baroclinic instabilities. This leads to a much more intermittent circulation in the central Sound, with the presence of intense small-scale turbulence and a disappearance of the summer wind-forced anticyclonic cell. The addition of tides reduces the turbulence intensity (relatively to the experiment with run-off only), particularly in the central Sound. The generation of turbulent motions by baroclinic processes is lowered by tidal mixing and by modification of the exchange at Hinchinbrook Entrance. Tides have an overall stabilizing effect on the central Sound circulation. Tidal rectification currents help maintain a mean cyclonic circulation throughout the year.

  8. COBE video news

    NASA Astrophysics Data System (ADS)

    1989-10-01

    This videotape was produced for hand-out to both local and national broadcast media as a prelude to the launch of the Cosmic Background Explorer. The tape consists of short clips with multi-channel sound to facilitate news media editing.

  9. Factors regulating early life history dispersal of Atlantic cod (Gadus morhua) from coastal Newfoundland.

    PubMed

    Stanley, Ryan R E; deYoung, Brad; Snelgrove, Paul V R; Gregory, Robert S

    2013-01-01

    To understand coastal dispersal dynamics of Atlantic cod (Gadus morhua), we examined spatiotemporal egg and larval abundance patterns in coastal Newfoundland. In recent decades, Smith Sound, Trinity Bay has supported the largest known overwintering spawning aggregation of Atlantic cod in the region. We estimated spawning and dispersal characteristics for the Smith Sound-Trinity Bay system by fitting ichthyoplankton abundance data to environmentally driven, simplified box models. Results show protracted spawning, with sharply increased egg production in early July, and limited dispersal from the Sound. The model for the entire spawning season indicates egg export from Smith Sound is 13% per day with a net mortality of 27% per day. Eggs and larvae are consistently found in western Trinity Bay with little advection from the system. These patterns mirror particle-tracking models that suggest residence times of 10-20 days, and circulation models indicating local gyres in Trinity Bay that act in concert with upwelling dynamics to retain eggs and larvae. Our results are among the first quantitative dispersal estimates from Smith Sound, linking this spawning stock to the adjacent coastal waters. These results illustrate the biophysical interplay regulating dispersal and connectivity originating from inshore spawning in the coastal northwest Atlantic.

  10. The influence of underwater data transmission sounds on the displacement behaviour of captive harbour seals (Phoca vitulina).

    PubMed

    Kastelein, Ronald A; van der Heul, Sander; Verboom, Willem C; Triesscheijn, Rob J V; Jennings, Nancy V

    2006-02-01

    To prevent grounding of ships and collisions between ships in shallow coastal waters, an underwater data collection and communication network (ACME) using underwater sounds to encode and transmit data is currently under development. Marine mammals might be affected by ACME sounds since they may use sound of a similar frequency (around 12 kHz) for communication, orientation, and prey location. If marine mammals tend to avoid the vicinity of the acoustic transmitters, they may be kept away from ecologically important areas by ACME sounds. One marine mammal species that may be affected in the North Sea is the harbour seal (Phoca vitulina). No information is available on the effects of ACME-like sounds on harbour seals, so this study was carried out as part of an environmental impact assessment program. Nine captive harbour seals were subjected to four sound types, three of which may be used in the underwater acoustic data communication network. The effect of each sound was judged by comparing the animals' location in a pool during test periods to that during baseline periods, during which no sound was produced. Each of the four sounds could be made into a deterrent by increasing its amplitude. The seals reacted by swimming away from the sound source. The sound pressure level (SPL) at the acoustic discomfort threshold was established for each of the four sounds. The acoustic discomfort threshold is defined as the boundary between the areas that the animals generally occupied during the transmission of the sounds and the areas that they generally did not enter during transmission. The SPLs at the acoustic discomfort thresholds were similar for each of the sounds (107 dB re 1 microPa). Based on this discomfort threshold SPL, discomfort zones at sea for several source levels (130-180 dB re 1 microPa) of the sounds were calculated, using a guideline sound propagation model for shallow water. 
The discomfort zone is defined as the area around a sound source that harbour seals are expected to avoid. The definition of the discomfort zone is based on behavioural discomfort, and does not necessarily coincide with the physical discomfort zone. Based on these results, source levels can be selected that have an acceptable effect on harbour seals in particular areas. The discomfort zone of a communication sound depends on the sound, the source level, and the propagation characteristics of the area in which the sound system is operational. The source level of the communication system should be adapted to each area (taking into account the width of a sea arm, the local sound propagation, and the importance of an area to the affected species). The discomfort zone should not coincide with ecologically important areas (for instance resting, breeding, suckling, and feeding areas), or routes between these areas.
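
    The discomfort-zone calculation described above can be sketched with a simple spreading law. The 107 dB re 1 µPa threshold comes from the study; the transmission-loss form TL(r) = k·log10(r) with k = 15 (intermediate between cylindrical and spherical spreading) is an illustrative assumption standing in for the guideline shallow-water propagation model:

```python
import math

# Hypothetical sketch: radius of the behavioural discomfort zone around a
# transmitter, assuming TL(r) = k * log10(r). The 107 dB re 1 uPa threshold is
# from the study; the spreading coefficient k = 15 is an assumption here.
def discomfort_radius(source_level_db, threshold_db=107.0, spreading_k=15.0):
    """Distance (m) at which the received SPL falls to the discomfort threshold."""
    excess_db = source_level_db - threshold_db
    return 10.0 ** (excess_db / spreading_k)

# A source level within the study's 130-180 dB re 1 uPa range:
radius_at_150 = discomfort_radius(150.0)  # roughly 700 m under these assumptions
```

    The exponential dependence on the excess level is why the choice of source level per area matters so much: each additional 15 dB of source level multiplies the discomfort radius by ten under this spreading law.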

  11. Moving Sound Source Localization Based on Sequential Subspace Estimation in Actual Room Environments

    NASA Astrophysics Data System (ADS)

    Tsuji, Daisuke; Suyama, Kenji

    This paper presents a novel method for moving sound source localization and its performance evaluation in actual room environments. The method is based on MUSIC (MUltiple SIgnal Classification), one of the highest-resolution localization methods. MUSIC requires computing the eigenvectors of the correlation matrix, which is often computationally expensive. For a moving source this becomes a crucial drawback, because the estimation must be repeated at every observation time. Moreover, since the characteristics of the correlation matrix vary due to spatio-temporal non-stationarity, the matrix must be estimated from only a few observed samples, which degrades estimation accuracy. In this paper, PAST (Projection Approximation Subspace Tracking) is applied to sequentially estimate the eigenvectors spanning the signal subspace. PAST requires no eigen-decomposition and therefore reduces the computational cost. Several experimental results in actual room environments demonstrate the superior performance of the proposed method.
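
    The PAST recursion the paper relies on can be illustrated for the simplest case of a rank-1 signal subspace: a single basis vector tracks the dominant eigenvector of the snapshot correlation matrix with no explicit eigen-decomposition. This is a hedged sketch of the standard PAST update (Yang, 1995), not the paper's implementation; the array size, forgetting factor, and single static source are illustrative assumptions:

```python
import math
import random

# Sketch of PAST for a rank-1 subspace: w tracks the dominant eigenvector of
# the snapshot correlation matrix without eigen-decomposition. Array size,
# forgetting factor, source and noise levels are illustrative assumptions.
def past_update(w, p, x, beta=0.97):
    """One PAST step: w is the tracked basis vector, p the scalar inverse-power state."""
    y = sum(wi * xi for wi, xi in zip(w, x))              # project snapshot onto w
    g = p * y / (beta + p * y * y)                        # adaptation gain
    p = (p - g * p * y) / beta                            # update inverse-power state
    w = [wi + (xi - wi * y) * g for wi, xi in zip(w, x)]  # error-driven basis update
    return w, p

random.seed(0)
m = 4
a = [1.0 / math.sqrt(m)] * m                  # steering vector of one static source
w = [random.gauss(0.0, 1.0) for _ in range(m)]
p = 1.0
for _ in range(500):
    s = 5.0 * random.gauss(0.0, 1.0)          # source amplitude, strong vs. noise
    x = [ai * s + 0.1 * random.gauss(0.0, 1.0) for ai in a]
    w, p = past_update(w, p, x)

# Alignment of the tracked vector with the true steering direction (1.0 = perfect).
alignment = abs(sum(ai * wi for ai, wi in zip(a, w))) / math.sqrt(sum(wi * wi for wi in w))
```

    Each step costs O(m) here (O(mr) for an r-dimensional subspace), which is what makes tracking feasible at every observation time for a moving source.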

  12. Estrogen and hearing from a clinical point of view; characteristics of auditory function in women with Turner syndrome.

    PubMed

    Hederstierna, Christina; Hultcrantz, Malou; Rosenhall, Ulf

    2009-06-01

    Turner syndrome is a chromosomal aberration affecting 1:2000 newborn girls, in which all or part of one X chromosome is absent. This leads to ovarian dysgenesis and little or no endogenous estrogen production. These women have, among many other syndromal features, a high occurrence of ear and hearing problems, and neurocognitive dysfunctions, including reduced visual-spatial abilities; it is assumed that estrogen deficiency is at least partially responsible for these problems. In this study, 30 Turner women aged 40-67, with mild to moderate hearing loss, performed a battery of hearing tests aimed at localizing the lesion causing the sensorineural hearing impairment and assessing central auditory function, primarily sound localization. The results of TEOAE, ABR and speech recognition scores in noise were all indicative of cochlear dysfunction as the cause of the sensorineural impairment. Phase audiometry, a test for sound localization, showed mild disturbances in the Turner women compared to the reference group, suggesting that auditory-spatial dysfunction is another facet of the recognized neurocognitive phenotype in Turner women.

  13. An open real-time tele-stethoscopy system.

    PubMed

    Foche-Perez, Ignacio; Ramirez-Payba, Rodolfo; Hirigoyen-Emparanza, German; Balducci-Gonzalez, Fernando; Simo-Reigadas, Francisco-Javier; Seoane-Pascual, Joaquin; Corral-Peñafiel, Jaime; Martinez-Fernandez, Andres

    2012-08-23

    Acute respiratory infections are the leading cause of childhood mortality. The lack of physicians in rural areas of developing countries makes their correct diagnosis and treatment difficult. The staff of rural health facilities (health-care technicians) may not be qualified to distinguish respiratory diseases by auscultation. For this reason, the goal of this project is the development of a tele-stethoscopy system that allows a physician to receive real-time cardio-respiratory sounds from a remote auscultation, as well as video images showing where the technician is placing the stethoscope on the patient's body. A real-time wireless stethoscopy system was designed. The initial requirements were: 1) The system must send audio and video synchronously over IP networks, not requiring an Internet connection; 2) It must preserve the quality of cardiorespiratory sounds, allowing the binaural pieces and the chestpiece of standard stethoscopes to be fitted; and 3) Cardiorespiratory sounds should be recordable at both sides of the communication. In order to verify the diagnostic capacity of the system, a clinical validation with eight specialists has been designed. In a preliminary test, twelve patients were auscultated by all the physicians using the tele-stethoscopy system, versus a local auscultation using a traditional stethoscope. The system must allow listening to cardiac (systolic and diastolic murmurs, gallop sound, arrhythmias) and respiratory (rhonchi, rales and crepitations, wheeze, diminished and bronchial breath sounds, pleural friction rub) sounds. The design, development and initial validation of the real-time wireless tele-stethoscopy system are described in detail. The system was conceived from scratch as open-source, low-cost and designed in such a way that many universities and small local companies in developing countries may manufacture it.
Only free open-source software has been used in order to minimize manufacturing costs and look for alliances to support its improvement and adaptation. The microcontroller firmware code, the computer software code and the PCB schematics are available for free download in a Subversion repository hosted on SourceForge. It has been shown that real-time tele-stethoscopy, together with a videoconference system that allows a remote specialist to oversee the auscultation, may be a very helpful tool in rural areas of developing countries.

  14. Effects of user training with electronically-modulated sound transmission hearing protectors and the open ear on horizontal localization ability.

    PubMed

    Casali, John G; Robinette, Martin B

    2015-02-01

    To determine if training with electronically-modulated hearing protection (EMHP) and the open ear results in auditory learning on a horizontal localization task. Baseline localization testing was conducted in three listening conditions (open-ear, in-the-ear (ITE) EMHP, and over-the-ear (OTE) EMHP). Participants then wore either an ITE or OTE EMHP for 12 one-hour training sessions, conducted almost daily. After training was complete, participants again underwent localization testing in all three listening conditions. A computer with a custom software and hardware interface presented localization sounds and collected participant responses. Twelve participants were recruited from the student population at Virginia Tech. Audiometric requirements were thresholds of 35 dBHL or better at 500, 1000, and 2000 Hz bilaterally, and 55 dBHL at 4000 Hz in at least one ear. Pre-training localization performance with an ITE or OTE EMHP was worse than open-ear performance. After training with any given listening condition, including open-ear, performance in that listening condition improved, in part from a practice effect. However, post-training localization performance showed near equal performance between the open-ear and training EMHP. Auditory learning occurred for the training EMHP, but not for the non-training EMHP; that is, there was no significant training crossover effect between the ITE and the OTE devices. It is evident from this study that auditory learning (improved horizontal localization performance) occurred with the EMHP for which training was performed. However, performance improvements found with the training EMHP were not realized in the non-training EMHP. Furthermore, localization performance in the open-ear condition also benefited from training on the task.

  15. Low-frequency sound propagation modeling over a locally-reacting boundary using the parabolic approximation

    NASA Technical Reports Server (NTRS)

    Robertson, J. S.; Siegman, W. L.; Jacobson, M. J.

    1989-01-01

    There is substantial interest in the analytical and numerical modeling of low-frequency, long-range atmospheric acoustic propagation. Ray-based models, because of frequency limitations, do not always give an adequate prediction of quantities such as sound pressure or intensity levels. However, the parabolic approximation method, widely used in ocean acoustics, and often more accurate than ray models for lower frequencies of interest, can be applied to acoustic propagation in the atmosphere. Modifications of an existing implicit finite-difference implementation for computing solutions to the parabolic approximation are discussed. A locally-reacting boundary is used together with a one-parameter impedance model. Intensity calculations are performed for a number of flow resistivity values in both quiescent and windy atmospheres. Variations in the value of this parameter are shown to have substantial effects on the spatial variation of the acoustic signal.

  16. Directional hearing by linear summation of binaural inputs at the medial superior olive

    PubMed Central

    van der Heijden, Marcel; Lorteije, Jeannette A. M.; Plauška, Andrius; Roberts, Michael T.; Golding, Nace L.; Borst, J. Gerard G.

    2013-01-01

    Neurons in the medial superior olive (MSO) enable sound localization by their remarkable sensitivity to submillisecond interaural time differences (ITDs). Each MSO neuron has its own “best ITD” to which it responds optimally. A difference in physical path length of the excitatory inputs from both ears cannot fully account for the ITD tuning of MSO neurons. As a result, it is still debated how these inputs interact and whether the segregation of inputs to opposite dendrites, well-timed synaptic inhibition, or asymmetries in synaptic potentials or cellular morphology further optimize coincidence detection or ITD tuning. Using in vivo whole-cell and juxtacellular recordings, we show here that ITD tuning of MSO neurons is determined by the timing of their excitatory inputs. The inputs from both ears sum linearly, whereas spike probability depends nonlinearly on the size of synaptic inputs. This simple coincidence detection scheme thus makes accurate sound localization possible. PMID:23764292
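
    The coincidence-detection scheme described above (linear summation of the binaural inputs, with a nonlinear dependence of spiking on the summed amplitude) can be sketched with a toy model. The Gaussian EPSP shape, its width, and the threshold below are illustrative assumptions, not fitted MSO parameters:

```python
import math

# Toy sketch of binaural coincidence detection: EPSPs from the two ears sum
# linearly, and spiking depends nonlinearly (here a hard threshold) on the
# summed amplitude. EPSP shape, width and threshold are illustrative only.
def epsp(t, arrival, width=0.3):
    """Gaussian-shaped EPSP; times in milliseconds."""
    return math.exp(-((t - arrival) / width) ** 2)

def peak_response(itd_ms):
    """Peak of the linear sum of the two monaural EPSPs for a given ITD."""
    ts = [i * 0.01 - 2.0 for i in range(400)]
    return max(epsp(t, 0.0) + epsp(t, itd_ms) for t in ts)

def spikes(itd_ms, threshold=1.6):
    """Nonlinear stage: fire only if the summed input crosses threshold."""
    return peak_response(itd_ms) >= threshold

# Near-coincident inputs cross threshold; widely separated ones do not,
# which is what produces ITD tuning from a linear summation stage.
```

    The tuning width in this sketch is set entirely by the EPSP width and the threshold, which mirrors the paper's point that linear summation plus a nonlinear spike stage suffices for ITD sensitivity.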

  17. Scaling of membrane-type locally resonant acoustic metamaterial arrays.

    PubMed

    Naify, Christina J; Chang, Chia-Ming; McKnight, Geoffrey; Nutt, Steven R

    2012-10-01

    Metamaterials have emerged as promising solutions for manipulation of sound waves in a variety of applications. Locally resonant acoustic materials (LRAM) achieve transmission loss (TL) up to 500% above acoustic mass law predictions at peak TL frequencies with minimal added mass, making them appealing for weight-critical applications such as aerospace structures. In this study, potential issues associated with scale-up of the structure are addressed. TL of single-celled and multi-celled LRAM was measured using an impedance tube setup with systematic variation in geometric parameters to understand the effects of each parameter on acoustic response. Finite element analysis was performed to predict TL as a function of frequency for structures with varying complexity, including stacked structures and multi-celled arrays. Dynamic response of the array structures under discrete frequency excitation was investigated using laser vibrometry to verify negative dynamic mass behavior.

  18. Double dissociation of 'what' and 'where' processing in auditory cortex.

    PubMed

    Lomber, Stephen G; Malhotra, Shveta

    2008-05-01

    Studies of cortical connections or neuronal function in different cerebral areas support the hypothesis that parallel cortical processing streams, similar to those identified in visual cortex, may exist in the auditory system. However, this model has not yet been behaviorally tested. We used reversible cooling deactivation to investigate whether the individual regions in cat nonprimary auditory cortex that are responsible for processing the pattern of an acoustic stimulus or localizing a sound in space could be doubly dissociated in the same animal. We found that bilateral deactivation of the posterior auditory field resulted in deficits in a sound-localization task, whereas bilateral deactivation of the anterior auditory field resulted in deficits in a pattern-discrimination task, but not vice versa. These findings support a model of cortical organization that proposes that identifying an acoustic stimulus ('what') and its spatial location ('where') are processed in separate streams in auditory cortex.

  19. Azimuthal sound localization in the European starling (Sturnus vulgaris): I. Physical binaural cues.

    PubMed

    Klump, G M; Larsen, O N

    1992-02-01

    The physical measurements reported here test whether the European starling (Sturnus vulgaris) evaluates the azimuth direction of a sound source with a peripheral auditory system composed of two acoustically coupled pressure-difference receivers (1) or of two decoupled pressure receivers (2). A directional pattern of sound intensity in the free-field was measured at the entrance of the auditory meatus using a probe microphone, and at the tympanum using laser vibrometry. The maximum differences in the sound-pressure level measured with the microphone between various speaker positions and the frontal speaker position were 2.4 dB at 1 and 2 kHz, 7.3 dB at 4 kHz, 9.2 dB at 6 kHz, and 10.9 dB at 8 kHz. The directional amplitude pattern measured by laser vibrometry did not differ from that measured with the microphone. Neither did the directional pattern of travel times to the ear. Measurements of the amplitude and phase transfer function of the starling's interaural pathway using a closed sound system were in accord with the results of the free-field measurements. In conclusion, although some sound transmission via the interaural canal occurred, the present experiments support the hypothesis 2 above that the starling's peripheral auditory system is best described as consisting of two functionally decoupled pressure receivers.

  20. Development of on-off spiking in superior paraolivary nucleus neurons of the mouse

    PubMed Central

    Felix, Richard A.; Vonderschen, Katrin; Berrebi, Albert S.

    2013-01-01

    The superior paraolivary nucleus (SPON) is a prominent cell group in the auditory brain stem that has been increasingly implicated in representing temporal sound structure. Although SPON neurons selectively respond to acoustic signals important for sound periodicity, the underlying physiological specializations enabling these responses are poorly understood. We used in vitro and in vivo recordings to investigate how SPON neurons develop intrinsic cellular properties that make them well suited for encoding temporal sound features. In addition to their hallmark rebound spiking at the stimulus offset, SPON neurons were characterized by spiking patterns termed onset, adapting, and burst in response to depolarizing stimuli in vitro. Cells with burst spiking had some morphological differences compared with other SPON neurons and were localized to the dorsolateral region of the nucleus. Both membrane and spiking properties underwent strong developmental regulation, becoming more temporally precise with age for both onset and offset spiking. Single-unit recordings obtained in young mice demonstrated that SPON neurons respond with temporally precise onset spiking upon tone stimulation in vivo, in addition to the typical offset spiking. Taken together, the results of the present study demonstrate that SPON neurons develop sharp on-off spiking, which may confer sensitivity to sound amplitude modulations or abrupt sound transients. These findings are consistent with the proposed involvement of the SPON in the processing of temporal sound structure, relevant for encoding communication cues. PMID:23515791

  1. Sounds and source levels from bowhead whales off Pt. Barrow, Alaska.

    PubMed

    Cummings, W C; Holliday, D V

    1987-09-01

    Sounds were recorded from bowhead whales migrating past Pt. Barrow, AK, to the Canadian Beaufort Sea. They mainly consisted of various low-frequency (25- to 900-Hz) moans and well-defined sound sequences organized into "song" (20-5000 Hz) recorded with our 2.46-km hydrophone array suspended from the ice. Songs were composed of up to 20 repeated phrases (mean, 10) which lasted up to 146 s (mean, 66.3). Several bowhead whales often were within acoustic range of the array at once, but usually only one sang at a time. Vocalizations exhibited diurnal peaks of occurrence (0600-0800, 1600-1800 h). Sounds which were located in the horizontal plane had peak source spectrum levels as follows: 44 moans, 129-178 dB re: 1 microPa at 1 m (median, 159); 3 garglelike utterances, 152, 155, and 169 dB; 33 songs, 158-189 dB (median, 177), all presumably from different whales. Based on ambient noise levels, measured total propagation loss, and whale sound source levels, our detection of whale sounds was theoretically noise-limited beyond 2.5 km (moans) and beyond 10.7 km (songs), a model supported by actual localizations. This study showed that over much of the shallow Arctic and sub-Arctic waters, underwater communications of the bowhead whale would be limited to much shorter ranges than for other large whales in lower latitude, deep-water regions.
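
    The noise-limited detection model above can be sketched as a passive sonar budget: a call is detectable while source level minus transmission loss exceeds the ambient-noise level. The spreading coefficient and absorption term below are illustrative assumptions, not the propagation loss actually measured off Pt. Barrow:

```python
import math

# Hedged sketch of noise-limited detection range: detectable while
# SL - TL(r) > NL. Spreading coefficient and absorption are assumptions.
def transmission_loss(r_m, k=15.0, alpha_db_per_km=0.05):
    """Geometric spreading plus linear absorption, in dB, for range r_m in metres."""
    return k * math.log10(r_m) + alpha_db_per_km * r_m / 1000.0

def detection_range(source_db, noise_db, lo=1.0, hi=1.0e6):
    """Bisect for the range (m) where the received level drops to the noise level."""
    excess = lambda r: source_db - transmission_loss(r) - noise_db
    if excess(hi) > 0.0:
        return hi          # detectable everywhere in the search window
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if excess(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Louder songs (median 177 dB) carry farther than moans (median 159 dB),
# consistent with the study's larger noise-limited range for songs.
```

    Under any monotone transmission-loss model, the ~18 dB difference between median song and moan levels translates directly into the longer noise-limited range the study reports for songs.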

  2. Low-momentum dynamic structure factor of a strongly interacting Fermi gas at finite temperature: A two-fluid hydrodynamic description

    NASA Astrophysics Data System (ADS)

    Hu, Hui; Zou, Peng; Liu, Xia-Ji

    2018-02-01

    We provide a description of the dynamic structure factor of a homogeneous unitary Fermi gas at low momentum and low frequency, based on the dissipative two-fluid hydrodynamic theory. The viscous relaxation time is estimated and is used to determine the regime where the hydrodynamic theory is applicable and to understand the nature of sound waves in the density response near the superfluid phase transition. Collecting the best available knowledge of the shear viscosity and thermal conductivity, we calculate the various diffusion coefficients and obtain the damping widths of the (first and second) sounds. We find that the damping width of the first sound is greatly enhanced across the superfluid transition, and very close to the transition the second sound might be resolved in the density response for transferred momenta up to half the Fermi momentum. Our work is motivated by the recent measurement of the local dynamic structure factor at low momentum at Swinburne University of Technology and the ongoing experiment on sound attenuation of a homogeneous unitary Fermi gas at the Massachusetts Institute of Technology. We discuss how the measurement of the velocity and damping width of the sound modes in the low-momentum dynamic structure factor may lead to an improved determination of the universal superfluid density, shear viscosity, and thermal conductivity of a unitary Fermi gas.

  3. Active control of noise on the source side of a partition to increase its sound isolation

    NASA Astrophysics Data System (ADS)

    Tarabini, Marco; Roure, Alain; Pinhede, Cedric

    2009-03-01

    This paper describes a local active noise control system that virtually increases the sound isolation of a dividing wall by means of a secondary source array. With the proposed method, sound pressure on the source side of the partition is reduced using an array of loudspeakers that generates destructive interference on the wall surface, where an array of error microphones is placed. The reduction of sound pressure on the incident side of the wall is expected to decrease the sound radiated into the contiguous room. The method efficiency was experimentally verified by checking the insertion loss of the active noise control system; in order to investigate the possibility of using a large number of actuators, a decentralized FXLMS control algorithm was used. Active control performances and stability were tested with different array configurations, loudspeaker directivities and enclosure characteristics (sound source position and absorption coefficient). The influence of all these parameters was investigated with the factorial design of experiments. The main outcome of the experimental campaign was that the insertion loss produced by the secondary source array, in the 50-300 Hz frequency range, was close to 10 dB. In addition, the analysis of variance showed that the active noise control performance can be optimized with a proper choice of the directional characteristics of the secondary source and the distance between loudspeakers and error microphones.

  4. Robust sound speed estimation for ultrasound-based hepatic steatosis assessment

    NASA Astrophysics Data System (ADS)

    Imbault, Marion; Faccinetto, Alex; Osmanski, Bruno-Félix; Tissier, Antoine; Deffieux, Thomas; Gennisson, Jean-Luc; Vilgrain, Valérie; Tanter, Mickaël

    2017-05-01

    Hepatic steatosis is a common condition, the prevalence of which is increasing along with non-alcoholic fatty liver disease (NAFLD). Currently, the most accurate noninvasive imaging method for diagnosing and quantifying hepatic steatosis is MRI, which estimates the proton-density fat fraction (PDFF) as a measure of fractional fat content. However, MRI suffers several limitations including cost, contra-indications and poor availability. Although conventional ultrasound is widely used by radiologists for hepatic steatosis assessment, it remains qualitative and operator dependent. Interestingly, the speed of sound within soft tissues is known to vary slightly from muscle (1.575 mm·µs⁻¹) to fat (1.450 mm·µs⁻¹). Building upon this fact, steatosis could affect liver sound speed as the fat content increases. The main objectives of this study are to propose a robust method for sound speed estimation (SSE) locally in the liver and to assess its accuracy for steatosis detection and staging. This technique was first validated on two phantoms, and SSE was assessed with a precision of 0.006 and 0.003 mm·µs⁻¹ respectively for the two phantoms. Then a preliminary clinical trial (N = 17 patients) was performed. SSE results were found to be highly correlated with MRI proton-density fat fraction (R² = 0.69) and biopsy (AUROC = 0.952) results. This new method based on the assessment of spatio-temporal properties of the local speckle noise for SSE provides an efficient way to diagnose and stage hepatic steatosis.

  5. Annoyance, detection and recognition of wind turbine noise.

    PubMed

    Van Renterghem, Timothy; Bockstael, Annelies; De Weirt, Valentine; Botteldooren, Dick

    2013-07-01

    Annoyance, recognition and detection of noise from a single wind turbine were studied by means of a two-stage listening experiment with 50 participants with normal hearing abilities. In-situ recordings made at close distance from a 1.8-MW wind turbine operating at 22 rpm were mixed with road traffic noise, and processed to simulate indoor sound pressure levels at LAeq 40 dBA. In a first part, where people were unaware of the true purpose of the experiment, samples were played during a quiet leisure activity. Under these conditions, pure wind turbine noise gave very similar annoyance ratings as unmixed highway noise at the same equivalent level, while annoyance by local road traffic noise was significantly higher. In a second experiment, listeners were asked to identify the sample containing wind turbine noise in a paired comparison test. The detection limit of wind turbine noise in the presence of highway noise was estimated to be as low as a signal-to-noise ratio of -23 dBA. When mixed with local road traffic, such a detection limit could not be determined. These findings suggest that noticing the sound could be an important aspect of wind turbine noise annoyance at the low equivalent levels typically observed indoors in practice. Participants who easily recognized wind-turbine(-like) sounds could detect wind turbine noise better when immersed in road traffic noise. Recognition of wind turbine sounds is also linked to higher annoyance. Awareness of the source is therefore a relevant aspect of wind turbine noise perception, which is consistent with previous research. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Remote transmission of live endoscopy over the Internet: Report from the 87th Congress of the Japan Gastroenterological Endoscopy Society.

    PubMed

    Shimizu, Shuji; Ohtsuka, Takao; Takahata, Shunichi; Nagai, Eishi; Nakashima, Naoki; Tanaka, Masao

    2016-01-01

    Live demonstration of endoscopy is one of the most attractive and useful methods for education and is often organized locally in hospitals. However, problems have been apparent in terms of cost, preparation, and potential risks to patients. Our aim was to evaluate a new approach to live endoscopy whereby remote hospitals are connected by the Internet for live endoscopic demonstrations. Live endoscopy was transmitted to the Congress of the Japan Gastroenterological Endoscopic Society by 13 domestic and international hospitals. Patients with upper and lower gastrointestinal diseases and with pancreatobiliary disorders were the subjects of a live demonstration. Questionnaires were distributed to the audience and were sent to the demonstrators. Questions concerned the quality of transmitted images and sound, cost, preparations, programs, preference of style, and adverse events. Of the audience, 91.2% (249/273) answered favorably regarding the transmitted image quality and 93.8% (259/276) regarding the sound quality. All demonstrators answered favorably regarding image quality and 93% (13/14) regarding sound quality. Preparations were completed without any outsourcing at 11 sites (79%) and were evaluated as 'very easy' or 'easy' at all but one site (92.3%). Preparation cost was judged as 'very cheap' or 'cheap' at 12 sites (86%). Live endoscopy connecting multiple international centers was satisfactory in image and sound quality for both audience and demonstrators, with easy and inexpensive preparation. The remote transmission of live endoscopy from demonstrators' own hospitals was preferred to the conventional style of locally organized live endoscopy. © 2015 The Authors Digestive Endoscopy © 2015 Japan Gastroenterological Endoscopy Society.

  7. A hybrid finite element - statistical energy analysis approach to robust sound transmission modeling

    NASA Astrophysics Data System (ADS)

    Reynders, Edwin; Langley, Robin S.; Dijckmans, Arne; Vermeir, Gerrit

    2014-09-01

    When considering the sound transmission through a wall in between two rooms, in an important part of the audio frequency range, the local response of the rooms is highly sensitive to uncertainty in spatial variations in geometry, material properties and boundary conditions, which have a wave scattering effect, while the local response of the wall is rather insensitive to such uncertainty. For this mid-frequency range, a computationally efficient modeling strategy is adopted that accounts for this uncertainty. The partitioning wall is modeled deterministically, e.g. with finite elements. The rooms are modeled in a very efficient, nonparametric stochastic way, as in statistical energy analysis. All components are coupled by means of a rigorous power balance. This hybrid strategy is extended so that the mean and variance of the sound transmission loss can be computed as well as the transition frequency that loosely marks the boundary between low- and high-frequency behavior of a vibro-acoustic component. The method is first validated in a simulation study, and then applied for predicting the airborne sound insulation of a series of partition walls of increasing complexity: a thin plastic plate, a wall consisting of gypsum blocks, a thicker masonry wall and a double glazing. It is found that the uncertainty caused by random scattering is important except at very high frequencies, where the modal overlap of the rooms is very high. The results are compared with laboratory measurements, and both are found to agree within the prediction uncertainty in the considered frequency range.

  8. Influence of sound source location on the behavior and physiology of the precedence effect in cats.

    PubMed

    Dent, Micheal L; Tollin, Daniel J; Yin, Tom C T

    2009-08-01

    Psychophysical experiments on the precedence effect (PE) in cats have shown that they localize pairs of auditory stimuli presented from different locations in space based on the spatial position of the stimuli and the interstimulus delay (ISD) between the stimuli in a manner similar to humans. Cats exhibit localization dominance for pairs of transient stimuli with |ISDs| from approximately 0.4 to 10 ms, summing localization for |ISDs| < 0.4 ms and breakdown of fusion for |ISDs| > 10 ms, which is the approximate echo threshold. The neural correlates to the PE have been described in both anesthetized and unanesthetized animals at many levels from auditory nerve to cortex. Single-unit recordings from the inferior colliculus (IC) and auditory cortex of cats demonstrate that neurons respond to both lead and lag sounds at ISDs above behavioral echo thresholds, but the response to the lag is reduced at shorter ISDs, consistent with localization dominance. Here the influence of the relative locations of the leading and lagging sources on the PE was measured behaviorally in a psychophysical task and physiologically in the IC of awake behaving cats. At all configurations of lead-lag stimulus locations, the cats behaviorally exhibited summing localization, localization dominance, and breakdown of fusion. Recordings from the IC of awake behaving cats show neural responses paralleling behavioral measurements. Both behavioral and physiological results suggest systematically shorter echo thresholds when stimuli are further apart in space.

  9. Influence of Sound Source Location on the Behavior and Physiology of the Precedence Effect in Cats

    PubMed Central

    Dent, Micheal L.; Tollin, Daniel J.; Yin, Tom C. T.

    2009-01-01

    Psychophysical experiments on the precedence effect (PE) in cats have shown that they localize pairs of auditory stimuli presented from different locations in space based on the spatial position of the stimuli and the interstimulus delay (ISD) between the stimuli in a manner similar to humans. Cats exhibit localization dominance for pairs of transient stimuli with |ISDs| from ∼0.4 to 10 ms, summing localization for |ISDs| < 0.4 ms and breakdown of fusion for |ISDs| > 10 ms, which is the approximate echo threshold. The neural correlates to the PE have been described in both anesthetized and unanesthetized animals at many levels from auditory nerve to cortex. Single-unit recordings from the inferior colliculus (IC) and auditory cortex of cats demonstrate that neurons respond to both lead and lag sounds at ISDs above behavioral echo thresholds, but the response to the lag is reduced at shorter ISDs, consistent with localization dominance. Here the influence of the relative locations of the leading and lagging sources on the PE was measured behaviorally in a psychophysical task and physiologically in the IC of awake behaving cats. At all configurations of lead-lag stimulus locations, the cats behaviorally exhibited summing localization, localization dominance, and breakdown of fusion. Recordings from the IC of awake behaving cats show neural responses paralleling behavioral measurements. Both behavioral and physiological results suggest systematically shorter echo thresholds when stimuli are further apart in space. PMID:19439668

  10. Bimodal benefits on objective and subjective outcomes for adult cochlear implant users.

    PubMed

    Heo, Ji-Hye; Lee, Jae-Hee; Lee, Won-Sang

    2013-09-01

    Given that only a few studies have focused on bimodal benefits on objective and subjective outcomes and emphasized the importance of individual data, the present study aimed to measure the bimodal benefits on objective and subjective outcomes for adult cochlear implant users. Fourteen listeners with bimodal devices were tested on localization and recognition abilities using environmental sounds, 1-talker, and 2-talker speech materials. The localization ability was measured through an 8-loudspeaker array. For the recognition measures, listeners were asked to repeat the sentences or name the environmental sounds they heard. As a subjective questionnaire, three domains of the Korean version of the Speech, Spatial and Qualities of Hearing scale (K-SSQ) were used to explore any relationships between objective and subjective outcomes. Based on the group-mean data, bimodal hearing enhanced both localization and recognition regardless of test material. However, the inter- and intra-subject variability appeared to be large across test materials for both localization and recognition abilities. Correlation analyses revealed that the relationships between the objective outcomes and the subjective self-reports with bimodal devices were not always consistent. Overall, this study supports significant bimodal advantages on localization and recognition measures, yet the large individual variability in bimodal benefits should be considered carefully in clinical assessment as well as counseling. The discrepant relations between objective and subjective results suggest that bimodal benefits in traditional localization or recognition measures might not necessarily correspond to self-reported subjective advantages in everyday listening environments.

  11. Reconstructing spectral cues for sound localization from responses to rippled noise stimuli.

    PubMed

    Van Opstal, A John; Vliegen, Joyce; Van Esch, Thamar

    2017-01-01

    Human sound localization in the mid-sagittal plane (elevation) relies on an analysis of the idiosyncratic spectral-shape cues provided by the head and pinnae. However, because the actual free-field stimulus spectrum is a priori unknown to the auditory system, the problem of extracting the elevation angle from the sensory spectrum is ill-posed. Here we test different spectral localization models by eliciting head movements toward broad-band noise stimuli with randomly shaped, rippled amplitude spectra emanating from a speaker at a fixed location, while varying the ripple bandwidth between 1.5 and 5.0 cycles/octave. Six listeners participated in the experiments. From the distributions of localization responses toward the individual stimuli, we estimated the listeners' spectral-shape cues underlying their elevation percepts by applying maximum-likelihood estimation. The reconstructed spectral cues proved invariant to the considerable variation in ripple bandwidth, and for each listener they bore a remarkable resemblance to the idiosyncratic head-related transfer functions (HRTFs). These results are not in line with models that rely on the detection of a single peak or notch in the amplitude spectrum, nor with a local analysis of first- and second-order spectral derivatives. Instead, our data support a model in which the auditory system performs a cross-correlation between the sensory input at the eardrum-auditory nerve and stored representations of HRTF spectral shapes to extract the perceived elevation angle.
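
    The template-matching model the authors favor can be caricatured in a few lines: correlate the sensory spectrum against stored HRTF-like spectral shapes and report the best-matching elevation. The templates and "sensed" spectrum below are toy numbers, not real HRTFs, and the correlation metric is an assumption for illustration.

```python
def correlate(a, b):
    """Pearson correlation between two spectra (values in dB)."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def perceived_elevation(sensory_spectrum, hrtf_templates):
    """Cross-correlation model sketched in the abstract: the elevation whose
    stored spectral shape best matches the sensory spectrum wins.
    `hrtf_templates` maps elevation (deg) -> spectrum (toy data)."""
    return max(hrtf_templates,
               key=lambda el: correlate(sensory_spectrum, hrtf_templates[el]))

# Three hypothetical elevation templates over 4 frequency bands:
templates = {-30: [0, -6, -2, -8], 0: [0, -2, -6, -3], 30: [-4, 0, -1, -7]}
sensed = [0.1, -2.2, -5.8, -3.1]  # noisy version of the 0-degree template
```

    Correlation against the full template shape, rather than the location of a single peak or notch, is what distinguishes this model from the alternatives the abstract rules out.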

  12. Lung Sounds in Children before and after Respiratory Physical Therapy for Right Middle Lobe Atelectasis

    PubMed Central

    Adachi, Satoshi; Nakano, Hiroshi; Odajima, Hiroshi; Motomura, Chikako; Yoshioka, Yukiko

    2016-01-01

    Background Chest auscultation is commonly performed during respiratory physical therapy (RPT). However, the changes in breath sounds in children with atelectasis have not been previously reported. The aim of this study was to clarify the characteristics of breath sounds in children with atelectasis using acoustic measurements. Method The subjects of this study were 13 children with right middle lobe atelectasis (3–7 years) and 14 healthy children (3–7 years). Lung sounds at the bilateral fifth intercostal spaces on the midclavicular line were recorded. The right-to-left ratio (R/L ratio) and the expiration-to-inspiration ratio (E/I ratio) of the sound pressure of the breath sounds were calculated separately for three octave bands (100–200 Hz, 200–400 Hz, and 400–800 Hz). These data were then compared between the atelectasis and control groups. In addition, the same measurements were repeated after treatment, including RPT, in the atelectasis group. Result Before treatment, the inspiratory R/L ratios for all the frequency bands were significantly lower in the atelectasis group than in the control group, and the E/I ratios for all the frequency bands were significantly higher in the atelectasis group than in the control group. After treatment, the inspiratory R/L ratios of the atelectasis group did not increase significantly, but the E/I ratios decreased for all the frequency bands and became similar to those of the control group. Conclusion Breath sound attenuation in the atelectatic area remained unchanged even after radiographic resolution, suggesting a continued decrease in local ventilation. On the other hand, the elevated E/I ratio for the atelectatic area was normalized after treatment. Therefore, the differences between inspiratory and expiratory sound intensities may be an important marker of atelectatic improvement in children. PMID:27611433

  13. Geographic variation and acoustic structure of the underwater vocalization of harbor seal (Phoca vitulina) in Norway, Sweden and Scotland.

    PubMed

    Bjørgesaeter, Anders; Ugland, Karl Inne; Bjørge, Arne

    2004-10-01

    The male harbor seal (Phoca vitulina) produces broadband nonharmonic vocalizations underwater during the breeding season. In total, 120 vocalizations from six colonies were analyzed to provide a description of the acoustic structure and for the presence of geographic variation. The complex harbor seal vocalizations may be described by how the frequency bandwidth varies over time. An algorithm that identifies the boundaries between noise and signal from digital spectrograms was developed in order to extract a frequency bandwidth contour. The contours were used as inputs for multivariate analysis. The vocalizations' sound types (e.g., pulsed sound, whistle, and broadband nonharmonic sound) were determined by comparing the vocalizations' spectrographic representations with sound waves produced by known sound sources. Comparison between colonies revealed differences in the frequency contours, as well as some geographical variation in use of sound types. The vocal differences may reflect a limited exchange of individuals between the six colonies due to long distances and strong site fidelity. Geographically different vocal repertoires have potential for identifying discrete breeding colonies of harbor seals, but more information is needed on the nature and extent of early movements of young, the degree of learning, and the stability of the vocal repertoire. A characteristic feature of many vocalizations in this study was the presence of tonal-like introductory phrases that fit into the categories pulsed sound and whistles. The functions of these phrases are unknown but may be important in distance perception and localization of the sound source. The potential behavioral consequences of the observed variability may be indicative of adaptations to different environmental properties influencing determination of distance and direction and plausible different male mating tactics.

  14. Lung Sounds in Children before and after Respiratory Physical Therapy for Right Middle Lobe Atelectasis.

    PubMed

    Adachi, Satoshi; Nakano, Hiroshi; Odajima, Hiroshi; Motomura, Chikako; Yoshioka, Yukiko

    2016-01-01

    Chest auscultation is commonly performed during respiratory physical therapy (RPT). However, the changes in breath sounds in children with atelectasis have not been previously reported. The aim of this study was to clarify the characteristics of breath sounds in children with atelectasis using acoustic measurements. The subjects of this study were 13 children with right middle lobe atelectasis (3-7 years) and 14 healthy children (3-7 years). Lung sounds at the bilateral fifth intercostal spaces on the midclavicular line were recorded. The right-to-left ratio (R/L ratio) and the expiration-to-inspiration ratio (E/I ratio) of the sound pressure of the breath sounds were calculated separately for three octave bands (100-200 Hz, 200-400 Hz, and 400-800 Hz). These data were then compared between the atelectasis and control groups. In addition, the same measurements were repeated after treatment, including RPT, in the atelectasis group. Before treatment, the inspiratory R/L ratios for all the frequency bands were significantly lower in the atelectasis group than in the control group, and the E/I ratios for all the frequency bands were significantly higher in the atelectasis group than in the control group. After treatment, the inspiratory R/L ratios of the atelectasis group did not increase significantly, but the E/I ratios decreased for all the frequency bands and became similar to those of the control group. Breath sound attenuation in the atelectatic area remained unchanged even after radiographic resolution, suggesting a continued decrease in local ventilation. On the other hand, the elevated E/I ratio for the atelectatic area was normalized after treatment. Therefore, the differences between inspiratory and expiratory sound intensities may be an important marker of atelectatic improvement in children.
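
    Once the recordings have been octave-band filtered, the R/L and E/I metrics are simple level ratios. A minimal sketch that assumes the band filtering has already been done; the sample values are invented and much shorter than real recordings.

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a signal segment."""
    return (sum(x * x for x in samples) / len(samples)) ** 0.5

def level_ratio_db(numerator, denominator):
    """Level difference between two band-limited signals, in dB."""
    return 20.0 * math.log10(rms(numerator) / rms(denominator))

# Hypothetical band-filtered inspiratory segments (arbitrary units); a real
# analysis would first filter into the 100-200, 200-400, and 400-800 Hz bands.
right = [0.2, -0.1, 0.15, -0.2]
left = [0.4, -0.3, 0.35, -0.4]
rl_ratio = level_ratio_db(right, left)  # negative: right side attenuated
```

    The E/I ratio is the same computation applied to expiratory versus inspiratory segments of the same channel.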

  15. Cortical Activation Patterns Evoked by Temporally Asymmetric Sounds and Their Modulation by Learning

    PubMed Central

    Horikawa, Junsei

    2017-01-01

    When complex sounds are reversed in time, the original and reversed versions are perceived differently in spectral and temporal dimensions despite their identical duration and long-term spectrum-power profiles. Spatiotemporal activation patterns evoked by temporally asymmetric sound pairs demonstrate how the temporal envelope determines the readout of the spectrum. We examined the patterns of activation evoked by a temporally asymmetric sound pair in the primary auditory field (AI) of anesthetized guinea pigs and determined how discrimination training modified these patterns. Optical imaging using a voltage-sensitive dye revealed that a forward ramped-down natural sound (F) consistently evoked much stronger responses than its time-reversed, ramped-up counterpart (revF). The spatiotemporal maximum peak (maxP) of F-evoked activation was always greater than that of revF-evoked activation, and these maxPs were significantly separated within the AI. Although discrimination training did not affect the absolute magnitude of these maxPs, the revF-to-F ratio of the activation peaks calculated at the location where hemispheres were maximally activated (i.e., F-evoked maxP) was significantly smaller in the trained group. The F-evoked activation propagated across the AI along the temporal axis to the ventroanterior belt field (VA), with the local activation peak within the VA being significantly larger in the trained than in the naïve group. These results suggest that the innate network is more responsive to natural sounds of ramped-down envelopes than to their time-reversed, unnatural counterparts. The VA belt field activation might play an important role in emotional learning of sounds through its connections with the amygdala. PMID:28451640

  16. Hybrid cochlear implantation: quality of life, quality of hearing, and working performance compared to patients with conventional unilateral or bilateral cochlear implantation.

    PubMed

    Härkönen, Kati; Kivekäs, Ilkka; Kotti, Voitto; Sivonen, Ville; Vasama, Juha-Pekka

    2017-10-01

    The objective of the present study is to evaluate the effect of hybrid cochlear implantation (hCI) on quality of life (QoL), quality of hearing (QoH), and working performance in adult patients, and to compare the long-term results of patients with hCI to those of patients with conventional unilateral cochlear implantation (CI), bilateral CI, and single-sided deafness (SSD) with CI. Sound localization accuracy and speech-in-noise performance were also compared between these groups. Eight patients with high-frequency sensorineural hearing loss of unknown etiology were selected for the study. Patients with hCI had better long-term speech perception in noise than uni- or bilateral CI patients, but the difference was not statistically significant. The sound localization accuracy was equal in the hCI, bilateral CI, and SSD patients. QoH was statistically significantly better in bilateral CI patients than in the others. In hCI patients, residual hearing was preserved in all patients after the surgery. During the 3.6-year follow-up, the mean hearing threshold at 125-500 Hz decreased on average by 15 dB HL in the implanted ear. QoL and working performance improved significantly in all CI patients. Hearing outcomes with hCI are comparable to the results of bilateral CI or CI with SSD, but hearing in noise and sound localization are statistically significantly better than with unilateral CI. Interestingly, the impact of CI on QoL, QoH, and working performance was similar in all groups.

  17. An Alexandrium spp. Cyst Record from Sequim Bay, Washington State, USA, and its Relation to Past Climate Variability.

    PubMed

    Feifel, Kirsten M; Moore, Stephanie K; Horner, Rita A

    2012-06-01

    Since the 1970s, Puget Sound, Washington State, USA, has experienced an increase in detections of paralytic shellfish toxins (PSTs) in shellfish due to blooms of the harmful dinoflagellate Alexandrium. Natural patterns of climate variability, such as the Pacific Decadal Oscillation (PDO), and changes in local environmental factors, such as sea surface temperature (SST) and air temperature, have been linked to the observed increase in PSTs. However, the lack of observations of PSTs in shellfish prior to the 1950s has inhibited statistical assessments of longer-term trends in climate and environmental conditions on Alexandrium blooms. After a bloom, Alexandrium cells can enter a dormant cyst stage, which settles on the seafloor and then becomes entrained into the sedimentary record. In this study, we created a record of Alexandrium spp. cysts from a sediment core obtained from Sequim Bay, Puget Sound. Cyst abundances ranged from 0 to 400 cysts·cm⁻³ and were detected down-core to a depth of 100 cm, indicating that Alexandrium has been present in Sequim Bay since at least the late 1800s. The cyst record allowed us to statistically examine relationships with available environmental parameters over the past century. Local air temperature and sea surface temperature were positively and significantly correlated with cyst abundances from the late 1800s to 2005; no significant relationship was found between PDO and cyst abundances. This finding suggests that local environmental variations more strongly influence Alexandrium population dynamics in Puget Sound when compared to large-scale changes. © 2012 Phycological Society of America.

  18. Noise source separation of diesel engine by combining binaural sound localization method and blind source separation method

    NASA Astrophysics Data System (ADS)

    Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei

    2017-11-01

    In order to separate and identify the combustion noise and the piston slap noise of a diesel engine, a noise source separation and identification method that combines a binaural sound localization method and a blind source separation method is proposed. Because a diesel engine has many complex noise sources, during the noise and vibration test a lead covering method was applied to the engine to isolate interfering noise from the No. 1-5 cylinders; only the No. 6 cylinder parts were left bare. Two microphones that simulated the human ears were utilized to measure the radiated noise signals 1 m away from the diesel engine. First, a binaural sound localization method was adopted to separate the noise sources that are in different places. Then, for noise sources that are in the same place, a blind source separation method was utilized to further separate and identify the noise sources. Finally, a coherence function method, continuous wavelet time-frequency analysis, and prior knowledge of the diesel engine were combined to further identify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston slap noise of a diesel engine. The frequencies of the combustion noise and the piston slap noise are concentrated at 4350 Hz and 1988 Hz, respectively. Compared with the blind source separation method alone, the proposed method has superior separation and identification effects, and the separation results have fewer interference components from other noise.
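
    The binaural stage of such a method rests on estimating the delay between the two microphone signals and converting it to a direction. A toy sketch using brute-force cross-correlation and the standard far-field two-microphone geometry; the microphone spacing and signals below are invented, and the paper's actual localization stage is more elaborate.

```python
import math

def estimate_delay(left, right, fs):
    """Delay (s) of the right channel relative to the left, found by
    brute-force cross-correlation over all integer sample lags."""
    n = len(left)
    best_lag, best_score = 0, float("-inf")
    for lag in range(-(n - 1), n):
        score = sum(left[i] * right[i + lag]
                    for i in range(max(0, -lag), min(n, n - lag)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag / fs

def azimuth_from_itd(itd, mic_spacing_m, c=343.0):
    """Far-field direction of arrival (deg) from the inter-mic delay,
    using sin(theta) = c * itd / d; positive = toward the left mic."""
    return math.degrees(math.asin(max(-1.0, min(1.0, c * itd / mic_spacing_m))))

# Toy impulse arriving 2 samples later at the right microphone:
fs = 48000
left = [0, 0, 1, 0, 0, 0, 0, 0]
right = [0, 0, 0, 0, 1, 0, 0, 0]
```

    Real implementations use FFT-based (often phase-weighted) cross-correlation and sub-sample interpolation, but the lag-to-angle geometry is the same.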

  19. Tonotopic alterations in inhibitory input to the medial nucleus of the trapezoid body in a mouse model of Fragile X syndrome.

    PubMed

    McCullagh, Elizabeth A; Salcedo, Ernesto; Huntsman, Molly M; Klug, Achim

    2017-11-01

    Hyperexcitability and the imbalance of excitation/inhibition are among the leading causes of abnormal sensory processing in Fragile X syndrome (FXS). The precise timing and distribution of excitation and inhibition is crucial for auditory processing at the level of the auditory brainstem, which is responsible for sound localization ability. Sound localization is one of the sensory abilities disrupted by loss of the Fragile X Mental Retardation 1 (Fmr1) gene. Using triple immunofluorescence staining, we tested whether there were alterations in the number and size of presynaptic structures for the three primary neurotransmitters (glutamate, glycine, and GABA) in the auditory brainstem of Fmr1 knockout mice. We found decreases in either glycinergic or GABAergic inhibition to the medial nucleus of the trapezoid body (MNTB) specific to the tonotopic location within the nucleus. MNTB is one of the primary inhibitory nuclei in the auditory brainstem and participates in the sound localization process with fast and well-timed inhibition. Thus, a decrease in inhibitory afferents to MNTB neurons should lead to greater inhibitory output from this nucleus to its projection targets. In contrast, we did not see any other significant alterations in the balance of excitation/inhibition in any of the other auditory brainstem nuclei measured, suggesting that the alterations observed in the MNTB are both nucleus and frequency specific. We furthermore show that glycinergic inhibition may be an important contributor to imbalances in excitation and inhibition in FXS and that the auditory brainstem is a useful circuit for testing these imbalances. © 2017 Wiley Periodicals, Inc.

  20. Effect of eye position on saccades and neuronal responses to acoustic stimuli in the superior colliculus of the behaving cat.

    PubMed

    Populin, Luis C; Tollin, Daniel J; Yin, Tom C T

    2004-10-01

    We examined the motor error hypothesis of visual and auditory interaction in the superior colliculus (SC), first tested by Jay and Sparks in the monkey. We trained cats to direct their eyes to the location of acoustic sources and studied the effects of eye position on both the ability of cats to localize sounds and the auditory responses of SC neurons with the head restrained. Sound localization accuracy was generally not affected by initial eye position, i.e., accuracy was not proportionally affected by the deviation of the eyes from the primary position at the time of stimulus presentation, showing that eye position is taken into account when orienting to acoustic targets. The responses of most single SC neurons to acoustic stimuli in the intact cat were modulated by eye position in the direction consistent with the predictions of the "motor error" hypothesis, but the shift accounted for only two-thirds of the initial deviation of the eyes. However, when the average horizontal sound localization error, which was approximately 35% of the target amplitude, was taken into account, the magnitude of the horizontal shifts in the SC auditory receptive fields matched the observed behavior. The modulation by eye position was not due to concomitant movements of the external ears, as confirmed by recordings carried out after immobilizing the pinnae of one cat. However, the pattern of modulation after pinnae immobilization was inconsistent with the observations in the intact cat, suggesting that, in the intact animal, information about the position of the pinnae may be taken into account.

  1. Free-flight phonotaxis in a parasitoid fly: behavioural thresholds, relative attraction and susceptibility to noise

    NASA Astrophysics Data System (ADS)

    Ramsauer, N.; Robert, D.

    The phonotactic capacity of tachinid flies to acoustically detect and localize a sound source simulating their cricket host was investigated in a large flight room. Acoustic measurements were performed to estimate the actual stimulus delivered to the flies, revealing highly heterogeneous sound fields. When presented with a simulated cricket song in red or infrared light conditions, the flies readily flew to the sound source and landed on it. Behavioural phonotactic thresholds were established as a function of carrier frequency and were found to coincide well with the frequency of the host's natural song (4.5-5.2 kHz). Experiments revealed that the same range of frequencies is preferentially attractive to the free-flying flies, and that the reliability of signal detection in the presence of noise is best at behaviourally relevant frequencies.

  2. Beamforming transmission in IEEE 802.11ac under time-varying channels.

    PubMed

    Yu, Heejung; Kim, Taejoon

    2014-01-01

    The IEEE 802.11ac wireless local area network (WLAN) standard has adopted beamforming (BF) schemes to improve spectral efficiency and throughput with multiple antennas. To design the transmit beam, a channel sounding process to feed back channel state information (CSI) is required. Due to sounding overhead, throughput increases with the amount of transmit data under static channels. Under practical channel conditions with mobility, however, the mismatch between the transmit beam and the channel at transmission time causes performance loss when the transmission duration after channel sounding is too long. When the fading rate, payload size, and operating signal-to-noise ratio are given, the optimal transmission duration (i.e., packet length) can be determined to maximize throughput. The relationship between packet length and throughput is also investigated for single-user and multiuser BF modes.
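
    The trade-off described above (sounding overhead favors long packets, beam-channel mismatch favors short ones) can be sketched numerically. The exponential aging penalty, the coherence time, and all numbers below are illustrative assumptions, not the paper's analytical model.

```python
import math

def throughput(payload_bits, rate_bps, sounding_s, coherence_s):
    """Toy goodput model: a fixed sounding overhead is amortized over the
    packet, while an exponential 'channel aging' factor penalizes transmit
    durations that are long relative to the channel coherence time."""
    tx = payload_bits / rate_bps
    aging = math.exp(-tx / coherence_s)  # beam/channel mismatch penalty
    return payload_bits * aging / (sounding_s + tx)

def best_payload(rate_bps, sounding_s, coherence_s, candidates):
    """Packet length maximizing the model goodput over a candidate list."""
    return max(candidates,
               key=lambda L: throughput(L, rate_bps, sounding_s, coherence_s))

cands = [10_000 * k for k in range(1, 200)]
# Fast-fading channel: an interior optimum emerges (neither tiny nor huge packets).
L_opt = best_payload(100e6, 0.5e-3, 5e-3, cands)
```

    With the aging term effectively removed (coherence time made huge), the model reduces to pure overhead amortization and the longest packet always wins, matching the abstract's static-channel observation.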

  3. Beamforming Transmission in IEEE 802.11ac under Time-Varying Channels

    PubMed Central

    2014-01-01

    The IEEE 802.11ac wireless local area network (WLAN) standard has adopted beamforming (BF) schemes to improve spectral efficiency and throughput with multiple antennas. To design the transmit beam, a channel sounding process to feed back channel state information (CSI) is required. Due to sounding overhead, throughput increases with the amount of transmit data under static channels. Under practical channel conditions with mobility, however, the mismatch between the transmit beam and the channel at transmission time causes performance loss when the transmission duration after channel sounding is too long. When the fading rate, payload size, and operating signal-to-noise ratio are given, the optimal transmission duration (i.e., packet length) can be determined to maximize throughput. The relationship between packet length and throughput is also investigated for single-user and multiuser BF modes. PMID:25152927

  4. Computation of interaural time difference in the owl's coincidence detector neurons.

    PubMed

    Funabiki, Kazuo; Ashida, Go; Konishi, Masakazu

    2011-10-26

    Both the mammalian and avian auditory systems localize sound sources by computing the interaural time difference (ITD) with submillisecond accuracy. The neural circuits for this computation in birds consist of axonal delay lines and coincidence detector neurons. Here, we report the first in vivo intracellular recordings from coincidence detectors in the nucleus laminaris of barn owls. Binaural tonal stimuli induced sustained depolarizations (DC) and oscillating potentials whose waveforms reflected the stimulus. The amplitude of this sound analog potential (SAP) varied with ITD, whereas DC potentials did not. The amplitude of the SAP was correlated with firing rate in a linear fashion. Spike shape, synaptic noise, the amplitude of SAP, and responsiveness to current pulses differed between cells at different frequencies, suggesting an optimization strategy for sensing sound signals in neurons tuned to different frequencies.

  5. Global warming alters sound transmission: differential impact on the prey detection ability of echolocating bats

    PubMed Central

    Luo, Jinhong; Koselj, Klemen; Zsebők, Sándor; Siemers, Björn M.; Goerlitz, Holger R.

    2014-01-01

    Climate change impacts the biogeography and phenology of plants and animals, yet the underlying mechanisms are little known. Here, we present a functional link between rising temperature and the prey detection ability of echolocating bats. The maximum distance for echo-based prey detection is physically determined by sound attenuation. Attenuation is more pronounced for high-frequency sound, such as echolocation, and is a nonlinear function of both call frequency and ambient temperature. Hence, the prey detection ability, and thus possibly the foraging efficiency, of echolocating bats is susceptible to rising temperatures through climate change. Using present-day climate data and projected temperature rises, we modelled this effect for the entire range of bat call frequencies and climate zones around the globe. We show that depending on call frequency, the prey detection volume of bats will either decrease or increase: species calling above a crossover frequency will lose and species emitting lower frequencies will gain prey detection volume, with crossover frequency and magnitude depending on the local climatic conditions. Within local species assemblages, this may cause a change in community composition. Global warming can thus directly affect the prey detection ability of individual bats and indirectly their interspecific interactions with competitors and prey. PMID:24335559

  6. Global warming alters sound transmission: differential impact on the prey detection ability of echolocating bats.

    PubMed

    Luo, Jinhong; Koselj, Klemen; Zsebok, Sándor; Siemers, Björn M; Goerlitz, Holger R

    2014-02-06

    Climate change impacts the biogeography and phenology of plants and animals, yet the underlying mechanisms are little known. Here, we present a functional link between rising temperature and the prey detection ability of echolocating bats. The maximum distance for echo-based prey detection is physically determined by sound attenuation. Attenuation is more pronounced for high-frequency sound, such as echolocation, and is a nonlinear function of both call frequency and ambient temperature. Hence, the prey detection ability, and thus possibly the foraging efficiency, of echolocating bats is susceptible to rising temperatures through climate change. Using present-day climate data and projected temperature rises, we modelled this effect for the entire range of bat call frequencies and climate zones around the globe. We show that depending on call frequency, the prey detection volume of bats will either decrease or increase: species calling above a crossover frequency will lose and species emitting lower frequencies will gain prey detection volume, with crossover frequency and magnitude depending on the local climatic conditions. Within local species assemblages, this may cause a change in community composition. Global warming can thus directly affect the prey detection ability of individual bats and indirectly their interspecific interactions with competitors and prey.
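
    The mechanism above (detection distance set by frequency- and temperature-dependent attenuation) can be illustrated with a crude sonar-equation sketch. The two-way spreading-plus-absorption form is standard, but the source level, hearing threshold, and absorption coefficients below are invented for illustration; real atmospheric absorption follows ISO 9613-1 and varies nonlinearly with frequency, temperature, and humidity.

```python
import math

def echo_level(source_db, r_m, alpha_db_per_m):
    """Two-way sonar-equation sketch: spherical spreading plus linear
    atmospheric absorption, applied on the outgoing and returning paths."""
    return source_db - 2.0 * (20.0 * math.log10(r_m) + alpha_db_per_m * r_m)

def detection_range(source_db, threshold_db, alpha_db_per_m,
                    r_max=50.0, step=0.01):
    """Largest range (m) at which the echo still clears the threshold,
    found by a simple scan (echo level decreases monotonically with range)."""
    r, best = step, 0.0
    while r <= r_max:
        if echo_level(source_db, r, alpha_db_per_m) >= threshold_db:
            best = r
        r += step
    return best

# Absorption rises steeply with call frequency (and shifts with temperature),
# so a higher-frequency call reaches markedly less far (alphas hypothetical):
r_low = detection_range(120.0, 20.0, alpha_db_per_m=0.5)
r_high = detection_range(120.0, 20.0, alpha_db_per_m=2.0)
```

    Because absorption enters the exponent twice (out and back), even modest shifts in alpha with temperature translate into sizeable changes in detection volume, which scales roughly with the cube of the detection range.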

  7. Automated detection and localization of bowhead whale sounds in the presence of seismic airgun surveys.

    PubMed

    Thode, Aaron M; Kim, Katherine H; Blackwell, Susanna B; Greene, Charles R; Nations, Christopher S; McDonald, Trent L; Macrander, A Michael

    2012-05-01

    An automated procedure has been developed for detecting and localizing frequency-modulated bowhead whale sounds in the presence of seismic airgun surveys. The procedure was applied to four years of data, collected from over 30 directional autonomous recording packages deployed over a 280 km span of continental shelf in the Alaskan Beaufort Sea. The procedure has six sequential stages that begin by extracting 25-element feature vectors from spectrograms of potential call candidates. Two cascaded neural networks then classify some feature vectors as bowhead calls, and the procedure then matches calls between recorders to triangulate locations. To train the networks, manual analysts flagged 219,471 bowhead call examples from 2008 and 2009. Manual analyses were also used to identify 1.17 million transient signals that were not whale calls. The network output thresholds were adjusted to reject 20% of whale calls in the training data. Validation runs using 2007 and 2010 data found that the procedure missed 30%-40% of manually detected calls. Furthermore, 20%-40% of the sounds flagged as calls are not present in the manual analyses; however, these extra detections incorporate legitimate whale calls overlooked by human analysts. Both manual and automated methods produce similar spatial and temporal call distributions.
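
    The final triangulation stage reduces, in its simplest form, to intersecting bearing lines from two directional recorders. A toy planar sketch of that geometry; the coordinates and bearings are invented, and the actual pipeline matches calls across many recorders with error handling this sketch omits.

```python
import math

def triangulate(p1, bearing1_deg, p2, bearing2_deg):
    """Intersect two bearing lines from directional recorders (x east,
    y north; bearings clockwise from north). Returns the (x, y) fix,
    or None if the bearings are parallel."""
    d1 = (math.sin(math.radians(bearing1_deg)), math.cos(math.radians(bearing1_deg)))
    d2 = (math.sin(math.radians(bearing2_deg)), math.cos(math.radians(bearing2_deg)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom  # distance along the first bearing
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two recorders 10 km apart, both hearing the same call:
fix = triangulate((0.0, 0.0), 45.0, (10.0, 0.0), 315.0)
```

    With more than two recorders, a least-squares fit over all bearings replaces the pairwise intersection.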

  8. Large-scale Cortical Network Properties Predict Future Sound-to-Word Learning Success

    PubMed Central

    Sheppard, John Patrick; Wang, Ji-Ping; Wong, Patrick C. M.

    2013-01-01

    The human brain possesses a remarkable capacity to interpret and recall novel sounds as spoken language. These linguistic abilities arise from complex processing spanning a widely distributed cortical network and are characterized by marked individual variation. Recently, graph theoretical analysis has facilitated the exploration of how such aspects of large-scale brain functional organization may underlie cognitive performance. Brain functional networks are known to possess small-world topologies characterized by efficient global and local information transfer, but whether these properties relate to language learning abilities remains unknown. Here we applied graph theory to construct large-scale cortical functional networks from cerebral hemodynamic (fMRI) responses acquired during an auditory pitch discrimination task and found that such network properties were associated with participants’ future success in learning words of an artificial spoken language. Successful learners possessed networks with reduced local efficiency but increased global efficiency relative to less successful learners and had a more cost-efficient network organization. Regionally, successful and less successful learners exhibited differences in these network properties spanning bilateral prefrontal, parietal, and right temporal cortex, overlapping a core network of auditory language areas. These results suggest that efficient cortical network organization is associated with sound-to-word learning abilities among healthy, younger adults. PMID:22360625
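
Global and local efficiency have standard graph-theoretic definitions: the mean inverse shortest-path length over node pairs, and the same quantity averaged over each node's neighbourhood subgraph. A minimal pure-Python sketch for unweighted graphs; the study's analysis was on weighted fMRI networks, so this shows only the idea, not the authors' pipeline:

```python
from collections import deque

def bfs_distances(adj, src, nodes):
    """Hop distances from src, restricted to the node set `nodes`."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in nodes and v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_efficiency(adj, nodes=None):
    """Mean inverse shortest-path length over ordered node pairs."""
    nodes = set(adj) if nodes is None else set(nodes)
    if len(nodes) < 2:
        return 0.0
    total = 0.0
    for u in nodes:
        dist = bfs_distances(adj, u, nodes)
        total += sum(1.0 / d for v, d in dist.items() if v != u)
    return total / (len(nodes) * (len(nodes) - 1))

def local_efficiency(adj):
    """Mean, over nodes, of the efficiency of each neighbourhood subgraph."""
    return sum(global_efficiency(adj, adj[u]) for u in adj) / len(adj)
```

On a complete graph both measures equal 1; on a sparse ring the local efficiency collapses to 0. Contrasts of this kind between global and local transfer are what the study relates to learning success.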

  9. Energy- and wave-based beam-tracing prediction of room-acoustical parameters using different boundary conditions.

    PubMed

    Yousefzadeh, Behrooz; Hodgson, Murray

    2012-09-01

    A beam-tracing model was used to study the acoustical responses of three empty, rectangular rooms with different boundary conditions. The model is wave-based (accounting for sound phase) and can be applied to rooms with extended-reaction surfaces that are made of multiple layers of solid, fluid, or poroelastic materials; the acoustical properties of these surfaces are calculated using Biot theory. Three room-acoustical parameters were studied in various room configurations: sound strength, reverberation time, and RApid Speech Transmission Index. The main objective was to investigate the effects of modeling surfaces as either local or extended reaction on predicted values of these three parameters. Moreover, the significance of modeling interference effects was investigated, including the study of sound phase-change on surface reflection. Modeling surfaces as of local or extended reaction was found to be significant for surfaces consisting of multiple layers, specifically when one of the layers is air. For multilayers of solid materials with an air-cavity, this was most significant around their mass-air-mass resonance frequencies. Accounting for interference effects made significant changes in the predicted values of all parameters. Modeling phase change on reflection, on the other hand, was found to be relatively much less significant.

  10. Biologically inspired circuitry that mimics mammalian hearing

    NASA Astrophysics Data System (ADS)

    Hubbard, Allyn; Cohen, Howard; Karl, Christian; Freedman, David; Mountain, David; Ziph-Schatzberg, Leah; Nourzad Karl, Marianne; Kelsall, Sarah; Gore, Tyler; Pu, Yirong; Yang, Zibing; Xing, Xinyu; Deligeorges, Socrates

    2009-05-01

    We are developing low-power microcircuitry that implements classification and direction finding systems of very small size and small acoustic aperture. Our approach was inspired by the fact that small mammals are able to localize sounds even though their ears may be separated by as little as a centimeter. Gerbils, in particular, are good low-frequency localizers, which is a particularly difficult task, since a wavelength at 500 Hz is on the order of two feet. Given such signals, cross-correlation-based methods to determine direction fail badly in the presence of a small amount of noise, e.g., wind noise and noise clutter common to almost any realistic environment. Circuits are being developed using both analog and digital techniques, each of which processes signals in fundamentally the same way the peripheral auditory system of mammals processes sound. A filter bank represents the filtering done by the cochlea. The auditory nerve is implemented using a combination of an envelope detector, an automatic gain stage, and a unique one-bit A/D, which creates what amounts to a neural impulse. These impulses are used to extract pitch characteristics, which we use to classify sounds such as vehicles and small and large weaponry, from AK-47s to 155 mm cannon, including mortar launches and impacts. In addition to the pitchograms, we also use neural nets for classification.
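
The auditory-nerve stage described (envelope detector, gain stage, one-bit A/D emitting impulse-like events) can be approximated in a few lines. The rectifier, smoothing constant, and threshold below are illustrative choices, not parameters of the actual circuitry:

```python
def half_wave_envelope(signal, alpha=0.05):
    """Half-wave rectify, then smooth with a one-pole low-pass
    (leaky integrator) to obtain an amplitude envelope."""
    env, y = [], 0.0
    for x in signal:
        y += alpha * (max(x, 0.0) - y)
        env.append(y)
    return env

def spikes(env, threshold=0.1):
    """One-bit 'A/D': emit one impulse on each upward threshold
    crossing of the envelope."""
    out, above = [], False
    for e in env:
        fire = e >= threshold and not above
        out.append(1 if fire else 0)
        above = e >= threshold
    return out
```

Each sound burst produces one impulse at its onset; the timing of such impulses across channels of a filter bank is what downstream pitch extraction would operate on.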

  11. Binaural hearing in children using Gaussian enveloped and transposed tones.

    PubMed

    Ehlers, Erica; Kan, Alan; Winn, Matthew B; Stoelb, Corey; Litovsky, Ruth Y

    2016-04-01

    Children who use bilateral cochlear implants (BiCIs) show significantly poorer sound localization skills than their normal hearing (NH) peers. This difference has been attributed, in part, to the fact that cochlear implants (CIs) do not faithfully transmit interaural time differences (ITDs) and interaural level differences (ILDs), which are known to be important cues for sound localization. Interestingly, little is known about binaural sensitivity in NH children, in particular, with stimuli that constrain acoustic cues in a manner representative of CI processing. In order to better understand and evaluate binaural hearing in children with BiCIs, the authors first undertook a study on binaural sensitivity in NH children ages 8-10, and in adults. Experiments evaluated sound discrimination and lateralization using ITD and ILD cues, for stimuli with robust envelope cues, but poor representation of temporal fine structure. Stimuli were spondaic words, Gaussian-enveloped tone pulse trains (100 pulses per second), and transposed tones. Results showed that discrimination thresholds in children were adult-like (15-389 μs for ITDs and 0.5-6.0 dB for ILDs). However, lateralization based on the same binaural cues showed higher variability than seen in adults. Results are discussed in the context of factors that may be responsible for poor representation of binaural cues in bilaterally implanted children.
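
The ITD cue these stimuli probe is commonly estimated by cross-correlating the two ear signals and taking the best-matching lag. A sketch using a Gaussian-enveloped tone in the spirit of the stimuli above; the sampling rate, delay, and lag search range are arbitrary illustrative choices:

```python
import math

def estimate_itd(left, right, fs, max_lag_ms=1.0):
    """Estimate the interaural time difference (seconds) as the lag
    maximising the cross-correlation of the two ear signals.
    Positive ITD means the left ear leads (right signal delayed)."""
    n = len(left)
    max_lag = int(max_lag_ms * 1e-3 * fs)
    best_lag, best = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        s = sum(left[i] * right[i + lag]
                for i in range(max(0, -lag), min(n, n - lag)))
        if s > best:
            best, best_lag = s, lag
    return best_lag / fs
```

For a right-ear signal that is a 20-sample-delayed copy of the left, the estimate recovers 20/fs (about 450 μs at 44.1 kHz), on the order of the largest human ITDs.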

  12. Calibration of the R/V Marcus G. Langseth Seismic Array in shallow Cascadia waters using the Multi-Channel Streamer

    NASA Astrophysics Data System (ADS)

    Crone, T. J.; Tolstoy, M.; Carton, H. D.

    2013-12-01

    In the summer of 2012, two multi-channel seismic (MCS) experiments, Cascadia Open-Access Seismic Transects (COAST) and Ridge2Trench, were conducted in the offshore Cascadia region. An area of growing environmental concern with active source seismic experiments is the potential impact of the received sound on marine mammals, but data relating to this issue is limited. For these surveys sound level 'mitigation radii' are established for the protection of marine mammals, based on direct arrival modeling and previous calibration experiments. Propagation of sound from seismic arrays can be accurately modeled in deep-water environments, but in shallow and sloped environments the complexity of local geology and bathymetry can make it difficult to predict sound levels as a function of distance from the source array. One potential solution to this problem is to measure the received levels in real-time using the ship's streamer (Diebold et al., 2010), which would allow the dynamic determination of suitable mitigation radii. We analyzed R/V Langseth streamer data collected on the shelf and slope off the Washington coast during the COAST experiment to measure received levels in situ up to 8 km away from the ship. Our analysis shows that water depth and bathymetric features can affect received levels in shallow water environments. The establishment of dynamic mitigation radii based on local conditions may help maximize the safety of marine mammals while also maximizing the ability of scientists to conduct seismic research. With increasing scientific and societal focus on subduction zone environments, a better understanding of shallow water sound propagation is essential for allowing seismic exploration of these hazardous environments to continue. Diebold, J. M., M. Tolstoy, L. Doermann, S. Nooner, S. Webb, and T. J. Crone (2010) R/V Marcus G. Langseth Seismic Source: Modeling and Calibration. Geochemistry, Geophysics, Geosystems, 11, Q12012, doi:10.1029/2010GC003216.

  13. Engineering Data Compendium. Human Perception and Performance. Volume 2

    DTIC Science & Technology

    1988-01-01

    Index entries (fragment): 5.1004 Auditory Detection in the Presence of Visual Stimulation; 5.1005 Tactual Detection and Discrimination in the Presence of Accessory Stimulation; 5.1006 Tactile Versus Auditory Localization of Sound; 5.1007 Spatial Localization in the Presence of Inter-... New York: Wiley. Cross references: 5.1004 Auditory detection in the presence of visual stimulation; 5.1005 Tactual detection and discrimination in...

  14. Low Noise Infrasonic Sensor System with High Reduction of Natural Background Noise

    DTIC Science & Technology

    2006-05-01

    ...local processing allows a variety of options both in the array geometry and signal processing. A generic geometry is indicated in Figure 2. Geometric... higher frequency sound detected. Table 1 provides a comparison of piezocable and microbarograph based arrays. Piezocable Sensor Local Signal... aliasing associated with the current infrasound sensors used at large spacing in the present designs of infrasound monitoring arrays, particularly in the...

  15. Dialogue on Diversity - Broadening the Voices in Urban and Community Forestry

    Treesearch

    Maureen McDonough; Kasey Russell; Lisa Burban; Lee Nancarrow

    2003-01-01

    Does this sound familiar? You work to set up a meeting or workshop, and try your best to get a good cross-section of the community, and yet only the same people come. You contacted the local chamber of commerce, and sent an invitation to the city planner. You called the head of the local garden club and other service organizations. From federal agencies to small nonpro...

  16. Loss of urban forest canopy and the related effects on soundscape and human directed attention

    NASA Astrophysics Data System (ADS)

    Laverne, Robert James Paul

    The specific questions addressed in this research are: Will the loss of trees in residential neighborhoods result in a change to the local soundscape? The investigation of this question leads to a related inquiry: Do the sounds of the environment in which a person is present affect their directed attention? An invasive insect pest, the Emerald Ash Borer (Agrilus planipennis), is killing millions of ash trees (genus Fraxinus) throughout North America. As the loss of tree canopy occurs, urban ecosystems change (including higher summer temperatures, more stormwater runoff, and poorer air quality), causing associated changes to human physical and mental health. Previous studies suggest that conditions in urban environments can result in chronic stress in humans and fatigue to directed attention, which is the ability to focus on tasks and to pay attention. Access to nature in cities can help refresh directed attention. The sights and sounds associated with parks, open spaces, and trees can serve as beneficial counterbalances to the irritating conditions associated with cities. This research examines changes to the quantity and quality of sounds in Arlington Heights, Illinois. A series of before-and-after sound recordings were gathered as trees died and were removed between 2013 and 2015. Comparison of recordings using the Raven sound analysis program revealed significant differences in some, but not all, measures of sound attributes as tree canopy decreased. In general, more human-produced mechanical sounds (anthrophony) and fewer sounds associated with weather (geophony) were detected. Changes in sounds associated with animals (biophony) varied seasonally. Monitoring changes in the proportions of anthrophony, biophony and geophony can provide insight into changes in biodiversity, environmental health, and quality of life for humans.
Before-tree-removal and after-tree-removal sound recordings served as the independent variable for randomly-assigned human volunteers as they performed the Stroop Test and the Necker Cube Pattern Control test to measure directed attention. The sound treatments were not found to have significant effects on the directed attention test scores. Future research is needed to investigate the characteristics of urban soundscapes that are detrimental or potentially conducive to human cognitive functioning.

  17. Response of the human tympanic membrane to transient acoustic and mechanical stimuli: Preliminary results.

    PubMed

    Razavi, Payam; Ravicz, Michael E; Dobrev, Ivo; Cheng, Jeffrey Tao; Furlong, Cosme; Rosowski, John J

    2016-10-01

    The response of the tympanic membrane (TM) to transient environmental sounds and the contributions of different parts of the TM to middle-ear sound transmission were investigated by measuring the TM response to global transients (acoustic clicks) and to local transients (mechanical impulses) applied to the umbo and various locations on the TM. A lightly-fixed human temporal bone was prepared by removing the ear canal, inner ear, and stapes, leaving the incus, malleus, and TM intact. Motion of nearly the entire TM was measured by a digital holography system with a high speed camera at a rate of 42 000 frames per second, giving a temporal resolution of <24 μs for the duration of the TM response. The entire TM responded nearly instantaneously to acoustic transient stimuli, though the peak displacement and decay time constant varied with location. With local mechanical transients, the TM responded first locally at the site of stimulation, and the response spread approximately symmetrically and circumferentially around the umbo and manubrium. Acoustic and mechanical transients provide distinct and complementary stimuli for the study of TM response. Spatial variations in decay and rate of spread of response imply local variations in TM stiffness, mass, and damping. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. A singular-value method for reconstruction of nonradial and lossy objects.

    PubMed

    Jiang, Wei; Astheimer, Jeffrey; Waag, Robert

    2012-03-01

    Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.

  20. Sound Localization and Speech Perception in Noise of Pediatric Cochlear Implant Recipients: Bimodal Fitting Versus Bilateral Cochlear Implants.

    PubMed

    Choi, Ji Eun; Moon, Il Joon; Kim, Eun Yeon; Park, Hee-Sung; Kim, Byung Kil; Chung, Won-Ho; Cho, Yang-Sun; Brown, Carolyn J; Hong, Sung Hwa

    The aim of this study was to compare binaural performance on an auditory localization task and a speech-perception-in-babble measure between children who use a cochlear implant (CI) in one ear and a hearing aid (HA) in the other (bimodal fitting) and those who use bilateral CIs. Thirteen children (mean age ± SD = 10 ± 2.9 years) with bilateral CIs and 19 children with bimodal fitting were recruited to participate. Sound localization was assessed using a 13-loudspeaker array in a quiet sound-treated booth. Speakers were placed in an arc from -90° azimuth to +90° azimuth (15° intervals) in the horizontal plane. To assess the accuracy of sound location identification, we calculated the absolute error in degrees between the target speaker and the response speaker during each trial. The mean absolute error was computed by dividing the sum of absolute errors by the total number of trials. We also calculated the hemifield identification score to reflect the accuracy of right/left discrimination. Speech-in-babble perception was also measured in the sound field using target speech presented from the front speaker. Eight-talker babble was presented in four different listening conditions: from the front speaker (0°), from one of the two side speakers (+90° or -90°), or from both side speakers (±90°). The speech, spatial, and quality questionnaire was administered. When the two groups of children were directly compared, there was no significant difference in localization accuracy or hemifield identification score under the binaural condition. Performance on the speech perception test was also similar under most babble conditions. However, when the babble was from the first device side (CI side for children with bimodal stimulation or first CI side for children with bilateral CIs), speech understanding in babble by bilateral CI users was significantly better than that by bimodal listeners.
Speech, spatial, and quality scores were comparable between the two groups. Overall, binaural performance was similar between children fit with two CIs (CI + CI) and those who use bimodal stimulation (HA + CI) in most conditions. However, the bilateral CI group showed better speech perception than the bimodal group when babble was from the first device side (first CI side for bilateral CI users or CI side for bimodal listeners). Therefore, if bimodal performance is significantly below the mean bilateral CI performance on speech perception in babble, these results suggest that the child should be considered for transition from bimodal stimulation to bilateral CIs.
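
The two accuracy measures defined in the abstract, mean absolute error and the hemifield identification score, are simple to compute once each trial is recorded as a (target, response) azimuth pair. A sketch; excluding 0° targets from the hemifield score is an assumption, since the abstract does not say how midline targets were scored:

```python
def mean_absolute_error(trials):
    """trials: list of (target_deg, response_deg) azimuth pairs.
    Returns the mean absolute localization error in degrees."""
    return sum(abs(t - r) for t, r in trials) / len(trials)

def hemifield_score(trials):
    """Fraction of trials in which the response falls in the same
    left/right hemifield as the target (0-degree targets excluded)."""
    scored = [(t, r) for t, r in trials if t != 0]
    hits = sum(1 for t, r in scored if (t > 0) == (r > 0))
    return hits / len(scored)
```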

  1. 76 FR 3057 - Special Local Regulation; Hydroplane Races Within the Captain of the Port Puget Sound Area of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-19

    ...: November 24, 2010. G.T. Blore, Rear Admiral, U.S. Coast Guard, Commander, Thirteenth Coast Guard District.... Voluntary consensus standards are technical standards (e.g., specifications of materials, performance...

  2. Neuromagnetic recordings reveal the temporal dynamics of auditory spatial processing in the human cortex.

    PubMed

    Tiitinen, Hannu; Salminen, Nelli H; Palomäki, Kalle J; Mäkinen, Ville T; Alku, Paavo; May, Patrick J C

    2006-03-20

    In an attempt to delineate the assumed 'what' and 'where' processing streams, we studied the processing of spatial sound in the human cortex by using magnetoencephalography in the passive and active recording conditions and two kinds of spatial stimuli: individually constructed, highly realistic spatial (3D) stimuli and stimuli containing interaural time difference (ITD) cues only. The auditory P1m, N1m, and P2m responses of the event-related field were found to be sensitive to the direction of sound source in the azimuthal plane. In general, the right-hemispheric responses to spatial sounds were more prominent than the left-hemispheric ones. The right-hemispheric P1m and N1m responses peaked earlier for sound sources in the contralateral than for sources in the ipsilateral hemifield and the peak amplitudes of all responses reached their maxima for contralateral sound sources. The amplitude of the right-hemispheric P2m response reflected the degree of spatiality of sound, being twice as large for the 3D than ITD stimuli. The results indicate that the right hemisphere is specialized in the processing of spatial cues in the passive recording condition. Minimum current estimate (MCE) localization revealed that temporal areas were activated both in the active and passive condition. This initial activation, taking place at around 100 ms, was followed by parietal and frontal activity at 180 and 200 ms, respectively. The latter activations, however, were specific to attentional engagement and motor responding. This suggests that parietal activation reflects active responding to a spatial sound rather than auditory spatial processing as such.

  3. [Auditory processing and high frequency audiometry in students of São Paulo].

    PubMed

    Ramos, Cristina Silveira; Pereira, Liliane Desgualdo

    2005-01-01

    Auditory processing and auditory sensitivity to high-frequency sounds. To characterize localization processes, temporal ordering, hearing patterns, and detection of high-frequency sounds, looking for possible relations between these factors. Thirty-two hearing fourth-grade students, born in the city of São Paulo, underwent: a simplified evaluation of auditory processing; the duration pattern test; and high-frequency audiometry. Three (9.4%) individuals presented auditory processing disorder (APD), and in one of them there was the coexistence of lower hearing thresholds in high-frequency audiometry. APD associated with a loss of auditory sensitivity at high frequencies should be further investigated.

  4. How Nemo finds home: the neuroecology of dispersal and of population connectivity in larvae of marine fishes.

    PubMed

    Leis, Jeffrey M; Siebeck, Ulrike; Dixson, Danielle L

    2011-11-01

    Nearly all demersal teleost marine fishes have pelagic larval stages lasting from several days to several weeks, during which time they are subject to dispersal. Fish larvae have considerable swimming abilities, and swim in an oriented manner in the sea. Thus, they can influence their dispersal and, thereby, the connectivity of their populations. However, the sensory cues marine fish larvae use for orientation in the pelagic environment remain unclear. We review current understanding of these cues and how sensory abilities of larvae develop and are used to achieve orientation, with particular emphasis on coral-reef fishes. The use of sound is best understood; it travels well underwater with little attenuation, and is current-independent but location-dependent, so species that primarily utilize sound for orientation will have location-dependent orientation. Larvae of many species and families can hear over a range of ~100-1000 Hz, and can distinguish among sounds. They can localize sources of sounds, but the means by which they do so is unclear. Larvae can hear during much of their pelagic larval phase, and ontogenetically, hearing sensitivity and frequency range improve dramatically. Species differ in sensitivity to sound and in the rate of improvement in hearing during ontogeny. Due to large differences among species within families, no significant differences in hearing sensitivity among families have been identified. Thus, distances over which larvae can detect a given sound vary among species and greatly increase ontogenetically. Olfactory cues are current-dependent and location-dependent, so species that primarily utilize olfactory cues will have location-dependent orientation, but must be able to swim upstream to locate sources of odor. 
Larvae can detect odors (e.g., predators, conspecifics), during most of their pelagic phase, and at least on small scales, can localize sources of odors in shallow water, although whether they can do this in pelagic environments is unknown. Little is known of the ontogeny of olfactory ability or the range over which larvae can localize sources of odors. Imprinting on an odor has been shown in one species of reef-fish. Celestial cues are current- and location-independent, so species that primarily utilize them will have location-independent orientation that can apply over broad scales. Use of sun compass or polarized light for orientation by fish larvae is implied by some behaviors, but has not been proven. Use of neither magnetic fields nor direction of waves for orientation has been shown in marine fish larvae. We highlight research priorities in this area. © The Author 2011. Published by Oxford University Press on behalf of the Society for Integrative and Comparative Biology. All rights reserved.

  5. Tutorial on the Psychophysics and Technology of Virtual Acoustic Displays

    NASA Technical Reports Server (NTRS)

    Wenzel, Elizabeth M.; Null, Cynthia (Technical Monitor)

    1998-01-01

    Virtual acoustics, also known as 3-D sound and auralization, is the simulation of the complex acoustic field experienced by a listener within an environment. Going beyond the simple intensity panning of normal stereo techniques, the goal is to process sounds so that they appear to come from particular locations in three-dimensional space. Although loudspeaker systems are being developed, most of the recent work focuses on using headphones for playback and is the outgrowth of earlier analog techniques. For example, in binaural recording, the sound of an orchestra playing classical music is recorded through small mics in the two "ear canals" of an anthropomorphic artificial or "dummy" head placed in the audience of a concert hall. When the recorded piece is played back over headphones, the listener passively experiences the illusion of hearing the violins on the left and the cellos on the right, along with all the associated echoes, resonances, and ambience of the original environment. Current techniques use digital signal processing to synthesize the acoustical properties that people use to localize a sound source in space. Thus, they provide the flexibility of a kind of digital dummy head, allowing a more active experience in which a listener can both design and move around or interact with a simulated acoustic environment in real time. Such simulations are being developed for a variety of application areas including architectural acoustics, advanced human-computer interfaces, telepresence and virtual reality, navigation aids for the visually-impaired, and as a test bed for psychoacoustical investigations of complex spatial cues. The tutorial will review the basic psychoacoustical cues that determine human sound localization and the techniques used to measure these cues as Head-Related Transfer Functions (HRTFs) for the purpose of synthesizing virtual acoustic environments. 
The only conclusive test of the adequacy of such simulations is an operational one in which the localization of real and synthesized stimuli are directly compared in psychophysical studies. To this end, the results of psychophysical experiments examining the perceptual validity of the synthesis technique will be reviewed and factors that can enhance perceptual accuracy and realism will be discussed. Of particular interest is the relationship between individual differences in HRTFs and in behavior, the role of reverberant cues in reducing the perceptual errors observed with virtual sound sources, and the importance of developing perceptually valid methods of simplifying the synthesis technique. Recent attempts to implement the synthesis technique in real time systems will also be discussed and an attempt made to interpret their quoted system specifications in terms of perceptual performance. Finally, some critical research and technology development issues for the future will be outlined.
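
At the core of the synthesis technique reviewed above is convolution of a mono source with a left/right pair of head-related impulse responses (HRIRs, the time-domain form of HRTFs). A minimal sketch with toy one- and four-tap HRIRs; measured HRIRs are typically a few hundred taps long:

```python
def convolve(x, h):
    """Direct-form FIR convolution of signal x with filter h."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono source at the direction encoded by one HRIR pair,
    returning the (left, right) headphone signals."""
    return convolve(mono, hrir_left), convolve(mono, hrir_right)
```

With a unit-impulse left HRIR and a delayed, attenuated right HRIR, the output carries the interaural time and level differences a listener would interpret as a source off to the left.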

  6. Effect of rod gap spacing on a suction panel for laminar flow and noise control in supersonic wind tunnels. M.S. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Harvey, W. D.

    1975-01-01

    Results are presented of a coordinated experimental and theoretical study of a sound shield concept which aims to provide a means of noise reduction in the test section of supersonic wind tunnels at high Reynolds numbers. The model used consists of a planar array of circular rods aligned with the flow, with adjustable gaps between them for boundary layer removal by suction, i.e., laminar flow control. One of the basic requirements of the present sound shield concept is to achieve sonic cross flow through the gaps in order to prevent lee-side flow disturbances from penetrating back into the shielded region. Tests were conducted at Mach 6 over a local unit Reynolds number range from about 1.2 x 10 to the 6th power to 13.5 x 10 to the 6th power per foot. Measurements of heat transfer, static pressure, and sound levels were made to establish the transition characteristics of the boundary layer on the rod array and the sound shielding effectiveness.

  7. Computational studies of steady-state sound field and reverberant sound decay in a system of two coupled rooms

    NASA Astrophysics Data System (ADS)

    Meissner, Mirosław

    2007-09-01

    The acoustical properties of an irregularly shaped room consisting of two connected rectangular subrooms were studied. An eigenmode method supported by a numerical implementation has been used to predict acoustic characteristics of the coupled system, such as the distribution of the sound pressure in steady-state and the reverberation time. In the theoretical model a low-frequency limit was considered. In this case the eigenmodes are lightly damped, thus they were approximated by normal acoustic modes of a hard-walled room. The eigenfunctions and eigenfrequencies were computed numerically via application of a forced oscillator method with a finite difference algorithm. The influence of coupling between subrooms on acoustic parameters of the enclosure was demonstrated in numerical simulations where different distributions of absorbing materials on the walls of the subrooms and various positions of the sound source were assumed. Calculation results have shown that for large differences in the absorption coefficient in the subrooms the effect of modal localization contributes to peaks of RMS pressure in steady-state and a large increase in the reverberation time.

  8. A telescopic cinema sound camera for observing high altitude aerospace vehicles

    NASA Astrophysics Data System (ADS)

    Slater, Dan

    2014-09-01

    Rockets and other high altitude aerospace vehicles produce interesting visual and aural phenomena that can be remotely observed from long distances. This paper describes a compact, passive and covert remote sensing system that can produce high resolution sound movies at >100 km viewing distances. The telescopic high resolution camera is capable of resolving and quantifying space launch vehicle dynamics including plume formation, staging events and payload fairing jettison. Flight vehicles produce sounds and vibrations that modulate the local electromagnetic environment. These audio frequency modulations can be remotely sensed by passive optical and radio wave detectors. Acousto-optic sensing methods were primarily used but an experimental radioacoustic sensor using passive micro-Doppler radar techniques was also tested. The synchronized combination of high resolution flight vehicle imagery with the associated vehicle sounds produces a cinema-like experience that is useful in both an aerospace engineering and a Hollywood film production context. Examples of visual, aural and radar observations of the first SpaceX Falcon 9 v1.1 rocket launch are shown and discussed.

  9. Temporal-Spectral Characterization and Classification of Marine Mammal Vocalizations and Diesel-Electric Ships Radiated Sound over Continental Shelf Scale Regions with Coherent Hydrophone Array Measurements

    NASA Astrophysics Data System (ADS)

    Huang, Wei

    The passive ocean acoustic waveguide remote sensing (POAWRS) technology is capable of monitoring a large variety of underwater sound sources over instantaneous wide areas spanning continental-shelf scale regions. POAWRS uses a large-aperture densely-sampled coherent hydrophone array to significantly enhance the signal-to-noise ratio via beamforming, enabling detection of sound sources roughly two orders of magnitude more distant in range than is possible with a single hydrophone. The sound sources detected by POAWRS include ocean biology, geophysical processes, and man-made activities. POAWRS provides detection, bearing-time estimation, localization, and classification of underwater sound sources. The volume of underwater sounds detected by POAWRS is immense, typically exceeding a million unique signal detections per day, in the 10-4000 Hz frequency range, making it a tremendously challenging task to distinguish and categorize the various sound sources present in a given region. Here we develop various approaches for characterizing and clustering the signal detections for various subsets of data acquired using the POAWRS technology. The approaches include pitch tracking of the dominant signal detections, time-frequency feature extraction, clustering and categorization methods. These approaches are essential for automatic processing and enhancing the efficiency and accuracy of POAWRS data analysis. The results of the signal detection, clustering and classification analysis are required for further POAWRS processing, including localization and tracking of a large number of oceanic sound sources. Here the POAWRS detection, localization and clustering approaches are applied to analyze and elucidate the vocalization behavior of humpback, sperm and fin whales in the New England continental shelf and slope, including the Gulf of Maine from data acquired using coherent hydrophone arrays. The POAWRS technology can also be applied for monitoring ocean vehicles.
Here the approach is calibrated by application to known ships present in the Gulf of Maine and in the Norwegian Sea from their underwater sounds received using a coherent hydrophone array. The vocalization behavior of humpback whales was monitored over vast areas of the Gulf of Maine using the POAWRS technique over multiple diel cycles in Fall 2006. The humpback vocalizations, received at a rate of roughly 1800+/-1100 calls per day, comprised both song and non-song calls. The song vocalizations, composed of a highly structured and repeatable set of phrases, are characterized by inter-pulse intervals of 3.5 +/- 1.8 s. Songs were detected throughout the diel cycle, occurring roughly 40% during the day and 60% during the night. The humpback non-song vocalizations, dominated by shorter duration (≤3 s) downsweep and bow-shaped moans, as well as a small fraction of longer duration (˜5 s) cries, have significantly larger mean and more variable inter-pulse intervals of 14.2 +/- 11 s. The non-song vocalizations were detected at night with negligible detections during the day, implying they probably function as nighttime communication signals. The humpback song and non-song vocalizations are separately localized using the moving array triangulation and array invariant techniques. The humpback song and non-song moan calls are both consistently localized to a dense area on northeastern Georges Bank and a less dense region extended from Franklin Basin to the Great South Channel. Humpback cries occur exclusively on northeastern Georges Bank and during nights with coincident dense Atlantic herring shoaling populations, implying the cries are feeding-related. Sperm whales in the New England continental shelf and slope were passively localized and classified from their vocalizations received using a single low-frequency (<2500 Hz) densely-sampled horizontal coherent hydrophone array deployed in Spring 2013 in the Gulf of Maine.
Whale bearings were estimated using time-domain beamforming that provided high coherent array gain in the sperm whale click signal-to-noise ratio. Whale ranges from the receiver array center were estimated using the moving array triangulation technique from a sequence of whale bearing measurements. Multiple concurrently vocalizing sperm whales, in the far-field of the horizontal receiver array, were distinguished and classified based on their horizontal spatial locations and the inter-pulse intervals of their vocalized click signals. We provide detailed analysis of over 15,000 fin whale 20 Hz vocalizations received on Oct 1-3, 2006 in the Gulf of Maine. These vocalizations are separated into 16 clusters following the clustering approaches. Seven of these types are prominent, each accounting for between 8% and 16%, and together comprising roughly 85% of all the analyzed vocalizations. The 7 prominent clusters are each more abundant during nighttime hours than during daytime hours by a factor of roughly 2.5. The diel-spatial correlation of the 7 prominent clusters to the simultaneously observed densities of their fish prey, the Atlantic herring in the Gulf of Maine, is provided, which implies that the factor of roughly 2.5 increase in call rate during night-time hours can be attributed to increased fish-feeding activities. (Abstract shortened by ProQuest.).
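    The array gain underlying POAWRS comes from coherent beamforming. A minimal narrowband delay-and-sum sketch (not the POAWRS implementation; element count, frequency, spacing, and sound speed are assumed for illustration) phase-aligns a uniform line array toward a candidate bearing:

```python
import math, cmath

def delay_and_sum_power(bearing_deg, element_signals, spacing_m, freq_hz, c=1500.0):
    """Narrowband delay-and-sum: phase-align a uniform line array toward
    bearing_deg and return the power of the summed output."""
    k = 2 * math.pi * freq_hz / c
    steer = math.sin(math.radians(bearing_deg))
    out = sum(sig * cmath.exp(-1j * k * n * spacing_m * steer)
              for n, sig in enumerate(element_signals))
    return abs(out) ** 2

# Synthetic plane wave from 30 deg on a 16-element, half-wavelength-spaced array:
freq, c = 100.0, 1500.0
d = (c / freq) / 2
k = 2 * math.pi * freq / c
signals = [cmath.exp(1j * k * n * d * math.sin(math.radians(30.0)))
           for n in range(16)]

p_true = delay_and_sum_power(30.0, signals, d, freq)   # coherent sum over 16 elements
p_off = delay_and_sum_power(-40.0, signals, d, freq)   # far off the true bearing
```

    Steering to the true bearing sums all 16 elements coherently, a 10·log10(16) ≈ 12 dB gain over a single hydrophone for spatially incoherent noise, which is the mechanism behind the order-of-magnitude range extension described above.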

  10. Processing of frequency-modulated sounds in the lateral auditory belt cortex of the rhesus monkey.

    PubMed

    Tian, Biao; Rauschecker, Josef P

    2004-11-01

    Single neurons were recorded from the lateral belt areas, anterolateral (AL), mediolateral (ML), and caudolateral (CL), of nonprimary auditory cortex in 4 adult rhesus monkeys under gas anesthesia, while the neurons were stimulated with frequency-modulated (FM) sweeps. Responses to FM sweeps, measured as the firing rate of the neurons, were invariably greater than those to tone bursts. In our stimuli, frequency changed linearly from low to high frequencies (FM direction "up") or high to low frequencies ("down") at varying speeds (FM rates). Neurons were highly selective to the rate and direction of the FM sweep. Significant differences were found between the 3 lateral belt areas with regard to their FM rate preferences: whereas neurons in ML responded to the whole range of FM rates, AL neurons responded better to slower FM rates in the range of naturally occurring communication sounds. CL neurons generally responded best to fast FM rates at a speed of several hundred Hz/ms, which have the broadest frequency spectrum. These selectivities are consistent with a role of AL in the decoding of communication sounds and of CL in the localization of sounds, which works best with broader bandwidths. Together, the results support the hypothesis of parallel streams for the processing of different aspects of sounds, including auditory objects and auditory space.
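    The FM rates discussed above translate directly into stimulus synthesis. A minimal sketch of a linear FM ("chirp") generator (illustrative parameters; not the stimulus code used in the study):

```python
import math

def linear_fm_sweep(f0_hz, f1_hz, dur_s, fs=44100):
    """Linear FM sweep: instantaneous frequency f(t) = f0 + (f1 - f0) * t / T.
    Phase is the integral of f(t): phi(t) = 2*pi*(f0*t + 0.5*rate*t**2)."""
    n = int(dur_s * fs)
    rate = (f1_hz - f0_hz) / dur_s          # FM rate in Hz/s; sign = direction
    samples = [math.sin(2 * math.pi * (f0_hz * t + 0.5 * rate * t * t))
               for t in (i / fs for i in range(n))]
    return samples, rate

# A "down" sweep covering 4 kHz in 10 ms, i.e. an FM rate of -400 Hz/ms:
sweep, rate_hz_per_s = linear_fm_sweep(8000.0, 4000.0, 0.010)
rate_hz_per_ms = rate_hz_per_s / 1000.0
```

    In these units, the fast CL-preferred sweeps (several hundred Hz/ms) traverse a wide band in a few milliseconds, which is why they carry the broad frequency spectrum the abstract mentions.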

  11. Derivation of an optimal directivity pattern for sweet spot widening in stereo sound reproduction

    NASA Astrophysics Data System (ADS)

    Ródenas, Josep A.; Aarts, Ronald M.; Janssen, A. J. E. M.

    2003-01-01

    In this paper the correction of the degradation of the stereophonic illusion during sound reproduction due to off-center listening is investigated. The main idea is that the directivity pattern of a loudspeaker array should have a well-defined shape such that a good stereo reproduction is achieved in a large listening area. Therefore, a mathematical description to derive an optimal directivity pattern l(opt) that achieves sweet spot widening in a large listening area for stereophonic sound applications is described. This optimal directivity pattern is based on parametrized time/intensity trading data coming from psycho-acoustic experiments within a wide listening area. After the study, the required digital FIR filters are determined by means of a least-squares optimization method for a given stereo base setup (two pairs of drivers for the loudspeaker arrays and 2.5-m distance between loudspeakers), which radiate sound in a broad range of listening positions in accordance with the derived l(opt). Informal listening tests have shown that the l(opt) worked as predicted by the theoretical simulations. They also demonstrated the correct central sound localization for speech and music for a number of listening positions. This application is referred to as ``Position-Independent (PI) stereo.''

  12. Hearing in three dimensions

    NASA Astrophysics Data System (ADS)

    Shinn-Cunningham, Barbara

    2003-04-01

    One of the key functions of hearing is to help us monitor and orient to events in our environment (including those outside the line of sight). The ability to compute the spatial location of a sound source is also important for detecting, identifying, and understanding the content of a sound source, especially in the presence of competing sources from other positions. Determining the spatial location of a sound source poses difficult computational challenges; however, we perform this complex task with proficiency, even in the presence of noise and reverberation. This tutorial will review the acoustic, psychoacoustic, and physiological processes underlying spatial auditory perception. First, the tutorial will examine how the many different features of the acoustic signals reaching a listener's ears provide cues for source direction and distance, both in anechoic and reverberant space. Then we will discuss psychophysical studies of three-dimensional sound localization in different environments and the basic neural mechanisms by which spatial auditory cues are extracted. Finally, ``virtual reality'' approaches for simulating sounds at different directions and distances under headphones will be reviewed. The tutorial will be structured to appeal to a diverse audience with interests in all fields of acoustics and will incorporate concepts from many areas, such as psychological and physiological acoustics, architectural acoustics, and signal processing.
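    Among the direction cues the tutorial reviews, the interaural time difference (ITD) has a classic closed-form approximation, the Woodworth spherical-head model, ITD(θ) = (r/c)(θ + sin θ). A sketch assuming a typical head radius of 8.75 cm and c = 343 m/s (representative textbook values, not figures from the tutorial):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth spherical-head model: ITD = (r/c) * (theta + sin(theta)),
    with theta the source azimuth in radians (0 = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return head_radius_m / c * (theta + math.sin(theta))

itd_90 = woodworth_itd(90.0) * 1e6   # source at the side, in microseconds
```

    A source directly to the side yields an ITD of roughly 650 µs for this head size, the order of magnitude the auditory system resolves to microsecond precision.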

  13. Factors Regulating Early Life History Dispersal of Atlantic Cod (Gadus morhua) from Coastal Newfoundland

    PubMed Central

    Stanley, Ryan R. E.; deYoung, Brad; Snelgrove, Paul V. R.; Gregory, Robert S.

    2013-01-01

    To understand coastal dispersal dynamics of Atlantic cod (Gadus morhua), we examined spatiotemporal egg and larval abundance patterns in coastal Newfoundland. In recent decades, Smith Sound, Trinity Bay has supported the largest known overwintering spawning aggregation of Atlantic cod in the region. We estimated spawning and dispersal characteristics for the Smith Sound-Trinity Bay system by fitting ichthyoplankton abundance data to environmentally-driven, simplified box models. Results show protracted spawning, with sharply increased egg production in early July, and limited dispersal from the Sound. The model for the entire spawning season indicates egg export from Smith Sound is 13% per day with a net mortality of 27% per day. Eggs and larvae are consistently found in western Trinity Bay with little advection from the system. These patterns mirror particle tracking models that suggest residence times of 10–20 days, and circulation models indicating local gyres in Trinity Bay that act in concert with upwelling dynamics to retain eggs and larvae. Our results are among the first quantitative dispersal estimates from Smith Sound, linking this spawning stock to the adjacent coastal waters. These results illustrate the biophysical interplay regulating dispersal and connectivity originating from inshore spawning in the coastal northwest Atlantic. PMID:24058707

  14. Somatotopic Semantic Priming and Prediction in the Motor System

    PubMed Central

    Grisoni, Luigi; Dreyer, Felix R.; Pulvermüller, Friedemann

    2016-01-01

    The recognition of action-related sounds and words activates motor regions, reflecting the semantic grounding of these symbols in action information; in addition, motor cortex exerts causal influences on sound perception and language comprehension. However, proponents of classic symbolic theories still dispute the role of modality-preferential systems such as the motor cortex in the semantic processing of meaningful stimuli. To clarify whether the motor system carries semantic processes, we investigated neurophysiological indexes of semantic relationships between action-related sounds and words. Event-related potentials revealed that action-related words produced significantly larger stimulus-evoked (Mismatch Negativity-like) and predictive brain responses (Readiness Potentials) when presented in body-part-incongruent sound contexts (e.g., “kiss” in footstep sound context; “kick” in whistle context) than in body-part-congruent contexts, a pattern reminiscent of neurophysiological correlates of semantic priming. Cortical generators of the semantic relatedness effect were localized in areas traditionally associated with semantic memory, including left inferior frontal cortex and temporal pole, and, crucially, in motor areas, where body-part congruency of action sound–word relationships was indexed by a somatotopic pattern of activation. As our results show neurophysiological manifestations of action-semantic priming in the motor cortex, they prove semantic processing in the motor system and thus in a modality-preferential system of the human brain. PMID:26908635

  15. Maps and documentation of seismic CPT soundings in the central, eastern, and western United States

    USGS Publications Warehouse

    Holzer, Thomas L.; Noce, Thomas E.; Bennett, Michael J.

    2010-01-01

    Nine hundred twenty-seven seismic cone penetration tests (CPT) in a variety of geologic deposits and geographic locations were conducted by the U.S. Geological Survey (USGS) primarily between 1998 and 2008 for the purpose of collecting penetration test data to evaluate the liquefaction potential of different types of surficial geologic deposits (table 1). The evaluation is described in Holzer and others (in press). This open-file report summarizes the seismic CPT and geotechnical data that were collected for the evaluation, outlines the general conditions under which the data were acquired, and briefly describes the geographic location of each study area and local geologic conditions. This report also describes the field methods used to obtain the seismic CPT data and summarizes the results of shear-wave velocity measurements at 2-m intervals in each sounding. Although the average depth of the 927 soundings was 18.5 m, we estimated a time-averaged shear-wave velocity to depths of 20 m and 30 m, VS20 and VS30, respectively, for soundings deeper than 10 m and 20 m. Soil sampling also was selectively conducted in many of the study areas at representative seismic CPT soundings. These data are described and laboratory analyses of geotechnical properties of these samples are summarized in table 2.
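    VS20 and VS30 are time-averaged (not arithmetic-mean) velocities: the target depth divided by the cumulative vertical shear-wave travel time through the layers. A sketch using a hypothetical layered profile built from 2-m interval measurements (values illustrative, not from the report):

```python
def time_averaged_vs(layers, target_depth_m):
    """Time-averaged shear-wave velocity: Vs = D / sum(d_i / v_i),
    i.e. depth over cumulative travel time.
    layers: (thickness_m, vs_m_per_s) pairs ordered from the surface."""
    depth = travel_time = 0.0
    for thickness, vs in layers:
        use = min(thickness, target_depth_m - depth)
        if use <= 0:
            break
        travel_time += use / vs
        depth += use
    if depth < target_depth_m:
        raise ValueError("profile shallower than target depth")
    return target_depth_m / travel_time

# Hypothetical soft-over-stiff profile (thickness in m, Vs in m/s):
profile = [(2, 150), (2, 180), (4, 220), (6, 300), (6, 360), (10, 450)]
vs20 = time_averaged_vs(profile, 20.0)
vs30 = time_averaged_vs(profile, 30.0)
```

    The harmonic-style averaging means slow near-surface layers dominate the result, which is why VS30 is a standard proxy for site amplification.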

  16. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    PubMed Central

    Młynarski, Wiktor

    2014-01-01

    To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644
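    Reproducing the ICA pipeline requires a numerical library; as a stdlib-only substitute illustration (not the paper's method) of the kind of spatial information a binaural spectrogram carries, the sketch below computes a per-frequency interaural level difference (ILD), one of the cues such learned codes can pick up. The magnitudes are invented:

```python
import math

def ild_db(left_mag, right_mag):
    """Interaural level difference per spectrogram bin, in dB
    (positive = louder in the left ear)."""
    return [20 * math.log10(l / r) for l, r in zip(left_mag, right_mag)]

# A source on the left: head shadowing attenuates the right ear more
# strongly at higher frequencies (illustrative magnitudes, not real data):
left = [1.00, 1.00, 1.00, 1.00]
right = [0.89, 0.71, 0.50, 0.35]
cues = ild_db(left, right)        # ILD grows across frequency bins
```

    An unsupervised code trained on many such left/right spectrogram pairs can discover that these systematic level (and phase) asymmetries covary with source position, which is the core claim of the paper.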

  17. Derivation of an optimal directivity pattern for sweet spot widening in stereo sound reproduction.

    PubMed

    Ródenas, Josep A; Aarts, Ronald M; Janssen, A J E M

    2003-01-01

    In this paper the correction of the degradation of the stereophonic illusion during sound reproduction due to off-center listening is investigated. The main idea is that the directivity pattern of a loudspeaker array should have a well-defined shape such that a good stereo reproduction is achieved in a large listening area. Therefore, a mathematical description to derive an optimal directivity pattern l(opt) that achieves sweet spot widening in a large listening area for stereophonic sound applications is described. This optimal directivity pattern is based on parametrized time/intensity trading data coming from psycho-acoustic experiments within a wide listening area. After the study, the required digital FIR filters are determined by means of a least-squares optimization method for a given stereo base setup (two pairs of drivers for the loudspeaker arrays and 2.5-m distance between loudspeakers), which radiate sound in a broad range of listening positions in accordance with the derived l(opt). Informal listening tests have shown that the l(opt) worked as predicted by the theoretical simulations. They also demonstrated the correct central sound localization for speech and music for a number of listening positions. This application is referred to as "Position-Independent (PI) stereo."

  18. An integrated environmental and human systems modeling framework for Puget Sound restoration planning.

    EPA Science Inventory

    Local, state, federal, tribal and private stakeholders have committed significant resources to restoring Puget Sound’s terrestrial-marine ecosystem. Though jurisdictional issues have promoted a fragmented approach to restoration planning, there is growing recognition that a...

  19. An integrated environmental and human systems modeling framework for Puget Sound restoration planning

    EPA Science Inventory

    Local, state, federal, tribal and private stakeholders have committed significant resources to restoring Puget Sound’s terrestrial-marine ecosystem. Though jurisdictional issues have promoted a fragmented approach to restoration planning, there is growing recognition that a...

  20. Hi-C First Results

    NASA Technical Reports Server (NTRS)

    Cirtain, Jonathan

    2013-01-01

    Hi-C obtained the highest spatial and temporal resolution observations ever taken in the solar corona. Hi-C reveals dynamics and structure at the limit of its temporal and spatial resolution. Hi-C observed ubiquitous fine-scale flows consistent with the local sound speed.
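    For context on the "local sound speed" comparison, the ideal-gas sound speed in the corona is c_s = sqrt(γ k_B T / (μ m_p)). A sketch with assumed values (γ = 5/3, μ = 0.5 for fully ionized hydrogen, T = 1 MK; none of these numbers come from the Hi-C report):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_P = 1.67262e-27    # proton mass, kg

def coronal_sound_speed(T_kelvin, gamma=5.0 / 3.0, mu=0.5):
    """Ideal-gas sound speed c_s = sqrt(gamma * k_B * T / (mu * m_p)).
    mu = 0.5 corresponds to fully ionized hydrogen."""
    return math.sqrt(gamma * K_B * T_kelvin / (mu * M_P))

cs_km_s = coronal_sound_speed(1.0e6) / 1e3   # roughly 166 km/s at 1 MK
```

    Flows at this order of speed (a few hundred km/s) are what "consistent with the local sound speed" implies for million-kelvin coronal plasma.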

  1. Seismic and Biological Sources of Ambient Ocean Sound

    NASA Astrophysics Data System (ADS)

    Freeman, Simon Eric

    Sound is the most efficient form of radiation in the ocean. Sounds of seismic and biological origin contain information regarding the underlying processes that created them. A single hydrophone records summary time-frequency information from the volume within acoustic range. Beamforming using a hydrophone array additionally produces azimuthal estimates of sound sources. A two-dimensional array and acoustic focusing produce an unambiguous two-dimensional `image' of sources. This dissertation describes the application of these techniques in three cases. The first utilizes hydrophone arrays to investigate T-phases (water-borne seismic waves) in the Philippine Sea. Ninety T-phases were recorded over a 12-day period, implying a greater number of seismic events occur than are detected by terrestrial seismic monitoring in the region. Observation of an azimuthally migrating T-phase suggests that reverberation of such sounds from bathymetric features can occur over megameter scales. In the second case, single hydrophone recordings from coral reefs in the Line Islands archipelago reveal that local ambient reef sound is spectrally similar to sounds produced by small, hard-shelled benthic invertebrates in captivity. Time-lapse photography of the reef reveals an increase in benthic invertebrate activity at sundown, consistent with an increase in sound level. The dominant acoustic phenomenon on these reefs may thus originate from the interaction between a large number of small invertebrates and the substrate. Such sounds could be used to take census of hard-shelled benthic invertebrates that are otherwise extremely difficult to survey. In the third case, a two-dimensional `map' of sound production over a coral reef in the Hawaiian Islands was obtained using a two-dimensional hydrophone array. Heterogeneously distributed bio-acoustic sources were generally co-located with rocky reef areas.
Acoustically dominant snapping shrimp were largely restricted to one location within the area surveyed. This distribution of sources could reveal small-scale spatial ecological limitations, such as the availability of food and shelter. While array-based passive acoustic sensing is well established in seismoacoustics, the technique is little utilized in the study of ambient biological sound. With the continuance of Moore's law and advances in battery and memory technology, inferring biological processes from ambient sound may become a more accessible tool in underwater ecological evaluation and monitoring.

  2. Degraded speech sound processing in a rat model of fragile X syndrome

    PubMed Central

    Engineer, Crystal T.; Centanni, Tracy M.; Im, Kwok W.; Rahebi, Kimiya C.; Buell, Elizabeth P.; Kilgard, Michael P.

    2014-01-01

    Fragile X syndrome is the most common inherited form of intellectual disability and the leading genetic cause of autism. Impaired phonological processing in fragile X syndrome interferes with the development of language skills. Although auditory cortex responses are known to be abnormal in fragile X syndrome, it is not clear how these differences impact speech sound processing. This study provides the first evidence that the cortical representation of speech sounds is impaired in Fmr1 knockout rats, despite normal speech discrimination behavior. Evoked potentials and spiking activity in response to speech sounds, noise burst trains, and tones were significantly degraded in primary auditory cortex, anterior auditory field and the ventral auditory field. Neurometric analysis of speech evoked activity using a pattern classifier confirmed that activity in these fields contains significantly less information about speech sound identity in Fmr1 knockout rats compared to control rats. Responses were normal in the posterior auditory field, which is associated with sound localization. The greatest impairment was observed in the ventral auditory field, which is related to emotional regulation. Dysfunction in the ventral auditory field may contribute to poor emotional regulation in fragile X syndrome and may help explain the observation that later auditory evoked responses are more disturbed in fragile X syndrome compared to earlier responses. Rodent models of fragile X syndrome are likely to prove useful for understanding the biological basis of fragile X syndrome and for testing candidate therapies. PMID:24713347

  3. Auditory Distance Coding in Rabbit Midbrain Neurons and Human Perception: Monaural Amplitude Modulation Depth as a Cue

    PubMed Central

    Zahorik, Pavel; Carney, Laurel H.; Bishop, Brian B.; Kuwada, Shigeyuki

    2015-01-01

    Mechanisms underlying sound source distance localization are not well understood. Here we tested the hypothesis that a novel mechanism can create monaural distance sensitivity: a combination of auditory midbrain neurons' sensitivity to amplitude modulation (AM) depth and distance-dependent loss of AM in reverberation. We used virtual auditory space (VAS) methods for sounds at various distances in anechoic and reverberant environments. Stimulus level was constant across distance. With increasing modulation depth, some rabbit inferior colliculus neurons increased firing rates whereas others decreased. These neurons exhibited monotonic relationships between firing rates and distance for monaurally presented noise when two conditions were met: (1) the sound had AM, and (2) the environment was reverberant. The firing rates as a function of distance remained approximately constant without AM in either environment and, in an anechoic condition, even with AM. We corroborated this finding by reproducing the distance sensitivity using a neural model. We also conducted a human psychophysical study using similar methods. Normal-hearing listeners reported perceived distance in response to monaural 1-octave, 4-kHz noise source sounds presented at distances of 35–200 cm. We found parallels between the rabbit neural and human responses. In both, sound distance could be discriminated only if the monaural sound in reverberation had AM. These observations support the hypothesis. When other cues are available (e.g., in binaural hearing), how much the auditory system actually uses the AM as a distance cue remains to be determined. PMID:25834060
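    The monaural cue here is the modulation depth of the envelope, m = (E_max − E_min)/(E_max + E_min), which reverberant energy reduces by filling in the envelope minima. A sketch with a synthetic envelope and an assumed constant reverberant floor (a crude stand-in for real reverberation, chosen for illustration):

```python
import math

def modulation_depth(envelope):
    """m = (max - min) / (max + min) of the signal envelope;
    1 = fully modulated, 0 = unmodulated."""
    e_max, e_min = max(envelope), min(envelope)
    return (e_max - e_min) / (e_max + e_min)

# Fully modulated 8-Hz envelope of an AM sound...
t = [i / 1000.0 for i in range(1000)]
dry = [1.0 + math.sin(2 * math.pi * 8 * ti) for ti in t]   # m ~ 1
# ...plus a constant reverberant floor, as at a larger source distance:
wet = [e + 0.5 for e in dry]                               # m reduced

m_dry, m_wet = modulation_depth(dry), modulation_depth(wet)
```

    Because the floor grows with distance in a reverberant room while the stimulus level is held constant, m decreases monotonically with distance, giving the monaural distance cue the neurons and listeners exploited.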

  4. Brain-based decoding of mentally imagined film clips and sounds reveals experience-based information patterns in film professionals.

    PubMed

    de Borst, Aline W; Valente, Giancarlo; Jääskeläinen, Iiro P; Tikka, Pia

    2016-04-01

    In the perceptual domain, it has been shown that the human brain is strongly shaped through experience, leading to expertise in highly-skilled professionals. What has remained unclear is whether specialization also shapes brain networks underlying mental imagery. In our fMRI study, we aimed to uncover modality-specific mental imagery specialization of film experts. Using multi-voxel pattern analysis we decoded from brain activity of professional cinematographers and sound designers whether they were imagining sounds or images of particular film clips. In each expert group distinct multi-voxel patterns, specific for the modality of their expertise, were found during classification of imagery modality. These patterns were mainly localized in the occipito-temporal and parietal cortex for cinematographers and in the auditory cortex for sound designers. We also found generalized patterns across perception and imagery that were distinct for the two expert groups: they involved frontal cortex for the cinematographers and temporal cortex for the sound designers. Notably, the mental representations of film clips and sounds of cinematographers contained information that went beyond modality-specificity. We were able to successfully decode the implicit presence of film genre from brain activity during mental imagery in cinematographers. The results extend existing neuroimaging literature on expertise into the domain of mental imagery and show that experience in visual versus auditory imagery can alter the representation of information in modality-specific association cortices. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Local Cochlear Correlations of Perceived Pitch

    NASA Astrophysics Data System (ADS)

    Martignoli, Stefan; Stoop, Ruedi

    2010-07-01

    Pitch is one of the most salient attributes of the human perception of sound, but is still not well understood. This difficulty originates in the entwined nature of the phenomenon, in which a physical stimulus as well as a psychophysiological signal receiver are involved. In an electronic realization of a biophysically detailed nonlinear model of the cochlea, we find local cochlear correlates of the perceived pitch that explain all essential pitch-shifting phenomena from physical grounds.

  6. Extended abstracts from the Coastal Habitats in Puget Sound (CHIPS) 2006 Workshop

    USGS Publications Warehouse

    Gelfenbaum, Guy R.; Fuentes, Tracy L.; Duda, Jeffrey J.; Grossman, Eric E.; Takesue, Renee K.

    2010-01-01

    Puget Sound is the second largest estuary in the United States. Its unique geology, climate, and nutrient-rich waters produce and sustain biologically productive coastal habitats. These same natural characteristics also contribute to a high quality of life that has led to a significant growth in human population and associated development. This population growth, and the accompanying rural and urban development, has played a role in degrading Puget Sound ecosystems, including declines in fish and wildlife populations, water-quality issues, and loss and degradation of coastal habitats. In response to these ecosystem declines and the potential for strategic large-scale preservation and restoration, a coalition of local, State, and Federal agencies, including the private sector, Tribes, and local universities, initiated the Puget Sound Nearshore Ecosystem Restoration Project (PSNERP). The Nearshore Science Team (NST) of PSNERP, along with the U.S. Geological Survey, developed a Science Strategy and Research Plan (Gelfenbaum and others, 2006) to help guide science activities associated with nearshore ecosystem restoration. Implementation of the Research Plan includes a call for State and Federal agencies to direct scientific studies to support PSNERP information needs. In addition, the overall Science Strategy promotes greater communication with decision makers and dissemination of scientific results to the broader scientific community. On November 14–16, 2006, the U.S. Geological Survey sponsored an interdisciplinary Coastal Habitats in Puget Sound (CHIPS) Research Workshop at Fort Worden State Park, Port Townsend, Washington. The main goals of the workshop were to coordinate, integrate, and link research on the nearshore of Puget Sound. Presented research focused on three themes: (1) restoration of large river deltas; (2) recovery of the nearshore ecosystem of the Elwha River; and (3) effects of urbanization on nearshore ecosystems.
The more than 35 presentations covered a wide range of ongoing interdisciplinary research, including studies of sediment geochemistry of aquatic environments, sediment budgets, tracking fish pathways, expansion of invasive forams, beach and nearshore sedimentary environments, using influence diagrams as a decision support tool, forage fish, submarine groundwater, and much more. The primary focus within these themes was on developing information on the physical, chemical, and biological processes, as well as the human dimensions, associated with the restoration or rehabilitation of the nearshore environment. The workshop was an excellent opportunity for USGS scientists and collaborators who are working on Puget Sound coastal habitats to present their preliminary findings, discuss upcoming research, and identify opportunities for interdisciplinary collaboration. A compilation of extended abstracts from workshop participants, this proceedings volume serves as a useful reference for attendees of the workshop and for those unable to attend. Taken together, the abstracts in this report provide a view of the current status of USGS multidisciplinary research on Puget Sound coastal habitats.

  7. Effects of high sound speed confiners on ANFO detonations

    NASA Astrophysics Data System (ADS)

    Kiyanda, Charles; Jackson, Scott; Short, Mark

    2011-06-01

    The interaction between high explosive (HE) detonations and high sound speed confiners, where the confiner sound speed exceeds the HE's detonation speed, has not been thoroughly studied. The subsonic nature of the flow in the confiner allows stress waves to travel ahead of the main detonation front and influence the upstream HE state. The interaction between the detonation wave and the confiner is also no longer a local interaction, so that the confiner thickness now plays a significant role in the detonation dynamics. We report here on larger scale experiments in which a mixture of ammonium nitrate and fuel oil (ANFO) is detonated in aluminium confiners with varying charge diameter and confiner thickness. The results of these large-scale experiments are compared with previous large-scale ANFO experiments in cardboard, as well as smaller-scale aluminium confined ANFO experiments, to characterize the effects of confiner thickness.

  8. On the estimation of sound speed in two-dimensional Yukawa fluids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Semenov, I. L., E-mail: Igor.Semenov@dlr.de; Thomas, H. M.; Khrapak, S. A.

    2015-11-15

    The longitudinal sound speed in two-dimensional Yukawa fluids is estimated using the conventional hydrodynamic expression supplemented by appropriate thermodynamic functions proposed recently by Khrapak et al. [Phys. Plasmas 22, 083706 (2015)]. In contrast to the existing approaches, such as quasi-localized charge approximation (QLCA) and molecular dynamics simulations, our model provides a relatively simple estimate for the sound speed over a wide range of parameters of interest. At strong coupling, our results are shown to be in good agreement with the results obtained using the QLCA approach and those derived from the phonon spectrum for the triangular lattice. On the other hand, our model is also expected to remain accurate at moderate values of the coupling strength. In addition, the obtained results are used to discuss the influence of the strong coupling effects on the adiabatic index of two-dimensional Yukawa fluids.
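The hydrodynamic relation underlying such estimates is c = sqrt((∂P/∂ρ) at constant entropy). A minimal sketch of that relation follows; the pressure function, particle mass, and adiabatic index here are placeholder values for an ideal-gas toy (with γ = 2 as for a 2D ideal gas), not the Yukawa thermodynamic fits used in the paper.

```python
import numpy as np

# Hydrodynamic sound-speed estimate, c = sqrt(gamma * dP/drho),
# with a hypothetical placeholder pressure function (ideal gas).

K_B = 1.380649e-23        # Boltzmann constant [J/K]
M = 6.64e-26              # particle mass [kg], placeholder value

def pressure(rho, T):
    """Ideal-gas placeholder: P = (rho / m) * k_B * T."""
    return rho / M * K_B * T

def sound_speed(rho, T, gamma=2.0, drho=1e-6):
    """Finite-difference estimate of c = sqrt(gamma * (dP/drho)_T)."""
    dPdrho = (pressure(rho + drho, T) - pressure(rho - drho, T)) / (2 * drho)
    return np.sqrt(gamma * dPdrho)

c = sound_speed(rho=1.0e-3, T=300.0)   # ~353 m/s for these placeholder values
```

In the paper the ideal-gas pressure is replaced by fitted thermodynamic functions for the Yukawa fluid, but the sound speed is extracted from the equation of state in the same way.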

  9. On the electrophonic generation of audio frequency sound by meteors

    NASA Astrophysics Data System (ADS)

    Kelley, Michael C.; Price, Colin

    2017-04-01

    For centuries, people have reported hearing and seeing meteors nearly concurrently. Electromagnetic energy propagates at the speed of light and converts to sound (a phenomenon called electrophonics) when coupled to metals. An explanation for the electromagnetic energy source is suggested. Coma ions around the meteor head can easily travel across magnetic field lines at altitudes up to 120 km. The electrons, however, are tied to magnetic field lines, since they must gyrate around the field above 75 km. A large ambipolar electric field must be generated to preserve charge neutrality. This localized electric field maps to the E region, where it drives a large Hall current that launches the electromagnetic wave. Using antenna theory, a power flux of over 10⁻⁸ W/m² at the ground is found. The electrophonic conversion efficiency to sound then needs to be only 0.1% to explain why humans can hear and see meteors nearly concurrently.
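The audibility argument in the abstract is a short back-of-envelope calculation; the sketch below works it through against the standard 10⁻¹² W/m² threshold of hearing (the flux and efficiency values are those quoted in the abstract).

```python
import math

# EM power flux at the ground and assumed electrophonic conversion
# efficiency, both taken from the abstract.
em_flux = 1e-8            # [W/m^2]
efficiency = 1e-3         # 0.1% EM-to-sound conversion

sound_intensity = em_flux * efficiency     # 1e-11 W/m^2

# Compare against the standard reference intensity (threshold of hearing).
I0 = 1e-12                # [W/m^2]
level_db = 10 * math.log10(sound_intensity / I0)   # sound level in dB
```

The result is a sound level of about 10 dB: faint, but above the threshold of hearing, which is why a 0.1% conversion efficiency suffices to explain audible meteors.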

  10. Correlation Factors Describing Primary and Spatial Sensations of Sound Fields

    NASA Astrophysics Data System (ADS)

    ANDO, Y.

    2002-11-01

    The theory of subjective preference of the sound field in a concert hall is established based on a model of the human auditory-brain system. The model consists of the autocorrelation function (ACF) mechanism and the interaural crosscorrelation function (IACF) mechanism for signals arriving at the two ear entrances, together with the specialization of the human cerebral hemispheres. This theory can be extended to describe primary sensations such as pitch or missing fundamental, loudness, timbre and, in addition, duration, which is introduced here as a fourth primary sensation. These four primary sensations may be formulated by the temporal factors extracted from the ACF, associated with the left hemisphere, while spatial sensations such as localization in the horizontal plane, apparent source width and subjective diffuseness are described by the spatial factors extracted from the IACF, associated with the right hemisphere. Any important subjective response to a sound field may thus be described by both temporal and spatial factors.
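The two correlation mechanisms in this model can be sketched numerically: the ACF of one ear signal (temporal factors) and the IACF between the two ear signals (spatial factors, whose maximum within about ±1 ms is the IACC). The signals and the 10-sample interaural delay below are invented for illustration.

```python
import numpy as np

fs = 44100
t = np.arange(0, 0.1, 1 / fs)
left = np.sin(2 * np.pi * 440 * t)     # toy left-ear signal
right = np.roll(left, 10)              # simulate an interaural delay

def normalized_ccf(x, y, max_lag):
    """Normalized cross-correlation of x and y for lags -max_lag..max_lag."""
    denom = np.sqrt(np.dot(x, x) * np.dot(y, y))
    return [np.dot(x, np.roll(y, k)) / denom
            for k in range(-max_lag, max_lag + 1)]

# IACF over roughly +/-1 ms; its maximum (the IACC) indexes spatial
# sensations such as subjective diffuseness.
iacf = normalized_ccf(left, right, max_lag=44)   # 44 samples ~ 1 ms
iacc = max(iacf)

# ACF of a single ear signal carries the temporal factors (pitch, timbre).
acf = normalized_ccf(left, left, max_lag=200)
```

For this perfectly coherent pair of signals the IACC is essentially 1; real binaural recordings in diffuse fields give lower values, and the ACF at lag zero is 1 by construction.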

  11. Acoustic Remote Sensing

    NASA Astrophysics Data System (ADS)

    Dowling, David R.; Sabra, Karim G.

    2015-01-01

    Acoustic waves carry information about their source and collect information about their environment as they propagate. This article reviews how these information-carrying and -collecting features of acoustic waves that travel through fluids can be exploited for remote sensing. In nearly all cases, modern acoustic remote sensing involves array-recorded sounds and array signal processing to recover multidimensional results. The application realm for acoustic remote sensing spans an impressive range of signal frequencies (10⁻² to 10⁷ Hz) and distances (10⁻² to 10⁷ m) and involves biomedical ultrasound imaging, nondestructive evaluation, oil and gas exploration, military systems, and Nuclear Test Ban Treaty monitoring. In the past two decades, approaches have been developed to robustly localize remote sources; remove noise and multipath distortion from recorded signals; and determine the acoustic characteristics of the environment through which the sound waves have traveled, even when the recorded sounds originate from uncooperative sources or are merely ambient noise.
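The simplest form of the array processing mentioned here is delay-and-sum beamforming, which localizes a source direction by steering the array until the channels add in phase. The sketch below uses an invented far-field scenario (uniform 8-element line array, 1 kHz plane wave from 30° off broadside); it is an illustration of the principle, not any specific system from the review.

```python
import numpy as np

c = 343.0                      # sound speed in air [m/s]
fs = 48000                     # sample rate [Hz]
f0 = 1000.0                    # source tone [Hz]
d = 0.1                        # element spacing [m]
n_mics = 8
true_angle = np.deg2rad(30)    # arrival angle from broadside

t = np.arange(0, 0.05, 1 / fs)
# Each mic m sees the plane wave delayed by m * d * sin(theta) / c.
mics = np.array([np.sin(2 * np.pi * f0 * (t - m * d * np.sin(true_angle) / c))
                 for m in range(n_mics)])

def beam_power(angle):
    """Steer the array to `angle`, sum channels in phase, return power."""
    delays = np.arange(n_mics) * d * np.sin(angle) / c
    shifts = np.round(delays * fs).astype(int)
    summed = sum(np.roll(mics[m], -shifts[m]) for m in range(n_mics))
    return np.mean(summed ** 2)

# Scan candidate directions; the power peaks at the true arrival angle.
angles = np.deg2rad(np.linspace(-90, 90, 181))
estimate = np.rad2deg(angles[np.argmax([beam_power(a) for a in angles])])
```

The estimate lands near 30°, limited by the sample-quantized delays and the broad mainlobe of a short array; practical systems add windowing, adaptive weights, and matched-field methods on top of this core idea.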

  12. Evaluating land-use change scenarios for the Puget Sound Basin, Washington, within the ecosystem recovery target model-based framework

    USGS Publications Warehouse

    Villarreal, Miguel; Labiosa, Bill; Aiello, Danielle

    2017-05-23

    The Puget Sound Basin, Washington, has experienced rapid urban growth in recent decades, with varying impacts to local ecosystems and natural resources. To plan for future growth, land managers often use scenarios to assess how the pattern and volume of growth may affect natural resources. Using three different land-management scenarios for the years 2000–2060, we assessed various spatial patterns of urban growth relative to maps depicting a model-based characterization of the ecological integrity and recent development pressure of individual land parcels. The three scenarios depict future trajectories of land-use change under alternative management strategies—status quo, managed growth, and unconstrained growth. The resulting analysis offers a preliminary assessment of how future growth patterns in the Puget Sound Basin may impact land targeted for conservation and how short-term metrics of land-development pressure compare to longer term growth projections.

  13. The Dual-channel Extreme Ultraviolet Continuum Experiment: Sounding Rocket EUV Observations of Local B Stars to Determine Their Potential for Supplying Intergalactic Ionizing Radiation

    NASA Astrophysics Data System (ADS)

    Erickson, Nicholas; Green, James C.; France, Kevin; Stocke, John T.; Nell, Nicholas

    2018-06-01

    We describe the scientific motivation and technical development of the Dual-channel Extreme Ultraviolet Continuum Experiment (DEUCE). DEUCE is a sounding rocket payload designed to obtain the first flux-calibrated spectra of two nearby B stars in the EUV 650-1150Å bandpass. This measurement will help in understanding the ionizing flux output of hot B stars, calibrating stellar models and commenting on the potential contribution of such stars to reionization. DEUCE consists of a grazing incidence Wolter II telescope, a normal incidence holographic grating, and the largest (8” x 8”) microchannel plate detector ever flown in space, covering the 650-1150Å band in medium and low resolution channels. DEUCE will launch on December 1, 2018 as NASA/CU sounding rocket mission 36.331 UG, observing Epsilon Canis Majoris, a B2 II star.

  14. Investigating acoustic-induced deformations in a foam using multiple light scattering.

    PubMed

    Erpelding, M; Guillermic, R M; Dollet, B; Saint-Jalmes, A; Crassous, J

    2010-08-01

    We have studied the effect of an external acoustic wave on bubble displacements inside an aqueous foam. The signature of the acoustic-induced bubble displacements is found using a multiple light scattering technique, and occurs as a modulation on the photon correlation curve. Measurements for various sound frequencies and amplitudes are compared to analytical predictions and numerical simulations. These comparisons finally allow us to elucidate the nontrivial acoustic displacement profile inside the foam; in particular, we find that the acoustic wave creates a localized shear in the vicinity of the solid walls holding the foam, as a consequence of inertial contributions. This study of how bubbles "dance" inside a foam as a response to sound turns out to provide new insights on foam acoustics and sound transmission into a foam, foam deformation at high frequencies, and analysis of light scattering data in samples undergoing nonhomogeneous deformations.

  15. Temporal processing and adaptation in the songbird auditory forebrain.

    PubMed

    Nagel, Katherine I; Doupe, Allison J

    2006-09-21

    Songbird auditory neurons must encode the dynamics of natural sounds at many volumes. We investigated how neural coding depends on the distribution of stimulus intensities. Using reverse-correlation, we modeled responses to amplitude-modulated sounds as the output of a linear filter and a nonlinear gain function, then asked how filters and nonlinearities depend on the stimulus mean and variance. Filter shape depended strongly on mean amplitude (volume): at low mean, most neurons integrated sound over many milliseconds, while at high mean, neurons responded more to local changes in amplitude. Increasing the variance (contrast) of amplitude modulations had less effect on filter shape but decreased the gain of firing in most cells. Both filter and gain changes occurred rapidly after a change in statistics, suggesting that they represent nonlinearities in processing. These changes may permit neurons to signal effectively over a wider dynamic range and are reminiscent of findings in other sensory systems.
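The reverse-correlation approach described here (fit a linear filter plus a nonlinear gain function) can be sketched with a simulated neuron: a spike-triggered average of Gaussian white-noise stimuli recovers the linear filter of an LN model. The filter shape, threshold, and stimulus statistics below are invented for illustration, not taken from the recordings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, flen = 50000, 20
stim = rng.normal(size=n)                    # white-noise amplitude stimulus

# "True" linear stage: an exponentially decaying integration window.
true_filter = np.exp(-np.arange(flen) / 5.0)
true_filter /= np.linalg.norm(true_filter)

# Linear stage: causal convolution of stimulus with filter.
drive = np.convolve(stim, true_filter)[:n]
# Nonlinear stage: threshold-linear gain, then Poisson spiking.
rate = np.clip(drive - 0.5, 0, None)
spikes = rng.poisson(rate)

# Spike-triggered average: average the stimulus window preceding each
# spike, which (for Gaussian noise) is proportional to the linear filter.
sta = np.zeros(flen)
for i in range(flen, n):
    sta += spikes[i] * stim[i - flen + 1 : i + 1][::-1]
sta /= max(spikes[flen:].sum(), 1)
sta /= np.linalg.norm(sta)
```

The recovered `sta` closely matches `true_filter`; in the study, comparing such fits across stimulus means and variances is what reveals the changes in filter shape and gain.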

  16. Sound absorption enhancement of nonwoven felt by using coupled membrane - sonic crystal inclusion

    NASA Astrophysics Data System (ADS)

    Fitriani, M. C.; Yahya, I.; Harjana; Ubaidillah; Aditya, F.; Siregar, Y.; Moeliono, M.; Sulaksono, S.

    2016-11-01

    This paper reports laboratory measurements of the sound absorption performance of nonwoven felt with an array of thin tubes and sonic crystal inclusions. The nonwoven felt sample, produced by a local company, was 15 mm thick with a surface density of 900 gsm. Plastic straws of 6.4 mm diameter were used to construct the thin-tube array, while the sonic crystal was arranged in a 4 × 4 lattice of PVC cylinders, each 17 mm in diameter and 50 mm long. Each cylinder has two holes positioned 10 mm and 25 mm from its base. The results show that both treatments, the thin-tube array and the sonic crystal inclusions, significantly increase the sound absorption coefficient of the nonwoven felt, especially in the low-frequency range starting from 200 Hz.

  17. A FPGA Implementation of the CAR-FAC Cochlear Model.

    PubMed

    Xu, Ying; Thakur, Chetan S; Singh, Ram K; Hamilton, Tara Julia; Wang, Runchun M; van Schaik, André

    2018-01-01

    This paper presents a digital implementation of the Cascade of Asymmetric Resonators with Fast-Acting Compression (CAR-FAC) cochlear model. The CAR part simulates the basilar membrane's (BM) response to sound. The FAC part models the outer hair cell (OHC), the inner hair cell (IHC), and the medial olivocochlear efferent system functions. The FAC feeds back to the CAR by moving the poles and zeros of the CAR resonators automatically. We have implemented a 70-section, 44.1 kHz sampling rate CAR-FAC system on an Altera Cyclone V Field Programmable Gate Array (FPGA) with 18% ALM utilization by using time-multiplexing and pipeline parallelizing techniques and present measurement results here. The fully digital reconfigurable CAR-FAC system is stable, scalable, easy to use, and provides an excellent input stage to more complex machine hearing tasks such as sound localization, sound segregation, speech recognition, and so on.
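The "cascade of resonators" architecture can be illustrated with a toy filterbank in which each section is a simple two-pole resonator and each section's output feeds the next, mimicking the basilar membrane's travelling wave from high to low frequencies. This sketch omits the FAC feedback entirely, and the centre frequencies, Q, and gain normalization are invented, not the CAR-FAC design values.

```python
import numpy as np

fs = 44100.0
n_sections = 8
# Log-spaced centre frequencies, high to low, like cochlear place order.
cfs = np.geomspace(8000, 500, n_sections)

def resonator(x, f0, q=4.0):
    """Two-pole resonator: y[n] = x[n] + a1*y[n-1] + a2*y[n-2]."""
    r = np.exp(-np.pi * f0 / (q * fs))         # pole radius from bandwidth
    a1 = 2 * r * np.cos(2 * np.pi * f0 / fs)
    a2 = -r * r
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = (x[n]
                + a1 * (y[n - 1] if n >= 1 else 0.0)
                + a2 * (y[n - 2] if n >= 2 else 0.0))
    return (1.0 - r) * y                       # crude gain normalization

t = np.arange(0, 0.05, 1 / fs)
signal = np.sin(2 * np.pi * 1000 * t)          # 1 kHz test tone

outputs = []
x = signal
for f0 in cfs:
    x = resonator(x, f0)       # cascade: each stage's output feeds the next
    outputs.append(x)          # per-section outputs ~ BM responses
```

The per-section outputs play the role of basilar membrane responses at successive places; the actual CAR-FAC adds asymmetric resonator zeros and the fast-acting compression loop that moves the poles in response to signal level.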

  18. A FPGA Implementation of the CAR-FAC Cochlear Model

    PubMed Central

    Xu, Ying; Thakur, Chetan S.; Singh, Ram K.; Hamilton, Tara Julia; Wang, Runchun M.; van Schaik, André

    2018-01-01

    This paper presents a digital implementation of the Cascade of Asymmetric Resonators with Fast-Acting Compression (CAR-FAC) cochlear model. The CAR part simulates the basilar membrane's (BM) response to sound. The FAC part models the outer hair cell (OHC), the inner hair cell (IHC), and the medial olivocochlear efferent system functions. The FAC feeds back to the CAR by moving the poles and zeros of the CAR resonators automatically. We have implemented a 70-section, 44.1 kHz sampling rate CAR-FAC system on an Altera Cyclone V Field Programmable Gate Array (FPGA) with 18% ALM utilization by using time-multiplexing and pipeline parallelizing techniques and present measurement results here. The fully digital reconfigurable CAR-FAC system is stable, scalable, easy to use, and provides an excellent input stage to more complex machine hearing tasks such as sound localization, sound segregation, speech recognition, and so on. PMID:29692700

  19. DIY EOS: Experimentally Validated Equations of State for Planetary Fluids to GPa Pressures, Tools for Understanding Planetary Processes and Habitability

    NASA Astrophysics Data System (ADS)

    Vance, Steven; Brown, J. Michael; Bollengier, Olivier

    2016-10-01

    Sound speeds are fundamental to seismology and provide a path to accurate determination of thermodynamic potentials. Prior equations of state (EOS) for pure ammonia (Haar and Gallagher 1978, Tillner-Roth et al. 1993) are based primarily on measured densities and heat capacities; sound speeds, not included in the fitting, are poorly predicted. We couple recent high-pressure sound speed data with prior densities and heat capacities to generate a new equation of state. Our representation fits the earlier lower-pressure work, measured sound speeds to 4 GPa and 700 K, and the Hugoniot to 70 GPa and 6000 K. In contrast to the damped polynomial representation previously used, our equation of state is based on local basis functions in the form of tensor b-splines. Regularization allows the thermodynamic surface to be continued into regimes poorly sampled by experiments. We discuss application of this framework for aqueous equations of state validated by experimental measurements. Preliminary equations of state have been prepared applying the local basis function methodology to aqueous NH3, MgSO4, NaCl, and Na2SO4. We describe its use for developing new equations of state, and provide some applications of the new thermodynamic data to the interior structures of gas giant planets and ocean worlds.
    References:
    L. Haar and J. S. Gallagher. Thermodynamic properties of ammonia. American Chemical Society and the American Institute of Physics for the National Bureau of Standards, 1978.
    R. Tillner-Roth, F. Harms-Watzenberg, and H. Baehr. Eine neue Fundamentalgleichung für Ammoniak. DKV Tagungsbericht, 20:67-67, 1993.
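The connection between a spline-represented EOS and sound speed can be sketched in one dimension: fit P(ρ) with cubic B-splines, then evaluate c = sqrt(dP/dρ) from the spline's analytic derivative. The data below are a synthetic linear toy EOS (constant 1500 m/s), not the ammonia fits, and this ignores the temperature dimension and regularization of the actual tensor b-spline representation.

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Synthetic pressure-density data for a toy EOS with dP/drho = c0^2.
rho = np.linspace(900.0, 1300.0, 50)     # density [kg/m^3]
c0, rho0 = 1500.0, 1000.0                # reference sound speed and density
P = c0**2 * (rho - rho0)                 # linear toy EOS [Pa]

# Cubic B-spline representation of P(rho); splines are local basis
# functions, so each coefficient only affects a limited density range.
tck = splrep(rho, P, k=3)

# Sound speed from the analytic derivative of the spline.
dPdrho = splev(rho, tck, der=1)
c = np.sqrt(dPdrho)                      # recovers ~1500 m/s everywhere
```

Inverting the real problem runs the other way: measured sound speeds constrain the derivatives of the spline surface, which is why including them in the fit improves the thermodynamic potentials so markedly.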

  20. From Mimicry to Language: A Neuroanatomically Based Evolutionary Model of the Emergence of Vocal Language

    PubMed Central

    Poliva, Oren

    2016-01-01

    The auditory cortex communicates with the frontal lobe via the middle temporal gyrus (auditory ventral stream; AVS) or the inferior parietal lobule (auditory dorsal stream; ADS). Whereas the AVS is ascribed only with sound recognition, the ADS is ascribed with sound localization, voice detection, prosodic perception/production, lip-speech integration, phoneme discrimination, articulation, repetition, phonological long-term memory and working memory. Previously, I interpreted the juxtaposition of sound localization, voice detection, audio-visual integration and prosodic analysis, as evidence that the behavioral precursor to human speech is the exchange of contact calls in non-human primates. Herein, I interpret the remaining ADS functions as evidence of additional stages in language evolution. According to this model, the role of the ADS in vocal control enabled early Homo (Hominans) to name objects using monosyllabic calls, and allowed children to learn their parents' calls by imitating their lip movements. Initially, the calls were forgotten quickly but gradually were remembered for longer periods. Once the representations of the calls became permanent, mimicry was limited to infancy, and older individuals encoded in the ADS a lexicon for the names of objects (phonological lexicon). Consequently, sound recognition in the AVS was sufficient for activating the phonological representations in the ADS and mimicry became independent of lip-reading. Later, by developing inhibitory connections between acoustic-syllabic representations in the AVS and phonological representations of subsequent syllables in the ADS, Hominans became capable of concatenating the monosyllabic calls for repeating polysyllabic words (i.e., developed working memory). Finally, due to strengthening of connections between phonological representations in the ADS, Hominans became capable of encoding several syllables as a single representation (chunking). 
Consequently, Hominans began vocalizing and mimicking/rehearsing lists of words (sentences). PMID:27445676

Top